Tuesday, September 20, 2022

Cybersecurity Maturity Model Certification (CMMC)

The Cybersecurity Maturity Model Certification (CMMC) is a training, certification, and third-party assessment program for cybersecurity in the DoD contracting community (also referred to as the Defense Industrial Base, or DIB). Why should you care about CMMC? FAR and DFARS clauses require it. It will be a factor in proposal scores (e.g., the Polaris GWAC awards 6,000 points for cybersecurity and 5,000 for risk assessment, compared to 750 points for CMMI certification). NASA, DHS, GSA, and other government organizations are expected to follow DoD in implementing CMMC. These are the important bullet points for CMMC:

  • FAR 52.204-21. Basic Safeguarding of Covered Contractor Information Systems. Correlates with the CMMC Level 1 requirements for protecting Federal Contract Information (FCI).
  • DFARS clause 252.204-7012 Safeguarding Covered Defense Information and Cyber Incident Reporting (October 2016) required compliance with NIST SP 800-171 no later than December 31, 2017.
  • DFARS clause 252.204-7019 Notice of NIST SP 800-171 DoD Assessment Requirements (November 2020). Suppliers are required to perform an assessment of each covered contractor information system that is relevant to the offer, contract, task order, or delivery order, at a Basic, Medium, or High level in accordance with the NIST SP 800-171 DoD Assessment Methodology, and to submit their score (not more than 3 years old) in the Supplier Performance Risk System (SPRS). A Basic assessment is a self-generated score (a toy sketch of the scoring arithmetic follows this list). A Medium assessment is performed by the Government. A High assessment includes everything in a Medium assessment as well as validation of the contractor's System Security Plan (SSP).
  • DFARS clause 252.204-7020 NIST SP 800-171 DoD Assessment Requirements (November 2020). Defines the Basic, Medium, and High assessments.
  • DFARS clause 252.204-7021 Cybersecurity Maturity Model Certification Requirements (November 2020). Requires CMMC certificate at the CMMC level appropriate for the information that is being flowed down to the contractor.
  • CMMC v1.0 was released on January 31, 2020.
  • CMMC v2.0 was released on November 4, 2021. Per Town Hall sessions held by Deputy DoD CIO David McKeown in February 2022, CMMC v2.0 is not expected to be finalized until it completes the DoD rulemaking process, which could take up to 24 months. The new version simplifies the program:
    • 3 Levels
      • Level 1. Foundational
        • For contractors and subcontractors that only handle Federal Contract Information (FCI) as defined in the FAR. The DoD estimates that about 140,000 such companies exist in the DIB.
        • 17 security controls aligned with FAR 52.204-21.
        • Annual self-assessment.
      • Level 2. Advanced.
        • Allows CUI handling.
        • Aligns with NIST SP 800-171 rev 2 (110 security controls) and includes the Level 1 requirements. The rumor is that the "delta 20" controls from CMMC v1.0 Level 3 (which drew from FAR Clause 52.204-21, NIST SP 800-53 Rev. 4, and the NIST Cybersecurity Framework (CSF) v1.1) will be added to NIST SP 800-171 revision 3.
        • Annual self-assessment.
        • Triennial third-party and government-led assessments for some Level 2 programs. The original estimate was that 40,000 companies would require third-party assessment; per the February 10, 2022 Town Hall, Deputy DoD CIO David McKeown said further analysis has shown that all 80,000 CMMC Level 2 DIB contractors will require third-party assessments.
      • Level 3. Expert.
        • NIST SP 800-172, Enhanced Security Requirements for Protecting Controlled Unclassified Information, a supplement to NIST SP 800-171. Includes Level 2 requirements.
        • Only about 500 companies out of 300,000 in the DIB will be subject to Level 3 certification.
        • Triennial government-led assessments via the Defense Contract Management Agency (DCMA) Defense Industrial Base Cybersecurity Assessment Center (DIBCAC).
  • "CMMC eMASS" is expected to be available to the DIB to store assessment artifacts, create POA&Ms, and maintain the System Security Plan (SSP). Expected April/May 2022.
  • A Plan of Action and Milestones (POA&M) will be allowed for up to six months for non-compliant controls. POA&Ms for the highest-weighted requirements will not be allowed, and a minimum score will be required to support certification with POA&Ms. Waivers will be allowed on a very limited basis, accompanied by strategies to mitigate CUI risk; they will be time-bound and require senior DoD approval.
  • The Department of Justice announced in its Civil Cyber-Fraud Initiative that it will use the False Claims Act to pursue cybersecurity-related fraud by government contractors (including falsely claiming compliance with CMMC).
  • The government may offer incentives for DoD contractors who comply before the CMMC v2.0 implementation deadline (i.e., before CMMC 2.0 makes it through the rulemaking process and the DFARS clauses are allowed in contracts).
  • DoD will lay out the new policies, such as waiver processes, through Title 32 National Defense regulations. The Pentagon will also codify the policy into Title 48 Federal Acquisition Regulation (FAR) and Defense Federal Acquisition Regulation Supplement (DFARS) rules so contracting officers can use CMMC 2.0 in acquisitions; this could take up to two years (expect CMMC 2.0 to be in contracts by summer 2023). Rulemaking under 32 CFR is required to establish the CMMC program, and rulemaking under 48 CFR is required to update the contractual requirements in the DFARS to implement it. Until rulemaking formally implements CMMC 2.0, the DIB's participation in CMMC is voluntary. Expect the final CMMC rules in 32 CFR and 48 CFR between December 2022 and May 2023.
  • The CMMC Assessment Process (CAP) was released on July 26, 2022. C3PAOs can begin assessing DIB companies now.
  • CMMC Roles:
    • OSC. Organization Seeking Certification
    • C3PAO. CMMC Third-Party Assessor Organization. Contract with OSCs, hire and train certified assessors, schedule assessments, and manage assessments.
    • Assessors. Certified CMMC Professionals (CCP) and Certified CMMC Assessors (CCA). Credentialed to conduct assessments at a particular level (1, 2, or 3).
    • RP. Registered Practitioner. Individuals who provide advice, consulting, and recommendations to their clients. Do not conduct Certified CMMC Assessments.
    • RPO. Registered Provider Organization. Implementers and consultants that assist companies with CMMC. Do not conduct Certified CMMC Assessments.
    • LPP. Licensed Partner Publisher. Publish educational courses and content related to CMMC.
    • LTP. Licensed Training Partner. Provide education and training services related to CMMC.
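
To make the Basic self-assessment arithmetic concrete, here is a toy sketch of the NIST SP 800-171 DoD Assessment Methodology scoring referenced above: start at 110 and subtract the weighted value (5, 3, or 1) of each requirement that is not fully implemented, for a possible range of 110 down to -203. The requirement IDs and weights below are illustrative placeholders, not the real weighting table.

```python
# Toy SPRS scoring sketch: 110 minus the weighted value of each
# unimplemented NIST SP 800-171 requirement. The IDs and weights
# below are placeholders, not the actual DoD weighting table.
MAX_SCORE = 110

unimplemented = {  # requirement ID -> weight (hypothetical)
    "3.1.1": 5,
    "3.4.5": 3,
    "3.8.9": 1,
}

score = MAX_SCORE - sum(unimplemented.values())
print(f"SPRS score to report: {score}")  # 110 - 9 = 101
```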

Wednesday, September 14, 2022

If I Were an Authorizing Official (AO)

I have been involved in the Certification and Accreditation (C&A) and Assessment and Authorization (A&A) processes used by government customers since the DoD Information Technology Security Certification and Accreditation Process (DITSCAP) was released in 1997. As we progressed through the Department of Defense Information Assurance Certification and Accreditation Process (DIACAP) in 2006 and moved in 2014 to DoDI 8510.01, "Risk Management Framework (RMF) for DoD Information Technology (IT)", I've gone through hundreds of security controls and thousands of assessment procedures, or Control Correlation Identifiers (CCIs), authored and reviewed dozens of policy and procedure documents, and otherwise managed accreditation packages that eventually navigated through an approval chain to obtain an Authority to Operate (ATO) issued by an Authorizing Official (AO). I've thought about what I would do if I were sitting in the AO's seat, and here's what I'd look for:

  • More time spent on risk assessment versus compliance. We are very good at applying checklists and providing compliance scores and dashboards, but not much time is spent analyzing assessment results, managing Plans of Actions and Milestones (POA&Ms), applying mitigations, and determining residual risk. At the end of the day, the AO needs to decide whether to authorize a system or not, and to do that, the risk of operating the system must be understood, not how many security controls are open.
  • Develop, provide, and use the Continuous Monitoring Plan as soon as possible in the System/Software Development Lifecycle (SDLC). Per NIST SP 800-37, a Continuous Monitoring Plan should be submitted to the AO at Step 2 of the Risk Management Framework (RMF) process (Select Security Controls). Attention should be paid to the change management process, providing a security impact analysis for changes, and documenting the decision to accept the risk introduced with a change. The change management process needs to be clear about what constitutes a major change that must be elevated to the AO for approval and what changes can be accepted by the team responsible for the day-to-day development, operations, and security of the system. An effective Continuous Monitoring Plan keeps the security posture of the system at the level it was at when authorized, so there should never be a mad rush to clean it up before the ATO expires and the system is reassessed.
  • Don't be afraid of reporting bad news. Use your skills to find vulnerabilities. If open findings cannot be closed, expose them to the AO, and don't try to bury them in a POA&M that has unrealistic mitigations or milestones to close them. 
  • Be more technical and less administrative. The assessment and authorization process should not be a paperwork drill to populate an Enterprise Mission Assurance Support Service (eMASS) record. When looking at open findings in Security Technical Implementation Guide (STIG) assessments, discuss ways to close or mitigate the finding; don't just open a POA&M item with an indefinite due date. Establish realistic milestones and follow up on due dates to make sure progress is being made.
  • Use inherited controls from your common control providers. When deploying an application in a Cloud Service Provider (CSP) environment, two NIST SP 800-53 control families should be entirely inheritable: Physical and Environmental Protection (PE) and Media Protection (MP), as well as several controls in Maintenance (MA). Many more security controls, particularly in Incident Response (IR), should be inheritable from the Cybersecurity Service Provider (CSSP). Applying an enterprise policy record should further reduce the controls the system owner is responsible for. For example, on an Impact Level 4 system deployed in AWS GovCloud (US), we were able to inherit 91 CCIs from the CSP, 120 CCIs from the CSSP, and 435 CCIs from the enterprise policy record.
  • Keep an accurate inventory. Know the operating systems, applications, databases, network components, and cloud resources (if applicable) within your accreditation boundary. Be aware of whether the cloud resources you are consuming are in scope at your Impact Level when deploying a DoD application in accordance with the DoD Cloud Computing Security Requirements Guide (SRG). Keep a STIG traceability matrix associated with your inventory. Be able to generate a Software Bill of Materials (SBOM) so that when a major vulnerability such as Log4j is announced, you'll know quickly whether you are affected and which systems need attention (a minimal inventory sketch follows this list).
  • Document data flows. Your architecture and data flow diagrams should reflect the external services your system needs to access, external connections to on-premise services, and interfaces to the CSSP for vulnerability assessment and log collection. Ports, protocols, and services exceptions and firewall rules should align with your data flow documentation.
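
As a minimal sketch of the inventory point above, assuming boto3 is installed and AWS credentials are configured, you can enumerate the EC2 instances in your boundary along with their AMIs and platform details. The "Boundary" tag is a hypothetical convention for marking in-scope resources.

```python
# Minimal inventory sketch: list in-scope EC2 instances with their AMI
# and platform details. Assumes boto3 and AWS credentials are configured;
# the "Boundary" tag convention is an assumption for illustration.
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate(
    Filters=[{"Name": "tag:Boundary", "Values": ["accredited"]}]
):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(
                instance["InstanceId"],
                instance["ImageId"],
                instance.get("PlatformDetails", "unknown"),
            )
```
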
Whether you are responsible for NIST SP 800-53, NIST SP 800-171, NIST SP 800-172, PCI DSS, ISO 27001, or another security control set, keep these points in mind as you perform your assessments. As a Defense Industrial Base (DIB) contractor, I sit in the position of the company official who has to implement, assess, and attest to compliance with the 110 NIST SP 800-171 security controls required for Cybersecurity Maturity Model Certification (CMMC) Level 2 - essentially an AO. I'm fortunate we have the in-house expertise and experience to be well along the way towards being ready to be assessed by a CMMC Third-Party Assessor Organization (C3PAO).

Wednesday, September 7, 2022

Where Can I Find a List of Products Approved for DoD?


Tuesday, December 14, 2021

My AWS re:Invent 2021 Experience


The 10th AWS re:Invent conference was held in Las Vegas, NV, November 28th through December 3rd. The conference included keynote announcements, training and certification opportunities, access to 1,500+ technical sessions, the Expo, and after-hours events, along with all the entertainment Las Vegas has to offer. The event drew visitors from all over the world – 60,000 people attended this year. This post is about my experience at the conference – if you want to read about all of the announcements AWS made at the event, check out https://aws.amazon.com/blogs/aws/top-announcements-of-aws-reinvent-2021/.

PSI brought a small technical team to the conference: David Nicholls, Zach Melnick, and myself. We’re all experienced cloud engineers interested in learning more about AWS so we can sharpen our skills in cloud operations, security, and development. We spent hours on the Expo floor connecting with, interacting with, and learning from AWS builders and partners. We also attended several of the Breakout sessions and Workshops. The event was also a great opportunity to meet with some of our PSI customers at our “Cloud Convo & Cocktails with PSI” event.

Was it worth the trip? Definitely yes – there were hundreds of Breakout sessions for all AWS skill levels. AWS categorizes each Breakout session as level 100 (Introductory), 200 (Intermediate), 300 (Advanced), or 400 (Expert) depending on the experience required. You can also choose topics by learning track (e.g. Artificial Intelligence and Machine Learning, Containers, Serverless, etc. – there are too many to list, around 70 in total) or job role (e.g. Architect, Developer/Engineer, InfoSec, etc.). The Breakout sessions are great if you want a chance to ask questions or meet one-on-one with the presenter; however, these sessions are recorded and available on demand on YouTube, so the Workshops and Jam Sessions are a better use of your time. Workshops and Jam Sessions give you the opportunity to get your hands dirty and build something. AWS also planned many social events which provided an opportunity to either relax or interact with your peers. David and I even rode a mechanical bull (which earned us a nice AWS re:Invent 10th anniversary jacket). On the Expo floor, many vendors gave demos of AI/ML, ransomware defense, observability, cost management, and other services and tools designed to help you build, deploy, or manage applications using AWS services. The AWS Builders' Fair is a fun and interactive way to learn how to apply AWS services to real-world use cases.

What other tips can I offer to someone who hasn’t been to AWS re:Invent? Register early if you want to attend in person (this year’s event sold out) and reserve seats in the sessions that interest you most – they fill up quickly, although I found that I was always able to get into any session that had a waitlist. The conference is spread out among six hotels, so be prepared to walk (or look for the shuttles) and manage your time so you can get to and from any scheduled session. The casinos are large and maze-like, so it’s easy to get lost. Meals are included in the conference fee, so the best bet is to remain inside a venue so you don’t spend your lunch hour getting in and out of buildings.

What’s next? As a solutions and services provider with over 100 projects across our Health, Federal/Civilian, and National Security Sectors, PSI helps build, migrate, modernize, and/or secure cloud applications for our customers. To refine our solutions, train employees, prepare for code challenges, and demonstrate concepts to our customers, we are deploying a DevSecOps Platform in AWS, Azure, GCP, and Oracle. Here we’ll apply our best practices, lessons learned, and innovation from our DoD, DHS, DOS, and VA projects in an environment that meets the well-architected frameworks of the leading cloud providers and showcases the knowledge, experience, and agile development capabilities of our employees.

Thursday, September 2, 2021

Cost Management in the Cloud

Cloud environments differ from on-premise environments in many ways, but one important aspect is the focus on operational expense (OPEX) versus capital expense (CAPEX). Many IT professionals are familiar with on-premise environments, which typically have lifecycle management policies for investment in hardware, software, and other infrastructure. Network connectivity on-premise is usually a fixed, predictable cost, and resource consumption is not a cost concern - hypervisor and storage equipment cost the same regardless of how much capacity is consumed. What is different in the cloud? The cloud pay-as-you-go model requires thinking differently about costs. Based on my experience with the design, implementation, and management of projects in Amazon Web Services (AWS), these are some cost concerns to be aware of:

  • Cost Tools. In planning a new cloud deployment or migration, a cost estimate should be prepared using the AWS Pricing Calculator. The proposed design should meet the project's budget constraints and an alert created in AWS Budgets to ensure spending is in alignment with the budget. AWS Cost Explorer can be used to further analyze cost and usage of AWS resources.
  • Resource Sizing. In the cloud pay-as-you-go model, the size (CPU, memory, disk), performance (e.g. IOPS, network throughput), and number of virtual machines provisioned affect cost. In an on-premise data center, the virtualization platform and storage area network (SAN) or network attached storage (NAS) are typically paid for up-front and sized to support the maximum workload, with some headroom for growth (almost always resulting in over-provisioned resources). Analyze workload requirements and size cloud resources appropriately. Use Auto Scaling Groups to automatically increase or decrease capacity based on demand, such as the number of connections, CPU, or memory utilization.
  • Shared versus Dedicated Hosts. When using Infrastructure as a Service (IaaS) in AWS, Elastic Compute Cloud (EC2) instances can be launched on shared infrastructure or Dedicated Hosts. Shared infrastructure is cheaper; however, be aware of compliance or licensing requirements which may require dedicated hosts. For example, the DoD Cloud Computing Security Requirements Guide requires dedicated hosts for Impact Level 5 workloads. Another reason for choosing dedicated hosting is software licenses bound to the number of sockets or physical cores of the host.
  • Commitment. In AWS, significant cost savings (up to 72%) can be realized by using Reserved Instances (RIs) if you are able to commit to a 1- to 3-year term. Examine workloads - look for steady-state applications (e.g. authentication and authorization services, log collection and analysis, or any web application that is typically "always on") and make use of RIs. For ad-hoc or periodically scheduled workloads such as batch processes, On-Demand or Spot instances can be used. If the workload can tolerate interruptions, Spot instances offer significant savings compared to On-Demand instances. Spot instances are also a good choice for Auto Scaling Groups.
  • Egress and Cross-Region Charges. AWS charges fees for data egress from their network to the Internet as well as across Regions within their network. There are also charges for Transit Gateway (TGW) and VPC peering attachments and data transfer. On-premise network connectivity is typically a fixed monthly charge regardless of how much bandwidth is consumed. Variable, unpredictable data transfer charges are difficult to factor into a cost estimate; however, my experience has shown that egress charges amount to 1 to 3% of overall cloud spending.
  • Tags. Tags attached to AWS resources are useful for configuration management, cost reporting, and cost efficiency. For example, by assigning a Tag with Key "Environment" and Value "Dev" to development resources, Amazon EventBridge and AWS Lambda can be used to turn those resources off after business hours, realizing cost savings of 50% or more (a minimal Lambda sketch follows this list). By defining and activating a "Cost Center" Tag as a User-Defined Cost Allocation Tag in the AWS Billing and Cost Management console, costs can easily be tracked by customer, project, and/or team.
  • Abandoned Resources. Unused resources such as EC2 instances, Simple Storage Service (S3) buckets, Elastic Load Balancers (ELB), and databases in Amazon Relational Database Service (RDS) and DynamoDB continue to incur charges even though they are not in use. Use the cost tools mentioned above and audit monthly invoices to find unused resources you are being charged for.
  • Storage Tiering. AWS shared storage services such as S3 and Elastic File System (EFS) offer either automatic ("Intelligent Tiering") or user-defined lifecycle rules which migrate data to a cheaper storage class (e.g. "Infrequent Access") based on age (time since last accessed). Amazon FSx for Windows File Server and Amazon FSx for Lustre currently do not have the storage tiering feature.
  • Storage Lifecycle Management. Amazon Elastic Block Store (EBS) Snapshots can be used to protect boot and data volumes attached to EC2 instances. In many environments I've found that, because they are cheap, EBS snapshots are taken too frequently and retained indefinitely, which over time increases cost. Analyze application requirements such as Recovery Point Objective (RPO) and retention policy, and implement an EBS snapshot management policy using Amazon Data Lifecycle Manager, which can control the frequency of EBS snapshots and automatically delete them once they reach a specified age, thus meeting the RPO and retention requirements.
  • Serverless. AWS serverless offerings such as AWS Lambda, AWS Fargate, and Amazon API Gateway feature automatic scaling, built-in high availability, and a pay-for-use billing model that increases agility and optimizes cost. By running a serverless application, we reduce costs by eliminating the need to provision EC2 instances, and we reduce operational expense by eliminating the need to patch, secure, back up, and monitor the underlying compute resources. We also reduce our cybersecurity exposure and the labor expense of hardening operating systems and web servers (e.g. IIS, Apache, Nginx, etc.).
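
As a minimal sketch of the after-hours shutdown described in the Tags bullet above: assume an EventBridge schedule rule invokes this Lambda function each evening, the function's role has ec2:DescribeInstances and ec2:StopInstances permissions, and development instances carry the hypothetical Environment=Dev tag.

```python
# Minimal AWS Lambda handler sketch: stop running EC2 instances tagged
# Environment=Dev after business hours. Triggered by an EventBridge
# schedule rule; the tag convention is an assumption for illustration.
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Environment", "Values": ["Dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        i["InstanceId"]
        for r in response["Reservations"]
        for i in r["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```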

Tuesday, July 20, 2021

Application and Code Testing

Application Security (AppSec) Strategy. "Shift left" and incorporate security testing early in the software development lifecycle (SDLC) - providing the security in DevSecOps. What problems need to be addressed with application security? Software Composition Analysis (SCA), securing Infrastructure as Code (IaC), securing development infrastructure (e.g. source code repositories, container repositories, build systems), and ensuring the confidentiality, integrity, and availability of the application in production.

The following is a list of tool categories aimed at testing the security, stability, and resiliency of an application and/or its code base. I have a few more draft notes written about this in OneNote, Sticky Notes, and the SharePoint blog I had started migrating to last year (I've since abandoned that effort), so this post is by no means complete and will continue to be expanded as I consolidate those notes.

Software Composition Analysis (SCA). Analysis of open-source component dependencies to identify vulnerabilities. Not a line-by-line scan of code as with SAST. (A toy sketch illustrating the concept follows the vendor list below.)
  • Contrast Security
  • Cycode
  • MergeBase
  • ShiftLeft
  • Snyk
  • Veracode
  • WhiteSource
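
To illustrate the dependency-analysis idea (a toy sketch, not any vendor's product), the snippet below checks installed package versions against a made-up advisory list; real SCA tools query curated vulnerability databases such as the NVD.

```python
# Toy SCA illustration: flag installed packages whose versions appear in
# a hypothetical advisory list. The advisories dict is made up; real SCA
# tools pull from curated vulnerability databases.
from importlib import metadata

advisories = {  # package -> vulnerable versions (hypothetical)
    "requests": {"2.19.0", "2.19.1"},
    "urllib3": {"1.25.8"},
}

for dist in metadata.distributions():
    name = (dist.metadata["Name"] or "").lower()
    if name in advisories:
        status = "VULNERABLE" if dist.version in advisories[name] else "ok"
        print(f"{status}: {name} {dist.version}")
```
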
Pipeline Composition Analysis (PCA). Advances SCA to identify vulnerabilities in the software delivery pipeline.
  • Cycode
Software Bill of Materials. A software bill of materials is required per the White House Executive Order on Improving the Nation's Cybersecurity, May 12, 2021. Tools should be able to output a report in one of three formats: Software Package Data Exchange (SPDX), CycloneDX, or Software Identification (SWID) Tags (see https://fossa.com/blog/software-bill-of-materials-formats-use-cases-tools/). The SPDX specification has been published as ISO/IEC 5962:2021 and is recognized as the open standard for security, license compliance, and other software supply chain artifacts.
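
As a minimal sketch of consuming an SPDX JSON SBOM - for example, checking whether an affected component such as log4j appears in it - the file path below is a placeholder; "packages", "name", and "versionInfo" are standard SPDX JSON fields.

```python
# Minimal sketch: search an SPDX JSON SBOM for a component of interest.
# "sbom.spdx.json" is a placeholder path; "packages", "name", and
# "versionInfo" are standard SPDX JSON fields.
import json

with open("sbom.spdx.json") as f:
    sbom = json.load(f)

hits = [
    (pkg.get("name", ""), pkg.get("versionInfo", "unknown"))
    for pkg in sbom.get("packages", [])
    if "log4j" in pkg.get("name", "").lower()
]

for name, version in hits:
    print(f"found {name} {version}")
print(f"{len(hits)} matching package(s)")
```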

Interactive Application Security Testing (IAST)

Static Application Security Testing (SAST).
  • Checkmarx.
  • GitLab Ultimate 10.3. If you are using GitLab CI/CD, you can analyze your source code for known vulnerabilities using SAST. Supported languages and frameworks include: .NET, C/C++, Elixir (Phoenix), Go, Groovy, Java, JavaScript, Node.js, PHP, Python, Ruby on Rails, Scala, and TypeScript. The output is a SAST report artifact that can be included in a GitLab Security Dashboard.
  • Micro Focus Fortify Static Code Analyzer.
  • Parasoft. https://blog.executivebiz.com/2021/10/parasofts-software-security-testing-tool-gets-ok-for-use-on-dod-devt-programs/. 
  • ShiftLeft
  • Snyk. Integrates with 30 developer tools including six integrated development environments (IDEs) (JetBrains, Visual Studio, Eclipse, ...). Partners with Rapid7 for DAST. Call Friday 7/21/2021.
  • WhiteSource. SAST solution announced 2/16/2022 based on technologies acquired from Xanitizer and DefenseCode. Able to identify over 70 types of security flaws including OWASP Top 10 and SANS Top 25.
Dynamic Application Security Testing (DAST).
  • Checkmarx.
  • GitLab Ultimate 10.4. If you are using GitLab CI/CD, you can analyze your running web application(s) for known vulnerabilities using DAST. DAST uses the open-source tool OWASP ZAP (ZAProxy) to perform an analysis of your running web application. DAST can be configured to do a passive or an active scan (active scans will attempt to attack your application and thus provide a more extensive security report). The output is a DAST report artifact that can be included in a GitLab Security Dashboard.
  • Micro Focus Fortify WebInspect.

Container Scanning

  • Anjuna
  • Aqua Security
  • Bridgecrew
  • Grype. Anchore's open-source Grype vulnerability scanner for containers is generally available for DevOps teams running the latest version of the GitLab CI/CD platform. Grype leverages Syft libraries that employ deep inspection algorithms to create an accurate software bill of materials (SBOM) for an application and then runs a scan to identify vulnerabilities. That data is then surfaced within a GitLab workflow to advance adoption of DevSecOps best practices. (A minimal sketch of driving Grype from a script follows this list.)
  • Kubescape. ARMO's Kubescape tool, based on guidance from a 52-page joint report co-authored by the National Security Agency (NSA) and the Cybersecurity and Infrastructure Security Agency (CISA), can test whether Kubernetes clusters have been deployed securely. Kubescape is based on open-source software the company created to secure Kubernetes environments using the Open Policy Agent (OPA) framework being advanced under the auspices of the Cloud Native Computing Foundation (CNCF).
  • Lacework Labs
  • NeuVector
  • Rapid7
  • RapidFort
  • Snyk. Build time.
  • Sysdig
  • Tenable.io CS Scanner
  • Trivy
  • Twistlock (now owned by Palo Alto Networks). Run time.
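
As a minimal sketch of driving a scanner like Grype from a script (assuming grype is installed and on your PATH; the JSON schema shown - a "matches" array with nested "vulnerability" records - reflects recent Grype releases but should be verified against your installed version):

```python
# Minimal sketch: run the Grype container scanner and tally findings by
# severity. Assumes grype is installed and on PATH; verify the JSON
# schema ("matches" -> "vulnerability" -> "severity") for your version.
import json
import subprocess
from collections import Counter

image = "alpine:3.18"  # placeholder image name
result = subprocess.run(
    ["grype", image, "-o", "json"],
    capture_output=True, text=True, check=True,
)

report = json.loads(result.stdout)
severities = Counter(
    m["vulnerability"]["severity"] for m in report.get("matches", [])
)
print(dict(severities))
```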

Infrastructure as Code (IaC). Analysis of deployment templates, e.g. the JSON or YAML files used by AWS CloudFormation, Azure, GCP, Terraform, etc.
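
To illustrate the kind of check an IaC scanner performs, here is a toy example that flags CloudFormation security group ingress rules open to the world. It uses a JSON-format template to avoid CloudFormation's custom YAML tags; "template.json" is a placeholder path.

```python
# Toy IaC check: flag CloudFormation security group ingress rules that
# allow 0.0.0.0/0. "template.json" is a placeholder; real IaC scanners
# apply hundreds of policy checks across multiple template formats.
import json

with open("template.json") as f:
    template = json.load(f)

for name, resource in template.get("Resources", {}).items():
    if resource.get("Type") != "AWS::EC2::SecurityGroup":
        continue
    ingress = resource.get("Properties", {}).get("SecurityGroupIngress", [])
    for rule in ingress:
        if rule.get("CidrIp") == "0.0.0.0/0":
            print(f"{name}: port {rule.get('FromPort')} open to the world")
```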

Observability

  • Metrics
    • Amazon CloudWatch. Amazon CloudWatch Events and EventBridge for alerting. (A minimal sketch of publishing a custom metric follows this list.)
  • Events
  • Logs
    • Amazon CloudWatch. Amazon CloudWatch Events and EventBridge for alerting.
    • Splunk. Talal Balouch from SecuriGence is an expert.
  • Traces
    • AWS X-Ray. Works with Amazon Elastic Compute Cloud (EC2), Amazon Elastic Container Service (ECS), AWS Lambda, and AWS Elastic Beanstalk.
  • Analysis of logs, metrics, and traces
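
As a minimal sketch of the metrics bullet above, this publishes a custom application metric to CloudWatch, which an alarm or EventBridge rule can then alert on. The namespace, metric name, and dimension are placeholders; it assumes boto3 and AWS credentials are configured.

```python
# Minimal sketch: publish a custom application metric to CloudWatch.
# The namespace, metric name, and dimension values are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp/Orders",  # placeholder namespace
    MetricData=[
        {
            "MetricName": "FailedOrders",
            "Value": 3,
            "Unit": "Count",
            "Dimensions": [{"Name": "Environment", "Value": "Dev"}],
        }
    ],
)
```
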
Chaos Engineering. An approach to application fault tolerance that intentionally provokes errors in live deployments. It incorporates an element of randomness to mimic the unpredictability of real-world outages.
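
To make the idea concrete, here is a toy fault-injection decorator; the 20% failure rate and ConnectionError are arbitrary choices, and real chaos tooling (e.g., AWS Fault Injection Simulator or Gremlin) injects faults at the infrastructure level rather than in application code.

```python
# Toy chaos-engineering illustration: randomly inject failures into a
# function so retry/fallback behavior can be exercised. The failure rate
# and exception type are arbitrary choices for the sketch.
import random
from functools import wraps

def chaos(failure_rate=0.1):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("chaos: injected failure")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@chaos(failure_rate=0.2)
def fetch_order(order_id):
    return {"id": order_id, "status": "shipped"}

if __name__ == "__main__":
    for i in range(5):
        try:
            print(fetch_order(i))
        except ConnectionError as err:
            print(err)
```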

Failure Mode and Effects Analysis (FMEA). https://asq.org/quality-resources/fmea.

Vendors/Products:
  • Acunetix
  • Akeero
  • ARMO
  • Azure Monitor
  • Aqua Security
  • Bridgecrew
  • BluBracket
  • Checkmarx
  • Chronosphere.io
  • Contrast Security
  • Cycode
  • Datadog
  • Data Theorem
  • env0
  • Fugue. Provides free masterclasses on cloud security.
  • GitHub
  • GitLab
  • HCL Software
  • Instana
  • Invicti
  • JFrog
  • Lacework Labs
  • Lightstep
  • Logz.io
  • MergeBase
  • Micro Focus
  • Moogsoft
  • NeuVector
  • New Relic
  • NTT Application Security (acquired WhiteHat Security)
  • Onapsis
  • Parasoft
  • Rapid7
  • RapidFort
  • ShiftLeft
  • SonarQube
  • Sonatype
  • Snyk
  • StackState
  • Synopsys Software Integrity Group
  • Sysdig
  • Veracode. Forrester Research recently released The Forrester Wave™: Software Composition Analysis, Q3 2021 report, with Veracode ranked as a strong performer for software composition analysis (SCA). Evaluating 10 SCA vendors against 37 criteria, the report is helpful for security professionals selecting the SCA vendor that best suits their organization's needs. Also a 2021 Gartner Peer Insights Customers' Choice for AST.
  • Traceable.AI. API security.
  • Trilio. Data protection for Kubernetes. See the GigaOm Radar for Kubernetes Data Protection report.
  • WhiteSource
Other tools to look up: Datadog, PagerDuty, Gremlin

Security criteria: CIS Security Benchmarks, Cyber.mil STIGs, SRGs, UCF.

Beyond the application: DDoS protection, Web Application Firewall (WAF), packet analysis (e.g. Snort, VPC Flow Logs), Incident Response Plan (IRP), Business Continuity Plan (BCP), Disaster Recovery Plan (DRP), Privileged Account Management (PAM), Zero Trust Network Architecture (ZTNA), ...

Tuesday, April 27, 2021

Use of Personal Email for Government Business

Each government entity most likely has its own policies; however, the overarching requirement is that using personal email for government business must comply with the Federal Records Act (44 U.S.C. Chapter 31). The National Archives and Records Administration (NARA) is charged with enforcing the Federal Records Act and Office of Management and Budget (OMB) requirements, which include email and instant messages. NARA Bulletin 2014-06, Guidance on Managing Email (September 15, 2014), addressed to heads of Federal agencies, states that email sent on personal email accounts pertaining to agency business and meeting the definition of Federal records (44 U.S.C. 3301) must be filed in an agency recordkeeping system.

References:

DoD CIO Web and Social Media Policies