The 7-Point Checklist for Integrating Splunk Observability Cloud into Your AWS Environment

So, you have decided that it is time to say goodbye to outdated legacy monitoring systems. You are tired of relying on tools that only analyze samples of data and cannot keep up with the speed of your AWS environment (those containers do spin up and spin down quickly, after all). It is time to embrace a new solution that can provide your team with the critical insights and support needed to promptly identify, triage, and resolve behavioral abnormalities – Splunk Observability Cloud.

If your team has been contemplating an observability methodology, we encourage you to commit. The longer you wait to integrate a tool like Splunk Observability Cloud into your AWS environment, the more likely you are to miss the critical information needed to quickly identify and resolve issues, which could directly impact your team’s performance and your company’s bottom line.

Strong performance can have a significant impact on your systems’ ability to convert end users into paying customers. A recent study found that decreasing page load time from eight seconds to two can increase conversion rates by as much as 74 percent. And on the other end of the spectrum, a critical application failure can cost between $500,000 and $1 million.

But before you start layering Splunk onto your AWS platform and recoding your software for observability, you need to get organized.

The 7-Point Splunk Observability Cloud Success Checklist  

At TekStream, we’ve had the privilege of assisting many organizations in implementing Splunk Observability Cloud. We know what it takes for a successful implementation, and our team of experienced professionals has put together a list of the seven must-haves of any successful Splunk Observability Cloud integration into AWS and accompanying on-premises systems.

1. Name Your Implementation Destination 

Our number one piece of implementation advice? Start with the end in mind.  

Think ahead to the results and insights that will have the most significant impact on your team and organization. Ask yourself questions like:  

  • – What aspect of your business would benefit from observability? 
  • – What processes do you need to have visibility into? 
  • – What would the business benefit be if you had that information today? 
  • – What information do you need to be able to determine the health of those processes? 

Observability is very much a purpose-driven methodology, similar to the DevOps methodology. If there is a specific result you are trying to achieve through observability, then you need to integrate observability in a way that aligns with those goals.  

Identify your desired end-state, then work backwards to develop your implementation plan.  

2. Understand What Must Change to Prepare Your Organization for Observability 

Look at your current system and identify what changes you’ll need to make to become observability-ready.  You undoubtedly will have to make some changes to your code to support Splunk Observability Cloud. But code is not the only thing that may need to change.  

Existing processes, response protocols, and even team mindsets are all aspects of your organization that will need to evolve to embrace observability. Lay out a plan for how you will introduce observability and earn team buy-in before you think about implementing Splunk Observability Cloud.  

3. Determine Who in Your Organization Needs to Be Involved in the Implementation

Yes, your developers will be involved. However, successful Splunk Observability Cloud implementation goes beyond any individual developer. Everyone involved in supporting the business process should be part of the transformation, including site reliability engineers (SREs), DevOps engineers, leadership, and more.  

Create a list of these individuals and match them to the specific implementation tasks needed for a successful integration. Be sure to include executive sponsorship and leadership support as well as who will be managing the project. Use this list to identify any gaps or overlaps in responsibilities.

4. Identify Any Third-Party Systems That Need to Be Considered

Your first-party systems are not the only technologies that may need to be updated. If your organization uses any third-party tools, you will want to ensure those systems are also integrated into your observability platform.

Start with an audit of your third-party systems. Be sure to consider the limitations and supporting framework of each platform. Is it possible to integrate the current third-party system with Splunk Observability Cloud? Is it necessary?  

Once you complete your assessment, affirm that your timeframe and roadmap align with your findings. You may need to account for additional time, support, or resources.   

Additional Consideration: To accurately assess the ease of integration of your third-party tools, you may need to ask your vendors for additional access to the system. Check with each third-party platform to see if you have an opportunity to peek under the hood and gain insight into their system.  

5. Put Together a Clear Implementation Timeframe 

Do you have a specific date by which your observability platform must be operational?

Of course, nearly every organization will say, “as soon as possible.” However, we believe that a successful timeframe considers the scope of the implementation lift as well as the resources your organization can allocate.  

Align your timeframe directly to your roadmap by including sub-goals, milestones, deliverables, and other accountability metrics. Not only will this help you understand if your ideal timeline is too aggressive for the scope of the endeavor, but it also will help your team determine if additional resources are needed to complete the project within the desired timeframe.  

6. Clarify a Specific Approach to Your Implementation 

How are you planning on rolling out Splunk Observability Cloud? Are you only adding observability to new code? Are you rolling out one application or process at a time? Are you recoding all technologies before implementing them across your entire AWS environment?  

Some of these implementation options may be more practical and useful than others. Take the time to investigate the feasibility and bottom-line impact of each approach. Do not get distracted recoding systems that will not help your organization reach its performance monitoring goals.  

7. Choose an Implementation Partner

If the above sounds daunting, know that you do not have to go it alone. The right partner will guide your team through each of these points, lending their proven experience and process to better ensure a successful Splunk Observability Cloud implementation.  

At TekStream, we work with our clients to form a complete understanding of their observability goals, as well as the systems and processes that will need to be updated to achieve the desired outcome.

From there, we will develop a timeline and implementation roadmap that takes you from where you are today to where you want to be with observability. Along the way, we will provide our strategic recommendations and insights across the project aspects that are imperative to a successful AWS integration.

Get Started Today 

Ready to abandon your legacy monitoring tools in favor of a system that can keep up with the ephemeral nature of AWS? We can help. TekStream has proven experience assisting companies with their adoption of observability. Our team of dedicated experts stands ready to offer our support. Together, we will craft an implementation strategy that aligns directly with the needs of your team. Reach out to us today to get started. 

Interested in learning more about observability and how Splunk Observability Cloud can help you monitor and improve your AWS platform? Download our latest eBook:

Unlock Observability: 3 Ways Splunk Observability Cloud Works with AWS to Improve Your Monitoring Capabilities

According to the numbers, there are over 1,000,000 active AWS customers. In fact, there is a good chance that, like Netflix, Facebook, and LinkedIn, you, too, are using Amazon Web Services to support all or a portion of your cloud-based platforms and systems. Cloud technologies like AWS provide a host of benefits including scalability, cost-efficiencies, and reliability. But the very nature of cloud processing also introduces new layers of complexity. One critical added complexity is in monitoring cloud systems to identify and resolve issues. Traditional alert monitoring tools were not designed to address the ephemeral nature of cloud processing.  

Fortunately, Splunk has brought a full observability suite to market that integrates seamlessly with AWS’s portfolio of services to provide AWS users and their DevOps teams with the tools they need to improve the performance of their cloud-based systems. Below, we have laid out a brief primer on observability and paired that overview with three ways that the Splunk Observability Cloud works with AWS to streamline your monitoring.  

Introduction to the Splunk Observability Cloud 

While there is no shortage of observability tools on the market, Splunk’s acquisition of SignalFx in 2019, its subsequent additions to the platform, and its existing AWS integrations make it a powerful choice for organizations that use AWS as well as other leading cloud solutions like Microsoft Azure and Google Cloud Platform.

Splunk offers a fully integrated set of observability products designed to bring all metric, trace, and log telemetry into a single source of truth. Additionally, you can seamlessly merge this data with other Splunk Enterprise data, such as security, IT, and DevOps data, for the most comprehensive and integrated view of your environment.

The Splunk Observability Cloud comprises several monitoring and observability products, including:

  • – Splunk Infrastructure Monitoring: AI-driven infrastructure monitoring for hybrid or multi-cloud environments.  
  • – Splunk APM: NoSample™ full-fidelity application performance monitoring and AI-driven directed troubleshooting. 
  • – Splunk On-Call: Incident response and collaboration.   
  • – Splunk RUM (coming soon): Works with Splunk APM to provide end-to-end full-fidelity visibility by providing metrics about the actual user experience as seen from the browser. 
  • – Splunk Log Observer: Built specifically for SREs, DevOps engineers, and developers who need a logging experience that empowers their troubleshooting and debugging processes. 

Three Benefits of Integrating Splunk Observability Cloud with AWS 

For organizations already using AWS, Splunk works seamlessly with Amazon to provide DevOps teams with out-of-the-box visibility across their complete AWS environment.  With Splunk Observability Cloud, all data is shown within a single system, making it easy for your team to identify issues across any of the AWS tools you utilize.  

As data passes from your AWS services into your Splunk environment, it is analyzed in real time across the full Splunk Observability Cloud. The result is comprehensive reporting and monitoring that allows you to identify and respond to issues the moment they occur – regardless of your platform’s size. 

[Graphic: A Venn-diagram-style graphic displaying the features of AWS and Splunk.]

While there are several efficiencies and benefits to be gained by layering Splunk Observability Cloud onto your AWS environment, here are three that stick out to our team:  

1. Global Monitoring of Amazon Container Services 

Splunk’s Infrastructure Monitoring tool (part of the Observability Cloud) is built specifically to monitor the ephemeral and dynamic nature of container environments. Through this tool, customers gain key insight into the performance characteristics of Amazon ECS, Amazon EKS, and the containerized applications running on them.

Out-of-the-box dashboards and reporting provide teams with the information they need to capture immediate value from the platform.

2. Real-Time Full-Fidelity Tracing

Are you tired of having to sample data or work with limiting data ingestion caps? Splunk Observability Cloud includes two powerful tools that, together with AWS, provide teams with end-to-end full-fidelity tracing. 

First, Splunk APM utilizes OpenTelemetry-enabled instrumentation to ingest all trace data. No more sampling. Splunk APM captures, analyzes, and stores 100% of available trace data. Once captured, Splunk Real User Monitoring (RUM) can tie that trace data to specific user actions within your AWS environment.

These systems work in tandem to provide your team with rich visibility into the bugs and bottlenecks that could harm your user experience.  
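For a sense of what OpenTelemetry-enabled instrumentation looks like in practice, here is a minimal Python sketch. The collector endpoint (a locally running OpenTelemetry Collector that forwards trace data on to Splunk APM) and the service and span names are assumptions for illustration, and the exact package layout varies by SDK version:

    # Minimal tracing sketch with the OpenTelemetry Python SDK.
    # Assumes an OpenTelemetry Collector on localhost:4317 that
    # forwards trace data on to Splunk APM (illustrative endpoint).
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

    trace.set_tracer_provider(TracerProvider())
    trace.get_tracer_provider().add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint="localhost:4317", insecure=True))
    )

    tracer = trace.get_tracer("checkout-service")

    # Every request is traced rather than sampled, so the backend
    # receives 100% of the spans this service emits.
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", "12345")
        # ... business logic goes here ...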

3. Automated Incident Response 

Not only does Splunk Observability Cloud provide real-time visibility across your complete cloud stack, but it also can reduce your team’s mean time to recovery (MTTR) through automated responses. Through the platform, DevOps teams can set automated remediations that fire without waiting on human intervention.

Built-in artificial intelligence and machine learning capabilities further improve the efficiency and reduce the latency of automated responses.

Enhance Your AWS Platform with Splunk Observability Cloud 

If your legacy monitoring systems cannot keep up with the complexities and intricacies of AWS, it’s time to make a shift in your team’s mindset towards observability. By making the structural changes necessary to facilitate observability and embracing robust tools like Splunk Observability Cloud, your team will gain the capacity to improve the performance of your AWS environment.  

Interested in learning more about observability and how Splunk Observability Cloud can help you monitor and improve your AWS platform? Download our latest eBook:

Unlock Observability: 3 Ways Splunk Observability Cloud Works with AWS to Improve Your Monitoring Capabilities

Four Small Java/Coffee Tips That Result in Better Performance (and Better Taste)

By: William Phelps | Senior Technical Architect

 

It’s really no secret that it’s the little things that often yield the biggest results.

As you are reading this article, chances are that you are perhaps drinking a cup of coffee. There are quite a few small things about coffee that you may or may not know.

  • – Simply adding cream to your coffee will keep the coffee warmer about 20 percent longer. This occurrence is similar in nature to the effect that allows warm water to be frozen into ice cubes faster than cold water. Try it and see.
  • – Adding a small pinch of salt to your coffee will cut down on the acidity of the coffee, and results in a much smoother cup of coffee. Add the salt to the coffee pot if you brew by the pot, or to your cup if you like a single-serve variety.  (I personally use kosher salt for this trick, not regular table salt… you’ll likely use too much with the table salt.)
  • – Coffee grounds are an excellent fertilizer source for house plants. I’d imagine the amount found in a regular single-serving cup or Keurig “K” cup would be ideal for small potted plants.
  • – The first webcam was implemented by Cambridge University to monitor a coffee pot. The coffee was disappearing very quickly, so a webcam was used to monitor when the pot was finished brewing so people could get a cup.

Rather than going to the extreme of putting a camera on a pot, some folks opt to go out for their coffee fix. Have you ever noticed that a cup of coffee brewed in-store at Starbucks tastes so much better than coffee brewed at home from the same bag of grounds you bought in that shop? This is likely due to a handful of reasons, but the biggest is probably the finely tuned process that the average barista follows.

The very same tuning principle applies to your Java program. “Write once, run anywhere” code is still very dependent on the environment in which it’s deployed. Think of Java as the coffee, whereas the Java Virtual Machine (“JVM”) is the coffee pot that comes in the Java Development Kit (“JDK”). A better “pot,” and a better process for handling said pot, will yield more consistent results. There are a lot of “coffee pot” manufacturers, but some basic setup principles are the same for all of them.

  • – “Avoid installing Java into a file system location with spaces in the path.” Primarily a Windows issue, the Windows Java installer suggests a default installation path in the “Program Files” directory. This becomes a problem when programs start looking for jars to add in the class loader. Unless the program was coded to wrap the classpath in quotes, your program will fail in very odd ways that are hard to debug. While this is predominantly a Windows problem, the same thing can happen on other operating systems. A coffee pot will have a proper storage “space” in your home or office. Ditch the spaces, however, in your install paths.
  • – “Install the JVM/JDK into a generic path location.” While this may seem counterintuitive, once the basic installation is done, the process of updating the JVM/JDK in the future becomes very simple. Seeing the version of the JDK in a file path is a weird comfort for some people, but having to edit numerous files to update the location reference is fraught with issues, and in some cases may be impossible. It’s simply easier for other folks to find the “working” coffee pot if it’s always stored in the same place. Multiple pots can exist on a server, but this approach makes it clearer which pot is being used.
  • – “Change/update the random number generator source.” Have you ever gotten tired of waiting for the coffee pot to heat up? Sometimes a JVM is really sluggish because of insufficient entropy on a server. This is a somewhat complex topic, but in essence, some operating systems rely on basic I/O to generate input for random number generation. If the generation is slow, and multiple processes are waiting for a number, your program can seemingly hang, when in reality it’s just waiting in the queue.

There is a small change that can be made in the JVM to change the random number generation process to look at another source. In the JDK’s jre/lib/security folder, find the java.security file and search for the line that sets “securerandom.source”.

Add a “.” to the device path as shown below, and restart your processes that use the JVM.
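Here is a sketch of the edit (the exact default line varies by JDK version, so treat the “before” value as illustrative):

    # In <JDK_HOME>/jre/lib/security/java.security
    # Before:
    securerandom.source=file:/dev/urandom
    # After (the extra "." works around a JDK quirk that otherwise
    # routes this setting to the blocking /dev/random device):
    securerandom.source=file:/dev/./urandom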

This trick has been shown to significantly improve startup times in WebLogic Server.

  • – “Reinstall any certificates from the old JVM to the new JVM.” Finally, if the coffee pot is getting an upgrade, some of the “attachments” may still be needed. This is true of certificates that may have been installed in the cacerts file of the old JVM. Before upgrading the JDK, make a copy of the existing cacerts file. Then you can reimport the certificates by merging the deltas from the old cacerts file into the new version.
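One typical form of that merge uses keytool’s -importkeystore option. The paths and the default “changeit” passwords below are assumptions; adjust them to your installation:

    # Merge trusted certificates from the old JVM's cacerts file into the new one.
    keytool -importkeystore \
      -srckeystore /opt/java/jdk-old/jre/lib/security/cacerts -srcstorepass changeit \
      -destkeystore /opt/java/jdk-new/jre/lib/security/cacerts -deststorepass changeit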

Run interactively, this command prompts before overwriting any alias that already exists in the new cacerts file, so in effect it only imports the certificates that are missing from it. This is really handy when it’s not known which exact certificates have changed over time.

It’s the little things that make both a smooth cup of coffee and a smooth-running JVM.

 

Want more Java tips? Contact us today!

Solution-Driven CMMC Implementation – Solve First, Ask Questions Later

We’re halfway through 2020, and we’re seeing customers begin to implement and level up within the Cybersecurity Maturity Model Certification (CMMC) framework. A cybersecurity framework for contractors doing business with the Department of Defense (DoD), CMMC will eventually become the singular standard for Controlled Unclassified Information (CUI) cybersecurity.

An answer to the limitations of NIST 800-171, CMMC requires attestation by a Certified Third-Party Assessor Organization (C3PAO). Once CMMC is in full effect, every company in the DoD’s supply chain, including Defense Industrial Base (DIB) contractors, will need to be certified to work with the department.

As such, DIB contractors and members of the larger DoD supply chain find themselves asking: when should my organization start the compliance process, and what is the best path to achieving CMMC compliance?

First, it is important to start working toward compliance now. Why?

  • – Contracts requiring CMMC certification are expected as early as October, and if you wait to certify until you see an eligible contract, it will be too late.
  • – You can currently treat CMMC compliance as an “allowable cost.” The cost of becoming compliant (tools, remediation, preparation) can be expensed back to the DoD. The amount of funding allocated to defray these expenses, and the allowable thresholds, are unclear, but the overall cost is likely to exceed initial estimates, and as with any federal program, going back for additional appropriations can be challenging.

As far as the best path to achieving CMMC goes – the more direct, the better.

Understanding Current Approaches to CMMC Compliance

CMMC is new enough that many organizations have yet to go through the compliance process. Broadly, we’ve seen a range of recommendations, most of which start with a heavy upfront lift of comprehensive analysis.

The general process is as follows:

  1. Assess current operations for compliance with CMMC, especially as it relates to its extension of NIST 800-171 standards.
  2. Document your System Security Plan (SSP) to identify what makes up the CUI environment. The plans should describe system boundaries, operation environments, the process by which security requirements are implemented, and the relationship with and/or connections to other systems.
  3. Create a logical network diagram of your network(s), including third-party services, remote access methods, and cloud instances.
  4. List an inventory of all systems, applications, and services: servers, workstations, network devices, mobile devices, databases, third-party service providers, cloud instances, major applications, and others.
  5. Document Plans of Action and Milestones (POAMs). The POAMs should spell out how system vulnerabilities will be resolved and existing deficiencies corrected.
  6. Execute POAMs to achieve full compliance through appropriate security technologies and tools.

This assessment-first approach, while functional, is not ideal.

In taking the traditional approach to becoming CMMC compliant, the emphasis is put on analysis and process first; the tools and technologies to satisfy those processes are secondary. By beginning with a full compliance assessment, you spend time guessing where your compliance issues and gaps are. And by deprioritizing technology selection, potentially relying upon multiple tools, you risk granular, fragmented processes that increase the problem of swivel-chair compliance (i.e., having to go to multiple tools and interfaces to establish, monitor, and maintain compliance and the required underlying cybersecurity). All of this creates more work for your compliance and security team, which then has to architect an integrated, cohesive compliance solution.

Then, the whole process has to be redone every time a contractor’s compliance certification is up.

Big picture, having to guess at your compliance gaps upfront can lead to analysis paralysis. By trying to analyze so many different pieces of the process and make sure they’re compliant, it is easy to become overwhelmed and feel defeated before even starting.

With NIST 800-171, even though it has been in effect since January 1, 2018, compliance across the DIB has not been consistent or widespread. CMMC is effectively forcing the compliance mandate by addressing key loopholes and caveats in NIST 800-171:

  • – You can no longer self-certify.
  • – You can no longer rely on applicability caveats.
  • – There is no flexibility for in-process compliance.

So, if you’ve been skirting the strictness of compliance previously, know that you can no longer do that with CMMC. If you are overwhelmed with where to even begin, we recommend you fully dive into and leverage a tool that can be a single source of truth for your whole process – Splunk.

Leverage a Prescriptive Solution and Implementation Consultancy to Expedite CMMC Compliance

Rather than getting bogged down in analysis paralysis, accelerate your journey to CMMC compliance by implementing an automated CMMC monitoring solution like Splunk. Splunk labels itself “the Data-to-Everything Platform.” It is purpose-built to act as a big data clearinghouse for all relevant enterprise data, regardless of context. In this case, as the leading SIEM provider, Splunk is uniquely able to provide visibility into compliance-related events, since the overlap with security-related data is comprehensive.

Generally, the process will begin with ingesting all available information across your enterprise and then implementing automated practice compliance. Through that implementation process, gaps are naturally discovered. If there is missing or unavailable data, processes can then be defined as “gap fillers” to ensure compliance.

The automated practice controls are then leveraged as Standard Operating Procedures (SOPs) that are repurposed into applicable System Security Plans (SSPs), Plans of Action and Milestones (POAMs), and business plans. In many cases, much of the specific content for these documents can be generated from the dashboards that we deliver as a part of our CMMC solution.

The benefits realized by a solution-driven approach, rather than an analysis-driven one, are many:

  1. Starting with a capable solution reduces the overall time to compliance.
  2. Gaps are difficult to anticipate, as they are often not discovered until the source of data is examined (e.g., one cannot presume that data includes a user, an IP address, or a MAC address until the data is exposed). Assumption-driven analysis is cut short.
  3. Automated practice dashboards and the collection of underlying metadata (e.g., authorized ports, machines, users, etc.) can be harvested for document generation.
  4. Having a consolidated solution for overall compliance tracking across all security appliances and technologies provides guidance and visibility to C3PAOs, quelling natural audit curiosity creep, and shortening the attestation cycle.

Not only does this process get you past the analysis paralysis barrier, but it reduces non-compliance risk and the effort needed for attestation. It also helps keep you compliant – and out of auditors’ crosshairs.

Let Splunk and TekStream Get You Compliant in Weeks, Not Months

Beyond the guides and assessments consulting firms are offering for CMMC, TekStream has a practical, proven, and effective solution to get you compliant in under 30 days.

By working with TekStream and Splunk, you’ll get:

  • – Installation and configuration of Splunk, CMMC App, and Premium Apps
  • – Pre/Post CMMC Assessment consulting work to ensure you meet or exceed your CMMC level requirements
  • – Optional MSP/MSSP/compliance monitoring services to take away the burden of data management, security, and compliance monitoring
  • – Ongoing monitoring for each practice on an automated basis, summarized in a central auditing dashboard
  • – Comprehensive TekStream ownership of your Splunk instance, including implementation, licensing, support, outsourcing (compliance, security, and admin), and resource staffing

If you’re already a Splunk user, this opportunity is a no-brainer. If you’re new to Splunk, this is the best way to procure best-in-class security, full compliance, and an operational intelligence platform, especially when you consider the financial benefit of allowable costs.

If you’d like to talk to someone from our team, fill out the form below.

CMMC Maturity – Understanding What is Needed to Level Up

At its core, the Cybersecurity Maturity Model Certification (CMMC) is designed to protect mission-critical government systems and data and has the primary objective of protecting the government’s Controlled Unclassified Information (CUI) from cyber risk.

CMMC goes beyond NIST 800-171 to require strict adherence to a complex set of standards, an attestation, and a certification by a third-party assessor.

The CMMC framework has five maturity (or “trust”) levels. As you likely know, the certification level your organization needs to reach is going to be largely situational and dependent on the kinds of contracts you currently have and will seek out in the future.

The CMMC compliance process is still so new that many organizations are just prioritizing what baseline level they need to reach. For most, that’s level 3. With that said, there is certainly value to gain from an incremental approach to leveling up.

Why Seek CMMC Level 4 or 5 Compliance, Anyway?

First, let’s define our terms and understand the meaning behind the jump from Level 3 up to 4 or 5. CMMC trust levels 3-5 are defined as:

Level 3: Managed

  • – 130 practices (including all 110 from NIST 800-171 Rev. 1)
  • – Meant to protect CUI in environments that hold and transmit that information
  • – All contractors must establish, maintain, and resource a plan that includes their identified domain

Level 4: Reviewed

  • – Additional 26 practices
  • – Proactive; focuses on the protection of CUI from Advanced Persistent Threats (APTs) and encompasses a subset of the enhanced security requirements from Draft NIST SP 800-171B (as well as other cybersecurity best practices). In Splunk terms, that means a shift from monitoring and maintaining compliance to proactively responding to threats. This puts an emphasis on SOAR tools such as Splunk Phantom to automate security threat response in specific practice categories.
  • – All contractors should review and measure their identified domain activities for effectiveness

Level 5: Optimizing

  • – Additional 15 practices
  • – An advanced and proactive approach to protect CUI from APTs
  • – Requires a contractor to standardize and optimize process implementation across their organization. In Splunk terms, this means expansion to more sophisticated threat identification algorithms to include tools such as User Behavior Analytics.

The benefits of taking an incremental approach and making the jump up to Level 4 (and potentially 5 later) are three-fold:

  1. It can make your bids more appealing. Even if the contracts that you are seeking only require Level 3 compliance, having the added security level is an enticing differentiator in a competitive bidding market.
  2. You can open your organization up to new contracts and opportunities that require a higher level of certification and are often worth a lot more money.
  3. It puts in place the tools and techniques to automatically respond to security-related events. This shortens response times to threats, shortens triage, increases accuracy and visibility, automates tasks that would typically be done manually by expensive security resources, and makes you safer.

Plus, with “allowable costs” in the mix, by defraying the spend on compliance back to the DoD, you get the added financial benefit as well.

How Do You Move Up to the Higher CMMC Trust Levels?

Our recommendation is to start small and at a manageable level. Seek the compliance level that matches your current contract needs. As was highlighted earlier, for most, that is Level 3.

To have reached Level 3, you are already using either a single technology solution (like Splunk) or a combination of other tools.

Getting to Level 4 and adhering to the additional 26 practices is going to be an incremental process of layering in another tool, technique, or technology on top of all your previous work. It’s additive.

For TekStream clients, that translates to adding Splunk Phantom to your Splunk Core and Enterprise Security solution. It’s not a massive or insurmountable task, and it is a great way to defray costs associated with manual security tasks and differentiate your organization from your fellow DIB contractors.

TekStream Can Help You Reach the Right Certification Level for You

Ready to start your compliance process? Ready to reach Level 3, Level 4, or even Level 5? Acting now positions you to meet DoD needs immediately and opens the door for early opportunities. See how TekStream has teamed up with Splunk to bring you a prescriptive solution and implementation consultancy.

If you’d like to talk to someone from our team, fill out the form below.

CMMC Response – Managing Security & Compliance Alerts & Response for Maturity Levels 4 and 5

The Cybersecurity Maturity Model Certification (CMMC) is here to stay. There are increased complexities that come with the new compliance model as compared to NIST 800-171, and organizations have to be prepared not only to navigate the new process but also to reach the level that makes the most sense for them.

Level 3 (Good Cyber Hygiene, 130 Practices, NIST SP 800-171 + New Practices) is the most common compliance threshold that Defense Industrial Base (DIB) contractors are seeking out. However, there can be significant value in increasing to a Level 4 and eventually a Level 5, especially if you’re leveraging the Splunk for CMMC Solution.

Thanks to the DoD’s “allowable costs” model (where you can defray costs of becoming CMMC compliant back to the DoD), reaching Level 4 offers significant value at no expense to your organization.

Even if you’re not currently pursuing contracts that mandate Level 4 compliance, by using TekStream and Splunk’s combined CMMC solution to reach Level 4, you end up with:

  • – A winning differentiator against the competition when bidding on Level 3 (and below) contracts
  • – The option to bid on Level 4 contracts worth considerably more money
  • – Automation of security tasks with Splunk ES & Phantom
  • – Excellent security posture with Splunk ES & Phantom

And all of these benefits fall under the “allowable costs” umbrella.

The case for reaching Level 4 is clear, but there are definitely complexities as you move up the maturity model. For this blog, we want to zero in on a specific one: the alert-and-response setup needed at Level 4 or 5, and how a SOAR solution like Splunk Phantom can get you there.

How Does Splunk Phantom Factor into Levels 4 and 5?

Level 4 is 26 practices above Level 3 and 15 practices below Level 5. Level 4 focuses primarily on protecting CUI and security practices that surround the detection and response capabilities of an organization. Level 5 is centered on standardizing process implementation and has additional practices to enhance the cybersecurity capabilities of the organization.

Both Level 4 and Level 5 are considered proactive, and 5 is even considered advanced/progressive.

Alert and incident response are foundational to Levels 4 and 5, and Splunk Phantom is a SOAR (Security Orchestration, Automation, and Response) tool that helps DIB contractors focus on automating the alert process and responding as necessary.

You can think about Splunk Phantom in three parts:

  1. SOC Automation: Phantom gives teams the power to execute automated actions across their security infrastructure in seconds, rather than the hours or more it would take manually. Teams can codify workflows into Phantom’s automated playbooks using the visual editor or the integrated Python development environment (see the sketch after this list).
  2. Orchestration: Phantom connects existing security tools to help them work better together, unifying the defense strategy.
  3. Incident Response: Phantom’s automated detection, investigation, and response capabilities mean that teams can reduce malware dwell time, execute response actions at machine speed, and lower their overall mean time to resolve (MTTR).
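To make the playbook idea concrete, here is a schematic Python sketch in the style of Phantom’s phantom.rules playbook API. The action name (“block ip”), asset name (“firewall”), and IP value are illustrative assumptions, not a prescribed configuration:

    # A schematic Phantom playbook sketch: contain an offending IP as
    # soon as an event container triggers this playbook, then log the
    # outcome. Action, asset, and parameter values are illustrative.
    import phantom.rules as phantom

    def on_start(container):
        # Fire a containment action against the firewall asset
        # immediately, with no manual ticket in the loop.
        phantom.act(
            "block ip",
            parameters=[{"ip": "203.0.113.10"}],  # illustrative value
            assets=["firewall"],                  # illustrative asset name
            callback=block_ip_callback,
        )

    def block_ip_callback(action, success, container, results, handle):
        # Record whether the action succeeded so analysts and auditors
        # can see exactly what the automation did.
        phantom.debug("block ip succeeded: {}".format(success))

    def on_finish(container, summary):
        # Playbook exit point; nothing further to do in this sketch.
        return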

These features allow contractors to home in on their ability to respond to incidents.

By using Phantom’s workbooks, you’re able to put playbooks into reusable templates, as well as divide and assign tasks among members and document operations and processes. You’re also able to build custom workbooks as well as use included industry-standard workbooks. This is particularly useful for Level 5 contractors as a focus of Level 5 is the standardization of your cybersecurity operations.

TekStream and Splunk’s CMMC Solution

With TekStream and Splunk’s CMMC Solution, our approach is to introduce as much automation as possible to the security & compliance alerts & response requirements of Levels 4 and 5.

Leveraging Splunk Phantom, we’re able to introduce important automation and workbook features to standardize processes, free up time, and make the process of handling, verifying, and testing incident responses significantly more manageable.

If you’d like to talk to someone from our team, fill out the form below.