Auditing Apps for Splunk 8.0

By: Eric Howell | Splunk Consultant

Introduction

The release of Splunk 8.0 marked a pivotal change in the functional workings of Splunk: the platform transitioned from Python 2 to Python 3. This shift was driven by Python 2 reaching its end of life on January 1, 2020, when upstream support was dropped. Because of this change, administrators working to maintain a supported, healthy environment will need to perform a comprehensive review of app compatibility with the new Splunk version, and plan app upgrades accordingly, in addition to the usual upgrade procedures.

Audit Existing Applications

Compile List of Installed Apps and TAs

In each environment, make an inventory of the deployed Splunk architecture components:

  • Search Heads – taking into account any stand-alone instances that might not be in the primary cluster (example: Enterprise Security)
  • Indexers
  • Monitoring Console
  • Deployer
  • Deployment Server
  • License Master
  • Cluster Master
  • Heavy Forwarders – if applicable

For each of these components, compile a list of the installed and active applications (apps) and Technology Add-Ons (TAs). This can be done by running the following SPL query in the Search UI:

| rest /services/apps/local
| stats count(title) by splunk_server label version eai:acl.app author
| rename label AS App, version AS Version, eai:acl.app AS Base, author AS Author
| table splunk_server, App, Base, Author, Version

Review Installed Apps on SplunkBase

The above search provides the information needed to compare the installed version of an app against what is published on Splunkbase, including the app name, author, version, and the Splunk server it is installed on. This information will need to be compared against what is found by locating the app within Splunkbase at https://Splunkbase.Splunk.com.

As an example, using the Splunk Add-on for Microsoft Windows:

App: Splunk Add-on for Microsoft Windows
Author: Splunk
Version: 4.8.2
Installed: Yes

Searching this add-on in SplunkBase leads us to the following link: https://splunkbase.splunk.com/app/742/

The SplunkBase page for each app contains information regarding app version (adjustable via the dropdown indicated in the next image) and compatible versions of Splunk Enterprise. The details tab often contains app-specific information (frequently linking to Splunk supported documentation) and can provide insight into the appropriate upgrade path for the app. These upgrade paths are critical to follow due to major adjustments often made in newer iterations of any app. These iterative changes can cause negative impact if not accounted for: loss of data, non-functional commands, and new formats for dashboards.

If your environment includes apps that are not found in SplunkBase (most likely custom-built apps), use your best judgment. The Upgrade Readiness App, which is discussed later in this document, will provide further insight into likely XML- or Python-related complications found in any apps that are scanned. As advised later in this document, create a Dev environment to test these upgrades prior to releasing them to Prod; custom apps are perfect candidates for additional testing outside of Prod.

Fig 1. SplunkBase page breakdown

 

Compile the list of installed apps and TAs in the environment and cross-reference them with SplunkBase to provide insight:

  • Does the current version of the App support the version of Splunk you are upgrading to?
  • What is the upgrade path for the App?
  • Does the app still benefit from ongoing development?
  • What Apps can be removed from the environment or will cause conflict once the upgrade has been performed?

Run Splunk Platform Upgrade Readiness App

SplunkBase link: https://SplunkBase.Splunk.com/app/4698/

Running the Upgrade Readiness App will provide further insight into whether your apps are ready for the upgrade to Splunk 8. As Python 2 is no longer supported upstream, continued use of apps reliant on Python 2 can leave your environment vulnerable to intrusion or bad actors. The Upgrade Readiness App will advise which of your apps contain Python files that are not dual-compatible or strictly compatible with Python 3, and it will also flag XML files that use Splunk’s Advanced XML, which has been sunset and replaced by Simple XML. Additional details can be found here:

https://docs.Splunk.com/Documentation/UpgradeReadiness/latest/Use/About

After running the Upgrade Readiness App, each of the installed apps on the Splunk instance should return a result, such as Passed, Warning, or Skipped.

Please note that this app will need to be installed on each instance of Splunk for comprehensive review.
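For a quick manual spot check alongside the Upgrade Readiness App, you can also list the Python files bundled with your installed apps from the command line. This is only a rough sketch; it assumes a default Linux install path of /opt/splunk, and it only tells you where Python code lives, not whether that code is Python 3 compatible:

# count Python files per installed app (the app name is the 6th path segment)
find /opt/splunk/etc/apps -name "*.py" | awk -F/ '{print $6}' | sort | uniq -c | sort -rn

Apps with a large number of Python files are the ones most worth scrutinizing in SplunkBase and in the Upgrade Readiness App results.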

Preparing Next Steps

After documenting the findings from the steps above, the path forward should emerge:

  • Identify which apps/TAs can be upgraded and which require an upgrade
  • Identify Apps that are no longer supported
  • Remove and/or disable Apps that are no longer relevant in the environment or will cause issues post-upgrade.
  • Identify Apps that have not been thoroughly documented and will require additional testing (ideally in a Dev environment).

Once the plan has been developed with the steps above, separate the apps by the appropriate configuration management tool/server (Cluster Master, Deployer, etc.).
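If it helps to see where each management tier keeps the apps it distributes, the default locations on a Linux install (again assuming /opt/splunk) are shown below; listing them is an easy way to group your apps by the server that pushes them:

ls /opt/splunk/etc/deployment-apps   # Deployment Server: apps pushed to forwarders
ls /opt/splunk/etc/master-apps       # Cluster Master: apps pushed to indexer peers
ls /opt/splunk/etc/shcluster/apps    # Deployer: apps pushed to search head cluster members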

Performing App Upgrades

The major contributing factor to lost functionality when upgrading to Splunk 8.0+ is apps that rely heavily on Python files that are not dual-compatible with Python 2 and Python 3. This is discussed in greater detail below.

To maintain a functional, supported version of the Enterprise Security app throughout the upgrade process, it will likely be necessary to upgrade Apps as you upgrade Splunk. Several apps are heavily Python-dependent in their operation and will feature a Python-version change between app versions.

These Python version-specific apps, if they are being leveraged in your environment, should be upgraded during the same scheduled change window as the Splunk Enterprise upgrade to 8.0+. Otherwise, they will cease to function correctly (or at all) due to their reliance on Python 3. The apps that require this specific process are listed here:

  • Splunk Enterprise Security App ver 6.1
  • Splunk Machine Learning Toolkit ver 5.0
  • Deep Learning Toolkit ver 3.0
  • Python for Scientific Computing ver 2.0

 

Want to learn more about auditing apps in Splunk 8.0? Contact us today!

How to Connect AWS and Splunk to Ingest Log Data

By: Don Arnold | Splunk Consultant

 

Though a number of cloud solutions have popped up over the past 10 years, Amazon Web Services, better known simply as AWS, seems to be taking the lead in cloud infrastructure.  Companies that are using AWS have either migrated their entire infrastructure or are using on-premises systems with some AWS services in a hybrid solution.  Whichever is the case, the AWS environment is within the security boundary, should be part of the System Security Plan (SSP), and needs to include Continuous Monitoring, which is a requirement in most security frameworks.  Splunk meets the Continuous Monitoring requirement, including for instances and services within AWS.

Data push

There are two separate ways to get data from AWS into Splunk.  The first is to “push” data from AWS to Splunk using Kinesis Firehose.  This requires IP connectivity between AWS and a Splunk Heavy Forwarder, an HTTP Event Collector (HEC) token, and the “Splunk Add-on for Amazon Kinesis Firehose” from Splunkbase.

Splunk Heavy Forwarder Setup

  1. Ensure the organization firewall has a rule to allow connectivity from AWS to the Splunk Heavy Forwarder over HTTPS.
  2. Go to Splunkbase.com and download/install the “Splunk Add-on for Amazon Kinesis Firehose” – Restart the Splunk Heavy Forwarder
  3. Create an HTTP Event Collector token:
    1. Go to Settings > Data Inputs > HTTP Event Collector
    2. Select New Token
    3. Enter a name for your token. Example:  “AWS”.  Select Next
    4. For Source type, click Select > Structured and choose “aws:firehose:json”. For App Context choose “Add-on for Kinesis Firehose”. Select Review
    5. Verify the settings and select Submit
    6. Go back to Settings > Data Inputs > HTTP Event Collector and select Global Settings
    7. For “All Tokens” select Enabled, ensure “Enable SSL” is selected, and the “HTTP port number” is set to 8088. Select Save.
    8. Copy the “Token Value”; you will need it for the AWS Kinesis Firehose setup and for the optional connectivity test shown below.
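Before moving on to the AWS side, you can optionally confirm that the token accepts events with a quick curl test from any host that can reach the Heavy Forwarder. This is only a sketch; replace the hostname and token with your own values, and note that -k skips certificate validation, which is acceptable for a quick test only:

curl -k https://your-heavy-forwarder:8088/services/collector/event \
  -H "Authorization: Splunk YOUR-HEC-TOKEN-VALUE" \
  -d '{"event": "HEC connectivity test", "sourcetype": "aws:firehose:json"}'

A response of {"text":"Success","code":0} indicates the token and port are working.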

AWS Kinesis Firehose Setup

  1. Log in to AWS and go to the Kinesis service and select the “Get Started” button.
  2. On the top right you will see “Deliver streaming data with Kinesis Firehose Delivery Streams.” Select the “Create Delivery Stream” button.
  3. Give your delivery stream a name. Under Source, choose “Direct PUT or other sources”.  Select the “Next” button.
  4. Select “Disabled” for both Data transformation and Record format conversion.
  5. For Destination, select “Splunk”. For Splunk cluster endpoint, enter the URL, with port 8088, of your Splunk Heavy Forwarder.  For Splunk endpoint type, select “Raw endpoint”.  For Authentication token, enter the Splunk HTTP Event Collector token value created in the Splunk Heavy Forwarder setup.
  6. For S3 backup, select an S3 bucket. If one does not exist, you can create one by selecting “Create New”.  Select Next.
  7. Scroll down to Permissions and click “Create new or choose” button. Choose an existing IAM role or create one.  Click Allow to return to the previous menu.  Select Next.
  8. Review the settings and select Create Delivery Stream.
  9. You will see a message stating “Successfully created delivery stream…”.

Test the Connection

  1. It is recommended that test data be used to verify the new connection by choosing the delivery stream and selecting “Test with Demo Data”. Go to step 2 and select “Start sending demo data”.  You will see the delivery stream sending demo data to Splunk.
  2. Log into Splunk and enter index=main sourcetype=aws:firehose:json to verify events are streaming into Splunk.
  3. If no events show up, go back and verify all steps have been configured properly and firewall rules are set to allow AWS HTTPS events through to the Splunk Heavy Forwarder.

Send Production Data

  1. Go to AWS Kinesis and select the delivery stream you set up. The status for the delivery stream should display “Active”.
  2. Go to Splunk and verify events are ingesting: index=main sourcetype=aws:firehose:json. Verify the timestamp on the events is correct.

Data pull

The second way to get data into Splunk from AWS is to have Splunk “pull” data via a REST API call.

AWS Prerequisites Setup

  1. There are AWS service prerequisites that must be set up prior to performing REST API calls from the Splunk Heavy Forwarder. The prerequisites can be found in this document:  https://docs.splunk.com/Documentation/AddOns/released/AWS/ConfigureAWS
  2. Ensure all prerequisites are configured in AWS prior to configuring the “Splunk Add-on for AWS” on the Splunk Heavy Forwarder.

Splunk Heavy Forwarder Setup

  1. Ensure the organization firewall has a rule to allow connectivity from the Splunk Heavy Forwarder to AWS.
  2. Go to Splunkbase.com and install the “Splunk Add-on for AWS” – Restart the Splunk Heavy Forwarder.
  3. Launch the “Splunk Add-on for AWS” on the Splunk Heavy Forwarder.
  4. Go to the Configurations tab.
    1. Account tab: Select Add. Give the connection a name, enter the Key ID and Secret Key from the AWS IAM user account and select Add.

(To get the Key ID and Secret Key, go to AWS IAM > Access management > Users > (select user) > Security credentials > Create access key > Access Key ID and Secret Access key)

    2. IAM Role tab: Select Add.  Give the Role a name, enter the Role ARN, and select Add.

(To get the Role ARN, go to AWS IAM > Access management > Roles > (select role).  At the top you will see the Role ARN)

  5. Go to the Inputs tab. Select Create New Input and select the type of data input from AWS to ingest.  Each selection is different, and all will use the Account and IAM Role created in the previous steps.  Go through the setup, select the AWS region, source type, and index, and select Save.

Test the Connection

  1. Log into Splunk and enter index=main sourcetype=aws* to verify events are streaming into Splunk. Verify the sourcetype matches the one you selected in the input.
  2. If no events show up, go back and verify all steps have been configured properly and firewall rules are set to allow AWS HTTPS events through to the Splunk Heavy Forwarder.

With the popularity of AWS, more environments are starting to host hybrid solutions for a myriad of reasons.  With that, using Splunk to maintain Continuous Monitoring is easily achieved with 2 different approaches for monitoring the expanded security boundary into the cloud.  TekStream Solutions has Splunk and AWS engineers on staff with years of experience and can assist you in connecting your AWS environment to Splunk.

References

https://docs.splunk.com/Documentation/AddOns/released/Firehose/About

https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureFirehose

https://docs.splunk.com/Documentation/AddOns/released/AWS/Description

https://docs.splunk.com/Documentation/AddOns/released/AWS/ConfigureAWS

 

Want to learn more about connecting AWS and Splunk to ingest log data? Contact us today!

Splunk, AWS, and the Battle for Burst Balance

By: Karl Cepull | Senior Director, Operational Intelligence

 

Splunk and AWS: two of the most adopted tools of our time. Splunk provides fantastic insight into your company’s data at an incredible pace, and AWS offers an affordable alternative to on-premises or even other cloud environments. Together, these two tools make one of the best combinations for demonstrating the value in your data. But there are many systems that need to come together to make all of this work.

In AWS, you have multiple types of storage options available to you for your Splunk servers with their Elastic Block Storage (EBS) offering. There are multiple drive types that you can use – e.g. “io1”, “gp2”, and others. The ‘gp2’ volume type is perhaps the most common one, particularly because it is usually the cheapest. However, when using this volume type, you need to be aware of Burst Balance.

Burst Balance can be a wonderful system. At its core, Burst Balance allows your volume’s disk IOPS to burst higher when needed, without you needing to pay for guaranteed IOPS all of the time (like you do with the “io1” volume type). What are IOPS? The acronym stands for Input/Output Operations Per Second and represents the number of reads and writes the volume can perform each second. Allowing the IOPS to burst can come in handy when there is a spike in traffic to your Splunk Heavy Forwarder or Indexer, for example. However, this system does have a downside that can actually cause the volume to stop completely!

The way Burst Balance works is on a ‘credit’ system. Every second, the volume earns 3 ‘credits’ for every GB of configured size. For example, if the volume is 100GB, you would earn 300 credits every second. These credits are then used for reads and writes – 1 credit for each read or write. When the volume isn’t being used heavily, it will store up these credits (up to a cap of 5.4 million), and when the volume gets a spike of traffic, the credits are then used to handle the spike.

However, if your volume is constantly busy, or sees a lot of frequent spikes, you may not earn credits at a quick enough rate to keep up with the number of reads and writes. Using our above example, if you had an average of more than 300 reads and writes per second, you wouldn’t earn credits fast enough to keep up. What happens when you run out of credits? The volume stops. Period. No reads or writes occur until you earn more credits (again 3/GB/sec). So, all you can do is wait. That can be a very bad thing, so it is something you need to avoid!

The good news is that AWS has tools that you can use to monitor and alert if your Burst Balance gets low. You can use CloudWatch to monitor the Burst Balance percentage, and also set up an alert if it gets low. To view the Burst Balance percentage, one way is to click on the Volume in the AWS console, then go to the Monitoring tab. One of the metrics is the Burst Balance Percentage, and you can click to view it in a bigger view:

As you can see in the above example, the Burst Balance has been at 100% for most of the last 24 hours, with the exception of around 9pm on 3/19, where it briefly dropped to about 95% before returning to 100%. You can also set up an alarm to alert you if the Burst Balance percentage drops below a certain threshold.
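The same metric can also be pulled and alarmed on from the AWS CLI. The sketches below use a placeholder volume ID, region, time window, and SNS topic; substitute your own values:

aws cloudwatch get-metric-statistics \
  --namespace AWS/EBS \
  --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time 2020-03-19T00:00:00Z --end-time 2020-03-20T00:00:00Z \
  --period 300 --statistics Average \
  --region us-east-1

And to be notified before the balance runs out, an alarm along these lines fires when the average BurstBalance stays below 20% for two consecutive 5-minute periods:

aws cloudwatch put-metric-alarm \
  --alarm-name gp2-burst-balance-low \
  --namespace AWS/EBS \
  --metric-name BurstBalance \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --statistic Average --period 300 --evaluation-periods 2 \
  --threshold 20 --comparison-operator LessThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:notify-ops \
  --region us-east-1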

So, what can you do if the Burst Balance is constantly dipping dangerously low (or running out!)? There are three main solutions:

  1. You can switch to another volume type that doesn’t use the Burst Balance mechanism, such as the “io1” volume type. That volume type has guaranteed, consistent IOPS, so you don’t need to worry about “running out”. However, it is around twice the cost of the “gp2” volume type, so your storage costs could double.
  2. Since the rate at which you earn Burst Balance credits is based on the size of the volume (3 credits/GB/second), if you increase the size of the volume, you will earn credits faster. For example, if you increase the size of the volume by 20%, you will earn credits 20% faster. If you are coming up short, but only by a little, this may be the easiest and most cost-effective option, even if you don’t actually need the additional storage space (a CLI sketch for this resize follows this list).
  3. You can modify your volume usage patterns to either reduce the number of reads and writes, or perhaps reduce the spikes and spread out the traffic more evenly throughout the day. That way, you have a better chance that you will have enough credits when needed. This may not be an easy thing to do, however.
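For option 2, growing a gp2 volume is a single CLI call (again, the volume ID and new size are placeholders). Keep in mind that after the EBS resize you still need to extend the partition and filesystem inside the operating system before Splunk can use the extra space:

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 120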

In summary, AWS’s Burst Balance mechanism is a very creative and useful way to give you performance when you need it, without having to pay for it when you don’t. However, if you are not aware of how it works and how it could impact your environment, it can suddenly become a crippling issue. It pays to understand how this works, how to monitor and alert on it, and options to avoid the problem. This will help to ensure your Splunk environment stays running even in peak periods.

Want to learn more? Contact us today!

Using and Understanding Basic Subsearches in Splunk

By: Brent Mckinney | Splunk Consultant

A subsearch in Splunk is a unique way to stitch together results from your data. Simply put, a subsearch is a way to use the results of one search as input to another. Subsearches contain an inner search, whose results are then used as input to filter the results of an outer search. The inner search always runs first, and it’s important to note that by default subsearches return a maximum of 10,000 results and will only run for up to 60 seconds.

First, it’s good to understand when to use subsearches and when NOT to use them. Generally, you want to avoid using subsearches when working with large result sets. If your inner search produces a lot of results, applying them as input to your outer search can be inefficient. When working with large result sets, it will likely be more efficient to create fields using the eval command and compute aggregations using the stats command. Because subsearches are computationally more expensive than most search types, it is ideal to have an inner search that produces a small set of results and use it to filter a larger outer search.

Example:

Suppose we have a network that should only be accessed from those local to the United States. We’re interested in seeing a list of users who’ve successfully accessed our network from outside of the United States. We could build one search to give us a list of IP addresses from outside of the U.S., and another search could be used to give a list of all accepted connections. A subsearch could then be used to stitch these results together and help us obtain a comprehensive list.

First, we’d need to decide what our inner results should be, a list of all accepted connections, or a list of all non-U.S. IPs? Using the latter as an inner search would probably work best, as it should return a much smaller set of results.

Our inner search would look something like this, using the iplocation command to give us more info on the IP address field.

index=security sourcetype=linux_secure | stats count by ip_address | iplocation ip_address | search Country!="United States" | fields ip_address

This essentially results in a list of IP addresses that are not from the U.S.

From here, we want to create another search to return a list of all accepted connections. This will be our outer search, and look something like this:

index=security sourcetype=linux_secure connection_status=accepted | dedup ip_address | iplocation ip_address | table ip_address, Country

To Combine these, we can use the following subsearch format. Inner searches are always surrounded by square brackets, and begin with the search keyword. Here’s what our final search would look like:

index=security sourcetype=linux_secure connection_status=accepted
[ search index=security sourcetype=linux_secure | stats count by ip_address | iplocation ip_address | search Country!="United States" | fields ip_address ]
| dedup ip_address
| iplocation ip_address
| table ip_address, Country

Here, our inner search (enclosed in square brackets) runs first and returns the IP addresses that do not belong to the U.S. Those results are then used to filter the outer search, which returns results for connections that were accepted by the network. Finally, the end of the outer search builds a table with the IP address and country for each result.
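Under the hood, Splunk substitutes the inner search’s results into the outer search as a set of field/value conditions. With two hypothetical non-U.S. addresses returned by the inner search, the outer search effectively becomes:

index=security sourcetype=linux_secure connection_status=accepted
    ( ( ip_address="203.0.113.10" ) OR ( ip_address="198.51.100.7" ) )
| dedup ip_address
| iplocation ip_address
| table ip_address, Country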

We have now obtained a list of IP addresses that have successfully accessed our network, along with the country that it was accessed from, all through the power of a Splunk subsearch!

Tips for troubleshooting if your subsearch is not producing desired results:

  1. Ensure that the syntax is correct. Make sure that the entire inner search is enclosed in square brackets, and that it is placed in the appropriate place of the outer search.
  2. Run both searches by themselves to ensure that they return the expected results independent of each other. Each search may need to be tuned a bit before combining them into a subsearch. Keep in mind that the results of the inner search are used as a filter for the outer search.
  3. You can check the Splunk Job Inspector to see if anything looks out of the ordinary. The normalizedSearch property is especially helpful, as it shows the outer search after the subsearch results have been expanded into it.

This covers the basics of subsearches within Splunk. It’s worth noting, however, that there are more advanced commands available to use with subsearches to achieve specific results. These commands include append, which can be used to combine searches that run over different time periods, and join, which can take a field from an inner search and correlate that field to events from an outer search. These use similar syntax and are worth trying out once you have the basics down!
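As a rough sketch of the join variant, reusing the same hypothetical index, sourcetype, and field names from the example above, the following correlates accepted connections with their country of origin:

index=security sourcetype=linux_secure connection_status=accepted
| join type=inner ip_address
    [ search index=security sourcetype=linux_secure
      | iplocation ip_address
      | search Country!="United States"
      | fields ip_address, Country ]
| table ip_address, Country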

Want to learn more about basic subsearches in Splunk? Contact us today!

Solution-Driven CMMC Implementation – Solve First, Ask Questions Later

We’re halfway through 2020 and we’re seeing customers begin to implement and level up within the Cybersecurity Maturity Model Certification (CMMC) framework. Offering a cyber framework for contractors doing business with the DoD, CMMC will eventually become the singular standard for Controlled Unclassified Information (CUI) cybersecurity.

An answer to limitations of NIST 800-171, CMMC requires attestation by a Certified Third-Party Assessor Organization (C3PAO). Once CMMC is in full effect, every company in the Department of Defense’s (DoD’s) supply chain, including Defense Industrial Base (DIB) contractors, will need to be certified to work with the Department of Defense.

As such, DIB contractors and members of the larger DoD supply chain find themselves asking: when should my organization start the compliance process, and what is the best path to achieving CMMC compliance?

First, it is important to start working toward compliance now. Why?

  • – Contracts requiring CMMC certification are expected as early as October, and if you wait to certify until you see an eligible contract, it’s too late.
  • – You can currently treat CMMC compliance as an “allowable cost.” The cost of becoming compliant (tools, remediation, preparation) can be expensed back to the DoD. The amount of funding allocated to defray these expenses and the allowable thresholds are unclear, but the overall cost is likely to exceed initial estimates, and, as with any federal program, going back for additional appropriations can be challenging.

As far as the best path to achieving CMMC goes – the more direct, the better.

Understanding Current Approaches to CMMC Compliance

CMMC is new enough that many organizations have yet to go through the compliance process. Broadly, we’ve seen a range of recommendations, most of which start with a heavy upfront lift of comprehensive analysis.

The general process is as follows:

  1. Assess current operations for compliance with CMMC, especially as it relates to its extension of NIST 800-171 standards.
  2. Document your System Security Plan (SSP) to identify what makes up the CUI environment. The plans should describe system boundaries, operation environments, the process by which security requirements are implemented, and the relationship with and/or connections to other systems.
  3. Create a logical network diagram of your network(s), including third-party services, remote access methods, and cloud instances.
  4. List an inventory of all systems, applications, and services: servers, workstations, network devices, mobile devices, databases, third-party service providers, cloud instances, major applications, and others.
  5. Document Plans of Action and Milestones (POAMs). The POAMs should spell out how system vulnerabilities will be solved for and existing deficiencies corrected.
  6. Execute POAMs to achieve full compliance through appropriate security technologies and tools.

This assessment-first approach, while functional, is not ideal.

In taking the traditional approach to becoming CMMC compliant, the emphasis is put on analysis and process first; the tools and technologies to satisfy those processes are secondary. By beginning with a full compliance assessment, you are spending time guessing where your compliance issues and gaps are. And by deprioritizing technology selection, potentially relying on multiple tools, you risk ending up with granular processes that increase the problem of swivel-chair compliance (e.g., having to go to multiple tools and interfaces to establish, monitor, and maintain compliance and the required underlying cybersecurity). This actually creates more work for your compliance and security team when you later have to architect an integrated, cohesive compliance solution.

Then, the whole process has to be redone every time a contractor’s compliance certification is up.

Big picture, having to guess at your compliance gaps upfront can lead to analysis paralysis. By trying to analyze so many different pieces of the process and make sure they’re compliant, it is easy to become overwhelmed and feel defeated before even starting.

With NIST 800-171, even though it has been in effect since January 1, 2018, compliance across the DIB has not been consistent or widespread. CMMC is effectively forcing the compliance mandate by addressing key loopholes and caveats in NIST 800-171:

  • – You can no longer self-certify.
  • – You can no longer rely on applicability caveats.
  • – There is no flexibility for in-process compliance.

So, if you’ve been skirting the strictness of compliance previously, know you can no longer do that with CMMC, and are overwhelmed with where to even begin, we recommend you fully dive into and leverage a tool that can be a single source of truth for your whole process – Splunk.

Leverage a Prescriptive Solution and Implementation Consultancy to Expedite CMMC Compliance

Rather than getting bogged down in analysis paralysis, accelerate your journey to CMMC compliance by implementing an automated CMMC monitoring solution like Splunk. Splunk labels itself “the data to everything platform.” It is purpose-built to act as a big data clearinghouse for all relevant enterprise data regardless of context. In this case, as the leading SIEM provider, Splunk is uniquely able to provide visibility to compliance-related events as the overlap with security-related data is comprehensive.

Generally, the process will begin with ingesting all available information across your enterprise and then implementing automated practice compliance. Through that implementation process, gaps are naturally discovered. If there is missing or unavailable data, processes can then be defined as “gap fillers” to ensure compliance.

The automated practice controls are then leveraged as Standard Operating Procedures (SOPs) that are repurposed into applicable System Security Plans (SSPs), Plans of Action and Milestones (POAMs), and business plans. In many cases, much of the specific content for these documents can be generated from the dashboards that we deliver as a part of our CMMC solution.

The benefits realized by a solution-driven approach, rather than an analysis-driven one, are many:

  1. Starting with a capable solution reduces the overall time to compliance.
  2. Gaps are difficult to anticipate, as they are often not discovered until the source of data is examined (e.g. one cannot presume that data includes a user, or an IP address, or a MAC address until the data is exposed). Assumption-driven analysis is foreshortened.
  3. Automated practice dashboards and the collection of underlying metadata (e.g authorized ports, machines, users, etc.) can be harvested for document generation.
  4. Having a consolidated solution for overall compliance tracking across all security appliances and technologies provides guidance and visibility to C3PAOs, quelling natural audit curiosity creep, and shortening the attestation cycle.

Not only does this process get you past the analysis paralysis barrier, but it reduces non-compliance risk and the effort needed for attestation. It also helps keep you compliant – and out of auditors’ crosshairs.

Let Splunk and TekStream Get You Compliant in Weeks, Not Months

Beyond the guides and assessments consulting firms are offering for CMMC, TekStream has a practical, proven, and effective solution to get you compliant in under 30 days.

By working with TekStream and Splunk, you’ll get:

  • – Installation and configuration of Splunk, CMMC App, and Premium Apps
  • – Pre/Post CMMC Assessment consulting work to ensure you meet or exceed your CMMC level requirements
  • – Optional MSP/MSSP/compliance monitoring services to take away the burden of data management, security, and compliance monitoring
  • – Ongoing monitoring for each practice on an automated basis and summarized in a central auditing dashboard.
  • – Comprehensive TekStream ownership of your Splunk instance, including implementation, licensing, support, outsourcing (compliance, security, and admin), and resource staffing.

If you’re already a Splunk user, this opportunity is a no-brainer. If you’re new to Splunk, this is the best way to procure best-in-class security, full compliance, and an operational intelligence platform, especially when you consider the financial benefit of allowable costs.

Visit the CMMC Resource Page

If you’d like to talk to someone from our team, fill out the form below.

CMMC Maturity – Understanding What is Needed to Level Up

At its core, the Cybersecurity Maturity Model Certification (CMMC) is designed to protect mission-critical government systems and data and has the primary objective of protecting the government’s Controlled Unclassified Information (CUI) from cyber risk.

CMMC goes beyond NIST 800-171 to require strict adherence to a complex set of standards, an attestation, and a certification by a third-party assessor.

The Cybersecurity Model has a framework with five maturity (or “trust”) levels. As you likely know, the certification level your organization needs to reach is going to be largely situational and dependent on the kinds of contracts you currently have and will seek out in the future.

The CMMC compliance process is still so new that many organizations are just prioritizing what baseline level they need to reach. For most, that’s level 3. With that said, there is certainly value to gain from an incremental approach to leveling up.

Why Seek CMMC Level 4 or 5 Compliance, Anyway?

First, let’s define our terms and understand the meaning behind the jump from Level 3 up to 4 or 5. CMMC trust levels 3-5 are defined as:

Level 3: Managed

  • – 130 practices (including all 110 from NIST 800-171 Rev. 1)
  • – Meant to protect CUI in environments that hold and transmit classified information
  • – All contractors must establish, maintain, and resource a plan that includes their identified domain

Level 4: Reviewed

  • – Additional 26 practices
  • – Proactive; focuses on the protection of CUI from Advanced Persistent Threats (APTs) and encompasses a subset of the enhanced security requirements from Draft NIST SP 800-171B (as well as other cybersecurity best practices). In Splunk terms, that means a shift from monitoring and maintaining compliance to proactively responding to threats. This puts an emphasis on SOAR tools such as Splunk Phantom to automate security threat response in specific practice categories.
  • – All contractors should review and measure their identified domain activities for effectiveness

Level 5: Optimizing

  • – Additional 15 practices
  • – An advanced and proactive approach to protect CUI from APTs
  • – Requires a contractor to standardize and optimize process implementation across their organization. In Splunk terms, this means expansion to more sophisticated threat identification algorithms to include tools such as User Behavior Analytics.

The benefits of taking an incremental approach and making the jump up to Level 4 (and potentially 5 later) are three-fold:

  1. It can make your bids more appealing. Even if the contracts that you are seeking only require Level 3 compliance, having the added security level is an enticing differentiator in a competitive bidding market.
  2. You can open your organization up to new contracts and opportunities that require a higher level of certification and are often worth a lot more money.
  3. It puts in place the tools and techniques to automatically respond to security-related events. This shortens response times to threats, shortens triage, increases accuracy and visibility, automates tasks that would typically be done manually by expensive security resources, and makes you safer.

Plus, with “allowable costs” in the mix, by defraying the spend on compliance back to the DoD, you get the added financial benefit as well.

How Do You Move Up to the Higher CMMC Trust Levels?

Our recommendation is to start small and at a manageable level. Seek the compliance level that matches your current contract needs. As was highlighted earlier, for most, that is Level 3.

To have reached Level 3, you are already using a single technology solution (like Splunk) or a combination of other tools.

Getting to Level 4 and adhering to the additional 26 practices is going to be an incremental process of layering in another tool, technique, or technology on top of all your previous work. It’s additive.

For TekStream clients, that translates to adding Splunk Phantom to your Splunk Core and Enterprise Security solution. It’s not a massive or insurmountable task, and it is a great way to defray costs associated with manual security tasks and differentiate your organization from your fellow DIB contractors.

TekStream Can Help You Reach the Right Certification Level for You

Ready to start your compliance process? Ready to reach Level 3, Level 4, or even Level 5? Acting now positions you to meet DoD needs immediately and opens the door for early opportunities. See how TekStream has teamed up with Splunk to bring you a prescriptive solution and implementation consultancy.

Visit the CMMC Resource Page

If you’d like to talk to someone from our team, fill out the form below.

CMMC Response – Managing Security & Compliance Alerts & Response for Maturity Levels 4 and 5

The Cybersecurity Maturity Model Certification (CMMC) is here and staying. There are increased complexities that come with the new compliance model as compared to NIST 800-171, and organizations have to be prepared to not only navigate the new process but also reach the level that makes the most sense for them.

Level 3 (Good Cyber Hygiene, 130 Practices, NIST SP 800-171 + New Practices) is the most common compliance threshold that Defense Industrial Base (DIB) contractors are seeking out. However, there can be significant value in increasing to a Level 4 and eventually a Level 5, especially if you’re leveraging the Splunk for CMMC Solution.

Thanks to the DoD’s “allowable costs” model (where you can defray costs of becoming CMMC compliant back to the DoD), reaching Level 4 offers significant value at no expense to your organization.

Even if you’re not currently pursuing contracts that mandate Level 4 compliance, by using TekStream and Splunk’s combined CMMC solution to reach Level 4, you end up with:

  • – A winning differentiator against the competition when bidding on Level 3 (and below) contracts
  • – The option to bid on Level 4 contracts worth considerably more money
  • – Automating security tasks with Splunk ES & Phantom
  • – Excellent security posture with Splunk ES & Phantom

And all of these benefits fall under the “allowable costs” umbrella.

The case for reaching Level 4 is clear, but there are definitely complexities as you move up the maturity model. For this blog, we want to zero in on a specific complexity — the alert and response set up needed to be at Level 4 or 5 and how a SOAR solution like Splunk Phantom can get you there.

How Does Splunk Phantom Factor into Levels 4 and 5?

Level 4 is 26 practices above Level 3 and 15 practices below Level 5. Level 4 focuses primarily on protecting CUI and security practices that surround the detection and response capabilities of an organization. Level 5 is centered on standardizing process implementation and has additional practices to enhance the cybersecurity capabilities of the organization.

Both Level 4 and Level 5 are considered proactive, and 5 is even considered advanced/progressive.

Alert and incident response are foundational to Levels 4 and 5, and Splunk Phantom is a SOAR (Security Orchestration, Automation, and Response) tool that helps DIB contractors focus on automating the alert process and responding as necessary.

You can think about Splunk Phantom in three parts:

  1. SOC Automation: Phantom gives teams the power to execute automated actions across their security infrastructure in seconds, rather than the hours+ it would take manually. Teams can codify workflows into Phantom’s automated playbooks using the visual editor or the integrated Python development environment.
  2. Orchestration: Phantom connects existing security tools to help them work better together, unifying the defense strategy.
  3. Incident Response: Phantom’s automated detection, investigation, and response capabilities mean that teams can reduce malware dwell time, execute response actions at machine speed, and lower their overall mean time to resolve (MTTR).

The above features of Phantom allow contractors to home in on their ability to respond to incidents.

By using Phantom’s workbooks, you’re able to put playbooks into reusable templates, as well as divide and assign tasks among members and document operations and processes. You’re also able to build custom workbooks as well as use included industry-standard workbooks. This is particularly useful for Level 5 contractors as a focus of Level 5 is the standardization of your cybersecurity operations.

TekStream and Splunk’s CMMC Solution

With TekStream and Splunk’s CMMC Solution, our approach is to introduce as much automation as possible to the security & compliance alerts & response requirements of Levels 4 and 5.

Leveraging Splunk Phantom, we’re able to introduce important automation and workbook features to standardize processes, free up time, and make the process of handling, verifying, and testing incident responses significantly more manageable.

Visit the CMMC Resource Page

If you’d like to talk to someone from our team, fill out the form below.

Troubleshooting Your Splunk Environment Utilizing Btool

By: Chris Winarski | Splunk Consultant

 

Btool is a utility created and provided within the Splunk Enterprise download, and when it comes to troubleshooting your .conf files, Btool is your friend. From a technical standpoint, Btool shows you the merged view of the .conf files as they are written to disk at the time of execution. However, this may not reflect what Splunk is actually using at that moment, because Splunk runs off of the settings loaded in memory; for changes to a .conf file to be read from disk into memory, you must restart that Splunk instance or force Splunk to reload the .conf files. This blog is focused primarily on a Linux environment, but if you would like more information on how to go about this in a Windows environment, feel free to inquire below! Below are some use cases for troubleshooting with Btool.

 

Btool checks disk NOT what Splunk has in Memory

Let’s say you just changed an inputs.conf file on a forwarder, adding a sourcetype to the incoming data.

The next step would be to change to the $SPLUNK_HOME/bin directory.

($SPLUNK_HOME = where you installed splunk, best practice is /opt/splunk)

Now once in the bin directory, you will be able to use the command:

 

./splunk btool inputs list

This will output every inputs.conf setting that is currently saved to disk on that machine, showing which value takes precedence for each attribute. This merged view is what Splunk loads into memory when it restarts, which is why the running instance needs to be restarted for the “sourcetype” change above to take effect. If we don’t restart the instance, Splunk will have no idea that we edited a .conf file and will not use our added attribute.

The above command shows us that our change was saved to disk, but in order for Splunk to utilize this attribute, we still have to restart the instance.

 

./splunk restart

Once restarted, the merged settings that Btool shows are loaded into memory and describe how Splunk is behaving at that given time.

 

Writing Btool results to a file

The above example simply prints the results to the console, whereas the command below runs Btool and writes all of the returned text for the inputs.conf files in your Splunk instance to a file in your /tmp folder.

 

./splunk btool inputs list > /tmp/btool_inputs.txt

 

Where do these conf files come from?

When running the normal Btool command above, we return ALL the settings in all inputs.conf files for the entire instance; however, we can’t tell which inputs.conf file each setting is defined in. This can be done by adding a --debug parameter.

 

./splunk btool inputs list --debug
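Because the --debug output prefixes each line with the path of the .conf file it came from, piping it through grep is a quick way to find where a particular stanza or setting is defined. For example (the "monitor" string here is just an illustrative filter):

./splunk btool inputs list --debug | grep monitor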

 

Organizing the Btool legibility

When printing out the long list of conf settings, they seem to be all smashed together. Using the ‘sed’ command, we are able to pretty it up a bit with some simple regex.

 

./splunk btool inputs list | sed 's/^\([^\[]\)/   \1/'

 

There are many other useful ways to utilize Btool such as incorporating scripts, etc. If you would like more information and would like to know more about how to utilize Btool in your environment, contact us today!

 

How to Filter Out Events at the Indexer by Date in Splunk

By: Jon Walthour | Splunk Consultant

 

A customer presented me with the following problem recently. He needed to be able to exclude events older than a specified time from being indexed in Splunk. His requirement was more precise than excluding events older than so many days ago. He was also dealing with streaming data coming in through an HTTP Event Collector (HEC). So, his data was not file-based, where an “ignoreOlderThan” setting in an inputs.conf file on a forwarder would solve his problem.

As I thought about his problem, I agreed with him—using “ignoreOlderThan” was not an option. Besides, this would work only based on the modification timestamp of a monitored file, not on the events themselves within that file. The solution to his problem needed to be more granular, more precise.

We needed a way to exclude events from being indexed into Splunk through whatever means they were arriving at the parsing layer (from a universal forwarder, via syslog or HEC) based on a precise definition of a time. This meant that it had to be more exact than a certain number of days ago (as in, for example, the “MAX_DAYS_AGO” setting in props.conf).

To meet his regulatory requirements for retention, my customer needed to be able to exclude, for example, events older than January 1 at midnight and do so with certainty.

As I set about finding (or creating) a solution, I found “INGEST_EVAL,” a setting in transforms.conf. This setting was introduced in version 7.2. It runs an eval expression at index-time on the parsing (indexing) tier in a similar (though not identical) way as a search-time eval expression works. The biggest difference with this new eval statement is that it is run in the indexing pipeline and any new fields created by it become indexed fields rather than search-time fields. These fields are stored in the rawdata journal of the index.

However, what if I could do an “if-then” type of statement in an eval that would change the value of a current field? What if I could evaluate the timestamp of the event, determine if it’s older than a given epoch date and change the queue the event was in from the indexing queue (“indexQueue”) to oblivion (“nullQueue”)?

I found some examples of this in Splunk’s documentation, but none of them worked for this specific use case. I also found that INGEST_EVAL is rather limited in which eval functions it supports. Functions like relative_time() and now() don’t work. I also found that, at the point in the ingestion pipeline where Splunk runs these INGEST_EVAL statements, fields like “_indextime” aren’t yet defined. This left me with using the older time() function. So, when you’re working with this feature in the future, be sure to test your eval expression carefully, as not all functions have been fully evaluated in the documentation yet.

 

Here’s what I came up with:

props.conf

[<sourcetype>]

TRANSFORMS-oldevents=delete-older-than-January-1-2020-midnight-GMT

transforms.conf

[delete-older-than-January-1-2020-midnight-GMT]

INGEST_EVAL = queue=if(substr(tostring(1577836800-_time),1,1)="-", "indexQueue", "nullQueue")

 

The key is in the evaluation of the first character of the subtraction in the “queue=” calculation. A negative number yields a “-” for the first character; a positive number a digit. Generally, negative numbers are “younger than” your criteria and positive numbers are “older than” it. You keep the younger events by sending them to the indexQueue (by setting “queue” equal to “indexQueue”) and you chuck older events by sending them to the nullQueue (by setting “queue” equal to “nullQueue”).
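If you adapt this transform to your own cutoff, a quick throwaway search is an easy way to sanity-check the epoch value before deploying it. Note that strftime renders the result in your Splunk user’s configured time zone, so midnight GMT may display shifted accordingly:

| makeresults
| eval cutoff_epoch=1577836800
| eval cutoff_human=strftime(cutoff_epoch, "%Y-%m-%d %H:%M:%S %z")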

Needless to say, my customer was pleased with the solution we provided. It addressed his use case “precisely.” I hope it is helpful for you, too. Happy Splunking!

Have a unique use case you would like our Splunk experts to help you with? Contact us today!