Solution-Driven CMMC Implementation – Solve First, Ask Questions Later

We’re halfway through 2020, and we’re seeing customers begin to implement and level up within the Cybersecurity Maturity Model Certification (CMMC) framework. CMMC provides a cybersecurity framework for contractors doing business with the DoD and will eventually become the singular standard for safeguarding Controlled Unclassified Information (CUI).

An answer to the limitations of NIST 800-171, CMMC requires attestation by a Certified Third-Party Assessor Organization (C3PAO). Once CMMC is in full effect, every company in the Department of Defense’s (DoD’s) supply chain, including Defense Industrial Base (DIB) contractors, will need to be certified to do business with the department.

As such, DIB contractors and members of the larger DoD supply chain find themselves asking: when should my organization start the compliance process, and what is the best path to achieving CMMC compliance?

First, it is important to start working toward compliance now. Why?

  • Contracts requiring CMMC certification are expected as early as October, and if you wait to certify until you see an eligible contract, it will be too late.
  • You can currently treat CMMC compliance as an “allowable cost.” The cost of becoming compliant (tools, remediation, preparation) can be expensed back to the DoD. The amount of funding allocated to defray these expenses and the allowable thresholds are still unclear, but the overall cost is likely to exceed initial estimates, and as with any federal program, going back for additional appropriations can be challenging.

As far as the best path to achieving CMMC goes – the more direct, the better.

Understanding Current Approaches to CMMC Compliance

CMMC is new enough that many organizations have yet to go through the compliance process. Broadly, we’ve seen a range of recommendations, most of which start with a heavy upfront lift of comprehensive analysis.

The general process is as follows:

  1. Assess current operations for compliance with CMMC, especially as it relates to its extension of NIST 800-171 standards.
  2. Document your System Security Plan (SSP) to identify what makes up the CUI environment. The plans should describe system boundaries, operation environments, the process by which security requirements are implemented, and the relationship with and/or connections to other systems.
  3. Create a logical network diagram of your network(s), including third-party services, remote access methods, and cloud instances.
  4. List an inventory of all systems, applications, and services: servers, workstations, network devices, mobile devices, databases, third-party service providers, cloud instances, major applications, and others.
  5. Document Plans of Action and Milestones (POAMs). The POAMs should spell out how system vulnerabilities will be resolved and existing deficiencies corrected.
  6. Execute POAMs to achieve full compliance through appropriate security technologies and tools.

This assessment-first approach, while functional, is not ideal.

In the traditional approach to becoming CMMC compliant, the emphasis is put on analysis and process first; the tools and technologies to satisfy those processes are secondary. By beginning with a full compliance assessment, you spend time guessing where your compliance issues and gaps are. And by deprioritizing technology selection, potentially relying on multiple tools, you risk granular processes that worsen the problem of swivel-chair compliance (having to move between multiple tools and interfaces to establish, monitor, and maintain compliance and the required underlying cybersecurity). This creates more work for your compliance and security team when you later have to architect an integrated, cohesive compliance solution.

Then, the whole process has to be redone every time a contractor’s compliance certification comes up for renewal.

Big picture, having to guess at your compliance gaps upfront can lead to analysis paralysis. By trying to analyze so many different pieces of the process and make sure they’re compliant, it is easy to become overwhelmed and feel defeated before even starting.

Even though NIST 800-171 has been in effect since January 1, 2018, compliance across the DIB has not been consistent or widespread. CMMC effectively forces the compliance mandate by closing key loopholes and caveats in NIST 800-171:

  • You can no longer self-certify.
  • You can no longer rely on applicability caveats.
  • There is no flexibility for in-process compliance.

So, if you’ve been skirting strict compliance until now, know that you can no longer do so under CMMC. And if you’re overwhelmed about where to even begin, we recommend fully diving into a tool that can be a single source of truth for your whole process: Splunk.

Leverage a Prescriptive Solution and Implementation Consultancy to Expedite CMMC Compliance

Rather than getting bogged down in analysis paralysis, accelerate your journey to CMMC compliance by implementing an automated CMMC monitoring solution like Splunk. Splunk labels itself “the Data-to-Everything Platform”: it is purpose-built to act as a big data clearinghouse for all relevant enterprise data, regardless of context. And as the leading SIEM provider, Splunk is uniquely able to provide visibility into compliance-related events, as the overlap with security-related data is comprehensive.

Generally, the process will begin with ingesting all available information across your enterprise and then implementing automated practice compliance. Through that implementation process, gaps are naturally discovered. If there is missing or unavailable data, processes can then be defined as “gap fillers” to ensure compliance.

The automated practice controls are then leveraged as Standard Operating Procedures (SOPs) that are repurposed into applicable System Security Plans (SSPs), Plans of Action and Milestones (POAMs), and business plans. In many cases, much of the specific content for these documents can be generated from the dashboards that we deliver as a part of our CMMC solution.

The benefits realized by a solution-driven approach, rather than an analysis-driven one, are many:

  1. Starting with a capable solution reduces the overall time to compliance.
  2. Gaps are difficult to anticipate, as they are often not discovered until the source data is examined (e.g., one cannot presume that data includes a user, an IP address, or a MAC address until the data is exposed). A solution-first approach cuts assumption-driven analysis short.
  3. Automated practice dashboards and the collection of underlying metadata (e.g., authorized ports, machines, users, etc.) can be harvested for document generation.
  4. Having a consolidated solution for overall compliance tracking across all security appliances and technologies provides guidance and visibility to C3PAOs, quelling natural audit curiosity creep and shortening the attestation cycle.

Not only does this process get you past the analysis paralysis barrier, but it reduces non-compliance risk and the effort needed for attestation. It also helps keep you compliant – and out of auditors’ crosshairs.

Let Splunk and TekStream Get You Compliant in Weeks, Not Months

Beyond the guides and assessments consulting firms are offering for CMMC, TekStream has a practical, proven, and effective solution to get you compliant in under 30 days.

By working with TekStream and Splunk, you’ll get:

  • Installation and configuration of Splunk, the CMMC App, and Premium Apps
  • Pre/post CMMC assessment consulting work to ensure you meet or exceed your CMMC level requirements
  • Optional MSP/MSSP/compliance monitoring services to take away the burden of data management, security, and compliance monitoring
  • Ongoing monitoring for each practice on an automated basis, summarized in a central auditing dashboard
  • Comprehensive TekStream ownership of your Splunk instance, including implementation, licensing, support, outsourcing (compliance, security, and admin), and resource staffing

If you’re already a Splunk user, this opportunity is a no-brainer. If you’re new to Splunk, this is the best way to procure best-in-class security, full compliance, and an operational intelligence platform, especially when you consider the financial benefit of allowable costs.

If you’d like to talk to someone from our team, fill out the form below.

CMMC Maturity – Understanding What is Needed to Level Up

At its core, the Cybersecurity Maturity Model Certification (CMMC) is designed to protect mission-critical government systems and data and has the primary objective of protecting the government’s Controlled Unclassified Information (CUI) from cyber risk.

CMMC goes beyond NIST 800-171 to require strict adherence to a complex set of standards, an attestation, and a certification by a third-party assessor.

The CMMC framework defines five maturity (or “trust”) levels. As you likely know, the certification level your organization needs to reach is largely situational, depending on the kinds of contracts you currently have and will seek out in the future.

The CMMC compliance process is still so new that many organizations are just prioritizing what baseline level they need to reach. For most, that’s level 3. With that said, there is certainly value to gain from an incremental approach to leveling up.

Why Seek CMMC Level 4 or 5 Compliance, Anyway?

First, let’s define our terms and understand the meaning behind the jump from Level 3 up to 4 or 5. CMMC trust levels 3-5 are defined as:

Level 3: Managed

  • 130 practices (including all 110 from NIST 800-171 Rev. 1)
  • Meant to protect CUI in the environments that store and transmit it
  • All contractors must establish, maintain, and resource a plan that includes their identified domain

Level 4: Reviewed

  • An additional 26 practices
  • Proactive, focusing on the protection of CUI from Advanced Persistent Threats (APTs); it encompasses a subset of the enhanced security requirements from Draft NIST SP 800-171B, as well as other cybersecurity best practices. In Splunk terms, that means a shift from monitoring and maintaining compliance to proactively responding to threats, which puts an emphasis on SOAR tools such as Splunk Phantom to automate security threat response in specific practice categories.
  • All contractors should review and measure their identified domain activities for effectiveness

Level 5: Optimizing

  • An additional 15 practices
  • An advanced and proactive approach to protecting CUI from APTs
  • Requires a contractor to standardize and optimize process implementation across their organization. In Splunk terms, this means expanding to more sophisticated threat identification algorithms, including tools such as User Behavior Analytics.

The benefits of taking an incremental approach and making the jump up to Level 4 (and potentially 5 later) are three-fold:

  1. It can make your bids more appealing. Even if the contracts that you are seeking only require Level 3 compliance, having the added security level is an enticing differentiator in a competitive bidding market.
  2. You can open your organization up to new contracts and opportunities that require a higher level of certification and are often worth a lot more money.
  3. It puts in place the tools and techniques to automatically respond to security-related events. This shortens response times to threats, shortens triage, increases accuracy and visibility, automates tasks that would typically be done manually by expensive security resources, and makes you safer.

Plus, with “allowable costs” in the mix, defraying the spend on compliance back to the DoD gives you an added financial benefit as well.

How Do You Move Up to the Higher CMMC Trust Levels?

Our recommendation is to start small and at a manageable level. Seek the compliance level that matches your current contract needs. As was highlighted earlier, for most, that is Level 3.

To have reached Level 3, you are already using either a single technology solution (like Splunk) or a combination of tools.

Getting to Level 4 and adhering to the additional 26 practices is an incremental process of layering another tool, technique, or technology on top of all your previous work. It’s additive.

For TekStream clients, that translates to adding Splunk Phantom to your Splunk Core and Enterprise Security solution. It’s not a massive or insurmountable task, and it is a great way to defray costs associated with manual security tasks and differentiate your organization from your fellow DIB contractors.

TekStream Can Help You Reach the Right Certification Level for You

Ready to start your compliance process? Ready to reach Level 3, Level 4, or even Level 5? Acting now positions you to meet DoD needs immediately and opens the door for early opportunities. See how TekStream has teamed up with Splunk to bring you a prescriptive solution and implementation consultancy.

If you’d like to talk to someone from our team, fill out the form below.

CMMC Response – Managing Security & Compliance Alerts & Response for Maturity Levels 4 and 5

The Cybersecurity Maturity Model Certification (CMMC) is here to stay. The new compliance model brings increased complexity compared to NIST 800-171, and organizations have to be prepared not only to navigate the new process but also to reach the level that makes the most sense for them.

Level 3 (Good Cyber Hygiene, 130 Practices, NIST SP 800-171 + New Practices) is the most common compliance threshold that Defense Industrial Base (DIB) contractors are seeking out. However, there can be significant value in increasing to a Level 4 and eventually a Level 5, especially if you’re leveraging the Splunk for CMMC Solution.

Thanks to the DoD’s “allowable costs” model (where you can defray costs of becoming CMMC compliant back to the DoD), reaching Level 4 offers significant value at no expense to your organization.

Even if you’re not currently pursuing contracts that mandate Level 4 compliance, by using TekStream and Splunk’s combined CMMC solution to reach Level 4, you end up with:

  • A winning differentiator against the competition when bidding on Level 3 (and below) contracts
  • The option to bid on Level 4 contracts worth considerably more money
  • Automated security tasks with Splunk ES & Phantom
  • An excellent security posture with Splunk ES & Phantom

And all of these benefits fall under the “allowable costs” umbrella.

The case for reaching Level 4 is clear, but there are definitely complexities as you move up the maturity model. For this blog, we want to zero in on a specific complexity — the alert-and-response setup needed to be at Level 4 or 5 and how a SOAR solution like Splunk Phantom can get you there.

How Does Splunk Phantom Factor into Levels 4 and 5?

Level 4 is 26 practices above Level 3 and 15 practices below Level 5. Level 4 focuses primarily on protecting CUI and security practices that surround the detection and response capabilities of an organization. Level 5 is centered on standardizing process implementation and has additional practices to enhance the cybersecurity capabilities of the organization.

Both Level 4 and Level 5 are considered proactive, and 5 is even considered advanced/progressive.

Alert and incident response are foundational to Levels 4 and 5, and Splunk Phantom is a SOAR (Security Orchestration, Automation, and Response) tool that helps DIB contractors focus on automating the alert process and responding as necessary.

You can think about Splunk Phantom in three parts:

  1. SOC Automation: Phantom gives teams the power to execute automated actions across their security infrastructure in seconds, rather than the hours+ it would take manually. Teams can codify workflows into Phantom’s automated playbooks using the visual editor or the integrated Python development environment.
  2. Orchestration: Phantom connects existing security tools to help them work better together, unifying the defense strategy.
  3. Incident Response: Phantom’s automated detection, investigation, and response capabilities mean that teams can reduce malware dwell time, execute response actions at machine speed, and lower their overall mean time to resolve (MTTR).

The above features of Phantom allow contractors to home in on their ability to respond to incidents.
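To make the automation concrete, below is a minimal sketch of what a playbook looks like in Phantom’s integrated Python environment. The action names, asset names, and hard-coded IP are hypothetical placeholders, and a real playbook would read values from the container’s artifacts; treat this as an illustration of the on_start/callback/on_finish structure rather than a drop-in playbook.

import phantom.rules as phantom

def on_start(container):
    # run an enrichment action as soon as an event (container) arrives
    phantom.act("geolocate ip",
                parameters=[{"ip": "203.0.113.10"}],  # hypothetical value; normally pulled from artifacts
                assets=["maxmind"],                   # hypothetical asset name
                callback=block_if_needed)

def block_if_needed(action, success, container, results, handle):
    # branch on the enrichment result; contain the source at the firewall
    if success:
        phantom.act("block ip",
                    parameters=[{"ip": "203.0.113.10"}],
                    assets=["firewall"])              # hypothetical firewall asset

def on_finish(container, summary):
    # every playbook ends here; summary data feeds audit trails
    return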

By using Phantom’s workbooks, you’re able to turn playbooks into reusable templates, as well as divide and assign tasks among team members and document operations and processes. You can also build custom workbooks or use the included industry-standard workbooks. This is particularly useful for Level 5 contractors, as a focus of Level 5 is the standardization of your cybersecurity operations.

TekStream and Splunk’s CMMC Solution

With TekStream and Splunk’s CMMC Solution, our approach is to introduce as much automation as possible to the security & compliance alerts & response requirements of Levels 4 and 5.

Leveraging Splunk Phantom, we’re able to introduce important automation and workbook features to standardize processes, free up time, and make the process of handling, verifying, and testing incident responses significantly more manageable.

If you’d like to talk to someone from our team, fill out the form below.

Troubleshooting Your Splunk Environment Utilizing Btool

By: Chris Winarski | Splunk Consultant

 

Btool is a utility created and provided within the Splunk Enterprise download, and when it comes to troubleshooting your .conf files, Btool is your friend. From a technical standpoint, Btool shows you the “merged” view of the .conf files as they are written to disk at the time of execution. However, this may not reflect what Splunk is actually using at that moment: Splunk runs off the settings held in memory, and for your changes to a .conf file to be read from disk into memory, you must restart that Splunk instance or force Splunk to reload its .conf files. This blog focuses primarily on a Linux environment; if you would like more information on how to go about this in a Windows environment, feel free to inquire below! Here are some use cases for troubleshooting with Btool.

 

Btool Checks Disk, Not What Splunk Has in Memory

Let’s say you just changed an inputs.conf file on a forwarder, adding a sourcetype to the incoming data:
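For example, a stanza like the one below could be edited on the forwarder (a sketch; the monitored path and sourcetype name are hypothetical):

[monitor:///var/log/myapp/app.log]
sourcetype = myapp:log
index = main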

The next step would be to change directory to $SPLUNK_HOME/bin directory

($SPLUNK_HOME = where you installed Splunk; best practice is /opt/splunk)

Now once in the bin directory, you will be able to use the command:

 

./splunk btool inputs list

This will output every inputs.conf setting currently saved to that machine, in order of precedence, along with its attributes. This is what will be merged when Splunk restarts and written to memory, which is why the running instance needs to be restarted for our “sourcetype” change above to take effect. If we don’t restart the instance, Splunk will have no idea that we edited a .conf file and will not use the added attribute.

The above command shows us that our change was saved to disk, but in order for Splunk to utilize this attribute, we still have to restart the instance.

 

./splunk restart

Once restarted, the merged configuration that Btool shows is loaded into memory and reflects how Splunk is actually behaving at that given time.

 

Writing Btool Results to a File

The above example simply prints the results to the console, whereas the command below runs the same listing and writes all of the returned text about the inputs.conf files in your Splunk instance to a file in your “tmp” folder.

 

./splunk btool inputs list > /tmp/btool_inputs.txt

 

Where do these conf files come from?

When running the normal Btool command above, we get back ALL the settings in all inputs.conf files for the entire instance; however, we can’t tell which inputs.conf file each setting is defined in. This can be solved by adding the --debug parameter.

 

./splunk btool inputs list --debug
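With --debug, each line of output is prefixed by the .conf file that supplied the setting, so you can see exactly where a value is coming from. The output looks roughly like this (paths and stanzas are illustrative):

/opt/splunk/etc/system/default/inputs.conf    [default]
/opt/splunk/etc/system/default/inputs.conf    host = $decideOnStartup
/opt/splunk/etc/apps/myapp/local/inputs.conf  [monitor:///var/log/myapp/app.log]
/opt/splunk/etc/apps/myapp/local/inputs.conf  sourcetype = myapp:log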

 

Organizing Btool Output for Legibility

When printing out the long list of conf settings, they can seem all smashed together. Using the ‘sed’ command, we can pretty the output up a bit with some simple regex, indenting every line that doesn’t start a new stanza:

 

./splunk btool inputs list | sed 's/^\([^\[]\)/   \1/'

 

There are many other useful ways to utilize Btool, such as incorporating it into scripts. If you would like to know more about how to utilize Btool in your environment, contact us today!

 

How to Filter Out Events at the Indexer by Date in Splunk

By: Jon Walthour | Splunk Consultant

 

A customer presented me with the following problem recently. He needed to be able to exclude events older than a specified time from being indexed in Splunk. His requirement was more precise than excluding events older than so many days ago. He was also dealing with streaming data coming in through an HTTP Event Collector (HEC). So, his data was not file-based, where an “ignoreOlderThan” setting in an inputs.conf file on a forwarder would solve his problem.

As I thought about his problem, I agreed with him—using “ignoreOlderThan” was not an option. Besides, this would work only based on the modification timestamp of a monitored file, not on the events themselves within that file. The solution to his problem needed to be more granular, more precise.

We needed a way to exclude events from being indexed into Splunk through whatever means they were arriving at the parsing layer (from a universal forwarder, via syslog or HEC) based on a precise definition of a time. This meant that it had to be more exact than a certain number of days ago (as in, for example, the “MAX_DAYS_AGO” setting in props.conf).

To meet his regulatory requirements for retention, my customer needed to be able to exclude, for example, events older than January 1 at midnight and do so with certainty.

As I set about finding (or creating) a solution, I found “INGEST_EVAL,” a setting in transforms.conf. This setting was introduced in version 7.2. It runs an eval expression at index-time on the parsing (indexing) tier in a similar (though not identical) way as a search-time eval expression works. The biggest difference with this new eval statement is that it is run in the indexing pipeline and any new fields created by it become indexed fields rather than search-time fields. These fields are stored in the rawdata journal of the index.

However, what if I could do an “if-then” type of statement in an eval that would change the value of a current field? What if I could evaluate the timestamp of the event, determine if it’s older than a given epoch date and change the queue the event was in from the indexing queue (“indexQueue”) to oblivion (“nullQueue”)?

I found some examples of this in Splunk’s documentation, but none of them worked for this specific use case. I also found that “INGEST_EVAL” is rather limited in which eval functions it supports. Functions like “relative_time()” and “now()” don’t work. And at the point in the ingestion pipeline where Splunk runs these INGEST_EVAL statements, fields like “_indextime” aren’t yet defined. This left me using the older “time()” function. So, when you’re working with this feature in the future, be sure to test your eval expression carefully, as not all functions have been fully covered in the documentation yet.

 

Here’s what I came up with:

props.conf

[<sourcetype>]

TRANSFORMS-oldevents=delete-older-than-January-1-2020-midnight-GMT

transforms.conf

[delete-older-than-January-1-2020-midnight-GMT]

INGEST_EVAL = queue=if(substr(tostring(1577836800-_time),1,1)="-", "indexQueue", "nullQueue")

 

The key is in the evaluation of the first character of the subtraction in the “queue=” calculation. A negative number yields a “-” for the first character; a positive number a digit. Generally, negative numbers are “younger than” your criteria and positive numbers are “older than” it. You keep the younger events by sending them to the indexQueue (by setting “queue” equal to “indexQueue”) and you chuck older events by sending them to the nullQueue (by setting “queue” equal to “nullQueue”).
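If you adapt this to a different cutoff, you’ll need the epoch value for your chosen date. One quick way to compute it from a shell (a sketch; the flags differ between the GNU and BSD versions of date):

date -u -d "2020-01-01 00:00:00" +%s                              # GNU/Linux: prints 1577836800
date -u -j -f "%Y-%m-%d %H:%M:%S" "2020-01-01 00:00:00" +%s       # BSD/macOS equivalent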

Needless to say, my customer was pleased with the solution we provided. It addressed his use case “precisely.” I hope it is helpful for you, too. Happy Splunking!

Have a unique use case you would like our Splunk experts to help you with? Contact us today!

Effective Use of Splunk and AWS in the Time of Coronavirus

By: Bruce Johnson | Director, Enterprise Security

Firstly, be safe and be well. The TekStream family has found itself pulling together in ways that transcend remote conference calls and we hope that your respective organizations are able to do the same. We feel very privileged to be in the Splunk ecosystem as uses for Splunk technology are becoming ever more immediate.

To that end, we have seen all of our customers putting emphasis on monitoring remote access. Was any company sizing its network for virtualizing its entire ecosystem overnight? Network access points were sized for pre-determined traffic profiles leveraging pre-determined bandwidth levels for remote access. Those network appliances were configured to support predictable traffic volumes from occasionally remote workers; they weren’t designed to carry 100% of internal access traffic. Overnight, the services supporting remote users became the most critical part of your infrastructure, and operational monitoring has to reflect that.

Likewise, what you were monitoring for security just got hidden in a cloud of chaff. The changes to network traffic have opened you up to new threats that demand immediate attention.

Security Impact

There are several new areas of concern in the context of the current climate:

Your threat surface has changed

Anomalies relative to RDP sessions or escalation of privileges for remotely logged in users used to be a smaller percentage of traffic and might have figured into evaluating potential threat risk. Obviously that is no longer the case. If you’re able to segregate traffic for access to critical systems from traffic that simply needs to be routed or tunneled to other public cloud-provided applications, that would help cut down on the traffic that needs to be monitored but that will require changes to network monitoring and Splunk searches.

Your policies and processes need to be reviewed and revised

Have you published security standards for home networks for remote workers? Do you have policies relative to working in public networks? Do you have adequate personal firewalls in place or standard implementations for users wanting to implement security add-ons for their home networks or work-provided laptops?

Some employees might now be working on home networks that lack the bandwidth for video conferencing and may opt to work from shared public access points (although they might have to make do with working from the Starbucks parking lot, as internal access is prohibited). Many do not have secure wireless access points or firewalls on their home networks. Publishing links for your employees on how to implement additional Wi-Fi security and/or products that are supported for additional security, as well as how to ensure access to critical systems through supported VPN/MFA methods, is worth doing even if you have done it before. There is also the potential expansion of access to include personal devices in addition to company-owned devices. They will need the same level of security, and you will also need to consider the privacy implications of employee-owned devices connecting to your business network.

Likewise, help desk resources in support of these efforts, as well as level-1 security analysts monitoring this type of activity, might need to be shifted or expanded.

New threats have emerged

Hackers don’t take the day off because they have to work from home, and there are several creative threats that take advantage of Coronavirus panic. Hackers are nothing if not nimble. There are several well-publicized attacks that seek to take advantage of users anxious for more information on the progress of the pandemic. The World Health Organization (WHO) and the U.S. Federal Trade Commission (FTC) have publicly warned about impersonators. Thousands of domains are getting registered every day in support of Coronavirus-related phishing attacks. Some of them even target hospitals, which takes “unethical” hacking to a brand-new low. Additionally, there are new threat lists to consider; for example, RiskIQ is publishing a list of rapidly expanding domains related to coronavirus.

Stepping up the normal Splunk monitoring for those domains, moving up plans to augment email filtering, setting up a mailbox that Splunk ingests for reported attacks that can be easily forwarded from end-users that suspect a phishing email, or augmenting your Phantom SOAR implementation to highlight automated response to specific phishing attacks are all appropriate in that context.
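As one illustration of stepping up that monitoring, if you load a threat list like RiskIQ’s into a lookup, a scheduled search can flag any web traffic that touches a listed domain. A sketch, assuming a hypothetical index, sourcetype, and lookup (coronavirus_domains, with a domain field):

index=proxy sourcetype=web:proxy
| lookup coronavirus_domains domain AS dest_domain OUTPUT domain AS matched
| where isnotnull(matched)
| stats count BY src_ip, dest_domain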

Operational Impact

 

VPN Monitoring

If you are not currently monitoring VPN usage in Splunk, it is relatively straightforward to onboard VPN/firewall data sources and begin monitoring utilization and health from those appliances. It is useful to monitor network traffic as a whole relative to VPN bandwidth, as well as the normal CPU/memory metrics coming from those appliances directly.

If you’re already monitoring VPN traffic (and you likely are if you have Splunk), at the very least you need to alter your thresholds for what constitutes an alert or an anomaly.
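A simple baseline search is a good place to start when re-tuning those thresholds. A sketch, assuming a hypothetical index and sourcetype for your VPN appliance and user/bytes fields extracted from its logs:

index=network sourcetype=vpn
| timechart span=15m dc(user) AS concurrent_users, sum(bytes) AS total_bytes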

The following are examples of dashboards we’ve built to monitor VPN-related firewall traffic as well as CPU/memory:

In addition to straightforward monitoring of the environment, expect troubleshooting tickets to increase. Detailed metrics relative to connectivity errors might need to be monitored more closely, or events might be expanded to make troubleshooting more efficient. Below is an example of the Palo Alto Splunk dashboards that track VPN errors:

There are several out-of-the-box applications from Splunk for VPN/NGFW sources, including but not limited to:

Palo Alto: Includes firewall data that monitors bandwidth across key links. Additionally, GlobalProtect VPN monitoring can help customers with troubleshooting remote access. https://splunkbase.splunk.com/app/491/

Zscaler: Provides visibility into remote access, no matter where the users are connecting from. https://splunkbase.splunk.com/app/3866/

Cisco: Provides equivalent functionality to populate dashboards around remote access and bandwidth on key links. https://splunkbase.splunk.com/app/1620/

Fortinet: Provides the ability to ingest Fortinet FortiGate traffic. https://splunkbase.splunk.com/app/2846/

Nagios: Monitors the network for problems caused by overloaded data links or network connections; also monitors routers, switches, and more. https://splunkbase.splunk.com/app/2703/

One of the techniques to consider in response to this spike in volume is to split network traffic on your VPNs to segregate priority or sensitive traffic from traffic that you can pass through to external applications.

https://www.networkworld.com/article/3532440/coronavirus-challenges-remote-networking.html

Split tunneling can be used to route traffic, and it’s being recommended by Microsoft for O365 access. This also affects how VPN traffic and threats are monitored through established tunnels. Obviously, the traffic to internal critical infrastructure and applications would be the priority, and all externally routed traffic could be, if not ignored, at least de-prioritized.

Additionally, MFA applications fall into much the same category as monitoring of VPN sources and the same types of use cases apply to monitoring those sources. Below are a subset of relevant application links.

RSA Multifactor Authentication: https://splunkbase.splunk.com/app/2958/

Duo Multifactor Authentication: https://splunkbase.splunk.com/app/3504/

Okta Multifactor Authentication: https://splunkbase.splunk.com/app/2806/

 

If you’re familiar with Splunk, you already know that it is typically only a few hours of effort to onboard a new data source and begin leveraging it in the context of searches, dashboards, and alerts.

Engaged Workers

Some people focus best in an office environment; then there are the people who work at home with one cup of coffee that can fuel them until they are dragged away from their laptop; and then there are the people who have way too much to do around the house to bother with work. It’s nice to know whether people are actually plugged in. A whole new demand for monitoring remote productivity is fueling new solution offerings from Splunk, which has developed a set of dashboards under the banner of Remote Work Insights.

Of course, monitoring your VPN and conferencing software is just the beginning, and there are a plethora of sources that might be monitored to measure productivity. Often those sources vary by team and responsibilities. The power inherent in Splunk is that each team can be monitored individually with different measures and aggregated into composite team views at multiple levels, similar to ITSI monitoring of infrastructure layers and components. We are finding a great deal of opportunity in this area, and we expect these techniques and solutions to persist well beyond the shared immediate challenges of Coronavirus.

A related use case for VPN monitoring is to track login and logout to confirm that people are actually logging in rather than social distancing on the golf course, but this use case has been less common in practice.

Migrate VPN services to the cloud

Ultimately, when faced with dynamic scaling and provisioning problems, the cloud is your answer. If your VPN infrastructure is taxed, traffic is now completely unpredictable, and there is no way to scale up your network appliances in the short term, consider moving VPN services to cloud connectivity points. You can move network security to the cloud and consume it just like any other SaaS application. This has the advantage of being instantly scalable up and down (once normal operations resume) as well as secure, and it can scale temporarily for just the duration you need. Implementation can be done in parallel to your existing network-based VPN solutions. Virtualizing VPN in AWS is relatively straightforward, and it’s certainly something TekStream can help you accomplish in short order. There are a variety of options to consider.

AWS Marketplace has VPN appliances you can deploy immediately. This is a good approach if you are already using a commercial-grade VPN like a Cisco ASA or Palo Alto. It will have the least impact on existing users, since they can continue to use the same client and just point their connection to a new hostname or IP, but it can be a bit pricey. Some examples of commercial options from the AWS Marketplace are:

Cisco ASA: https://aws.amazon.com/marketplace/pp/B00WH2LGM0

Barracuda Firewall: https://aws.amazon.com/marketplace/pp/B077G9FKK7

Juniper Networks: https://aws.amazon.com/marketplace/pp/B01LYWCGDX

 

You can use AWS’s managed VPN service. This is a great middle-of-the-road compromise if you don’t currently have a VPN. As a managed service, AWS handles a lot of the nuts and bolts, and you can get up and connected quickly. Your users connect to the AWS VPN, which connects to your AWS VPC (which is in turn connected to your datacenter, network, on-prem resources, etc.). As a fully managed client-based VPN, it lets you manage and monitor all your connections from a single console. AWS VPN is an elastic solution that leverages the cloud to automatically scale based on user demand, without the limitations of a hardware appliance. It may also allow you to take advantage of additional AWS-provided security mechanisms, like rotating keys and credentialing, to augment your security practices.

Finally, if you need something quick and have a smaller number of users, you can deploy your own VPN software on an EC2 instance and “roll your own.” While this can be quick and dirty, it can also be error-prone, less secure, and a single point of failure, and it has to be manually managed.

Additional Services

There are a whole host of ancillary supporting services that might need to be expanded for inclusion in Splunk, such as Citrix, Webex, Skype, VoIP infrastructure, Teams, etc. Below is an example of an Australian customer monitoring video conferencing solutions with Splunk ITSI; TekStream has also built out monitoring of critical VoIP infrastructure and related it to multi-channel support mechanisms, including web and chat traffic. The point is that all of these channels might have just become critical infrastructure.

Conclusion

Many of the above recommendations can be accomplished in days or weeks. If there is an urgent need to temporarily expand your license to respond to the Coronavirus threat, that might be possible in the short term as well. Given the uncertainty around the duration of the pandemic, an all-out response seems warranted, spanning infrastructure, processes and procedures, operations, and security.

Your business can’t afford to fail. TekStream is here to help if you need us.

The Power of Splunk On-The-Go with Splunk Mobile and Splunk Cloud Gateway

By: Pete Chen | Splunk Practice Team Lead

 

Splunk can be a powerful tool in cybersecurity, infrastructure monitoring, and forensic investigations. While it’s great to use in the office, after-hours incidents require data to be available immediately. Since most people carry a mobile device, such as a cell phone or a tablet, it’s easy to see how having dashboards and alerts on a mobile device can help bridge the information gap.

Splunk Mobile brings the power of Splunk dashboards to mobile devices, powered by Splunk Cloud Gateway. While Splunk Mobile is installed on a mobile device, Splunk Cloud Gateway feeds the mobile app from Splunk Enterprise. Between the two applications is Splunk’s AWS-hosted Cloud Bridge. Traffic between Splunk Enterprise and the mobile device is protected by TLS 1.2 encryption.

Architecture from Splunk

Splunk Cloud Gateway

Software Download https://splunkbase.splunk.com/app/4250/
Documentation https://docs.splunk.com/Documentation/Gateway

Splunk Cloud Gateway is a standard app found on Splunkbase (link above). It can be installed through the user interface (UI) or by unpacking the file to <SPLUNK_HOME>/etc/apps/. When installed through the UI, Splunk will prompt for a restart once installation is complete. Otherwise, restart Splunk once the installation package has been unpacked into the apps folder.
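For the manual path, the unpack-and-restart sequence on Linux looks like this (a sketch assuming the default /opt/splunk location; your actual downloaded package name will differ):

# extract the app package into the apps directory
tar -xzf splunk-cloud-gateway.tgz -C /opt/splunk/etc/apps/
# restart so Splunk picks up the new app
/opt/splunk/bin/splunk restart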

After restart, Splunk Cloud Gateway will appear as an app on Splunk Web. Browse to the app, and these are the pages available in the app:

The first page allows for devices to be manually registered. When Splunk Mobile is opened for the first time (or on a device not registered to another Splunk Cloud Gateway instance), an activation code will appear at the center of the display. That code can be used to register the device on Splunk. The “Device Name” field can be any value, used to identify that particular device. It’s helpful to identify the main user of the device and the type of device.

Skipping over Devices until a device is registered, and putting aside Splunk > AR for another time, the next important section is the “Configure” tab. At the top of the page, all the deployment configurations are listed. The Cloud Gateway ID can be modified through a configuration file to better reflect the environment. A configuration file can be downloaded for a Mobile Device Manager (MDM). This is also where the various products associated with Splunk Connected Experiences can be enabled.

In the Application section, look for Splunk Mobile. Under the Action column, click on Enable. This must be done before a device can be registered.

The App Selection Tab is where apps can be selected, based on each user’s preference, to determine which dashboards are visible through Splunk Mobile. When no apps are selected, all available dashboards are displayed. Select the apps desired by clicking them from the left panel, and they will appear on the right panel. Be sure to click save to commit the changes.

A few things to point out in this section:

  • Again, if an app is not selected, all available dashboards to the user will appear on Splunk Mobile.
  • Management of apps is based on the user, not centrally managed. During the registration of a device, a user must log in to authenticate. The apps selected in this page will be the same for all devices registered under this user.
  • Even if apps are specified, all dashboards set with global permissions will still be visible to the user.
  • To eliminate all dashboards and control exactly what is viewable, set all dashboards to app-only permissions and create a generic app without dashboards. When that app is selected, and after all dashboards are converted to app-only permissions, no dashboards will appear. (See the sketch after this list for how these permissions look on disk.)
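For reference, those permissions live in each app’s metadata files. A sketch of what app-only scope looks like in $SPLUNK_HOME/etc/apps/<app>/metadata/local.meta (the dashboard name mydashboard is hypothetical):

[views/mydashboard]
# export = none keeps the dashboard scoped to this app only
export = none
# a globally visible dashboard would instead carry: export = system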

The final tab is the dashboard for Splunk Cloud Gateway. This dashboard shows the status of the app, and provides metrics of usage. The top three panels may be the most important when first installing Cloud Gateway. If the service doesn’t seem to be working correctly, these three panels will help in troubleshooting the service.

 

Splunk Mobile

Google Play Store https://play.google.com/store/apps/details?id=com.splunk.android.alerts
Apple App Store https://apps.apple.com/us/app/splunk-mobile/id1420299852

Installing Splunk Mobile on a mobile device is as simple as going to the app store and having the device set up the app. Once the app is ready, launching it will bring up a registration page. On this page, there is a code needed to register the device with Splunk Cloud Gateway. Below it is a secondary code, used to verify with Cloud Gateway that the device is registered with the correct encryption key.

With the code above, return to Splunk Cloud Gateway and register the device. Type in the activation code from Splunk Mobile, enter a device name as explained above, and click “Register” to continue.

Validate the confirmation code displayed in the UI with the code displayed on the device. If the codes don’t match, stop the registration process. If the codes do match, enter credentials for Splunk, and click “Continue”.

At this point, the device is registered with Splunk Cloud Gateway. Validate the device name in the Registered Devices page, and make sure the Device Type and Owner match the device and user. If necessary, “Remove” is available to remove a device from Cloud Gateway.

From a mobile perspective, the initial page displayed is the list of potential alerts.

At the bottom of the screen, tap on “Dashboards” to see the list of dashboards available to the mobile device. Without any additional configuration, all available Splunk dashboards should appear in the list. Click on any dashboard.

As an example, when the Cloud Gateway Status Dashboard is selected, the dashboard opens and allows for a time-selector at the top of the page. The panels available from the UI are displayed in a single column on the mobile device.

Points to Consider

Now that Splunk Mobile and Splunk Cloud Gateway are configured, and ready to be used, here are some points to consider in an Enterprise deployment.

  • When installing on a search head cluster, Splunk Cloud Gateway must be installed on the cluster captain. The captain runs some of the scripts necessary to connect Cloud Gateway to the Spacebridge.
  • All dashboards set with global permissions will appear. To limit visibility, set dashboard permissions to app-only or private.
  • During device registration, the credentials used will determine the dashboards and alerts available to the device. Configuration is user-based, not centrally controlled.
  • Trellis is not a supported feature of Splunk Mobile. Dashboards with panels using trellis will need to be reconfigured.
  • Panel sizing and scaling is not adjustable at this time. Some dashboard re-design may be necessary to tell the best story.
  • Pay special attention to how long dashboards take to load. From a mobile perspective, dashboards will need to load faster for the mobile user.

Want to learn more about Splunk Mobile and Splunk Cloud Gateway? Contact us today!

Connecting Splunk to Lightweight Directory Access Protocol (LDAP)

By: Pete Chen | Splunk Team Lead

Overview

Splunk installation is complete. Forwarders are sending data to the indexers, and search heads are successfully searching the indexers. The next major step is to add central authentication to Splunk. Simply put: you log into your computer, your email, and your corporate assets with a username and password, so add Splunk to the list of tools available with those credentials. This also saves the time and hassle of creating user profiles for everyone who needs access to Splunk. Before embarking on this step, it’s important to develop a strategy for permissions and rights that answers the question, “Who has access to what information?”

LDAP Basics

LDAP stands for Lightweight Directory Access Protocol. The most popular LDAP implementation used by businesses is Microsoft’s Active Directory. The first step in working with LDAP is to determine the “base DN,” the name of the domain. Let’s use the domain “splunkrocks.local” as an example. In LDAP terms, it is expressed as dc=splunkrocks,dc=local. Inside the DN are the organizational units (OUs), so the users OU would be expressed as ou=users,dc=splunkrocks,dc=local.

Most technical services require some sort of authentication to access the information they provide. The credentials (username and password) needed to access the LDAP server are called the “bind DN.” When a connection is requested, the AD server will require a user with enough permissions to allow user and group information to be shared. In most business environments, the group managing Splunk will not be the same group managing the LDAP server, so it’s best to ask an LDAP administrator to type in the credentials during the setup process. The LDAP password is masked while it’s being typed and is stored encrypted in the configuration file.
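Before touching Splunk, it can save a round of troubleshooting to verify the bind credentials and base DN from the command line. A sketch using OpenLDAP’s ldapsearch (the host and bind user shown are hypothetical):

ldapsearch -x -H ldap://dc01.splunkrocks.local \
  -D "cn=svc-splunk,ou=users,dc=splunkrocks,dc=local" -W \
  -b "dc=splunkrocks,dc=local" "(sAMAccountName=austin.carson)"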

Keep in mind that connecting Splunk to the LDAP server doesn’t complete the task. It’s necessary to map LDAP groups to Splunk roles afterward.

Terms

LDAP: Lightweight Directory Access Protocol

AD: Active Directory

DN: Distinguished Name

DC: Domain Component

CN: Common Name

OU: Organizational Unit

SN: Surname

Sample Directory Structure

Using our sample domain, Splunkrocks Local Domain (splunkrocks.local), let’s assume an organizational unit for Splunk is created called “splunk”. Inside this OU, there are two sub-organizational units: one for users and one for groups. In Splunk terms, these are users and roles.

 

Group | User | Account Name
User | Austin Carson | austin.carson
User | Kim Gordon | kim.gordon
User | James Lawrence | james.lawrence
User | Wendy Moore | wendy.moore
User | Brad Hect | brad.hect
User | Tom Chu | tom.chu
Power User | Bruce Lin | bruce.lin
Power User | Catherine Lowe | catherine.lowe
Power User | Jeff Marlow | jeff.marlow
Power User | Heather Bradford | heather.bradford
Power User | Ben Baker | ben.baker
Admin | Bill Chang | bill.chang
Admin | Charles Smith | charles.smith
Admin | Candice Owens | candice.owens
Admin | Jennifer Cohen | jennifer.cohen

Connecting Splunk to LDAP

From the main menu, go to Settings, and select Access Control.

Select Authentication Method

Select LDAP under External Authentication, then click on “Configure Splunk to use LDAP.”

 

In the LDAP Strategies page, there should not be any entries listed yet. At the top right corner of the page, click on New LDAP to add the Splunkrocks AD server as an LDAP source, and give the new LDAP connection a name.

The first section to configure is the LDAP Connection Settings. This section defines the LDAP server, the connection port, whether the connection is secure, and a user with permission to bind the Splunk server to the LDAP server.

The second section determines how Splunk finds the users within the AD server.

  • User base DN: Provide the path where Splunk can find the users on the AD server.

  • User base filter: This can help reduce the number of users brought back into Splunk.

  • User name attribute: The attribute within the AD server that contains the username. In most AD servers, this is “sAMAccountName”.

  • Real name attribute: The human-readable name, where “Ben Baker” is displayed instead of “ben.baker”. In most AD servers, this is “cn”, or Common Name.

  • Email attribute: The attribute in AD that contains the user’s email.

  • Group mapping attribute: If the LDAP server uses a group identifier for the users, this will be needed. It’s not required if distinguished names are used in the LDAP groups.

The third section determines how Splunk finds the groups within the AD server.

  • Group base DN: Provide the path where Splunk can find the groups on the AD server.

  • Static group search filter: A search filter to retrieve static groups.

  • Group name attribute: The attribute within the AD server that contains the group names. In most AD servers, this is simply “cn”, or Common Name.

  • Static member attribute: The group attribute that contains the group’s members. This is usually “member”.

The rest can be left blank for now. Click Save to continue. If all the settings are entered properly, the connection will be successful. A restart of Splunk will be necessary to enable the newly configured authentication method. Remember, adding LDAP authentication is only the first part of the process. To complete the setup, it’s also necessary to map Splunk roles to LDAP groups. Using the access-and-rights strategy mentioned above, create the necessary Splunk roles and LDAP groups, map the roles to the groups, and assign the necessary group or groups to each user. Developing this strategy and customizing roles is something we can help you do, based on your needs and best practices.
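For reference, the choices made in the UI land in authentication.conf. A minimal sketch for the splunkrocks.local example (the strategy name, host, bind user, and group names are hypothetical, and the roleMap stanza shows the role-to-group mapping described above):

[splunkrocks_ldap]
host = dc01.splunkrocks.local
port = 389
SSLEnabled = 0
bindDN = cn=svc-splunk,ou=users,dc=splunkrocks,dc=local
bindDNpassword = <entered in the UI; stored encrypted>
userBaseDN = ou=users,ou=splunk,dc=splunkrocks,dc=local
userNameAttribute = sAMAccountName
realNameAttribute = cn
emailAttribute = mail
groupBaseDN = ou=groups,ou=splunk,dc=splunkrocks,dc=local
groupNameAttribute = cn
groupMemberAttribute = member

[authentication]
authType = LDAP
authSettings = splunkrocks_ldap

[roleMap_splunkrocks_ldap]
admin = Admin
power = Power User
user = User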

Want to learn more about connecting Splunk to LDAP? Contact us today!

You Can Stop Data Breaches Before They Start

You would think that, given the ruinous financial and reputational consequences of data breaches, companies would take them seriously and do everything possible to prevent them. But, in many cases, you would be wrong.

The global cost of cybercrime is expected to exceed $2 trillion in 2019, according to Juniper Research’s The Future of Cybercrime & Security: Financial and Corporate Threats & Mitigation report. This is a four-fold increase when compared to the estimated cost of cybercrime just four years ago, in 2015.

While the average cost of a data breach is in the millions and malicious attacks are on the rise, 73 percent of businesses aren’t ready to respond to a cyber attack, according to the 2018 Hiscox Cyber Readiness Report. The study of more than 4,000 organizations across the US, UK, Germany, Spain and the Netherlands found that most organizations are unprepared and would be seriously impacted by an attack.

Why are organizations unprepared to deal successfully with such breaches? One potential issue is the toll working in cybersecurity takes on both CISOs and IT security professionals. One report indicates that two-thirds of those professionals are burned out and thinking about quitting their jobs. This is bad news when some 3 million cybersecurity jobs already are going unfilled, leaving companies vulnerable to data breaches.

In the executive suite, CISOs recently surveyed by ESG and the Information Systems Security Association (ISSA) said their reasons for leaving an organization after a brief tenure (18 to 24 months) include corporate cultures that don’t always emphasize cybersecurity and budgets that aren’t adequate for an organization’s size or industry.

We’d add one other factor: companies are often afraid to try new technology that can solve the problem.

Given the ongoing nature and potential negative impact of data breaches, all those factors need to change. Why put an organization, employees and clients under stress and at risk when there are solutions to not just managing, but eliminating data breaches?

Our clients have had particular success in identifying and stopping data breaches by using Splunk on AWS, which together offer a secure cloud-based platform and powerful event monitoring software. We are big believers in the combination, and we think that CISOs who are serious about security should be investigating their use. AWS dominates the cloud market and Splunk has spent six years as a Leader in the Gartner Security Information and Event Management (SIEM) Magic Quadrant, so we aren’t the only ones who are confident in their abilities.

Other technologies that monitor and identify potential issues do exist. The point is: learn the lessons offered by the disastrous data breaches of recent years and build a system that’s meant to prevent them. Yes, that might mean hiring skilled and experienced people and spending money to do it right, including a major technology overhaul if you haven’t already moved to the cloud.

But it’s a safe bet that hackers will continue to hack, and every organization that handles data is at risk. Building a technology foundation today that guards against potential issues tomorrow (or sooner) is the smart way for you to avoid becoming a news headline yourself.

Ready to Protect Your Company? As the only Splunk Premier MSP and Elite Professional Services partner in North America, TekStream is uniquely positioned to ensure your Splunk security solution is implemented successfully and your SOC is managed properly. Learn More.