Connecting Splunk to Lightweight Directory Access Protocol (LDAP)

By: Pete Chen | Splunk Team Lead

Overview

Splunk installation is complete. Forwarders are sending data to the indexers, and search heads are successfully searching those indexers. The next major step is to add central authentication to Splunk. Simply put, you already log into your computer, your email, and your corporate assets with a single username and password; adding central authentication puts Splunk on the list of tools available with those same credentials. It also saves the time and hassle of creating user profiles for everyone who needs access to Splunk. Before embarking on this step, it’s important to develop a strategy for permissions and rights. This should answer the question, “Who has access to what information?”

LDAP Basics

LDAP stands for Lightweight Directory Access Protocol. The most widely used LDAP directory in business environments is Microsoft’s Active Directory. The first step in working with LDAP is to determine the “base DN,” which is the distinguished name of the domain. Using the domain “splunkrocks.local” as an example, the base DN is expressed in LDAP terms as dc=splunkrocks,dc=local. Within the base DN sit the organizational units (OUs), so a users OU would be expressed as ou=users,dc=splunkrocks,dc=local.

Most technical services require some sort of authentication to access the information they provide. The account Splunk uses to access the LDAP server is supplied as the “bindDN,” along with its password. When a connection is requested, the AD server requires a user with sufficient permissions to allow user and group information to be shared. In most business environments, the group managing Splunk is not the same group managing the LDAP server, so it’s best to ask an LDAP administrator to type in the credentials during the setup process. The LDAP password is masked while it’s being typed and is hashed in the configuration file.

Keep in mind that connecting Splunk to the LDAP server doesn’t complete the task. It’s necessary to map LDAP groups to Splunk roles afterward.

Terms

LDAP: Lightweight Directory Access Protocol

AD: Active Directory

DN: Distinguished Name

DC: Domain Component

CN: Common Name

OU: Organizational Unit

SN: Surname

Sample Directory Structure

Using our sample domain, Splunkrocks Local Domain (splunkrocks.local), let’s assume an organizational unit called “splunk” is created for Splunk. Inside this OU are two sub-organizational units: one for users and one for groups. In Splunk terms, these correspond to users and roles.
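In DN form, those two sub-OUs would be expressed as ou=users,ou=splunk,dc=splunkrocks,dc=local and ou=groups,ou=splunk,dc=splunkrocks,dc=local.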

 

Group      | User              | User SN
User       | Austin Carson     | austin.carson
User       | Kim Gordon        | kim.gordon
User       | James Lawrence    | james.lawrence
User       | Wendy Moore       | wendy.moore
User       | Brad Hect         | brad.hect
User       | Tom Chu           | tom.chu
Power User | Bruce Lin         | bruce.lin
Power User | Catherine Lowe    | catherine.lowe
Power User | Jeff Marlow       | jeff.marlow
Power User | Heather Bradford  | heather.bradford
Power User | Ben Baker         | ben.baker
Admin      | Bill Chang        | bill.chang
Admin      | Charles Smith     | charles.smith
Admin      | Candice Owens     | candice.owens
Admin      | Jennifer Cohen    | jennifer.cohen

Connecting Splunk to LDAP

From the main menu, go to Settings, and select Access Control.

Select Authentication Method

Select LDAP under External Authentication, then click Configure Splunk to use LDAP.

 

In the LDAP Strategies page, there should not be any entries listed. At the top right corner of the page, click on New LDAP to add the Splunkrocks AD server as an LDAP source. Give the new LDAP connection a name.

The first section to configure is the LDAP Connection Settings. This section defines the LDAP server, the connection port, whether the connection is secure, and a user with permission to bind the Splunk server to the LDAP server.

The second section determines how Splunk finds the users within the AD server.

–        User base DN: Provide the path where Splunk can find the users on the AD server.

–        User base filter: This can help reduce the number of users brought back into Splunk.

–        User name attribute: This is the attribute within the AD Server which contains the username. In most AD servers, this is “sAMAccountName”.

–        Real name attribute: This is the human-readable name. This is where “Ben Baker” is displayed instead of “ben.baker”. In most AD servers, this is the “cn”, or Common Name.

–        Email attribute: this is the attribute in AD which contains the user’s email.

–        Group mapping attribute: If the LDAP server uses a group identifier for the users, this will be needed. It’s not required if distinguished names are used in the LDAP groups.

The third section determines how Splunk finds the groups within the AD server.

–        Group base DN: Provide the path where Splunk can find the groups on the AD server.

–        Static group search filter: The search filter used to retrieve static groups.

–        Group name attribute: This is the attribute within the AD server which contains the group names. In most AD servers, this is simply “cn”, or Common Name.

–        Static member attribute: The group attribute that contains the group’s members. This is usually “member”.

The rest can be left blank for now. Click Save to continue. If all the settings are entered properly, the connection will be successful. A restart of Splunk will be necessary to enable the newly configured authentication method.  Remember, adding LDAP authentication is the first part of the process. To complete the setup, it’s also necessary to map Splunk roles to LDAP groups. Using the access and rights strategy mentioned above, create the necessary Splunk roles and LDAP groups. Then map the roles to the groups, and assign the necessary group or groups to each user. Developing this strategy and customizing roles is something we can help you do, based on your needs and best practices.
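For admins who prefer configuration files over the UI, the same strategy can be expressed in authentication.conf on the search head. Below is a minimal sketch using the splunkrocks.local example; the strategy name, host, bind account, and group names are hypothetical and should be replaced with your own values (the bind password is best entered through the UI so it is stored hashed):

[authentication]
authType = LDAP
authSettings = splunkrocks_ldap

[splunkrocks_ldap]
host = dc01.splunkrocks.local
port = 636
SSLEnabled = 1
bindDN = cn=splunk_bind,ou=users,ou=splunk,dc=splunkrocks,dc=local
userBaseDN = ou=users,ou=splunk,dc=splunkrocks,dc=local
userNameAttribute = sAMAccountName
realNameAttribute = cn
emailAttribute = mail
groupBaseDN = ou=groups,ou=splunk,dc=splunkrocks,dc=local
groupNameAttribute = cn
groupMemberAttribute = member

[roleMap_splunkrocks_ldap]
admin = Admin
power = Power User
user = Users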

Want to learn more about connecting Splunk to LDAP? Contact us today!

You Can Stop Data Breaches Before They Start

You would think that, given the ruinous financial and reputational consequences of data breaches, companies would take them seriously and do everything possible to prevent them. But, in many cases, you would be wrong.

The global cost of cybercrime is expected to exceed $2 trillion in 2019, according to Juniper Research’s The Future of Cybercrime & Security: Financial and Corporate Threats & Mitigation report. This is a four-fold increase when compared to the estimated cost of cybercrime just four years ago, in 2015.

While the average cost of a data breach is in the millions and malicious attacks are on the rise, 73 percent of businesses aren’t ready to respond to a cyber attack, according to the 2018 Hiscox Cyber Readiness Report. The study of more than 4,000 organizations across the US, UK, Germany, Spain and the Netherlands found that most organizations are unprepared and would be seriously impacted by an attack.

Why are organizations unprepared to deal successfully with such breaches? One potential issue is the toll working in cybersecurity takes on both CISOs and IT security professionals. One report indicates that two-thirds of those professionals are burned out and thinking about quitting their jobs. This is bad news when some 3 million cybersecurity jobs already are going unfilled, leaving companies vulnerable to data breaches.

In the executive suite, CISOs recently surveyed by ESG and the Information Systems Security Association (ISSA) said their reasons for leaving an organization after a brief tenure (18 to 24 months) include corporate cultures that don’t always emphasize cybersecurity and budgets that aren’t adequate for an organization’s size or industry.

We’d add one other factor: companies are often afraid to try new technology that can solve the problem.

Given the ongoing nature and potential negative impact of data breaches, all those factors need to change. Why put an organization, employees and clients under stress and at risk when there are solutions to not just managing, but eliminating data breaches?

Our clients have had particular success in identifying and stopping data breaches by using Splunk on AWS, which together offer a secure cloud-based platform and powerful event monitoring software. We are big believers in the combination, and we think that CISOs who are serious about security should be investigating their use. AWS dominates the cloud market and Splunk has spent six years as a Leader in the Gartner Security Information and Event Management (SIEM) Magic Quadrant, so we aren’t the only ones who are confident in their abilities.

Other technologies that monitor and identify potential issues do exist. The point is: learn the lessons offered by the disastrous data breaches of recent years and build a system that’s meant to prevent them. Yes, that might mean hiring skilled and experienced people and spending money to do it right, including a major technology overhaul if you haven’t already moved to the cloud.

But it’s a safe bet that hackers will continue to hack, and every organization that handles data is at risk. Building a technology foundation today that guards against potential issues tomorrow (or sooner) is the smart way for you to avoid becoming a news headline yourself.

Ready to Protect Your Company? As the only Splunk Premier MSP and Elite Professional Services partner in North America, TekStream is uniquely positioned to ensure your Splunk security solution is implemented successfully and your SOC is managed properly. Learn More.

Integrating Splunk Phantom with Splunk Enterprise

By: Joe Wohar | Splunk Consultant

 

There are multiple apps that can be used to integrate Phantom with Splunk; each exists for a different reason, and some of their functionality overlaps. The intent of this post is to provide a guide to which one to leverage based on the environment you are working in and the use cases driving your requirements.

 

Application                      | Install Target | Usage
Splunk App for Phantom           | Phantom        | Pull event data from Splunk, push event data to Splunk, add Splunk actions to Phantom playbooks
Phantom App for Splunk           | Splunk         | Push event data to Phantom
Phantom Remote Search            | Splunk         | Push Phantom data to Splunk
Splunk App for Phantom Reporting | Splunk         | Report on Phantom data
Splunk Add-on for Phantom        | Splunk         | Monitor Phantom as a service in Splunk ITSI

 

Splunk App for Phantom

The Splunk App for Phantom is a Phantom app used to connect Phantom to Splunk. Phantom apps built by Splunk are installed in Phantom by default, so no installation is required; however, you’ll need to configure an asset for it. In the asset settings, you’ll need the IP/hostname of your Splunk instance as well as a Splunk user with sufficient access to the data you wish to search. The Splunk App for Phantom can do the following: post data to Splunk as events, update notable events, run SPL queries, and pull events from Splunk to Phantom.

  • To pull events from Splunk to Phantom, you’ll need to configure the asset settings and ingest settings in your configured asset. It is recommended that you create a new label in Phantom for the events you pull in from Splunk, which will make it easier to find the events in the Analyst Queue in Phantom.
  • There are four included actions which can be used in playbooks:
    • get host events – retrieves events about a specific host from Splunk
    • post data – creates an event in your Splunk instance
    • run query – runs an SPL query in Splunk and returns the results of the search to Phantom
    • update event – updates specified notable events within your Splunk Enterprise Security instance

For specific details on using these actions, search for “splunk” on the Apps page in Phantom and click the Documentation link:

 

Phantom App for Splunk https://splunkbase.splunk.com/app/3411/

The Phantom App for Splunk is a Splunkbase app that is installed in Splunk and connects Splunk to Phantom. The main function of this app is to send data from Splunk to Phantom. First, you’ll need to go through the Phantom Server Configuration page to connect Splunk to Phantom, which will require an automation user in Phantom. Then, to send events to Phantom, you’ll need to create a saved search in Splunk where the results of the search are the events you want ingested into Phantom. Open the Phantom App for Splunk and create a New Saved Search Export to start sending events over. There is also an option to create a Data Model Export, which follows the same set of steps used for exporting saved search results to Phantom:

This app also contains alert actions that can be used in Splunk Enterprise Security:

  • Send to Phantom – sends the event(s) that triggered the alert to Phantom
  • Run Playbook in Phantom – sends the event(s) that triggered the alert to Phantom and runs the specified playbook on them

For more information about the Phantom App for Splunk, review the following documents:

https://docs.splunk.com/Documentation/PhantomApp

https://my.phantom.us/4.6/docs/admin/splunk

 

Phantom Remote Search https://splunkbase.splunk.com/app/4153/

The Phantom Remote Search app is used for multiple reasons. Phantom has an embedded Splunk Enterprise instance built into it; however, you can configure Phantom to use an external Splunk Enterprise instance instead via this app. To do this, you’ll need to install the Phantom Remote Search app onto your Splunk instance, which contains the Splunk roles needed to create the two Splunk users required by Phantom. You’ll also need to set up an HTTP Event Collector (HEC) input for receiving Phantom data. After installing the app, creating the necessary users, and creating the HEC input, you can go over to Phantom and change the “Search Settings” in the “Administration Settings”:

Click the following link for a more detailed list of instructions:

https://my.phantom.us/4.6/docs/admin/administration#SearchSettings

This app is also very useful because once you have completed the setup, Phantom will start sending data about itself over to Splunk. This allows you to shift your Phantom reporting out of Phantom and into Splunk. If your Phantom instance is brand new with no events and no active playbooks, configure an asset or create a playbook to test whether or not Phantom data is being sent to Splunk.
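As a quick check from the Splunk side, you can confirm that the Phantom indexes defined by the Remote Search app are receiving data. This is a sketch; index names beginning with phantom are the app’s defaults and may differ in your environment:

index=phantom* | stats count by index, sourcetype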

 

Splunk App for Phantom Reporting https://splunkbase.splunk.com/app/4399/

If you have already installed the Phantom Remote Search app onto your Splunk instance and configured your Search Settings in Phantom to use an external Splunk instance, you can install the Splunk App for Phantom Reporting onto your Splunk instance to gain insights into Phantom automation and containers:

Splunk Add-on for Phantom https://splunkbase.splunk.com/app/4726/

The Splunk Add-on for Phantom is designed for use with Splunk ITSI to monitor your Phantom instance. ITSI is not a prerequisite; the add-on can also be used with Splunk Enterprise, but it publishes metrics in a manner consistent with ITSI health metrics. It also expects the Phantom Remote Search add-on to be installed. The Phantom Remote Search add-on defines the indexes and roles used by Phantom when Phantom is configured to use an external Splunk instance for search data, and it is required in order to use the Content Pack for Monitoring Phantom as a Service. If you do want to use Splunk ITSI to monitor Phantom, you can follow the documentation for that here:

https://docs.splunk.com/Documentation/ITSICP/current/Config/AboutPhantom

For more information about Phantom, register at https://my.phantom.us/ which will give you access to knowledge articles, documentation, playbooks, and the OVA for Phantom so you can try it out yourself!

Need more help? Contact us today!

Why Splunk on AWS?

AWS is the world’s most comprehensive and widely adopted cloud platform, offering over 165 services from data centers all over the globe. AWS allows you to build sophisticated applications with increased flexibility, scalability, and reliability. The platform serves everyone from government agencies and Fortune 1000 companies to small businesses and entrepreneurial startups.

Should your business consider using AWS? Changing databases to AWS is easy with the AWS database migration service, or by using AWS managed services or an AWS consulting agency.  Here’s why AWS is amongst the leading cloud computing services.

Reason #1: It’s Flexible 

Anyone can sign up for AWS and use the services without advanced programming language skills. AWS prioritizes consumer-centered design thinking, allowing users to select their preferred operating system, programming language, database, and other vital preferences. AWS also provides comprehensive developer resources and informative tools that help maintain its ease of use and keep it up to date.

Whether your team has the time to learn AWS or has access to AWS consulting, training users is simple. AWS offers their services with a no-commitment approach. Many software solutions use this as a way to market monthly subscriptions, but AWS services are charged on an hourly basis. As soon as you terminate a server, billing won’t include the next hour.   

With AWS, you can spin-up a new server within a matter of minutes compared to the hours or days it takes to procure a new traditional server. Plus, there’s no need to buy separate licenses for the new server. 

Reason #2: It’s Cost-Effective 

With AWS, you pay based on your actual usage. AWS’s cloud solution makes paying for what you use the standard for database storage, content delivery, compute power, and other services. No more fixed server cost or on-prem monitoring fees. Your cost structure scales as your business scales, providing your company with an affordable option that correlates with its current needs. This results in lower capital expenditure and faster time to value without sacrificing application performance or user experience. Amazon also has a strong focus on reducing infrastructure cost for buyers. 

Reason #3: It’s Secure 

Cloud security is the highest priority at AWS. Global banks, military, and other highly-sensitive organizations rely on AWS, which is backed by a deep set of cloud security tools. Maintaining high standards of security without managing your own facility allows companies to focus on scaling their business, with the help of:  

  • AWS multiple availability zones (which consist of one or more data centers around the globe with redundant power, networking, and connectivity) that aid your business in remaining resilient to most failure modes, like natural disasters or system failures. 
  • Configured, built-in firewall rules that allow you to transition from completely public to private or somewhere in between to control access according to circumstance.

Multiple Migration Options

Depending on your unique business and tech stack needs, AWS offers companies multiple options for realizing its host of benefits. For Splunk, those options include:

  1. Migrate your Splunk On-Prem directly to AWS
  2. Migrate your Splunk On-Prem to Splunk Cloud (which sits on AWS)

Migrating to the cloud can be a business challenge, but AWS makes it simpler. While on the journey towards stronger digital security and efficiency, AWS can save time and resources. With its flexibility, cost-effectiveness, and security, you can easily deploy a number of software-based processes to an inclusive cloud-based solution.

Implementing Right-Click Integrations in Splunk

By: Eric Howell | Splunk Consultant

 

Splunk provides Admins the opportunity to build a huge variety of customizations, visualizations, apps, and a near-infinitude of different options to finely tune your Splunk experience and have it best suit your needs. There is a use case and a solution for almost any task you can throw at Splunk. A use case that has been brought up frequently is implementing a “Right-Click Integration” to allow information from Splunk to be passed to another tool. Originally, this seemed a daunting task that might require custom JavaScript or a one-off Python script. Ultimately, the solution was much simpler, and the logic is already built into Splunk – Workflow Actions!

Workflow actions allow Splunk to perform a variety of tasks against available web resources for information found in ingested field/value pairs. They are easy to set up either through the UI or by deploying a workflow_actions.conf file, whichever best fits the size of your Splunk environment, and they offer quite a bit of additional functionality. Utilizing this functionality, your “Right-Click” integration may require an extra click, but it will be fully functional and serviceable.

Leveraging Workflow Actions to Pass Values and Fields

As discussed above, to set up a workflow action, an admin can leverage the WebUI on one of the Search Heads in an environment or deploy the workflow_actions.conf file as appropriate. This will allow a user to expand an event and pass a value from a field to an external source. In the examples below, we will use VirusTotal as an example.

WebUI Setup

For this example, we will be posting the value of a field to VirusTotal, allowing us to search their IOC database and verify whether the value we are passing is a known bad entity.

  1. Navigate to the Settings Dropdown in the Splunk navigator (In the top right by default) and select Fields under Knowledge

2. Next, you will select “Add new” in the “Workflow actions” category

3. This next step defines what the workflow action will do. Begin by creating a Label for the action, which is what users will see when they access it. Note that you can leverage tokens in this descriptor to input values dynamically, based on what is clicked. In this case, it will show the specific value being presented to VirusTotal.

4. Then indicate, in a comma-separated list, the fields this workflow action can apply to. Leaving this blank will make the action appear in all field menus.

 

5. If needed, the next step is to select which eventtypes this workflow action applies to. For additional reading on eventtypes, see Splunk’s documentation.

a. Eventtypes are categorizations of data input by the user or an admin. An example would be when a specific IP address is found in your web data (sourcetype=access_combined src_ip=127.0.0.1), and you want to apply specific categorization to data that meets that criteria so that it appears with a new field/value pair (e.g. eventtype=local_access)

6. Next, choose whether the action is available in the Event Menu, the Fields Menu, or both. This selection is a dropdown. Additional reading is available in Splunk’s documentation.

a. Event Menu – Allows you to apply this workflow action to the whole event. When an event returned by your search is expanded, the “Event Actions” dropdown will hold any workflow action with “Event Menu” selected here.

b. Fields Menu – Similar to the above, but on an expanded event the workflow action is accessed from the dropdown under “Action” in the list of included fields, allowing you to apply the workflow action directly to the value of a specific field.

7. Next is to choose the Action type – meaning whether the workflow action acts as a “link” to another web location or whether it will run another “search”

8. The next step is configuring the properties based on which of the above Action Types was chosen

a. Link Configuration – Allows you to specify a URI (also accepts tokens), decide whether to open the link in a new window, select whether you intend to utilize a POST or GET method, and which arguments to pass to the web site.

b. Search Configuration – Allows you to specify a search string (token-friendly), dictate which app to run the search in, determine a name of a view (page) for the search to open in, whether to run the search in the same window, and input a timeframe.

9. Next click Save

 

Your workflow action is ready to test. Here is an example of what you can see when looking in an event view with an expanded event.

Deploying a workflow_actions.conf

Like most other configuration files, workflow_actions.conf is easily deployed via the method of your choosing (manually, scripted, or through the use of the Deployer). Below is what will be generated by Splunk in $SPLUNK_HOME/etc/system/local when created through the web UI:

 

[VirusTotalPost]
display_location = both
label = Search VirusTotal for $@field_value$
link.method = post
link.uri = https://www.virustotal.com/en/search/
link.target = blank
link.postargs.1.key = query
link.postargs.1.value = $@field_value$
type = link

 

This same information is deployable in an app of your choosing to $SPLUNK_HOME/etc/apps/ as appropriate for environments with clustered Search Heads or through other means of configuration management.
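For comparison, a search-type action lives in the same file. The stanza below is a hypothetical sketch (the field name src_ip and the search string are examples only) that re-runs a search scoped to the clicked value:

[SearchThisIP]
display_location = both
fields = src_ip
label = Search for other events from $src_ip$
type = search
search.search_string = index=* src_ip=$src_ip$
search.app = search
search.view = search
search.target = blank
search.earliest = -24h
search.latest = now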

There are a wide variety of possible options with the stanza, and I recommend reviewing Splunk’s documentation regarding the specific .conf file here:

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Workflow_actionsconf

Leveraging your workflow_actions with older versions of Splunk Enterprise Security

If you are utilizing a version of the Splunk Enterprise Security app prior to 5.3, even if your workflow actions have global permissions, the app will not import configurations from other apps unless they follow a strict naming convention. If you want your workflow action to be accessible from the Incident Review dashboard, for example, you will need to make sure your app follows one of these naming conventions:

  • DA-ESS-*
  • SA-*
  • TA-*
  • Splunk_SA_*
  • Splunk_TA_*
  • Splunk_DA-ESS_*

 

Want to learn more about right-click integrations in Splunk? Contact us today!

 

Deep Freeze Your Splunk Data in AWS

By: Zubair Rauf | Splunk Consultant

 

In today’s day and age, storage has become a commodity, but even now, reliable high-speed storage comes at a substantial cost. For on-premise Splunk deployments, Splunk recommends RAID 0 or 1+0 disks capable of at least 1200 IOPS, and this requirement increases in high-volume environments. Similarly, in bring-your-own-license cloud deployments, customers prefer to use SSD storage with at least 1200 IOPS.

Procuring these disks and maintaining them can carry a hefty recurring price tag. Aged data that no longer needs to be accessed on a daily basis, but must be retained because of corporate governance policies or regulatory requirements, can significantly increase storage costs if it is kept on these high-performance disks.

This data can securely be moved to Amazon Web Services (AWS) S3, S3 Glacier, or other inexpensive storage options of the Admin’s choosing.

In this blog post, we will dive into a script that we have developed at TekStream which can move buckets from Indexer Clusters to AWS S3 seamlessly, without duplication. It will only move one good copy of the bucket and ignore any duplicates (replicated buckets).

During the process of setting up indexes, Splunk Admins can decide and set data retention on a per-index basis with the ‘frozenTimePeriodInSecs’ setting in indexes.conf. This allows admins to be flexible with retention levels based on the type of data. Once the data ages out, Splunk deletes it or moves it to frozen storage.

Splunk achieves this by referring to the coldToFrozenScript setting in indexes.conf. If a coldToFrozenScript is defined, Splunk will run that script; once it successfully executes without problems, Splunk will go ahead and delete the aged bucket from the indexer.

The dependencies for this script include the following:

–   Python 2.7 – Installed with Splunk

–   AWS CLI tools – with credentials already working.

–   AWS Account, Access Key and Secret Key

–   AWS S3 Bucket

Testing AWS Connectivity

After you have installed AWS CLI and set it up with the Secret Key and Access Key for your account, test connectivity to S3 by using the following command:
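For example, a simple bucket listing is enough to confirm connectivity (this assumes the CLI is installed at the path shown in the note below):

/usr/bin/aws s3 ls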

Note: Please ensure that the AWS CLI commands are installed under /usr/bin/aws and that the AWS account you are using has read and write access to S3 artifacts.

If the AWS CLI commands are set up correctly, this should return a list of all the S3 buckets in your account.

I have created a bucket titled “splunk-to-s3-frozen-demo”.

Populate the Script with Bucket Name

Once the S3 bucket is ready, you can copy the script to your $SPLUNK_HOME/bin folder. After copying the script, edit it and change the name of the S3 bucket to the one where you wish to freeze your data.

Splunk Index Settings

After you have made the necessary edits to the script, it is time to update the settings on your index in indexes.conf.

Depending on where your index is defined, we need to set the indexes.conf accordingly. On my demo instance, the index is defined in the following location:

In indexes.conf, my index settings are defined as follows:

 

Note: These settings are only for a test index that will roll any data to frozen (or delete it, if a coldToFrozenScript is not present) after 600 seconds.
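For reference, a stanza along these lines produces the behavior described in the note; the index name and script name are hypothetical placeholders:

[frozen_demo]
homePath = $SPLUNK_DB/frozen_demo/db
coldPath = $SPLUNK_DB/frozen_demo/colddb
thawedPath = $SPLUNK_DB/frozen_demo/thaweddb
frozenTimePeriodInSecs = 600
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenS3.py"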

Once you have your settings complete in indexes.conf, please restart your Splunk instance. Splunk will read the new settings at restart.

After the restart, I can see my index on the Settings > Indexes page.

Once the index is set up, I use the Add Data Wizard to add some sample data to my index. Ideally, this data should roll over to warm, and the bucket should then be moved to my AWS S3 bucket by the script after 10 minutes.

The remote path on S3 will be set up in the following order:

If you are running this on an indexer cluster, the script will not copy duplicate buckets. It will only copy the first copy of a bucket and ignore the rest. This helps manage storage costs and does not keep multiple copies of the same buckets in S3.

Finally, once the script runs successfully, I can see my frozen Splunk bucket in AWS S3.

Note: This demo test was done on Splunk Enterprise 8.0 using the native Python 2.7.1 that ships with Splunk Enterprise. If you wish to use any other distribution of Python, you will have to modify the script to be compatible.

If there is an error and the bucket does not transfer to S3, or it is not deleted from the source folder, then you can troubleshoot it with the following search:
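A reasonable starting point is splunkd’s internal logging for the bucket-freezing component; this is a sketch that assumes the default _internal index is searchable from your role:

index=_internal sourcetype=splunkd (BucketMover OR coldToFrozen) ERROR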

This search will show you the stdout error that is thrown when the script runs into an error.

To wrap up, I would highly recommend that you implement this in a dev/sandbox environment before rolling it out to production. Doing so will ensure that it is robust for your environment and that you are comfortable with the setup.

To learn more about how to set up AWS CLI tools for your environment, please refer to the following link: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

If you have any questions or are interested in getting the script, contact us today!

How to Migrate Splunk On-Prem to AWS: A Sneak Peek Into Our Approach

Splunk on AWS offers a special kind of magic. Splunk makes it simple to collect, analyze and act on data of all kinds. AWS applications for Splunk allow users to ingest that data quickly, with visualization tools, dashboards, and alerts. Together, they help organizations see through the noise. When a notable event (such as a potential breach) occurs, you can find and act on it quickly, making the combo a powerful tool for risk management. 

Running Splunk in the cloud gives organizations resiliency, with all the advantages of scalability, flexibility, cost-optimization, and security. Yet migrating to the cloud poses many challenges, and implementing the new system alone can be intimidating, costly, and time-consuming if not done correctly. AWS database migration services are available to mitigate the impact of this necessary shift for your business.

Finding an experienced and expert AWS and combined Splunk managed services partner to help you navigate can ease the process. Here’s a quick look into how to handle the change from Splunk On-Prem to AWS.

How Splunk Licensing Works

Each of your Splunk Enterprise instances requires a license, which specifies how many gigabytes per day a given Splunk Enterprise instance can index and which features you have access to. Multiple types of licenses are available, and distributed deployments, consisting of multiple instances, require a few extra steps.

Choosing the correct Splunk licensing option can be confusing. It requires outlining the types of business problems you wish to solve with Splunk, then estimating how much data usage you will need to perform this work over time.

Finding a Partner with Licensing Expertise

Non-compliance with licensing can lead to overages and penalties. As your advisory partner, TekStream can work with you to ensure that your Splunk and AWS licenses are in order.

TekStream has extensive experience navigating the specifics of complex license structures and contracts. Our Splunk Enterprise consultants will leverage their years of experience to help you assess your needs, accurately estimate your data usage, and determine the optimal license types and quantities for your unique needs.

In addition to Splunk licensing for new implementations, TekStream will also help your organization save money on licensing renewals. We will examine your Splunk usage to date, pinpoint areas where you may be overpaying, and provide you with viable alternatives to reduce your costs without sacrificing efficiency.

The 4 Most Common Licensing Structures

When selecting a licensing structure for Splunk on AWS, there are 4 main options. The best option will vary depending on the organization. Through careful analysis of your current licensing structures and your desired future state, we will work with you to determine the optimal licensing structure.  

Option 1: Migrate Your Existing Perpetual or Term License to AWS

Option 2: Convert Your Current License to Splunk Cloud (which would run on AWS)

Option 3: Convert to a Term or Infrastructure License (if on a Perpetual License)

Option 4: Pay-As-You-Go as part of a 3rd-Party Hosted MSP Solution

Each option has its pros and cons depending on an organization’s goals and usage. A partner can help you select the best option. TekStream’s deep experience overseeing complex data migrations empowers us to act as true consultative partners. We have the experience needed to quickly scope challenges and present solutions for your unique situation.

Take the risk out of your Splunk migration to AWS. We are so confident in our battle-tested strategy and proven migration process that we guarantee your migration will be completed on-time and on-budget (when using TekStream’s Proven Process). We also guarantee optimal and cost-effective license and cloud subscriptions.

 

Download the Ultimate Guide

To find out more specifics about our proven process and get an in-depth look into our services, read The Ultimate Guide to Migrating Your Splunk On-Prem to Amazon Web Services. 

Optimizing Splunk Searches

By: Yetunde Awojoodu | Splunk Consultant

 

It may be interesting to learn that not all searches need to be optimized. A search should be optimized if it will be run often or queries large amounts of data. With the Search Processing Language (SPL), you can obtain the same set of results using different combinations of commands; however, when constructing your searches, it is important to consider the impact on memory and network resources in your environment.

How to Determine Whether Your Search is Unoptimized

  • It runs for a long period of time
  • It retrieves larger amounts of data than needed from the indexes

(View the amount of data retrieved from the indexer in the Job Inspector)

  • You tend to hit the disk search quota while running the search
  • It results in a slow or sluggish system

Creating Optimized Searches

To optimize the speed at which your search runs, it is important that you minimize the amount of processing time required by each component of the search. Your search can be slow because of the complexity of your query to retrieve events from the index. Below are some useful guidelines:

  1. Choose an Appropriate Time Frame

Set the time picker to search within the exact time window your results should be found. This will limit the number of buckets that will need to be searched in the specified index. For instance, if you specify a search for the past 24 hours, only buckets in the specified index with data for the last 24 hours will be searched. “All time” searches are discouraged and “Real-time” searches should almost never be used due to resource consumption.

  2. Use an Efficient Search Mode

Splunk has three search modes: Fast, Smart, and Verbose. Change your search mode depending on what you need to see. Select verbose mode sparingly, using it only when needed. Since it returns all of the fields and event data it possibly can, it takes the longest to run. See more on Splunk search modes here.

  3. Retrieve Only What Is Needed

How you construct your search has a significant impact on the number of events retrieved from disk. Be restrictive and specific when retrieving events from the index. If you need only a portion of the data, limit your search early, before any data manipulation or calculations. You can specify an index, source type, host, source, or specific words or phrases in the events. Include as many search terms as possible in your base search. Also, use the head command to limit the events retrieved when you need just a subset of the data, remove unnecessary fields with the fields command, and filter results early with where.

Example:

index=audit sourcetype=access_combined host=admin (action=failed OR action=cancelled) | stats count by user

  4. Use Efficient SPL Commands

a) Choice of Commands

As mentioned above, you can arrive at the same results using a different combination of commands but your choice will determine the efficiency of your search. Below are a few tips:

      • Joins and Lookups – Avoid using multiple joins and lookups in your search. They are very resource-intensive. Perform joins and lookups only on the required data and consider using append over join.
      • Eval – Perform evaluations on the minimum number of events possible
      • Stats – Where possible, use stats command over the table command
      • Table – Use table command only at the end of your search since it cannot run until all results are returned to the search head.
      • Dedup – The stats command is more efficient than dedup. Consider listing the fields you want to dedup in the “by” clause of the stats command. For instance,

…| stats count by user action source dest

or

…| stats latest(_time) by user action source dest

Rather than

…| dedup user action source dest

b) Order of Commands

The order in which commands are specified in a search is extremely important, since it determines where the commands are executed: on the search head or on the indexers. When part or all of a search runs on the indexers, the search processes in parallel and search performance is much faster. It is good to parallelize as much work as possible; the aim is to avoid overburdening the search head by letting the indexer(s) do some of the work.

If your commands can be arranged so that they execute on the indexers, the overall search will execute more quickly. Move commands that bring data to the search head as late as possible in your search criteria.

Streaming and Non-Streaming Commands

Understanding streaming and non-streaming commands is important in discussing the order of commands in a search. Non-streaming commands include transforming commands such as stats, timechart, top, rare, dedup, sort and append which operate on the entire result set of event data and are always executed on the search head regardless of order. To optimize your searches, place non-streaming commands as late as possible in your search string.

There are two types of streaming commands – Distributable and Centralized Streaming Commands. Distributable Streaming Commands such as eval, fields, rename, replace and regex operate on each event returned by a search regardless of the event order. They can be executed on the indexer. However, if any of the preceding search commands is executed on the search head, the distributable command will be executed on the search head as well. When possible, allow distributable streaming commands to precede non-streaming commands.

Similar to the distributable streaming commands, centralized streaming commands operate on each event returned by a search but event order is important and commands execute only on the search head.

To inspect your search, take a look at the Job Inspector and Search Job Properties. There are two search pieces: remoteSearch and reportSearch that show where parts of your search string are executed. RemoteSearch is the part of the search string executed on the remote nodes (indexers) while the reportSearch is the part executed on the search head.

Let’s look at a simple example to illustrate this:

index=security user=admin failed

| timechart count span=1h

| stats avg(count) as average

In this example, the base search retrieves events from the index and the search head executes the timechart and stats commands which are both transforming commands and outputs the results.

A second example:

index=network sourcetype=cisco_wsa_squid usage="violation"

| stats count AS connections by username usage

| rename username as violator

| search connections >=10

In the above example, stats command is a transforming command so it will be executed on the search head. “Rename” is a distributable streaming command which could execute on the indexer but because it occurs after a transforming command, it will also execute on the search head.

For better performance, reorder the commands as follows so that “rename” precedes “stats” and is therefore executed on the indexer.

index=network sourcetype=cisco_wsa_squid usage=violation

| rename username as violator

| stats count AS connections by violator usage

| search connections >=10

  5. Check the Job Inspector

Finally, check the job inspector tool to examine the overall stats of your search including where Splunk spent its time. Use the tool to troubleshoot search performance and understand the impact of knowledge objects (lookups, tags) on processing.

Reference Links

https://docs.splunk.com/Documentation/Splunk/latest/Search/Writebettersearches

https://docs.splunk.com/Documentation/Splunk/latest/Search/Aboutoptimization

https://docs.splunk.com/Documentation/Splunk/latest/Search/Quicktipsforoptimization

https://docs.splunk.com/Documentation/Splunk/latest/Search/Changethesearchmode

 

Want to learn more about optimizing your Splunk searches? Contact us today!

Splunk Alerts: Using Tokens to Prioritize Email Notifications

By: Brandon Mesa | Splunk Consultant

 

Background

Splunk Enterprise enables alerting through a variety of native and external alert actions. You can enhance your Splunk environment by creating alerts that add custom functionalities. Common alert actions include logging events, outputting results to a lookup, running a script, sending data to telemetry endpoints, sending emails, and many more. Additional alert actions can be found at www.splunkbase.com.

Use case

A client wants to monitor their Splunk environment and be notified when their servers are not performing as expected. The client wants to monitor various key performance indicators, including CPU, disk, and memory. While the client wants to monitor the performance of all their infrastructure, the main priority is ensuring production servers have as little downtime as possible. For this reason, the client would like to be alerted through email notifications when their servers are not performing as expected. Search results that return a production server performing below expectations should send an email notification with the “Priority” set to “highest,” which delivers the email to recipients with the high-importance icon. This enables end users to easily identify email notifications that should be prioritized.

To monitor server health, a Splunk alert should be created and configured. Returned results that meet the alert’s defined threshold should send email notifications to target recipients. The email alert should dynamically set the email priority based on the returned search results.

For example, if 3 servers are returned when the scheduled search runs, then the server environment should be dynamically evaluated by Splunk. If a server from the production environment is in the returned list, then Splunk will send the email notification marked as important; if not, the email notification will be sent without any high-priority settings configured.
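One way to support this is to have the search itself compute a priority value per result, which the notification can then reference as a token. Below is a quick sketch in SPL; the index, lookup, and field names are hypothetical:

index=perf sourcetype=server_metrics cpu_pct>90
| lookup server_inventory host OUTPUT environment
| eval Priority=if(environment="production", "highest", "normal")
| table host environment cpu_pct Priority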

Requirements

The “Send email” alert action, an out-of-the-box functionality that comes with Splunk Enterprise, partially meets the use case needs. Using the “Send email” alert action enables dynamic email notifications through the use of tokens. When an alert runs and search results are returned, the “Send email” action evaluates the returned results to define the values for the tokenized fields set at configuration time.

Another way to configure email notifications is by using the SPL command “sendemail”. Various command arguments can be used with the “sendemail” command to configure email notifications. Furthermore, like the “Send email” alert action, the “sendemail” SPL command enables users to dynamically define various email configuration settings with the use of tokens.

Tokens

By default, Splunk enables the use of tokens for select email notification settings. Various configuration fields can be enclosed within the “$” symbol. By enclosing a field name with “$” symbols ($<token>$), field values can be dynamically defined based on various criteria, including returned results. Tokens can be set through the alert action GUI or via SPL using the “sendemail” command (see the sketch after the list below). Splunk allows the following fields to be tokenized for email notifications:

  • To
  • Cc
  • Bcc
  • Subject
  • Message
  • Footer
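For instance, the same tokens can be written directly into a saved alert’s email settings. This is a minimal savedsearches.conf sketch; the alert name and the To/Subject/Message field names are hypothetical and assume the search returns fields with those names:

[Server Health Alert]
action.email = 1
action.email.to = $result.To$
action.email.subject = Server alert: $result.Subject$
action.email.message.alert = $result.Message$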

More information on tokens, email notifications, and setting up alert actions can be found below:

https://docs.splunk.com/Documentation/Splunk/latest/Alert/Setupalertactions

https://docs.splunk.com/Documentation/Splunk/latest/Alert/EmailNotificationTokens

https://docs.splunk.com/Documentation/Splunk/latest/Alert/Emailnotification

Analysis

The goal is to notify users when servers meet or exceed a defined threshold. By using the “Send email” alert action, or the “sendemail” SPL command, we can alert specific recipients when servers are not performing up to expectations. We’ll also integrate the use of tokens to configure our email notification and define various email settings on the fly. The goal is to dynamically define our target recipients, as well as the email subject, message, and priority based on the returned results. To define these field values based on returned results we can use the tokens below for our configuration settings:

  • $result.To$
  • $result.Cc$
  • $result.Bcc$
  • $result.Subject$
  • $result.Message$

Using the token values above to configure our email alert will dynamically define the value for the target recipients as well as email subject and message. However, if we take a closer look at the configuration screen below, we notice that the “Priority” field is a drop-down that requires the selection of a static setting for the overall configuration of the alert being created:

 

 

Solution

By default, the “Send email” alert action requires the email priority to be set by selecting an option from the drop-down menu. Passing tokens to the “Priority” field is not an out-of-box feature with Splunk, whether the email notification is configured through SPL or via the GUI as an alert action. The goal of this blog is to walk you through how to optimize your Splunk environment to enable dynamic email “Priority” configuration with the use of tokens.


Let’s consider the out-of-box functionalities in a Splunk environment:

  • “Send email” Alert action which enables triggered alerts to send email notifications
  • SPL command (“sendemail”) that can send email notifications based on returned SPL results
  • Tokens can be applied to various fields including To, Cc, Bcc, Subject, Message, and Footer to generate dynamic field values based on results returned.

We know the “Priority” field can’t be tokenized and therefore cannot be dynamically defined. This limits our email notifications to have a static email priority configuration setting. If the email priority setting is set to normal, and a result of high importance is returned, we’re at a loss. Email notifications will not be configured to send as important in this specific case. Now, let’s consider the functionalities needed to meet the defined requirements:

  • Enable dynamic email prioritization based on our search results
  • Pass tokens to our “Priority” field, as shown below:

 

We can enhance our Splunk environment to include this functionality by creating a new alert action and SPL command that mimic the functionality of the native “Send email” alert action and “sendemail” SPL command. Configurations for alert actions can be found in the alert_actions.conf file, while search command configurations are located in the commands.conf file.

Let’s take a look at our Splunk environment; perhaps we can identify files related to the current “Send email” alert action or “sendemail” command. Follow the steps below in your Splunk Enterprise instance, via the CLI:

  1. $ cd $SPLUNK_HOME/etc/apps/search/default
  2. $ cat commands.conf – you should see a “sendemail” stanza

Looking at the “sendemail” stanza in the default commands.conf, we see that “sendemail.py” is executed to enable the “sendemail” command functionality. The goal is to clone sendemail.py, then analyze and modify the copy to enable tokenization of the priority field.

To mimic the native “sendemail” command and “Send email” alert action, complete the following steps:

  1. Create a custom app with the necessary permission settings. More information on creating a custom app:

https://dev.splunk.com/enterprise/docs/developapps/createapps/createsplunkapp/

  2. Create new commands.conf and alert_actions.conf files that replicate the configuration settings from the native “Send email” alert action and “sendemail” command functionalities.

You can do this by copying the stanza settings from the default .conf files and adjusting the settings as needed. Remember, our goal is to create a new SPL command (“sendemail”) and alert action (“Send email”) which mimics the native Splunk functionalities, yet enables the tokenization for the “Priority” field.

Modify the configuration settings to create a new command and alert action as needed. For example, you can name your new command “custom_sendemail” to differentiate it from the native Splunk SPL “sendemail” command.

  3. Place your commands.conf and alert_actions.conf in the default directory of your custom app.
  4. Place your modified copy of the “sendemail.py” script into the bin directory of your custom application. Remember, this script should reflect source code changes that enable the tokenization of the priority field.
  5. In addition to the commands.conf, alert_actions.conf, and sendemail.py files, ensure your custom app has the appropriate *.html files so that your custom alert action is available in the user interface. You can use the “Developer Tools” feature in the Google Chrome browser to copy and modify the HTML source code for the “Send email” alert action.

See below:

At this point, your new custom app should include the following directories along with the following files:

  • custom_app
    • appserver
      • static
        • <app icon>.png
    • bin
      • sendemail2.py
    • default
      • app.conf
      • commands.conf
      • alert_actions.conf
      • data
        • ui
          • alerts
            • email_priority.html
    • metadata
      • default.meta

 

The custom commands.conf and sendemail2.py files will create a new SPL command that enables the tokenization of the “Priority” field. The alert_actions.conf and email_priority.html files will create a new alert action in your Splunk environment which enables you to pass tokens to the “Priority” field. For example, rather than having a drop-down menu for the email “Priority” selection, this alert action will enable you to pass the $result.Priority$ token and further enable dynamic email notifications.
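As a rough illustration of the wiring, the new command stanza in the custom app’s commands.conf can be as small as the sketch below; the stanza and file names follow the hypothetical “sendemail2” naming used above, and the remaining keys should be copied from the default files rather than from this sketch:

# default/commands.conf in the custom app
[sendemail2]
filename = sendemail2.py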

Deployment

Your custom app should be deployed to your Splunk search heads and placed in the $SPLUNK_HOME/etc/apps/ path. If you are running a search head cluster, the custom app should be placed in the deployer’s $SPLUNK_HOME/etc/shcluster/apps directory and pushed down to the cluster members.

Once your custom app is deployed to your Splunk environment, you can configure dynamic email notifications one of two ways:

  • By creating an alert and selecting your new alert action. Pass the $result.FieldName$ token to the “Priority” field to configure email priority based on returned search results.
  • By creating and saving a SPL search that uses the new “sendemail2” command. Verify that you have included the “Priority” command argument and pass the token within the search result as follows: Priority=$result.FieldName$. This will ensure your email notification priority is dynamically configured based on returned search result values.

Want to learn more about using tokens to prioritize email notifications?  Contact us today!