The 7-Point Checklist for Integrating Splunk Observability Cloud into Your AWS Environment

So, you have decided that it is time to say goodbye to outdated legacy monitoring systems. You are tired of relying on tools that only analyze samples of data and that cannot keep up with the speed of your AWS environment (those containers do spin up and spin down quickly, after all). It is time to embrace a new solution that can provide your team with the critical insights and support needed to promptly identify, triage, and resolve behavioral abnormalities: Splunk Observability Cloud.

If your team has been contemplating an observability methodology, we encourage you to commit. The longer you wait to integrate a tool like Splunk Observability Cloud into your AWS environment, the more likely you are to miss the critical information needed to quickly identify and resolve issues, which could directly impact your team's performance and your company's bottom line.

Strong performance has a significant impact on your systems' ability to convert end users into paying customers. A recent study found that decreasing page load time from eight seconds to two can increase conversion rates by as much as 74 percent. On the other end of the spectrum, a critical application failure can cost between $500,000 and $1 million.

But before you start layering Splunk onto your AWS platform and recoding your software for observability, you need to get organized.

The 7-Point Splunk Observability Cloud Success Checklist  

At TekStream, we've had the privilege of assisting many organizations in implementing Splunk Observability Cloud. We know what it takes for a successful implementation, and our team of experienced professionals has put together a list of the seven must-haves of any successful Splunk Observability Cloud integration into AWS and accompanying on-premises systems.

1. Name Your Implementation Destination 

Our number one piece of implementation advice? Start with the end in mind.  

Think ahead to the results and insights that will have the most significant impact on your team and organization. Ask yourself questions like:  

  • What aspect of your business would benefit from observability?
  • What processes do you need to have visibility into?
  • What would the business benefit be if you had that information today?
  • What information do you need to be able to determine the health of those processes?

Observability is very much a purpose-driven methodology, similar to the DevOps methodology. If there is a specific result you are trying to achieve through observability, then you need to integrate observability in a way that aligns with those goals.  

Identify your desired end-state, then work backwards to develop your implementation plan.  

2. Understand What Must Change to Prepare Your Organization for Observability 

Look at your current system and identify what changes you’ll need to make to become observability-ready.  You undoubtedly will have to make some changes to your code to support Splunk Observability Cloud. But code is not the only thing that may need to change.  

Existing processes, response protocols, and even team mindsets are all aspects of your organization that will need to evolve to embrace observability. Lay out a plan for how you will introduce observability and earn team buy-in before you think about implementing Splunk Observability Cloud.  

3. Determine Who in Your Organization Needs to Be Involved in The Implementation 

Yes, your developers will be involved. However, successful Splunk Observability Cloud implementation goes beyond any individual developer. Everyone involved in supporting the business process should be part of the transformation, including site reliability engineers (SREs), DevOps engineers, leadership, and more.  

Create a list of these individuals and match them to the specific implementation tasks needed for a successful integration. Be sure to include executive sponsorship and leadership support as well as who will be managing the project. Use this list to identify any gaps or overlaps in responsibilities.

4. Identify Any Third-Party Systems That Need to Be Considered

Your first-party systems are not the only technologies that may need to be updated. If your organization uses any third-party tools, you will want to ensure those systems are also integrated into your observability platform.

Start with an audit of your third-party systems. Be sure to consider the limitations and supporting framework of each platform. Is it possible to integrate the current third-party system with Splunk Observability Cloud? Is it necessary?  

Once you complete your assessment, affirm that your timeframe and roadmap align with your findings. You may need to account for additional time, support, or resources.   

Additional Consideration: To accurately assess the ease of integration of your third-party tools, you may need to ask your vendors for additional access to the system. Check with each third-party platform to see if you have an opportunity to peek under the hood and gain insight into their system.  

5. Put Together a Clear Implementation Timeframe 

Do you have a specific date by which your observability platform must be operational?

Of course, nearly every organization will say, “as soon as possible.” However, we believe that a successful timeframe considers the scope of the implementation lift as well as the resources your organization can allocate.  

Align your timeframe directly to your roadmap by including sub-goals, milestones, deliverables, and other accountability metrics. Not only will this help you understand if your ideal timeline is too aggressive for the scope of the endeavor, but it also will help your team determine if additional resources are needed to complete the project within the desired timeframe.  

6. Clarify a Specific Approach to Your Implementation 

How are you planning on rolling out Splunk Observability Cloud? Are you only adding observability to new code? Are you rolling out one application or process at a time? Are you recoding all technologies before implementing them across your entire AWS environment?  

Some of these implementation options may be more practical and useful than others. Take the time to investigate the feasibility and bottom-line impact of each approach. Do not get distracted recoding systems that will not help your organization reach its performance monitoring goals.  

7. Choose an Implementation Partner

If the above sounds daunting, know that you do not have to go it alone. The right partner will guide your team through each of these points, lending their proven experience and process to better ensure a successful Splunk Observability Cloud implementation.  

At TekStream, we work with our clients to form a complete understanding of their observability goals, as well as the systems and processes that will need to be updated to achieve the desired outcome.

From there, we will develop a timeline and implementation roadmap that takes you from where you are today to where you want to be with observability. Along the way, we will provide our strategic recommendations and insights across several project aspects that are imperative to a successful AWS integration.

Get Started Today 

Ready to abandon your legacy monitoring tools in favor of a system that can keep up with the ephemeral nature of AWS? We can help. TekStream has proven experience assisting companies with their adoption of observability. Our team of dedicated experts stands ready to offer our support. Together, we will craft an implementation strategy that aligns directly with the needs of your team. Reach out to us today to get started. 

Interested in learning more about observability and how Splunk Observability Cloud can help you monitor and improve your AWS platform? Download our latest eBook:

Unlock Observability: 3 Ways Splunk Observability Cloud Works with AWS to Improve Your Monitoring Capabilities

According to the numbers, there are over 1,000,000 active AWS customers. In fact, there is a good chance that, like Netflix, Facebook, and LinkedIn, you, too, are using Amazon Web Services to support all or a portion of your cloud-based platforms and systems. Cloud technologies like AWS provide a host of benefits including scalability, cost-efficiencies, and reliability. But the very nature of cloud processing also introduces new layers of complexity. One critical added complexity is in monitoring cloud systems to identify and resolve issues. Traditional alert monitoring tools were not designed to address the ephemeral nature of cloud processing.  

Fortunately, Splunk has brought a full observability suite to market that integrates seamlessly with AWS’s portfolio of services to provide AWS users and their DevOps teams with the tools they need to improve the performance of their cloud-based systems. Below, we have laid out a brief primer on observability and paired that overview with three ways that the Splunk Observability Cloud works with AWS to streamline your monitoring.  

Introduction to the Splunk Observability Cloud 

While there is no shortage of observability tools on the market, Splunk's acquisition of SignalFx in 2019, its subsequent additions to the platform, and its existing AWS integrations make it a powerful choice for organizations that use AWS as well as other leading cloud solutions like Microsoft Azure and Google Cloud Platform.

Splunk offers a fully integrated set of observability products designed to bring all metric, trace, and log telemetry into a single source of truth. Additionally, you can seamlessly merge this data with other Splunk Enterprise data, such as security, IT, and DevOps data, for the most comprehensive and integrated view of your environment.

The Splunk Observability Cloud comprises several monitoring and observability products, including:

  • Splunk Infrastructure Monitoring: AI-driven infrastructure monitoring for hybrid or multi-cloud environments.
  • Splunk APM: NoSample™ full-fidelity application performance monitoring and AI-driven directed troubleshooting.
  • Splunk On-Call: Incident response and collaboration.
  • Splunk RUM (coming soon): Works with Splunk APM to provide end-to-end full-fidelity visibility by providing metrics about the actual user experience as seen from the browser.
  • Splunk Log Observer: Built specifically for SREs, DevOps engineers, and developers who need a logging experience that empowers their troubleshooting and debugging processes.

Three Benefits of Integrating Splunk Observability Cloud with AWS 

For organizations already using AWS, Splunk works seamlessly with Amazon to provide DevOps teams with out-of-the-box visibility across their complete AWS environment.  With Splunk Observability Cloud, all data is shown within a single system, making it easy for your team to identify issues across any of the AWS tools you utilize.  

As data passes from your AWS services into your Splunk environment, it is analyzed in real time across the full Splunk Observability Cloud. The result is comprehensive reporting and monitoring that allows you to identify and respond to issues the moment they occur – regardless of your platform’s size. 

A Venn-diagram style graphic displaying the features of AWS and Splunk.

While there are several efficiencies and benefits to be gained by layering Splunk Observability Cloud onto your AWS environment, here are three that stick out to our team:  

1. Global Monitoring of Amazon Container Services 

Splunk's Infrastructure Monitoring tool (part of the Observability Cloud) is built specifically to monitor the ephemeral and dynamic nature of container environments. Through this tool, customers gain key insights into the performance characteristics of Amazon ECS, Amazon EKS, and their containerized applications.

Out-of-the box dashboards and reporting provide teams with the information they need to capture immediate value from the platform.  

2. Real-Time Full-Fidelity Tracing

Are you tired of having to sample data or work with limiting data ingestion caps? Splunk Observability Cloud includes two powerful tools that, together with AWS, provide teams with end-to-end full-fidelity tracing. 

First, Splunk APM utilizes OpenTelemetry-enabled instrumentation to ingest all trace data. No more sampling. Splunk APM captures, analyzes and stores 100% of available trace data. Once captured, Splunk Real User Monitoring (RUM) can tie that trace data to specific user actions within your AWS environment.  

These systems work in tandem to provide your team with rich visibility into the bugs and bottlenecks that could harm your user experience.  

3. Automated Incident Response 

Not only does Splunk Observability Cloud provide real-time visibility across your complete cloud stack, but it also can reduce your team's mean time to recovery (MTTR) through automated responses. Through the platform, DevOps teams can set automated remediations that fire without waiting on human intervention.

Built-in artificial intelligence and machine learning capabilities further improve the efficiency and reduce the latency of automated responses.

Enhance Your AWS Platform with Splunk Observability Cloud 

If your legacy monitoring systems cannot keep up with the complexities and intricacies of AWS, it’s time to make a shift in your team’s mindset towards observability. By making the structural changes necessary to facilitate observability and embracing robust tools like Splunk Observability Cloud, your team will gain the capacity to improve the performance of your AWS environment.  

Interested in learning more about observability and how Splunk Observability Cloud can help you monitor and improve your AWS platform? Download our latest eBook:

TekStream Recognized in 2021 Splunk Global and Regional Partner Awards

TekStream Named 2021 Global Services Partner of the Year and AMER Professional Services Partner of the Year for Outstanding Performance

 

TekStream today announced it has received the 2021 Global Services Partner of the Year and 2021 AMER Professional Services Partner of the Year awards for exceptional performance and commitment to Splunk’s Partner+ Program. The 2021 Global Services Partner of the Year Award recognizes a partner with excellence in post-sale and professional services implementations. This partner demonstrates a strong commitment to technical excellence, certifications, and customer satisfaction. The 2021 AMER Professional Services Partner of the Year Award recognizes an AMER Splunk partner that is actively engaged in services implementations, in addition to having a strong commitment to training and certification of their organization. For more information on Splunk’s Partner+ Program, visit the Splunk website.

“We are delighted to have won the 2021 Global Services Partner of the Year and 2021 AMER Professional Services Partner of the Year awards. It is a fantastic achievement to be awarded and even more satisfying to contribute to the success of Splunk and our customers. Our team is very excited to be recognized for its efforts and expertise and will wear this prized recognition proudly,” said Matthew Clemmons, Managing Director at TekStream.

“Congratulations to TekStream for being named the 2021 Splunk Global Services Partner of the Year and 2021 AMER Professional Services Partner of the Year,” said Bill Hustad, VP, Global GTM Partners, Splunk. “The 2021 Splunk Global Partner Awards highlight partners like TekStream that deliver successful business outcomes, as well as help our joint customers leverage Splunk’s Data-to-Everything Platform to drive value and unlock insights. Additionally, TekStream shares our commitment of prioritizing customer success.”

The Splunk Partner Awards recognize partners of the Splunk ecosystem for industry-leading business practices and dedication to constant collaboration. All award recipients were selected by a group of Splunk executives, thought leaders, and the global partner organization.

“We are very honored to have been selected by Splunk for not just one, but two Partner of the Year awards. TekStream prides itself on doing what is right for the customer above all else, and our commitment to that mantra drives everything that we do. We value our partnership and look forward to helping Splunk grow the ecosystem on its way to $5B,” said Karl Cepull, Senior Director, Operational Intelligence at TekStream.

About TekStream

TekStream accelerates clients’ digital transformation by navigating complex technology environments with a combination of technical expertise and staffing solutions. We guide clients’ decisions, quickly implement the right technologies with the right people, and keep them running for sustainable growth. Our battle-tested processes and methodology help companies with legacy systems get to the cloud faster, so they can be agile, reduce costs, and improve operational efficiencies. And with 100s of deployments under our belt, we can guarantee on-time and on-budget project delivery. That’s why 97% of clients are repeat customers. For more information visit https://www.tekstream.com/.

JSON Structured Data & the SEDCMD in Splunk

By: Khristian Pena | Splunk Consultant

 

 

Have you ever worked with structured data that does not quite follow its own structure? Maybe your JSON data has a syslog header. Maybe your field values have an extra quote, colon, or semicolon and your application team cannot remediate the issue. Today, we're going to discuss a powerful tool for reformatting your data so that automatic key-value fields are extracted at search time. These field extractions rely on KV_MODE in props.conf, which automatically extracts fields for structured data formats like JSON and CSV and from table-formatted events.

Props.conf

[<spec>]

KV_MODE = [none|auto|auto_escaped|multi|json|xml]

This article will focus on the JSON structure and walk through some ways to validate, remediate, and ingest this data using the SEDCMD. You may have used the SEDCMD to anonymize or mask sensitive data (PHI, PCI, etc.), but today we will use it to replace and append to existing strings.

 

JSON Structure

JSON supports two data structures that are widely used among programming languages.

  • A collection of name/value pairs. Different programming languages support this structure under different names, such as object, record, struct, dictionary, hash table, keyed list, or associative array.
  • An ordered list of values. In various programming languages, it is called an array, vector, list, or sequence.

Syntax:

An object starts with an open curly bracket { and ends with a closed curly bracket }. Between them, a number of key/value pairs can reside. The key and value are separated by a colon (:), and if there is more than one key/value pair, the pairs are separated by a comma (,).

{
  "Students": [
    { "Name": "Amit Goenka", "Major": "Physics" },
    { "Name": "Smita Pallod", "Major": "Chemistry" },
    { "Name": "Rajeev Sen", "Major": "Mathematics" }
  ]
}

 

An array starts with an open bracket [ and ends with a closed bracket ]. Between them, a number of values can reside. If more than one value resides, they are separated by a comma (,).

[
  {
    "name": "Bidhan Chatterjee",
    "email": "bidhan@example.com"
  },
  {
    "name": "Rameshwar Ghosh",
    "email": "datasoftonline@example.com"
  }
]

 

JSON Format Validation:
Now that we’re a bit more familiar with the structure Splunk expects to extract from, let’s work with a sample. The sample data is JSON wrapped in a syslog header. While this data can be ingested as is, you will have to manually extract each field if you choose to not reformat it. You can validate the structure by copying this event to https://jsonformatter.curiousconcept.com/ .

Sample Data:

May 14 13:28:51 <redacted_hostname> github_audit[22200]: { "above_lock_quota":false, "above_warn_quota":false, "babeld":"eebf1bc7", "babeld_proto":"http", "cloning":false, "cmdline":"/usr/bin/git upload-pack --strict --timeout=0 --stateless-rpc .", "committer_date":"1589477330 -0400", "features":" multi_ack_detailed no-done side-band-64k thin-pack include-tag ofs-delta agent=git/1.8.3.1", "frontend":"<redacted>", "frontend_pid":17688, "frontend_ppid":6744, "git_dir":"/data/user/repositories/7/nw/75/42/9d/4564/6435.git", "gitauth_version":"dcddc67b", "hostname":"<redacted>", "pgroup":"22182", "pid":22182, "ppid":22181, "program":"upload-pack", "quotas_enabled":false, "real_ip":"10.160.194.177", "remote_addr":"127.0.0.1", "remote_port":"15820", "repo_config":"{\"ssh_enabled\":\"true\",\"ldap.debug_logging_enabled\":\"true\",\"auth.reactivate-suspended\":\"true\",\"default_repository_permission\":\"write\",\"allow_private_repository_forking\":\"true\"}", "repo_id":6435, "repo_name":"<redacted>", "repo_public":true, "request_id":"43358116096ea9d54f31596345a0fc38", "shallow":false, "status":"create_pack_file", "uploaded_bytes":968 }

 

Running the sample through the validator makes the errors easy to spot: the timestamp, hostname, and process fields of the syslog header sit outside of the JSON object, so the event is not valid JSON as written.

 

Replace strings in events with SEDCMD

You can use the SEDCMD method to replace strings or substitute characters. SEDCMD is applied in props.conf at parse time (on the indexer or heavy forwarder), before the data is indexed. The syntax for a sed replace is:

SEDCMD-<class> = s/<regex>/<replacement>/<flags>

  • <class> is a unique stanza name. This matters because SEDCMD settings are applied in alphabetical order.
  • <regex> is a Perl-compatible regular expression.
  • <replacement> is the string that replaces the regular expression match.
  • <flags> can be the letter g to replace all matches, or a number to replace only the specified match.
  • \1 is a backreference: use it in the replacement to insert the text captured by the first group of the regex.
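As a concrete illustration, the props.conf sketch below strips the syslog header from the sample github_audit event shown above so that the event begins at the opening curly bracket. The sourcetype name and the exact regular expression are assumptions for this example rather than settings from a production configuration; adjust both to match your data and deploy the stanza to the parsing tier (indexer or heavy forwarder).

[github:audit]
# Assumed sourcetype; removes "May 14 13:28:51 <hostname> github_audit[22200]: " so only the JSON object remains
SEDCMD-remove_syslog_header = s/^\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}\s+\S+\s+\S+\[\d+\]:\s+//g

If you need to keep the header values instead of discarding them, capture them in the regular expression and reference them with \1 in the replacement, for example to append them to the JSON as additional key/value pairs.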

 

How To Test in Splunk:

Copy the sample data into a text file and upload it using Splunk's built-in Add Data feature under Settings to test. Try out each SEDCMD and note the difference in the data structure for each attribute.

BEFORE and AFTER: the original post includes screenshots of the event before and after the SEDCMD transformations are applied.

Props.conf – Search Time Field Extraction

[<spec>]

KV_MODE = json

 

Want to learn more about JSON structured data & the SEDCMD in Splunk? Contact us today!

Splunk KvStore Migration

By: Christopher Winarski | Splunk Consultant and

Bruce Johnson | Director, Enterprise Security

 

Migrating your Splunk environment can be a daunting task, with the worry of missing valuable data. Did my users' settings migrate properly? Did all my applications migrate properly? Did all my lookup tables survive the migration? If you find yourself performing a Splunk migration, you may be asking yourself some of these questions. Today, I'll take one of those worries off your chest by walking you through a Splunk KvStore migration, specifically migrating the Splunk KvStore from one Search Head Cluster to a new Search Head Cluster (for example, from an on-premises cluster to an AWS cluster).

The KvStore stores data in key-value pairs known as collections, which are defined in your collections.conf files. Records contain each entry of your data, similar to a row in a database table. Using the KvStore instead of CSV files, you can define the storage schema for your data, perform create-read-update-delete operations on individual records using the Splunk REST API, and run lookups using the Splunk search language. The KvStore excels in performance once lookups grow large with many data points, which is especially prevalent within Enterprise Security, one of Splunk's premium apps.

The usual export/migration approach is a CSV export, which is not really practical for large KvStores because of file-size limitations on most operating systems; large lookups are what drove the use of MongoDB for the KvStore in the first place. Gemini KV Store Tools helps circumvent this tedious, only semi-workable migration process.

Gemini KV Store Tools comes with custom commands for the Splunk search bar that make the migration less complicated. The commands we are interested in for this migration are:

  • | kvstorebackup
  • | kvstorerestore

Requirements for this process:

  • You must already be utilizing Splunk’s KvStore for your lookups.
  • Downloaded and installed the "Gemini KV Store Tools" application in both the originating environment's Search Head Cluster and the new environment's Search Head Cluster you are migrating to. https://splunkbase.splunk.com/app/3536/
  • You must have already migrated/copied the applications from the old Search Head Cluster. We are interested in the collections.conf within these applications.
    • tar -zcf apps.tgz /opt/splunk/etc/shcluster/apps
  • The collections.conf files must be present on the new environment before proceeding

 

Step 1: On the original Search Head Cluster, identify the KvStore captain, log into that instance's GUI, and open the Search app. The KvStore captain is the instance in the search head cluster that receives the write operations for KvStore collections, whereas the Search Head captain is the instance that schedules jobs, pushes knowledge bundles to search peers, and replicates runtime changes to knowledge objects throughout the cluster. **Note:** the KvStore captain may be a different instance than the Search Head captain.
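If you are not sure which member holds KvStore captaincy, a quick check (a hedged sketch; the exact output wording varies by Splunk version) is to run the following from $SPLUNK_HOME/bin on any cluster member:

./splunk show kvstore-status

./splunk show shcluster-status

The first command reports which member is currently acting as the KV store captain; the second reports the search head captain, so you can see whether the two differ.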

 

Step 2: On this instance, also log into the backend and create a directory under the /tmp directory named “kvstore_backup”. Ensure Splunk has read/write permissions to this folder.

cd /tmp

mkdir kvstore_backup

sudo chown -R splunk:splunk /tmp/kvstore_backup

 

Step 3: In the search bar on the original environment's KvStore captain, run the command below. It creates a JSON file for each collection in the kvstore_backup folder, and you should see "Success" reported for each collection that is compressed.

| kvstorebackup path="/tmp/kvstore_backup" global_scope="true" compression="true"

 

Step 4: Check the KvStore monitoring console to verify the collection record counts, and save the page so you can refer to those counts for verification later. (old environment)

Monitoring Console > Search > KvStore:Instance

 

Step 5: Now that you have created your collection backups and verified that the number of records per collection is correct, go to each new search head cluster member (CLI) and edit server.conf to include:

[kvstore]

oplogSize = 10000

Also, on each search head cluster member in the new environment, change the search head replication factor to 1. (server.conf)

[shclustering]

replication_factor = 1

Once both are set, restart the instance. Do this for every search head cluster member in the new environment.

 

Step 6: Identify the captains and make the Search Head captain the same instance as the KvStore captain (KvStore captain = Search Head Cluster captain).

Useful commands:

./splunk show shcluster-status

./splunk show kvstore-status

Transfer captaincy to one node by bootstrapping the kvstore captain as the search head captain.

On the KvStore captain, which we also want to make the search head captain, run this command (CLI):

./splunk edit shcluster-config -mode captain -captain_uri <URI>:<management_port> -election false

On each other non-captain instance, run this command (CLI):

./splunk edit shcluster-config -mode member -captain_uri <URI>:<management_port> -election false

This will allow you to specify the captain as the kvstore captain and get rid of dynamic captaincy for this purpose. At the end we will want to revert our search head cluster back to a dynamic captaincy.

 

Step 7: Once the KvStore captain is also the search head captain, log into the CLI of the other search head nodes (every search head cluster member that is not the captain). Cleaning the local KvStore and then starting the instance will initialize a KvStore synchronization with the KvStore captain on startup.

SHUTDOWN Splunk: ./splunk stop

run: ./splunk clean kvstore --local

Then start splunk: ./splunk start

 

Step 8: SCP the kvstore_backup from Step 2 to the new environment's search head captain/KvStore captain. Make sure that Splunk has permissions to access the files. Follow these steps for guidance.

From the old instance where the backup was created:

scp -r kvstore_backup ec2-user@IPADDRESS:/tmp

Move the folder to /opt on the KvStore/search head captain:

mv kvstore_backup /opt/kvstore_backup

Change ownership of the folder and its contents to the splunk user for permissions:

sudo chown -R splunk:splunk /opt/kvstore_backup

 

Step 9: Gemini KV Store Tools must be installed on the new search head cluster before running this step; if you have not done so, please ensure it is installed. Once the kvstore_backup is in place with the correct permissions on the backend of the KvStore captain/search head captain, log on to the GUI of that Splunk instance, open Search, and run:

| kvstorerestore filename="/opt/kvstore_backup/*.json.gz"

Big restores can take many minutes to complete; be patient and let the search run.

 

Step 10: Verify that lookups return the same results in the new environment as in the old environment (compare against the page saved in Step 4) by running:

| inputlookup <Lookup definition>

 

Step 11: We want to revert the search head cluster back to a dynamic captaincy now (the static captaincy bootstrapping was just used for the migration) and also change our replication factor back to the original setting in the environment.

You can do this by logging on to each instance's CLI and stopping Splunk; then, on the search head cluster captain, run:

./splunk edit shcluster-config -mode captain -captain_uri <URI>:<management_port> -election true

On the other non captain search head cluster members run:

./splunk edit shcluster-config -mode member -captain_uri <URI>:<management_port> -election true

Then we want to edit the config file again to revert replication factor back to the original number that was set before the migration. (server.conf)

[shclustering]

replication_factor = 2

**The “2” is arbitrary here, as this should be set to the number that was present prior to the migration**

That's it! Migrations can be a scary endeavor, and if you are not prepared, you can easily lose data. If you seek further assistance, don't hesitate to reach out to us here at TekStream Solutions. We would be happy to help! No Splunk project is too small or too big.

How to Set Up Splunk DB Connect to Connect to Multiple MSSQL Databases and Some Tips & Tricks

By: Jon Walthour |Team Lead, Senior Splunk Consultant

 

Over the years, I have found one tried and true method for getting Splunk connected to multiple Microsoft SQL Server instances spread across a corporate network—connect to Windows from Windows. That is to say, run the DB Connect application from Splunk on a Splunk Enterprise Heavy Forwarder, installed on a Windows environment. Why must Splunk be running Windows? It certainly doesn’t if you’re going to authenticate to the MSSQL instances with local database accounts. That authentication process can be handled by the database driver. However, when multiple connections to multiple MSSQL instances are required, as is often the case, a bunch of local account usernames and passwords can be a nightmare to manage for everyone involved. So, Windows AD authentication is preferred. When that becomes a requirement, you need a Windows server running Splunk. I tried getting Splunk running on Linux to connect to SQL Server using AD authentication via Kerberos for a month and never got it to work. Using a Windows server is so much simpler.

To accomplish this, the first thing you need to do is request two things from your infrastructure teams: a service account for Splunk to use to connect to all the SQL Server instances, and a server running Microsoft Windows. The service account must have "log on as a service" rights, and the Windows server must meet the requirements of Splunk reference hardware with regard to CPUs, memory, and storage. The general best practice for Splunk is to use Group Policy Objects (GPOs) to define permissions so that they are consistent across a Windows environment. Relying on local Admin accounts can result in challenges, particularly across some of the "back-end" Splunk instances, such as Splunk Search Head to Indexer permissions.

Once the server and service account have been provisioned, install Splunk Enterprise and Splunk DB Connect (from Splunkbase) on it. Here's the first trick: open the Windows Services console and configure the splunkd service to run under the service account. This is crucial. You want not just the database connections to be made using the service account, but the Splunk executables themselves to be running under that account. This way, all of Splunk is authenticated to Active Directory and there are no odd authentication issues.
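If you prefer to script that change rather than click through the Services console, a minimal sketch using the built-in sc.exe utility might look like the following. The service name Splunkd and the account MYDOMAIN\svc_splunk are assumptions for illustration; substitute your own values, and note that sc.exe requires a space after each equals sign.

sc.exe config Splunkd obj= "MYDOMAIN\svc_splunk" password= "<service-account-password>"
sc.exe stop Splunkd
sc.exe start Splunkd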

After you have Splunk running under the MSSQL service account with DB Connect installed as an app in the Splunk instance, you'll want to install the Java Runtime Environment (JRE), either version 8 (https://www.oracle.com/java/technologies/javase-jre8-downloads.html) or version 11 (https://www.oracle.com/java/technologies/javase-jdk11-downloads.html), and download the appropriate MSSQL driver based on Splunk's documentation (https://docs.splunk.com/Documentation/DBX/latest/DeployDBX/Installdatabasedrivers), which covers both the Microsoft drivers and the open-source jTDS drivers. Personally, I've had better outcomes with the Microsoft drivers in this scenario.

Once you've downloaded the SQL database driver archive, unzip it. In the installation media, find the library "mssql-jdbc_auth-<version>.<arch>.dll" appropriate to the version and architecture you downloaded and copy it to the C:\Windows\System32 directory. Then, find the jar file "mssql-jdbc-<version>.<jre version>.jar" appropriate to your JRE version and copy it to $SPLUNK_HOME\etc\apps\splunk_app_db_connect\drivers.

Now, log into Splunk and go to the Splunk DB Connect app. It will walk you through the configuration of DB Connect. In the "General" section, fill in the path to where you installed the JRE (JAVA_HOME). This is usually something like "C:\Program Files\Java\jre<version>". The remaining settings you can leave blank. Just click "Save". This will restart the task server, which is the Java-based processing engine of DB Connect that runs all the database interactions.

In the “Drivers” section, if the MS SQL drivers are not listed with green checkmarks under the “Installed” column, click the “Reload” button to have the task server rescan the drivers folder for driver files. If they still do not have green checkmarks, ensure the right driver files are properly placed in $SPLUNK_HOME/etc/apps/splunk_app_db_connect/drivers.

Next, navigate to Configuration > Databases > Identities and click “New Identity”. Enter the username and password of the service account you’re using for the MSSQL connections and give it an appropriate name. Check “Use Windows Authentication Domain” and enter the appropriate value for your Active Directory domain. Save the identity.

Navigate to Configuration > Databases > Connections and click "New Connection". Pick the identity you just created and use the "MS-SQL Server using MS Generic Driver With Windows Authentication" connection type. Select the appropriate time zone for the database you're connecting to; this is especially important so that Splunk knows how to interpret the timestamps it will ingest in the data. For the "host" field, enter the hostname or IP address of the MSSQL server. Usually the default port of 1433 doesn't need to be changed, nor does the default database of "master". Enable SSL if your connection is to be encrypted, and I always select "Read Only" so there is no way an input can change any data in the connected database.

Finally, a few miscellaneous tips for you.

For the “Connection Name” of database connections, I always name them after their hostname and port from the JDBC URL Settings. This is because in a complex DB Connect environment, you can have many inputs coming from many different databases. A hostname/port number combination, however, is unique. So, naming them with a pattern of “hostname-port#” (e.g., “sql01.mycompany.com-1433”) will prevent you from establishing duplicate connections to the same MSSQL installation.

Another tip is that you can edit the connection settings for your JDBC driver directly in the configuration. This is typically only useful when your development team has come up with specific, non-standard configurations they use for JDBC drivers.

Sometimes complex database queries that call stored procedures or use complex T-SQL constructions can be more than the JDBC driver and Task Server can handle. In that case, I ask the MSSQL DBAs if they will create a view for me constructed of the contents of the query and provide me select rights on the view. That leaves all the complex query language processing with SQL server rather than taxing the driver and DB Connect.

When dealing with ingesting data from a SQL Server cluster, the usual construction of the JDBC connection string created by DB Connect won't do. With a clustered environment, you also need to specify the instance name in addition to the hostname and port of the SQL Server listener. So, after setting up the connection information where the host is the listener and the port is the listener port, click the "Edit JDBC URL" checkbox and add ";instance=<database instance name>" to the end of the JDBC URL to ensure you connect to the proper database instance in the cluster. For example, to get to the "testdb" instance in the "sql01" cluster, you'd have a JDBC URL like: "jdbc:sqlserver://sql01.mycompany.com:1433;databaseName=master;selectMethod=cursor;integratedSecurity=true;instance=testdb"

I hope these directions and tips have been helpful in making your journey into Splunk DB Connect simpler and more straightforward.

Happy Splunking!

Want to learn more about setting up Splunk DB Connect to connect to multiple MSSQL databases? Contact us today!

Creating Splunk Alerts (and Setting Permissions!) Through REST API

By: Marvin Martinez | Senior Developer

 

Creating alerts via the Splunk REST API is fairly straightforward once you know exactly what parameters to use to ensure that Splunk recognizes the saved search as an alert. The same applies to ACL permissions on these alerts and other Splunk knowledge objects.

First things first, let’s create a scheduled search via REST using the “/services/saved/searches” endpoint.  The curl code below creates a simple search to pull some data from the _internal index for the last 10 minutes.
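The original post shows that call as a screenshot; a minimal hedged sketch of it looks like this (the search name ATestRESTSearch matches the example used later in this article, while the token and hostname are placeholders):

curl -k -H "Authorization: Bearer <your-token>" https://localhost:8089/services/saved/searches \
  -d name="ATestRESTSearch" \
  --data-urlencode search="index=_internal earliest=-10m"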

Note that, to create the saved search, all that was needed was authorization (a token in this case) and a couple of parameters in the call: (1) a name for the search and (2) the search itself.

This will create a search in the Searches, Reports and Alerts screen in Splunk Web.

Back in Splunk Web, the search has been created and shows up as a Report. But what if you need this to be an alert?! Even more importantly, what if you want to set it up with specific permissions? Well, luckily, like essentially everything else in Splunk, this can also be done via the REST API.

To create a new search as an alert, you’ll need to call the same endpoint as shown above with the parameters mentioned below. Otherwise, call the “/services/saved/searches/{name}” endpoint if you’re modifying a search that’s already created.  For the purposes of this write-up, I will call the endpoint to manage an already created search (“/services/saved/searches/{name}”).

In order for Splunk to recognize the search as an alert, and not a Report, the following parameters have to be set correctly and passed along in your POST REST call.  The table below outlines the parameter name and a brief description of what they mean.

Parameter: Description
alert_type: "number of events" (if this is set to "always", which is the default, Splunk thinks it's just a report)
is_scheduled: true (a Boolean setting that Splunk checks to make sure there's a set schedule for the report, which is required for alerts)
cron_schedule: */10 * * * * (a cron schedule that represents the schedule on which the alert will run)
alert_comparator: "greater than" (the operator used in the alert settings to determine when to send the alert; associated with the alert_threshold below)
alert_threshold: 0 (the number to compare with the operator above, i.e. only alert when results > 0)

 

The curl command for the REST call is shown below.  Note the aforementioned parameters that are now being included.
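Since that screenshot is not reproduced here, a hedged reconstruction of the call using the parameters from the table above would look like this (credentials and hostname are placeholders):

curl -k -u admin:<password> https://localhost:8089/services/saved/searches/ATestRESTSearch \
  -d alert_type="number of events" \
  -d is_scheduled=1 \
  -d cron_schedule="*/10 * * * *" \
  -d alert_comparator="greater than" \
  -d alert_threshold=0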

But how does this search now look in the Searches screen? Once the REST command has been executed successfully, your alert should now be reflected appropriately as an "Alert".

For further confirmation of these settings, click the Edit link under Actions, and click Advanced Edit from the drop-down menu.  This will bring up a lengthy listing of all the settings for this search.  If using the REST API is not your style, this is where you can alternately set these settings from Splunk Web.

The listing is a long set of attribute/value pairs, including the alert_type, cron_schedule, and alert_threshold settings described above.

All that’s left now is to set your permissions as desired.  To do this, you’ll need to call a new endpoint.  You’ll use the previous endpoint you used to manage a specific saved search, but you’ll add a new section at the end for “acl” (i.e. ‘https://localhost:8089/services/saved/searches/ATestRESTSearch/acl’).  This acl extension/option is available for any endpoint but, in this use case, we’ll use it to manage the permissions for the alert we created above.

In the case of a saved search, you’ll need to include the following parameters in your REST call:

Parameter: Description
sharing: "app" (this can also be "global" or "user", depending on the scope of access you want this search to have; required when updating the ACL properties of any object)
app: "search" (the name of the app this search belongs to; for saved searches, this is required when updating ACL properties)
perms.read: a comma-delimited list indicating which roles to assign read permissions to
perms.write: a comma-delimited list indicating which roles to assign write permissions to

 

A curl command that was used in this case is shown below.  In this example, the alert is being updated to give read permissions to admin and user-mmtestuser1.  Additionally, it is being updated to give write permissions to admin and power roles.
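A hedged reconstruction of that call (the original shows it as a screenshot; the credentials are placeholders and the role lists mirror the description above):

curl -k -u admin:<password> https://localhost:8089/services/saved/searches/ATestRESTSearch/acl \
  -d sharing=app \
  -d app=search \
  -d perms.read="admin,user-mmtestuser1" \
  -d perms.write="admin,power"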

As an added bonus, here is how Postman can be leveraged to make this final call, in case that's your REST API-calling tool of choice. The Authorization tab, in this example, was set to the Basic Auth type with admin credentials. In the Body tab, you'll set your parameters to the REST call as "x-www-form-urlencoded" values. Note the four parameters mentioned above included in the call.

Once the REST call is made, navigate to your “Searches, Reports, and Alerts” screen in Splunk Web, and click to Edit Permissions of your alert.  You’ll notice that your permissions are now reflected just the way you designated them in your REST call.

The Splunk REST API is a great alternative, and a necessity for many, to using Splunk Web to create and manage knowledge objects.  Anything that can be done in Splunk Web can be done via the REST API, though it sometimes can be a bit hard to easily understand the process for how to achieve some of these desired actions.  Now, you can easily create alerts and set the permissions just the way you want…and all through REST!

Want to learn more about creating alerts via the Splunk REST API? Contact us today!

 

Using an External Application to Pull Splunk Search Results

By: Aaron Dobrzeniecki | Splunk Consultant

 

Have you ever wanted to pull logs from Splunk without actually being physically signed into the Splunk Search Head? With an external application, such as Postman, you can query the Splunk REST API endpoint to actually provide you with results from a search being run.

When Splunk runs a search, it creates a search ID which we can use to grab the results from the REST endpoint. We will be testing out two ways to get the results of a search. The first way is to grab the name of the Splunk search and query it against the /services/saved/searches/{search_name}/dispatch endpoint, which will provide us with the sid. We then use the sid to grab the results of the search, which will fire off the search and will poll for results as they come in. The second way to get the search results is by doing an export on the search name which will run the search and get the results without polling.

First things first, you need to make sure that the user you are authenticating to Splunk with has the "Search" capability, as well as access to search the necessary indexes. It's that simple! If you are setting up a user for a particular person, make sure they only have access to what they need. Granting further access is not necessary and can cause security issues.
In this example we are using the Postman application to query the Splunk REST API to grab search results from a couple of different reports/saved searches. Things we are going to need include:

  • Splunk user account with the Search capability. We need that user to be able to search the index we are going to be grabbing our data from.
  • We also need to know the Splunk URL we are going to be pulling from. In this case, I am using my localhost as an example. We will also be querying the Splunk management port of 8089 to get our results set.

The request is a POST against the REST endpoint used to dispatch the search (/services/saved/searches/{name_of_search}/dispatch), authenticated with a username and password. The call reaches out to Splunk and grabs the SID (search ID) of the search named Index Retention Getting Close. With this search ID we will be able to run a GET against the Splunk REST API and grab the results of the search.

Below are two Splunk REST API endpoints that you can query (using POST) to get the search ID for a specified search. The first endpoint is for searches that do not have Global permissions: as long as the user you are authenticating with has a role that can read the search, you can query /servicesNS/nobody/{app}/saved/searches/{name}/dispatch to retrieve the search ID. The second endpoint, /services/saved/searches/{name}/dispatch, can be queried if the search has Global permissions and you have read access. The two scenarios are below.
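As a hedged sketch of the first scenario (the hostname, credentials, app, and search name are placeholders), the dispatch call looks like this; the response body contains the SID:

curl -k -u <user>:<password> -X POST "https://localhost:8089/servicesNS/nobody/search/saved/searches/Index%20Retention%20Getting%20Close/dispatch"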

The servicesNS form of the endpoint can be used to grab the search ID of a search that lives in an app with app-level permissions. As long as my account has access to the app and to the search inside the app, I will be able to query it. For this example, we changed the permissions of the search to App only.

(Screenshots in the original post show the results of that search in both JSON and XML format.)

Since the search in the second scenario has Global permissions, we do not need to use the servicesNS endpoint. When you do a POST with a dispatch on the name of a search/report, the response contains the search ID. We will use this search ID to query the results of the search and show the actual search results in the Postman application. The Splunk REST API endpoint you will want to query next is /services/search/jobs/{sid}/results?output_mode=(atom | csv | json | json_cols | json_rows | raw | xml). Any of those values will get you the results of the search in the selected format. In this example, I will be showing you json and xml.
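A hedged example of that results call, requesting JSON output (the credentials and SID are placeholders):

curl -k -u <user>:<password> "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"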

(Screenshots in the original post show the same result set rendered in XML and in JSON, depending on the output_mode selected.)

Way 2: Query the REST API to show the results by using an export on the search name, which runs the search and returns the results without polling. This uses the /services/search/jobs/export endpoint to stream the results of the search as they come in.
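A hedged sketch of that export call (credentials and the search name are placeholders; the leading pipe runs the savedsearch generating command so the report is executed and streamed back without polling):

curl -k -u <user>:<password> https://localhost:8089/services/search/jobs/export \
  --data-urlencode search='| savedsearch "Index Retention Getting Close"' \
  -d output_mode=json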

Remember, you need to have the Search capability in Splunk, and you have to be able to read the results of the search, whether that means the search has Global permissions or your role has read access to the app and the search. Below are some links referencing the Splunk REST API. If you have any questions at all about querying the Splunk REST API from an external application, please let me know!

https://docs.splunk.com/Documentation/Splunk/8.0.6/RESTTUT/RESTsearches

https://docs.splunk.com/Documentation/Splunk/8.0.6/RESTREF/RESTsearch#search.2Fjobs.2Fexport

 

Want to learn more about using an external application to pull Splunk search results? Contact us today!

Forwarder 6.x Compatibility with Splunk 8.0

By: Forrest Lybarger | Splunk Consultant

 

If you are looking into upgrading Splunk to 8.0, you have probably come across the compatibility matrix for forwarders:

Source: https://docs.splunk.com/Documentation/VersionCompatibility/current/Matrix/Compatibilitybetweenforwardersandindexers

 

This table means that Splunk does not support, nor has it tested, the use of 6.x forwarders with 8.0 indexers. It doesn’t mean that it is impossible for them to work together. In other words, you can use 6.x forwarders at your own risk. Any problems you have with these forwarders, however, will almost always be caused by the version difference and most likely fixed by upgrading.

With all the caveats out of the way, how do you get this working? Well, it depends on what exact version your forwarders have. Here are the affected versions:

  • 6.0.0 to 6.0.6
  • 6.1.0 to 6.1.4
  • 6.2.0 to 6.2.6
  • 6.3.0 to 6.3.1
  • 6.3.1511.1

The issue is that some older 6.x versions of Splunk use a different SSL protocol from 6.6.x and later versions, which makes them unable to connect via the management port (usually port 8089) and unable to communicate with the deployment server. To correct this, you need to force the newer Splunk components to use an SSL version that the older components can understand. In this case, your forwarders are the only components not upgrading to 8.0, so you only need to fix the deployment server. To avoid issues with these forwarder versions, add an app with a server.conf containing this stanza to your deployment server:

[sslConfig]

sslVersions = *,-ssl2

sslVersionsForClient = *,-ssl2

cipherSuite = TLSv1+HIGH:TLSv1.2+HIGH:@STRENGTH

Allow any sslConfig apps your environment already has to override this new app by giving the new app a lower-priority name, or just add the lines from the stanza that aren't already present in your current app. You can delete this new SSL config after your forwarders are upgraded.

This fix should only be used if you must upgrade to 8.0 and can’t wait for your forwarders to upgrade. Keep in mind that this is not Splunk supported, so for now it could work (latest version as of writing this is 8.0.6), but in the future, Splunk could break this workaround. When you do implement this fix, make sure to prioritize upgrading your forwarders and understand that any problems involving data ingestion or forwarding are most likely caused by not upgrading your forwarders to at least 7.0 (latest version possible is recommended).

Want to learn more about forwarder 6.x compatibility with Splunk 8.0? Contact us today!