Using Collect for Summary Indexing in Splunk

  By: Karl Cepull | Senior Director, Operational Intelligence

 

Splunk can be a valuable tool in cybersecurity. Attacks from outside forces, along with questionable activity within a network, put sensitive data and corporate assets at tremendous risk. Using Splunk to find bad actors or malicious events can help an organization protect itself and discover breaches in time to act. However, larger corporate environments may see millions of events from their firewalls in a single hour. When looking at traffic trends over the past seven days, the number of events may make the search inefficient and expensive to run. This is when a summary index can be used to reduce the volume of data being searched, and return meaningful results quickly.

Summary indexes can be tremendously helpful to find a needle in the haystack of data. If the need is to determine the most frequent source IP for inbound traffic, fields such as vendor, product, and product version may not be helpful. Including these fields adds more work to the search process, multiplied by millions (and sometimes billions) of events. Summarizing the data into source IP, destination IP, ports used, and action (just to name a few) helps to ease the strain of the search. When done correctly, summary indexes won’t have an impact on your license usage.

A summary index starts off as a normal index. The specifications of the index need to be defined in indexes.conf. Data cannot be summarized to an index that does not exist. Adding data to the index can be done by adding a “collect” statement at the end of the search. The structure of the command is:

collect index=<index name> <additional arguments>

The collect command only requires an index to be specified. Other arguments can be added to the collect command:

addtime (default: True) – True/False. Determines whether to add a time field to each event.
file (default: <random-number>_events.stash) – String. When specified, this is the name of the file where the events will be written. A timestamp (epoch) or a random number can be used by specifying file=$timestamp$ or file=$random$.
host (default: n/a) – String. The name of the host you want to specify for the events.
marker (default: n/a) – String. One or more key-value pairs to append to each event, separated by a comma or a space. Spaces or commas in a value need to be escape-quoted: field=value A will be changed to field=\"value A\".
output_format (default: raw) – raw or hec. Specifies the output format.
run_in_preview (default: False) – True/False. Controls whether the collect command is enabled during preview generation. Change to True to make sure the correct summary previews are generated.
spool (default: True) – True/False. The default of True writes the data to the spool directory, where it is indexed automatically. If set to False, the data is written to $SPLUNK_HOME/var/run/splunk, where the file will remain unless moved by other automation or administrative action. This can be helpful when troubleshooting so summary data doesn't get ingested.
source (default: n/a) – String. Name or value for the source.
sourcetype (default: stash) – String. Name or value for the sourcetype. The default summary sourcetype is "stash."
testmode (default: False) – True/False. If set to True, the results are not written to the summary index, but the search results are made to appear as they would be sent to the index.

When using the collect command, there are two important details to remember:

  1. Changing the sourcetype to something other than “stash” will result in the summary data ingestion hitting your license usage.
  2. Unless specific fields are added to a table command, the collect command will grab all returned fields, including _raw. Having the raw data added to the summary index reduces its effectiveness.

With the collect command set and data ready to be sent to a summary index, the next step is to create a scheduled search. The search should run frequently enough to surface threats while they are still actionable, but spaced out enough to keep resource utilization low. The summary index is more helpful for historical insights and trends than for real-time or near-real-time searches.

Going back to our original example of summarizing firewall logs, here is an example of a scheduled search:

index=firewall_index sourcetype=network_traffic
| fields action, application, bytes, dest_ip, dest_port, src_ip, packets
| table action, application, bytes, dest_ip, dest_port, src_ip, packets
| collect index=summary_firewall source=summary_search

Set this search to run every 10 minutes, looking back 10 minutes. The summary index will get a list of events with these fields specified, and continue to receive new events every 10 minutes. Historical searches for trends can now run without having to dig through unnecessary data and provide faster analysis on traffic patterns and trends.
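For example, a historical search against the summary index for the top inbound source IPs over the past seven days might look like this (the field names match the scheduled search above):

index=summary_firewall source=summary_search earliest=-7d@d
| stats count AS events sum(bytes) AS total_bytes BY src_ip
| sort - events
| head 10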

Contact us for more help on using the Collect command for summary indexing your Splunk environment!

How to Configure SSL for a Distributed Splunk Environment

  By: Bruce Johnson  | Director, Enterprise Security

 

Many organizations use Splunk today. Of those adopters, most have a distributed Splunk environment. Often, organizations have sensitive data traversing their network, which makes its way into Splunk. More now than ever, security is at the forefront of everyone's mind, and securing your Splunk environment is no exception. How to properly secure a distributed Splunk environment is not a new concept, but it is still frequently underutilized or improperly implemented. With that in mind, this post walks through in detail how to implement SSL between your Splunk Deployment Server and Splunk Web, and gives an overview of manually configuring the Search Head and Search Peers.

Because Splunk ships with OpenSSL, the examples below use OpenSSL. Ensure you are using the proper version of OpenSSL on each Splunk instance. The steps provided assume you are configuring a Linux-based host, that Splunk is installed in the /opt/splunk OR /opt/splunkforwarder directory, and that you are using Splunk default ports.

Deployment Server and Splunk Web

You will want to secure the traffic from your web browser to your Deployment Server, as non-SSL traffic is sent in cleartext, which makes it easy for anyone who can intercept the traffic to read your data. Use SSL certificates to help secure your data by turning that cleartext into ciphertext, especially when you need to access instances outside of your network. There are, of course, default certificates that ship with Splunk, but it is a best practice to go with either a self-signed or a purchased CA-signed certificate instead. This post demonstrates how to utilize self-signed certificates.

First, make a directory for your self-signed certificates to ensure you don't interfere with the default Splunk certificates, then change into that folder. While in that folder, create the private key (PK) that you will utilize to sign your certificates. Ensure access to this folder is limited to personnel who need it, as private keys should never be shared: encrypted data can be decrypted by anyone who has the private key.

Next, you will need to generate your custom Certificate Signing Request (CSR). Use your CSR to create your public certificate (.pem), which is what you will distribute to your various Splunk instances. With the root certificate created to act as a CA, you will then utilize the CSR, CA certificate, and private key to generate and sign a server certificate that is valid for three years. Use the server certificate by distributing it to your indexers, forwarders, and other Splunk instances, which communicate over management port 8089. We will only discuss, however, implementing it on your Deployment Server.

  • 1. mkdir /opt/splunk/etc/auth/custcerts
  • 2. cd /opt/splunk/etc/auth/custcerts
  • 3. /opt/splunk/bin/splunk cmd openssl genrsa -aes256 -out mattCAPK.key 2048
    • a. Enter a secure password, then again to confirm.
  • 4. /opt/splunk/bin/splunk cmd openssl rsa -in mattCAPK.key -out mattCAPKNoPW.key
    • a. Removing the password makes it easier for testing.
    • b. You’ll need to enter the secure password you created in step 3 above.
  • 5. /opt/splunk/bin/splunk cmd openssl req -new -key mattCAPKNoPW.key -out mattCACert.csr
    • a. Enter details to questions asked.
  • 6. /opt/splunk/bin/splunk cmd openssl x509 -req -in mattCACert.csr -sha512 -signkey mattCAPKNoPW.key -CAcreateserial -out mattCACert.pem -days 1095

Now you will need to generate your server certificate:

  • 7. /opt/splunk/bin/splunk cmd openssl genrsa -aes256 -out mattServerPK.key 2048
  • 8. /opt/splunk/bin/splunk cmd openssl rsa -in mattServerPK.key -out mattServerNoPW.key
    • a. Again, removing the password makes testing easier.
  • 9. Use your new server private key to generate a CSR for your server certificate (use your Deployment Server's fully qualified domain name for the Common Name in the CSR).

Similar to steps 1-6, you will use the private key to create the CSR, then both to create the server certificate.

  • 10. /opt/splunk/bin/splunk cmd openssl req -new -key mattServerNoPW.key -out mattServerCert.csr
  • 11. /opt/splunk/bin/splunk cmd openssl x509 -req -in mattServerCert.csr -SHA256 -CA mattCACert.pem -CAkey mattCAPKNoPW.key -CAcreateserial -out mattServerCert.pem -days 1095

You’ll now want to concatenate them all together (you will do this two different times in these steps). The format and reasoning are explained here:

  • 12. cat mattServerCert.pem mattServerNoPW.key mattCACert.pem > mattNewServerCert.pem

At this point, you will need to update the server.conf file on your Deployment Server. This file is located in the /opt/splunk/etc/system/local/ directory. You can get more granular in the stanzas if you prefer, and the options are listed in Splunk docs.

  • 13. Find the [sslConfig] stanza.
  • 14. [sslConfig]
  • 15. enableSplunkdSSL = true
  • 16. serverCert = /opt/splunk/etc/auth/custcerts/mattNewServerCert.pem
  • 17. caCertFile = /opt/splunk/etc/auth/custcerts/mattCACert.pem

Here you will need to restart Splunk on your Deployment Server instance.

  • 18. /opt/splunk/bin/splunk restart

You will need to generate a key specifically for the web UI for the Deployment Server. Please note that you must remove the password for the Splunk Web portion, as it’s not compatible with a password.

  • 19. /opt/splunk/bin/splunk cmd openssl genrsa -des3 -out mattWebPK.key 2048
  • 20. /opt/splunk/bin/splunk cmd openssl rsa -in mattWebPK.key -out mattWebPKNoPW.key
  • 21. /opt/splunk/bin/splunk cmd openssl req -new -key mattWebPKNoPW.key -out mattWebCert.csr
  • 22. /opt/splunk/bin/splunk cmd openssl x509 -req -in mattWebCert.csr -SHA256 -CA mattCACert.pem -CAkey mattCAPKNoPW.key -CAcreateserial -out mattWebCert.pem -days 1095
  • 23. cat mattWebCert.pem mattCACert.pem > mattWebCertificate.pem

(You should be noticing a trend by now!)

You will now need to update the web.conf [settings] stanza, which is located in the /opt/splunk/etc/system/local/ directory path.

  • 24. [settings]
  • 25. enableSplunkWebSSL = true
  • 26. privKeyPath = /opt/splunk/etc/auth/custcerts/mattWebPKNoPW.key
  • 27. serverCert = /opt/splunk/etc/auth/custcerts/mattWebCertificate.pem

** For reasons of Splunk magic, the Deployment Server has issues pushing certs to Deployment Peers, so configure them individually/manually. An app would be a much simpler method, though.

Once implemented, test within your browser. I have had issues with Google Chrome, but Firefox loads the page as desired. You will find that only https works at this point; http no longer will.

SHs & Search Peers/Indexers

When adding search peers (indexers) to a search head, many admins will simply use the Splunk user interface (UI) or the command-line interface (CLI). In many situations, these are efficient and complete methods. There are, however, use cases that require adding search peers by editing distsearch.conf directly. This approach allows more granular and advanced features to be implemented. When editing distsearch.conf directly, key files need to be distributed manually to each search peer. This is in contrast to the two other methods, which implement authentication automatically.
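As a hedged illustration (the host names and port are placeholders for your own indexers), a manually maintained distsearch.conf on the search head might contain a stanza like this:

[distributedSearch]
servers = https://idx01.example.com:8089,https://idx02.example.com:8089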

Upon adding your search peers to your search head(s) via editing distsearch.conf, the key files need to be copied to the proper path. On your search head(s), copy the public key file, and place it in your search peer(s)’ file structure (file location(s) examples follow this paragraph). If adding search peer(s) to multiple search heads, then each search head’s public key file needs to be in its own folder named after the search head (utilize the actual serverName that is listed in server.conf for the folder name). Once the files have been properly copied over, simply restart Splunk on each Splunk search peer instance. The file location examples are as follows:

On your search head:

  • $SPLUNK_HOME/etc/auth/distServerKeys/sh1PublicKey.pem

On your search peers/indexers:

  • $SPLUNK_HOME/etc/auth/distServerKeys/searchHead1Name/sh1PublicKey.pem
  • $SPLUNK_HOME/etc/auth/distServerKeys/searchHead2Name/sh2PublicKey.pem
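For example, copying the search head's public key out to a peer and restarting it might look like the following (the host name and SSH user are placeholders, and the destination folder must match the search head's serverName):

scp /opt/splunk/etc/auth/distServerKeys/sh1PublicKey.pem splunk@idx01.example.com:/opt/splunk/etc/auth/distServerKeys/searchHead1Name/
ssh splunk@idx01.example.com "/opt/splunk/bin/splunk restart"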

Each instance of your Splunk deployment can, and should, be configured to use SSL. Each instance has its own caveats and nuances that need special attention to detail to configure properly. You should also look into securing your traffic between your forwarders and indexers.

Contact us for more help on configuring SSL for your distributed Splunk environment!

TekStream Provides Extra Value to Splunk Managed Services Customers

By: Matthew Clemmons | Managing Director

 

Earlier this year, TekStream was named Splunk’s Partner of the Year for Professional Services in two areas: Americas and Global. In addition to receiving these prestigious awards, we also collaborated to produce this video highlighting the extra value we bring to our Splunk managed services clients.

Splunk is a robust data storage and management platform, but many organizations lack the in-house expertise needed to maximize the value of their Splunk environment. When you partner with TekStream, we help you transform your data into powerful insights to drive exponential business growth.

As a Splunk Elite Managed Services Provider, TekStream has specialized knowledge and experience to ensure your environment is architected for full efficiency, so you never miss out on any of the benefits Splunk provides for your business.

Interested in learning how the Splunk/TekStream partnership can improve your operations? Contact us today!

 

SignalFx Agent Configuration for Docker and Gunicorn

  By: William Phelps  |  Senior Technical Architect

 

This blog covers the basic steps for configuring the SignalFx agent and configuring a Python application running in Gunicorn to send trace data to SignalFx via the agent when Gunicorn is executed within a Docker container.

Let’s start with a high-level overview of the technologies involved in the solution:

  • – The SignalFx Tracing Library for Python automatically instruments Python 2.7 or 3.4+ applications to capture and report distributed traces to SignalFx with a single function.
  • – The library accomplishes this by configuring an OpenTracing-compatible tracer to capture and export trace spans.
  • – The tracer can also be used to embed additional custom instrumentation into the automatically generated traces. This blog will concentrate solely on the auto-instrumentation approach.
  • – The SignalFx-Tracing Library for Python then works by detecting libraries and frameworks referenced by the application and then configuring available instrumentors for distributed tracing via the Python OpenTracing API 2.0. By default, the tracer footprint is small and doesn’t declare any instrumentors as dependencies.
  • – Gunicorn, or ‘Green Unicorn,’ is a Python web server gateway interface (WSGI) HTTP Server for UNIX/Linux. The Gunicorn server is a pre-fork worker model and is broadly compatible with various web frameworks, simply implemented, light on server resources, and is fairly speedy.

The following notes are provided as general prerequisites or assumptions:

  • – The SignalFx agent is already installed and initially configured on the Docker host. (The Otel collector is not yet compatible with this configuration.)
  • – Alternately, the SignalFx agent can be deployed as its own Docker container, but this article assumes a local installation on the Docker host.
  • – The SignalFx agent is already sending traces to the proper SignalFx realm and account.
  • – The Docker host is assumed to be Linux (RHEL/CentOS/Oracle/AWS). The steps would be similar for Ubuntu/Debian, but the commands shown will be RHEL-centric.
  • – Python 3 is installed on the host.
  • – Ensure that proper access to Terminal or a similar command-line interface application is available.
  • – Ensure that the installing Linux username has permission to run “curl” and “sudo.”

Process Steps

The general overall flow for this process is relatively short:

  1. Configure the SignalFx agent to monitor Docker containers.
  2. Create a Gunicorn configuration to support the tracing wrapper.
  3. Create a Dockerfile to deploy the application.

Docker Monitoring Configuration

If the SignalFx agent was installed previously, navigate to the folder where the agent.yaml resides.

Edit the agent.yaml file to enable Docker monitoring. Under the "Observers" section, add a type of "docker," and under "Monitors," add a type of "docker-container-stats."

Also, under the “Monitors” section, ensure that for the type “signalfx-forwarder,” the attribute “listenAddress” is set to 0.0.0.0:9080, and not “localhost” or “127.0.0.1”.

Additionally, under the type “signalfx-forwarder,” uncomment the attribute “defaultSpanTags.”

Uncomment and set a meaningful value for "environment." This value will be sent with every trace from this host/SignalFx agent invocation and is used for filtering. "environment" is a child attribute of "defaultSpanTags." Be aware of the appropriate indentation, as YAML is very strict.
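Pulling those edits together, the relevant portion of agent.yaml might look like the following sketch (the "environment" value is a placeholder; leave the rest of your existing file intact):

observers:
  - type: docker

monitors:
  - type: docker-container-stats
  - type: signalfx-forwarder
    listenAddress: 0.0.0.0:9080
    defaultSpanTags:
      environment: production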

Save the file. At this point, the agent is configured to send Docker metrics data, but it likely will not be sending anything yet. A quick look at the logs via journalctl will probably show a permissions issue with reading the Docker socket. Add the user "signalfx-agent" to the "docker" group and restart the SignalFx agent service to address this issue.
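Assuming the standard signalfx-agent systemd service and the usual Linux "docker" group, the check and the fix might look like this:

# Check the agent logs for Docker socket permission errors
sudo journalctl -u signalfx-agent --since "10 minutes ago"

# Give the signalfx-agent user access to the Docker socket, then restart the agent
sudo usermod -aG docker signalfx-agent
sudo systemctl restart signalfx-agent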

 

Gunicorn Configuration

Use the following steps to configure Gunicorn for auto-instrumentation of Python for SignalFx. Again, the assumption for this blog is that Gunicorn is being deployed to a Docker container.

  1. In Python’s application root directory, create a file called “gunicorn.config.py”.
  2. The contents of this file should appear as follows (or be modified to include the following):
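The original file contents are not reproduced here, but for the splunk-opentelemetry library the usual pattern is to start tracing in Gunicorn's post_fork hook so that every worker process reports its own spans. A minimal sketch (the exact import path can vary between library versions) might look like:

# gunicorn.config.py -- minimal sketch; adjust to match your tracing library version
from splunk_otel.tracing import start_tracing

def post_fork(server, worker):
    # Gunicorn calls this hook in each worker after it is forked;
    # starting tracing here ensures every worker exports its own spans.
    start_tracing()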

Local Docker Configuration

At a high level, the Dockerfile simply consists of the directives to create the necessary environment to add Gunicorn and the Python application to the container. These directives include:

  • – Creating the expected directory structure for the application.
  • – Creating a non-root user to execute Gunicorn and the application inside the container. (Running as root is not recommended and will likely cause the container build to fail.)
  • – Setting environment variables to pass to the container context.
  • – Loading the Splunk OpenTelemetry dependencies.
  • – Launching Gunicorn.

Please note the “ENV” directive for “SPLUNK_SERVICE_NAME” in step 7. The service name is the only configuration option that typically needs to be specified. Adjust this value to indicate the Python service being traced.

Other options can be found in the GitHub documentation under “All configuration options.”

This Dockerfile is using the Python3.8 parent image as the FROM target. Accordingly, the “pip” instruction in step 8 may need to be altered based on the parent image. The requirements file argument, however, should still be valid.

The requirements file appears as follows. This file lists out Gunicorn and specific Paste libraries, along with basic setup items and a version of Flask. The actual requirements for a project may vary. “Splunk-opentelemetry” in turn will load packages it requires. As such, this requirements file is not to be considered the complete library source.
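As a representative example only (your project's actual packages and pinned versions will differ), such a requirements file might contain:

gunicorn
Paste
PasteDeploy
setuptools
wheel
Flask
splunk-opentelemetry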

Setting the PATH variable typically is NOT needed as shown in step 10. However, this ensures that the correct environment is present prior to running the bootstrap. The PATH must include the user’s “.local/bin” and “.local” folders from the home directory.

Finally, in step 12, note the use of both "--paste" and "-c". Passing an .ini file with "--paste" allows additional configuration to be added to the build. "-c" is required to load the SignalFx configuration that was defined in "gunicorn.config.py" earlier. This initialization line is shown to illustrate that both parameters can be used simultaneously; "-c" should follow "--paste" if both are used.
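Pulling the preceding notes together, a minimal Dockerfile sketch might look like the following. The step numbering will not line up with the original screenshots, and the user name, directories, .ini file name, bind address, and bootstrap command are assumptions that depend on your project and on the splunk-opentelemetry version in use:

FROM python:3.8

# Expected directory structure for the application
WORKDIR /app
COPY . /app

# Non-root user to run Gunicorn and the application
RUN useradd --create-home appuser && chown -R appuser:appuser /app
USER appuser

# Environment variables passed to the container context
ENV SPLUNK_SERVICE_NAME=my-python-service
ENV PATH="/home/appuser/.local/bin:/home/appuser/.local:${PATH}"

# Load the Splunk OpenTelemetry dependencies and bootstrap the instrumentation
RUN pip install --user -r requirements.txt
RUN splunk-py-trace-bootstrap  # assumption: the bootstrap command name varies by library version

# Launch Gunicorn with both the paste .ini and the SignalFx config ("-c" follows "--paste")
CMD ["gunicorn", "--paste", "app.ini", "-c", "gunicorn.config.py", "--bind", "0.0.0.0:8000"]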

Running the Dockerfile will generate a lot of output, but the final lines should look something like this:

Checking in the SignalFx UI should show the new service defined in step 7 of the Dockerfile.

Contact us for more help on configuring SignalFx for Docker & Gunicorn!

Don’t Be a Karen: Rebuilding the Terraform State File and Best Practices for Backend State File Storage

  By: Brandon Prasnicki  |  Technical Architect

 

It happened. It finally happened. After talking to the manager, Contractor Karen quit. She was solely responsible for managing the project’s cloud architecture with Terraform. Now that Karen left, a new resource needs to take her place and continue managing and building the cloud infrastructure. Luckily, the terraform code was in a git repository (excluding the .terraform dir), but no one is sure if it is up to date, and the state file was local to Karen’s machine and not recoverable. What to do now?

  1. Don’t be a Karen. Make it a company policy to configure the backend. A Terraform backend is the configuration on how (and where) to store your Terraform state in a centralized, remote location.
    • – A shared resource account or a production account is a good place to store terraform states.
    • – Having a remote backend is also a must for shared development environments.
  2. Use a versioned bucket. State files can get corrupt, and you may need to revert to an old version of the state file.
  3. Configure the backend. For each unique terraform state, make sure to update the key path to be reflective of the workload architecture the state file is associated with:
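The original configuration is not reproduced here, but an S3 backend along those lines might look like this (the bucket, key path, region, and lock table are placeholders):

terraform {
  backend "s3" {
    bucket         = "prod-terraform-state"              # versioned bucket in a shared or production account
    key            = "networking/vpc/terraform.tfstate"  # reflects the workload this state describes
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-locks"             # optional: state locking for shared environments
  }
}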

If it’s already too late, and you have been victimized by a Karen, then it’s time to rebuild the state file.

  1. Depending on the size of your workload, this will be a time-consuming process.
  2. For each resource, you will need to identify the key needed to import into the state. For this key, reference the terraform documentation. For example:
    • a. For a VPC you would reference this page and see that to import a VPC you would need the VPC ID:
      terraform import aws_vpc.test_vpc vpc-a01106c2
    • b. For an EC2 instance you would reference this page and see that to import an EC2 instance you would need the EC2 instance ID:
      terraform import aws_instance.web i-12345678
  3. After each import, you should run a plan and make sure the plan does not expect any changes you are not anticipating and correct them in the code if applicable. This process will take time.

Contact us for more help on rebuilding Terraform State Files!

How to Merge Two Multi-Site Indexer Clusters into One

  By: Jon Walthour  |  Team Lead, Senior Splunk Consultant

 

Problem: Take two multi-site indexer clusters and meld them into one with all the buckets from cluster A residing in and being managed by cluster B. Then, with all the buckets transferred to cluster B, cluster A indexer hardware can be decommissioned.

TL;DR: It is not possible to do this, because the buckets from cluster A will never be moved outside of cluster A’s storage by cluster B’s cluster manager.

Step 1 is to make the clusters identical in configuration. You’d have to:

  1. Ensure both clusters are running on the same version of Splunk.
  2. Ensure the indexes.conf files on both clusters are identical—that they both contain all the indexes and that each index stanza is named the same in both (e.g., one indexer cluster can't put its Windows event logs in an index named "windows" while the other puts them in one named "wineventlog"). Configurations would need to be changed and directories renamed to make both clusters the same in terms of index names and locations of hot/warm and cold buckets. Bottom line: you need to be able to use the same indexes.conf in either cluster because, once they are merged, they will share one.
  3. The contents of the cluster manager's configuration bundle directory (manager-apps, formerly master-apps) and the peers' apps directories (peer-apps, formerly slave-apps) would also need to be identical in terms of contents and versions across both clusters. Again, the peers are eventually going to share the same bundle from one cluster manager. So, in preparation, they must be made identical.

Step 2: Turn cluster A from a multi-site indexer cluster into a single-site indexer cluster. The replicated buckets on site2 in cluster A are redundant and will get in the way. To do this, the site_search_factor and site_replication_factor in cluster A will need to be set with "total" values of 2 each. Then, wait for all the fixup tasks to complete so the cluster meets its site_replication_factor and site_search_factor. Finally, follow the documented steps to convert cluster A to a single-site indexer cluster.

Step 3: Now add single-site cluster A to multi-site cluster B as a new site. For instance, if multi-site cluster B has three sites, site1, site2, and site3, cluster A now becomes site4 of cluster B. You do this by:

  1. Edit server.conf on the cluster manager, adding "site4" to the "available_sites" attribute under the "clustering" stanza, and restart Splunk (see the server.conf sketch after this list).
  2. Add peers to the new site by running:
    splunk edit cluster-config -mode peer -site site4 -master_uri https://<cluster_manager>:8089 -replication_port 8080 -secret <secret_key>
    splunk restart
  3. Add search heads by running:
    splunk edit cluster-config -mode searchhead -site site4 -master_uri https://<cluster_manager>:8089 -replication_port 8080 -secret <secret_key>
    splunk restart
  4. Point forwarders at the new cluster by adding the following to their server.conf:
    [general]
    site=site4
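A minimal sketch of the server.conf change on the cluster manager described in step 1 (leave the other attributes in your existing [clustering] stanza as they are):

[clustering]
available_sites = site1,site2,site3,site4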

Step 4: In theory, then, you would next decommission site4, causing all its buckets to be dumped into the other three sites. Here’s the rub, though, and it’s a big one: the buckets created when site4 was cluster A won’t migrate off the site. Any buckets created when site4 was cluster A will continue to only exist on site4 and the cluster manager will never migrate them elsewhere into other sites.

This is clearly stated in the documentation for decommissioning a site of a multi-site indexer cluster:

If a peer on the decommissioned site contains any buckets that were created before the peer was part of a cluster, such buckets exist only on that peer and therefore will be lost when the site is decommissioned…Similarly, if the decommissioned site started out as a single-site cluster before it became part of a multisite cluster, any buckets that were created when it was a single-site cluster exist only on that site and will be lost when the site is decommissioned.

It is immaterial here that cluster A was once a multi-site indexer cluster. The condition is the same. They are buckets from a foreign cluster and the cluster manager will leave them segregated as long as they exist. So, the only way your end goal could be accomplished is if you could hold on to the hardware for one of the sites in cluster A until such time as all its buckets aged out to frozen and were, presumably, deleted. Then, you would be at a state where all the buckets in site4 were created after it became site4 and no buckets remained from when it was cluster A. Only at that time could you decommission site4 and reclaim the hardware without suffering any data loss.

Contact us for more help with merging multi-site indexer clusters:

Oracle Visual Builder Can Bring It All Together

How to Use Oracle Visual Builder to Create Fast, Custom Interfaces for Integrations, Process Workflows, and Custom Functionality

 

  By: Courtney Dooley  |  Technical Architect

 

Oracle Integration Cloud service is a robust Platform-as-a-Service (PaaS) offering that combines integration orchestration with business process workflows to provide custom solutions for any complex business routines. Visual Builder Cloud Service (VBCS) extends that functionality to create web and mobile applications and define service connections. Early versions of VBCS lacked some of the features to make implementation easy when connecting to process workflows and integrations, but recent updates have made this tool a must-have, bringing together all that the Oracle Integration Cloud Service provides.

Service Backends

The main menu of any Visual Builder Application displays the following:

  • – Mobile Applications
  • – Web Applications
  • – Services
  • – Business Objects
  • – Components
  • – Processes
  • – Source View

For both the Mobile and Web Applications, form development and data structure are available for customization and modification to meet the needs of any service. Those services are configured within the Services menu, and the Backends tab is where the Integration and Process services are defined.

Out of the box, these services are pre-configured and have both the Player (test mode) configuration and Default for production use. Additional servers can be configured if multiple instances are being used.

Additional Backend servers can be created and configured for use within Service Connections.

Service Connections

Creating a service connection is simple using the Create Service Connection wizard, which lets you choose from Catalog, Define by Specification, or Define by Endpoint for maximum flexibility in identifying the service connections you need. For most Integration connections, you will use the Catalog option. For Process services, there is a separate menu to identify those connections (outlined in the next section). The catalog connections do not include the full REST API library, so in cases where a specific endpoint is not available, the endpoint option can be used.


Processes

Adding a deployed process application to Visual Builder is as simple as clicking a button. When Process is pre-configured, all that’s left is to select which application to pull into the Visual Builder Application.

Built-in API service calls are available based on the functionality detected within the process application. Code snippets are even provided to help define what functionality and values are available.

All that’s left is to drag and drop the elements to the page.

Adding Input/Output to Service Pages

A blank application includes a base page that provides some basic layout design. To include elements that will display dynamic content from either Process or Integration, we need to add the elements matching the data we are expecting. In the case of Process Workflow Tasks, we can simply add a table to our page.

Once the table is added, the Add Data wizard will help identify what type of data should be added to the table, and then which elements should be included.

A query can also be identified when pulling in tasks to display a sub-set of the available tasks to display or limit the number of results returned. Once configured, the table will update to identify the data within.

Additional features can be added and configured, such as Add Task Actions for those tasks or a detail view for each task using the Add Detail Page wizard. Additional configuration and customization may be needed depending on what payload the task is expecting; however, these out-of-the-box features will get you most of the way there.

Adding Integration response data, although slightly different in implementation, is just as simple to configure. If an integration or process is complex or requires input, not to worry: Visual Builder has a large library of components and also allows custom development to meet any need.

Visual Builder brings together all that Oracle Integration Cloud has to offer and more. It gives businesses the power to quickly create custom user interfaces that display and interact with important data, all in one place.

Contact us for more tips and tricks on developing Oracle Visual Builder Cloud Service Applications!

Re-Index Raw Splunk Events to a New Index

      By: Zubair Rauf  |  Splunk Consultant, Team Lead

 

A few days ago, I came across a very rare use case in which a user had to reindex a specific subset of raw Splunk events into another index in their data. This was historical data and could not be easily routed to a new index at index-time.

After much deliberation on how to move this data over, we settled on the summary index method, using the collect command. This would enable us to search for the specific event we want and reindex them in a separate index.

When re-indexing raw events using the “collect” command, Splunk automatically assigns the source as search_id (for ad-hoc search) and saved search name (for scheduled searches), sourcetype as stash, and host as the Splunk server that runs the search. To change these, you can specify these values as parameters in the collect command. The method we were going to follow was simple – build the search to return the events we cared about, use the collect command with the “sourcetype,” “source,” and “host” values to get the original values of source, sourcetype, and host to show up in the new event.

To our dismay, this was not what happened, as anything added to those fields is treated as a literal string and doesn’t take the dynamic values of the field being referenced. For example, host = host would literally change the host value to “host” instead of the host field in the original event. We also discovered that when summarizing raw events, Splunk will not add orig_source, orig_sourcetype, and orig_index to the summarized/re-indexed events.

To solve this problem, I had to get creative, as there was no easy and direct way to do that. I chose props and transforms to solve my problem. This method is by no means perfect and only works with two types of events:

  • – Single-line plaintext events
  • – Single-line JSON events

Note: This method is custom and only works if all steps are followed properly. The transforms extract field values based on regex, therefore everything has to be set up with care to make sure the regex works as designed. This test was carried out on a single-instance Splunk server, but if you are doing it in a distributed environment, I will list the steps below on where to install props and transforms.

The method we employed was simple:

  1. Make sure the target index is created on the indexers.
  2. Create three new fields in search for source, sourcetype, and host.
  3. Append the new fields to the end of the raw event (for JSON, we had to make sure they were part of the JSON blob).
  4. Create Transforms to extract the values of those fields.
  5. Use props to apply those transforms on the source.

I will outline the process I used for both types of events using the _internal and _introspection indexes, in which you can find both single-line plain text events and single-line JSON events.

Adding the Props and Transforms

The best way to add the props and transforms is to package them up in a separate app and push them out to the following servers:

  1. Search Head running the search to collect/re-index the data in a new index (If using a cluster, use the deployer to push out the configurations).
  2. Indexers that will be ingesting the new data (If using a clustered environment, use the Indexer cluster Manager Node, previously Master Server/Cluster Master to push out the configuration).

Splunk TA

I created the following app and pushed it out to my Splunk environment:

TA-collect-raw-events/
└── local
    ├── props.conf
    └── transforms.conf

The files included the following settings:

Transforms.conf

[setDynamicSource]
FORMAT = source::$1
REGEX = ^.*myDynamicSource=\"([^\"]+)\"
DEST_KEY= MetaData:Source

[setDynamicSourcetype]
FORMAT = sourcetype::$1
REGEX = ^.*myDynamicSourcetype=\"([^\"]+)\"
DEST_KEY= MetaData:Sourcetype

[setDynamicHost]
FORMAT = host::$1
REGEX = ^.*myDynamicHost=\"([^\"]+)\"
DEST_KEY= MetaData:Host

[setDynamicSourceJSON]
FORMAT = source::$1
REGEX = ^.*\"myDynamicSourceJSON\":\"([^\"]+)\"
DEST_KEY= MetaData:Source

[setDynamicSourcetypeJSON]
FORMAT = sourcetype::$1
REGEX = ^.*\"myDynamicSourcetypeJSON\":\"([^\"]+)\"
DEST_KEY= MetaData:Sourcetype

[setDynamicHostJSON]
FORMAT = host::$1
REGEX = ^.*\"myDynamicHostJSON\":\"([^\"]+)\"
DEST_KEY= MetaData:Host

Props.conf

[source::myDynamicSource]
TRANSFORMS-set_source = setDynamicSource, setDynamicSourcetype, setDynamicHost

[source::myDynamicSourceJSON]
TRANSFORMS-set_source = setDynamicSourceJSON, setDynamicSourcetypeJSON, setDynamicHostJSON

Once the TA is successfully deployed on the Indexers and Search Heads, you can use the following searches to test this solution.

Single-line Plaintext Events

The easiest way to test this is by using the _internal, splunkd.log data as it is always generating when your Splunk instance is running. I used the following search to take ten sample events and re-index them using the metadata of the original event.

index=_internal earliest=-10m@m latest=now
| head 10
| eval myDynamicSource= source
| eval myDynamicSourcetype= sourcetype
| eval myDynamicHost= host
| eval _raw = _raw." myDynamicSource=\"".myDynamicSource."\" myDynamicSourcetype=\"".myDynamicSourcetype."\" myDynamicHost=\"".myDynamicHost."\""
| collect testmode=true index=collect_test source="myDynamicSource" sourcetype="myDynamicSourcetype" host="myDynamicHost"

Note: Set testmode=false when you want to actually index the new data; testmode=true only tests your search so you can verify that it works.

The search appends the metadata fields (created with eval) to each newly indexed event. This method will consume license to index this data again, as the sourcetype is not stash.

Single-line JSON Events

To test JSON events, I am using the Splunk introspection logs from the _introspection index. This search also extracts ten desired events and re-indexes them in the new index. This search inserts metadata fields into the JSON event:

index=_introspection sourcetype=splunk_disk_objects earliest=-10m@m latest=now
| eval myDynamicSourceJSON=source
| eval myDynamicSourcetypeJSON=sourcetype
| eval myDynamicHostJSON=host
| rex mode=sed "s/.$//g"
| eval _raw = _raw.",\"myDynamicSourceJSON\":\"".myDynamicSourceJSON."\",\"myDynamicSourcetypeJSON\":\"".myDynamicSourcetypeJSON."\",\"myDynamicHostJSON\":\"".myDynamicHostJSON."\"}"
| collect testmode=true index=collect_test source="myDynamicSourceJSON" sourcetype="myDynamicSourcetypeJSON" host="myDynamicHostJSON"

The re-indexed events do not pretty-print as JSON the way the original events do, but all fields are extracted in the search just as they are for the raw Splunk events.

The events are re-indexed into the new index, and with the | spath command, all the fields from the JSON are extracted and visible under Interesting Fields.

One thing to note here is that this is not a mass re-indexing solution. This is good for a very specific use case where there are not a lot of variables involved.

To learn more about this or if you need help with implementing a custom solution like this, please feel free to reach out to us.

Splunk Upgrade Script

      By: Chris Winarski  |  Splunk Consultant

 

We have all run into occasional difficult situations when upgrading Splunk environments, but have you ever had to upgrade many boxes all at once? The script below may help with that, and if properly tailored to your environmental settings, can ease the pain of Splunk upgrades across vast environments. I have put the script in the plainest terms possible and added comments to increase readability so that even the most inexperienced Splunk consultant can create a successful Splunk upgrade deployment.

The script is separated into three parts, only one of which requires your input and customization for the script to function properly. The variables are the most important part, as they describe what your environment looks like. The rest of the script should not need updating (other than customization for your environment), but feel free to omit anything you don't wish to include. The script-execution section may not need any changes, but if your devices do not use keys, I have left in the line "#ssh -t "$i" "$REMOTE_UPGRADE_SCRIPT""; just remove its pound sign and put a pound sign in front of the line above it.

 

Splunk Upgrade Script

#!/usr/bin/env bash

### ========================================== ###
###                  VARIABLES                 ###
### ========================================== ###

HOST_FILE="hostlist" #Create a file on the local instance where you run this from called "hostlist" with hosts, *IMPORTANT - ONLY 1 host per line

SPLUNK_USER="splunk" #Splunk user, this can vary from environment to environment, however, I have populated the default
SPLUNK_GROUP="splunk" #Splunk group, used below when resetting ownership with chown

BACKUP_LOCATION="/tmp" #Where you would like the backup of your splunk is saved, /tmp is the chosen default

BACKUP_NAME="etc.bkp.tgz" #The backup file (this is an arbitrary name), however, keep the .tgz format for the purpose of this script

DOWNLOADED_FILE="splunk_upgrade_download.tgz" #What your download upgrade is going to be called, you can change this, however, keep it .tgz file format

SPLUNK_HOME="/opt/splunk" #Default home directory, again, change per your environment needs

PRIVATE_KEY_PATH="$HOME/key" #This is the path to the private key whose matching public key is on your target hosts ($HOME is used because ~ would not expand inside quotes)

BASE_FOLDER="/opt" #This is the base folder in which Splunk resides. It is also where the downloaded upgrade is saved and untarred; /opt is the default, and best practice is to install Splunk here

SSH_USER="ec2-user" #This is the user on your target machine which has sudo permissions **Very Important**

#1. Go to https://www.splunk.com/en_us/download/previous-releases.html and click on your operating system, and what version of splunk you will be upgrading to
#2. Click "Download Now" for the .tgz.
#3. It will redirect you to another page and in the upper right you'll see a block with "Download via Command Line (wget)". Click that and copy the URL in between the ' ' and starting with https://
URL="'https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.2.0&product=splunk&filename=splunk-8.2.0-e053ef3c985f-Linux-x86_64.tgz&wget=true'"

### ========================================== ###
###            REMOTE UPGRADE SCRIPT           ###
### ========================================== ###

REMOTE_UPGRADE_SCRIPT="
#Stopping Splunk as Splunk user..
sudo -u $SPLUNK_USER $SPLUNK_HOME/bin/splunk stop

#Creating Backup of /opt/splunk/etc and placing it into your designated backup location with name you choose above
sudo -u $SPLUNK_USER tar -czvf $BACKUP_LOCATION/$BACKUP_NAME $SPLUNK_HOME/etc

#Executing the download from Splunk of the upgrade version you choose above
sudo -u root wget -O $BASE_FOLDER/$DOWNLOADED_FILE $URL

#Extract the downloaded upgrade over the previously installed Splunk
cd $BASE_FOLDER
sudo -u root tar -xvzf $DOWNLOADED_FILE

#Give the changes ownership to the splunk user
sudo -u root chown -R $SPLUNK_USER:$SPLUNK_GROUP $SPLUNK_HOME

#Launch splunk and complete the upgrade
sudo -u $SPLUNK_USER $SPLUNK_HOME/bin/splunk start --accept-license --answer-yes --no-prompt
echo ""Splunk has been upgraded""

#cleaning up downloaded file
sudo -u root rm -rf $DOWNLOADED_FILE
"

### ========================================== ###
###              SCRIPT EXECUTION              ###
### ========================================== ###

#The remote script above is executed below and will go through your hostlist file and host by host create a backup and upgrade each splunk instance.

echo "In 5 seconds, will run the following script on each remote host:"
echo
echo "===================="
echo "$REMOTE_UPGRADE_SCRIPT"
echo "===================="
echo
sleep 5
echo "Reading host logins from $HOST_FILE"
echo
echo "Starting."
for i in `cat "$HOST_FILE"`; do
if [ -z "$i" ]; then
continue;
fi
echo "---------------------------"
echo "Installing to $i"
ssh -i $PRIVATE_KEY_PATH -t "$SSH_USER@$i" "$REMOTE_UPGRADE_SCRIPT"
#ssh -t "$i" "$REMOTE_UPGRADE_SCRIPT"
done
echo "---------------------------"
echo "Done"

 

If you have any questions or concerns regarding the script or just don’t feel quite as comfortable with Splunk upgrades, feel free to contact us and we’ll be happy to lend a helping hand.