Version Source Control for your Splunk Environment

As Splunk environments grow within an organization, the need for source control grows with them. It is good practice to manage your Splunk configuration with the same widely available, enterprise-grade source control tools used elsewhere in the organization.

There are many version control systems (VCS) available, but the most widely used is the open source Git, which has proven to be a very powerful tool for distributed source control. Using Git, multiple Splunk admins can work in their own local repositories and share their changes separately.

To take the conversation further, I would separate the need for version control into two major segments:

  • User Applications
  • Administrative Applications

I have broken the applications into two segments to make management easier for Splunk admins. The user applications consist of the search artifacts that are built and developed as use cases evolve, and they change often. The administrative applications I would classify as those used mostly to set up and deploy Splunk, such as TAs and other deployment apps. These applications rarely change once set up, unless new data sources are on-boarded, there are significant changes to the architecture, and so on.

In the context of this blog post, we will focus on the administrative applications. These apps are the backbone of your Splunk deployment and should be changed cautiously to make sure there is no downtime in the environment. Careless changes to these files can do irreparable damage to the way data is indexed into Splunk and cause loss of indexed events, especially when changing sourcetypes.

As I already mentioned, there are numerous flavors of source control, and depending on your taste you can use any of them. If you are starting fresh with source control, Git is easy to set up, and you can use it with GitHub or Atlassian Bitbucket. Both of these services can get you started in a matter of minutes, letting you create repositories and set up source control for your distributed Splunk environment.
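
As a minimal sketch, putting an existing administrative app under Git and pushing it to a newly created remote repository might look like the following. The app name, organization, and remote URL here are placeholders for illustration, not part of any standard layout:

    # Hypothetical TA placed under version control (names are placeholders)
    cd $SPLUNK_HOME/etc/deployment-apps/my_admin_ta
    git init
    git add .
    git commit -m "Initial commit of my_admin_ta"
    # Point the local repo at the remote created in GitHub or Bitbucket
    git remote add origin git@github.com:your-org/my_admin_ta.git
    git push -u origin master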

The Git server hosts the master repos for all the administrative apps in the Splunk environment. Admins who need to make edits can do so in one of two ways:

  • Edit the master directly.
  • Create a local clone of the master, make the required edits, commit them to a local branch, and then push it to the remote repo.

Ideally, no one should edit the master branch directly; this reduces the risk of unwanted changes to the master files. All admins should make their edits in local branches, and once the edits are approved, they should be merged into master.
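
A minimal sketch of that workflow, assuming a hypothetical repo, app, and branch name, could look like this:

    # Clone the master repo and work in a local branch, not in master
    git clone git@github.com:your-org/splunk-master-apps.git
    cd splunk-master-apps
    git checkout -b fix-sourcetype-props
    # Make the required edits to the administrative app
    vi my_admin_ta/default/props.conf
    git add my_admin_ta/default/props.conf
    git commit -m "Adjust TIME_FORMAT for a sourcetype"
    # Push the branch and open a pull request; merge to master only after approval
    git push -u origin fix-sourcetype-props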

There should be three master repos, each containing the respective apps and TAs for one of the following servers:

  • Cluster Master for Indexers
  • Deployer for Search Head Cluster
  • Deployment Server for Forwarders

To deploy the repos to the servers, you can use Git hooks or tie your Git deployment back into your Puppet or Chef environment. This is at your discretion and depends on how comfortable you are with distributed deployment in your organization. The repos should be deployed to the following directories (see the sketch after the list):

  • Cluster Master to $SPLUNK_HOME/etc/master-apps/
  • Deployer to $SPLUNK_HOME/etc/shcluster/apps
  • Deployment Server to $SPLUNK_HOME/etc/deployment-apps
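
A minimal sketch, assuming each server simply keeps a clone of its master repo in the target directory (a post-receive hook or your Puppet/Chef tooling could perform the same step automatically):

    # On the Cluster Master
    cd $SPLUNK_HOME/etc/master-apps && git pull origin master

    # On the Deployer
    cd $SPLUNK_HOME/etc/shcluster/apps && git pull origin master

    # On the Deployment Server
    cd $SPLUNK_HOME/etc/deployment-apps && git pull origin master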

After the updated repos are deployed to their respective directories, you can push them out to the client nodes using Splunk's own commands.
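
A minimal sketch of those commands, with placeholder hostnames and credentials, might look like this:

    # On the Cluster Master: push the master-apps bundle to the indexer peers
    $SPLUNK_HOME/bin/splunk apply cluster-bundle --answer-yes

    # On the Deployer: push the shcluster/apps bundle to the search head cluster members
    $SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme

    # On the Deployment Server: reload so deployment clients pick up the new apps
    $SPLUNK_HOME/bin/splunk reload deploy-server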

If you are interested in more information, please reach out to us and someone will get in touch with you to discuss options on how TekStream can help you manage your Splunk environment.


TekStream AXF 12c Upgrade Special Components

TekStream’s extension to Oracle’s Application eXtension Framework (AXF) provides enhanced customizations surrounding invoice reporting using Business Activity Monitoring (BAM), auditing of user actions, and QuikTrace of BPEL process instances. With the introduction of the 12c upgrade, available with release 12.2.1.3, TekStream discovered that two of its reporting components were heavily impacted by paradigm changes in 12c. TekStream has gone through multiple iterations of the 12c upgrade and has incorporated the necessary reporting enhancements to carry the functionality of the 11g release over to its 12c counterpart. This paper highlights the enhancements that bring the package in line with 12c.

BAM Dashboards:

The Business Activity Monitoring component of the SOA solution was significantly improved in the 12c release, so significantly, in fact, that it precluded a direct upgrade path from 11g. The official upgrade procedures instruct solutions incorporating this component to keep an 11g instance standing for BAM and to introduce a 12c version gradually as the nuances of the new release are learned and alternatives worked out. In addition to a different dashboard component, the layered introduction of ‘Business Queries’ and ‘Business Views’ adds new elements to the solution that have to be solved before a dashboard can be constructed. TekStream has done the necessary homework to bring the 11g-based system directly online during the upgrade within a new InvoiceAnalytics package, saving our customers the effort of introducing an interim solution during the process. With TekStream’s 12c AXF upgrade we provide replacement dashboards and the new 12c objects introduced with the release, as well as an upgrade of the 11g BAM data. Clients regain functionality (albeit with new, upgraded BAM dashboards and underlying components) immediately after going online with 12c. They will have direct replacements for the ‘Assigned Invoices’, ‘Invoice Aging’, and ‘Invoices’ reports and can use these with all of the 12c enhancements.

QuikTrace:

TekStream’s Audit and Reporting package ships with a component called QuikTrace which, in addition to global Worklist views that locate all active invoices, also provides a technical tracing capability not available in AXF. Technical staff can use key data points to find a record within the SOA composite execution stack for records that are not active in a worklist and therefore not traceable via the global Worklist view. The capability was based on the 11g primitive ‘ora:setCompositeInstanceTitle’, which, at the individual composite level, populated the title field that was then searchable via Enterprise Manager (EM). The Audit and Reporting package allows searching based on Imaging Document ID, Invoice Number, Purchase Order Number, Supplier Name, and a customizable business key.

With 12c, Oracle has changed this paradigm in favor of a more efficient flow-trace primitive, ‘oraext:setFlowInstanceTitle’, which moves the search element to a single SCA_FLOW_INSTANCE.title element per composite flow. To maintain the functionality of the 11g system, it is necessary to encapsulate all of the designed search elements in that single location. TekStream has incorporated this into the Audit and Reporting package to offer the same functionality to its client base.

Upgrading AXF Clients:

For AXF clients with the reporting package, we have the elements to bring you back online with the features you are accustomed to. These will be available as soon as you bring AXF back up on 12c.

For AXF clients without the reporting package, be assured that TekStream can get you to a 12c Audit and Reporting footing as well. We understand the 12c data and can pull together the data objects for functional dashboards, and we can introduce the QuikTrace touchpoints into the 12c-based composites to enable that capability.

Want to learn more about Invoice Reporting using Business Activity Monitor? Contact us today!
