[Solved] Resizing X11 Window in MobaXterm

By: Brandon Prasnicki | Senior Solutions Cloud Architect

Recently on a project, I had an issue where I was running an Oracle WebCenter graphical user interface installer on a Linux host using MobaXterm from my Windows machine.  Almost every time I launched the installer, it generated an X11 window in MobaXterm that was so small I couldn’t see the entire screen.  To make matters worse, the installer didn’t have scroll bars in place to navigate around and click on critical buttons like ‘Next’ or ‘OK’.  What should have been a simple activity was eating up far too much of my time. The issue also wasn’t consistent; occasionally, after multiple attempts, it would magically work without explanation.

After hours of googling and trying several X11 session variable changes without finding a resolution, I stumbled across an option that consistently rendered the installer window properly.

It’s a simple fix, but for me, it was a hard one to find.  The fix was to move the X11 session to its own single container window.

To find this option I navigated to Settings -> Configuration.


Here on the X11 tab, you will see the drop-down option highlighted below. Select “Windowed mode: X11 server constrained to a single container window”.

MobaXterm Configuration Window

Now when MobaXterm starts, you will see a separate display dedicated to the X11 window:

MobaXterm separate window X11

Want to learn more about Oracle WebCenter or MobaXterm? Contact us today!

Infosec App on Splunk Cloud – Part 1: Installing and Configuring

By: Khristian Pena | Team Lead

Are you looking to introduce security use cases to your Splunk Cloud deployment? If so, there’s a free app that is your entry point to continuous monitoring and security investigations. The InfoSec app for Splunk is an entry-level security solution powered by the Splunk platform and designed to address the most common security use cases. You can also leverage the InfoSec app for a variety of advanced threat detection use cases and expand them using other security apps and add-ons that you can download from Splunkbase.

The following free Add-ons must be installed before you can start using InfoSec App on Splunk:

• Splunk Common Information Model (CIM): https://splunkbase.splunk.com/app/1621/

• Punchcard visualization: https://splunkbase.splunk.com/app/3129/

• Force Directed visualization: https://splunkbase.splunk.com/app/3767/ (use add-on version 3.0.1 in Splunk Cloud)

• Lookup File Editor: https://splunkbase.splunk.com/app/1724/ (new requirement starting from InfoSec v1.5)

• Sankey Diagram visualization: https://splunkbase.splunk.com/app/3112/ (a new optional prerequisite for the experimental VPN Access dashboard starting from v1.5.3)

InfoSec App – https://splunkbase.splunk.com/app/4240/#/details

How to install apps & add-ons for the InfoSec App on Splunk Cloud

From your home screen, you can download and install apps and add-ons by selecting the gear icon on the left side of the screen above your list of apps.

  1. You can access public apps from Splunkbase by selecting the Browse more apps option on the following screen.
  2. Find your app or add-on, then click Install
  3. Enter your Splunk.com login credentials and select the checkbox to accept the app license terms
  4. Select Login and Install
    a. The app/add-on is downloaded from Splunkbase and installed on your deployment.

Note: You can install most Splunk apps on Splunk Cloud in a self-service manner without assistance from Splunk Support, except for customers on the Classic Cloud Experience designation. To determine your Splunk Cloud Platform experience, select Support & Services > About in the top right-hand corner of Splunk Cloud Web.  In the About panel, under Splunk Cloud, you will find your experience (Classic or Victoria).

Data Requirements & Prerequisites

The following Data Models must contain data and be accelerated (examples included):

Splunk Overview of Data Models and Acceleration:
https://docs.splunk.com/Documentation/SplunkCloud/8.2.2202/Knowledge/Aboutdatamodels

• Authentication

    • Active Directory Data from the Windows Logs on Domain Controllers or O365
    • Linux Auth Logs
    • Cloud (AWS, Azure, or GCP) Authentication Data
    • Okta
    • Duo
    • Cisco ISE

•  Change

    • Windows Event Logging
    • S3 Audit Logs
    • ServiceNow
    • Puppet

• Intrusion Detection

    • Palo Alto Networks
    • Cisco FirePOWER
    • Check Point

• Malware

    • Crowdstrike
    • Symantec
    • McAfee
    • Trend Micro
    • Sophos

• Network_Sessions

    • Zscaler
    • Palo Alto Networks
    • Cisco ASA
    • AWS VPC Flow
    • Cisco iOS
    • Juniper
    • Netflow

• Network_Traffic

    • Palo Alto
    • Zscaler
    • SolarWinds

•  Endpoint

    • Crowdstrike Falcon Insight
    • Microsoft Endpoint Manager
    • Cisco Anyconnect

•  Web

    • IIS
    • Fortinet
    • WebSense
    • Apache

All data used by the InfoSec app must be Common Information Model (CIM)-compliant. The easiest way to accomplish this is to onboard your data with the appropriate add-ons, which normalize and tag it so it maps to the data models. Custom solutions using tags and event types exist for data that is not being mapped to the data models, but those are not covered in this guide.
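
Before relying on the dashboards, it is worth confirming that each required data model is actually populated and accelerated. As a quick sanity check (a minimal sketch; swap in whichever data model you are validating), a search like the following should return non-zero counts:

| tstats summariesonly=true count from datamodel=Authentication by sourcetype

Because summariesonly=true reads only the accelerated summaries, an empty result means either the data is not being mapped into that data model or acceleration has not been enabled or has not yet completed.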

This concludes part 1 of Installing and Configuring InfoSec app on Splunk Cloud. Part 2 will be released soon and will contain additional information about leveraging the app and the dashboards, reports, and alerts available. It will also include basic troubleshooting tips and best practices when utilizing the app.

Want to learn more about introducing security use cases to your Splunk Cloud deployment?
Contact us today!

 

Restricting Access Using the Search Filter Generator: Drawbacks and Limitations

By: Nate Hufnagel | Splunk Consultant

Role-Based Access Control, or RBAC, is the paradigm for restricting access to authorized users in Splunk.  These practices are commonly used to limit what data is visible to end users, depending on their role in an organization.  Within Splunk, an admin has precise control over a user’s privileges: which indexes they can search, which alert actions they can create, which knowledge objects they can edit, and many other Splunk behaviors.  With search filters, you can get even more granular about what information a user can see.  These are custom searches, processed before each user’s search query, that “filter” which results a given user can see.  While this method can be useful for granting access to specific data within an index, it is not the most efficient way to segregate data, and it is not recommended as the best way to control who sees what in any Splunk deployment.  Here we’ll explore why restricting access using the search filter generator doesn’t always work as we’d expect.

User Privileges in the Splunk Web UI (Enterprise or Cloud)

Before diving into the search filter mechanics, let’s look at its context in defining user privileges.  There are five categories within the web UI that determine a user’s privileges: inheritance, capabilities, indexes, restrictions, and resources.  These can be found by navigating to Settings > Roles and clicking on a role name.

1. Inheritance

a. Where a new role can be given the same capabilities as another. Here you can select from previously created roles and “inherit” their capabilities for a new role.

2. Capabilities

a. Where an admin can select what specific capabilities a role has, like edit_https or run_debug_commands to name a few. This affords very granular control over what a certain user can do within Splunk.

3. Indexes

a. Defines what indexes are searchable by a user. Restricting access on an index basis is a best practice in Splunk and should be the primary way to implement RBAC. Here an admin can also set what indexes a user searches by default when one is not explicitly defined in a search.

4. Restrictions

a. This is where the search filter lives. Here, there are two options for creating a search filter: by using the search filter generator, or by typing directly into the search filter.  We will explore these functions in more detail later.

5. Resources

a. Here, an admin can set resource usage limits for a role. This can include only allowing searches within a certain app, restricting the number of search jobs a role can run at the same time, and even limiting the amount of disk space a search job can use.

Now that we have some context, let’s dive into the search filter behavior and why it doesn’t always work as expected. Here we can see the search filter as it appears in Splunk:


On the left of the screen, we see the search filter generator.  As the name suggests, this tool pulls indexed fields and their values and creates a filter that gets appended to any search run by a user assigned to the role.  At the top, there is a drop-down option that controls how long Splunk will spend looking for a certain field or value.  By default, the search filter is populated with a wildcard.  We can preview what information this role can see by clicking “Preview search filter results”:

By default, users can see all fields and indexes visible to them as defined by their role.  Next, let’s try to filter events a little further by excluding results where sourcetype=splunkd.  We’ll start by deleting the wildcard “*” in the search filter.

In this example, we type “sourcetype” into the Indexed fields box, then “splunkd” into the “Values” box.  Concatenation is greyed out here because the search filter is currently empty. Clicking “Add to search filter” adds our newly created filter as (sourcetype::splunkd).  But how can we know if the filter we created works as expected?

 

We can preview what information this role can see by clicking Preview search filter results.

As expected, only events with a source type of splunkd are returned.  To exclude events with the same settings, add NOT to the beginning of the filter.


There are some drawbacks to using an SPL filter this way.  The results that come back to a user can depend on multiple factors and aren’t always what you’d expect. Here are a few caveats to keep in mind when using the search filter, and why it should generally be avoided:

1. Using the search filter generator alone can create inefficient SPL

a. Let’s take another look at the example above. If we wanted to exclude splunkd events from a specific source using the search filter generator, we would append the previous filter using the same method as before:


The results come back as expected; however, using multiple NOT statements in this manner is inefficient and could cause serious performance issues.

2. Roles can inherit index restrictions and capabilities from other roles, but not their search filters

a. For example, let’s say our standard power user role had some search restrictions assigned to it. If we wanted to create another role that inherited the power user privileges plus some additional capabilities while maintaining the search filter, we would need to copy the search filter and apply it to the new role.

3. Certain configurations in authorize.conf can alter how a search filter behaves

a. Within authorize.conf, there is a parameter called “srchFilterSelecting” that controls how the search filter behaves (see the authorize.conf sketch after this list). It is set to true by default at a global level, which means that search filters select the results the user sees.  If set to false, they eliminate those results instead.  This can cause confusion even in a simple environment, especially if different apps assume one setting or another.

b. In the examples above, we executed our search filters with the default setting of srchFilterSelecting=true. If we set it to false, all of our filters would behave the opposite of what is expected.  So even though we successfully excluded certain events using a NOT statement, Splunk would now interpret the opposite of this statement and only include the results of our search filter.

4. The search filter only works on indexed fields

a. Any fields that are extracted at search time will not be effective in the search filter, even if they are entered manually. This limits the flexibility of the search filter and presents an issue with no clear workaround.  We could move some search-time extractions to index time; however, this is not feasible in the long term, as having too many index-time field extractions can negatively impact indexer performance.
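
For reference, the role settings you build in the web UI are written to authorize.conf under the role’s stanza. A minimal sketch (the role name and index list below are made up for illustration) looks like this:

[role_contractor]
# indexes the role may search (best practice: restrict here first)
srchIndexesAllowed = main;web_logs
# the search filter built above; appended to every search this role runs
srchFilter = NOT (sourcetype::splunkd)
# true (default) = the filter selects results; false = the filter eliminates them
srchFilterSelecting = true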

There are better solutions when it comes to creating scalable role-based access controls in Splunk.  The following documents outline when and how to implement these restrictions in detail and include some common use cases for handling sensitive data in Splunk.

1. Add and edit roles

2. About users and roles

Contact us to learn how Tekstream can help you do more with your data
and achieve better business outcomes.

Customizing Content Type Fields in Oracle Content Management Cloud

How to Create Dynamic or Dependent Select List Fields for Content Contribution

By Courtney Dooley | Technical Architect

Oracle Content Management Cloud allows for text fields to be a select list of options but does not allow dynamic population from an external source without customization.  Outlined below are two customization methods available, and the benefits and tips for using each when customizing content type fields.

Custom Field Editors

Within the Content Management Component Development tools is an option to “Create Content Field Editor”.  When creating the component, the data type and handle multiple values fields are required; however, the resulting component is not altered by those selections.  Those selections only drive where the component will be available.  Without any customization, the component is not ready for use.

Create Content Field Editor Example

The component is created with an “assets” folder containing two files: view.html and edit.html.  Modifications are needed for edit.html; view.html can typically stay as the default.  Out of the box, edit.html replaces the text field with a simple input field, and JavaScript functions save data to and recall data from the asset as the field is changed.

To create the select list with options supplied from an external source, we need to update the html and associated CSS to match the system fields, then add a JavaScript function to call and create the options for that select list.

On load, the getCountries function is called, and the drop-down is populated from the results.  Additional HTML tags and functions can be configured for validation prior to saving the selected value to the asset.

Customizing Content Type Fields Create Content Item Example

Benefits of using Form Editors:

Multiple fields can be created within a single editor.  System or other custom fields can be accessed, and their values used within JavaScript functions.  When modifications are made to the content type definition, the custom editor does not require changes unless the field, or its custom functionality, is directly impacted by the change.

Tips when using Form Editors:

Any custom development in a cloud environment needs to be validated after upgrades and patches to make sure recent changes to the environment do not negatively impact the custom field editors.  It’s also important to validate field values properly to avoid frustrating errors that only appear after a save has been requested, causing lost work.

Custom Form Editors

Unlike the custom field editors, Custom Form Editors replace the system form entirely.  This allows the content saved to the asset to be structured entirely differently than how the entry form collects it.  For example, the screenshots below show a “Product” content type with multiple fields describing the product, but a single search and drop-down selection field is presented on the creation page.


On change, that selection provides values for all of the asset fields, including Name, Description, and properties such as language and slug.  Those values never have to be displayed, but they could be populated and shown upon creation.


Benefits of using Content Form Editors:

The form fields, functionality, layout, and even styles are all editable using this component type.  Libraries can be referenced to include 3rd-party functions.  Multiple system functions allow for the use of Content Pickers and Media Pickers.

Tips for using Content Form Editors:

At the time of writing, system-supported calendar functionality for date fields is not available, so additional date picker functionality will need to be implemented.  This custom form editor should be developed with a function that creates the default type fields dynamically from the content type definition; otherwise, any change to the content type fields will require modifications to the custom form.  A custom form can easily become a bottleneck for content type changes if not designed with this in mind.

As you can see, Oracle Content Management Cloud has flexible, robust standard forms and a range of options to expand the tool’s capabilities by customizing content type fields.

Contact Us for more tips and tricks for developing custom Content Contribution Forms!

 

From Email Attachments to Organized Approved Content:

How Oracle Content Management Cloud Can Retrieve Email Attachments, Track Approvals, and Organize Them as Searchable Assets

By Courtney Dooley | Technical Architect

Are you tired of getting emails with file attachments that you have to manually move to the cloud for collaboration?  Oracle Content Management Cloud offers out-of-the-box email capture capabilities that allow attachments to be saved and organized for review and approval processing.

Preparing the Attachments Storage Location

Email attachments can be stored within Oracle Content Management as files with limited metadata, or as complex assets with metadata and attachment references.  Depending on the type of content you require, the below storage options are available.

1. Asset Repository

Content that may be published to a website or external source using publishing channels will need to be created within an Asset Repository.

2. Business Repository

Content that requires Approval but not publication should use a Business Repository for those attachments.

Both repository types support complex assets and will need security, content types, and language policies configured prior to creating the capture procedure.

3. Document Folder

A single folder is allowed as the destination for each commit profile; multiple profiles can ensure attachments are stored accurately.  To group attachments, batch folders can be created in the destination folder.  Approval workflows for folders and files require integration with Oracle Integration Cloud.

Getting the Attachment

To import emails, we need to first create a procedure under the Administration Capture page.

1. Security

After creating a new procedure, you will need to configure security for the user or group that will be executing the import.

This user or group will need to have access to the chosen Folder or Repository.

2. Metadata

Within the metadata tab, fields can be defined.  There is a limited amount of data available for each email.  The message body is only available as a text or EML file.

Other types of metadata definitions can be defined but are not used for this case.

3. Classification

Batch statuses, document profiles, and attachment types are all defined under Classifications.  Attachment Types are assigned to document profiles and used within processor jobs.

4. Capture

Client profiles can be created and configured for additional users.  These settings control how documents are created, separated in batches, and what metadata fields are available.

The Import Processor Jobs define the source where files will be captured.  The settings for each job are outlined below:

  • General settings: batch prefix, import source (email)
  • Image Import Settings
  • Document Profile with Metadata Mapping
  • Import Source Settings
    ○ Email account connection settings
    ○ Message filtering by folder, from address, subject, and/or body
    ○ Message body and attachments capture options
    ○ Post-processing email message handling
  • Post Processing (processor or commit)

Inbox rules and folders should be configured before completing the import setup, and the type of email account must be one of the following:

  • Standard IMAP (Basic)
  • Microsoft Exchange Web Services (Basic or OAuth)
  • Gmail (OAuth)

5. Processing

Processing jobs can be configured for conversions to PDF or TIFF, XML Transformations, Character Recognition, or External Processors which can be configured to push or pull from Capture.  Metadata Field conditional value assignment is also configurable.

6. Commit

The commit profile defines how the files will be stored within OCM, selecting one of the three storage options outlined previously.  Within the Commit Driver Settings, a parent content type can be selected to include additional details from the email and link attachments.

Reviewing the New Asset

1. Simple Workflow

Oracle Content Management includes a single approval process flow which allows contributors to submit for review, and repository managers to approve or reject. Collaboration via conversations is also available.

2. Extending Workflow

Oracle Integration Cloud Service has a built-in integration with Oracle Content Management that allows robust approval workflows and tight integration with all Assets, Files, and Folders.  This includes role-based approvals, conditional progression, dynamic states, custom notifications, and integrations.

As you can see, Oracle Content Management Cloud offers many ways to keep your content organized and centralized for collaboration and approval.

Contact Us for more tips and tricks for Oracle Content Management Development!

Running Real-Time Searches: To Search or Not to Search

By David Allen | Senior Splunk Consultant

One of the characteristics of modern life seems to be that we are moving at an ever-increasing rate, regardless of turbulence or obstacles. We have come to expect things to be faster than they were even a week ago: we expect computers to run faster and cars to go faster as we chase more speed and instant gratification.  As a result, one would generally assume that getting data faster by running real-time searches in Splunk would also be a good thing. Well, not so fast; let’s slow down a bit and think this through.

In this blog, we will discuss the pros and cons of running real-time searches in Splunk and what a best practice search scenario should look like.

To start off, we need to discuss the type of alert response system you will be using. Is your response system completely automated, or are humans responding? In almost all cases there will be some form of human decision-making and reaction time involved. This is important because once you add in human delays, you get a more realistic idea of the latency before analysts even start to work on your alerts.  Often, given these delays, it is not necessary to put a strain on your Splunk infrastructure just to get your tickets a few seconds faster.

For instance, consider the amount of time it takes for one of your experienced analysts to receive, mentally process, and react to various alerts. Generally, if the alert occurs during regular work hours, the reaction time may be on the order of 5-10 minutes. If you are not staffed for 24/7 analyst support, there are after-hours delays, or delays when analysts take breaks periodically throughout the workday. Then there are the normal weekend delays, which could be an hour or two.  You get the idea.

In many cases there may be substantial delays in reacting to an alert, and if the alerts came in a few minutes later the overall reaction time would not change significantly. Weighing this against the significant impact on Splunk infrastructure performance, in almost all cases it is a much better approach to use indexed real-time alerts.

Now let’s dig into the impact on the Splunk infrastructure when running real-time searches. Real-time searches need to run all the time in the background and, as a result, will consume one core on the Splunk search head and one core on EACH of your Splunk indexers for as long as the search is running. As your cores get consumed by more and more concurrent real-time searches, overall Splunk infrastructure performance comes crashing down.

For example, if your search head and indexers have 12 cores each and you have 10 continuously running real-time searches, this leaves 2 cores for all the remaining work the environment must do. So, if you have a dashboard with 10 panels and it takes one core per panel to run, you only have 2 cores remaining and performance will drop to one-fifth of what it could be if all cores were available to run the dashboard.

Now compare this to a regular search running every 5 minutes that takes only 10 seconds to complete. This search consumes one core on every search head and indexer but for only 10 seconds. The results are the same, but this search consumes roughly 3% of the processing power of the real-time search.

Hopefully, by now you can see that real-time searches are not to be used carelessly; they belong in the hands of Splunk professionals and only for short-lived, ad hoc use cases.

Let’s look at a couple of ways to protect your Splunk infrastructure from Splunk users hogging precious system resources. The following settings control most of the real-time searching capabilities:

rtsearch – This setting enables the user to do real-time searches

schedule_rtsearch – This setting enables the user to schedule real-time searches.

Remove Real-Time Search Capability for Certain Users

By default, rtsearch and schedule_rtsearch are enabled for the power role and are inherited by other roles. At a minimum, you can disable these settings for the power role so users with this role do not have access to any real-time searching capability. Be sure to also disable these settings for any future roles that you create.

The easiest way to do this is through the GUI. Go to Settings, then under the Users and Authentication section, select Roles.

Then at the Roles screen, select Edit for the power role, then select Capabilities.


From the capabilities screen, deselect the rtsearch and schedule_rtsearch settings and Save the updates.

Remember that the admin role inherits from the power role, so if you want your admins to keep real-time searching capabilities, you will need to enable these settings specifically for the admin role.
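
If you manage configurations with files rather than the GUI, the equivalent change can be sketched in authorize.conf (shown here for the power role; adjust the stanza name for other roles):

[role_power]
# remove the ability to run ad hoc real-time searches
rtsearch = disabled
# remove the ability to schedule real-time searches
schedule_rtsearch = disabled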

Remove Real-Time Search Capability from ALL Users
The easiest way to do this would be to use the GUI as described above and disable the rtsearch and schedule_rtsearch settings, but that would not prevent someone from easily re-enabling them later through the GUI.

A better way would be to disable the rtsearch and schedule_rtsearch settings using the CLI.

Remember that settings in the $SPLUNK_HOME/etc/system/local folder are system-wide and have a higher precedence than the same setting in any other folder. So, to turn off real-time searching for the entire system, you will need to disable the rtsearch and schedule_rtsearch settings in that folder. Here is how to do that.

Using the CLI, go to the $SPLUNK_HOME/etc/system/local folder on the search head (or your all-in-one box), open the authorize.conf file, add the two settings to the [default] stanza as shown, and restart Splunk.

[default]
rtsearch = disabled
schedule_rtsearch = disabled
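
After the restart, you can confirm which value actually takes effect across all configuration layers with btool, for example:

$SPLUNK_HOME/bin/splunk btool authorize list --debug | grep -E "rtsearch|schedule_rtsearch"

The --debug flag shows the file each effective setting comes from, which makes it easy to spot an app-level override.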

Indexed Real-Time Search
If you decide that you do not need up-to-the-second accuracy, you can get close to real-time search speed by running your real-time searches after the events are indexed, which greatly improves indexing performance. This runs the search like a historical search but continually updates it with new events as they appear on disk, so it looks just like a real-time search.

To select indexed real-time searching, change the indexed_realtime_use_by_default setting in limits.conf to true, as shown below, and restart Splunk.

[realtime]
indexed_realtime_use_by_default=true

Disable RT Search Panel in Time Picker

If you want to keep real-time or indexed real-time search capability but would like to disable the real-time presets in the time range picker so casual users do not select them, you can disable the show_realtime setting in times.conf as shown below.


[settings]
show_realtime=false

For those who would like to disable individual real-time time picker settings, you can do that by disabling the respective stanzas in times.conf.

[real_time_last30s]
disabled = 1
[real_time_last1m]
disabled = 1
[real_time_last5m]
disabled = 1
[real_time_last30m]
disabled = 1
[real_time_last1h]
disabled = 1
[real_time_all]
disabled = 1

To see how many real-time searches are actually running in your environment, the following search shows who is running which searches and how long they have been running:

| rest /services/search/jobs | search eventSorting=realtime | table label, author, dispatchState, eai:acl.owner, isRealTimeSearch, performance.dispatch.stream.local.duration_secs, runDuration, searchProviders, splunk_server, title

In conclusion, real-time searches are very powerful and beneficial when run for short periods of time by the right people who need to monitor streaming data before it is indexed. But for most use cases, and considering the impact on the infrastructure, real-time searches should be avoided in favor of indexed real-time searches, which offer very little difference in latency and virtually no impact on the infrastructure. Learn how to further optimize your Splunk searches or get help from one of our Splunk experts.

Securing Splunk Enterprise with SSL

Kamal Doriaraj | Senior Splunk Consultant

 

I recently worked with a customer whose entire Splunk architecture was not using SSL. We migrated the entire architecture from non-SSL to SSL communication using self-signed certificates. The following content consolidates the disparate information on securing Splunk Enterprise with SSL into one single blog, in a step-by-step process.

PART ONE: CERTIFICATES

Splunk software ships with, and is configured to use, a set of default certificates. These certificates discourage casual snoopers but could still leave you vulnerable because the root certificate is the same in every Splunk download and anyone with the same root certificate can authenticate.

SELF-SIGNED CERTIFICATES:

To use your own self-signed certificates, you need the three files below, which are everything required to configure indexers, forwarders, and Splunk instances that communicate over the management port:

  • myServerCertificate.pem
  • myServerPrivateKey.key
  • myCACertificate.pem

If you already possess or know how to generate the needed certificates, you can skip this topic and go directly to the configuration steps.

a.     Create a new directory to work from when creating your certificates. In our example, we are using $SPLUNK_HOME/etc/auth/mycerts. (This ensures you do not overwrite the Splunk-provided certificates that reside in $SPLUNK_HOME/etc/auth)

b.     Create the root certificate:

First, you create a root certificate that serves as your root certificate authority. You use this root CA to sign the server certificates that you generate and distribute to your Splunk instances.

Generate a private key for your root certificate:

$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out myCAPrivateKey.key 2048

When prompted, create a password for the key.

When the step is completed, the private key myCAPrivateKey.key appears in your directory.

Generate and sign the certificate:

$SPLUNK_HOME/bin/splunk cmd openssl req -new -key myCAPrivateKey.key -out myCACertificate.csr

When prompted, enter the password you created for the private key in $SPLUNK_HOME/etc/auth/mycerts/myCAPrivateKey.key.

Provide the requested certificate information, including the common name if you plan to use common name checking in your configuration. A new CSR myCACertificate.csr appears in your directory.

Use the CSR myCACertificate.csr to generate the public certificate:

$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in myCACertificate.csr -sha512 -signkey myCAPrivateKey.key -CAcreateserial -out myCACertificate.pem -days 1095

When prompted, enter the password for the private key myCAPrivateKey.key.

A new file myCACertificate.pem appears in your directory. This is the public CA certificate that you will distribute to your Splunk instances.

c.    Create the server certificate:

Now that you have created a root certificate to serve as your CA, you must create and sign your server certificate.

Generate a key for your server certificate:

Generate a new RSA private key for your server certificate. In this example we are again using AES encryption and a 2048-bit key length:

$SPLUNK_HOME/bin/splunk cmd openssl genrsa -aes256 -out myServerPrivateKey.key 2048

When prompted, create a new password for your key. A new key myServerPrivateKey.key is created.

You will use this key to encrypt the outgoing data on any Splunk Software instance where you install it as part of the server certificate.

Generate and sign a new server certificate:

Use your new server private key myServerPrivateKey.key to generate a CSR for your server certificate.

$SPLUNK_HOME/bin/splunk cmd openssl req -new -key myServerPrivateKey.key -out myServerCertificate.csr

When prompted, provide the password to the private key myServerPrivateKey.key.

Provide the requested information for your certificate, including a Common Name if you plan to configure Splunk Software to authenticate via common name checking. A new CSR myServerCertificate.csr appears in your directory.

Use the CSR myServerCertificate.csr and your CA certificate and private key to generate a server certificate.

$SPLUNK_HOME/bin/splunk cmd openssl x509 -req -in myServerCertificate.csr -SHA256 -CA myCACertificate.pem -CAkey myCAPrivateKey.key -CAcreateserial -out myServerCertificate.pem -days 1095

When prompted, provide the password for the certificate authority private key myCAPrivateKey.key.
Make sure to sign this with your CA private key (myCAPrivateKey.key) and not the server key you just created.

A new public server certificate myServerCertificate.pem appears in your directory.

d.    You should now have the following files in the directory you created:

      • myServerCertificate.pem
      • myServerPrivateKey.key
      • myCACertificate.pem

Prepare your signed certificates for Splunk authentication:

Once you have your certificates, you must combine the server certificate and your keys into a single file that Splunk software can use.

a.   Create a single PEM file:
Combine your server certificate, your server private key, and your CA public certificate, in that order, into a single PEM file:

cat myServerCertificate.pem myServerPrivateKey.key myCACertificate.pem > myNewServerCertificate.pem

b.   Once created, the file myNewServerCertificate.pem should contain, in the following order:

      • The server certificate (myServerCertificate.pem)
      • The private key (myServerPrivateKey.key)
      • The certificate authority public key (myCACertificate.pem)

PART TWO: Securing Splunk Enterprise

Once you have your certificates, you can apply encryption and/or authentication using them for:

  1. Communications between the browser and Splunk Web
  2. Communication from Splunk forwarders to indexers
  3. Other types of communication, such as communications between Splunk instances over the management port

1. Communications between the browser and Splunk Web:

This assumes that you have already generated self-signed certificates or purchased third-party certificates.

Make sure your certificate and key are available from your folder. In this example we are using $SPLUNK_HOME/etc/auth/mycerts/:

  • $SPLUNK_HOME/etc/auth/mycerts/mySplunkWebCertificate.pem
  • $SPLUNK_HOME/etc/auth/mycerts/mySplunkWebPrivateKey.key

Open or create a local web.conf file in $SPLUNK_HOME/etc/system/local/web.conf, or in any other application location if you’re using a deployment server.

Under the [settings] stanza, configure the path to the file containing the web server SSL certificate private key and the path to the PEM format Splunk web server certificate file.

The following example shows an edited settings stanza:

[settings]
enableSplunkWebSSL = true
privKeyPath = $SPLUNK_HOME/etc/auth/mycerts/mySplunkWebPrivateKey.key
serverCert = $SPLUNK_HOME/etc/auth/mycerts/mySplunkWebCertificate.pem

Restart your Splunk software: # $SPLUNK_HOME/bin/splunk restart splunkd

You must now prepend “https://” to the URL you use to access Splunk Web.

2. Communication from Splunk forwarders to indexers:

Using your own certificates to secure Splunk communications involves the following procedures:

· Configuring indexers to use a new signed certificate

· Configuring forwarders to use a new signed certificate

Configuring indexers to use a new signed certificate:

Copy the server certificate and CA public certificate into an accessible directory on the indexer you want to configure.

$SPLUNK_HOME/etc/auth/mycerts/

Configure the inputs.conf file on the indexer to use the new server certificate.
Add the following stanzas to the $SPLUNK_HOME/etc/system/local/inputs.conf file, or the appropriate directory of any app you are using to distribute your forwarding configuration:

[splunktcp-ssl:9998]
disabled=0
[SSL]
serverCert=$SPLUNK_HOME/etc/auth/mycerts/myServerCertificate.pem
#requireClientCert = true
#sslAltNameToCheck = forwarder.local

Note: Configure your indexers to use SSL on port 9998 so that you can keep the existing non-SSL port 9997 in use during the transition. Once the entire enterprise is configured to use SSL on port 9998 and everything is reporting on that port, you can disable 9997.

Configure the indexers with the root CA certificate used to sign all certificates. This is done by editing server.conf in $SPLUNK_HOME/etc/system/local on your indexer(s):

[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/myCACertificate.pem

Restart the splunkd process:
# $SPLUNK_HOME/bin/splunk restart splunkd

Repeat the above steps on all the Indexers.

Configure forwarders to use a signed certificate:

Given the large number of forwarders you are likely to have, use the deployment server (DS) to push the certificates and configuration files.
I recommend testing with a small set of forwarders before deploying to all clients.

Steps:
In the DS, create an app ‘customer_cert_outputs’ (use your own naming) with all of the certificate files (.pem) in the app’s local folder. The app should also contain the outputs.conf file.

Make sure to reference the correct app path in the outputs.conf file.

[tcpout:group1]
server=indexer01:9998, indexer02:9998
disabled = 0
clientCert = /opt/splunk/etc/apps/customer_cert_outputs/local/myServerCertificate.pem

Push this app out to the forwarders! (Don’t forget to mark the app as “restart required” in the server class on the DS.)

To verify your SSL connections, run the following search in Splunk Web:

index=_internal source=*metrics.log* group=tcpin_connections | dedup hostname | table _time hostname version sourceIp destPort ssl

You can also check splunkd.log to validate and troubleshoot your configuration.
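
If something is not connecting, the same information is searchable from the _internal index. A rough starting point (the component filter below is an assumption; widen it if your errors come from other components) is:

index=_internal sourcetype=splunkd (component=TcpInputProc OR component=TcpOutputProc) (SSL OR certificate) (log_level=WARN OR log_level=ERROR)
| stats count latest(_raw) as latest_message by host, component

This groups SSL-related warnings and errors by host so you can quickly see which forwarders or indexers are failing the handshake.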

Once clients are forwarding through port 9998, you can leverage the whitelist/blacklist in the serverclass to push the app to all forwarders in a phased manner.

3. Securing inter-Splunk communication:

Distributed search configurations share search information, knowledge objects and app and configuration information over the management port.

Communication between search heads and peers relies on public-key encryption. Upon startup, Splunk software generates a private key and public key on your Splunk installation. When you configure distributed search on the search head, the public keys are distributed by search heads to peers and those keys are used to secure communication. This default configuration provides built-in encryption as well as data compression that improves performance.

It is possible to swap these generated keys out with your own keys, though the existing keys are generally considered adequate for most configurations.

Click here to connect with a Splunk specialist or learn more about our Splunk Managed Services here.

 

Make Unstructured Data Searchable with a Multikv Command

Jeff Rabine | Splunk Consultant

Splunk works best with structured data. This blog post will cover how to make unstructured data searchable. In this real-world example, the customer wanted to use data in the second table of an unstructured log file. Changing the log format was not an option, and access to .conf files was not available, so all changes needed to happen at search time.

Raw log sample:


As you can see, there are two tables of data in the log. The first step is to remove the top table from the results, since it’s unnecessary for this search. We will do that by using the rex command to overwrite _raw, capturing only the data that we need.


 

The next step is to use multikv to break the table into separate events. This command attempts to create fields and values from the table; however, in our case the table’s formatting was not clean enough for multikv to work properly, so we removed the headers. Since the headers are gone, we set noheader=t.


Now, the last thing we need to do is create our field extractions, and then we can use the data however we please.


 

As you can see, we now have nice clean data!
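
Pulling the steps together, the final search looks roughly like the sketch below. The marker text (“TABLE 2”) and the column layout in the last rex are placeholders for whatever actually appears in your log, so adjust both regular expressions to match your format (the comments use the SPL triple-backtick syntax):

index="fruitfactory" sourcetype="fruitfactory"
``` keep only the second table by overwriting _raw ```
| rex field=_raw "(?s)TABLE 2(?<second_table>.*)$"
| eval _raw=second_table
``` break each table row into its own event; headers were stripped, so noheader=t ```
| multikv noheader=t
``` extract meaningful field names from each row ```
| rex field=_raw "^\s*(?<fruit>\S+)\s+(?<quantity>\d+)"
| table fruit, quantity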

Other uses of the multikv command:

Depending on your data, there are other ways to use the multikv command. Neither of these examples was able to make unstructured data searchable for our customer, but I recommend trying them with your data. Your success with the following examples will depend on how cleanly formatted your logs are.

In our example, we stripped out the headers of the table to make the unstructured data searchable. You may be able to leave the headers in place, which would save you from extracting the fields with the rex command. Also, by default, the command will attempt to process multiple tables within the log, so you might be able to simply run the multikv command on its own. After running this search, check whether the correct fields were extracted.

index="fruitfactory" sourcetype="fruitfactory"
| multikv

You can also tell the command which row of the event contains the table headers using the forceheader option (for example, forceheader=1 for the first row). Again, check whether the correct fields were extracted after running this command.

index="fruitfactory" sourcetype="fruitfactory"
| multikv forceheader=1

Want to learn more about unstructured data or using the multikv command? Contact us today!

Monitor Splunk Alerts for Errors

Zubair Rauf | Senior Splunk Consultant – Team Lead

In the past few years, Splunk has become a very powerful tool that helps teams in organizations proactively analyze their log data for reactive and proactive actions in a plethora of use cases. I have observed that almost every Splunker needs to monitor Splunk alerts for errors. Splunk alerts use a saved search to look for events, either in real time (if enabled) or on a schedule. Scheduled alerts are more commonplace and are frequently used. Alerts trigger when the search meets specific conditions specified by the alert owner.

Triggered alerts call alert actions, which help owners respond to alerts. Some standard alert actions are sending an email, adding to triggered alerts, and so on. Other Splunk TAs help users integrate with external alerting tools like PagerDuty, create JIRA tickets, and do many other things. Users can also create their own custom alert actions to respond to alerts or integrate with external alerting or MoM tools. However, alert actions can fail for different reasons, and a user may not get the alert they set up. This is inconvenient, and if the alerts are used to monitor critical services, it can prove costly when alerts are not received on time.

The following two searches can help users understand if any triggered alerts are not sending emails or the alert action is failing. Alert actions can fail because of multiple reasons, and Splunk internal logs will be able to capture most of those reasons as long as proper logging is set in the alert action script.

Please note that the user running these searches needs to have access to the “_internal” index in Splunk.

The first search looks at email alerts and will tell you, by subject, which alerts did not go through. You can use the information in the results to track down why the email failed to send.

index=_internal host=<search_head> sourcetype=splunk_python ERROR
| transaction startswith="Sending email." endswith="while sending mail to"
| rex field=_raw "subject=\"(?P<subject>[^\"]+)\""
| rex field=_raw "\-\s(?<error_message>.*)\swhile\ssending\smail\sto\:\s(?P<rec_mail>.*)"
| stats count values(host) as host by subject, rec_mail, error_message

Note: Please replace <search_head> with the name of your search head(s); wildcards will also work.

Legend:

host - The host the alert is saved/run on

subject - Subject of the email - by default it is Splunk Alert: <name_of_alert>

rec_mail - Recipients of the email alert

error_message - Message describing why the alert failed to send email

The second search (below) looks through the internal logs to find errors when triggered alerts invoke alert actions for external alerting tools/integrations:

index=_internal host=<search_head> sourcetype=splunkd component=sendmodalert
| transaction action date_hour date_minute startswith="Invoking" endswith="exit code"
| eval alert_status = if(code==0, "success", "failed")
| table _time search action alert_status app owner code duration event_message
| eval event_message = mvjoin(event_message, " -> ")
| bin _time span=2h
| stats values(action) as alert_action count(eval(alert_status=="failed")) as failed_count count(eval(alert_status=="success")) as success_count latest(event_message) as failure_reason by search, _time
| search failed_count>0

Note: Please replace <search_head> with the name of your search head(s); wildcards will also work.

These two searches can be set up as their own alerts, but I would recommend adding them to an alert-monitoring dashboard.  Splunk administrators can then monitor Splunk alerts periodically to see whether any alerts are failing to send emails or whether any external alerting tool integrations are not working. Splunk puts a variety of tools in your hands, but without proper knowledge, every tool becomes a hammer.

To learn more and have our consultants help you with your Splunk needs, please feel free to reach out to us.