Working with Multivalue Fields in Splunk

By: Yetunde Awojoodu | Splunk Consultant

 

Have you ever come across fields with multiple values in your event data in Splunk and wondered how to modify them to get the results you need? Each field in an event typically has a single value, but for events such as email logs, you will often find multiple values in the “To” and “Cc” fields. Multivalue fields can also result from data augmentation using lookups. To properly evaluate and modify multivalue fields, Splunk provides a set of multivalue search commands and functions. If you ignore multivalue fields in your data, you may end up with missing or inaccurate results, sometimes reporting only the first value of each multivalue field.

In this article, I apply a simple scenario to illustrate how different multivalue commands and functions can be used individually or in combination to meet different use cases. I will cover some common search commands and functions that work with multivalue fields. Note that multivalue functions can be used with the eval, where, or fieldformat search commands. In my illustrations, I employ the “makeresults” command to generate hypothetical data for my searches so that anyone can recreate them without the need to onboard data. Read more on the makeresults command.

 

Scenario

Within one purchase transaction, Mary bought eggs, milk, and bread. She paid for the eggs with cash and covered the remaining items with her credit card. We can treat this purchase transaction as a single log event. The values of each multivalue field are separated by a comma delimiter.

Example 1:

Please note that in all the results, I have deliberately excluded “_time,” a default field generated when the makeresults command is used.
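For reference, a base search along the following lines (a sketch using makeresults; the field names groceries and payment and their values follow the scenario above) can generate the hypothetical event:

| makeresults
| eval groceries="eggs,milk,bread", payment="cash,credit card,credit card"
| table groceries payment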

 

Makemv (Command)

This command splits the values of a field that appear as a single value into multiple values within an event, based on a delimiter. A delimiter is a character (a comma in this scenario) that marks the boundary between values.

Example 2:

The values in the “groceries” field have been split within the same event based on the comma delimiter, while the values in the “payment” field remain unchanged. The report shows the methods of payment for all three grocery items, but it does not specify which payment method was used for each item. To expand the event into three separate events, one for each item, and show the exact payment method for each grocery item, we will need a combination of commands and functions.
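Continuing the same sketch, the makemv step might look like this, splitting only the groceries field:

| makeresults
| eval groceries="eggs,milk,bread", payment="cash,credit card,credit card"
| makemv delim="," groceries
| table groceries payment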

 

Mvzip (Function)

The mvzip function ties corresponding values in different fields of an event together, which keeps the association among the field values. This function takes two multivalue fields, X and Y, and combines them by stitching the first value of X together with the first value of Y, then the second with the second, and so on.

Example 3:

The new field, “zipped,” is the result of the mvzip function. The values of the groceries and payment fields are properly zipped together before being expanded into separate events. Note that at this point, the results are still within one event.
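A sketch of the search up to this point, with both fields made multivalue before zipping (mvzip pairs the values with a comma by default), could look like this:

| makeresults
| eval groceries="eggs,milk,bread", payment="cash,credit card,credit card"
| makemv delim="," groceries
| makemv delim="," payment
| eval zipped=mvzip(groceries, payment)
| table groceries payment zipped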

 

Mvexpand (Command)

This command expands the values of a multivalue field into separate events, one event for each value in the multivalue field. All other single-value fields and unexpanded multivalue fields remain the same in each new event.

Example 4:

Mvexpand works great at splitting the values of a multivalue field into multiple events while keeping the other field values in the event as is, but it only works on one multivalue field at a time. For instance, in the above example, mvexpand cannot be used to split both the “zipped” and “payment” fields at the same time. The next function comes in handy to accomplish this.
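As a sketch, expanding the zipped field from the previous step into one event per value could look like this:

| makeresults
| eval groceries="eggs,milk,bread", payment="cash,credit card,credit card"
| makemv delim="," groceries
| makemv delim="," payment
| eval zipped=mvzip(groceries, payment)
| mvexpand zipped
| table zipped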

 

Mvindex (Function)

Having zipped the values into one field, “zipped,” you can now expand the “zipped” field into multiple events. The mvindex function is a little more intricate. To keep the field values accurately associated while expanding them into separate events, mvindex pulls the individual values back out of the zipped field into separate fields by their index positions. Indexes start at zero when counting from the first value. For example, if the values are a, e, i, o, u, then a=0, e=1, i=2, o=3, u=4. You can also count from the last value using negative indexes, so a=-5, e=-4, i=-3, o=-2, u=-1, or you can combine both index patterns, for example a=0, e=1, i=2, o=-2, u=-1.

Example 5:

Mvindex was used to retrieve index 0, the first value in each zipped pair, which represents the grocery item, and index 1, the second value, which represents the payment method, so that when the fields are separated the values do not get mixed up. The split function was used to separate the values on the comma delimiter. Using the mvindex and split functions, the values are now separated into one value per event, and the values correspond correctly.
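Putting it all together, a sketch of the full search that produces one grocery item and its matching payment method per event could look like this:

| makeresults
| eval groceries="eggs,milk,bread", payment="cash,credit card,credit card"
| makemv delim="," groceries
| makemv delim="," payment
| eval zipped=mvzip(groceries, payment)
| mvexpand zipped
| eval groceries=mvindex(split(zipped, ","), 0), payment=mvindex(split(zipped, ","), 1)
| table groceries payment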

Tip – The stats command can also be used in place of mvexpand to split the fields into separate events as shown below:

Example 6:
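The stats-based alternative could look roughly like this sketch, where stats by the multivalue zipped field splits it into one row per value:

| makeresults
| eval groceries="eggs,milk,bread", payment="cash,credit card,credit card"
| makemv delim="," groceries
| makemv delim="," payment
| eval zipped=mvzip(groceries, payment)
| stats count by zipped
| eval groceries=mvindex(split(zipped, ","), 0), payment=mvindex(split(zipped, ","), 1)
| table groceries payment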

 

Mvcount (Function)

This function returns the number of values in a multivalue field. If the field contains a single value, the function returns 1, and if the field has no values, the function returns NULL.

Example 7:
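For example, counting the grocery items in the sketch above would return 3:

| makeresults
| eval groceries="eggs,milk,bread"
| makemv delim="," groceries
| eval item_count=mvcount(groceries)
| table groceries item_count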

As with single-value fields, keep in mind that you may need a combination of multivalue commands and functions to get your report into the format that meets your specific use case.

Note: If there are situations in your data where a field is sometimes multivalue and other times null, refer here.

 

Want to learn more about working with multivalue fields in Splunk? Contact us today!

 

A Use Case for Ingest Time Eval

By: Zubair Rauf | Senior Splunk Consultant

 

A few days ago, I came across an interesting challenge that a customer put in front of me, one they had been facing for some time. The customer works with an app that logs all of its events 7 hours ahead of Eastern time, irrespective of daylight saving time: the server clock reset to midnight when Eastern time was 5:00 PM all year round. To work around this problem and make sure the events were always synced with the correct time zone, they adjusted the sourcetype for those logs every time daylight saving time started or ended.

When presented with this problem, I spent a good amount of time trying to find a time zone that would shift with Eastern time when daylight saving time changed and still have the same offset as those logs. Not having any success on that front, I started looking at alternatives to help my customer overcome the issue and came across a powerful way to solve the problem with a one-time fix to the sourcetype.

Splunk introduced ingest-time evals with Splunk Enterprise 7.2. Ingest-time evals are similar to the search-time evals that have helped make Splunk the powerful tool it has always been. They allow you to write an EVAL expression that is executed at ingestion time to create a new indexed field or to update a field’s value, giving you more control over Splunk index-time fields. In my particular case, being able to manipulate index-time fields did just the trick for my customer.

For starters, _time is an index-time field that is parsed from the raw log event. If the event does not have a timestamp, the indexer assigns the current time when the event is ingested. In my particular challenge, the _time field needed a fixed offset because the log timestamps were seven hours ahead of Eastern time.

To set up ingest-time evals, we work with transforms.conf, props.conf, and fields.conf (the latter only if creating new fields at ingest time). To elaborate on the process of setting up ingest-time evals to create new index-time fields or manipulate existing fields at index time, we will use a sample log from a Cisco device.

For comparison, I ingested the log file with a custom sourcetype I created to parse the events.
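The sourcetype itself is shown in the screenshot; as a rough sketch, a minimal props.conf stanza for a log whose events begin with a timestamp such as 01/16/2020 11:43:31 AM might look like this (the sourcetype name cisco_sample and the timestamp settings are assumptions):

[cisco_sample]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %m/%d/%Y %I:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 25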

With the above sourcetype, the following events were ingested.

If you look closely, the date/time was parsed exactly as it appears in the raw log event. Now if the raw event had a timestamp that needed to be offset, we could change the _time field at ingest time using ingest time eval.

To make my required changes, I will add an INGEST_EVAL expression in a transforms stanza in transforms.conf to update the _time field at ingest time, after it has been parsed out of the actual event.

In the above example, I have used INGEST_EVAL to add 7200 seconds, which translates to 2 hours, to the _time field. I have also used “:=” instead of “=” so that Splunk updates the existing _time field rather than creating another _time value, which would result in a multivalue _time field in the final event. In this case, “:=” overwrites the existing value in the field.
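The exact settings appear in the screenshots; roughly, the transform and its reference from props.conf could look like this sketch (the stanza name time-offset matches the TRANSFORMS setting shown later, and the sourcetype name is an assumption):

# transforms.conf
[time-offset]
INGEST_EVAL = _time := _time + 7200

# props.conf
[cisco_sample]
TRANSFORMS = time-offset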

The above screenshot shows the updated _time field after the same log file has been ingested with the updated props and transforms. If you look closely at the Time column in the first event, it shows the timestamp being parsed as 01/16/20 1:43:31 PM, while the timestamp in the event is 01/16/2020 11:43:31 AM. This tells us that the INGEST_EVAL expression in our transforms.conf worked successfully.

At this point, I would caution you to thoroughly test your INGEST_EVAL expression on a dev Splunk server to be sure that your eval works as intended.

Ingest-time evals can also be used to create new index-time fields. While updating the _time field to offset the time difference, I decided to create some custom index-time fields for demonstration purposes, to further show how powerful and useful ingest-time evals can be.

Since I was updating the _time field with the new timestamp, I figured it would be good to have a field that still stores the originally parsed time. I named that field orig_time. It is derived from the original _time field as parsed before it is changed to the new timestamp.

I also thought it would be good to calculate the raw length of each event at ingest time, as that would create a field I could later use to calculate the size of the ingested data. I was particularly keen to demonstrate this because, not too long ago, I was faced with the challenge of reporting host-level license usage for every index. This helps Splunk users in an organization understand how much data their hosts are sending to Splunk.

Now, this is easy if your environment is small: you can use the license_usage.log file available in the _internal index to calculate your license usage by index, sourcetype, source, or host. It becomes a problem when your environment grows too large. When the number of unique tuples crosses 2,000 (the default threshold), the license manager starts squashing source and host values, and only index and sourcetype values remain in license_usage.log.

To work around this issue, I set up a daily license usage search that calculates the length of _raw for the past day across all indexes and stores it in a summary index. This search runs at off-peak hours when the system is not being used by other users, which lets me populate dashboards on demand for users who want to see this data the next day.
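As a rough sketch, such a scheduled search might look like the following (the summary index name license_summary is a placeholder):

index=* earliest=-1d@d latest=@d
| eval event_bytes=len(_raw)
| stats sum(event_bytes) as bytes by index, host, sourcetype
| collect index=license_summary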

Having the raw event size calculated for every event at index time will help me get rid of those expensive searches that need to run every night; such searches are also less reliable if the search head running the summary-generating search crashes. At index time, I create a new field, “event_size,” using INGEST_EVAL in transforms.conf. The settings used to do this are shown below.

If you look closely at the settings:

Transforms.conf

I have added two new stanzas to transforms.conf to create the evals for the new fields, orig_time and event_size.
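The stanzas appear in the screenshot; roughly, they could look like this sketch based on the descriptions above (orig_time copies the parsed _time before it is offset, and event_size stores the length of _raw):

[orig-time]
INGEST_EVAL = orig_time=_time

[event-size]
INGEST_EVAL = event_size=len(_raw)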

Fields.conf

As we are creating two new fields at ingest time, we add their names as stanzas in fields.conf and make sure these fields are indexed by adding the parameter “INDEXED = true”.
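For reference, the fields.conf entries would look along these lines:

[orig_time]
INDEXED = true

[event_size]
INDEXED = true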

Props.conf

I have updated the TRANSFORMS parameter in the relevant sourcetype stanza. Note that the order of the transform names in the TRANSFORMS setting dictates the order in which the transforms are applied to the data being parsed. In this particular case, the setting is:

TRANSFORMS = orig-time,time-offset,event-size

With this setting, the transforms are applied in the following order:

  • orig-time preserves the originally parsed time in the orig_time field.
  • time-offset updates the existing _time field, offsetting it by two hours.
  • event-size calculates the total length of the event and creates a new event_size field.

If you look closely at the final screenshot above, on the left under “Interesting Fields” you will see the two new fields, orig_time and event_size.

Now, to calculate total license usage by any measure, you can use event_size with a | tstats search, which will be many times faster than a regular search.
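For example, once event_size is an indexed field, a report along these lines is possible (a sketch; the index filter is a placeholder):

| tstats sum(event_size) as total_bytes where index=* by index host
| eval total_mb=round(total_bytes/1024/1024, 2)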

There can be many other uses for ingest-time evals, one of which is listed on the documentation page. To find out more, please visit the Splunk documentation at https://docs.splunk.com/Documentation/Splunk/8.0.2/Data/IngestEval#Why_use_ingest-time_eval.3F

 

If you want to learn more or have TekStream help with implementing some Splunk use cases, contact us today!