Press Release: TekStream Achieves SOC 1 and SOC 2 Type 2 Compliance Certification


TekStream’s information security practices, policies, and procedures are officially approved to meet the SOC 1 and 2 trust principles criteria for security, availability, processing integrity, and confidentiality

ATLANTA, GA, January 10, 2019 /24-7PressRelease/ — TekStream announced today that the company has achieved Service Organization Control (SOC) 1 and SOC 2 Type 2 compliance certification, an attestation standard defined by the American Institute of Certified Public Accountants (AICPA). The attestation certifies that TekStream’s information security practices, policies, and procedures meet the SOC 1 and SOC 2 trust principles criteria for security, availability, processing integrity, and confidentiality.

In today’s global economy, more and more companies are outsourcing core business operations and activities to outside vendors. Service providers must have sufficient controls and safeguards in place when hosting or processing customer data. With SOC certification, customers can be confident that TekStream has the controls and auditing in place to maintain the security, availability, and confidentiality of their systems. TekStream is organized to handle the data privacy concerns of the largest enterprises in highly regulated industries.

“Despite technology advancements, Cloud and on-premise environments are not getting any easier to maintain,” explains Judd Robins, Executive Vice President. “In fact, as Cloud and hybrid digital transformations become increasingly common, supporting and managing today’s leading technologies with the right security controls and protocols becomes even more difficult. Customers are no longer looking to maintain an expensive internal team of architects, developers, support personnel, admins, and infrastructure experts for critical applications. TekStream’s Support and Managed Services take this burden off customer teams so they can focus on growth while leaving the IT details to us.”

TekStream’s support and managed services offerings are designed to provide companies with flexible support hours to ensure enterprise solutions are running smoothly, securely, and efficiently at all times. Our support technicians are dedicated to rapid response and provide real-time solutions and services based on years of practice with Amazon, Hyland, Liferay, Oracle, and Splunk.

Our support and MSP services include:

  • Amazon Web Services (AWS) Support – Eliminate infrastructure headaches and the associated drain on your technical teams by outsourcing your AWS support to TekStream
  • Hyland Support – Enhance system security, prevent outages, and ensure that your OnBase solutions continue to run smoothly 24/7
  • Liferay Support – Continually optimize the stability of your implementation. By monitoring and maintaining your environment, TekStream not only protects your initial investment in the platform but also identifies and addresses potential issues before they become problems.
  • Oracle Cloud Support – Let TekStream’s team of experienced Oracle Cloud Support consultants take care of the management and maintenance of your business’ IaaS and PaaS technologies.
  • Oracle On-Prem Support – Save time by letting TekStream take responsibility for the operational management of your Oracle WebCenter environment. Our support services assess the stability of your current implementation and optimize to provide better security, ensure your applications run smoothly, and prevent outages.
  • Splunk Support – Whether you need guidance in setting up a new environment or creating new solutions to optimize existing environments, our team of certified support engineers ensures your company is getting the most out of your investment. With proactive identification of errors and anomalies, we can prevent lengthy outages and keep your system running smoothly, keeping operational disruptions to a minimum.

TekStream’s commitment to enterprise-level security, privacy, availability, and performance is driven by our unique and entrepreneurial culture built by individuals who are fanatically driven to exceed client expectations. With our SOC 1 and SOC 2 compliance, customers are in good hands with our team of experts.

TekStream
TekStream is an Atlanta-based technology solutions company that offers business and digital transformation, managed services, and recruiting expertise to help companies manage their applications, business processes, content, human capital, and machine data, as well as take advantage of next-generation cloud-based solutions. TekStream’s IT consulting solutions, combined with its specialized IT recruiting expertise, help businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream Solutions, visit www.tekstream.com or email Shichen Zhang at shichen.zhang@tekstream.com.

Machine Learning with Splunk: Fitting a Model


By: Abe Hardy  | Splunk Consultant

What is machine learning? A quick search online will return definitions using the words algorithm, statistics, and model. A slightly less technical definition is that machine learning is a general term for formulas that determine outcomes based on features in provided data. If the goal were to classify plants, for example, height, petal color, and environment might all be features in the data set.

 

The objective for today will be to learn how to fit a model using Splunk’s Machine Learning Toolkit.  We will be using data from the passenger list of the Titanic to fit a model to predict who survived. The data set can be found at https://www.kaggle.com/c/titanic/data. The training set is what will be used to build and validate our model.

 

The data set contains the features passenger ID (PassengerId), passenger class (Pclass), the name of the passenger (Name), the sex of the passenger (Sex), the age of the passenger (Age), the number of siblings and spouses on board with the passenger (SibSp), the number of parents and children on board with the passenger (Parch), the passenger’s ticket number (Ticket), the amount of the passenger’s fare (Fare), the passenger’s cabin number (Cabin), and the port the passenger embarked from (Embarked). The training set also has the additional feature Survived.

Now that the data has been gathered, the first step is to clean it, in other words, to make the data as complete and useful as possible. Not all features are useful; a good first step is to remove features that have no predictive value. It is better to leave features in than to remove them before determining their predictive capabilities, but features that always have a unique entry, such as Ticket and Name, can be removed. (PassengerId will remain as the identifier for each entry.) There are a couple of options with Cabin: it can either be removed or turned into a binary feature, where zero (0) means no cabin is listed and one (1) means a cabin is listed. For simplicity, and due to a lack of information, this feature will be removed as well.
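If you prefer to drop these columns at search time rather than editing the CSV before upload, a minimal SPL sketch of this step (assuming the field names match the Kaggle headers) would be:

source=train.csv
| fields - Name, Ticket, Cabin

The fields command with a minus sign simply removes the listed fields from the results.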

 

The next step involves dealing with data that is not complete but too important to remove. The Age feature is incomplete, but because of “women and children first,” age will most likely be a strong predictor of who survived. One option is to remove all entries missing the feature. While not a majority of the entries, a significant number are missing Age, and removing them may impact the predictive capabilities of the other features.

 

A better option is to insert a value that has little to no effect on the Age feature while still allowing the predictive potential of the other features to be used in the model. Setting the age too low will imply the passenger is a child; setting it too high may not be accurate either. The best option is to fill the blanks with the average age of all the passengers given. This should minimize any effect filling the blanks has and allow the entries to remain in the data set. The average age for the data set is just under 30. Since age is counted in whole numbers, the blank ages will be replaced with 30.
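If you want to do this step in Splunk rather than in the CSV itself, a minimal SPL sketch (assuming the field is named Age as in the Kaggle file) would be:

source=train.csv
| eval Age=if(isnull(Age) OR Age="", 30, Age)

The fillnull command (| fillnull value=30 Age) is an equivalent alternative.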

Now that the data has been cleaned, it is time to begin using Splunk to build our model! Load the train.csv file into Splunk (it won’t have a timestamp, so don’t worry about that), then click on the Machine Learning Toolkit app. Once inside the app, go to Experiments, click Create New Experiment, and select Predict Categorical Fields (since Survived is a category with only two possible values, one (1) or zero (0)). Name the experiment anything you like.

In the search field, enter source=train.csv. The results should be 891 events. Select Logistic Regression for the algorithm, Survived as the field to predict, and set the training/test split to 80/20. The split allows 80 percent of the data to be used to fit the model and 20 percent to be used to validate it. The split is randomized across the 891 entries of the data set.

The final step in building this model is feature selection. Begin by selecting all the features except PassengerId in the fields to use for predicting.

Depending on how the 80/20 split falls, your results will vary, but accuracy will most likely be somewhere around 75 percent. That is better than what is expected from guessing (50 percent), but the model can be improved. Deselect Embarked from the fields to use for predicting and fit the model again. Accuracy should now be in the high 70s to low 80s. The terms precision and recall measure the accuracy of the model: precision is the percentage of returned values that were accurate, and recall is the percentage of accurate values that were returned. In this scenario precision and recall are the same. In the experiment history, previous settings can be loaded and earlier models can be fit again, allowing for several variations of the same model without needing separate experiments.
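For reference, the experiment is built around the MLTK fit command; a rough sketch of the search behind the refined model (the model name here is arbitrary, and the toolkit layers its own training/test split handling on top of this) looks like:

source=train.csv
| fit LogisticRegression Survived from Pclass Sex Age SibSp Parch Fare into "titanic_survival_model"

The saved model can later be applied to new data with the apply command.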

 

The Splunk Machine Learning Toolkit is a powerful toolkit with the potential to take your data to the next step, from descriptive to predictive. The Machine Learning Toolkit, however, is just that: a tool. Loading data into the MLTK and knowing Splunk are not enough; understanding how to clean and wrangle data for the most predictive impact is necessary as well.

Want to learn more about machine learning with Splunk? Contact us today!

 

New Feature in Splunk to Monitor Environment Health


By: Pete Chen | Splunk Consultant

 

A new feature introduced in Splunk 7.2 is the Splunkd Health Status Report. Checking whether the splunkd process is running tells you that Splunk is up, but it won’t tell you if a problem is developing while Splunk is running. In the latest version of Splunk, you only need to look next to your name in the menu bar to see how Splunk is doing.

Once you click on the icon, a screen will pop up with the health status of Splunk.

The status tree is broken down into 4 areas: Splunkd, Feature Categories, Features, and Indicators.

  • Splunkd – The overall status of Splunkd is based on the least healthy component in the tree. The status is for the specific host only.
  • Feature Categories – This is the second stage, and represents a logical grouping of features. Feature categories won’t have a status.
  • Features – Each feature status is based on one or more indicators, with the least healthy indicator status as the status for the particular feature.
  • Indicators – Indicators are the lowest levels of measurable health status that are tracked by each feature. The colors for status change as health for each feature changes.

 

In the event more details are required, the health report also generates a log, which can be found at $SPLUNK_HOME/var/log/splunk/health.log.


Notification settings can be changed from the Settings menu. From Settings, select Health report manager.

 

From there, each feature can be enabled or disabled, and the thresholds set.

 

As with other functions and features in Splunk, settings for health monitoring can be changed through a conf file, located at $SPLUNK_HOME/etc/system/local/health.conf. Alerting thresholds, intervals, and severity can all be defined in the configuration file. A tremendous benefit of being able to configure health monitoring is the ability to add alerts. When an alert fires, it can send a notification via email or PagerDuty. To enable email notifications, simply add the following stanza to health.conf:

[alert_action:email]
disabled = 0
action.to = <recipient@example.com>
action.cc = <recipient_2@example.com>
action.bcc = <other_recipients@example.com>

 
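Threshold tuning follows the same stanza pattern. A minimal sketch is shown below; the feature name, indicator name, and values are illustrative only, so check the health.conf specification for your Splunk version:

[feature:batchreader]
indicator:data_out_rate:yellow = 5
indicator:data_out_rate:red = 10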

Finally, the health status can also be consumed by other monitoring tools through Splunk’s REST API, helping those tools better monitor your Splunk environment. The curl command is:

curl -k -u admin:pass https://<host>:8089/services/server/health/splunkd
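The endpoint returns XML by default; appending the standard output_mode parameter returns JSON instead, which is often easier for other tools to parse:

curl -k -u admin:pass "https://<host>:8089/services/server/health/splunkd?output_mode=json"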

 

When things go wrong, it may be difficult to determine where to begin troubleshooting. The monitoring tool helps by providing a root cause and the last 50 related messages. This helps the admin better assess the problem and remediate it.

 

Splunk health monitoring is a simple, effective tool to help keep a Splunk environment healthy. Adding features, tuning indicators, adjusting intervals, and setting alerts are all ways this new tool, pre-loaded into Splunk, can help ensure Splunk stays healthy.

 

Want to learn more about Splunkd Health Status Report? Contact us today!

 

Version Source Control for your Splunk Environment


By: Zubair Rauf | Splunk Consultant

 

As Splunk environments grow within an organization, the need for source control grows with them. It is good practice to use the widely available, enterprise-grade source control tools for this purpose.

There are many version control systems (VCS) available, but the most widely used is the open-source Git, which has proven to be a very powerful tool for distributed source control. Using Git, multiple Splunk admins can work in their local repositories and share their changes separately.

To take the conversation further, I would separate the need for version control into two major segments:

  • User Applications
  • Administrative Applications

I have broken the applications down into two segments to make management easier for Splunk admins. The user applications consist of the search artifacts that are built and developed as use cases evolve, and they change often. The administrative applications I would classify as those mostly used to deploy and set up Splunk, such as TAs and other deployment apps. These applications rarely change once set up, unless new data sources are on-boarded, there are significant changes to the architecture, etc.

In the context of this blog post, we will focus on the administrative applications. These apps are the backbone of your Splunk deployment and should be changed cautiously to make sure there is no downtime in the environment. Changing these files carelessly could cause irreparable damage to the way data is indexed in Splunk, causing loss of indexed events, especially when changing sourcetypes, etc.

As already mentioned, there are numerous flavors of source control, and you can use whichever suits your taste. If you’re starting fresh with source control, Git is easy to set up, and you can use it with GitHub or Atlassian Bitbucket. Both tools can help you get started in a matter of minutes, letting you create repositories and set up source control for your distributed Splunk environment.

The Git server will host all the master repos for the administrative apps in the Splunk environment. Admins who need to make edits can do so in one of two ways:

  • Edit the master directly.
  • Create local clones of the master, make the required edits, commit them to a local branch, and then push them out to the remote repo.

Ideally, no one should edit the master branch directly to reduce the risk of unwanted changes to the master files. All admins should edit in local branches, and then once the edits are approved, they should be merged to the master.
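A minimal sketch of that local-branch workflow from an admin’s workstation is shown below; the repository URL, branch name, app, and file are placeholders only:

git clone https://git.example.com/splunk/cluster-master-apps.git
cd cluster-master-apps
git checkout -b fix-indexes-conf
# edit apps/my_indexes/local/indexes.conf, then stage and commit the change
git add apps/my_indexes/local/indexes.conf
git commit -m "Adjust retention for the firewall index"
git push -u origin fix-indexes-conf
# open a pull/merge request and merge to master only after review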

There should be three master repos, each holding its respective apps and TAs. These repos should correspond to the following servers:

  • Cluster Master for Indexers
  • Deployer for Search Head Cluster
  • Deployment Server for Forwarders

To deploy the repos to the servers, you can use Git hooks or tie your Git deployment back into your Puppet or Chef environment. This is at your discretion and depends on how comfortable you are with distributed deployment in your organization. The repos should be deployed to the following directories:

  • Cluster Master to $SPLUNK_HOME/etc/master-apps/
  • Deployer to $SPLUNK_HOME/etc/shcluster/apps
  • Deployment Server to $SPLUNK_HOME/etc/deployment-apps

After the updated repos are deployed to the respective directories, you can push them out to the client nodes using Splunk commands, as sketched below.
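For reference, these are the usual Splunk CLI commands for that final push, run on the cluster master, deployer, and deployment server respectively; hostnames and credentials are placeholders, and you should confirm the exact syntax against the documentation for your Splunk version:

splunk apply cluster-bundle --answer-yes
splunk apply shcluster-bundle -target https://<sh_member>:8089 -auth admin:<password>
splunk reload deploy-server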

If you are interested in more information, please reach out to us and someone will get in touch with you and discuss options on how TekStream can help you manage your Splunk environment.

 

TekStream AXF 12c Upgrade Special Components


By: John Schleicher | Sr. Technical Architect

TekStream’s extension to Oracle’s Application eXtension Framework (AXF) provides enhanced customizations surrounding invoice reporting using Business Activity Monitoring (BAM), auditing of user actions, and QuikTrace of BPEL process instances. With the introduction of the 12c upgrade available with release 12.2.1.3, TekStream discovered that two of its reporting components were highly impacted by paradigm changes in 12c. TekStream has gone through multiple iterations of the 12c upgrade and has incorporated the necessary reporting enhancements to bring the functionality of the 11g release to its 12c counterpart. This paper highlights the enhancements that bring the package in line with 12c.

BAM Dashboards:

The Business Activity Monitoring component of the SOA solution was significantly reworked in the 12c release, so significantly, in fact, that it precluded a direct upgrade path from 11g. The official upgrade procedures instruct solutions incorporating this component to keep an 11g BAM instance running and slowly introduce a 12c version as the nuances of the new release are learned and alternatives worked out. In addition to a different dashboard component, the layered introduction of ‘Business Queries’ and ‘Business Views’ adds new elements that have to be solved before a dashboard can be constructed. TekStream has done the necessary homework to bring the 11g-based system directly online during the upgrade within a new InvoiceAnalytics package, saving our customers the effort of standing up an interim solution during the process. With TekStream’s 12c AXF upgrade we provide replacement dashboards and the new 12c objects introduced with the release, as well as an upgrade of the 11g BAM data. Clients regain functionality (albeit with new, upgraded BAM dashboards and underlying components) immediately after going online with 12c. They will have direct replacements for the ‘Assigned Invoices’, ‘Invoice Aging’, and ‘Invoices’ reports and can use these with all of the 12c enhancements.

QuikTrace:

TekStream’s Audit and Reporting package ships with a component labeled QuikTrace which, in addition to global Worklist views that locate all active invoices, also provides technical tracing capability not available in AXF. Technical staff can use key data points to find a record within the SOA composite execution stack for those records that are not active in a worklist and therefore not traceable via the global Worklist view. The capability was based on an 11g primitive, ‘ora:setCompositeInstanceTitle’, which on a per-composite level allowed for the population of the title field, which was then searchable via Enterprise Manager (em). The Audit and Reporting package allows searching based on Imaging Document ID, Invoice Number, Purchase Order Number, Supplier Name, and a customizable business key.

With 12c, Oracle has changed this paradigm in favor of a more efficient flow-trace primitive, ‘oraext:setFlowInstanceTitle’, which migrates the search element to a single SCA_FLOW_INSTANCE.title element per composite flow. To maintain the same functionality as the 11g system, it is necessary to encapsulate all of the designed search elements into that single location. TekStream has incorporated this into the Audit and Reporting package to offer the same functionality to its client base.

Upgrading AXF Clients:

For AXF clients with the reporting package, we have the elements to bring you back online with the features you are accustomed to. These will be available as soon as you bring AXF back up on 12c.

For AXF clients without the reporting package, be assured that TekStream can get you to a 12c Audit and Reporting point as well. We understand the 12c data, can pull together the data objects for functional dashboards, and can introduce the QuikTrace touchpoints into the 12c-based composites to enable that capability.

Want to learn more about Invoice Reporting using Business Activity Monitor? Contact us today!

What is Invoice Processing?


By: John Schleicher | Sr. Technical Architect

 

In a nutshell, invoice processing is the set of practices a company puts in place to pay the bills it incurs in the course of its business. Essentially, ‘bills’ translate to invoices. This doesn’t differ starkly from an individual managing their personal bills and budgeting their funds to meet their obligations, keep creditors happy, and continue making the purchases necessary to maintain their lifestyle and plan for the future.

With invoice processing in business, the primary differences are the sheer volume of ‘bills’; the detail required for budget allocation; the handling needed to ensure each invoice is properly approved; and the terms of payment to the vendor (who sent in the invoice). Because the activities required to handle invoices are so manually intensive, specialized ‘Accounts Payable’ (AP) staff are put in place to manage data entry and oversee the flow of invoices. Furthermore, sophisticated computerized solutions are often employed to manage these activities and reduce the manual overhead.

Good, efficient invoice processing is critical to a business’s survival in a competitive marketplace. Components of invoice processing include:

Invoice Processing Accounting:

Large business expenditures, and the planning of them, warrant sophisticated budgetary management to ensure that monies aren’t misused and are properly allocated toward their intended purpose. To manage this need, budgets are broken down into categories against which funds are allocated and charged as invoices are received. These allocations are very specific to the line of business and require significant attention to detail. They are often broken down into accounting codes with varying multi-level breakdowns as the allocations are specifically delineated. Projects and/or tasks provide an alternative method for these allocations.

Purchase Orders:

As part of the accounting process, purchases are frequently managed through an approval process before the expenditure is ever made. Buyers (or users wanting to make purchases) initiate a request via ‘Purchase Orders’, which identify the purchase details, which accounting pot to draw from, and which vendor is targeted for the purchase. The request typically travels through an approval hierarchy based on the purchase amount before it is finalized for the actual purchase. When the purchase is finally made, the corresponding invoice will reference the purchase order (typically by number), and processing of the invoice is streamlined because the authorizations and allocations are already in place. Once matched to the PO, these invoices are routed for payment.

NON-PO (non-purchase orders):

These invoices don’t have pre-approvals in place, and all of the required processing is done upon receipt of the invoice. They often require routing to the requester for appropriate accounting details and acknowledgement that they approve the purchase and, in many cases, have received the items on the invoice. Upon that acknowledgement, any approvals based on the amount spent must be made. Only after completion of these touchpoints is the invoice routed for payment.

Recurring Invoices:

Invoices associated with frequent, regularly scheduled charges, such as utilities, are defined as recurring invoices. They may be tied to a blanket purchase order, but they have the processing distinction that prompt attention is required, as service interruption may occur if they are not handled expeditiously.

Payment Terms:

The payment cycle of invoice processing is determined by agreements with the vendor on how quickly they want to be paid. Terminology such as ‘NET 45’ (payment is expected 45 days after receipt) is used to reference these arrangements. Discounts may apply for processing these invoices within the prescribed timeframe.
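As a common illustration (exact terms vary by agreement): under terms quoted as ‘2/10 NET 45’, a 2 percent discount applies if the invoice is paid within 10 days; otherwise the full amount is due within 45 days. On a $10,000 invoice, paying inside the discount window would mean remitting $9,800.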

Automation:

Invoice processing is still heavily encumbered by paper invoices that require extensive manual intervention by AP staff to key them into electronic invoices so they can be processed. Automation via Optical Character Recognition (OCR), which translates these paper images and verifies the values against stored data, greatly reduces the manual overhead for AP staff. OCR and automation activities, though, are burdened by varying invoice formats and quality, which reduces the potential for a touchless invoice upon receipt.

The Premier Invoice Automation Solution – Inspyrus

Consider Inspyrus, the premier AP automation solution. Inspyrus offers the best automation solution that can be configured to suit the broadest invoice processing needs in the marketplace. Their solution offers the capability to work with all of the major back-end financial systems (EBS, PSFT, JDE, SAP, and others) by effectively offering an abstraction of these Enterprise Resource Planning (ERP) systems, with the ability to dynamically route transactions to the relevant instance, even supporting multiple instances for a single client. You won’t find many automation systems that compete in that regard. Full automation of paper and electronic invoices reduces the daily costs of invoice processing. Coupled with a feature-rich set of out-of-the-box services, the configuration mechanisms built into the Inspyrus solution allow a diverse client base to match their specific business needs without costly software customizations.

So if you are talking ‘Invoice Processing’ for your business, make sure you consider Inspyrus for those business needs.

Want to learn more about Invoice Automation with Inspyrus? Contact us today!

Optimizing Splunk Dashboards with Post-Process Searches


By: Josh Grissom, MSIT, CISSP | Senior Splunk Consultant


When creating Splunk dashboards, we often have the same search run multiple times showing different types of graphs or slight variations (e.g., one graph showing “allowed” and another showing “blocked”). This creates more overhead every time the dashboard is opened or refreshed, causing the dashboard to populate more slowly and increasing the demand on the Splunk infrastructure. Other limitations also come into play, such as user concurrent-search limits.

With proper optimization techniques, a typical full dashboard with 10 panels can run fewer than three Splunk queries instead of the 10 individual searches that would normally run. This is accomplished by using post-process searches that are easily added in the SimpleXML of the desired dashboard.

 

Starting Point of Post-process Searches

When running a search in Splunk, it will return either RAW event data or transformed event data. Transformed event data is data returned by a search in the form of statistical tables, which are used as the basis for visualizations. The primary transforming commands are:

  • Chart
  • Timechart
  • Top
  • Rare
  • Stats

 

Post-process searches run against a shared search known as the base search. The base search should always avoid returning RAW events and instead return transformed results. This is largely due to one of the limitations of post-processing: the base search can only return a maximum of 500,000 events, and results beyond that are truncated without warning. To circumvent this limitation, it is best practice to use one of the transforming commands and, as always, to refine your search as much as possible to reduce the number of results and your search workload.

 

The Documented Limitations of Post-Process Searches

The documentation provided on Splunk Docs shows a few limitations that you should consider before using post-process searches:

http://docs.splunk.com/Documentation/Splunk/6.2.5/Viz/Savedsearches#Post-process_searches

 

  • Chaining for multiple post-process searches is not currently supported for SimpleXML dashboards.
  • If the base search is a non-transforming search, the Splunk platform retains only the first 500,000 events returned. The post-process search does not process events in excess of this 500,000 event limit, silently ignoring them. This results in incomplete data for the post-process search. A transforming search as the base search helps avoid reaching the 500,000 event limitation.
  • If the post-processing operation takes too long, it can exceed the Splunk Web client’s non-configurable timeout value of 30 seconds. This can result in a timeout due to an unresponsive splunkd daemon/service. This scenario typically happens when you use a non-transforming search as the base search.

 

Examples of the Basic Concepts

 

Splunk Search with non-transforming commands returning RAW results:
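For example, a search with no transforming command, borrowing the sourcetype used later in this post, returns the raw events themselves:

sourcetype="pan:threat" action=allowed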

 

Splunk Search with a transforming command returning transformed results:
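For example, piping the same search through a transforming command such as stats returns a statistical table instead of raw events:

sourcetype="pan:threat" action=allowed | stats count by app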

Examples of Post-process

There are many different ways to determine what should be the base search and what should be in each post-process search. One method is to create all of the queries for your dashboard first and then find the common beginning shared by the searches, which becomes your base search. The parts that fall outside that commonality become the post-process searches. Keep in mind that if you have four Splunk queries and three have a commonality but the fourth is completely different, you can build the base search for the three common queries and let the fourth run as a normal query.

 

We will take the following 5 Splunk queries as our example of what we have determined to put into our new dashboard. If we just ran these in our dashboard, it would run 5 almost identical queries, taking up valuable search resources and counting against user search limits.

sourcetype="pan:threat" action=allowed | stats count by app
sourcetype="pan:threat" action=allowed | stats count by rule
sourcetype="pan:threat" action=allowed | stats count by category
sourcetype="pan:threat" action=allowed | stats count by signature
sourcetype="pan:threat" action=allowed | stats count, values(rule) as rule by dest_ip

 

As we can easily see, the commonality of the 5 queries is going to be:

 

sourcetype="pan:threat" action=allowed |

 

The issue with just taking that portion as your base search is that it will return RAW results. If we review the 5 queries, they are using 5 different fields which means our transforming base search should include all 5 fields.

 

sourcetype="pan:threat" action=allowed
| stats count by app, category, rule, signature, dest_ip, src_ip

 

Continuing our method, we first create the dashboard with our 5 independent queries.

 

Then we can switch to the XML source view of the dashboard and start making our base search and post-process searches. Below is how the dashboard’s XML looks before using any post-process searches.

<dashboard>
  <label>Threat Dashboard</label>
  <row>
    <panel>
      <table>
        <title>Applications</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by app</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rule</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by rule</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Category</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by category</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Signature</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by signature</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rules by Destination IP</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count, values(rule) as rule by dest_ip</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

 

We will create our base search with the following:

Base search: sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip
Post-process 1: | stats sum(count) as count by app
Post-process 2: | stats sum(count) as count by rule
Post-process 3: | stats sum(count) as count by category
Post-process 4: | stats sum(count) as count by signature
Post-process 5: | stats sum(count) as count, values(rule) as rule by dest_ip

 

Once in the XML source view, create your base search at the top, under the label but before the first row.

The base search id can be named anything (in this case it is "baseSearch"), but it is best to make it something easy to remember because you will use it throughout the dashboard. The base search id is referenced in each post-process search, which appends the base search in front of that post-process search. To create the base search, the id is placed inside the search tag at the top of the dashboard, before all of the panels.
<search id="{id name}">

 

The id name must be in double quotes ("") and is case sensitive. Next, the transforming base search query is added inside the opening and closing query tags.
<query> {insert query here} </query>

 

After the query tags, any other supported tags can be used such as the timeframe tags including tokens created and assigned in the dashboard. Then close the search tag.
</search>

 

Next we will add the post-process searches to each of the panels on the dashboard. The time references should be removed, since the base search controls the timeframe.

Similar to the base search, the post-process search uses the base search id in its search tag.
<search base="{id name of base search}">

 

Next come the query tags, where the post-process search goes. This query should start with a pipe "|" because it will be appended to the base search as if it were all one query.
<query>{post-process search that starts with a pipe "|"}</query>

 

After the query tags, any other supported tags can be used except the timeframe tags since the post-process searches go off the timeframe of the base search. Then close the search tag.
</search>

 

After modifying all 5 of the post-process searches in the XML source, the dashboard will be ready to use the base search. If you run the dashboard and look at the current searches, there will only be 1 search compared to 5 searches. Below is how the dashboard’s XML looks after making the changes.

 

<dashboard>
  <label>Threat Dashboard</label>
  <!-- Base Search Called "baseSearch" (This can be named anything) -->
  <search id="baseSearch">
    <query>sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <title>Applications</title>
        <!-- post-process search 1 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by app</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rule</title>
        <!-- post-process search 2 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by rule</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Category</title>
        <!-- post-process search 3 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by category</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Signature</title>
        <!-- post-process search 4 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by signature</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rules by Destination IP</title>
        <!-- post-process search 5 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count, values(rule) as rule by dest_ip</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

 

Post-process searches might not always apply if a dashboard’s panels share no common query, but wherever there is commonality, they should be used. This not only reduces the workload each query requires, it also reduces the likelihood of users reaching their search limits, especially if the dashboard has a large number of panels built on common searches.

 

Want to learn more about optimizing Splunk dashboards? Contact us today!

 

Inspyrus 3.0: Faster, Better, and in the Cloud


By: Mariano Romano | Senior Developer

 

Inspyrus recently released version 3.0 of its Invoice Automation product with a slew of new features that make an excellent product even better. This release includes over 30 new features and a 10x increase in performance, making a solid product even faster. Here is a brief list of some of the top new features in 3.0.

 

Inspyrus Routing Engine

One of the biggest changes to the product has been the introduction of their own routing engine.  By replacing Oracle BPM with their own engine, Inspyrus was able to improve the speed and stability of routing invoices while also making it easier to configure.

 

Supplier Central

Inspyrus can now retrieve vendor contact information from the ERP system to make it easier to enable new vendors.  But that is not all!  Because Inspyrus has access to vendor contact information, the product can also send the vendor an invitation to join Supplier Central.

 

Mobile App

The Inspyrus mobile app continues to improve and provide additional functionality.  For example, coders now have the ability to enter charge account, project coding (EBS), or cost factor (JDE) information from the app.

 

Forms Recognition

The heart of any good AP solution is automation. To improve the success of Oracle Forms Recognition, Inspyrus continues to refine its extraction engine for better accuracy.

 

Inspyrus continues to make under-the-hood improvements to ensure stability and increased speed. They have switched cloud providers in order to improve stability and scale. These new features show that Inspyrus will continue to improve its product in terms of features, speed, and ease of use. And with release 3.1, Inspyrus will begin to use machine learning (ML) to determine how to code an invoice based on previous coding done for that vendor. We cannot wait to see what other features they have in store for 3.1!

 

Want to learn more about new Inspyrus 3.0 features? Contact us today!

 

Prepayments Feature Released for Inspyrus Invoice Automation


By: Karla Broadrick | Technical Architect

 

Prepayments have long been a feature of Oracle EBS and other ERP systems. Prepayments allow advance payment to a supplier. Later, when the invoice is received from the supplier, the prepayment can be applied against that invoice.

In the latest release of the Inspyrus Invoice Automation Solution, a prepayments feature has been added for Oracle EBS.

The new Prepayments tab in the UI provides the ability to add, edit, or delete a prepayment. When adding a prepayment, simply click the Add button and a list of all available prepayments for the supplier is displayed.

The user can select a prepayment and edit how much of it should be applied to this particular invoice.

 

This amount is then displayed in the Prepayments tab in Inspyrus.

The prepayment information is then sent to Oracle EBS when the invoice record is created and the prepayment applied.

 

Contact TekStream for more information about the Inspyrus Invoice Automation Solution.