Press Release: TekStream Makes 2019 INC. 5000 List for Fifth Consecutive Year

For the 5th Time, Atlanta-based Technology Company Named One of the Fastest-growing Private Companies in America with Three-Year Sales Growth of 166%

ATLANTA, GA, August 14, 2019– Atlanta-based technology company, TekStream Solutions, is excited to announce that for the fifth time in a row, it has made the Inc. 5000 list of the fastest-growing private companies in America. This prestigious recognition comes again just eight years after Rob Jansen, Judd Robins, and Mark Gannon left major firms and pursued a dream of creating a strategic offering to provide enterprise technology software, services, solutions, and sourcing. Now, they’re a part of an elite group that, over the years, has included companies such as Chobani, Intuit, Microsoft, Oracle, Timberland, Vizio, and Zappos.com.

“Being included in the Inc. 5000 for the fifth straight year is something we are truly proud of as very few organizations in the history of the Inc. 5000 list since 2007 can sustain the consistent and profitable growth year over year needed to be included in this prestigious group of companies,” said Chief Executive Officer, Rob Jansen. “The accelerated growth we are seeing to help clients leverage Cloud-based technologies and Big Data solutions to solve complex business problems has been truly exciting. We are helping our clients take advantage of today’s most advanced recruiting and technology solutions to digitally transform their businesses and address the ever-changing market.”

This year’s Inc. 5000 nomination comes after TekStream has seen a three-year growth of over 166%, and 2019 is already on pace to continue this exceptional growth rate. In addition, the company has added 30% more jobs over the last 12 months.

“Customers continue to invest in ‘Cloud First’ strategies to move their on-premises environments to the cloud, but often struggle with how to get started. There is a vast market for specialized experts familiar with both legacy systems and newer cloud technology platforms. Bridging those two worlds to address rapid line-of-business changes and reducing technology costs are focal points of those strategies. TekStream is well-positioned to continue that thought leadership position over the next several years,” stated Judd Robins, Executive Vice President of Sales.

To qualify for the award, companies had to be privately owned, have been established in the first quarter of 2015 or earlier, have experienced two-year growth in sales of more than 50 percent, and have garnered revenue between $2 million and $300 million in 2018.

“The continued recognition is evidence of our team’s response to clients’ recruiting needs across multiple industries and sectors. The growth in hiring demands commercially and federally, along with the need to deliver on changing candidate demands, has fueled the work we have put into having both outsourced and immediate-response contingent recruiting solutions,” stated Mark Gannon, Executive Vice President of Recruitment.

TekStream
We are “The Experts of Business & Digital Transformation”, but more importantly, we understand the challenges facing businesses and the myriad of technology choices and skillsets required in today’s “always on” companies and markets. We help you navigate the mix of transformative enterprise platforms, talent, and processes to create future-proof solutions in preparing for tomorrow’s opportunities…so you don’t have to. TekStream’s IT consulting solutions, combined with its specialized IT recruiting expertise, help businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream Solutions, visit www.tekstream.com or email info@tekstream.com.


Integrating Oracle Human Capital Management (HCM) and Content and Experience Cloud (CEC)

By: Greg Becker | Technical Architect

OVERVIEW

During the first phase of a recent project, we built an employee file repository for a healthcare client in the Oracle Cloud Infrastructure – Classic (OCI-C) space. A number of services were used, including Oracle Content and Experience Cloud (repository), Oracle Process Cloud Service (for filing the documents in a logical structure), Oracle WebCenter Enterprise Capture (for scanning), and Oracle Database Cloud Service (for custom application tables).

During the second phase of the project, our client had a requirement to automatically update metadata values on content items stored in the CEC repository. They wanted to trigger a change based on events or updates to an employee record stored in Oracle Human Capital Management, for example, when an Employee Status changes from Active to Inactive.

Our solution was to use an Oracle Process Cloud Service (PCS) process to perform the metadata updates when certain values were passed into the process. The reason for updating the metadata is so that end users can perform accurate searches. The tricky part of the implementation is how to call the PCS process based on the change. To accomplish this, Informatica is used to detect a ‘change’ based on data from the tables within the HCM structure and then pass that change record to a database table used by the client solution. From there, a database function was developed to call the PCS REST web service. The final step of the process was to build a database trigger that calls the function.

First, you need to do some initial setup to be able to use the APEX libraries, as well as create the network ACL that allows the database to connect to the PCS domain you’re using. You can find this information in various places online. You can use either SOAP or REST web services; we chose REST. If you want to call the web service using SSL (which we did), you’ll also have to create an Oracle wallet.
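
As a rough illustration, the network ACL portion of that setup looks something like the following on a 12c database (the host and schema names are placeholders, and the exact API differs on older database versions):

BEGIN
  -- Allow the schema that owns the integration code to reach the PCS host
  DBMS_NETWORK_ACL_ADMIN.APPEND_HOST_ACE(
    host => 'your-pcs-host.example.com',   -- placeholder PCS hostname
    ace  => xs$ace_type(privilege_list => xs$name_list('connect', 'resolve'),
                        principal_name => 'CUSTOM_APP',   -- placeholder schema name
                        principal_type => xs_acl.ptype_db));
  COMMIT;
END;
/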

CODE SNIPPETS

Function Definition:
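
The original snippet isn’t reproduced here, but a minimal sketch of the function looks like the following, using the APEX_WEB_SERVICE package to make the REST call. The endpoint URL, credentials, wallet path, and payload fields are all placeholders:

CREATE OR REPLACE FUNCTION call_pcs_metadata_update (
  p_doc_id     IN VARCHAR2,
  p_emp_status IN VARCHAR2
) RETURN CLOB
IS
  l_response CLOB;
BEGIN
  -- Build a simple JSON payload and POST it to the PCS REST endpoint over SSL
  l_response := apex_web_service.make_rest_request(
    p_url         => 'https://your-pcs-host.example.com/ic/api/process/v1/processes',  -- placeholder endpoint
    p_http_method => 'POST',
    p_username    => 'pcs_integration_user',
    p_password    => 'pcs_password',
    p_body        => '{"documentId":"' || p_doc_id || '","employeeStatus":"' || p_emp_status || '"}',
    p_wallet_path => 'file:/u01/app/oracle/wallet',   -- Oracle wallet created for SSL
    p_wallet_pwd  => 'wallet_password'
  );
  RETURN l_response;
END call_pcs_metadata_update;
/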

SOAP Envelope:
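
We went with REST, but for reference, a SOAP request to start a PCS process would be wrapped in an envelope along these lines. The namespace, operation, and element names depend entirely on the WSDL of the deployed process, so treat these as placeholders:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:proc="http://xmlns.oracle.com/bpm/your-process">  <!-- placeholder namespace -->
  <soap:Body>
    <proc:start>
      <proc:documentId>ECM12345</proc:documentId>
      <proc:employeeStatus>Inactive</proc:employeeStatus>
    </proc:start>
  </soap:Body>
</soap:Envelope>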

Call the Function from a Trigger:
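
A simplified sketch of the trigger, where the table and column names are placeholders for the change table populated by Informatica:

CREATE OR REPLACE TRIGGER trg_emp_status_change
AFTER INSERT OR UPDATE OF employee_status ON hcm_change_events   -- placeholder change table
FOR EACH ROW
DECLARE
  l_response CLOB;
BEGIN
  -- Hand the change off to PCS, which updates the CEC metadata
  l_response := call_pcs_metadata_update(:NEW.document_id, :NEW.employee_status);
END;
/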

SUMMARY

There is more than one way to fulfill this customer requirement, but these are the pieces that worked well in this case. If you have any additional integration needs between Oracle Human Capital Management and Oracle Content and Experience Cloud, please contact TekStream and we’d be happy to assist you.

Iplocation: Simple Explanation for Iplocation Search Command

By: Charles Dills | Splunk Consultant

Iplocation can be used to surface some very useful information. It is a simple yet powerful search command that helps identify where traffic from a specific IP address is coming from.

To start, iplocation on its own won’t display any visualizations. What it will do is add a number of additional fields that can be used in your searches and added to dashboards, panels, and tables. Below we will use a simple base search using Splunk example data:
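
Something along these lines works well for experimenting (this assumes the Splunk tutorial sample data has been loaded; the index and sourcetype names will differ in your environment):

index=main sourcetype=access_combined_wcookie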

From here we will add iplocation to our search, running it against the clientip field. This adds a few new fields to each event that we can use, such as City, Country, Region, lat, and lon:
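
Continuing with the same sample data:

index=main sourcetype=access_combined_wcookie
| iplocation clientip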

From here we can alter our search with a table to display the information we need. For example, a company that is based in and operates entirely out of the US could consider any traffic going to a foreign country as unauthorized or malicious. Using iplocation in combination with stats values, we are able to list each IP address that is not located inside the US, grouped by the country in which it is located:
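
A sketch of that search (again using the sample data; adjust the index, sourcetype, and field names for your environment):

index=main sourcetype=access_combined_wcookie
| iplocation clientip
| where Country!="United States"
| stats values(clientip) AS "IP Addresses" by Country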

The last thing we will do is clean up our table using rename. This provides a simple way to distinguish where traffic from a specific IP address is coming from:
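
For example, building on the previous sketch:

index=main sourcetype=access_combined_wcookie
| iplocation clientip
| where Country!="United States"
| stats values(clientip) AS "IP Addresses" by Country
| rename Country AS "Source Country"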

Want to learn more about iplocation? Contact us today!

Take Your Traditional OCR up a Notch

By: Greg Moler | Director of Imaging Solutions

While the baseline OCR landscape has not changed much, AWS aims to correct that. Traditional OCR engines are quite limited in the details they can provide. Being able to detect the characters is only half the battle; getting meaningful data out of them is the real challenge. Traditional OCR follows the ‘what you see is what you get’ mantra, meaning that once you run your document through, a blob of seemingly unnavigable text is all you are left with. What if we could enhance this output with other meaningful data elements useful in extraction confidence? What if we could improve the navigation of the traditional OCR block of text?

Enter Textract from AWS, a public web service aimed at improving your traditional OCR experience in an easily scalable, integrable, and low-cost package. Textract is built upon an OCR extraction engine that is optimized by AWS’ advanced machine learning. It has been taught how to extract thousands of different types of forms, so you don’t have to worry about it. The ‘template’ days are over. It also provides a number of useful advanced features that other engines simply do not offer: confidence ratings, word block identification, word and line object identification, table extraction, and key-value output. Let’s take a quick look at each of these:

  • Confidence Ratings: The ability to intelligently accept results, or require human intervention, based on your own thresholds. Building this into your workflow or product can greatly improve data accuracy.
  • Word Blocks: Textract will identify word blocks, allowing you to navigate through them to help identify things like address blocks or known blocks of text in your documents. The ability to identify grouped wording rather than sifting through a massive blob of OCR output can help you find the information you are looking for faster.
  • Word and Line Objects: Rather than getting a block of text from a traditional OCR engine, having code-navigable objects to parse your documents will greatly improve your efficiency and accuracy. Paired with location data, you can use the returned coordinates to pinpoint where text was extracted from. This becomes useful when you know your data is found in specific areas or ranges of a given document, to further improve accuracy and filter out false positives.
  • Table Extraction: Using AWS AI-backed extraction technology, table extraction will intelligently identify and extract tabular data to pipe into whatever your use case may need, allowing you to quickly calculate and navigate these table data elements.
  • Key-Value Output: AWS, again using AI-backed extraction technology, will intelligently identify key-value pairs found on the document without requiring custom engines to parse the data programmatically. Optionally, send these key-value pairs to your favorite key-value engine like Splunk or Elasticsearch (Elastic Stack) for easily searchable, trigger-able, and analytical actions on your document’s data.
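
As a rough illustration of what it takes to get started, here is a minimal Python (boto3) sketch that runs a scanned document through Textract’s analysis API and prints the detected blocks with their confidence scores. The bucket and document names are placeholders, and error handling is omitted:

import boto3

# Placeholder S3 location for the scanned document
BUCKET = "my-invoice-bucket"
DOCUMENT = "scans/invoice-001.png"

textract = boto3.client("textract")

# Ask Textract for form (key-value) and table analysis in addition to raw text
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": BUCKET, "Name": DOCUMENT}},
    FeatureTypes=["FORMS", "TABLES"],
)

# Every result comes back as a Block with a type, confidence, and geometry
for block in response["Blocks"]:
    if block["BlockType"] in ("LINE", "KEY_VALUE_SET", "TABLE", "CELL"):
        text = block.get("Text", "")
        confidence = block.get("Confidence", 0.0)
        print(f'{block["BlockType"]:14} {confidence:6.2f}  {text}')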

Contact us today to find out how Textract from AWS can help streamline your OCR based solutions to improve your data’s accuracy!

Tsidx Reduction for Storage Savings

By: Yetunde Awojoodu | Splunk Consultant

Introduction

Tsidx reduction was introduced in Splunk Enterprise v6.4 to provide users with the option of reducing the size of index files (tsidx files), primarily to save on storage space. The tsidx reduction process transforms full-size index files into minified versions that contain only essential metadata. A few scenarios in which to consider tsidx reduction include:

  • Consistently running out of disk space or nearing storage limits but not ready to incur additional storage costs
  • Have older data that are not searched regularly
  • Can afford a tradeoff between storage costs and search performance

How it works

Each bucket contains a tsidx file (time series index data) and a journal.gz file (raw data). A tsidx file associates each unique keyword in your data with location references to events, which are stored in the associated rawdata file. This allows for fast full text searches. By default, an indexer retains tsidx files for all its indexed data for as long as it retains the data itself.

When buckets are tsidx reduced, they still contain a smaller version of the tsidx files. The reduction applies mainly to the lexicon of the bucket which is used to find events matching any keywords in the search. The bloom filters, tsidx headers, and metadata files are still left in place. This means that for reduced buckets, search terms will not be checked against the lexicon to see where they occur in the raw data. 

Once a bucket is identified as potentially containing a search term, the entire raw data of the bucket that matches the time range of the search will need to be scanned to find the search term rather than first scanning the lexicon to find a pointer to the term in the raw data. This is where the tradeoff with search performance occurs. If a search hits a reduced bucket, the resulting effect will be slower searches. By reducing tsidx files for older data, you incur little performance hit for most searches while gaining large savings in disk usage.

The process can decrease bucket size by one-third to two-thirds, depending on the type of data; for example, a 1GB bucket would decrease in size by roughly 350MB – 700MB. Data with many unique terms requires larger tsidx files. To make a rough estimate of a bucket’s reduction potential, look at the size of its merged_lexicon.lex file, which is an indicator of the number of unique terms in a bucket’s data. Buckets with larger lexicon files have tsidx files that reduce to a greater degree.

When a search hits the reduced buckets, a message appears in Splunk Web to warn users of a potential delay in search completion: “Search on most recent data has completed. Expect slower search speeds as we search the minified buckets.” Once you enable tsidx reduction, the indexer begins to look for buckets to reduce. Each indexer reduces one bucket at a time, so performance impact should be minimal.

Benefits

  • Savings in disk usage due to reduced tsidx files
  • Extension of data lifespan by permitting data to be kept longer (and searchable) in Splunk
  • Longer term storage without the need for extra architectural steps like adding S3 archival or rolling to Hadoop.

Configuration

The configuration is pretty straightforward, and you can perform a trial by starting with one index and observing the results before taking further action on any other indexes. You will need to specify a reduction age on a per-index basis:

  1. On the Splunk UI:
  • Go to Settings > Indexes > select an index > set the tsidx reduction policy.

  2. Splunk Configuration File (indexes.conf):

    [<indexname>]
    enableTsidxReduction = true
    timePeriodInSecBeforeTsidxReduction = <NumberOfSeconds>

The attribute “timePeriodInSecBeforeTsidxReduction” is the amount of time, in seconds, that a bucket can age before it becomes eligible for tsidx reduction. The default is 604800 (7 days).

To check whether a bucket is reduced, run the dbinspect search command:

| dbinspect index=_internal
The tsidxState field in the results specifies “full” or “mini” for each bucket.
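
To get a quick summary of how many buckets in an index are in each state, you can extend the same command, for example:

| dbinspect index=_internal
| stats count by tsidxState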

To restore reduced buckets to their original state, refer to the Splunk Docs.

A few notes

  • Tsidx reduction should be used on old data and not on frequently searched data. You can continue to search across the aged data, if necessary, but such searches will exhibit significantly worse performance. Rare term searches, in particular, will run slowly      
  • A few search commands do not work with reduced buckets. These include ‘tstats’ and ‘typeahead’. Warnings will be included in search.log

Reference Links

https://docs.splunk.com/Documentation/Splunk/7.2.6/Indexer/Reducetsidxdiskusage

https://conf.splunk.com/files/2016/slides/behind-the-magnifying-glass-how-search-works.pdf

https://conf.splunk.com/files/2017/slides/splunk-data-life-cycle-determining-when-and-where-to-roll-data.pdf

Want to learn more about Tsidx Reduction for Storage Savings? Contact us today!

Operating a Splunk Environment with Multiple Deployment Servers

By: Eric Howell | Splunk Consultant

Splunk environments come in all shapes and sizes: from the small single-server installation that manages all of your Splunk needs in one easily managed box, to multi-site, highly complex environments scaled out for huge amounts of data, with all the bells and whistles needed for in-depth visibility and reporting across functionally any use case you can throw at Splunk. And, of course, everything in between.

For the multi-site or multi-homed environments that many data centers require for a range of needs, managing your configurations begins to get complicated between the additional firewall rules, data management stipulations, and the broad range of other issues that might crop up.

Thankfully, Splunk Enterprise allows for your administrative team, or Splunk professional services, to set up a Deployment Server to manage the configurations (bundled into apps) for all of the universal forwarders, so long as they’ve been set up as deployment clients. In a complicated environment, you may find that you need two deployment servers to manage the workload, for any number of reasons. Perhaps you are trying to keep uniform configuration management systems in multiple environments, or perhaps you are aiming to spread the communication load across multiple servers for these deployments. Whatever the use case, setting up two (or more) deployment servers is not the heartache you may be worried about, and the guide below should be ample to get you on the right track.

Multiple Deployment Servers – Appropriate Setup

To set up multiple deployment servers in an environment, you will need to designate one of the Deployment Servers as the “Master” or “Parent” server (DS1). This is likely to be the original deployment server that houses all of the necessary apps, and is likely already serving as deployment server to your environment.

The use case below will allow you to service a multi-site environment where each environment requires the same pool of apps, but is small enough to be serviced by a single deployment server.

  1. Stand up a new box (or repurpose a decommissioned server, as is your prerogative)! Install Splunk on this new server. This will act as your second deployment server (DS2).
  2. The key difference between these servers is that DS2 will actually be a client of DS1.
  3. Initial setup is minimal, but make sure that this server has any standard configurations the rest of your environment holds, such as an outputs.conf to send its internal logs to the indexer layer, if you are leveraging that functionality.
  4. Create a deployment client app on DS2. You could use a copy of a similar app that resides on one of your heavy forwarders that poll DS1 for configuration management, but you will need to make two key adjustments in deploymentclient.conf (see the sketch after this list).
  5. Once this change has been made, the apps that are pulled down from DS1 will reside in the appropriate location on DS2 to be deployed out to any servers that poll it.
  6. Restart Splunk on DS2.
  7. Next, navigate to the Forwarder Management UI on DS1 and create a Server Class for your Slave or Child Deployment Servers (DS2 in this case).
  8. Add all apps to this new server class.
    • Allowing Splunk to restart with these apps is fine, as changes made to the originating Deployment Server (DS1) will allow DS2 to recognize that the apps it holds have been updated and are ready for deployment.
  9. Add DS2 to this Server Class.
  10. Depending on the settings you have configured in deploymentclient.conf on DS2 for its polling period (phoneHomeIntervalInSecs attribute), and how many apps there are for it to pull down from DS1, wait an appropriate amount of time (longer than your polling period) and verify that the apps have all been deployed.
  11. After this, updates made to the apps on DS1 will propagate down to DS2.
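
For reference, a minimal sketch of what the deployment client app on DS2 might contain is shown below. The two key adjustments referenced in step 4 are typically the targetUri (pointing DS2 at DS1) and the repositoryLocation (so apps pulled down from DS1 land in DS2’s own deployment-apps directory, where DS2 can in turn serve them out). The hostname and port here are placeholders:

[deployment-client]
# Have DS2 store apps pulled from DS1 where its own deployment server serves from
repositoryLocation = $SPLUNK_HOME/etc/deployment-apps

[target-broker:deploymentServer]
# Placeholder address: DS2 phones home to DS1
targetUri = ds1.example.com:8089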

Alternative Use Case

If you are planning to leverage multiple deployment servers to service the same group of servers/forwarders, you will also want to copy over the serverclass.conf from DS1. If all server classes have been created through the web UI, the file should be available here:

$SPLUNK_HOME/etc/system/local/serverclass.conf

If this is your intended use case, you will also want to work with your network team to place the deployment servers behind a load balancer. If you do so, you’ll need to modify the following attribute in deploymentclient.conf, in the deployment client app that resides on your forwarders, so that it points at the load balancer’s address:
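
A sketch of that change (the load balancer address is a placeholder):

[target-broker:deploymentServer]
# Forwarders phone home to the load-balanced address rather than an individual deployment server
targetUri = splunk-ds-lb.example.com:8089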

You will also need to make sure both Deployment Servers generate the same “checksums” so that servers polling in and reaching different DS servers do not redownload the full list of apps with each check-in.

To do so, you will need to modify serverclass.conf on both Deployment Servers to include the following attribute:
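
The original snippet is not reproduced here; the setting that serves this purpose is the crossServerChecksum attribute, sketched below with an assumed value:

[global]
# Keep app checksums consistent across the deployment servers
crossServerChecksum = true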

This attribute may not be listed by default, so you may need to include it manually. This can be included with the other attributes in your [global] stanza.

Want to learn more about operating a Splunk environment with multiple deployment servers? Contact us today!

AXF 12c Upgrade Patches and FIPSA Components

By: John Schleicher | Sr. Technical Architect

Introduction

This document contains the patch listing that was assembled during a recent Financials Image Processing Solution Accelerator (FIPSA) upgrade, where the system was upgraded from the 11.1.1.8 (Imaging) release to the 12.2.1.3 release using the standard upgrade process, supplemented by post-upgrade activity to restore the system to full functionality.

The patch listing represents all of the WebLogic server components, inclusive of Business Activity Monitoring (BAM), that were present on the custom solution. If your system doesn’t include BAM, then the additional patches (26404239, 26081565, 28901325) aren’t required.

FIPSA Release

The FIPSA package 12.2.1.3.2 is required for the upgrade, as it contains the libraries and archives necessary for the AXF Solution Workspace and Coding form to run in the 12c environment.

Manual Edit

Due to a modification to the central task service engine that affects the SystemAttributes structure, a single-line edit is required in the InvoiceProcessing.bpel file of the 12.2.1.3.2 FIPSA release. Presumably, this will be addressed by subsequent releases. Ensure that on line 3411 the reference to task:assigneeUsers/task:id is changed to task:updatedBy/task:id. This is the lowest-impact solution and may be adjusted in future releases, but it has been tested and is working.
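
To illustrate the edit (the surrounding XPath shown here is representative only; the exact expression in the file may differ slightly):

Before: .../task:systemAttributes/task:assigneeUsers/task:id
After:  .../task:systemAttributes/task:updatedBy/task:id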

Note that active InvoiceProcessing tasks after upgrade cannot use the ‘SaveTask’ AXF action as the old paradigm will be engaged and the process will fault at the noted ‘assigneeUsers’ reference.  It is recommended that the ‘Save Task’ AXF action be disabled via the Imaging Solution Editor to avoid this fault until such time that active workflow instances are no longer present on that baseline.

Patch Listing

Here is an opatch lsinventory listing of the patches applied to the system, covering BAM, Capture, Content, SOA, and WebLogic:

********************************************************************************

Oracle Interim Patch Installer version 13.9.4.0.0

Copyright (c) 2019, Oracle Corporation.  All rights reserved.

Oracle Home       : /oracle/middleware12c

Central Inventory : /oracle/oraInventory

   from           : /oracle/middleware12c/oraInst.loc

OPatch version    : 13.9.4.0.0

OUI version       : 13.9.3.0.0

Log file location : /oracle/middleware12c/cfgtoollogs/opatch/opatch2019-04-23_10-51-48AM_1.log

OPatch detects the Middleware Home as “/oracle/middleware12c”

Lsinventory Output file location : /oracle/middleware12c/cfgtoollogs/opatch/lsinv/lsinventory2019-04-23_10-51-48AM.txt

——————————————————————————–

Local Machine Information::

Hostname: imaging

ARU platform id: 226

ARU platform description:: Linux x86-64

Interim patches (18) :

Patch  26045997     : applied on Tue Apr 23 10:50:59 MDT 2019

Unique Patch ID:  22112962

Patch description:  “One-off”

   Created on 13 Apr 2018, 23:35:27 hrs UTC

   Bugs fixed:

     26045997

Patch  27133806     : applied on Tue Apr 23 10:41:52 MDT 2019

Unique Patch ID:  22061693

Patch description:  “One-off”

   Created on 27 Mar 2018, 16:59:09 hrs PST8PDT

   Bugs fixed:

     27133806

Patch  25830131     : applied on Tue Apr 23 10:35:35 MDT 2019

Unique Patch ID:  22704908

Patch description:  “One-off”

   Created on 27 Jan 2019, 12:26:12 hrs PST8PDT

   Bugs fixed:

     25830131

   This patch overlays patches:

     28710939

   This patch needs patches:

     28710939

   as prerequisites

Patch  28710939     : applied on Tue Apr 23 10:31:41 MDT 2019

Unique Patch ID:  22540742

Patch description:  “WLS PATCH SET UPDATE 12.2.1.3.190115”

   Created on 21 Dec 2018, 14:25:48 hrs PST8PDT

   Bugs fixed:

     23076695, 23103220, 25387569, 25488428, 25580220, 25665727, 25750303

     25800186, 25987400, 25993295, 26026959, 26080417, 26098043, 26144830

     26145911, 26248394, 26267487, 26268190, 26353793, 26439373, 26473149

     26499391, 26502060, 26547016, 26589850, 26608537, 26624375, 26626528

     26731253, 26806438, 26828499, 26835012, 26929163, 26936500, 26985581

     27055227, 27111664, 27117282, 27118731, 27131483, 27187631, 27213775

     27234961, 27272911, 27284496, 27411153, 27417245, 27445260, 27469756

     27486993, 27516977, 27561226, 27603087, 27617877, 27693510, 27803728

     27819370, 27912485, 27927071, 27928833, 27934864, 27947832, 27948303

     27988175, 28071913, 28103938, 28110087, 28138954, 28140800, 28142116

     28149607, 28166483, 28171852, 28172380, 28311332, 28313163, 28319690

     28360225, 28375173, 28375702, 28409586, 28503638, 28559579, 28594324

     28626991, 28632521

Patch  29620828     : applied on Tue Apr 23 08:57:20 MDT 2019

Unique Patch ID:  22858384

Patch description:  “ADF BUNDLE PATCH 12.2.1.3.0(ID:190404.0959.S)”

   Created on 15 Apr 2019, 17:17:00 hrs PST8PDT

   Bugs fixed:

     23565300, 24416138, 24717021, 25042794, 25802772, 25988251, 26587490

     26674023, 26760848, 26834987, 26957170, 27970267, 28368196, 28811387

     28849860

Patch  29367192     : applied on Tue Apr 23 08:50:38 MDT 2019

Unique Patch ID:  22751712

Patch description:  “One-off”

   Created on 12 Mar 2019, 01:07:01 hrs PST8PDT

   Bugs fixed:

     28843809, 28861250, 28998550, 29259548

   This patch overlays patches:

     28928412

   This patch needs patches:

     28928412

   as prerequisites

Patch  29257258     : applied on Tue Apr 23 08:45:17 MDT 2019

Unique Patch ID:  22807543

Patch description:  “OWEC Bundle Patch 12.2.1.3.190415”

   Created on 16 Apr 2019, 07:02:38 hrs PST8PDT

   Bugs fixed:

     18519793, 18877178, 19712986, 21110827, 21364112, 24702902, 25177136

     25181647, 25693368, 26650230, 27333909, 27412572, 27454558, 27570740

     27578454, 27713280, 27713320, 27839431, 27846706, 28128298, 28179003

     28324896, 28361985, 28373191, 28411455, 28460624, 28517373, 28581435

     28629570, 28705938, 28709611, 28818965, 28878198, 28893677, 28912243

     29197309, 29198801, 29279156, 29285826, 29286452, 29305336, 29305347

     29349853, 29473784, 29620912, 29620944, 29635114

Patch  28901325     : applied on Tue Apr 23 08:36:49 MDT 2019

Unique Patch ID:  22605292

Patch description:  “One-off”

   Created on 30 Nov 2018, 21:05:48 hrs PST8PDT

   Bugs fixed:

     28901325

Patch  26081565     : applied on Tue Apr 23 08:35:28 MDT 2019

Unique Patch ID:  21885885

Patch description:  “One-off”

   Created on 19 Jan 2018, 08:12:44 hrs PST8PDT

   Bugs fixed:

     26081565

Patch  26404239     : applied on Tue Apr 23 08:33:47 MDT 2019

Unique Patch ID:  21885962

Patch description:  “One-off”

   Created on 18 Jan 2018, 21:09:57 hrs PST8PDT

   Bugs fixed:

     26404239

Patch  24950713     : applied on Tue Apr 23 08:24:45 MDT 2019

Unique Patch ID:  22708973

Patch description:  “One-off”

   Created on 29 Jan 2019, 08:18:55 hrs PST8PDT

   Bugs fixed:

     24950713

   This patch overlays patches:

     29142661

   This patch needs patches:

     29142661

   as prerequisites

Patch  29142661     : applied on Wed Apr 17 12:22:50 MDT 2019

Unique Patch ID:  22643444

Patch description:  “SOA Bundle Patch 12.2.1.3.0(ID:181223.0212.0069)”

   Created on 23 Dec 2018, 12:57:19 hrs PST8PDT

   Bugs fixed:

     24922173, 24971871, 25941324, 25980718, 26031784, 26372043, 26385451

     26401629, 26408150, 26416702, 26472963, 26484903, 26498324, 26536677

     26571201, 26573292, 26644038, 26645118, 26669595, 26696469, 26720287

     26739808, 26796979, 26851150, 26868517, 26869494, 26895927, 26935112

     26947728, 26953820, 26957074, 26957183, 26982712, 26997999, 27018879

     27019442, 27024693, 27030883, 27073918, 27078536, 27119541, 27141953

     27150210, 27157900, 27171517, 27210380, 27230444, 27241933, 27247726

     27260565, 27268787, 27311023, 27368311, 27379937, 27411143, 27429480

     27449047, 27486624, 27494478, 27561639, 27627502, 27633270, 27639691

     27640635, 27651368, 27653922, 27656577, 27708766, 27708925, 27715066

     27767587, 27785937, 27832726, 27876754, 27879887, 27880006, 27929443

     27932274, 27940458, 27957338, 28000870, 28034163, 28035648, 28042548

     28053563, 28067002, 28096509, 28163159, 28178811, 28178850, 28265638

     28290635, 28317024, 28324134, 28368230, 28389624, 28392941, 28448109

     28468835, 28597768, 28620247, 28632418, 28702757, 28808901, 28901363

     29005814

Patch  28928412     : applied on Mon Jan 28 13:14:33 MST 2019

Unique Patch ID:  22610612

Patch description:  “WebCenter Content Bundle Patch 12.2.1.3.190115”

   Created on 14 Dec 2018, 02:53:41 hrs PST8PDT

   Bugs fixed:

     16546231, 17278216, 21443677, 23526550, 23567875, 23717512, 24660722

     25051178, 25228941, 25311639, 25357798, 25605764, 25606440, 25801227

     25822038, 25858327, 25885770, 25928125, 25928588, 25979019, 25985875

     26075990, 26105301, 26185222, 26228118, 26283098, 26300787, 26358746

     26415656, 26430590, 26545951, 26574381, 26576630, 26586426, 26596903

     26636302, 26723147, 26732710, 26786056, 26813909, 26820528, 26847632

     26890620, 26893963, 26954901, 27020230, 27065201, 27099662, 27102908

     27119372, 27140730, 27190092, 27190553, 27193483, 27206340, 27233223

     27254464, 27314625, 27319352, 27346199, 27365218, 27383350, 27383732

     27390329, 27396349, 27406356, 27453228, 27457939, 27458003, 27496856

     27502500, 27507189, 27547665, 27574477, 27608152, 27620996, 27648991

     27661839, 27744442, 27771468, 27801161, 27804618, 27814273, 27824132

     27839174, 27877814, 27879502, 27916698, 27921859, 27943295, 27983987

     27984425, 28043459, 28048684, 28098831, 28165088, 28180857, 28185865

     28225141, 28295718, 28302949, 28317851, 28319312, 28378394, 28380642

     28405721, 28425934, 28452764, 28475951, 28481653, 28485796, 28486569

     28556894, 28593461, 28621910, 28635203, 28651169, 28663117, 28704291

     28707740, 28798285, 28872073, 28872314, 28889421, 29011518

Patch  28278427     : applied on Fri Aug 17 08:15:59 MDT 2018

Unique Patch ID:  22374151

Patch description:  “One-off”

   Created on 6 Aug 2018, 05:40:17 hrs PST8PDT

   Bugs fixed:

     28278427

Patch  26355633     : applied on Thu Mar 29 12:51:10 MDT 2018

Unique Patch ID:  21447583

Patch description:  “One-off”

   Created on 1 Aug 2017, 21:40:20 hrs UTC

   Bugs fixed:

     26355633

Patch  26287183     : applied on Thu Mar 29 12:50:58 MDT 2018

Unique Patch ID:  21447582

Patch description:  “One-off”

   Created on 1 Aug 2017, 21:41:27 hrs UTC

   Bugs fixed:

     26287183

Patch  26261906     : applied on Thu Mar 29 12:50:32 MDT 2018

Unique Patch ID:  21344506

Patch description:  “One-off”

   Created on 12 Jun 2017, 23:36:08 hrs UTC

   Bugs fixed:

     25559137, 25232931, 24811916

Patch  26051289     : applied on Thu Mar 29 12:50:26 MDT 2018

Unique Patch ID:  21455037

Patch description:  “One-off”

   Created on 31 Jul 2017, 22:11:57 hrs UTC

   Bugs fixed:

     26051289

Noted Patch Exceptions

The above listing doesn’t leverage the latest bundle patches for SOA or WebLogic Server, as there were overlay patches with dependencies on bundle versions that had yet to be released. Monitor the release of patches 24950713 and 25830131 for inclusion of the latest bundle release.

Conclusion

TekStream has performed the 12.2.1.3 FIPSA upgrade and worked through the issues necessary to restore full functionality on the new baseline.  

Have questions or need assistance with your upgrade? Contact us today!



[1] Application eXtension Framework

Inspyrus Velocity: The Proof is in the…Concept

By: Marvin Martinez | Senior Developer

The Inspyrus Invoice Automation solution can significantly streamline a company’s accounts payable (AP) process. With automated PO matching, workflow routing, and streamlined (even touchless) approvals of purchase orders, it can greatly increase the efficiency of invoice processing. Deep prebuilt integrations into the world’s leading ERP software allow Inspyrus to ensure all exception handling is done upfront, minimizing errors and ensuring accuracy. However, sometimes just hearing about it isn’t enough. Sometimes, one has to see it to believe it. That is where TekStream’s Inspyrus Velocity option can help.

Inspyrus Velocity is a TekStream offering that allows the deployment of a usable Inspyrus implementation, connected to your ERP and Active Directory, for proof-of-concept (POC) and hands-on evaluation purposes. With this offering, a prospective customer can get an idea of the kind of improvement and benefit that the Inspyrus Invoice Automation solution is able to provide. This simplified POC environment, while likely a subset of the entire solution that a customer might require, still showcases plenty of standard Inspyrus features that are sure to impress.

Included with the Inspyrus Velocity offering are the following standard features:

  • Out-of-the-box standard Inspyrus workflows, including 2-way POs, 3-way POs, Non-PO, prepayment, and credit memo invoices
  • Real-time integration to a non-production EBS ERP system
  • AP Initial Review assignments to a single AP work queue
  • Automated pairing for 2-way and 3-way POs
  • Batch Matching to automate receipt matching of 3-way PO invoices if invoice was received before shipment was received
  • Configuration of 1 organizational/operating unit
  • Approval hierarchy for approvals, including email approvals
  • Email monitoring of 1 customer email inbox for invoice ingestion
  • Up to 5 routing reason codes/exception codes
  • Integration to 1 Active Directory domain
  • Dedicated site to site connectivity through VPN for ERP and Active Directory connections
  • Recurring invoices

These out-of-the-box features, while only a subset of the suite of features that the solution offers, still constitute a feature-rich application able to showcase the power, ease of use, and versatility of the Inspyrus Invoice Automation solution. With this proof-of-concept offering via TekStream’s Inspyrus Velocity, a prospective customer can get a feel for how their accounts payable process could be streamlined and how their day-to-day processes could be greatly improved. If an out-of-the-box proof of concept can demonstrate these improvements, imagine how much more additional features like auto-coding and predictive coding of non-PO invoices and customized validation logic for proprietary/internal procedures and policies could do.

Want to learn more about what Inspyrus Invoice Automation can do for you, and even see it working in real-time for your ERP? Contact us today!

Email Routing Using Sendemail in Splunk Enterprise Security

By: Bruce Johnson | Practice Lead, Operational Intelligence

This was the use case scenario: Something went bump in the night. We needed to be able to send alerts from a few specific correlation searches to the security guards after hours and on weekends. Certain categories of activity (e.g., access violations, new accounts being created, accounts being deleted or locked out, security or system logs being cleared, service accounts being used, etc.) needed to alert the after-hours team.

Now there are plenty of tools that do this very effectively (VictorOps among them). We needed something simple and, in Splunk, it really couldn’t be simpler.

The brute force method would have been to create correlation searches that run after hours and send to different email aliases. In other words, you have a different schedule to run a correlation search because you want that correlation search to route to different people, so create a duplicate search with different schedule settings. I suppose this would have been appropriate if the after-hours search had different levels of severity because of the timing, in which case I would have definitely taken that approach, but that was not the case. There is also no way to use cron to do conditionals, so I couldn’t do a single secondary search that would run both after hours on weekdays and all hours on weekends (e.g. <*/15 0,1,2,3,4,5,6,19,20,21,22,23 * * 1-5> OR <*/5 * * * 6-7>). Practically speaking, that would mean three different correlation searches – untenable for Splunkers like me that are aspirationally lazy (not very successful yet but someday).

What we needed was a means to determine whether a search result was run after hours or on weekends and set a flag. Then use a lookup to return the emails that we would route to and pass that as a parameter to the email action set up in the correlation search. This was just so much simpler than I thought it would be.

The lookup (mail_recipients.csv) for routing purposes at its simplest level:

email                          after_hours
bjohnson@whitehouse.gov        1
bruce.johnson@tekstream.com    0
bruce.johnson@match.com        1

I added other columns for userid, escalation level, cc, bcc, and some fields that we might anticipate using should our routing need to become more complex, but for now we focused simply on the “after hours” use case. By the way, the Sendresults app makes sending emails to a column dead simple, but our use case was so basic it really wasn’t needed. If you want to play with it: https://splunkbase.splunk.com/app/1794/

Here’s the search – formatted to use _internal instead of CIM or wineventlog data for testing purposes. The sendemail is included for testing as well. All we need to do in the correlation search is set the routing to $result.recipients$ in the To field of the email action. This may not work if you have no errors in your environment (insert appropriate emoji).
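
Since the original screenshot isn’t reproduced here, a sketch of that test search is shown below: it flags the run as after-hours (weeknights and weekends), pulls the matching recipients from the lookup, and collapses them into a single comma-separated recipients field. The hour boundaries and the hard-coded test address are assumptions for this sketch, and depending on your setup you may need a lookup definition rather than the raw CSV filename:

index=_internal log_level=ERROR
| head 5
| eval hr=tonumber(strftime(now(), "%H")), dow=strftime(now(), "%a")
| eval after_hours=if(dow="Sat" OR dow="Sun" OR hr<7 OR hr>=19, 1, 0)
| lookup mail_recipients.csv after_hours OUTPUT email
| eval recipients=mvjoin(mvdedup(email), ",")
| table _time log_level after_hours recipients
| sendemail to="your.name@example.com" subject="After-hours routing test" sendresults=true inline=true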

In the final version, I pulled out the code between the evals and the recipient creation and put it in a macro (stripping out all the fields I used except for recipients), then inserted the macro into every correlation search that needed the routing.
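
A hypothetical macros.conf version of that macro (the name after_hours_routing is made up for this sketch):

[after_hours_routing]
definition = eval hr=tonumber(strftime(now(), "%H")), dow=strftime(now(), "%a")\
| eval after_hours=if(dow="Sat" OR dow="Sun" OR hr<7 OR hr>=19, 1, 0)\
| lookup mail_recipients.csv after_hours OUTPUT email\
| eval recipients=mvjoin(mvdedup(email), ",")\
| fields - hr dow after_hours email
iseval = 0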

The eventual correlation searches just needed to insert the macro, ensure that the recipients field was in the final result, and change the routing on the email action to go to $result.recipients$ – simple but useful.

The eventual search looked similar to this…
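
For example, one of the correlation searches ended up shaped roughly like this (the index, event codes, and fields are illustrative only, not the actual search):

index=wineventlog (EventCode=4740 OR EventCode=1102)
| stats count, values(user) AS user by host, EventCode
| `after_hours_routing`
| table host EventCode user count recipients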

Next up: Modify the search to use data models and to actually use the max hour for the search so that if the search results that come back have a mix of times that cross the current hour boundary, the most conservative path is chosen.

Want to learn more about email routing in Splunk Enterprise Security? Contact us today!