WFR(ee) Things A Customer Can Do To Improve Extraction

By: William Phelps | Senior Technical Architect


When using Oracle Forms Recognition (“OFR”) or WebCenter Forms Recognition (“WFR”) with the Oracle Solution Accelerator or Inspyrus, clients often engage consulting companies (like TekStream) to fine-tune extraction of invoice data.  Depending on the data to be extracted from the invoice, the terms “confidence”, “training”, and “scripting” are often used in discussing and designing the solution.  While these techniques justifiably have their place, they may be overkill in many situations.

Chances are, if you are reading this article, you are already using WFR, but the extraction isn’t as good as desired.  You may have been using it for quite a while, with less-than-optimal results.

In reality, there are several no-cost options that a customer can (and should) perform before considering ANY changes to a WFR project file or attempting to bring in consulting.  This approach is the “don’t step over a dollar to pick up a dime” approach.  Many seemingly impossible extraction issues are truly and purely data-related, and in all likelihood, these basic steps are going to be needed anyway as part of any solution.  There is a much greater potential return on investment by simply doing the boring work of data cleanup before engaging consulting.

The areas for free improvement should begin by answering the following questions:

  1. Does the vendor address data found in the ERP match the address for the vendor found on the actual invoice image?
  2. Is the vendor defined in the ERP designated as a valid pay site?
  3. In the vendor data, are intercompany and employee vendors correctly marked/identified?
  4. Do you know the basic characteristics of a PO number used by your company?
  5. Are the vendors simply sending bad quality invoice images?

Vendor Address Considerations

The single biggest free boost a customer can give extraction is to actually look at the invoice image for the vendor and compare the address found on the invoice to the information stored in the ERP.  WFR looks at the zip code and address line as key information points.  Mismatches in the ERP data will lower the extraction success rate.  This affects both PO and non-PO vendor extraction from an invoice.

To illustrate this point at a high level, let’s use some basic data tools found within the Oracle database for testing.  The “utl_match” package will give a basic feel for how seemingly minor string differences can affect calculations.

Using utl_match.edit_distance_similarity in a simple query, two strings can be compared as to how similar the first string is to the second.  A higher return value indicates a closer match.

  • This first example shows the result when a string (“expresso”) is compared to itself, which unsurprisingly returns 100.

  • Changing just one letter can affect the calculation in a negative direction. Here, the second letter of the word is changed from an “x” to an “s”.  Note the decrease in the calculation.

  • The case in the words can matter to a degree as well for this comparison. Simply changing the first letter to uppercase will result in a similar reduction.

  • Using the Jaro-Winkler function, which tries to account for data entry errors, the results are slightly better when changing from “x” to “s”.
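WFR’s internal scoring is proprietary, but the flavor of these comparisons can be reproduced without a database.  Below is a rough Python sketch (not Oracle’s actual code) of an edit-distance similarity score patterned after utl_match.edit_distance_similarity; the exact rounding Oracle applies may differ:

```python
# Sketch of an edit-distance similarity score in the spirit of
# utl_match.edit_distance_similarity: 100 * (maxlen - distance) / maxlen.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def similarity(a: str, b: str) -> float:
    """Score from 0 to 100; higher means a closer match."""
    longest = max(len(a), len(b)) or 1
    return 100.0 * (longest - edit_distance(a, b)) / longest

print(similarity("expresso", "expresso"))  # identical strings -> 100.0
print(similarity("expresso", "espresso"))  # one letter changed -> 87.5
print(similarity("expresso", "Expresso"))  # case change alone  -> 87.5
```

A single changed character in an eight-character string already costs over twelve points, which is exactly why small ERP data mismatches hurt.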

Let’s now move away from theory.  In more of a real-world example, consider the following zip code strings, where the first zip code is a zip + 4 that may be found on the invoice by WFR, and the second zip code is the actual value recorded in the ERP.

In the distance similarity test, the determination is that the strings are only a 50% match.

However, Jaro-Winkler is a bit more forgiving.  There is still a difference, but the score is much closer to a match.
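For readers who want to experiment without a database handy, here is a self-contained Python sketch of the textbook Jaro-Winkler metric, scaled to 0–100 like utl_match.jaro_winkler_similarity (results may differ slightly from Oracle’s implementation):

```python
def jaro(a: str, b: str) -> float:
    """Textbook Jaro similarity between two strings (0.0 to 1.0)."""
    if a == b:
        return 1.0
    la, lb = len(a), len(b)
    window = max(la, lb) // 2 - 1          # how far apart matches may sit
    matched_a, matched_b = [False] * la, [False] * lb
    matches = 0
    for i, ca in enumerate(a):
        for j in range(max(0, i - window), min(lb, i + window + 1)):
            if not matched_b[j] and b[j] == ca:
                matched_a[i] = matched_b[j] = True
                matches += 1
                break
    if not matches:
        return 0.0
    sa = [c for c, m in zip(a, matched_a) if m]
    sb = [c for c, m in zip(b, matched_b) if m]
    transpositions = sum(x != y for x, y in zip(sa, sb)) / 2
    return (matches / la + matches / lb + (matches - transpositions) / matches) / 3

def jaro_winkler(a: str, b: str, p: float = 0.1) -> float:
    """Boost the Jaro score for strings sharing a common prefix (max 4 chars)."""
    score = jaro(a, b)
    prefix = 0
    for ca, cb in zip(a, b):
        if ca != cb or prefix == 4:
            break
        prefix += 1
    return score + prefix * p * (1 - score)

# Zip+4 from the invoice vs. the bare zip stored in the ERP:
print(round(100 * jaro_winkler("12345-6789", "12345")))  # 90
```

The shared “12345” prefix pulls the score up to roughly 90, versus 50 for the plain edit-distance test, which is why prefix-weighted measures are more forgiving of zip+4 mismatches.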

The illustrations above are purely representative and do not reflect the exact process used by WFR to assign “confidence”.  However, they visually highlight the impact of data accuracy.

The takeaway from this ERP data quality discussion should be that small differences between what appears on the invoice and the data found in the ERP matter.  This data cleanup is “free” in the sense that the customer can (and should) undertake this operation without spending consulting dollars.

Both the Inspyrus and Oracle Accelerator implementations of the WFR project leverage a custom vendor view in the ERP.

  • Making sure this view returns all of the valid vendors is critical for correct identification of the vendor. A vendor that is not returned by this view cannot be found by WFR, plain and simple, since the WFR process collects and stores the vendor information from this view for processing.
  • Also, be sure in this view to filter out intercompany and employee vendor records. These vendor types are typically handled differently, and the addresses of these kinds of vendors typically appear as the bill-to address on an invoice.  Your company address appearing multiple times on the invoice can lead to false positives.
  • In EBS, there is a concept of “pay sites”. A “pay site” is where the vendor/vendor site combination is valid for accepting payments and purchases.  Be sure to either configure the vendor/vendor site combination as a pay site, or look to remove the vendor from the vendor view.
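The filtering the bullets above describe can be sketched in a few lines.  The record layout and field names below (vendor_type, pay_site_flag) are purely illustrative, not the actual EBS column names; the real work belongs in the custom vendor view’s SQL:

```python
# Hypothetical vendor records; in practice this logic lives in the
# custom vendor view that WFR reads from the ERP.
vendors = [
    {"name": "Acme Supply",  "vendor_type": "STANDARD",     "pay_site_flag": "Y"},
    {"name": "Our Sub, LLC", "vendor_type": "INTERCOMPANY", "pay_site_flag": "Y"},
    {"name": "Jane Doe",     "vendor_type": "EMPLOYEE",     "pay_site_flag": "Y"},
    {"name": "Beta Parts",   "vendor_type": "STANDARD",     "pay_site_flag": "N"},
]

def eligible(v: dict) -> bool:
    """Keep standard vendors whose vendor/site combination is a valid pay site;
    drop intercompany and employee records, which are handled differently."""
    return v["vendor_type"] == "STANDARD" and v["pay_site_flag"] == "Y"

print([v["name"] for v in vendors if eligible(v)])  # ['Acme Supply']
```

Only the standard, pay-site vendor survives; the intercompany and employee records that would generate bill-to false positives are excluded before WFR ever sees them.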

PO Number Considerations

On a similar path, take a good look at your purchase order number information.  WFR operates on the concept of looking for string patterns that may or may not be representative of your organization’s PO number structure.  For example, when describing the characteristics of your company’s PO numbers, these are some basic questions you should answer:

  • How long are our PO numbers? 3 characters? 4 characters? 5 or more characters? A mix?  What is that mix?
  • Do our PO numbers contain just digits? Or letters and digits? Other special characters?
  • Do our PO numbers start with a certain sequence? For example, do our PO numbers always start with 2 random letters? Or two fixed letters like “AB”? Or three characters like “X2Z”?

Answering this seemingly basic set of questions allows WFR to be configured to only consider the valid combinations.

  • By discarding the noise candidates, better identification and extraction of PO number data can occur.
  • More accurate PO number extraction can lead to increased efficiency in line-item data extraction, since the PO data from the ERP can be leveraged/paired, and can lead to better vendor extraction since the vendor can be set based on the PO number.

Avoid trying to be too general with this exercise.  Casting too wide a net will actually make things worse.  Simply saying “our PO numbers are all numbers 7 to 10 digits long” will result in configurations that pick up zip codes, telephone numbers, and other noise strings. If there are too many variations, concentrate on vendors using the 80/20 rule, where 80% of the invoices come from 20% of the vendor base.
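The difference between a precise pattern and an overly general one is easy to demonstrate.  In this hedged Python example, the PO format (“AB” followed by exactly six digits) is purely hypothetical:

```python
import re

# Illustrative only: assume our POs are "AB" followed by exactly six digits.
precise = re.compile(r"\bAB\d{6}\b")
# Too general: "7 to 10 digits" also matches zip+4 strings and phone numbers.
too_broad = re.compile(r"\b\d{7,10}\b")

text = "Remit to zip 303031234, phone 4045551212, PO AB123456."

print(precise.findall(text))    # ['AB123456']
print(too_broad.findall(text))  # ['303031234', '4045551212']
```

The precise pattern finds exactly one candidate, while the digits-only pattern surfaces only noise: a nine-digit zip+4 and a ten-digit phone number, neither of which is the PO.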

General Invoice Quality

Now, one might think “I cannot tell the vendor what kind of invoice to send.”  That’s not an accurate statement at all.  If explained correctly, and provided with a proper incentive, the vendor will typically work to send better invoices.  WFR is very forgiving, but not perfect, and looking at the items in the following list will help.

  • Concentrate initially on the vendors who send in high volumes of invoices.
  • Make sure the invoices are good quality images with no extra markings covering key data, like PO numbers, invoice numbers, dates, total amounts, etc.
  • Types of marks could be handwriting, customs stamps, tax identification stamps, mailroom stamps, or other non-typed or machine-generated characters. Dirty rollers on scanners can leave a line across the image.

Hopefully, this article gives an idea of the free things that can be done to increase the efficiency of WFR.

Want to learn more? Contact us today!

How to use Visual Builder to Create Public Facing Functionality for Sites

By: Courtney Dooley | Technical Architect


Content and Experience Cloud form functionality such as Contact Us, Feedback, and Survey information is not offered out of the box.  In fact, developing these forms and functionality can sometimes require additional services to be purchased or custom development to be implemented.  But if you have Integration Cloud Service, you may not realize that Visual Builder offers a publicly accessible form and process that can be used by sites built within Oracle Content and Experience Cloud.

Visual Builder

Visual Builder is a Platform as a Service (PaaS) cloud-based solution that offers the ability to create Web Applications and Mobile Applications, define Service Connections, and even integrate with Process Cloud.  Although many of these functions require authentication, Visual Builder does have the unique option of publicly accessible applications.  In the Feedback use case, we will use Business Objects to define and handle the feedback functionality for public-facing sites.  Although this functionality could be handled using a Web or Mobile Application, business objects are quick to set up and configure.

Building Options

The main menu of Visual Builder displays the options below.

  • Mobile Applications
  • Web Applications
  • Service Connections
  • Business Objects
  • Components
  • Processes – Integration with Oracle Integration Cloud Process Applications

For both the Mobile and Web Applications, form development and data structures are available for customization and modification to meet the needs of any service.

Additional services can be configured within Service Connections and then called by a form function or workflow.  These services can be selected from a catalog of predefined services, defined by a specification document, or created by specifying the endpoint for the service.

Components are elements which can be added to a form such as Images, Text, Buttons, Menus, and Links.  Field types such as dropdowns, text inputs, rich text, and specific field types such as Currency, Email, Phone etc. are all available out of the box.

Business Objects

A quick and easy way to create a public service is by creating a Business Object.

  • Overview – besides general properties, relationships can be established to other business objects for other services.
  • Fields – define information to be received and used within the service including audit fields such as creationDate, createdBy etc.
  • Security – set the authentication needed for the service. In the case of a public service, selecting Anonymous User permissions allows for public access.

  • Business Rules – define how to handle the information being provided. Below are the types of handlers which can be defined.
    • Object Triggers – we will use this one in our Feedback Use Case
    • Field Triggers
    • Object Validators
    • Field Validators
    • Object Functions
  • Endpoints – a base set of API endpoints created automatically when the Business Object is created
  • Data – shows all processed data for development, staging, and live processes, including the ability to query specific data.


Feedback Use Case

For a simple Feedback Form that can be made public in Content and Experience Sites, we created the Business Object as described in the previous section.  We then specified the fields we expect from the Feedback form and configured whether each is required, unique, and searchable.

Lastly, we added an Object Trigger business rule that executes before a new feedback record is inserted.  This Business Rule will simply send the feedback data to a specific email inbox.


New Actions can be added by clicking on the plus sign within the process flow diagram, then configuring the action to take.

The Email information can be configured by clicking the edit pencil on the Action.  The Email address can be a set value as shown below, or it can be an expression where the value is derived from a service or other data.

Once the business object is configured and saved, the form to present on the site can be created one of two ways.

  1. Create a Web Application that provides the form and, on submit, inserts the business object, which will process the notification. This form would then be presented to users via an iframe.
  2. Create the form on a Content and Experience Cloud layout or custom component which calls the Visual Builder Cloud Service API for that business object on submit.
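For the second approach, the site component POSTs the form fields to the business object’s REST endpoint.  The sketch below is heavily hedged: the URL shape and field names are placeholders — copy the real URL from the business object’s Endpoints tab — and the function only builds the request so the payload can be inspected before sending:

```python
import json
import urllib.request

# Hypothetical endpoint: substitute the real URL shown on the business
# object's Endpoints tab in Visual Builder.
VB_ENDPOINT = ("https://example-vb-instance.oraclecloud.com"
               "/ic/builder/rt/feedback/live/resources/data/Feedback")

def build_feedback_request(name: str, email: str, comments: str):
    """Package the form fields (names are illustrative) as a JSON POST.
    Inserting the record fires the Object Trigger that sends the email."""
    body = json.dumps({"name": name, "email": email, "comments": comments})
    return urllib.request.Request(
        VB_ENDPOINT,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_feedback_request("Pat", "pat@example.com", "Great site!")
print(req.get_method())  # POST
# To actually submit: urllib.request.urlopen(req)
```

On a Content and Experience Cloud site the same POST would typically be made from the component’s JavaScript; the payload and endpoint are identical.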

The Feedback service will not be available until it has been Staged then Deployed, but once deployed, it should be available for use on any public-facing site.

Contact us for more tips and tricks on developing Oracle Visual Builder Cloud Service Applications!

How to Filter Out Events at the Indexer by Date in Splunk

By: Jon Walthour | Splunk Consultant


A customer presented me with the following problem recently. He needed to be able to exclude events older than a specified time from being indexed in Splunk. His requirement was more precise than excluding events older than so many days ago. He was also dealing with streaming data coming in through an HTTP Event Collector (HEC). So, his data was not file-based, where an “ignoreOlderThan” setting in an inputs.conf file on a forwarder would solve his problem.

As I thought about his problem, I agreed with him—using “ignoreOlderThan” was not an option. Besides, this would work only based on the modification timestamp of a monitored file, not on the events themselves within that file. The solution to his problem needed to be more granular, more precise.

We needed a way to exclude events from being indexed into Splunk through whatever means they were arriving at the parsing layer (from a universal forwarder, via syslog or HEC) based on a precise definition of a time. This meant that it had to be more exact than a certain number of days ago (as in, for example, the “MAX_DAYS_AGO” setting in props.conf).

To meet his regulatory requirements for retention, my customer needed to be able to exclude, for example, events older than January 1 at midnight and do so with certainty.

As I set about finding (or creating) a solution, I found “INGEST_EVAL,” a setting in transforms.conf. This setting was introduced in version 7.2. It runs an eval expression at index-time on the parsing (indexing) tier in a similar (though not identical) way as a search-time eval expression works. The biggest difference with this new eval statement is that it is run in the indexing pipeline and any new fields created by it become indexed fields rather than search-time fields. These fields are stored in the rawdata journal of the index.

However, what if I could do an “if-then” type of statement in an eval that would change the value of a current field? What if I could evaluate the timestamp of the event, determine if it’s older than a given epoch date and change the queue the event was in from the indexing queue (“indexQueue”) to oblivion (“nullQueue”)?

I found some examples of this in Splunk’s documentation, but none of them worked for this specific use case. I also found that “INGEST_EVAL” is rather limited in which eval functions it supports.  Functions like “relative_time()” and “now()” don’t work. I also found that, at the point in the ingestion pipeline where Splunk runs these INGEST_EVAL statements, fields like “_indextime” aren’t yet defined. This left me with the older “time()” function. So, when you’re working with this feature in the future, be sure to test your eval expression carefully, as the documentation does not yet fully cover which functions work at index time.


Here’s what I came up with:

INGEST_EVAL = queue=if(substr(tostring(1577836800-_time),1,1)="-", "indexQueue", "nullQueue")


The key is in the evaluation of the first character of the subtraction in the “queue=” calculation. A negative number yields a “-” for the first character; a positive number yields a digit. Generally, negative numbers are “younger than” your criteria and positive numbers are “older than” it. You keep the younger events by sending them to the indexQueue (by setting “queue” equal to “indexQueue”) and you chuck older events by sending them to the nullQueue (by setting “queue” equal to “nullQueue”).
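Note that 1577836800 is the epoch timestamp for January 1, 2020 at midnight UTC.  The sign trick is easy to sanity-check outside Splunk; here is a small Python mirror of the eval logic (illustrative only — Splunk itself evaluates the conf expression at index time):

```python
CUTOFF = 1577836800  # epoch for 2020-01-01 00:00:00 UTC

def route(event_time: int) -> str:
    """Mirror the INGEST_EVAL: cutoff - _time is negative for events newer
    than the cutoff, so a leading '-' means keep the event."""
    first_char = str(CUTOFF - event_time)[0]
    return "indexQueue" if first_char == "-" else "nullQueue"

print(route(1577836801))  # one second after the cutoff -> indexQueue
print(route(1514764800))  # a 2018 event               -> nullQueue
```

One boundary detail: an event timestamped exactly at the cutoff yields “0”, not “-”, so it is routed to the nullQueue along with the older events.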

Needless to say, my customer was pleased with the solution we provided. It addressed his use case “precisely.” I hope it is helpful for you, too. Happy Splunking!

Have a unique use case you would like our Splunk experts to help you with? Contact us today!

Effective Use of Splunk and AWS in the Time of Coronavirus

By: Bruce Johnson | Director, Enterprise Security

Firstly, be safe and be well. The TekStream family has found itself pulling together in ways that transcend remote conference calls and we hope that your respective organizations are able to do the same. We feel very privileged to be in the Splunk ecosystem as uses for Splunk technology are becoming ever more immediate.

To that end, we have seen all of our customers putting emphasis on monitoring remote access. Was any company sizing its network for virtualizing its entire ecosystem overnight? Network access points were sized for pre-determined traffic profiles leveraging pre-determined bandwidth levels for remote access. Those network appliances were configured to support predictable traffic volumes from occasionally remote workers; they weren’t designed to support 100% of all internal access traffic. Overnight, the services supporting remote users became the most critical part of your infrastructure, and operational monitoring of those services became just as critical.

Likewise, what you were monitoring for security just got hidden in a cloud of chaff. The changes to network traffic have opened you up to new threats that demand immediate attention.

Security Impact

There are several new areas of concern in the context of the current climate:

Your threat surface has changed

Anomalies relative to RDP sessions or escalation of privileges for remotely logged in users used to be a smaller percentage of traffic and might have figured into evaluating potential threat risk. Obviously that is no longer the case. If you’re able to segregate traffic for access to critical systems from traffic that simply needs to be routed or tunneled to other public cloud-provided applications, that would help cut down on the traffic that needs to be monitored but that will require changes to network monitoring and Splunk searches.

Your policies and processes need to be reviewed and revised

Have you published security standards for home networks for remote workers? Do you have policies relative to working in public networks? Do you have adequate personal firewalls in place or standard implementations for users wanting to implement security add-ons for their home networks or work-provided laptops?

Some employees might now be faced with working on home networks that are not adequate for the bandwidth needs of video conferencing and may opt to work from shared public access points (although they might have to make do with working from the Starbucks parking lot, as internal access is prohibited). Many do not have secure wireless access points or firewalls on their home networks. Publishing links to your employees on how to implement additional wi-fi security and/or products that are supported for additional security, as well as how to ensure access to critical systems through supported VPN/MFA methods, is worth doing even if you have done it before. There is also the potential expansion of access to include personal devices in addition to company-owned devices. They will need to have the same level of security, and you will also need to consider the privacy implications of employee-owned devices connecting to your business network.

Likewise, help desk resources in support of these efforts, as well as level-1 security analysts monitoring this type of activity, might need to be shifted or expanded.

New threats have emerged

Hackers don’t take the day off because they have to work from home, and there are several creative threats that take advantage of Coronavirus panic. Hackers are nothing if not nimble. There are several well-publicized attacks which seek to take advantage of users anxious for more information on the progress of the pandemic. The World Health Organization (WHO) and the U.S. Federal Trade Commission (FTC) have publicly warned about impersonators. Thousands of domains are being registered every day in support of Coronavirus-related phishing attacks. Some of them even target hospitals, which takes “unethical” hacking to a brand new low. Additionally, there are new threat lists to consider; for example, RiskIQ is publishing a list of rapidly expanding domains related to Coronavirus.

Stepping up the normal Splunk monitoring for those domains, moving up plans to augment email filtering, setting up a mailbox that Splunk ingests for reported attacks that can be easily forwarded from end-users that suspect a phishing email, or augmenting your Phantom SOAR implementation to highlight automated response to specific phishing attacks are all appropriate in that context.

Operational impact


VPN Monitoring

If you are not currently monitoring VPN usage in Splunk, it is relatively straightforward to onboard VPN/firewall data sources and to begin monitoring utilization and health from those appliances. It is useful to monitor network traffic as a whole relative to VPN bandwidth, as well as the normal CPU/memory metrics coming from those appliances directly.

If you’re already monitoring VPN traffic (and you likely are if you have Splunk), you need, at the very least, to alter your thresholds for what constitutes an alert or an anomaly.

The following are examples of dashboards we’ve built to monitor VPN-related firewall traffic as well as CPU/memory:

In addition to straightforward monitoring of the environment, expect troubleshooting tickets to increase. Detailed metrics relative to the connectivity errors might need to be monitored more closely or events might be expanded to make troubleshooting more efficient. Below is an example of Palo Alto Splunk dashboards that track VPN errors:

There are several out of the box applications from Splunk for VPN / NGFW sources including but not limited to:

Palo Alto: Includes firewall data that monitors bandwidth across key links. Additionally, Global protect VPN monitoring can help customers with troubleshooting remote access.

Zscaler: Provides visibility into remote access, no matter where the users are connecting from.

Cisco: Provides equivalent functionality to populate dashboards around remote access and bandwidth on key links.

Fortinet: Provides the ability to ingest Fortinet FortiGate traffic.

Nagios: Monitors the network for problems caused by overloaded data links or network connections, also monitors routers, switches and more.

One of the techniques to consider in response to this spike in volume is to split network traffic on your VPNs to segregate priority or sensitive traffic from traffic that you can pass through to external applications.

Split tunneling can be used to route traffic, and it’s being recommended by Microsoft for O365 access. This also affects how VPN traffic and threats are monitored through established tunnels. Obviously, the traffic to internal critical infrastructure and applications would be the priority, and all externally routed traffic could be, if not ignored, at least de-prioritized.

Additionally, MFA applications fall into much the same category as monitoring of VPN sources and the same types of use cases apply to monitoring those sources. Below are a subset of relevant application links.

RSA Multifactor Authentication

Duo Multifactor Authentication

Okta Multifactor Authentication


If you’re familiar with Splunk, you already know that it is typically only a few hours of effort to onboard a new data source and begin leveraging it in the context of searches, dashboards, and alerts.

Engaged Workers

There are some people that are focused in an office environment; then there are the people that work at home with one cup of coffee that can fuel them until they are dragged away from their laptop; then there are the people that have way too much to do around the house to bother with work. It’s nice to know whether people are actually plugged in. A whole new demand for monitoring remote productivity is fueling new solution offerings from Splunk. They have developed a set of dashboards under the name Remote Work Insights.

Of course, monitoring your VPN and conferencing software is just the beginning, and there are a plethora of sources that might be monitored to measure productivity.  Often those sources vary by team and responsibilities.  The power inherent in Splunk is that each team can be monitored individually with different measures and aggregated into composite team views at multiple levels, similar to ITSI monitoring of infrastructure layers and components.  We are finding a great deal of opportunity in this area, and it is expected to be a set of techniques and solutions that will persist well beyond the shared immediate challenges of Coronavirus.

A related use case for VPN monitoring is to track login and logout to confirm that people are actually logging in rather than social distancing on the golf course, but this use case has been less common in practice.

Migrate VPN services to the cloud

Ultimately, when faced with dynamic scaling and provisioning problems, the cloud is your answer. If your VPN infrastructure is taxed, the traffic is now completely unpredictable, and there is no way to scale up your network appliances in the short term, consider moving VPN services to cloud connectivity points. You can move network security to the cloud and consume it just like any other SaaS application. This has the advantage of being instantly scalable up and down (once normal operations resume) as well as being secure. Implementation can be done in parallel to your existing VPN network-based solutions. Virtualizing VPN in AWS is relatively straightforward and it’s certainly something TekStream can help you to accomplish in short order. It has the advantage of scaling and doing so temporarily. There are a variety of options to consider.

AWS Marketplace has VPN appliances you can deploy immediately. This is a good approach if you are already using a commercial-grade VPN like a Cisco ASA or Palo Alto. It will have the least impact on existing users, since they can continue to use the same client and simply point their connection to a new hostname or IP, but it can be a bit pricey.  Some examples of commercial options from the AWS Marketplace are:

Cisco ASA:

Barracuda Firewall:

Juniper Networks:


You can use AWS’s managed VPN service. This is a great “middle of the road” compromise if you don’t currently have a VPN.  As a managed service, AWS handles a lot of the nuts and bolts, and you can get up and connected quickly.  Your users will connect to the AWS VPN, which connects to your AWS VPC (which is connected to your datacenter, network, on-prem resources, etc.). As a fully managed client-based VPN, you can manage and monitor all your connections from a single console.  AWS VPN is an elastic solution that leverages the cloud to automatically scale based on user demand, without the limitations of a hardware appliance.  It may also allow you to take advantage of additional AWS-provided security mechanisms like rotating keys, credentialing, etc. to augment your security practices.

Finally, if you need something quick and have a smaller number of users, you can deploy your own VPN software on an EC2 instance and “roll your own.” While this can be quick and dirty, it can also be error-prone, less secure, and introduce a single point of failure, and it has to be manually managed.

Additional Services

There are a whole host of ancillary supporting services which might need to be expanded for inclusion into Splunk, such as Citrix, Webex, Skype, VoIP infrastructure, Teams, etc.  Below is an example of an Australian customer monitoring video conferencing solutions with Splunk ITSI, but TekStream has also been engaged to build out monitoring of critical VoIP infrastructure and relate it to multi-channel support mechanisms including web and chat traffic. The point is that all of these channels might have just become critical infrastructure.


Many of the above recommendations can be accomplished in days or weeks. If there is an urgent need to temporarily expand your license to respond to the Coronavirus threat, that might be possible in the short term as well. With uncertainty around the duration of the pandemic, an all-out response from infrastructure, to processes and procedures, to operations and security seems warranted.

Your business can’t afford to fail. TekStream is here to help if you need us.

The Power of Splunk On-The-Go with Splunk Mobile and Splunk Cloud Gateway

By: Pete Chen | Splunk Practice Team Lead


Splunk can be a powerful tool in cybersecurity, infrastructure monitoring, and forensic investigations. While it’s great to use in the office, after-hour incidents require the ability to have data available immediately. Since most people carry a mobile device, such as a cell phone or a tablet, it’s easy to see how having dashboards and alerts on a mobile device can help bridge the information gap.

Splunk Mobile brings the power of Splunk dashboards to mobile devices, powered by Splunk Cloud Gateway. While Splunk Mobile is installed on a mobile device, Splunk Cloud Gateway feeds the mobile app from Splunk Enterprise. Between the two applications is Splunk’s AWS-hosted Cloud Bridge. Traffic between Splunk Enterprise and the mobile device is protected by TLS 1.2 encryption.

Architecture from Splunk

Splunk Cloud Gateway

Software Download

Splunk Cloud Gateway is a standard app found on Splunkbase (link above). It can be installed through the User Interface (UI) or by unpacking the file to <SPLUNK_HOME>/etc/apps/. When installed through the UI, Splunk will prompt for a restart once installation is complete. Otherwise, restart Splunk once the installation package has been unpacked into the apps folder.

After the restart, Splunk Cloud Gateway will appear as an app on Splunk Web. Browse to the app; these are the pages available in it:

The first page allows for devices to be manually registered. When Splunk Mobile is opened for the first time (or on a device not registered to another Splunk Cloud Gateway instance), an activation code will appear at the center of the display. That code can be used to register the device on Splunk. The “Device Name” field can be any value, used to identify that particular device. It’s helpful to identify the main user of the device and the type of device.

Skipping over Devices until a device is registered, and putting aside Splunk > AR for another time, the next important section is the “Configure” tab. At the top of the page, all the deployment configurations are listed. The Cloud Gateway ID can be modified through a configuration file to better reflect the environment. A configuration file can be downloaded for a Mobile Device Manager (MDM). This is also where the various products associated with Splunk Connected Experiences can be enabled.

In the Application section, look for Splunk Mobile. Under the Action column, click on Enable. This must be done before a device can be registered.

The App Selection tab is where apps can be selected, based on each user’s preference, to determine which dashboards are visible through Splunk Mobile. When no apps are selected, all available dashboards are displayed. Select the desired apps by clicking them in the left panel, and they will appear in the right panel. Be sure to click Save to commit the changes.

A few things to point out in this section:

  • Again, if an app is not selected, all dashboards available to the user will appear on Splunk Mobile.
  • Management of apps is based on the user, not centrally managed. During the registration of a device, a user must log in to authenticate. The apps selected in this page will be the same for all devices registered under this user.
  • Even if apps are specified, all dashboards set with global permissions will still be visible to the user.
  • To fully control what is viewable, set all dashboards to app-only permissions and create a generic app that contains no dashboards. Once the dashboards are converted to app-only permissions and that empty app is selected, no dashboards will appear.
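In configuration terms, a dashboard's sharing level lives in its app's metadata files. The following sketch (app and dashboard names are hypothetical) shows the difference between app-only and global sharing in `local.meta`:

```
# $SPLUNK_HOME/etc/apps/<app_name>/metadata/local.meta (sketch)

# App-only sharing: no "export" line, so the dashboard is visible
# only within its own app.
[views/my_dashboard]
access = read : [ * ], write : [ admin ]

# Global sharing -- the setting that makes a dashboard appear
# everywhere, including Splunk Mobile -- adds:
# export = system
```

Removing `export = system` (or setting sharing back to "App" in the UI permissions dialog) is what keeps a dashboard out of the catch-all global list.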

The final tab is the dashboard for Splunk Cloud Gateway. This dashboard shows the status of the app, and provides metrics of usage. The top three panels may be the most important when first installing Cloud Gateway. If the service doesn’t seem to be working correctly, these three panels will help in troubleshooting the service.


Splunk Mobile

Google Play Store
Apple App Store

Installing Splunk Mobile on a mobile device is as simple as downloading it from the app store. Launching the app brings up a registration page showing the activation code needed to register the device with Splunk Cloud Gateway. Below it is a secondary confirmation code, used to verify with Cloud Gateway that the device is registered with the correct encryption key.

With the code above, return to Splunk Cloud Gateway and register the device. Type in the activation code from Splunk Mobile, enter a device name as explained above, and click “Register” to continue.

Validate the confirmation code displayed in the UI with the code displayed on the device. If the codes don’t match, stop the registration process. If the codes do match, enter credentials for Splunk, and click “Continue”.

At this point, the device is registered with Splunk Cloud Gateway. Validate the device name in the Registered Devices page, and make sure the Device Type and Owner match the device and user. If necessary, “Remove” is available to remove a device from Cloud Gateway.

From a mobile perspective, the initial page displayed is the list of potential alerts.

At the bottom of the screen, tap on “Dashboards” to see the list of dashboards available to the mobile device. Without any additional configuration, all available Splunk dashboards should appear in the list. Tap any dashboard to open it.

As an example, when the Cloud Gateway Status Dashboard is selected, the dashboard opens with a time selector at the top of the page. The panels available in the UI are displayed in a single column on the mobile device.

Points to Consider

Now that Splunk Mobile and Splunk Cloud Gateway are configured, and ready to be used, here are some points to consider in an Enterprise deployment.

  • When installing on a search head cluster, Splunk Cloud Gateway must be installed on the cluster captain. The captain runs some of the scripts necessary to connect Cloud Gateway to the Spacebridge.
  • All dashboards set with global permissions will appear. To limit visibility, set dashboard permissions to app-only or private.
  • During device registration, the credentials used will determine the dashboards and alerts available to the device. Configuration is user-based, not centrally controlled.
  • Trellis is not a supported feature of Splunk Mobile. Dashboards with panels using trellis will need to be reconfigured.
  • Panel sizing and scaling is not adjustable at this time. Some dashboard re-design may be necessary to tell the best story.
  • Pay special attention to how long dashboards take to load. From a mobile perspective, dashboards will need to load faster for the mobile user.

Want to learn more about Splunk Mobile and Splunk Cloud Gateway? Contact us today!

How to Extract Your PO Numbers Consistently in Oracle’s Forms Recognition AP Project

By: William Phelps | Senior Technical Architect

One of the thornier issues when working with Oracle’s Forms Recognition Accounts Payable (“AP”) project is simply and correctly determining and extracting the right purchase order number from the invoice image.  This seemingly mundane task is further complicated when the purchase order number is a simple string of digits, much like, and sometimes confused with, telephone numbers, serial numbers, shipment numbers, and similar purely numeric strings found on the invoice.

This is a common problem for many companies using the AP Solution project, and it’s a fair bet that if you are reading this article, your company has the same or similar issue.

Let’s note upfront that there is no single magic bullet that will fix all extraction problems.  This article is intended as a fine-tuning methodology to apply once the very basic solutions and ERP data cleanup have occurred.  Only at that point, when the easy work has been done, should any additional techniques be applied.  (A certified partner can help make these advanced changes with less overall effort and better end results.)

In general terms, the Oracle AP project provides a process called “PO masking” that allows the customer to tell the software about the general characteristics of their PO number structure.  This approach uses somewhat simple regular expressions (or “masks”) to flag strings encountered while parsing the invoice text as viable PO number “candidates”.  This kind of generalized setup almost always produces extraneous candidates.  The process then ranks the candidates it extracts, deeming some a better match based on where the string is found in the document.  It places a lower ranking, called “weighting”, on candidates embedded within the body of the invoice, as when the PO number is listed within a line description, and instead places a higher “weight” on a wrong value near the page header or top of the invoice.
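To see why a generalized mask produces extraneous candidates, consider a broad mask translated into a true regular expression (the pattern and invoice values below are made up for illustration; real masks are configured in the AP project itself):

```python
import re

# A generalized "mask" -- say, any 6-to-8 digit number -- expressed
# as a true regular expression.
broad_mask = re.compile(r"\b\d{6,8}\b")

invoice_text = "Serial: 74110235  Shipment: 91220034  PO Number: 4500781"

# Every numeric string of the right length becomes a "candidate".
candidates = broad_mask.findall(invoice_text)
print(candidates)  # ['74110235', '91220034', '4500781']
```

Only one of the three candidates is the real PO number; the serial and shipment numbers match the mask just as well, which is exactly the ambiguity the weighting process then has to resolve.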

A somewhat more educated and targeted way to help Forms Recognition get to that right value will involve an additional detailed look at the list of potential candidates.  During this further programmatic inspection, we can try removing or reducing the “weights” of those potential candidates that we think are misses by using true regular expressions in Visual Basic.

For a very simple example, a given operating unit may have only a handful of unique patterns for their PO numbers. Wide, generalized mask definitions intended for multiple operating units will likely result in more misses.

In WFR using the Inspyrus/Solution Accelerator PO header view (“xx_ofr_po_header_v”), the operating unit is available in the view alongside the PO number.  Using this information indirectly, the PO candidate weights can be altered to increase the accuracy of the extraction.

In these cases, the incoming invoice should arrive from a process that pre-assigns the correct operating unit.  Since we will know the general PO number patterns for each operating unit, the list of extracted potentials can then be whittled down to a very precise list. (The real work is in determining the exact regular expression per operating unit, which is beyond the scope of this post.)

For today’s example,

  • Open the AP Solution project in WFR Designer and edit the script for the Invoices class.
  • On the UserExits script page, add the following function at the very bottom of the sheet. (Be sure to only add custom code in designated or legal areas of the script page for supportability.)

Then, in “UserExitPONumberPostEvaluate” on the same script sheet, update the subroutine with the PO filtering code below:
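The actual UserExits code is WinWrap Basic inside the project file and varies per implementation. As a language-neutral sketch of the filtering logic (operating units, patterns, and values below are all hypothetical), the idea is to zero out the weight of any candidate that does not match the pre-assigned operating unit's known PO pattern:

```python
import re

# Hypothetical PO patterns per operating unit. The real patterns must
# be derived from each unit's actual PO numbering scheme.
OU_PATTERNS = {
    "US1": re.compile(r"^45\d{5}$"),   # e.g. 4500781
    "CA1": re.compile(r"^PO-\d{6}$"),  # e.g. PO-123456
}

def filter_candidates(operating_unit, candidates):
    """Zero the weight of candidates that miss the unit's PO pattern.

    'candidates' is a list of (value, weight) pairs, mimicking the
    weighted candidate list the engine builds during extraction.
    Misses are not deleted outright; their weights drop to 0.0 so
    the ranking, rather than the list itself, decides the winner.
    """
    pattern = OU_PATTERNS.get(operating_unit)
    if pattern is None:
        return candidates  # unknown unit: leave the weights untouched
    return [(v, w if pattern.match(v) else 0.0) for v, w in candidates]

raw = [("74110235", 0.90), ("4500781", 0.55), ("91220034", 0.80)]
print(filter_candidates("US1", raw))
# -> [('74110235', 0.0), ('4500781', 0.55), ('91220034', 0.0)]
```

Note how the correct candidate, which started with a lower weight because of where it appeared on the page, ends up as the only viable choice once the per-unit pattern is applied.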

Save the project file and try processing those problem vendors and purchase order numbers again.

Variations of this code have been deployed at several customers, resulting in much-improved PO number extraction rates.  This increased extraction success rate translates into less manual correction and increased invoice processing throughput since PO lines can then also be paired with a greater success rate automatically.

As noted earlier, a certified partner can help make these kinds of advanced changes with less overall effort and better end results.

Contact us if this express lane to regular payments sounds like a great idea!

Integrating with Salesforce using Oracle Integration Cloud

By: Courtney Dooley | Technical Architect

With all of the available integration options, it’s easy to overlook or undervalue tools that are offered to make these integrations easier.  In fact, many of these offerings are not nearly as helpful as they appear to be.  Oracle Integration Cloud’s Salesforce adapter, however, genuinely minimizes the development required to set up a simple integration with any other system or service.

A Simple Use Case

Salesforce Opportunities often result in a contract for products and/or services.  These contracts are often managed or produced using a contract management tool which processes approvals and renditions before the final contract is sent for customer signature.  Oracle Integration Cloud includes Process Cloud as a workflow approval engine, and it interacts tightly with integrations to any number of systems and services.  Although a contract management solution can easily be built within Oracle Integration Cloud Process Applications, for this use case we will use Atlassian’s Jira on-premise service.

Jira offers a built-in REST API library that allows for easy integration to create, get, or delete issues.  For this reason, we do not need an Atlassian Jira Adapter, but can use the out-of-the-box REST API adapter.
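For reference, the underlying Jira REST call that the out-of-the-box REST adapter ends up making looks roughly like this sketch (host, project key, and issue type are hypothetical; OIC builds this request declaratively, so the code is only illustrative):

```python
import json
import urllib.request

# Jira's built-in REST endpoint for creating an issue.
JIRA_URL = "https://jira.example.com/rest/api/2/issue"

def build_issue_payload(summary, description, project_key="CONTR"):
    """Assemble the minimal JSON body Jira expects for issue creation."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Task"},
        }
    }

payload = build_issue_payload(
    "Contract for Opportunity 006xx0000012345",
    "Auto-created from Salesforce via Oracle Integration Cloud",
)

request = urllib.request.Request(
    JIRA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment with a real host and credentials
```

Jira responds to this POST with the new issue's key and URL, which the integration can pass back to Salesforce.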

Salesforce integrations can be triggered either by a workflow action outbound message or by simply calling the integration from a button.  For the integration to be triggered by an outbound message, the outbound message WSDL is required.  The workflow action will send not only the Opportunity ID but also other field data when triggering the integration.

For our use case, we did not have a specific set of field data that would indicate when the integration should be triggered.  Although custom links can trigger the outbound message, we opted for a button that can be used at any point in the Opportunity life cycle and is easily found alongside the other Opportunity buttons.

When triggered, the integration retrieves the Opportunity details and checks Jira for existing contract issues linked with the Opportunity (this can be tracked within Jira or Salesforce).  Based on the information acquired, the integration either makes another REST API call to create a new issue in Jira or returns the existing Jira contract information.  The existing contract could also be updated with information from the Opportunity.
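That branching logic can be sketched as follows (the function and callable names are hypothetical stand-ins for the integration's two Jira REST calls):

```python
def sync_opportunity_contract(opportunity, find_issue, create_issue):
    """Return the existing linked contract issue, or create a new one.

    'find_issue' and 'create_issue' stand in for the two Jira REST
    calls the integration makes; both names are hypothetical.
    """
    existing = find_issue(opportunity["Id"])
    if existing is not None:
        return {"status": "exists", "issue": existing}
    return {"status": "created", "issue": create_issue(opportunity)}

# Example wiring with stub callables:
result = sync_opportunity_contract(
    {"Id": "006xx0000012345", "Name": "Acme Renewal"},
    find_issue=lambda opp_id: None,                # no linked contract yet
    create_issue=lambda opp: {"key": "CONTR-101"},
)
print(result)  # {'status': 'created', 'issue': {'key': 'CONTR-101'}}
```

An update step for existing contracts would slot into the "exists" branch.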


Need to Know

  1. Connector Requirements

In order to create a connector in Oracle Integration Cloud Service for Salesforce, you will need an integration user to authenticate with that has access to all Opportunities within Salesforce.  You will also need access to the Salesforce environment to generate an Enterprise WSDL identifying the Salesforce service you are trying to integrate with.

Once the generated WSDL is downloaded and the user credentials have been set (including appending the user’s security token to the end of the password), the connector can be created using the Salesforce Adapter.


Oracle Integration Cloud Salesforce Adapter

 New Connection Dialog Screen

 Salesforce Connector Configuration using the Salesforce Adapter


  2. Trigger vs Invoke

Once a connector has been configured and tested, it can be used as a Trigger (which requires the Outbound Message WSDL), an Invoke, or both, depending on how the connector was created.  When the Salesforce connector is used within integrations, the functionality available for use is displayed in the “Action” step of the setup.


  3. Salesforce Buttons

Two ways to trigger the integration using a button are to execute JavaScript on click, or to execute a URL which calls the integration.  Below is an example using the URL option, which returns a JSON response including the contract URL or an existing-contract message.
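A URL-style button points at the integration's REST endpoint, with the current record's ID passed along as a parameter. The host, integration identifier, and parameter name below are hypothetical; OIC exposes each activated integration at an endpoint of this general shape:

```
https://<OIC_HOST>/ic/api/integration/v1/flows/rest/CREATE_CONTRACT/1.0/opportunity?id={!Opportunity.Id}
```

The {!Opportunity.Id} merge field is substituted by Salesforce with the current Opportunity's ID when the button is clicked.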

Other Service Connections

  1. Oracle Integration Cloud out-of-the-box Adapters

AutomationAnywhere – Robotic Process Automation (RPA)

Google – Calendars, Emails, and Tasks

Microsoft – Calendar, Contacts, and Emails

JD Edwards EnterpriseOne

Oracle EBS

Oracle Database

Oracle DBaaS

REST – for use with any system that has a REST API library

SOAP – for use with any system that has a SOAP API library


So as you can see, Oracle Integration Cloud offers many ways to integrate Salesforce with almost any system or service quickly and easily.  By developing simple integrations, you can eliminate the re-work of entering data into multiple systems, as well as keep data aligned and your business in sync across all resources.


Contact us for more tips and tricks on developing Integrations using Oracle Integration Cloud!

TekStream Solutions Attains AWS Well-Architected Framework Partner Status

TekStream joins an elite group of fewer than 150 AWS partners worldwide certified to conduct Well-Architected Framework evaluations.

TekStream Solutions, an Atlanta-based digital transformation technology firm and Advanced Consulting Partner in the Amazon Web Services Partner Network (APN), is excited to announce that it is now part of the AWS Well-Architected Partner Program.

The Well-Architected Framework was developed to help cloud architects build secure, high-performing, resilient, and efficient infrastructure for their applications. Based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization — the Framework provides a consistent approach for AWS customers and partners to evaluate architectures and implement designs that will scale over time.

Fewer than 150 of all AWS consulting partners have achieved the Well-Architected Framework (WAF) Partner designation. To qualify for the WAF Partner Program, partners must have earned Advanced Tier status, have a specified number of AWS Certified Solutions Architects on staff, and have committed to completing a minimum number of Well-Architected Reviews per quarter.

“The AWS Well-Architected Framework is the best-practice template for consistent customer cloud success. Our commitment to delivering the highest standards for digital transformation made achieving this partner designation a priority, and it squarely aligns with our mission and the needs of our customers,” stated Judd Robins, Executive Vice President of Sales.  “We’re very excited that TekStream is now one of the leading AWS partners approved to deliver AWS Well-Architected Reviews.”

For more information about AWS’ Well Architected Framework, visit

About TekStream Solutions

TekStream accelerates clients’ digital transformation by navigating complex technology environments with a combination of technical expertise and staffing solutions. We guide clients’ decisions, quickly implement the right technologies with the right people, and keep them running for sustainable growth.  Our battle-tested processes and methodology help companies with legacy systems get to the cloud faster, so they can be agile, reduce costs, and improve operational efficiencies. And with 100s of deployments under our belt, we can guarantee on-time and on-budget project delivery.  That’s why 97% of clients are repeat customers. For more information visit

Connecting Splunk to Lightweight Directory Access Protocol (LDAP)

By: Pete Chen | Splunk Team Lead


Splunk installation is complete. Forwarders are sending data to the indexers, and search heads are successfully searching the indexers. The next major step is to add central authentication to Splunk. Simply put, you log into your computer, your email, and your corporate assets with a username and password; add Splunk to the list of tools available to you with those credentials. This also saves the time and hassle of creating user profiles for everyone who needs access to Splunk. Before embarking on this step, it’s important to develop a strategy for permissions and rights. This should answer the question, “who has access to what information?”

LDAP Basics

LDAP stands for Lightweight Directory Access Protocol. The most popular LDAP implementation used by businesses is Microsoft’s Active Directory. The first step in working with LDAP is to determine the “base DN”, which is the name of the domain. Let’s use the domain “splunkrocks.local” as an example. In LDAP terms, it can be expressed as dc=splunkrocks,dc=local. Inside the DN are the organizational units (OUs). For example, an organizational unit within this domain would be expressed as ou=users,dc=splunkrocks,dc=local.
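A quick sketch of how those pieces compose (values taken from the splunkrocks.local example; a DN reads right to left, from domain components up to the entry itself):

```python
# Domain components form the base DN.
base_dn = "dc=splunkrocks,dc=local"

# An organizational unit nests in front of the base DN.
users_ou = "ou=users," + base_dn

# A user entry nests in front of its OU.
user_dn = "cn=Ben Baker,ou=users," + base_dn

print(users_ou)            # ou=users,dc=splunkrocks,dc=local
print(user_dn.split(","))  # ['cn=Ben Baker', 'ou=users', 'dc=splunkrocks', 'dc=local']
```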

Most technical services require some sort of authentication to access the information they provide. The account and credentials (username and password) used to access the LDAP server are referred to as the “bind DN”. When a connection is requested, the AD server will require a user with enough permissions to allow user and group information to be shared. In most business environments, the group managing Splunk will not be the same group managing the LDAP server. It’s best to ask an LDAP administrator to type in the credentials during the setup process. The LDAP password is masked while it’s being typed and is hashed in the configuration file.

Keep in mind that connecting Splunk to the LDAP server doesn’t complete the task. It’s necessary to map LDAP groups to Splunk roles afterward.


LDAP: Lightweight Directory Access Protocol

AD: Active Directory

DN: Distinguished Name

DC: Domain Component

CN: Common Name

OU: Organizational Unit

SN: Surname

Sample Directory Structure

Using our sample domain, Splunkrocks Local Domain (splunkrocks.local), let’s assume an organizational unit for Splunk is created called “splunk”. Inside this OU, there are two sub-organizational units, one for users, one for groups. In Splunk terms, these are users and roles.


Group  Full Name  Username
User Austin Carson austin.carson
User Kim Gordon kim.gordon
User James Lawrence james.lawrence
User Wendy Moore wendy.moore
User Brad Hect brad.hect
User Tom Chu tom.chu
Power User Bruce Lin bruce.lin
Power User Catherine Lowe catherine.lowe
Power User Jeff Marlow jeff.marlow
Power User Heather Bradford heather.bradford
Power User Ben Baker ben.baker
Admin Bill Chang bill.chang
Admin Charles Smith charles.smith
Admin Candice Owens candice.owens
Admin Jennifer Cohen jennifer.cohen

Connecting Splunk to LDAP

From the main menu, go to Settings, and select Access Control.

Select Authentication Method

Select LDAP under External Authentication, then click on Configure Splunk to use LDAP.


In the LDAP Strategies page, there should not be any entries listed. At the top right corner of the page, click on New LDAP to add the Splunkrocks AD server as an LDAP source. Give the new LDAP connection a name.

The first section to configure is the LDAP Connection Settings. This section defines the LDAP server, the connection port, whether the connection is secure, and a user with permission to bind the Splunk server to the LDAP server.

The second section determines how Splunk finds the users within the AD server.

–        User base DN: Provide the path where Splunk can find the users on the AD server.

–        User base filter: This can help reduce the number of users brought back into Splunk.

–        User name attribute: This is the attribute within the AD Server which contains the username. In most AD servers, this is “sAMAccountName”.

–        Real name attribute: This is the human-readable name. This is where “Ben Baker” is displayed instead of “ben.baker”. In most AD servers, this is “cn”, or Common Name.

–        Email attribute: This is the attribute in AD which contains the user’s email.

–        Group mapping attribute: If the LDAP server uses a group identifier for the users, this will be needed. It’s not required if distinguished names are used in the LDAP groups.

The third section determines how Splunk finds the groups within the AD server.

–        Group base DN: Provide the path where Splunk can find the groups on the AD server.

–        Static group search filter: The search filter used to retrieve static groups.

–        Group name attribute: This is the attribute within the AD server which contains the group names. In most AD servers, this is simply “cn”, or Common Name.

–        Static member attribute: The group attribute that contains the group’s members. This is usually “member”.

The rest can be left blank for now. Click Save to continue. If all the settings are entered properly, the connection will be successful. A restart of Splunk will be necessary to enable the newly configured authentication method.  Remember, adding LDAP authentication is the first part of the process. To complete the setup, it’s also necessary to map Splunk roles to LDAP groups. Using the access and rights strategy mentioned above, create the necessary Splunk roles and LDAP groups. Then map the roles to the groups, and assign the necessary group or groups to each user. Developing this strategy and customizing roles is something we can help you do, based on your needs and best practices.
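Behind the scenes, the values entered in these sections land in authentication.conf. A sketch of the resulting file, using the splunkrocks.local values from this example (the strategy name, bind account, hostname, and group names are hypothetical):

```
# $SPLUNK_HOME/etc/system/local/authentication.conf (sketch)
[authentication]
authType = LDAP
authSettings = splunkrocks_ldap

[splunkrocks_ldap]
host = ldap.splunkrocks.local
port = 389
SSLEnabled = 0
bindDN = cn=splunk-bind,ou=users,dc=splunkrocks,dc=local
# bindDNpassword is entered in the UI and stored hashed
userBaseDN = ou=users,ou=splunk,dc=splunkrocks,dc=local
userNameAttribute = sAMAccountName
realNameAttribute = cn
emailAttribute = mail
groupBaseDN = ou=groups,ou=splunk,dc=splunkrocks,dc=local
groupNameAttribute = cn
groupMemberAttribute = member

# Role mapping -- the second half of the setup described above.
[roleMap_splunkrocks_ldap]
admin = splunk_admins
power = splunk_power
user = splunk_users
```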

Want to learn more about connecting Splunk to LDAP? Contact us today!