AP Automation: Expedite Your Business Day

By: Marvin Martinez | Senior Developer

The Accounts Payable (AP) process is, by its nature, complex, and attempts to automate facets of it can easily end up costly and unsuccessful.  Inspyrus AP automation provides some very beneficial, time-saving, and downright impressive features to streamline AP processing.  A trio of the more advanced automation features is detailed below.

3-way PO matching – One common type of purchase order (PO) is the 3-way PO.  To be matched, these require a PO number, a PO line number, and a receipt.  Inspyrus AP automation can detect the PO number on the invoice and determine the appropriate line number from the invoice’s line item information.  From there, if the receipt exists, the Inspyrus solution will automatically find and match the invoice to the receipt in the ERP, allowing no-touch processing of 3-way POs.  So, what if the receipt is not yet available?  For these situations, Batch Matching is another great automation feature!

When a purchase order (PO) is identified and paired successfully but the receipt for it does not yet exist, AP must wait for the receipt to become available before going back, matching the appropriate invoice to it, and processing it through.  With Batch Matching, this process can be automated.  Once the invoice is in the Inspyrus solution and the PO information is known, the Inspyrus solution will check daily for availability of the receipt and, once it is available, automatically match the receipt to the invoice and process it through for payment.  No human monitoring is required, leaving those resources free to perform other crucial activities.

What if an invoice is received with a single item line that references a purchase order spanning two different PO lines?  There’s a convenient automation feature just for that as well!  Inspyrus AP automation can determine the PO for the line and, if the total is spread across multiple PO lines, will automatically split the single line into the corresponding receipt lines available in the ERP.  For example, if the invoice contains a line for a quantity of 100 but the ERP shows two receipts for quantities of 70 and 30, Inspyrus AP automation will automatically split the single invoice line into two separate lines matching the 70/30 receipt lines.

Inspyrus AP automation provides a plethora of features that can help a company streamline its invoice processing responsibilities.  The three highlighted above can greatly reduce the time spent on 3-way POs, to the point of near full automation.  Together with the myriad other invoice processing features, Inspyrus AP automation can save an AP department a tremendous amount of time.

Contact us for more information!


Taking Advantage of Oracle Process Cloud Service Advanced Features

How to Minimize Future Development by Building Reusable Common Processes

By: Courtney Dooley | Content Developer

When you look at a few of your business processes, do you see any similarities?  Do the same individuals approve different process requests?  Are documents processed and archived in a similar manner?  Oracle Process Cloud has multiple ways to reuse development and minimize future development and maintenance efforts.

 

QuickStart Applications

  1. Overview

QuickStart Applications are pre-defined applications intended as a starting point for a particular type of process.  To create a new QuickStart Application, simply click the Create button on the PCS Composer homepage and choose the option “QuickStart App”.  The QuickStart Apps page will open, displaying all available QuickStart templates, including those that ship with Process Cloud Service.

A QuickStart Master Template can be created from any Oracle Process Cloud Application.  Converting an application to a QuickStart Master Template allows you to restrict modifications to specific elements of the application or select the option “Allow Advanced View” which gives users creating a QuickStart Application the ability to edit all elements of the new application.  This may be necessary as Form elements and process flow are not available in the controlled modification options.

  2. Development Tips

When developing a QuickStart Application it is important to remember that various individuals in different departments may use this template.

  • Keep elements general – rather than using a role of “IT Supervisor”, use “Supervisor”. Generalizations will reduce the effort needed to customize an application to a specific need.
  • Use application variables as often as possible – The application name can be retrieved and displayed as the form title rather than setting a static form title.

FormTitle.value = data.getParameter('app.name');

  • Leave application elements configurable – If you have an element that requires specific values that will change for each application, put in a placeholder or leave the values empty for QuickStart Application creators to modify.
  • Keep it simple – the more complex a QuickStart Master Template is, the less likely it will be used as it may be difficult to alter to fit the needs of various business processes.
  3. Ideal Use Cases

These QuickStart Applications work best when they are created as a starting point.  A fully developed QuickStart Application limits versatility and may cause additional development effort, both to remove unwanted elements and to create the missing functionality that is needed.

QuickStart Master Templates that will require minimal maintenance long term make the most useful QuickStart Applications.  Keep in mind that multiple applications may be created from this template, and if the QuickStart Master Template requires an update, so will every application created from it.

Cloned Applications

  1. Overview

Any application within Process Cloud can be cloned to create a new application.  This will create a new application from the last published version of the original application.  All elements within the cloned application will be the same as the original, including elements that are not editable, such as the process ID and form name.

  2. Development Tips

If the application being developed is likely to be cloned for other purposes, the following tips will help minimize confusion between applications.

  • Set the process instance title – since the process ID is not an editable value, and process tracking information can only be filtered by process ID and not by application name, specify the process title in the predefined variables to include the application name.

  • Name application elements generically – The process name, unlike the process ID, is editable; however, Form, Integration, and Decision names cannot be modified. Keeping naming conventions generic will help minimize confusion when these elements are used in other applications.
  3. Ideal Use Cases

Production applications that need to be slightly modified for a separate business process are ideal for this option.  Developers can save hours of work by starting with a fully functional application that may need only minor changes to fit the needs of a new process.

Called Applications

  1. Overview

Called applications can minimize application maintenance, development, and troubleshooting when created and used wisely.  Called applications should contain message start processes with defined inputs and outputs that link data between applications.  When a called application is deployed, a web service URL is supplied for other applications to integrate with.  This allows common processes to be referenced rather than duplicated in multiple applications.

 

  2. Development Tips

When developing Called Applications, it’s important to develop inputs and outputs that will meet the needs of any application calling it.

  • Naming inputs and outputs – When a developer sets up the connection to the called application, the inputs and outputs should be named so that it is easy to understand what values should be passed.

  • Application Name – Naming the application appropriately for the process will help other developers know which application has the process they need to reference.
  3. Ideal Use Cases

Sub-processes that will be used within multiple applications are ideal for this option.  Processes that involve a sequence of service calls or decisions allow maintenance to be done in a single location rather than updating multiple applications.  Below are a few examples of great uses for this option.

  • Archiving Content – attachments that need folders to be located or created before the attachments are moved into them, or that need metadata assigned to the content items.
  • Management Approval Process – a process with a specific series of approvals which will not change based on the form or content being approved.
  • Concurrent Processes – A process that can be executed concurrently with its originating process. The called application can be invoked at the beginning of the originating process and the output received at a later point in the originating process.  The called application in this case may have a timed event which will wait until a specific date to complete.

So as you can see, Process Cloud offers many ways to start new development by reusing existing applications.  By developing applications that can be used in multiple business processes, you can reduce the time spent not only developing new applications but maintaining old ones as well.

Contact Us for more tips and tricks for developing smart Oracle Process Cloud Applications!


Step by Step Guide to Installing Splunk Insights for Infrastructure

By: Pete Chen | Splunk Consultant

Overview

Since the release of Splunk Insights for Infrastructure, I’ve heard from a few people who tried to install it and ran into some challenges along the way. There are some prerequisites for a successful installation, which will be covered in this blog. We’ll talk a little about what Insights for Infrastructure is, installing the home instance, and installing remote instances.

Before we go any further, the environment I used in my installation consisted of:

  • 1 x Home Instance – 1 virtual CPU, 8 GB RAM, 128 GB HD

  • 1 x Remote Instance – 1 virtual CPU, 4 GB RAM, 80 GB HD

Both servers are virtual, with Microsoft Hyper-V as the hypervisor. The OS used for the servers is CentOS 7 (x86_64). Using the .iso from CentOS, the servers are installed as bare minimum servers.

What is Splunk Insights for Infrastructure?

Splunk Insights for Infrastructure is a new product offering from Splunk which aims to provide a faster and easier way to collect monitoring data from servers and gain insight into a technical infrastructure. While traditional Splunk offers licenses based on daily ingestion rate, Splunk Insights for Infrastructure pricing is based on storage (GB) per month. And if that wasn’t enough to get you excited, the first 200GB is FREE!

At the present time, the only operating systems supported by Splunk Insights for Infrastructure are Linux distributions such as Red Hat Enterprise Linux 6 (kernel 2.6.32+).

After logging in, you’ll see the grid view of the entities being monitored. The blocks are color-coded based on health; the server monitored in this example was healthy.

Installing Splunk Insights for Infrastructure Base

From a base installation of CentOS 7, this is a description of the steps needed to complete the task. Please keep in mind that this guide uses a .tgz installation (as opposed to an rpm, deb, or dmg package). Using a different version of Linux may change the commands used below.

Step-by-Step

Step 1: Prepare the server for Splunk by disabling the firewall service. By default, firewalld is enabled, which may block access to ports 22 (SSH), 8000 (web access), and 8089 (Splunk admin port). The first command listed in this step stops the firewall service; the second disables it for future restarts.
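On CentOS 7, those two commands look like this (a sketch, since the original screenshots aren’t reproduced here; run as root):

```shell
# Stop the running firewall service immediately
systemctl stop firewalld
# Prevent firewalld from starting again on the next reboot
systemctl disable firewalld
```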

Step 2: SELinux is “Security-Enhanced” Linux. While this is helpful to secure a server, it interferes with the operations of Splunk Insights for Infrastructure. To disable this, use any text editor to change the SELinux configuration file. Change the value from “SELINUX=enforcing” to “SELINUX=disabled”.
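Any editor works; one quick way to make that edit in a single command (run as root; a reboot is required for the change to take effect):

```shell
# Flip SELINUX=enforcing to SELINUX=disabled in the SELinux config file
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```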

Step 3: Updating the server libraries and applications is never a bad idea. Installing updates can provide better security and add newer features and capabilities. This is not a required step.
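On CentOS 7 the update is a single command (run as root; optional, as noted):

```shell
# Update all installed packages to their latest versions
yum -y update
```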

Step 4: WGET allows a server to download an application from the web. This will help in downloading the software on the base server. On the remote servers, using a script to install monitoring services will also require WGET.
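Installing WGET on CentOS 7 (run as root):

```shell
# Install the wget download utility
yum -y install wget
```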

Step 5: EPEL stands for “Extra Packages for Enterprise Linux”. The additional packages don’t conflict with existing standard Linux packages, and can add more functionality to the server. CollectD is a package found in EPEL (and not in standard Enterprise Linux) and will be necessary to configure remote servers for monitoring. Since it’s helpful to monitor the performance of the base server as well, this should be installed.
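Enabling the EPEL repository on CentOS 7 (run as root):

```shell
# Add the Extra Packages for Enterprise Linux repository
yum -y install epel-release
```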

Step 6: CollectD is a background process which collects, transfers, and stores performance data of the server. This data is the foundation of Splunk Insights for Infrastructure and determines the health of a server. Once CollectD is installed, the data collected will be sent to the base server via Splunk Universal Forwarder.
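With EPEL enabled, CollectD installs the same way (run as root):

```shell
# Install collectd from the EPEL repository
yum -y install collectd
```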

At this point, the prerequisite work is complete, and the server is ready to download and install Splunk Insights for Infrastructure.

Step 7: Use WGET to download the installation tar file directly to the server. The alternative is downloading the software locally, then having to find a way to transfer the installation file to the server. Using WGET makes it much simpler.
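The download URL changes from release to release, so the one below is a placeholder only; copy the current link from Splunk’s download page:

```shell
# Download the SII installation tarball directly to the server.
# NOTE: placeholder URL - substitute the real link from splunk.com.
wget -O splunk-insights-for-infrastructure-linux-x86_64.tgz "https://download.splunk.com/<path-to-release>.tgz"
```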

Step 8: To install Splunk, copy the installation file into the folder /opt. This will require root permissions. Once the file is copied, enter the command “tar -vxzf” followed by the file name. Tar is the application used to decompress the installation file. The subsequent letters also have value. V stands for Verbose. Z tells the application to decompress the file. X tells the application to extract the files. F tells the application a file name will be specified. Depending on how your Splunk user is set up, this may require root permissions.
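To see those tar flags in action, here is a throwaway example on a scratch archive in /tmp (the file names are illustrative, not the Splunk installer):

```shell
# Build a small .tgz, then extract it with the same flags described above
mkdir -p /tmp/tar-demo && cd /tmp/tar-demo
echo "hello" > file.txt
tar -czf demo.tgz file.txt   # c=create, z=gzip-compress, f=file name
rm file.txt
tar -vxzf demo.tgz           # v=verbose, x=extract, z=decompress, f=file name
```

For the real installation, the same `tar -vxzf` command is run against the downloaded .tgz after copying it into /opt.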

Step 9: This step is a precautionary step. Changing the ownership of the Splunk folder will ensure the Splunk user can run the software without permission concerns. The R makes the change recursive, so subordinate directories will also have their permissions changed. “splunk:splunk” changes the owner of folders to the user “splunk” (first), within the group “splunk” (second). This will need to be run as root.
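The ownership change looks like this (run as root; the /opt/splunk path assumes the tarball extracted there):

```shell
# Recursively give the splunk user and group ownership of the install directory
chown -R splunk:splunk /opt/splunk
```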

Step 10: This is the standard Splunk start command. The first time Splunk is run, there will be a requirement to read through and accept the software license agreement. To skip this and accept the license automatically, use "--accept-license". This command assumes Splunk was installed in the /opt folder.
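Assuming the /opt install location, the start command is:

```shell
# Start Splunk, accepting the license agreement non-interactively
/opt/splunk/bin/splunk start --accept-license
```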

Step 11: Servers can restart for many reasons. If an application is not configured to run on start, it will have to be manually restarted after the server is back online. Running the "enable boot-start" command creates an init script, which is used to start Splunk as the server starts up.
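Again assuming /opt and a splunk service account, that command is (run as root):

```shell
# Create an init script so Splunk starts on boot, running as the splunk user
/opt/splunk/bin/splunk enable boot-start -user splunk
```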

Now, Splunk is set up, and should be accessible through a web browser by going to the site https://<hostname>:8000. When going to the URL, if a security certificate is not properly set (which it isn’t in this case), there will be a warning about the site not being secure. Advancing to the site is safe.

Installing Splunk Insights for Infrastructure Remote

Much like the installation of the base Insights for Infrastructure server, there are a few assumptions made in this document for the installation of remote services. This document will detail steps taken for a CentOS 7 server. The remote nature of the server simply means a different server, gathering its own metrics, and sending them to the base server for analysis. The prerequisite steps are the same as above.

Step by Step

Step 1: Prepare the server for Splunk by disabling the firewall service. By default, firewalld is enabled, which may block access to port 8089 (Splunk admin port). The first command listed in this step stops the firewall service; the second disables it for future restarts.

Step 2: SELinux is “Security-Enhanced” Linux. While this is helpful to secure a server, it interferes with the operations of Splunk Insights for Infrastructure. To disable this, use any text editor to change the SELinux configuration file. Change the value from “SELINUX=enforcing” to “SELINUX=disabled”.

Step 3: Updating the server libraries and applications is never a bad idea. Installing updates can provide better security and add newer features and capabilities. This is not a required step.

Step 4: WGET allows a server to download an application from the web. This will help in downloading the software on the remote servers; the script used to install monitoring services also requires WGET.

Step 5: EPEL stands for “Extra Packages for Enterprise Linux”. The additional packages don’t conflict with existing standard Linux packages, and can add more functionality to the server. CollectD is a package found in EPEL (and not in standard Enterprise Linux) and will be necessary to configure remote servers for monitoring.

Step 6: CollectD is a background process which collects, transfers, and stores performance data of the server. This data is the foundation of Insights for Infrastructure and determines the health of a server. Once CollectD is installed, the data collected will be sent to the base server via Splunk Universal Forwarder.

Step 7: Run the installation script found in the configuration page of the Splunk Insights for Infrastructure base. In this document, the default script will be used. In a production environment, key-value pairs can be added for troubleshooting, analysis, and filtering hosts. This will need to be run as root on the remote server.
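The exact one-liner is shown in the base server’s UI when you add a host, so the URL below is a placeholder only, not a real endpoint:

```shell
# Run as root on the remote host. Copy the actual command (and its URL)
# from the Add Data page of your base server's UI - this is a placeholder.
wget -O install_agent.sh "https://<base-server>:8000/<script-path>"
sh install_agent.sh
```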

Step 8: Once the script is run, a Collectd folder will be created in /opt. Browse to /opt/collectd/etc and modify collectd.conf. By default, core server metrics should be enabled.

At this point, the remote server should start aggregating metrics with collectd and sending them to the base server through the Splunk Universal Forwarder. Within a few minutes, data should start to appear in Insights for Infrastructure.

If you have questions or need further help installing SII, please contact us today: 
