TekStream Solutions Recognized as One of the 40 Fastest-Growing Companies in Georgia


ATLANTA, GA, May 23, 2017 – The Atlanta Chapter of the Association for Corporate Growth (ACG), a global professional organization with the mission of Driving Middle-Market Growth, has named TekStream Solutions to the 2017 Georgia Fast 40, which recognizes the top 40 fastest-growing middle-market companies in Georgia.

“We are very proud of the consistent growth that we have been able to maintain since founding the company in 2011 and of the recognition by ACG for this success two years in a row,” said Chief Executive Officer Rob Jansen. “Our continued focus and shift of our services to helping clients leverage Cloud-based technologies to solve complex business problems have provided us with a platform for continued growth into the future. It has also provided our employees with advancement opportunities to continue their passion to learn new technologies and provide cutting-edge solutions to our clients.”

Applicants were required to submit three years of verifiable revenue and employment growth records, which were validated by national accounting firm and founding Diamond sponsor, Cherry Bekaert LLP. An ACG Selection Committee evaluated each application and conducted in-person interviews with all qualified applicants. All companies on the list are for-profit, headquartered in Georgia and reported 2016 annual revenues ranging from $15 to $500 million.

“Our continued success and growth can be attributed to pivotal industry changes and embracing new technologies. With the colossal rise of cloud computing, we are working side by side with industry leaders like Oracle to help customers make the leap from legacy on-premise models to more up-to-date Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) environments. New partnerships like Splunk and other upcoming service lines will continue to fuel our growth for 2017 and into 2018,” stated Judd Robins, Executive Vice President of Sales.

“These 40 companies represent 9,574 new jobs and $2.03 billion in revenue growth,” said Justin Palmer, chairman of the 2017 Georgia Fast 40 Awards and Vice President at Genesis Capital, LLC. “The lower middle-market honorees had more than a 133 percent weighted growth rate, and the upper middle-market honorees had more than a 91 percent weighted growth rate.”

TekStream has seen a three-year growth of over 178% and added over 45 jobs in the last 12-18 months. The company’s impressive rise has allowed it to receive accolades from groups like Inc. 5000 and AJC’s Top Workplaces.

“To be a part of this for the past 2 years is a demonstration of consistent and unwavering growth with an incredible team,” stated Mark Gannon, Executive Vice President of Recruitment. “There is unlimited potential for growth both within TekStream and with our clients, and we look forward to building on this recognition with another successful year.”

About TekStream Solutions

TekStream Solutions is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and meeting the market challenge of creating “anytime, anywhere access” to data for employees, partners and customers. TekStream’s IT consulting solutions, combined with its specialized IT recruiting expertise, help businesses increase efficiencies, streamline costs and remain competitive in an extremely fast-changing market. For more information about TekStream, visit www.tekstream.com or email Shichen Zhang at Shichen.zhang@tekstream.com.

About ACG Atlanta

ACG comprises more than 14,500 members from corporations, private equity, finance, and professional service firms representing Fortune 500, Fortune 1000, FTSE 100, and mid-market companies in 59 chapters in North America and Europe. Founded in 1974, ACG Atlanta is one of the oldest and most active chapters, providing the area’s executives and professionals a unique forum for exchanging ideas and experiences concerning organic and acquisitive growth. Programs include the Atlanta ACG Capital Connection, the Georgia Fast 40 Honoree Awards and Gala, a Wine Tasting Reception, a Deal of the Year event, as well as an active Women’s Forum and Young Professionals group.

Watch the highlight video here:

# # #

Containerization and Splunk: How Docker and Splunk Work Together


By: Karl Cepull | Director, Operational Intelligence and Managed Services

Note: Much of the information in this blog post was also presented as a TekTalk, including a live demo of Splunk running in Docker, and how to use Splunk to ingest Docker logs. Please see the recording of the TekTalk at http://go.tekstream.com/l/54832/2017-03-30/bknxhd.

You’ve heard of Splunk, and maybe used it. You’ve heard of Docker, and maybe used it, too. But have you tried using them together? It can be a powerful combination when you do!
But first, let’s review what Splunk and Docker are, just to set a baseline. Also, learn more about TekStream’s Splunk Services.

What is Splunk?

Splunk is the industry-leading solution for turning “digital exhaust” into business value. “Digital exhaust” refers to the almost unlimited amount of data being output by just about every digital device in the world today, such as application and web servers, databases, security and access devices, networking equipment, and even your mobile devices.

Usually, this data is in the form of log files. And, due to the volume being produced, it usually just sits on a hard drive somewhere until it expires and is deleted. It is only looked at when something goes wrong, and requires a lot of digging and searching to find anything useful.

Splunk changes all of that. It ingests those log files in near-real-time, and provides a “Google-like” search interface, making it extremely easy to search large amounts of data quickly. It also can correlate the information in myriad log files, allowing for an easier analysis of the bigger picture. Finally, it has a plethora of visualization and alerting options, allowing you to create rich reports and dashboards to view the information, and generate various alerts when specific conditions are met.

What is Docker?

Docker is also an industry leader. It is a container manager that allows you to run multiple applications (in containers) side by side in an isolated manner, but without the overhead of creating multiple virtual machines (VMs) to do so. Containers give you the ability to “build once, run anywhere,” as Docker containers are designed to run on any host that can run Docker. Docker containers can also be distributed as whole “images,” making it easy to deploy applications and microservices.
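As a quick illustration (the image name and port mappings here are just examples, not part of the demo discussed later), running two isolated copies of a public web server image takes only a couple of commands:

    # Pull and run the public nginx image in the background,
    # mapping host port 8080 to the container's port 80.
    docker run -d --name web1 -p 8080:80 nginx

    # A second, fully isolated instance of the same image can run alongside it.
    docker run -d --name web2 -p 8081:80 nginx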

Why use Splunk and Docker Together?

While there are many ways that you can use Splunk and Docker, there are two main configurations that we will address.

Using Docker to run Splunk in a Container

Running Splunk as a container in Docker has a lot of advantages. You can create an image that has Splunk pre-configured, which makes it easy to fire up an instance for testing, a proof-of-concept, or other needs. In fact, Splunk even has pre-configured images of Splunk Enterprise and the Universal Forwarder available in the Docker Hub for you to download!

Using Splunk to Monitor a Docker Container

In this configuration, one or more Docker containers are configured to send their logs and other operational information to a Splunk instance (which can also be running in another container, if desired!). Splunk has a free app for Docker that provides out-of-the-box dashboards and reports that show a variety of useful information about the events and health of the Docker containers being monitored, which provides value without having to customize a thing. If you are also using Splunk to ingest log information from the applications and services running inside of the containers, you can then correlate that information with that from the container itself to provide even more visibility and value.

Our Demo Environment

To showcase both of the above use cases, Splunk has a repository in GitHub that was used at their .conf2016 event in September of 2016. You can download and use the instructions to create a set of Docker containers that demonstrate both running Splunk in a container, as well as using Splunk to monitor a Docker container.

If you download and follow their instructions, what you build and run ends up looking like the following:

[Diagram: the five containers in the demo environment]

There are 5 containers that are built as part of the demo. The ‘wordpress’ and ‘wordpress_db’ containers are sample applications that you might typically run in Docker, and are instances of publicly-available images from the Docker Hub. Splunk Enterprise is running in a container as well, as is an instance of the Splunk Universal Forwarder. Finally, the container named “my_app” is running a custom app that provides a simple web page, and also generates some fake log data so there is something in Splunk to search.

The WordPress database logs are written to a shared Volume (think of it as a shared drive), and the Splunk Universal Forwarder ingests the logs on that volume using a normal “monitor” input. This shows one way to ingest logs without having to install the UF in the same container as the app.
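A minimal sketch of that pattern is below. The volume name, paths, and images are illustrative only (the demo repository wires all of this up for you), and the SPLUNK_* variables are the ones described later in this post.

    # Create a named volume and share it between an app container and the UF container.
    docker volume create app_logs

    # Hypothetical app container that writes its log files onto the shared volume.
    docker run -d --name wordpress_db -v app_logs:/var/log/mysql mysql:5.7

    # Universal Forwarder container monitoring that volume and forwarding to the indexer.
    docker run -d --name splunkuf \
      -v app_logs:/logs \
      --env SPLUNK_START_ARGS=--accept-license \
      --env SPLUNK_FORWARD_SERVER=splunk_ip:9997 \
      --env "SPLUNK_ADD=monitor /logs" \
      splunk/universalforwarder:6.5.2-monitor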

The HTTP Event Collector (HEC) is also running on the ‘splunk’ container, and is used to receive events generated by the ‘my_app’ application. This shows another way to ingest logs without using a UF.

Finally, HEC is also used to ingest events about the ‘wordpress’ and ‘wordpress_db’ containers themselves.

If you would like to see a demo of the above in action, please take a look at the recording of our TekTalk, which is available at http://go.tekstream.com/l/54832/2017-03-30/bknxhd

Here is a screenshot of one of the dashboards in the Docker app, showing statistics about the running containers, to whet your appetite.

[Screenshot: Docker app dashboard showing container statistics]

How Does it Work?

Running Splunk in a Container

Running Splunk in a container is actually fairly easy! As mentioned above, Splunk has pre-configured images available for you to download from the Docker Hub (a public repository of Docker images).

There are 4 images of interest – two for Splunk Enterprise (a full installation of Splunk that can be used as an indexer, search head, etc.), and two for the Universal Forwarder. For each type of Splunk (Enterprise vs Universal Forwarder), there is an image that just has the base code, and an image that also contains the Docker app.

Here’s a table showing the image name and details about each one:

Image Name and Description

splunk/splunk:6.5.2
  The base installation of Splunk Enterprise v6.5.2 (the current version available as of this writing).

splunk/splunk:6.5.2-monitor (also tagged splunk/splunk:latest)
  The base installation of Splunk Enterprise v6.5.2, with the Docker app also installed.

splunk/universalforwarder:6.5.2 (also tagged splunk/universalforwarder:latest)
  The base installation of the Splunk Universal Forwarder, v6.5.2.

splunk/universalforwarder:6.5.2-monitor
  The base installation of the Splunk Universal Forwarder v6.5.2, with the Docker add-in also installed.

Get the image(s):

  1. If you haven’t already, download a copy of Docker and install it on your system, and make sure it is running.
  2. Next, create an account at the Docker Hub – you’ll need that in a bit.
  3. From a command shell, log in to the Docker Hub using the account you created in step 2, using the following command:

    docker login
  4. Now, download the appropriate image (from the list above) using the following command:
    docker pull <imagename>
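    For example, to pull the Splunk Enterprise image that already includes the Docker app (per the table above):

    docker pull splunk/splunk:6.5.2-monitor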
    

Start the Container:

To run Splunk Enterprise in a Docker container, use the following command:

docker run -d \
   --name splunk \
   -e "SPLUNK_START_ARGS=--accept-license" \
   -e "SPLUNK_USER=root" \
   -p "8000:8000" \
   splunk/splunk

To run the Universal Forwarder in a Docker container, use the following command:

docker run -d \
  --name splunkuniversalforwarder \
  --env SPLUNK_START_ARGS=--accept-license \
  --env SPLUNK_FORWARD_SERVER=splunk_ip:9997 \
  --env SPLUNK_USER=root \
  splunk/universalforwarder

In both cases, the “docker run” command tells Docker to create and run an instance of a given image (the “splunk/splunk” image in this case). The “-d” parameter tells it to run it as a “daemon” (meaning in the background). The “-e” (or “--env”) parameters set various environment variables that are passed to the application in the container (more below), and the “-p” parameter tells Docker to map the host port 8000 to port 8000 in the container. (This is so we can go to http://localhost:8000 on the host machine to get to the Splunk web interface.)
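Once the container is launched, a couple of standard Docker commands (nothing Splunk-specific, shown here simply as a quick check) confirm that it started cleanly before you browse to the web interface:

    # List running containers and confirm "splunk" is up.
    docker ps

    # Tail the container output to watch Splunk finish its first-time startup.
    docker logs -f splunk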

So, what are those “-e” values? Below is a table showing the various environment variables that can be passed to the Splunk image, and what they do. If a variable only applies to Splunk Enterprise, it is noted.

Environment Variable, Description, and How Used

SPLUNK_USER
  User to run Splunk as. Defaults to ‘root’.

SPLUNK_BEFORE_START_CMD, SPLUNK_BEFORE_START_CMD_n
  Splunk command(s) to execute prior to starting Splunk. ‘n’ is 1 to 30; the non-suffixed command is executed first, followed by the suffixed commands in order (no breaks in numbering).
  How used: ./bin/splunk <SPLUNK_BEFORE_START_CMD[_n]>

SPLUNK_START_ARGS
  Arguments to the Splunk ‘start’ command.
  How used: ./bin/splunk start <SPLUNK_START_ARGS>

SPLUNK_ENABLE_DEPLOY_SERVER
  If ‘true’, will enable the deployment server function. (Splunk Enterprise only.)

SPLUNK_DEPLOYMENT_SERVER
  Deployment server to point this instance to.
  How used: ./bin/splunk set deploy-poll <SPLUNK_DEPLOYMENT_SERVER>

SPLUNK_ENABLE_LISTEN, SPLUNK_ENABLE_LISTEN_ARGS
  The port, and optional arguments, for Splunk to listen on. (Splunk Enterprise only.)
  How used: ./bin/splunk enable listen <SPLUNK_ENABLE_LISTEN> <SPLUNK_ENABLE_LISTEN_ARGS>

SPLUNK_FORWARD_SERVER, SPLUNK_FORWARD_SERVER_n, SPLUNK_FORWARD_SERVER_ARGS, SPLUNK_FORWARD_SERVER_ARGS_n
  One or more Splunk servers to forward events to, with optional arguments. ‘n’ is 1 to 10.
  How used: ./bin/splunk add forward-server <SPLUNK_FORWARD_SERVER[_n]> <SPLUNK_FORWARD_SERVER_ARGS[_n]>

SPLUNK_ADD, SPLUNK_ADD_n
  Any monitors to set up. ‘n’ is 1 to 30.
  How used: ./bin/splunk add <SPLUNK_ADD[_n]>

SPLUNK_CMD, SPLUNK_CMD_n
  Any additional Splunk commands to run after it is started. ‘n’ is 1 to 30.
  How used: ./bin/splunk <SPLUNK_CMD[_n]>
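As a hedged example of how several of these variables combine (the ports and values below are illustrative, not required by the image), the following starts Splunk Enterprise, accepts the license, opens a receiving port for forwarders, and adds a TCP input:

    docker run -d --name splunk \
      -e "SPLUNK_START_ARGS=--accept-license" \
      -e "SPLUNK_USER=root" \
      -e "SPLUNK_ENABLE_LISTEN=9997" \
      -e "SPLUNK_ADD=tcp 1514" \
      -p "8000:8000" -p "9997:9997" -p "1514:1514" \
      splunk/splunk:6.5.2-monitor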

 

Splunking a Docker Container

There are 2 main parts to setting up your environment to Splunk a Docker container. First, we need to set up Splunk to listen for events using the HTTP Event Collector. Second, we need to tell Docker to send its container logs and events to Splunk.

Setting up the HTTP Event Collector

The HTTP Event Collector (HEC) is a listener in Splunk that provides for an HTTP(S)-based URL that any process or application can POST an event to. (For more information, see our upcoming TekTalk and blog post on the HTTP Event Collector coming in June 2017.) To enable and configure HEC, do the following:

  1. From the Splunk web UI on the Splunk instance you want HEC to listen on, go to Settings | Data inputs | HTTP Event Collector.
  2. In the top right corner, click the Global Settings button to display the Edit Global Settings dialog. Usually, these settings do not need to be changed. However, this is where you can set what the default sourcetype and index are for events, whether to forward events to another Splunk instance (e.g. if you were running HEC on a “heavy forwarder”), and the port to listen on (default of 8088 using SSL). Click Save when done.
  3. Next, we need to create a token. Any application connecting to HEC to deliver an event must pass a valid token to the HEC listener. This token not only authenticates the sender as valid, but also ties it to settings, such as the sourcetype and index to use for the event. Click the New Token to bring up the wizard.
  4. On the Select Source panel of the wizard, give the token a name and optional description. If desired, specify the default source name to use if not specified in an event. You can also set a specific output group to forward events to. Click Next when done.
  5. On the Input Settings panel, you can select (or create) a default sourcetype to use for events that don’t specify one. Perhaps one of the most important options is on this screen – selecting a list of allowed indexes. If specified, events using this token can only be written to one of the listed indexes. If an index is specified in an event that is not on this list, the event is dropped. You can also set a default index to use if none is specified in an individual event.
  6. Click Review when done with the Input Settings panel. Review your choices, then click Submit when done to create the token.
  7. The generated token value will then be shown to you. You will use this later when configuring the output destination for the Docker containers. (You can find this value later in the list of HTTP Event Collector tokens.)
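Before wiring Docker up to the collector, you can sanity-check the new token with a plain curl call (the hostname and token below are placeholders, and -k skips certificate validation, which is only appropriate for a quick test):

    curl -k https://localhost:8088/services/collector/event \
      -H "Authorization: Splunk <your-HEC-token>" \
      -d '{"event": "HEC test event", "sourcetype": "manual"}'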

Configuring Docker to send to Splunk

Now that Splunk is set up to receive event information using HEC, let’s see how to tell Docker to send data to Splunk. You do this by telling Docker to use the ‘splunk’ logging driver, which is built-in to Docker starting with version 1.10. You pass required and optional “log-opt” name/value pairs to provide additional information to Docker to tell it how to connect to Splunk.

The various “log-opt” values for the Splunk logging driver are:

‘log-opt’ Argument (Required?) and Description

splunk-token (required)
  Splunk HTTP Event Collector token.

splunk-url (required)
  URL and port of the HTTP Event Collector, e.g. https://your.splunkserver.com:8088.

splunk-source (optional)
  Source name to use for all events.

splunk-sourcetype (optional)
  Sourcetype of events.

splunk-index (optional)
  Index for events.

splunk-format (optional)
  Message format. One of “inline”, “json”, or “raw”. Defaults to “inline”.

labels / env (optional)
  Docker container labels and/or environment variables to include with the event.

In addition to the above “log-opt” variables, there are environment variables you can set to control advanced settings of the Splunk logging driver. See the Splunk logging driver page on the Docker Docs site for more information.

Splunking Every Container

To tell Docker to send the container information for all containers, specify the Splunk logging driver and log-opts when you start up the Docker daemon. This can be done in a variety of ways, but below are two common ones.

  1. If you start Docker from the command-line using the ‘dockerd’ command, specify the “--log-driver=splunk” option, like this:

    dockerd --log-driver=splunk \
      --log-opt splunk-token=4222EA8B-D060-4FEE-8B00-40C545760B64 \
      --log-opt splunk-url=https://localhost:8088 \
      --log-opt splunk-format=json
  2. If you use a GUI to start Docker, or don’t want to have to remember to specify the log-driver and log-opt values, you can create (or edit) the daemon.json configuration file for Docker. (See the Docker docs for information on where this file is located for your environment.) A sample daemon.json looks like this:
       {
         "log-driver": "splunk",
         "log-opts": {
           "splunk-token": "4222EA8B-D060-4FEE-8B00-40C545760B64",
           "splunk-url": "https://localhost:8088",
           "splunk-format": "json"
         }
       }
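    Changes to daemon.json only take effect after the Docker daemon is restarted; on a typical systemd-based Linux host (your platform may differ), that looks like:

       sudo systemctl restart docker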

Either of the above options will tell Docker to send the container information for ALL containers to the specified Splunk server on the localhost port 8088 over https, using the HEC token that you created above. In addition, we have also overridden the default event format of “inline”, telling Docker to instead send the events in JSON format, if possible.

Splunking a Specific Docker Container

Instead of sending the container events for ALL containers to Splunk, you can also tell Docker to just send the container events for the containers you want. This is done by specifying the log-driver and log-opt values as parameters to the “docker run” command. An example is below.

docker run --log-driver=splunk \
  --log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \
  --log-opt splunk-url=https://splunkhost:8088 \
  --log-opt splunk-capath=/path/to/cert/cacert.pem \
  --log-opt splunk-caname=SplunkServerDefaultCert \
  --log-opt tag="{{.Name}}/{{.FullID}}" \
  --log-opt labels=location \
  --log-opt env=TEST \
  --env "TEST=false" \
  --label location=west \
  your/application

The above example shows how to set and pass environment variables (“TEST”) and/or container labels (“location”), on each event sent to Splunk. It also shows how you can use the Docker template markup language to set a tag on each event with the container name and the container ID.

Hints and Tips

Running Splunk in a Container

  • As of this writing, running Splunk in a Docker container has not been certified, and is unsupported. That doesn’t mean you can’t get support, just that if the problem is found to be related to running Splunk in the container, you may be on your own. However, Splunk has plans to support running in a container in the near future, so stay tuned!
  • One of the advantages of running things in containers is that the containers can be started and stopped quickly and easily, and this can be leveraged to provide scalability by starting up more instances of an image when needed, and shutting them down when load subsides.
    A Splunk environment, however, is not really suited for this type of activity, at least not in a production setup. For example, spinning up or shutting down an additional indexer due to load isn’t easy – it needs to be part of a cluster, and clusters don’t like their members to be going up and down.
  • Whether running natively, in a VM, or in a container, Splunk has certain minimum resource needs (e.g. CPU, memory, etc.). By default, when running in a container, these resources are shared by all containers. It is possible to specify the maximum amounts of CPU and memory a container can use, but not the minimum, so you could end up starving your Splunk containers.
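Regarding that last point, here is a hedged sketch of capping a container’s resources (the limits are arbitrary examples, and remember these are maximums, not guaranteed minimums; the --cpus flag requires a reasonably recent Docker release, with --cpu-shares as the older alternative):

    docker run -d --name splunk \
      --memory="8g" \
      --cpus="4" \
      -e "SPLUNK_START_ARGS=--accept-license" \
      -p "8000:8000" \
      splunk/splunk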

Splunking a Docker Container

  • Definitely use the Docker app from Splunk! This provides out-of-the-box dashboards and reports that you can take advantage of immediately. (Hint: use the “splunk/splunk:6.5.2-monitor” image.)
  • Use labels, tags, and environment variables passed with events to enhance the event itself. This will allow you to perform searches that filter on these values, as in the sketch after this list.
  • Note that some scheduling tools for containers don’t have the ability to specify a log-driver or log-opts. There are workarounds for this, however.
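For instance, with the “location” label and “TEST” environment variable from the earlier “docker run” example, a search along these lines (sketched here using the Splunk CLI; the index and exact field names depend on your HEC token settings and the chosen splunk-format) could filter container events by label:

    /opt/splunk/bin/splunk search 'index=main location=west TEST=false | stats count by tag'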

Additional Resources

Below is a list of some web pages that I’ve found valuable when using Splunk and Docker together.

Happy Splunking!

Have more questions? Contact us today!

[pardot-form id=”13669″ title=”Splunk and Docker Blog”]


Atlanta Business Chronicle Names TekStream One of Atlanta’s Fastest Growing Private Companies in 2017

TEKSTREAM SOLUTIONS NAMED ONE OF ATLANTA’S FASTEST GROWING TECHNOLOGY COMPANIES IN 2017

ATLANTA, GA, MAY 01, 2017 – For the third time in just four years, the Atlanta Business Chronicle has recognized TekStream as one of “Atlanta’s 100 Fastest Growing Private Companies” at the 22nd annual Pacesetter Awards. These awards honor local companies that are taking business to the next level and experiencing growth at top speed. TekStream joins tech powerhouses Ingenious Med, N3, and SalesLoft as one of Atlanta’s top private tech companies, ranking #61 overall.

“We are very proud of the consistent growth that we have been able to maintain since founding the company in 2011 and of the recognition by the Atlanta Business Chronicle for this success the last two years in a row,” said Chief Executive Officer Rob Jansen. “Our continued focus and shift of our services to helping clients leverage Cloud-based technologies to solve complex business problems have provided us with a platform for continued growth into the future. It has also provided our employees with advancement opportunities to continue their passion to learn new technologies and provide cutting-edge solutions to our clients.”

“Our continued success and growth can be attributed to pivotal industry changes and embracing new technologies. With the colossal rise of cloud computing, we are working side by side with industry leaders like Oracle to help customers make the leap from legacy on-premise models to more up-to-date Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) environments. New partnerships like Splunk and other upcoming service lines will continue to fuel our growth for 2017 and into 2018,” stated Judd Robins, Executive Vice President of Sales.

“To be a part of this for the past 3 years is a demonstration of consistent and unwavering growth with an incredible team,” stated Mark Gannon, Executive Vice President of Recruitment. “There is unlimited potential for growth both within TekStream and with our clients, and we look forward to building on this recognition with another successful year.”

TekStream has seen a three-year growth of over 178% and added over 45 jobs in the last 12-18 months. The company’s impressive rise has allowed it to receive accolades from groups like Inc. 5000 and AJC’s Top Workplaces; however, the sky is the limit for this tech firm. Look for TekStream to continue to introduce next-generation solutions for Business, Government, Healthcare, and Education.

TekStream Solutions is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and meeting the market challenge of creating “anytime, anywhere access” to data for employees, partners and customers. TekStream’s IT consulting solutions, combined with its specialized IT recruiting expertise, help businesses increase efficiencies, streamline costs and remain competitive in an extremely fast-changing market. For more information about TekStream, visit www.tekstream.com or email Shichen Zhang at Shichen.zhang@tekstream.com.

# # #

Using Business Objects With the Oracle Process Cloud Service


By: Courtney Dooley | Content Developer

 

Imagine you’re outlining an Oracle Process Cloud Application that you need to build. You then realize you need the data that’s entered into your form to be passed to a Process, filtered through a Decision, and used as an input for a REST API call.  That scenario could have you creating a Field, a Data object, a Decision input object, and a REST API request body.  However, if you start by defining a Business Object that contains all the data you will need for these functions, you may find you only need one.  Below, we describe how to keep your data organized within your Process Cloud Application and shorten the time needed to create and link these objects. Oracle PCS is extremely versatile and can help you leverage the Oracle Process Cloud optimally.

Creating Application Wide Business Objects With Oracle PCS

 

  1. Define your Business Object to Allow the Most Versatility

A Business Object is a set of data you will be storing throughout your process.  It may be two or three string values; a combination of dates, amounts, and personal data; or a complex set of arrays.  Business Objects can take many forms, which allows for a wide range of business needs.  The key to success is to have a versatile Business Object, to avoid creating new Business Objects for data at different points in your workflow process.

For example, say you want to create a form that will ask for a user’s name, address, phone number, and email.  From this you will derive the user’s region, which you can then use to perform a lookup to get the user’s account number.  Rather than having a Form Object with field values, a Decision Object for the derived region, and then an API Request Body to do the account lookup, it is best practice to create a single Business Object with all six values defined within it.  Make sure to specify that account number is an array, as the user may have more than one account.

  2. Create your Business Object in your Process Cloud Application

There are multiple ways to create Business Objects within the Process Cloud:

Creating New Business Objects

This option does not require any code or formatting and will walk you through creating your Business Object, step by step.  For details on this option, Click Here.


Importing Business Objects – From XML Schema

If you have an xsd file for a web service you will be using, or if you have an XML Schema already written, this option allows you to import that file to define your attributes.  For more information on this feature, Click Here.


Importing Business Objects – From JSON

If you know you will be using a REST API call within your process, this option allows you to paste in the JSON that will define your request body.  You can also write your own JSON to define what attributes you will be using in this Business Object.  Simply format your attribute as outlined below. The values are examples of the types of values you are expecting.

{
  "Name": "Your Name",
  "Phone_Number": "123-555-9876",
  "Address": "123 Baker St, New York NY 54321",
  "Email": "YourEmail@Tekstream.com",
  "Account_Number": [{
    "Account_Type": "Customer",
    "Account_Number": 123123
  }, {
    "Account_Type": "Business",
    "Account_Number": 234234
  }]
}


  3. Using Business Objects
Forms

Now that you have your attributes defined in your new Business Object, you can create a form in a snap.  Whether you’re creating a Web Form (Oracle’s newest form builder) or a Basic Form (Frevvo), you can drag and drop your Business Object into your form and have the fields built out for you.  Just follow the steps below:

Web Forms:

  1. Create a New Web Form, or Edit one you’ve previously created.
  2. From the Business Types Palette, drag and drop your new Business Object.
  3. All of the fields will appear as you outlined them in your Business Object, including the ones your user will not be filling out.
  4. Now you can modify the fields to the layout you prefer. You can also remove the fields you don’t want on your form.

Basic Forms:

  1. Create a New Basic Form, or Edit one you’ve previously created.
  2. Click on the Manage Business Objects icon in the Form Header menu.
  3. In the Form Business Objects window, move your Business Object from the “Available” column to the “Selected” column.
  4. After clicking “OK”, your business object will be available in the Data Sources section under Business Objects.
  5. You can either click the green plus sign for the individual objects you wish to add to your form, or you can click the green plus sign for the entire object to create all fields.
  6. Now you can modify the fields to the layout you prefer, as well as remove the unwanted fields.
Processes, Decisions, and Integrations

Throughout your application development process you will find many places where you need to define the type of input and output you will be expecting.  Now that you have defined your Business Object, it can be used for any data object input or output.

  • Sub Processes: inputs and outputs
  • Decision Rules: inputs and outputs
  • Integrations: request body and response body

As you can see, Business Objects can unify your Application Data and can significantly speed up the development process.  By planning ahead and making smart choices for your Business Objects, you can create a complex Application simply.

 

Contact Us for more tips and tricks on developing smart Oracle Process Cloud Applications or general use cases for the Oracle Process Cloud.

[pardot-form id=”13489″ title=”PCS Blog – Courtney”]

4 Accounts Payable Challenges that can be Overcome with a True Automation Solution


By: William Phelps | Senior Technical Architect

If a tree falls in the woods, does the paper from that tree end up as a paper invoice in your company’s mailroom?  Or worse, does this paper find its way from the mailroom to someone’s desk in accounts payable, only to wait… and wait… and wait for an action?

Does that falling tree make a sound? Most likely, yes.  It’s the sound of your early payment discounts floating away, or the screams of yet another late payment charge. Accounts payable challenges are a real problem. Learn more about challenges in the accounts payable process below.

Unfortunately, many companies experience the same accounts payable challenges:

  • While your particular business processes have evolved and progressed, your accounts payable methodology may still rely heavily on paper.
  • Paper dependency exists even though companies have an ERP system (or multiple ERP systems, such as E-Business Suite, PeopleSoft, and JD Edwards) that are supposed to handle the accounts payable process in an efficient manner.
  • Employees may only be proficient in one system and not others.
  • Other employees need to process or approve invoices, but they don’t have ERP authorization.

Maintaining the accounts payable methodology the same way “it’s always been done” is a risk.

Inspyrus Invoice Automation offers an improved user experience and rich toolset to streamline invoice data entry, focusing on the automation of invoice data input.  This high level of automation will lower the number of data elements keyed, which in the end reduces the number of possible clerical errors.

Investments previously made in other areas of your business can be leveraged.

  • Integration points, based on standard defined points within your ERP, quickly retrieve the purchase order and vendor information. From there, it will match the data to the invoice often before a user has any visible interaction with the invoice.
  • Any current validation routines present in your ERP are honored, as data is validated before its delivery to the ERP for ingestion.
    • For example, a PeopleSoft voucher is not created until the data can be validated against your specific PeopleSoft implementation. The validation process occurs in real time, using web services to quickly determine data accuracy.  This is a huge leap forward in processing efficiency.
  • Any existing approval hierarchies from the ERP can be reused for workflow approvals which can often be simply approved via email.
  • Your existing corporate security model and infrastructure can be used to enforce authentication and authorization of invoice information.
  • Invoice related documentation can be attached to the invoice, and subsequently viewed in the ERP.

Inspyrus understands that no ERP is ever deployed simply “out of the box.”  It is likely that your ERP is no different than others except for specific changes tailored towards your business.  The Inspyrus API takes these scenarios into account and offers a way to dovetail the software to your exact ERP.

While the debate rages about whether a tree falling in the woods makes a sound or not, there’s no question about your accounts payable workflow needing an update.  Contact us today for some sound advice on turning your accounts payable system into an unrealized payday. You can also check out our previous Webinar about Inspyrus here.

Contact William or TekStream today to learn more about challenges in the accounts payable process.

[pardot-form id=”13433″ title=”AP Automation Blog – William”]

Why is Risk Management important on every project, big or small, and how do we track it?



The Importance of Risk Management

Jonathan Bohlmann | Solutions Analyst

“If you are never scared or embarrassed or hurt, it means you never take any chances.” – Julia Sorel

Why is project risk management important? Risk Management is not a pleasant topic, and some people would like to avoid it.  Many times when a Project Manager brings up the topic of Risk Management, the participants’ eyes glaze over and they zone out. Most of the time it is hard to find someone who is willing to even discuss the topic of risk, but Risk Management is vital to the success of a project.

Why is Risk Management Important in Project Management?

Experience proves that when a risk analysis is conducted for a project, problems are reduced by a staggering 90%.  In addition, the Project Management Institute article “Pulse of the Profession 2015” states that “83 percent of high performers report frequent use of risk management practices, compared to only 49 percent of low performers.” Managing risk helps ensure that your business is performing at an optimal level.  However, risk management is not a one-and-done type of deal.  Do you go to a doctor only once in your lifetime for a physical? Of course not. You go on a regular basis to help detect any problems at the earliest opportunity.  The same is true with risk: you take an assessment at the beginning, and you regularly reassess the risk throughout the life of the project. Some risks will no longer be pertinent or will have been mitigated, while others may come into play.

The following are a few benefits of risk management:

  • Increased operational efficiency by mitigating exploits that could normally drain organizational resources responsible for remediation
  • Increased revenue due to increased operational efficiency
  • Decreased number of incidents to remediate internally or with a customer/vendor
  • Clearer understanding of current threat climate
  • Creation of a risk-focused culture within the organization

Risk management needs to be given greater authority during the life of a project, and Senior Executives must lead risk management from the top.  Risk management should gain enough attention in the organization at a senior level so that the organization can properly evaluate and elevate risks when needed.  In addition, risk management needs to be adaptive rather than static. As previously mentioned, if the risk analysis is only assessed at the beginning of the project and never again, you may be monitoring risks that are no longer relevant and miss the new risk signals.  For example, if the union employees are negotiating a new contract while your project is being conducted, a risk could be that the union goes on strike and prevents part (or all) of your project from moving forward.  Once that contract is signed, the risk goes away and no longer needs to be monitored.  At the same time, a major storm may be developing on the west coast that is expected to hit your area, which could impact your project.  If you are not evaluating all potential risks during the life of the project, you will be unprepared for any new risks.

The business division is accountable for risk mitigation decisions, so it should always be educated on the project at hand.  Its members should be subject matter experts, accountable for managing and coordinating the process, but they are not the final decision makers. Senior leaders are responsible for deciding how a risk should be mitigated, and when a risk issue is realized, they have the authority to reduce or mitigate the risk based upon the core business objectives.

How does TekStream manage risk?

Project Risks are items that have the likelihood to occur and impact a project.  Risk Issues are risks that have already occurred.  Using JIRA® to help us manage risk, TekStream has standardized the risk management process which in turn allows our customers to monitor the risks and evaluate/implement risk strategies.  This creates a knowledge base of risks across the project and enables transparency into the Risk Management process.

In our Oracle WebCenter projects (on site, PaaS, and IaaS), we start to manage risk by conducting a QuickStream process to gather requirements before developing the Phase 1 project plan.  During the discovery and define steps we look at all aspects of the project from hardware to resources to human interaction to software and evaluate, with the client, any risk that may come up in the project.  This gives the customer the first of many reviews of the risks to be both aware of and possibly mitigate the risk before the full project kicks off.

My hope is that the reader will come away with a better understanding of why risk management is so important, both to us and to your company, on their next project.  If you have any questions on risk management, please reach out to me; we look forward to helping you on your next project.

 

Contact Jonathan or TekStream Today to learn more.

[pardot-form id=”13337″ title=”Risk Management Blog – Jonathan”]

Oracle’s Hybrid Cloud Approach between WCC and Docs is available and easy to configure.


By: Brandon Prasnicki | Technical Architect

The Oracle WebCenter Content to Oracle Documents Cloud hybrid solution has unofficially been around for a while, but I haven’t personally seen it as an ‘Oracle Product’.  Last year, Peter Flies wrote a blog on incorporating Oracle Documents Cloud into the native UI using the REST API.  You can see that blog post here:  http://www.ateam-oracle.com/calling-oracle-documents-cloud-service-rest-apis-from-webcenter-content-custom-components/

In February of 2016, Cordell and Thrond from Oracle support conducted their own presentation:  https://community.oracle.com/thread/3898699  In the recording or in the slide show (see slide 33 here: https://jonathanhult.com/blog/wp-content/uploads/2016/02/AW-WCC-Cloud-docs-V9_C.pdf) you can see an informative screenshot of the ADF content UI with Oracle Documents Cloud folders built right into the interface, creating a hybrid cloud solution.  I’ve been waiting for that particular hybrid cloud solution, and now it’s available!

Recently, while doing a demo of WCC (Version:12.2.1.2.0-2016-10-06 07:41:00Z-r148019 (Build:7.3.5.186)) running on Oracle’s cloud compute environment, I ran across a component called Oracle Documents Folders.

I enabled this component in the native UI and restarted.


After restarting, I saw a new menu in the Administration menu:

[Screenshot: new entry in the Administration menu]

I configured the Oracle Documents Cloud Service information.  However, after receiving a hostname verifier error, I had to disable hostname verification in the WLS admin server (consult IT to verify whether this is acceptable for a production environment).


After testing and then saving, I restarted the WLS admin server, the WCC server, and the ADF content UI.  I logged in as a test user and verified the preferences:

[Screenshot: test user preferences showing the Oracle Documents Cloud Service link]

Note the message regarding the link between Oracle Documents Cloud Service and WCC.  In order to make the hybrid cloud solution work, the email address in WCC needs to match the Oracle Documents Cloud Service username.
After that was complete, I was able to navigate under an Enterprise Library and create an Oracle Documents Cloud folder.  This instantly showed in my cloud environment. From there, I could drag and drop items into my Oracle Documents Cloud folder right from my WCC ADF UI, and vice versa.

I now have a window into my Documents Cloud interface right from WCC using this simple hybrid cloud solution.  It is now easy for me to search and find a template or a form in WCC via the ADF UI, download it and then put it in the Oracle Documents Cloud.  Right from the UI, I can also share the document via a public link within seconds.  Once a cloud user edits that document and uploads it again, or adds more content, the hybrid cloud solution makes it instantly available in the context of my browsing experience of WCC.

On the Oracle Cloud Document service side, it creates the folder with the username in the path:

  • Documents > Mary > A312D3148815662CA5B70F9F837EDFB0 >BrandonTest


It’s pretty easy to set up, but it’s even quicker to let us show it to you!

 

Contact TekStream or Brandon Today to Learn More.

[pardot-form id=”13211″ title=”Cloud Hybrid Blog – Brandon”]

Expanding the Expired Search Functionality in the WebCenter Content (WCC) Native UI


By: Brandon Prasnicki | Technical Architect

In the past, customers have complained about the limited search features related to expired content.  Normal users, and especially power users, may have a need to retrieve an expired content item but cannot easily recover or locate it.  This may discourage users from using the expiration feature in WCC. As you may know, when an item is expired, the content is kept in the system; however, in the case of the Oracle Text Search engine, the item is pulled from the collection to reduce the search collection size.  This can help with collection builds (including fast rebuilds) and search speeds.

While much of the WCC user base is transitioning to the new, sleeker content UI, it is not uncommon for power users to continue using the native UI for more complex functions.  An example of a complex function is locating expired content.  In order to do a search for expired items, a user will navigate to Content Management -> Expired Content.  In the screen shot below, the user is presented with two date fields to filter recently expired items.  If there are many expired items, or the time frame of the expiration is not known, this is not very helpful.

[Screenshot: Expired Content page with the two date filter fields]

How can this process be simplified for the user? A simple customization is in order!

Introducing the TSExpiredSearch component: this component adds an ‘Expired Search’ button in line with the buttons that users normally use.  The button switches the repository and goes after expired content items instead of the OTS search collection (or other active content repository).  Because the button is added to a common resource where the out-of-the-box buttons appear, you can use the standard search, profile search, and even query builder pages to leverage the expired search functionality.  With the button in place, the normal metadata fields can be leveraged, and all the existing rules and profile logic is in place to help the user locate the expired content.


The expired search feature leverages the Repository parameter and sets it to ‘ExpiredContent’.  The generated query uses the search engine: ‘DATABASE.METADATA.ALLDOCS.ORACLE’. Therefore, the user must use ‘Matches’ and not run Full-Text Searches.  This validation is included in the component, as shown below:

[Screenshot: validation message shown when a full-text expired search is attempted]

This component has been tested on 11.1.1.8.0, 11.1.1.9.0, and 12.2.1.0.0.  It even fixes a breadcrumb bug in 11.1.1.9.0.  If you would like to use it we’d love to give it to you!  Please fill out the contact form below and we will send it right away.

Once you receive your component, you will need to install and enable it using the Admin Server -> Component Manager -> Advanced Component Manager options.  Hopefully this will improve the user search experience and even encourage users to clean up the active collection.

 

Fill out the following form to download the TSExpiredSearch component:
 [pardot-form id=”13151″ title=”Expired Search Blog – Brandon”]

Creating a Single Item Content Presenter Template in WebCenter Portal 12c (12.2.1)


By: Abhinand Shankar | Technical Architect

The WebCenter Portal (WCP) implementations I have worked on include an integration with WebCenter Content (WCC). WCC managed the creation of the content items through Site Studio, while WCP handled the presentation of that content on the rendered web page. This was done through Content Presenter Templates.

In 11g, round-trip development of the templates involved downloading a DesignWebCenterSpaces application that was preconfigured for managing the assets. The release of 12c brought changes to this process: JDeveloper now includes a built-in application template for creating portal assets.

When creating a portal asset project, you have the option to select the asset type as Content Presenter Template.


This generates the project and all the necessary artifacts.


In this example, SampleContentPresenter.jsff contains the actual code. You can deploy the asset to the portal managed server directly from JDeveloper or create an AAR file and upload it. It is now available when configuring a Content Presenter task flow on the portal.


You will notice that the template is available only when the content source is a list of items and not for a single item. A look at assetDef.xml shows that it is created as a list template by default.


The application creation wizard does not give you an option to select the type of template, and attempting to update assetDef.xml directly throws an error.

In order to create a presenter template for a single item, create the application with the defaults and deploy it to the managed server. Log into portal administration and, from the Assets page, download the template. Explode the archive and you should see a file called asset-entities.xml under the folder contentPresenter-s-gsr90079447_922e_48ff_a483_0c496a20c1c9 (the folder name contains a GUID).

Asset-entities.xml contains the definition for the template and is by default as shown below:

[Screenshot: default (list) template definition in asset-entities.xml]

Update this section as follows:

[Screenshot: updated definition for a single-item template]

Make sure to update the jsff file to use the tag for a single item content presenter template.


Archive the files and deploy the AAR file. The template should now be available for a single content item.


Note: After deploying the template, you may have to uncheck the available box and then check it again before it shows up in the drop down.

 

Contact Abhinand or TekStream Today

[pardot-form id=”13113″ title=”Content Presenter in WCP Blog”]