TekStream Solutions Recognized as One of the 40 Fastest-Growing Companies in Georgia

FAST 40

ATLANTA, GA, May 23, 2017 – The Atlanta Chapter of the Association for Corporate Growth (ACG), a global professional organization with the mission of Driving Middle-Market Growth, has named TekStream Solutions to the 2017 Georgia Fast 40, which recognizes the 40 fastest-growing middle-market companies in Georgia.

“We are very proud of the consistent growth that we have been able to maintain since founding the company in 2011, and of being recognized by ACG for this success two years in a row,” said Chief Executive Officer, Rob Jansen. “Our continued focus and shift of our services to helping clients leverage Cloud-based technologies to solve complex business problems have provided us with a platform for continued growth into the future. It has also provided our employees with advancement opportunities to continue their passion to learn new technologies and provide cutting-edge solutions to our clients.”

Applicants were required to submit three years of verifiable revenue and employment growth records, which were validated by national accounting firm and founding Diamond sponsor, Cherry Bekaert LLP. An ACG Selection Committee evaluated each application and conducted in-person interviews with all qualified applicants. All companies on the list are for-profit, headquartered in Georgia, and reported 2016 annual revenues ranging from $15 million to $500 million.

“Our continued success and growth can be attributed to pivotal industry changes and embracing new technologies. With the colossal rise of cloud computing, we are working side by side with industry leaders like Oracle to help customers make the leap from legacy on-premises models to modern Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) environments. New partnerships like Splunk and other upcoming service lines will continue to fuel our growth for 2017 and into 2018,” stated Judd Robins, Executive Vice President of Sales.

“These 40 companies represent 9,574 new jobs and $2.03 billion in revenue growth,” said Justin Palmer, chairman of the 2017 Georgia Fast 40 Awards and Vice President at Genesis Capital, LLC. “The lower middle-market honorees had more than a 133 percent weighted growth rate, and the upper middle-market honorees had more than a 91 percent weighted growth rate.”

TekStream has seen three-year growth of over 178% and has added more than 45 jobs in the last 12 to 18 months. The company’s impressive rise has earned it accolades from groups like the Inc. 5000 and AJC’s Top Workplaces.

“To be a part of this for the past two years is a demonstration of consistent, unwavering growth with an incredible team,” stated Mark Gannon, Executive Vice President of Recruitment. “There is unlimited potential for growth both within TekStream and our clients, and we look forward to building on this recognition with another successful year.”

About TekStream Solutions

TekStream Solutions is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and the market challenges to create “anytime, anywhere access” to data for employees, partners and customers. TekStream’s IT consulting solutions combined with its specialized IT recruiting expertise helps businesses increase efficiencies, streamline costs and remain competitive in an extremely fast-changing market. For more information about TekStream, visit www.tekstream.com or email Shichen Zhang at Shichen.zhang@tekstream.com.

About ACG Atlanta

ACG comprises more than 14,500 members from corporations, private equity, finance, and professional service firms representing Fortune 500, Fortune 1000, FTSE 100, and mid-market companies in 59 chapters in North America and Europe. Founded in 1974, ACG Atlanta is one of the oldest and most active chapters, providing the area’s executives and professionals a unique forum for exchanging ideas and experiences concerning organic and acquisitive growth. Programs include the Atlanta ACG Capital Connection, The Georgia Fast 40 Honoree Awards and Gala, a Wine Tasting Reception, a Deal of the Year event, as well as an active Women’s Forum and Young Professionals group.

Watch the highlight video here:

# # #

Containerization and Splunk: How Docker and Splunk Work Together


By: Karl Cepull | Director, Operational Intelligence and Managed Services

Note: Much of the information in this blog post was also presented as a TekTalk, including a live demo of Splunk running in Docker, and how to use Splunk to ingest Docker logs. Please see the recording of the TekTalk at http://go.tekstream.com/l/54832/2017-03-30/bknxhd.

You’ve heard of Splunk, and maybe used it. You’ve heard of Docker, and maybe used it, too. But have you tried using them together? It can be a powerful combination when you do!
But first, let’s review what Splunk and Docker are, just to set a baseline. Also, learn more about TekStream’s Splunk Services.

What is Splunk?

Splunk is the industry-leading solution for turning “digital exhaust” into business value. “Digital exhaust” refers to the almost unlimited amount of data being output by just about every digital device in the world today, such as application and web servers, databases, security and access devices, networking equipment, and even your mobile devices.

Usually, this data is in the form of log files. And, due to the volume being produced, it usually just sits on a hard drive somewhere until it expires and is deleted. It is only looked at when something goes wrong, and requires a lot of digging and searching to find anything useful.

Splunk changes all of that. It ingests those log files in near-real-time, and provides a “Google-like” search interface, making it extremely easy to search large amounts of data quickly. It also can correlate the information in myriad log files, allowing for an easier analysis of the bigger picture. Finally, it has a plethora of visualization and alerting options, allowing you to create rich reports and dashboards to view the information, and generate various alerts when specific conditions are met.
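For a flavor of what that search interface looks like, here is a hypothetical search in Splunk's SPL that counts web-server errors by host over the last hour. The index, sourcetype, and field names are illustrative assumptions, not taken from any specific environment:

```
index=web sourcetype=access_combined status>=500 earliest=-1h
| stats count by host
| sort - count
```

A search like this, saved as an alert, is how the "generate various alerts when specific conditions are met" piece typically works in practice.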

What is Docker?

Docker is also an industry leader. It is a container manager that allows you to run multiple applications (or containers) side by side in an isolated manner, but without the overhead of creating multiple virtual machines (VMs) to do so. Containers give you the ability to “build once, run anywhere,” as Docker containers are designed to run on any host that can run Docker. Docker containers can also be distributed as whole “images,” making it easy to deploy applications and microservices.

Why use Splunk and Docker Together?

While there are many ways that you can use Splunk and Docker, there are two main configurations that we will address.

Using Docker to run Splunk in a Container

Running Splunk as a container in Docker has a lot of advantages. You can create an image that has Splunk pre-configured, which makes it easy to fire up an instance for testing, a proof-of-concept, or other needs. In fact, Splunk even has pre-configured images of Splunk Enterprise and the Universal Forwarder available in the Docker Hub for you to download!

Using Splunk to Monitor a Docker Container

In this configuration, one or more Docker containers are configured to send their logs and other operational information to a Splunk instance (which can also be running in another container, if desired!). Splunk has a free app for Docker that provides out-of-the-box dashboards and reports that show a variety of useful information about the events and health of the Docker containers being monitored, which provides value without having to customize a thing. If you are also using Splunk to ingest log information from the applications and services running inside of the containers, you can then correlate that information with that from the container itself to provide even more visibility and value.

Our Demo Environment

To showcase both of the above use cases, Splunk has a repository in GitHub that was used at their .conf2016 event in September of 2016. You can download and use the instructions to create a set of Docker containers that demonstrate both running Splunk in a container, as well as using Splunk to monitor a Docker container.

If you download and follow their instructions, what you build and run ends up looking like the following:

[Diagram: the five containers in the demo environment and how they connect]

There are 5 containers that are built as part of the demo. The ‘wordpress’ and ‘wordpress_db’ containers are sample applications that you might typically run in Docker, and are instances of publicly-available images from the Docker Hub. Splunk Enterprise is running in a container as well, as is an instance of the Splunk Universal Forwarder. Finally, the container named “my_app” is running a custom app that provides a simple web page, and also generates some fake log data so there is something in Splunk to search.

By using a shared Volume (think of it as a shared drive) that the WordPress database logs are stored on, the Splunk Universal Forwarder is used to ingest the logs on that volume using a normal “monitor” input. This shows one way to ingest logs without having to install the UF on the container with the app.
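As a sketch, the forwarder's monitor input for that shared volume could look something like the following inputs.conf stanza. The mount path, sourcetype, and index here are assumptions for illustration; the actual demo's paths may differ:

```
# inputs.conf on the Universal Forwarder container
# (/shared/wordpress_db/logs is an assumed mount point for the shared volume)
[monitor:///shared/wordpress_db/logs]
sourcetype = mysqld_error
index = docker_demo
disabled = false
```

Because the volume is mounted into both containers, the forwarder sees the database's log files as ordinary local files and tails them with a standard monitor input.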

The HTTP Event Collector (HEC) is also running on the ‘splunk’ container, and is used to receive events generated by the ‘my_app’ application. This shows another way to ingest logs without using a UF.

Finally, HEC is also used to ingest events about the ‘wordpress’ and ‘wordpress_db’ containers themselves.

If you would like to see a demo of the above in action, please take a look at the recording of our TekTalk, which is available at http://go.tekstream.com/l/54832/2017-03-30/bknxhd

Here is a screenshot of one of the dashboards in the Docker app, showing statistics about the running containers, to whet your appetite.


How Does it Work?

Running Splunk in a Container

Running Splunk in a container is actually fairly easy! As mentioned above, Splunk has pre-configured images available for you to download from the Docker Hub (a public repository of Docker images).

There are 4 images of interest – two for Splunk Enterprise (a full installation of Splunk that can be used as an indexer, search head, etc.), and two for the Universal Forwarder. For each type of Splunk (Enterprise vs Universal Forwarder), there is an image that just has the base code, and an image that also contains the Docker app.

Here’s a table showing the image name and details about each one:

splunk/splunk:6.5.2
  The base installation of Splunk Enterprise v6.5.2 (the current version available as of this writing).

splunk/splunk:6.5.2-monitor (also tagged splunk/splunk:latest)
  The base installation of Splunk Enterprise v6.5.2, with the Docker app also installed.

splunk/universalforwarder:6.5.2 (also tagged splunk/universalforwarder:latest)
  The base installation of the Splunk Universal Forwarder v6.5.2.

splunk/universalforwarder:6.5.2-monitor
  The base installation of the Splunk Universal Forwarder v6.5.2, with the Docker app also installed.

Get the image(s):

  1. If you haven’t already, download a copy of Docker and install it on your system, and make sure it is running.
  2. Next, create an account at the Docker Hub – you’ll need that in a bit.
  3. From a command shell, log in to the Docker Hub using the account you created in step 2, using the following command:

    docker login
  4. Now, download the appropriate image (from the list above) using the following command:
    docker pull <imagename>
    

Start the Container:

To run Splunk Enterprise in a Docker container, use the following command:

docker run -d \
   --name splunk \
   -e "SPLUNK_START_ARGS=--accept-license" \
   -e "SPLUNK_USER=root" \
   -p "8000:8000" \
   splunk/splunk

To run the Universal Forwarder in a Docker container, use the following command:

docker run -d \
  --name splunkuniversalforwarder \
  --env SPLUNK_START_ARGS=--accept-license \
  --env SPLUNK_FORWARD_SERVER=splunk_ip:9997 \
  --env SPLUNK_USER=root \
  splunk/universalforwarder

In both cases, the “docker run” command tells Docker to create and run an instance of a given image (the “splunk/splunk” image in the first case). The “-d” parameter tells it to run as a “daemon” (that is, in the background). The “-e” (or “--env”) parameters set various environment variables that are passed to the application in the container (more below), and the “-p” parameter tells Docker to map host port 8000 to port 8000 in the container. (This is so we can go to http://localhost:8000 on the host machine to get to the Splunk web interface.)

So, what are those “-e” values? Below is a table showing the various environment variables that can be passed to the Splunk image, and what they do. If a variable only applies to Splunk Enterprise, it is noted.

SPLUNK_USER
  User to run Splunk as. Defaults to ‘root’.

SPLUNK_BEFORE_START_CMD, SPLUNK_BEFORE_START_CMD_n
  Splunk command(s) to execute prior to starting Splunk. ‘n’ is 1 to 30; the non-suffixed command is executed first, followed by the suffixed commands in order (with no gaps in numbering).
  How used: ./bin/splunk <SPLUNK_BEFORE_START_CMD[_n]>

SPLUNK_START_ARGS
  Arguments to the Splunk ‘start’ command.
  How used: ./bin/splunk start <SPLUNK_START_ARGS>

SPLUNK_ENABLE_DEPLOY_SERVER
  If ‘true’, enables the deployment server function. (Splunk Enterprise only.)

SPLUNK_DEPLOYMENT_SERVER
  Deployment server to point this instance to.
  How used: ./bin/splunk set deploy-poll <SPLUNK_DEPLOYMENT_SERVER>

SPLUNK_ENABLE_LISTEN, SPLUNK_ENABLE_LISTEN_ARGS
  The port, and optional arguments, for Splunk to listen on. (Splunk Enterprise only.)
  How used: ./bin/splunk enable listen <SPLUNK_ENABLE_LISTEN> <SPLUNK_ENABLE_LISTEN_ARGS>

SPLUNK_FORWARD_SERVER, SPLUNK_FORWARD_SERVER_n, SPLUNK_FORWARD_SERVER_ARGS, SPLUNK_FORWARD_SERVER_ARGS_n
  One or more Splunk servers to forward events to, with optional arguments. ‘n’ is 1 to 10.
  How used: ./bin/splunk add forward-server <SPLUNK_FORWARD_SERVER[_n]> <SPLUNK_FORWARD_SERVER_ARGS[_n]>

SPLUNK_ADD, SPLUNK_ADD_n
  Any monitors to set up. ‘n’ is 1 to 30.
  How used: ./bin/splunk add <SPLUNK_ADD[_n]>

SPLUNK_CMD, SPLUNK_CMD_n
  Any additional Splunk commands to run after it is started. ‘n’ is 1 to 30.
  How used: ./bin/splunk <SPLUNK_CMD[_n]>

 

Splunking a Docker Container

There are 2 main parts to setting up your environment to Splunk a Docker container. First, we need to set up Splunk to listen for events using the HTTP Event Collector. Second, we need to tell Docker to send its container logs and events to Splunk.

Setting up the HTTP Event Collector

The HTTP Event Collector (HEC) is a listener in Splunk that exposes an HTTP(S) endpoint to which any process or application can POST an event. (For more information, see our upcoming TekTalk and blog post on the HTTP Event Collector coming in June 2017.) To enable and configure HEC, do the following:

  1. From the Splunk web UI on the Splunk instance you want HEC to listen on, go to Settings | Data inputs | HTTP Event Collector.
  2. In the top right corner, click the Global Settings button to display the Edit Global Settings dialog. Usually, these settings do not need to be changed. However, this is where you can set the default sourcetype and index for events, whether to forward events to another Splunk instance (e.g. if you were running HEC on a “heavy forwarder”), and the port to listen on (8088 by default, using SSL). Click Save when done.
  3. Next, we need to create a token. Any application connecting to HEC to deliver an event must pass a valid token to the HEC listener. This token not only authenticates the sender as valid, but also ties the event to settings such as the sourcetype and index to use. Click New Token to bring up the wizard.
  4. On the Select Source panel of the wizard, give the token a name and an optional description. If desired, specify the default source name to use when one is not specified in an event. You can also set a specific output group to forward events to. Click Next when done.
  5. On the Input Settings panel, you can select (or create) a default sourcetype to use for events that don’t specify one. Perhaps one of the most important options is on this screen: the list of allowed indexes. If specified, events using this token can only be written to one of the listed indexes; if an event specifies an index that is not on the list, the event is dropped. You can also set a default index to use if none is specified in an individual event.
  6. Click Review when done with the Input Settings panel. Review your choices, then click Submit to create the token.
  7. The generated token value will then be shown to you. You will use this later when configuring the output destination for the Docker containers. (You can also find this value later in the list of HTTP Event Collector tokens.)
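Before wiring up Docker, it can be worth smoke-testing the new token directly. Here is a sketch using curl; the hostname and token value are placeholders to substitute with your own. HEC expects a JSON envelope with the payload under the "event" key:

```shell
# Build the HEC event envelope (payload goes under the "event" key;
# sourcetype and index are optional overrides).
payload='{"event": "hello from curl", "sourcetype": "manual", "index": "main"}'

# Sanity-check the envelope locally before sending it:
echo "$payload" | python3 -m json.tool

# POST it to the collector (uncomment once Splunk is reachable; -k skips
# certificate verification for Splunk's default self-signed cert):
# curl -k "https://your.splunkserver.com:8088/services/collector" \
#   -H "Authorization: Splunk <your-token-here>" \
#   -d "$payload"
```

A successful POST returns a small JSON body with `"text":"Success"`, and the event should then be searchable in the target index.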

Configuring Docker to send to Splunk

Now that Splunk is set up to receive event information using HEC, let’s see how to tell Docker to send data to Splunk. You do this by telling Docker to use the ‘splunk’ logging driver, which is built into Docker starting with version 1.10. You then pass required and optional “log-opt” name/value pairs that tell Docker how to connect to Splunk.

The various “log-opt” values for the Splunk logging driver are:

‘log-opt’ Argument   Required?   Description
splunk-token         Yes         Splunk HTTP Event Collector token
splunk-url           Yes         URL and port of the HTTP Event Collector, e.g. https://your.splunkserver.com:8088
splunk-source        No          Source name to use for all events
splunk-sourcetype    No          Sourcetype of events
splunk-index         No          Index for events
splunk-format        No          Message format: one of “inline”, “json”, or “raw”. Defaults to “inline”.
labels / env         No          Docker container labels and/or environment variables to include with the event

In addition to the above “log-opt” variables, there are environment variables you can set to control advanced settings of the Splunk logging driver. See the Splunk logging driver page on the Docker Docs site for more information.

Splunking Every Container

To tell Docker to send the container information for all containers, specify the Splunk logging driver and log-opts when you start up the Docker daemon. This can be done in a variety of ways, but below are two common ones.

  1. If you start Docker from the command-line using the ‘dockerd’ command, specify the “--log-driver=splunk” option, like this:

    dockerd --log-driver=splunk \
      --log-opt splunk-token=4222EA8B-D060-4FEE-8B00-40C545760B64 \
      --log-opt splunk-url=https://localhost:8088 \
      --log-opt splunk-format=json
  2. If you use a GUI to start Docker, or don’t want to have to remember to specify the log-driver and log-opt values, you can create (or edit) the daemon.json configuration file for Docker. (See the Docker docs for information on where this file is located for your environment.) A sample daemon.json looks like this:
       {
         "log-driver": "splunk",
         "log-opts": {
           "splunk-token": "4222EA8B-D060-4FEE-8B00-40C545760B64",
           "splunk-url": "https://localhost:8088",
           "splunk-format": "json"
         }
       }

Either of the above options will tell Docker to send the container information for ALL containers to the specified Splunk server on the localhost port 8088 over https, using the HEC token that you created above. In addition, we have also overridden the default event format of “inline”, telling Docker to instead send the events in JSON format, if possible.
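Since a malformed daemon.json will keep the Docker daemon from starting, it is worth validating the file before restarting Docker. A quick sketch (writing to /tmp here for illustration; /etc/docker/daemon.json is the common Linux location, but check the Docker docs for your platform):

```shell
# Write the sample config to a file and confirm it parses as JSON.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "splunk",
  "log-opts": {
    "splunk-token": "4222EA8B-D060-4FEE-8B00-40C545760B64",
    "splunk-url": "https://localhost:8088",
    "splunk-format": "json"
  }
}
EOF

# python3 -m json.tool exits non-zero on invalid JSON, so this catches
# missing commas or stray smart quotes before Docker ever sees the file.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json is valid JSON"
```

After the file is in place, restart the Docker daemon for the new logging defaults to take effect.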

Splunking a Specific Docker Container

Instead of sending the container events for ALL containers to Splunk, you can also tell Docker to just send the container events for the containers you want. This is done by specifying the log-driver and log-opt values as parameters to the “docker run” command. An example is below.

docker run --log-driver=splunk \
  --log-opt splunk-token=176FCEBF-4CF5-4EDF-91BC-703796522D20 \
  --log-opt splunk-url=https://splunkhost:8088 \
  --log-opt splunk-capath=/path/to/cert/cacert.pem \
  --log-opt splunk-caname=SplunkServerDefaultCert \
  --log-opt tag="{{.Name}}/{{.FullID}}" \
  --log-opt labels=location \
  --log-opt env=TEST \
  --env "TEST=false" \
  --label location=west \
  your/application

The above example shows how to set and pass environment variables (“TEST”) and/or container labels (“location”), on each event sent to Splunk. It also shows how you can use the Docker template markup language to set a tag on each event with the container name and the container ID.
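Once those events reach Splunk, the tag and the label/environment values arrive as fields on each event, so a search can filter on them directly. A hypothetical example, with field names following the docker run command above (the exact field layout depends on the splunk-format chosen, so treat this as a sketch):

```
index=main location=west TEST=false
| stats count by tag
```

This is where passing labels and environment variables pays off: a single label like “location” lets you slice container events by deployment region without parsing the raw message text.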

Hints and Tips

Running Splunk in a Container

  • As of this writing, running Splunk in a Docker container has not been certified, and is unsupported. That doesn’t mean you can’t get support, just that if the problem is found to be related to running Splunk in the container, you may be on your own. However, Splunk has plans to support running in a container in the near future, so stay tuned!
  • One of the advantages to running things in containers is that the containers can be started and stopped quickly and easily, and this can be leveraged to provide scalability by starting up more instances of an image when needed, and shutting them down when load subsides.
    A Splunk environment, however, is not really suited for this type of activity, at least not in a production setup. For example, spinning up or shutting down an additional indexer due to load isn’t easy – it needs to be part of a cluster, and clusters don’t like their members going up and down.
  • Whether running natively, in a VM, or in a container, Splunk has certain minimum resource needs (e.g. CPU, memory, etc.). By default, when running in a container, these resources are shared by all containers. It is possible to specify the maximum amounts of CPU and memory a container can use, but not the minimum, so you could end up starving your Splunk containers.

Splunking a Docker Container

  • Definitely use the Docker app from Splunk! This provides out-of-the-box dashboards and reports that you can take advantage of immediately. (Hint: use the “splunk/splunk:6.5.2-monitor” image.)
  • Use labels, tags, and environment variables passed with events to enhance the event itself. This will allow you to perform searches that filter on these values.
  • Note that some scheduling tools for containers don’t have the ability to specify a log-driver or log-opts. There are workarounds for this, however.

Additional Resources

Below is a list of some web pages that I’ve found valuable when using Splunk and Docker together.

Happy Splunking!

Have more questions? Contact us today!



Atlanta Business Chronicle Names TekStream One of Atlanta’s Fastest Growing Private Companies in 2017

pacesetter 2017
TEKSTREAM SOLUTIONS NAMED ONE OF ATLANTA’S FASTEST GROWING TECHNOLOGY COMPANIES IN 2017

ATLANTA, GA, MAY 01, 2017 – For the third time in just four years, The Atlanta Business Chronicle has recognized TekStream as one of “Atlanta’s 100 Fastest Growing Private Companies” at the 22nd annual Pacesetter Awards. These awards honor local companies that are taking business to the next level and experiencing growth at top speed. TekStream joins tech powerhouses Ingenious Med, N3, and SalesLoft as one of Atlanta’s top private tech companies, ranking #61 overall.

“We are very proud of the consistent growth that we have been able to maintain since founding the company in 2011, and of being recognized by the Atlanta Business Chronicle for this success two years in a row,” said Chief Executive Officer, Rob Jansen. “Our continued focus and shift of our services to helping clients leverage Cloud-based technologies to solve complex business problems have provided us with a platform for continued growth into the future. It has also provided our employees with advancement opportunities to continue their passion to learn new technologies and provide cutting-edge solutions to our clients.”

“Our continued success and growth can be attributed to pivotal industry changes and embracing new technologies. With the colossal rise of cloud computing, we are working side by side with industry leaders like Oracle to help customers make the leap from legacy on-premises models to modern Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) environments. New partnerships like Splunk and other upcoming service lines will continue to fuel our growth for 2017 and into 2018,” stated Judd Robins, Executive Vice President of Sales.

“To be a part of this for the past three years is a demonstration of consistent, unwavering growth with an incredible team,” stated Mark Gannon, Executive Vice President of Recruitment. “There is unlimited potential for growth both within TekStream and our clients, and we look forward to building on this recognition with another successful year.”

TekStream has seen a three-year growth of over 178% and added over 45 jobs in the last 12-18 months. The company’s impressive rise has allowed it to receive accolades from groups like Inc. 5000 and AJC’s Top Workplaces; however, the sky is the limit for this tech firm. Look for TekStream to continue to introduce next-generation solutions for Business, Government, Healthcare, and Education.

TekStream Solutions is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and the market challenges to create “anytime, anywhere access” to data for employees, partners and customers. TekStream’s IT consulting solutions combined with its specialized IT recruiting expertise helps businesses increase efficiencies, streamline costs and remain competitive in an extremely fast-changing market. For more information about TekStream, visit www.tekstream.com or email Shichen Zhang at Shichen.zhang@tekstream.com.

# # #