Version Source Control for your Splunk Environment


By: Zubair Rauf | Splunk Consultant

 

As Splunk environments grow within an organization, so does the need for source control. It is good practice to adopt one of the widely available, enterprise-grade source control tools.

There are many version control systems (VCS) available, but the most widely used is Git, an open-source tool that has proven to be very powerful for distributed source control. Using Git, multiple Splunk admins can work in their own local repositories and share their changes separately.

To take the conversation further, I would separate the need for version control into two major segments:

  • User Applications
  • Administrative Applications

I have broken the applications into two segments to make management easier for Splunk admins. User applications consist of the search artifacts that are built and developed as use cases evolve, and they change often. Administrative applications are those used primarily to set up and deploy Splunk itself, such as TAs and other deployment apps. These applications rarely change once set up, unless new data sources are on-boarded, there are significant changes to the architecture, and so on.

In the context of this blog post, we will focus on the administrative applications. These apps are the backbone of your Splunk deployment and should be changed cautiously to make sure there is no downtime in the environment. Changing these files carelessly could cause irreparable damage to the way data is indexed in Splunk, resulting in loss of indexed events, especially when changing sourcetypes.

As I already mentioned, there are numerous flavors of source control, and depending on your taste you can use any of them. If you are starting fresh with source control, Git is easy to set up and you can use it with GitHub or Atlassian Bitbucket. Both of these tools can help you get started in a matter of minutes, letting you create repositories and set up source control for your distributed Splunk environment.

The Git server will host the master repos for all the administrative apps in the Splunk environment. Admins who need to make edits can do so in one of two ways:

  • Edit the master directly.
  • Create local clones of the master, make the required edits, commit them to the local branch and then push it out to the remote repo.

Ideally, no one should edit the master branch directly to reduce the risk of unwanted changes to the master files. All admins should edit in local branches, and then once the edits are approved, they should be merged to the master.
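
As a minimal sketch of that workflow (the remote URL, repository, branch, and app names below are illustrative assumptions, not part of any standard):

git clone ssh://git@gitserver.example.com/splunk/master-apps.git
cd master-apps
git checkout -b update-web-ta          # work in a local branch, never directly on master
vi Splunk_TA_web/local/props.conf      # make the required edits
git add Splunk_TA_web/local/props.conf
git commit -m "Update sourcetype line breaking for web logs"
git push origin update-web-ta          # push the branch and request a review/merge into master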

There should be three Master repos with their respective apps and TAs in those repos. These repos should correspond to the following servers;

  • Cluster Master for Indexers
  • Deployer for Search Head Cluster
  • Deployment Server for Forwarders

To deploy the repos to the servers, you can use Git hooks or tie your Git deployment back into your Puppet or Chef environment; a simple hook-based sketch follows the list below. This is at your discretion and depends on how you are comfortable handling distributed deployment in your organization. The repos should be deployed to the following directories:

  • Cluster Master to $SPLUNK_HOME/etc/master-apps/
  • Deployer to $SPLUNK_HOME/etc/shcluster/apps
  • Deployment Server to $SPLUNK_HOME/etc/deployment-apps
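
As a minimal sketch of the Git-hook approach mentioned above (the bare repo location and the /opt/splunk path are illustrative assumptions), a post-receive hook on the Cluster Master could check the latest commit out into the master-apps directory:

#!/bin/sh
# hooks/post-receive in the bare repo hosted on the Cluster Master
GIT_WORK_TREE=/opt/splunk/etc/master-apps git checkout -f master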

After the updated repos are deployed to the respective directories, you can push them out to the client nodes using Splunk commands.
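
For reference, these are the standard Splunk CLI commands typically used for that final push (each is run on the respective server; the hostname and port shown are placeholders):

splunk apply cluster-bundle                                          # on the Cluster Master: pushes master-apps to the indexer peers
splunk apply shcluster-bundle -target https://sh1.example.com:8089   # on the Deployer: pushes shcluster/apps to the search head cluster members
splunk reload deploy-server                                          # on the Deployment Server: reloads and distributes deployment-apps to forwarder clients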

If you are interested in more information, please reach out to us and someone will get in touch with you and discuss options on how TekStream can help you manage your Splunk environment.

 

TekStream AXF 12c Upgrade Special Components


By: John Schleicher | Sr. Technical Architect

TekStream’s extension to Oracle’s Application eXtension Framework (AXF) provides enhanced customizations surrounding Invoice Reporting using Business Activity Monitoring (BAM), auditing of user actions, and QuikTrace of BPEL process instances. With the introduction of the 12c upgrade available in release 12.2.1.3, TekStream discovered that two of its reporting components were heavily impacted by paradigm changes in 12c. TekStream has gone through multiple iterations of the 12c upgrade and has incorporated the necessary reporting enhancements to carry the functionality of the 11g release over to its 12c counterpart. This paper highlights the enhancements that bring the package in line with 12c.

BAM Dashboards:

The Business Activity Monitoring component of the SOA solution was significantly improved in the 12c release. So significantly, in fact, that it precluded a direct upgrade path from 11g. In the official upgrade procedures, solutions incorporating this component are instructed to stand up an 11g version for BAM and slowly introduce a 12c version as the nuances of the new release are learned and alternatives worked out. In addition to a different dashboard component, the layered introduction of ‘Business Queries’ and ‘Business Views’ adds new elements that must be worked out before a dashboard can be constructed. TekStream has done the necessary homework to bring the 11g-based system directly online during the upgrade within a new InvoiceAnalytics package, saving our customers the effort of introducing an interim solution during the process. With TekStream’s 12c AXF upgrade we accommodate replacement dashboards, the new 12c objects introduced with the release, and the upgrade of the 11g BAM data. Clients will regain functionality (albeit with new, upgraded BAM dashboards and underlying components) immediately after going online with 12c. They will have direct replacements for the ‘Assigned Invoices’, ‘Invoice Aging’, and ‘Invoices’ reports and can use these with all of the 12c enhancements.

QuikTrace:

TekStream’s Audit and Reporting package ships with a component labeled QuikTrace which, in addition to global Worklist views that locate all active invoices, provides a technical tracing capability not available in AXF. Technical staff can use key data points to find a record within the SOA composite execution stack for records that are not active in a worklist and therefore not traceable via the global Worklist view. The capability was based on an 11g primitive, ‘ora:setCompositeInstanceTitle’, which allowed the title field to be populated on a per-composite level and then searched via Enterprise Manager (em). The Audit and Reporting package allows searching based on Imaging Document ID, Invoice Number, Purchase Order Number, Supplier Name, and a customizable business key.

With 12c, Oracle has changed this paradigm in favor of a more efficient flow-trace primitive, ‘oraext:setFlowInstanceTitle’, which moves the search element to a single SCA_FLOW_INSTANCE.title element per composite flow. To maintain the same functionality as the 11g system, it is necessary to encapsulate all of the designed search elements into this single location. TekStream has incorporated this into the Audit and Reporting package to offer the same functionality to its client base.

Upgrading AXF Clients:

For AXF clients with the reporting package, we have the elements to bring you back online with the features you are accustomed to. These will be available as soon as you bring AXF back up on 12c.

For AXF clients without the reporting package, be assured that TekStream can get you to a 12c Audit and Reporting point as well. We understand the 12c data, can pull together the data objects for functional dashboards, and can introduce the QuikTrace touchpoints into the 12c-based composites to provide that capability.

Want to learn more about Invoice Reporting using Business Activity Monitor? Contact us today!

What is Invoice Processing?


By: John Schleicher | Sr. Technical Architect

 

In a nutshell, invoice processing is the set of practices a company puts in place to pay the bills it incurs in the course of its business. Essentially, ‘bills’ translate to invoices. This doesn’t differ starkly from an individual managing personal bills and budgeting funds to meet obligations, keep creditors happy, and continue making the purchases necessary to maintain a lifestyle and plan for the future.

With invoice processing in business, the primary differences are the sheer volume of ‘bills’; the detail required for budget allocation; the handling needed to ensure each invoice is properly approved; and the payment terms agreed with the vendor (who sent in the invoice). The activities required to handle invoices are so manually intensive that specialized ‘Accounts Payable’ (AP) staff are put in place to manage the data entry and oversee the flow of invoices. Furthermore, sophisticated computerized solutions are often employed to manage these activities and reduce the manual overhead.

Good, efficient invoice processing is critical to a business’s survival in a competitive marketplace. Components of invoice processing include:

Invoice Processing Accounting:

Large business expenditures, and the planning of them, warrant sophisticated budgetary management to ensure that monies aren’t misused and are properly allocated toward their intended purpose. To manage this need, budgets are broken down into different categories against which funds are allocated and charged as invoices are received. These allocations are very specific to the line of business and require significant attention to detail. They are often broken down into accounting codes with varying multi-level breakdowns as the allocations are specifically delineated. Projects and/or tasks provide an alternative method for these allocations.

Purchase Orders:

As part of the accounting process, purchases are frequently managed through an approval process before the expenditure is ever made. Buyers (or users wanting to make purchases) initiate a request via ‘Purchase Orders’, which identify the purchase details, which accounting pot to draw from, and which vendor is targeted for the purchase. The request typically travels through an approval hierarchy based on the purchase amount before it is finalized for the actual purchase. When the purchase is finally made, the corresponding invoice references the purchase order (typically by number), and processing of the invoice is streamlined because the authorizations and allocations are already in place. Once associated with the PO, these invoices are routed for payment.

NON-PO (non-purchase orders):

These invoices don’t have pre-approvals in place, and all of the required processing is done upon receipt of the invoice. They often require routing to the requester for the appropriate accounting details and for acknowledgement that they approve the purchase and, in many cases, have received the associated items from the invoice. Upon their acknowledgement, any approvals based on the amount spent must then be made. Only after completion of these touchpoints is the invoice routed for payment.

Recurring Invoices:

Invoices that are associated with frequent regularly scheduled charges such as utilities are defined as Recurring invoices.   They may be tied to a blanket purchase order but have the processing distinction that prompt attention to the invoice is required as service interruption may occur if not handled expeditiously.

Payment Terms:

The payment cycle of invoice processing is determined by agreements with the vendor on how quickly they want to be paid. Terminology such as ‘NET 45’ (payment is expected 45 days after receipt) is used to reference these arrangements. Discounts may apply for processing these invoices within the prescribed timeframe.

Automation:

Invoice processing is still heavily encumbered by paper invoices that require extensive manual intervention by AP staff to key them into electronic invoices so they can be processed. Automation via Optical Character Recognition (OCR) to translate these paper images, and to verify the values against stored data, greatly reduces the manual overhead of the AP staff. OCR and automation activities, though, are burdened with varying invoice formats and quality, which reduces the potential for a touchless invoice upon receipt.

The Premier Invoice Automation Solution – Inspyrus

Consider Inspyrus, the premier AP automation solution. Inspyrus offers the best automation solution that can be configured to suit the broadest invoice processing needs in the marketplace. Their solution can work with all of the major back-end financial systems (EBS, PSFT, JDE, SAP, and others) by effectively offering an abstraction of these Enterprise Resource Planning (ERP) systems, with the ability to dynamically route transactions to the relevant instance, even supporting multiple instances for a single client. You won’t find many automation systems that compete in that regard. Full automation of paper and electronic invoices reduces the daily cost of invoice processing. A feature-rich set of out-of-the-box services, coupled with the configuration mechanisms built into the Inspyrus solution, allows a diverse client base to match their specific business needs without costly software customizations.

So if you are talking ‘Invoice Processing’ for your business, make sure you consider Inspyrus for those business needs.

Want to learn more about Invoice Automation with Inspyrus? Contact us today!

Optimizing Splunk Dashboards with Post-Process Searches


By: Josh Grissom, MSIT, CISSP | Senior Splunk Consultant


When creating Splunk dashboards, we often have the same search run multiple times, showing different types of graphs or slight variations (e.g., one graph showing “allowed” and another showing “blocked”). This creates more overhead every time the dashboard is opened or refreshed, causing the dashboard to populate more slowly and increasing the demand on the Splunk infrastructure. Other limitations also come into play, such as users hitting their concurrent-search limits.

With proper optimization techniques, a typical dashboard with 10 panels can run three or fewer Splunk queries instead of the 10 individual searches that would normally run. This is accomplished by using post-process searches, which are easily added in the SimpleXML of the desired dashboard.

 

Starting Point of Post-process Searches

When you run a search in Splunk, it returns either raw event data or transformed event data. Transformed event data is data returned by a search in the form of statistical tables, which are used as the basis for visualizations. The primary transforming commands are:

  • Chart
  • Timechart
  • Top
  • Rare
  • Stats

 

The search that post-process searches build on is known as the base search. The base search should always avoid returning raw events and instead return transformed results. This is largely due to one of the limitations of post-processing: the base search can only return a maximum of 500,000 events, and anything beyond that is truncated without warning. To work around this limitation, it is best practice to use one of the transforming commands and, as always, refine your search as much as possible to reduce the number of results and your search workload.

 

The Documented Limitations of Post-Process Searches

The documentation provided on Splunk Docs shows a few limitations that you should consider before using post-process searches:

http://docs.splunk.com/Documentation/Splunk/6.2.5/Viz/Savedsearches#Post-process_searches

 

  • Chaining for multiple post-process searches is not currently supported for SimpleXML dashboards.
  • If the base search is a non-transforming search, the Splunk platform retains only the first 500,000 events returned. The post-process search does not process events in excess of this 500,000 event limit, silently ignoring them. This results in incomplete data for the post-process search. A transforming search as the base search helps avoid reaching the 500,000 event limitation.
  • If the post-processing operation takes too long, it can exceed the Splunk Web client’s non-configurable timeout value of 30 seconds. This can result in a timeout due to an unresponsive splunkd daemon/service. This scenario typically happens when you use a non-transforming search as the base search.

 

Examples of the Basic Concepts

 

Splunk Search with non-transforming commands returning RAW results:
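
For example, a search like the following (using the data from this post’s example) contains no transforming command and returns raw events:

sourcetype="pan:threat" action=allowed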

 

Splunk Search with a transforming command returning transformed results:
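
For example, adding a transforming command such as stats turns the same search into one that returns a statistical table instead of raw events:

sourcetype="pan:threat" action=allowed | stats count by app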

Examples of Post-process

There are many different ways to determine what should be the base search and what should be in each post-process search. One method is to create all of the queries for your dashboard first and then find the common beginning shared by the searches, which becomes your base search. The parts that fall outside that commonality become the post-process searches. Keep in mind that if you have four Splunk queries and three share a commonality but the fourth is completely different, you can build the base search for the three common queries and let the fourth run as a normal query.

 

We will take the following 5 Splunk queries as our example of what we have decided to put into our new dashboard. If you simply ran these in the dashboard, it would run 5 nearly identical searches, taking up valuable search resources and counting against user search limits.

sourcetype="pan:threat" action=allowed | stats count by app
sourcetype="pan:threat" action=allowed | stats count by rule
sourcetype="pan:threat" action=allowed | stats count by category
sourcetype="pan:threat" action=allowed | stats count by signature
sourcetype="pan:threat" action=allowed | stats count, values(rule) as rule by dest_ip

 

As we can easily see, the commonality of the 5 queries is going to be:

 

sourcetype="pan:threat" action=allowed |

 

The issue with taking just that portion as your base search is that it will return raw results. If we review the 5 queries, they use 5 different fields, which means our transforming base search should include all of those fields.

 

sourcetype="pan:threat" action=allowed
| stats count by app, category, rule, signature, dest_ip, src_ip

 

If we continue our method of initially creating our dashboard with our 5 independent queries:

 

Then we can switch to the XML source view of the dashboard and start making our base search and post-process searches. Below is how the dashboard’s XML looks before using any post-process searches.

<dashboard>
  <label>Threat Dashboard</label>
  <row>
    <panel>
      <table>
        <title>Applications</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by app</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rule</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by rule</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Category</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by category</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Signature</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by signature</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rules by Destination IP</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count, values(rule) as rule by dest_ip</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

 

We will create our base search with the following:

Base Search:    sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip
Post-process 1: | stats sum(count) as count by app
Post-process 2: | stats sum(count) as count by rule
Post-process 3: | stats sum(count) as count by category
Post-process 4: | stats sum(count) as count by signature
Post-process 5: | stats sum(count) as count, values(rule) as rule by dest_ip

 

Once in the XML Source view, create your base search at the top, under the label but before the first row:

The base search id can be named anything (in this case it is "baseSearch"), but it is best to make it something simple because you will use it throughout the dashboard. Each post-process search references the base search id, and Splunk effectively prepends the base search to that post-process search. To create the base search, the id is placed inside the search tag at the top of the dashboard, before all of the panels.
<search id="{id name}">

 

The id name must be enclosed in double quotes ("") and is case sensitive. Next, the transforming base search query is added inside the opening and closing query tags:
<query> {insert query here} </query>

 

After the query tags, any other supported tags can be used such as the timeframe tags including tokens created and assigned in the dashboard. Then close the search tag.
</search>
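
Putting those pieces together with the base search from this example, the complete base search stanza looks like this:

<search id="baseSearch">
  <query>sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>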

 

Next we will add the post-process searches to each of the panels on the dashboard. The time references should be removed since the base search controls the timeframe:

Similarly to the base search, the post-process search uses the base search id in the search tags.
<search base="{id name of base search}">

 

Next would be the query tags where the post-process search goes. This query should start with a pipe “|” because it will be appended to the base search like it was all one query.
<query>{post-process search that starts with a pipe "|"}</query>

 

After the query tags, any other supported tags can be used except the timeframe tags since the post-process searches go off the timeframe of the base search. Then close the search tag.
</search>
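
Putting these pieces together for one of the panels in this example, the Applications panel’s post-process search looks like this:

<!-- post-process search 1 -->
<search base="baseSearch">
  <query>| stats sum(count) as count by app</query>
</search>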

 

After modifying all 5 of the post-process searches in the XML source, the dashboard will be ready to use the base search. If you run the dashboard and look at the current searches, there will only be 1 search compared to 5 searches. Below is how the dashboard’s XML looks after making the changes.

 

<dashboard>
  <label>Threat Dashboard</label>

  <!-- Base search called "baseSearch" (this can be named anything) -->
  <search id="baseSearch">
    <query>sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>

  <row>
    <panel>
      <table>
        <title>Applications</title>
        <!-- post-process search 1 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by app</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rule</title>
        <!-- post-process search 2 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by rule</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Category</title>
        <!-- post-process search 3 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by category</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Signature</title>
        <!-- post-process search 4 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by signature</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rules by Destination IP</title>
        <!-- post-process search 5 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count, values(rule) as rule by dest_ip</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

 

The use of post-process searches in dashboards might not always work if the panels share no common query. Where there is commonality, though, post-process searches should be used. This not only reduces the workload each query requires, it also reduces the likelihood of users reaching their search limits, especially if the dashboard has a large number of common panels.

 

Want to learn more about optimizing Splunk dashboards? Contact us today!

 

Inspyrus 3.0: Faster, Better, and in the Cloud


By: Mariano Romano | Senior Developer

 

Inspyrus’ Invoice Automation product recently released version 3.0 with a slew of new features that continue to make an excellent product even better. This new release comes with over 30 new features and a 10x increase in performance, making a solid product even faster. Here is a brief list of some of the top new features released in 3.0.

 

Inspyrus Routing Engine

One of the biggest changes to the product has been the introduction of their own routing engine.  By replacing Oracle BPM with their own engine, Inspyrus was able to improve the speed and stability of routing invoices while also making it easier to configure.

 

Supplier Central

Inspyrus can now retrieve vendor contact information from the ERP system to make it easier to enable new vendors.  But that is not all!  Because Inspyrus has access to vendor contact information, the product can also send the vendor an invitation to join Supplier Central.

 

Mobile App

The Inspyrus mobile app continues to improve and provide additional functionality.  For example, coders now have the ability to enter charge account, project coding (EBS), or cost factor (JDE) information from the app.

 

Forms Recognition

The heart of any good AP solution is automation. To improve the success of Oracle Forms Recognition, Inspyrus continues to make improvements to its extraction engine for better accuracy.

 

Inspyrus continues to make under-the-cover improvements in order to ensure stability and increased speed.  They have switched cloud providers in order to improve stability and scale.  These new features prove that Inspyrus will continue to improve their product in terms of features, speed, and ease of use.  And with release 3.1, Inspyrus will begin to use ML (Machine Learning) in order to determine how to code an invoice based on previous coding done for that vendor.  We cannot wait to see what other features they have in store for 3.1!

 

Want to learn more about new Inspyrus 3.0 features? Contact us today!

 

Prepayments Feature Released for Inspyrus Invoice Automation


By: Karla Broadrick | Technical Architect

 

Prepayments have long been a feature of Oracle EBS and other ERP systems. Prepayments allow advance payment to a supplier. Later, when the invoice is received from the supplier, the prepayment can be applied against that invoice.

In the latest release of the Inspyrus Invoice Automation Solution, a prepayments feature has been added for Oracle EBS.

The new Prepayments tab in the UI provides the ability to add, edit, or delete a prepayment. When adding a prepayment, simply click the Add button and a list of all available prepayments for the supplier is displayed.

The user can select a prepayment and edit how much of it should be applied to this particular invoice.

 

This amount is then displayed in the Prepayments tab in Inspyrus.

The prepayment information is then sent to Oracle EBS when the invoice record is created, and the prepayment is applied.

 

Contact TekStream for more information about the Inspyrus Invoice Automation Solution.

 

Press Release: TekStream Makes 2018 INC. 5000 List For Fourth Consecutive Year


For the 4th Time, Atlanta-based Technology Company Named One of the Fastest-growing Private Companies in America with Three-Year Sales Growth of 129%

ATLANTA, GA, August 16, 2018 – Atlanta-based technology company, TekStream Solutions, is excited to announce that for the fourth time in a row, it has made the Inc. 5000 list of the fastest-growing private companies in America. This prestigious recognition comes again just seven years after Rob Jansen, Judd Robins, and Mark Gannon left major firms and pursued a dream of creating a strategic offering to provide enterprise technology software, services, solutions, and sourcing. Now, they’re a part of an elite group that, over the years, has included companies such as Chobani, Intuit, Microsoft, Oracle, Timberland, Vizio, and Zappos.com.

“Being included in the Inc. 5000 for the fourth straight year is something we are truly proud of as very few organizations in the history of the Inc. 5000 list since 2007 can sustain the consistent and profitable growth year over year needed to be included in this prestigious group of companies,” said Chief Executive Officer, Rob Jansen. “Our continued focus and shift of our services to helping customers leverage Cloud-based technologies and Big Data solutions have provided us with a platform for continued growth and allowed TekStream to provide extremely value-added solutions to our portfolio of industry-leading customers.”

This year’s Inc. 5000 nomination comes after TekStream has seen a three-year growth of over 129%, and 2018 is already on pace to continue this exceptional growth rate. In addition, the company has added 30% more jobs over the last 12 months.

“We’ve seen a significant wave of cloud digital transformation requests coming into this year. Customers acknowledge that running legacy software systems in their own data centers managed by expensive full-time resources is a cost model that is no longer competitive. Moving systems to cheaper cloud platforms, dropping on-premise costs, and invoking the unlimited innovation potential provided by the cloud allows customers to refocus and enhance their core business. They all know they have to make the journey, they need a team to help show them the way,” stated Judd Robins, Executive Vice President of Sales.

To qualify for the award, companies had to be privately owned, established in the first quarter of 2014 or earlier, have experienced two-year sales growth of more than 50 percent, and have garnered revenue between $2 million and $300 million in 2017.

“The increased focus on hiring both commercially and federally along with a changing candidate market has necessitated the need for creative and scalable recruiting solutions,” stated Mark Gannon, Executive Vice President of Recruitment. “This award and recognition is a testament to our team’s adaptive ability to deliver superior recruiting service across multiple solution offerings. We look forward to continued growth and successes in 2019!”

TekStream
TekStream is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and the market challenges to create “anytime, anywhere access” to data for employees, partners, and customers. TekStream’s IT consulting solutions combined with its specialized IT recruiting expertise helps businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream Solutions, visit www.tekstream.com or email Shichen Zhang at shichen.zhang@tekstream.com.

Customers Realize 30%+ Annual Savings on their AWS Cloud Spend with TekStream’s New AWS Cloud Optimization Solution

TekStream Solutions Helps Companies Realize 30% or More on their AWS Cloud Spend with TekStream’s new AWS Cloud Optimization Solution

ATLANTA, GA, August 10, 2018 — Atlanta-based technology company, TekStream Solutions, is excited to announce the release of its AWS Cloud Optimization Solution. TekStream’s AWS Cloud Optimization Solution is specifically designed to help companies contain and optimize their AWS Cloud spend. Companies are moving from traditional, on-premise or datacenter hosted environments to take advantage of Cloud solutions to run critical business applications, maintain test and development environments, and devise geographic disaster recovery solutions.

Many of these companies have found that cost management is difficult without the appropriate controls and oversight to control how Cloud-based systems are utilized. TekStream’s AWS Cloud Optimization provides the insight, control, and guidance through a mix of software and Amazon Cloud experts that collect, analyze, recommend, and correct AWS account details to ensure the optimization of Cloud spend. TekStream’s AWS Cloud Optimization Solution provides software comprised of Collectors and Reapers backed by analytics to make initial recommendations helping clients cut Cloud costs. Standard focused areas for Optimization include:

• Orphaned Snapshots
• Snapshots backed by AMIs
• Orphaned EBS volumes
• PIOPS on orphaned volumes
• EBS volumes on stopped instances
• Unused instances
• Legacy instance types
• Underutilized instances

“Our continued focus and shift of our services to helping clients leverage Cloud-based technologies and Big Data solutions have revealed significant areas of cost savings in our clients’ Cloud Infrastructure spend with platforms such as AWS and Oracle. While many clients have rapidly adopted AWS and other Cloud platforms, the approach has mainly been a traditional datacenter one versus leveraging the true value of today’s leading Cloud platforms like AWS. Our unique AWS Cloud Optimization solution provides the necessary analytics along with expert architecture review to help our clients optimize their AWS spend as well as set up their Infrastructure for future growth. The result in many cases is a 25%-30% reduction in annual spend for our clients.” – Rob Jansen, CEO of TekStream Solutions.

Troy Allen, Vice President of Emerging Technologies of TekStream Solutions, says, “TekStream has years of experience helping clients manage their applications across a wide variety of infrastructures. In those years, we have seen an increase for Public Cloud environments like AWS. Even early adopters have not developed the best practices needed to manage their infrastructure spend as efficiently as they could. TekStream’s team of AWS experts have worked for some of the largest Cloud adopters bringing those experiences and proven best practices to our own clients. Technology and Knowledge are the cornerstones of our AWS Cloud Optimization Solution.”

TekStream Solutions
TekStream Solutions is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and the market challenges to create “anytime, anywhere access” to data for employees, partners, and customers. TekStream’s IT consulting solutions combined with its specialized IT recruiting expertise helps businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream Solutions, visit www.tekstream.com or email Shichen Zhang at shichen.zhang@tekstream.com.

# # #

3 Crucial Change Management Models to Help Your Company Adopt New Technology


By: Todd Packer | Sr. Project Manager

In the Technology Age we are constantly bombarded with new initiatives, new technology, and the latest Shiny New Toy. Generally, our leaders set us on a course of technology adoption with minimal consideration or involvement from the individuals who must use this Shiny New Toy. As a technology provider, we find this consideration (or the lack thereof) a critical point for the success or failure of a given endeavor. Management teams that support the use of a project-level or organization-level change management strategy can dramatically improve system adoption, investment realization, employee satisfaction, and performance. In the sections below you will find multiple approaches that have been successfully adapted and used in technology engagements. The specific model is not important, but using one may just be the key to consistent success.

Navigating organizational change at a project level cannot be done in a vacuum.  It must consider and involve the full project team and any external influencers.  Consistently articulate and support the relationship between the project team and related business resources. Define the roles and responsibilities. Work deliberately to create a partnership with a singular goal in mind—delivering the intended results and outcomes of the project.

There are a number of change management models that can be used to successfully assess and implement a change. Each of these models includes a number of activities or practices to follow in making organizational or technology changes.

McKinsey 7-S Framework

The McKinsey 7-S Framework, developed by Thomas Peters and Robert Waterman, is a popular approach. It provides methods for assessing the current state of an organization, the future state, and what needs to change. It addresses seven key factors: shared values, strategy, structure, systems, style, staff, and skills. With a technology change, this structure can be used to assess the impact on the business team based on its core values, skill sets, strategy, and so on. The objective is to look at these various aspects to gain an overall perspective of the change and its true value.

Kotter’s Eight-Step Change Model

A second change management model is Kotter’s Eight-Step Change Model, developed by Harvard Business School professor John P. Kotter. This eight-step process is fairly simple and straightforward. It focuses on acceptance of and preparation for the change, rather than the specific change itself. The idea here is to ease the transition into a new approach.

ADKAR Model

Last but not least is the ADKAR Model, created by Jeff Hiatt (founder of Prosci). The ADKAR model uses a bottom-up approach that focuses on the individuals who need to change. It is not a strict set of steps, but rather a set of goals used to effectively plan for a change. It focuses on individuals’ needs rather than the technical aspects to achieve success.

Change management at the project level is important for benefit realization and the value applied to a particular initiative. It is a structured approach for creating logical strategies and processes that improve employee adoption and usage. It is a way to help achieve the intended benefits, realize ROI, mitigate cost and risk, and create value. It is an important tool that can make projects and initiatives more successful.

Take time to explore the various models and take what is valuable to you. Stay flexible when following a model and adjust the approach to your team, rather than following it too rigidly. At a personal and organizational level, the way change is perceived and accommodated is unique to your cultural norms. There is no right or wrong approach; having one is the key.

Feedback is always welcome.  We would love to hear how your organization approaches change management with technology projects.

 

Have change management questions? Contact us today!