TekStream Ready to Partner with Small Enterprises to Implement Splunk Insights for Infrastructure

TekStream is a Leading Implementation Provider for New Splunk Solution

    ATLANTA, GA, May 31, 2018 — TekStream Solutions, a dynamic Atlanta-based technology company specializing in digital transformation services and technical recruiting, today announced it is ready and available to partner with small enterprises to implement Splunk® Insights for Infrastructure, an analytics-driven IT operations tool that lets system administrators and DevOps teams collect, analyze, and monitor data from their on-premises, cloud, or hybrid server infrastructures. TekStream is a Splunk professional services, reseller, and MSP partner whose team members are all accredited Splunk architects; the company’s consultants hail from diverse backgrounds in operations, security, development, and consulting.

Splunk Insights for Infrastructure is a single download designed for infrastructure teams of system administrators and DevOps engineers who are responsible for up to 1,000 on-premises, cloud, or hybrid servers. The new software solution provides a seamless experience for infrastructure monitoring and troubleshooting. Splunk Insights for Infrastructure simplifies how system administrators and DevOps teams find and fix infrastructure performance problems, enabling them to automatically correlate metrics and logs to monitor their IT environments – in an easier-to-use, more interactive, and lower-cost package.

“Our clients in the Commercial segment have different needs and, of course, smaller budgets, than large enterprises,” said Judd Robins, Executive Vice President of Sales at TekStream. “Splunk Insights for Infrastructure gives companies of any size a monitoring product they can get started with quickly, easily and for free. There is a lot of demand among our Commercial clients for a robust yet affordable monitoring solution to support their digital transformation efforts, and we are excited to be able to help them take advantage of it.”

The smallest IT environments – up to approximately 50 servers, with 200GB in total storage – are charged no licensing fee for Splunk Insights for Infrastructure. The 200GB free tier includes Community support. For tiers greater than 200GB, Base support is included in the paid license price; Splunk’s Base support includes all major and minor software updates and customer support. If a company grows and needs to move into larger infrastructure environments, it can easily upgrade to Splunk® Enterprise, the leading analytics platform for machine data.

“TekStream offers a full array of Splunk services to companies that wish to implement Splunk Insights for Infrastructure,” said Robins. “As a Splunk partner, TekStream has the knowledge and experience to help companies every step of the way, from deciding which initial licensing option would be best for them, to implementation and training, to maintenance and support, to determining when it is time to upgrade. Our Splunk consultants also have experience and expertise integrating Splunk software with existing technologies to build a unique complementary solution. We are looking forward to helping businesses make the most of this innovative new Splunk solution and partnering with them as they grow with this use case and other use cases in the future.”

You can download Splunk Insights for Infrastructure here.

About TekStream
We are “The Experts of Business & Digital Transformation,” but more importantly, we understand the challenges facing businesses and the myriad of technology choices and skillsets required in today’s “always on” companies and markets. We help you navigate the mix of transformative enterprise platforms, talent, and processes to create future-proof solutions in preparing for tomorrow’s opportunities – so you don’t have to. TekStream’s IT consulting solutions, combined with its specialized IT recruiting expertise, help businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream, visit www.tekstream.com or email info@tekstream.com.

Data Onboarding in Splunk

By: Joe Wohar | Splunk Consultant

Splunk is an amazing platform for analyzing any and all data in your business; however, you may not be getting the best performance out of Splunk if you’re using the default settings. To get the best ingestion performance, it is important to specify as many settings as possible in a file called “props.conf” (commonly referred to as “props”). Props set ingestion settings per sourcetype, and if you do not put anything into props for your sourcetype, Splunk will automatically try to figure it out for you. While this can be a good thing when you’re first beginning with Splunk, having Splunk figure out how to parse and ingest your data affects its overall performance. By configuring the ingestion settings manually, Splunk doesn’t have to figure out how to ingest your data. These are the eight settings that you should set for every sourcetype in order to get the best performance:

SHOULD_LINEMERGE – As the name suggests, this setting determines whether lines from a data source file are merged or not. If your data source file contains one full event per line, set this to “false”; if it contains multiple lines per event, set this to “true”. If you set this to “true”, you’ll also need to use other settings, such as BREAK_ONLY_BEFORE or MUST_BREAK_AFTER, to determine how to break the data up into events.

LINE_BREAKER – This setting divides up the incoming data based on a regular expression defining the “breaks” in the data. By default, it looks for newlines; however, if your events are all on the same line, you’ll need to create a regular expression to divide the data into lines.

TRUNCATE – TRUNCATE cuts off an event once its number of characters exceeds the value set. The default is 10,000; it’s a good idea to lower this to better fit your data, and it’s absolutely necessary to increase it if your events exceed 10,000 characters.

TIME_PREFIX – This setting takes a regular expression for what precedes the timestamp in events so that Splunk doesn’t have to search through the event for the timestamp.

MAX_TIMESTAMP_LOOKAHEAD – This tells Splunk how far to check after the TIME_PREFIX for the full timestamp so that it doesn’t keep reading further into the event.

TIME_FORMAT – Defines the timestamp in strftime format. Without this defined, Splunk has to go through its list of predefined timestamp formats to determine which one should be used.

EVENT_BREAKER – This setting should be set on the Splunk forwarder installed on the host where the data originally resides. It takes a regular expression which defines the end of events so that only full events are sent from the forwarder to your indexers.

EVENT_BREAKER_ENABLE – This setting merely tells Splunk to start using the EVENT_BREAKER setting. It defaults to “false”, so when you use EVENT_BREAKER, you’ll need to set this to “true”.
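
To make these settings concrete, below is a minimal sketch of a props.conf stanza that sets all eight. The sourcetype name is hypothetical, and the values assume single-line events that each begin with a timestamp such as “2018-05-31 12:34:56.789”; adjust every value to match your own data.

  # props.conf on the indexers (or heavy forwarders) - parsing settings
  [my_custom:log]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)
  TRUNCATE = 5000
  TIME_PREFIX = ^
  MAX_TIMESTAMP_LOOKAHEAD = 23
  TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N

  # props.conf on the universal forwarder reading the file
  [my_custom:log]
  EVENT_BREAKER_ENABLE = true
  EVENT_BREAKER = ([\r\n]+)

Note that LINE_BREAKER and EVENT_BREAKER treat the first capture group as the boundary between events, which is why the newline characters are wrapped in parentheses, and TIME_PREFIX = ^ simply anchors the timestamp to the very start of each event.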

There are many other settings which can be used, but as long as you have these defined, Splunk will perform much better than if it has to figure them out on its own. For more information on these settings, visit Splunk’s documentation on props: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Propsconf

If you have questions or would like more information on data onboarding best practices, please contact us.


Imaging Upgrade with FMW 12.2.1.3

By: John Schleicher | Sr. Technical Architect

Last August, Oracle announced the release of Fusion Middleware 12.2.1.3. You are welcome to read the announcement here: https://blogs.oracle.com/proactivesupportidm/oracle-fusion-middleware-12c-122130-has-been-released-v2.

What is not obvious in the announcement, however, is that this release restores support for the Fusion Middleware (FMW) Imaging component beyond the 11.1.1.9 release. This is of particular interest for Accounts Payable (AP) solutions built using the Application eXtension Framework (AXF) Solution Accelerator. Clients with these AP solutions, many of which still sit on pre-11.1.1.9 baselines, now have a migration path that keeps their software within support and allows them to take advantage of current Fusion Middleware infrastructure, newer features, and the latest security offerings. Within the scope of the infrastructure upgrade alone, customers can expect to gain:

  • Updated JDK with current security updates
  • New/current operating system (OS) support
  • Security patch delivery
  • Cloud integrations available via 12c FMW

Why should AXF solution customers be interested in the upgrade? As mentioned earlier, they now have a fully functional migration path for their AP software, one that extends their Premier Support window past December 2018 by four years, to December 2022. AXF solutions that leverage the customizable coding form will soon have a 12c-compatible upgrade to fill out their solution as well; the release of this component is imminent. With it, the customizations built into their 11g solutions can be applied to the 12c-based Application Development Framework (ADF) project that is configured post-upgrade.

There are additional advantages as well:

  • Capture improvements are the most significant for Imaging customers in 12c. A new recognition processor engine opens the capture engine up to non-Windows operating systems. Cloud/DOCS interfaces are present, Outlook EWS capability for the mail client offers more security, and there are additional scripting capabilities built in.
  • For Imaging clients using Business Activity Monitoring (BAM) in their solution, the new engine offers much better reporting capabilities and no dependency on browser type and version. Existing reports will require regeneration, but the toolset is much more robust, and any additional development will benefit from much better statistical monitoring capabilities.
  • SOA improvements lie in cloud integration and tools that ease new development.

Some TekStream clients have partially upgraded their solutions to pick up the new Capture features and overcome expired certificates in the 11g client. This partial upgrade is possible because the Capture component can be easily separated from the rest of the AP solution; doing so, however, creates a hybrid solution with two separate WebLogic servers. With the 12.2.1.3 release, this is no longer necessary.

Why should you contact TekStream for the upgrade? TekStream is the premier systems integrator for Oracle products in the WebCenter Content, Portal, Sites, and Imaging arenas. We have laid the groundwork to perform the upgrade with minimal risk of downtime through considerable product knowledge, multiple iterations through the process, and stringent internal procedures. The upgrade process is quite complex, with many components and pre- and post-upgrade configurations (some documented, some not). Let us help you make this as painless a process as possible and reap the benefits of an up-to-date imaging solution.

Stay tuned for more information about Upgrading your AXF Imaging Solution Accelerator solution.

Have more questions? Contact TekStream today!

Data Cleansing in Splunk

By: Zubair Rauf | Splunk Consultant

Data is the most important resource in Splunk, and clean data ingestion is of utmost importance for driving better insights from machine data. The data onboarding process should not be blindly automated; every step should be done carefully, as this process can determine the future performance of your Splunk environment.

When looking at the health of data in Splunk, the following areas are important:

  • Data parsing
  • Automatically assigned sourcetypes
  • Event truncation
  • Duplicate events

Data parsing

Data parsing is the most important factor when it comes to monitoring data health in Splunk. It is the first step Splunk performs when data is ingested and indexed into different indexes. Data parsing includes event breaking, date and time parsing, truncation, and parsing out the fields that are important to the end user to drive better insights from the data.

Splunk best practices recommend using these six parameters when defining every sourcetype to ensure proper parsing.

  • SHOULD_LINEMERGE = false
  • LINE_BREAKER
  • MAX_TIMESTAMP_LOOKAHEAD
  • TIME_FORMAT
  • TIME_PREFIX
  • TRUNCATE

When these parameters are properly defined, Splunk indexers will not have to spend extra compute resources trying to understand the log files they ingest. In my experience auditing Splunk environments, the date is the one field Splunk has to work the hardest to parse when it is not properly defined within the parameters of the sourcetype.
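
For example, suppose a sourcetype’s events each begin with a bracketed timestamp such as “[2018-05-16 09:30:12.345] message text”. A minimal sketch of the timestamp-related parameters for that hypothetical format would be:

  TIME_PREFIX = ^\[
  MAX_TIMESTAMP_LOOKAHEAD = 23
  TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N

With TIME_PREFIX anchored to the opening bracket and the lookahead capped at the 23 characters of the timestamp itself, Splunk never has to scan the rest of the event to find or interpret the date.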

Automatically assigned sourcetypes

Sometimes, when sourcetypes are not defined correctly, Splunk starts using its resources to parse events automatically and creates variants of the original sourcetype with a number or a tag appended to the name. These sourcetypes will mostly accumulate only a few events before yet another variant is created.

It is important to verify that such sourcetypes are not being created, as they again contribute to lost data integrity: searches and dashboards will omit these sourcetypes because they are not part of the SPL queries behind the dashboards. I have come across such automatically assigned sourcetypes at multiple deployments. It becomes necessary to revisit and rectify the errors in the sourcetype definition to prevent Splunk from doing this automatically.
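
A quick way to check is to list all sourcetypes and filter for the suffixes that Splunk’s automatic sourcetyping tends to append. This is a minimal sketch; the “-too_small” and numeric suffixes shown are only examples, so widen the filter to match what you see in your environment:

  | tstats count where index=* by sourcetype
  | search sourcetype="*-too_small" OR sourcetype="*-2" OR sourcetype="*-3"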

Event truncation

Splunk truncates events by default when they exceed 10,000 bytes. Some events exceed that limit and are automatically truncated by Splunk; XML events in particular commonly exceed it. When an event is truncated before it ends, the integrity of the data being ingested into Splunk is harmed: such events are missing information, so they are of no use in driving insights and they skew the overall results.

It is very important to periodically go back and monitor all sourcetypes for truncated events so that any truncation errors can be fixed and data integrity maintained.
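
One way to find truncated events is to search the warnings that splunkd itself logs when it cuts an event off. This is a minimal sketch; the exact message text and field names may vary across Splunk versions:

  index=_internal sourcetype=splunkd component=LineBreakingProcessor "Truncating line"
  | stats count by data_sourcetype, data_source

Each result points at a sourcetype and source whose TRUNCATE setting (or event breaking) likely needs attention.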

Duplicate events

Event duplication is one more important area to consider when looking at data integrity. At a recent client project, I came across several hundred gigabytes of duplicate events in an environment that was ingesting almost 10 TB of data per day. Duplication can be due to multiple factors; sometimes, while setting up inputs, the same input is defined more than once. Duplicate data poses a threat to the integrity of the data and the insights derived from it, and it also takes up unwanted space on the indexers.

Duplication of events should also be checked periodically, especially when new data sources are onboarded, to make sure that no inputs were added multiple times; this human error can be costly. At the client where we found multiple gigabytes of duplication, seven servers were writing their logs to one NAS drive, and the same seven servers were also sending the same logs to Splunk. That caused duplicate events amounting to almost 100GB per day.
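
To hunt for exact duplicates, one simple approach is to hash the raw event text and count repeats. This is a minimal sketch, assuming a hypothetical index named “web” and a short time window to keep the search inexpensive:

  index=web earliest=-1h
  | eval event_hash=md5(_raw)
  | stats count values(source) as sources by event_hash
  | where count > 1

Any hash with a count above one represents byte-for-byte identical events, and the sources field shows which inputs overlap.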

Ensuring that the areas mentioned above have been addressed and any problems rectified is a good starting point toward a cleaner Splunk environment. This will help save time and money, substantially improve Splunk performance at index and search time, and, overall, help you drive better insights from your machine data.

If you have questions or would like assistance with cleansing and improving the quality of your Splunk data, please contact us.


TekStream Announces Sponsorship at Liferay Symposium North America 2018

Digital innovation event will showcase TekStream’s expertise and services

NEW ORLEANS, LA, May 16, 2018 /24-7PressRelease/ — TekStream Solutions, experts in Business and Digital Transformation, announced today its participation as a Silver Sponsor at this year’s Liferay Symposium North America. Hosted by Liferay, which makes software that helps companies create digital experiences on web, mobile and connected devices, Liferay Symposium North America will take place from October 8 to 10 in New Orleans. Three days of expert sessions, innovative case studies and hands-on workshops will empower attendees to bridge the gap between business and IT and take practical steps to become change agents for digital transformation within their organizations.

“TekStream is proud to sponsor the Liferay Symposium North America,” said Rob Jansen, CEO. “We look forward to meeting other business leaders and developers as we share, with fellow members of the Liferay community, best practices in helping leading enterprises transform their business processes and digital relationships with their constituents.”

“It’s a pleasure to have TekStream participate in this year’s Liferay Symposium North America. Partners like TekStream are an integral part of Liferay’s success and attendees are sure to appreciate their expertise as they adapt to an increasingly digital world,” said Brian Kim, Chief Operating Officer for Liferay.

To find out more about the Liferay Symposium North America and to register, please visit the event website.

For more information about Liferay, visit www.liferay.com.

About TekStream:

We are “The Experts of Business & Digital Transformation,” but more importantly, we understand the challenges facing businesses and the myriad of technology choices and skillsets required in today’s “always on” companies and markets. We help you navigate the mix of transformative enterprise platforms, talent, and processes to create future-proof solutions in preparing for tomorrow’s opportunities – so you don’t have to. TekStream’s IT consulting solutions, combined with its specialized IT recruiting expertise, help businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market.

About Liferay:

Liferay makes software that helps companies create digital experiences on web, mobile and connected devices. Our platform is open source, which makes it more reliable, innovative and secure. We try to leave a positive mark on the world through business and technology. Hundreds of organizations in financial services, healthcare, government, insurance, retail, manufacturing and multiple other industries use Liferay. Visit us at www.liferay.com.

# # #