Optimizing Splunk Dashboards with Post-Process Searches


When creating Splunk dashboards, we often have the same search run multiple times to show different types of graphs or slight variations (e.g. one graph showing “allowed” and another showing “blocked”). This adds overhead every time the dashboard is opened or refreshed, causing the dashboard to populate more slowly and increasing the demand on the Splunk infrastructure. It can also run into other limitations, such as user concurrent-search limits.

With proper optimization techniques, a typical dashboard with 10 panels can often run fewer than three Splunk queries instead of the 10 individual searches that would normally run. This is accomplished with post-process searches, which are easily added in the SimpleXML of the dashboard.

Starting Point of Post-process Searches

When you run a search in Splunk, it returns either RAW event data or transformed event data. Transformed event data is search output that has been shaped into statistical tables, which are used as the basis for visualizations. The primary transforming commands are:

  • Chart
  • Timechart
  • Top
  • Rare
  • Stats

The search that post-process searches run against is known as the base search. The base search should always avoid returning RAW events and should instead return transformed results. This is largely due to one of the limitations of post-processing: it can only operate on a maximum of 500,000 events, and anything beyond that is truncated without warning. To work within this limitation, it is best practice to use one of the transforming commands and, as always, to refine your search as much as possible to reduce the number of results and the overall search workload.

The Documented Limitations of Post-Process Searches

The documentation provided on Splunk Docs shows a few limitations that you should consider before using post-process searches:

http://docs.splunk.com/Documentation/Splunk/6.2.5/Viz/Savedsearches#Post-process_searches

  • Chaining for multiple post-process searches is not currently supported for SimpleXML dashboards.
  • If the base search is a non-transforming search, the Splunk platform retains only the first 500,000 events returned. The post-process search does not process events in excess of this 500,000 event limit, silently ignoring them. This results in incomplete data for the post-process search. A transforming search as the base search helps avoid reaching the 500,000 event limitation.
  • If the post-processing operation takes too long, it can exceed Splunk Web client’s non-configurable timeout value of 30 seconds. This can result in a timeout due to an unresponsive splunkd daemon/service. This scenario typically happens when you use a non-transforming search as the base search.

Examples of the Basic Concepts

Splunk Search with non-transforming commands returning RAW results:
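
A minimal illustration, assuming the pan:threat data used later in this article (fields is a non-transforming command, so the RAW events themselves are returned):

sourcetype="pan:threat" action=allowed | fields app, rule, category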

Splunk Search with a transforming command returning transformed results:
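
A minimal illustration of the same data with a transforming command, which returns a statistical table instead of RAW events:

sourcetype="pan:threat" action=allowed | stats count by app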

Examples of Post-process

There are many different ways to determine what should be the base search and what should go in each post-process search. One method is to create all of the queries for your dashboard first and then find the common beginning shared by the searches, which becomes your base search. The parts that fall outside that commonality become the post-process searches. Keep in mind that if you have four Splunk queries and three share a commonality but the fourth is completely different, you can build the base search for the three common queries and let the fourth run as a normal query.

We will take the following 5 Splunk queries as our example of what we have determined to put into our new dashboard. If we just ran these in our dashboard, it would run 5 almost identical queries, taking up valuable search resources and counting against user search limits.

sourcetype="pan:threat" action=allowed | stats count by app
sourcetype="pan:threat" action=allowed | stats count by rule
sourcetype="pan:threat" action=allowed | stats count by category
sourcetype="pan:threat" action=allowed | stats count by signature
sourcetype="pan:threat" action=allowed | stats count, values(rule) as rule by dest_ip

As we can easily see, the commonality of the 5 queries is going to be:

sourcetype="pan:threat" action=allowed |

The issue with just taking that portion as your base search is that it will return RAW results. If we review the 5 queries, they use 5 different fields, which means our transforming base search should include all of those fields.

sourcetype="pan:threat" action=allowed
| stats count by app, category, rule, signature, dest_ip, src_ip
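
To see why this still supports each panel, note that re-aggregating the summarized rows reproduces the original counts (provided the events contain all of the split-by fields, since stats drops events that are missing any of its by fields). A sketch of the combined logic for the Applications panel:

sourcetype="pan:threat" action=allowed
| stats count by app, category, rule, signature, dest_ip, src_ip
| stats sum(count) as count by app

This returns the same values as the original "stats count by app" query.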

Continuing our method, we first create the dashboard with our 5 independent queries.

Then we can switch to the XML source view of the dashboard and start building our base search and post-process searches. Below is how the dashboard’s XML looks before using any post-process searches.

<dashboard>
  <label>Threat Dashboard</label>
  <row>
    <panel>
      <table>
        <title>Applications</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by app</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rule</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by rule</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Category</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by category</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Signature</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count by signature</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rules by Destination IP</title>
        <search>
          <query>sourcetype="pan:threat" action=allowed | stats count, values(rule) as rule by dest_ip</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

We will create our base search with the following:

Base search: sourcetype="pan:threat" action=allowed
| stats count by app, category, rule, signature, dest_ip, src_ip
Post-process 1: | stats sum(count) as count by app
Post-process 2: | stats sum(count) as count by rule
Post-process 3: | stats sum(count) as count by category
Post-process 4: | stats sum(count) as count by signature
Post-process 5: | stats sum(count) as count, values(rule) as rule by dest_ip

Once in the XML Source view, create your base search at the top, under the label but before the first row:

The base search id can be named anything (in this case it is “baseSearch”), but it is best to make it something simple because you will need to reference it throughout the dashboard. The base search id is referenced in each post-process search, which effectively prepends the base search to each post-process search. To create the base search, the id is placed inside the search tag at the top of the dashboard, before all of the panels.
<search id="{id name}">

The id name must be in double quotes ("") and is case sensitive. Next, the transforming base search query is added inside the opening and closing query tags.
<query>{insert query here}</query>

After the query tags, any other supported tags can be used such as the timeframe tags including tokens created and assigned in the dashboard. Then close the search tag.
</search>
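
Putting these pieces together with the example search from above, the base search block at the top of the dashboard looks like this (it also appears in the full XML below):

<search id="baseSearch">
  <query>sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
</search>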

Next we will add the post-process searches to each of the panels on the dashboard. The time references should be removed since the base search controls the timeframe:

Similar to the base search, the post-process search uses the base search id in its search tag.
<search base="{id name of base search}">

Next come the query tags, where the post-process search goes. This query should start with a pipe “|” because it will be appended to the base search as if it were all one query.
<query>{post-process search that starts with a pipe "|"}</query>

After the query tags, any other supported tags can be used except the timeframe tags, since the post-process searches inherit the timeframe of the base search. Then close the search tag.
</search>
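
For example, the Applications panel’s post-process search becomes (see post-process search 1 in the full XML below):

<search base="baseSearch">
  <query>| stats sum(count) as count by app</query>
</search>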

After modifying all 5 of the post-process searches in the XML source, the dashboard will be ready to use the base search. If you run the dashboard and look at the current searches, there will only be 1 search compared to 5 searches. Below is how the dashboard’s XML looks after making the changes.

<dashboard>
  <label>Threat Dashboard</label>
  <!-- Base Search called "baseSearch" (this can be named anything) -->
  <search id="baseSearch">
    <query>sourcetype="pan:threat" action=allowed | stats count by app, category, rule, signature, dest_ip, src_ip</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <title>Applications</title>
        <!-- post-process search 1 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by app</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rule</title>
        <!-- post-process search 2 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by rule</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Category</title>
        <!-- post-process search 3 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by category</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>Signature</title>
        <!-- post-process search 4 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count by signature</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
    <panel>
      <table>
        <title>Rules by Destination IP</title>
        <!-- post-process search 5 -->
        <search base="baseSearch">
          <query>| stats sum(count) as count, values(rule) as rule by dest_ip</query>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</dashboard>

The use of post-process searches in dashboards might not always be possible if the panels share no common queries. Where there is commonality, though, post-process searches should be used. This not only reduces the workload of each query but also reduces the likelihood of users reaching their search limits, especially if the dashboard has a large number of panels built on a common search.

Want to learn more about optimizing Splunk dashboards? Contact us today!


Inspyrus 3.0: Faster, Better, and in the Cloud

Inspyrus’ Invoice Automation product recently released version 3.0 with a slew of new features that make an excellent product even better.  This release comes with over 30 new features and a 10x performance increase, which makes a solid product even faster.  Here is a brief list of some of the top new features in 3.0.

Inspyrus Routing Engine

One of the biggest changes to the product has been the introduction of their own routing engine.  By replacing Oracle BPM with their own engine, Inspyrus was able to improve the speed and stability of routing invoices while also making it easier to configure.

Supplier Central

Inspyrus can now retrieve vendor contact information from the ERP system to make it easier to enable new vendors.  But that is not all!  Because Inspyrus has access to vendor contact information, the product can also send the vendor an invitation to join Supplier Central.

Mobile App

The Inspyrus mobile app continues to improve and provide additional functionality.  For example, coders now have the ability to enter charge account, project coding (EBS), or cost factor (JDE) information from the app.

Forms Recognition

The heart of any good AP solution is automation.  To improve the success of Oracle Forms Recognition, Inspyrus continues to refine its extraction engine to increase accuracy.

Inspyrus continues to make under-the-cover improvements in order to ensure stability and increased speed.  They have switched cloud providers in order to improve stability and scale.  These new features prove that Inspyrus will continue to improve their product in terms of features, speed, and ease of use.  And with release 3.1, Inspyrus will begin to use ML (Machine Learning) in order to determine how to code an invoice based on previous coding done for that vendor.  We cannot wait to see what other features they have in store for 3.1!

Want to learn more about new Inspyrus 3.0 features? Contact us today!


Prepayments Feature Released for Inspyrus Invoice Automation

Prepayments have long been a feature of Oracle EBS and other ERP systems.  Prepayments allow advance payment to a supplier.  Later, when the invoice is received from the supplier, the prepayment can be applied against that invoice.

In the latest release of the Inspyrus Invoice Automation Solution, a prepayments feature has been added for Oracle EBS.

The new prepayments tab in the UI has the ability to Add, Edit or Delete a Prepayment.   When adding a prepayment, simply click the Add button and a list of all available prepayments for the supplier is displayed.

The user can select a prepayment and edit how much of it should be applied to this particular invoice.

This amount is then displayed in the Prepayments tab in Inspyrus.

The prepayment information is then sent to Oracle EBS when the invoice record is created and the prepayment applied.

Contact TekStream for more information about the Inspyrus Invoice Automation Solution.


Press Release: TekStream Makes 2018 INC. 5000 List For Fourth Consecutive Year


For the 4th Time, Atlanta-based Technology Company Named One of the Fastest-growing Private Companies in America with Three-Year Sales Growth of 129%

ATLANTA, GA, August 16, 2018 – Atlanta-based technology company, TekStream Solutions, is excited to announce that for the fourth time in a row, it has made the Inc. 5000 list of the fastest-growing private companies in America. This prestigious recognition comes again just seven years after Rob Jansen, Judd Robins, and Mark Gannon left major firms and pursued a dream of creating a strategic offering to provide enterprise technology software, services, solutions, and sourcing. Now, they’re a part of an elite group that, over the years, has included companies such as Chobani, Intuit, Microsoft, Oracle, Timberland, Vizio, and Zappos.com.

“Being included in the Inc. 5000 for the fourth straight year is something we are truly proud of as very few organizations in the history of the Inc. 5000 list since 2007 can sustain the consistent and profitable growth year over year needed to be included in this prestigious group of companies,” said Chief Executive Officer, Rob Jansen. “Our continued focus and shift of our services to helping customers leverage Cloud-based technologies and Big Data solutions have provided us with a platform for continued growth and allowed TekStream to provide extremely value-added solutions to our portfolio of industry-leading customers.”

This year’s Inc. 5000 nomination comes after TekStream has seen a three-year growth of over 129%, and 2018 is already on pace to continue this exceptional growth rate. In addition, the company has added 30% more jobs over the last 12 months.

“We’ve seen a significant wave of cloud digital transformation requests coming into this year. Customers acknowledge that running legacy software systems in their own data centers managed by expensive full-time resources is a cost model that is no longer competitive. Moving systems to cheaper cloud platforms, dropping on-premise costs, and invoking the unlimited innovation potential provided by the cloud allows customers to refocus and enhance their core business. They all know they have to make the journey, they need a team to help show them the way.” stated Judd Robins, Executive Vice President of Sales.

To qualify for the award, companies had to be privately owned, have been established in the first quarter of 2014 or earlier, have experienced two-year sales growth of more than 50 percent, and have garnered revenue between $2 million and $300 million in 2017.

“The increased focus on hiring both commercially and federally along with a changing candidate market has necessitated the need for creative and scalable recruiting solutions,” stated Mark Gannon, Executive Vice President of Recruitment. “This award and recognition is a testament to our team’s adaptive ability to deliver superior recruiting service across multiple solution offerings. We look forward to continued growth and successes in 2019!”

TekStream
TekStream is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and the market challenges to create “anytime, anywhere access” to data for employees, partners, and customers. TekStream’s IT consulting solutions combined with its specialized IT recruiting expertise helps businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream Solutions, visit www.tekstream.com or email Shichen Zhang at shichen.zhang@tekstream.com.

Customers Realize 30%+ Annual Savings on their AWS Cloud Spend with TekStream’s New AWS Cloud Optimization Solution

TekStream Solutions Helps Companies Realize Savings of 30% or More on their AWS Cloud Spend with TekStream’s new AWS Cloud Optimization Solution

ATLANTA, GA, August 10, 2018 — Atlanta-based technology company, TekStream Solutions, is excited to announce the release of its AWS Cloud Optimization Solution. TekStream’s AWS Cloud Optimization Solution is specifically designed to help companies contain and optimize their AWS Cloud spend. Companies are moving from traditional, on-premise or datacenter hosted environments to take advantage of Cloud solutions to run critical business applications, maintain test and development environments, and devise geographic disaster recovery solutions.

Many of these companies have found that cost management is difficult without the appropriate controls and oversight to control how Cloud-based systems are utilized. TekStream’s AWS Cloud Optimization provides the insight, control, and guidance through a mix of software and Amazon Cloud experts that collect, analyze, recommend, and correct AWS account details to ensure the optimization of Cloud spend. TekStream’s AWS Cloud Optimization Solution provides software composed of Collectors and Reapers backed by analytics to make initial recommendations helping clients cut Cloud costs. Standard focus areas for optimization include:

• Orphaned Snapshots
• Snapshots backed by AMIs
• Orphaned EBS volumes
• PIOPS on orphaned volumes
• EBS volumes on stopped instances
• Unused instances
• Legacy instance types
• Underutilized instances

“Our continued focus and shift of our services to helping clients leverage Cloud-based technologies and Big Data solutions have revealed significant areas of cost savings in our clients’ Cloud Infrastructure spend with platforms such as AWS and Oracle. While many clients have rapidly adopted AWS and other Cloud platforms, the approach has mainly been a traditional datacenter one versus leveraging the true value of today’s leading Cloud platforms like AWS. Our unique AWS Cloud Optimization solution provides the necessary analytics along with expert architecture review to help our clients optimize their AWS spend as well as set up their Infrastructure for future growth. The result in many cases is a 25%-30% reduction in annual spend for our clients.” – Rob Jansen, CEO of TekStream Solutions

Troy Allen, Vice President of Emerging Technologies of TekStream Solutions, says, “TekStream has years of experience helping clients manage their applications across a wide variety of infrastructures. In those years, we have seen an increase for Public Cloud environments like AWS. Even early adopters have not developed the best practices needed to manage their infrastructure spend as efficiently as they could. TekStream’s team of AWS experts have worked for some of the largest Cloud adopters bringing those experiences and proven best practices to our own clients. Technology and Knowledge are the cornerstones of our AWS Cloud Optimization Solution.”

TekStream Solutions
TekStream Solutions is an Atlanta-based technology solutions company that specializes in addressing the company-wide IT problems faced by enterprise businesses, such as consolidating and streamlining disparate content and application delivery systems and the market challenges to create “anytime, anywhere access” to data for employees, partners, and customers. TekStream’s IT consulting solutions combined with its specialized IT recruiting expertise helps businesses increase efficiencies, streamline costs, and remain competitive in an extremely fast-changing market. For more information about TekStream Solutions, visit www.tekstream.com or email Shichen Zhang at shichen.zhang@tekstream.com.

# # #

3 Crucial Change Management Models to Help Your Company Adopt New Technology

In the Technology Age we are constantly bombarded with new initiatives, new technology, and the latest Shiny New Toy.  Generally, our leaders set us on a course of technology adoption with minimal consideration or involvement from the individuals who must use this Shiny New Toy.  As a Technology Provider, we find this consideration (or the lack thereof) a critical point for the success or failure of a given endeavor.  Management teams that support the use of a project- or organization-level Change Management strategy can dramatically improve system adoption, investment realization, employee satisfaction, and performance.  In the sections below you will find multiple approaches that have been successfully adapted and used in technology engagements.  The specific model is not important, but using one may just be the key to consistent success.

Navigating organizational change at a project level cannot be done in a vacuum.  It must consider and involve the full project team and any external influencers.  Consistently articulate and support the relationship between the project team and related business resources. Define the roles and responsibilities. Work deliberately to create a partnership with a singular goal in mind—delivering the intended results and outcomes of the project.

There are a number of change management models that can be used to successfully assess and implement a change. Each of these models includes a number of activities or practices to follow in making organizational or technology changes.

McKinsey 7-S Framework

The McKinsey 7-S Framework, developed by Thomas Peters and Robert Waterman, is a popular approach.  It provides methods for assessing the current state of an organization, the future state, and what needs to change.  It addresses seven key factors: shared values, strategy, structure, systems, style, staff, and skills.  With a technology change, this structure could be used to address the impact on the business team based on its core values, skill sets, strategy, etc.  The objective is to look at the various aspects to gain an overall perspective of the change and its true value.

Kotter’s Eight-Step Change Model

A second type of change management model is Kotter’s Eight-Step Change Model, developed by Harvard Business School professor John P. Kotter.  This eight-step process is fairly simple and straightforward.  It focuses on acceptance of and preparation for the change, rather than the specific change itself.  The idea here is to ease the transition into a new approach.

ADKAR Model

Last but not least is the ADKAR model, created by Jeff Hiatt (founder of Prosci).  The ADKAR model uses a bottom-up approach that focuses on the individuals who need to change.  It is not a strict set of steps, but more a set of goals used to effectively plan for a change.  It focuses on individuals’ needs rather than the technical aspects to achieve success.

Change management at the project level is important for realizing the benefits and value of a particular initiative.  It is a structured approach to creating logical strategies and processes that improve employee adoption and usage.  It is a way to help achieve the intended benefits, realize ROI, mitigate cost and risk, and create value.  It is an important tool that can make projects and initiatives more successful.

Take time to explore the various models and take what is valuable to you.  Stay flexible when following a model and adjust the approach to your team rather than following it too rigidly.  At a personal and organizational level, the way change is perceived and accommodated is unique to your cultural norms.  There is no right or wrong approach; having one is the key.

Feedback is always welcome.  We would love to hear how your organization approaches change management with technology projects.

Have change management questions? Contact us today!


Considerations for Moving From On-Prem to Cloud

Professionals today are continually bombarded with information about the cloud. It seems as though every other business is using cloud-based software, leaving those still running on-premise solutions wondering whether they, too, should switch. Organizations are flocking to cloud solutions because they offer many advantages over on-prem deployments. Here are some of the most commonly cited reasons cloud setups are better.

COST EFFECTIVE

Cloud solution providers generally charge some kind of recurring fee. This fee may be paid annually or monthly and can either be a per-user cost or a cost that includes a set number of accounts. In return for this charge, you will be able to set up accounts until you reach your maximum, managing password resets and account removals and additions through an administrative portal.

Rather than depending on CDs or a website download to install the product on every device, you will have software that is ready to use. Licensing charges are included in the purchase price, so your IT team will no longer need to keep track of software licenses to ensure that all of your installed software has been properly purchased.

TECHNICAL SKILLS

With so many small businesses and startups in the business world today, technical support is no longer optional. An SMB usually cannot afford a full-time IT support person, let alone the high cost of a server administrator. That means relying on local firms to offer help on an as-needed basis, which can come with a hefty per-hour price tag. As a result, the organizations that do have on-premise software will often depend on remote assistance, which is outsourced by means of the cloud.

With cloud software, technical support is generally handled by the provider, whether by phone, email, or a help desk ticket. These providers have the revenue base to pay the high salaries commanded by today’s best IT professionals, both at the server level and at the customer support tier. Most smaller organizations simply could not afford this kind of expertise on a full-time basis.

SCALABILITY

Every business plans to grow over time, and cloud software offers the scalability required to handle that growth. When a new employee joins the staff, a business using cloud software can simply add another user to its account. When an organization maxes out its logins, a higher-tier account can usually be requested with minimal effort on the part of the business.

Another advantage of cloud solutions is that they generally gain new features that typical on-premise setups do not. As customers express interest in being able to accomplish more with their software, providers add these features, making them available either automatically or with an optional account upgrade. Cloud solutions are also continually working to integrate with other software applications, and these integrations make it simpler for organizations to manage everything in one place.

AVAILABILITY

Today’s workforce is increasingly mobile, working from home, hotel rooms, coffee shops, and airports. Cloud software means that these workers can access their files wherever they are, using a laptop, smartphone, or tablet. Even while on vacation, teams can keep in touch and keep projects moving forward through the cloud.

One of the best things about cloud solutions is that professionals no longer need to remember to bring files with them when they leave the office. A presentation can be delivered directly from a user’s phone. Applications that handle billing, cost estimating, and project management can be accessed during meetings, enabling attendees to get the information they need without making everyone wait until the meeting is over and everyone has returned to their offices.

RELIABILITY

If you have ever endured a server outage, you know how damaging it can be on a number of levels. Your employees are forced to either wait around for the situation to be resolved or go home for the day and leave your workplace unmanned. If this happens too often, you will lose customers and even employees, as well as hurt your well-deserved reputation as a dependable business.

Cloud providers treat reliability as a vital piece of their business model. They make it their mission to ensure customers have access to the files and applications they need at all times. If an outage ever happens, many cloud providers have built-in failovers that take over, with customers never aware an issue has occurred. Even where such a failover does not exist, a cloud provider still has access to specialists who can get systems back up considerably more quickly than an SMB could with an on-premise server.

SECURITY

Security is a continuous worry for organizations, with reports of breaches becoming commonplace. Cloud software promises a high level of security, including data encryption and strong password requirements. These measures help protect a business’s data, lessening the risk of a breach that could cost money and harm customer trust in the organization.

Organizations that store sensitive data, such as medical records or bank account information, should look for a cloud provider that offers these protections. There are now cloud providers that specialize in HIPAA compliance, for example, so a medical practice could benefit from the specialists on staff at one of those providers who can help ensure that health data stays safe.

DISASTER RECOVERY

What would happen to your business if a natural disaster struck your building or data center? Imagine a scenario where you came in one morning to discover a fire had rendered your offices unusable. Would you be forced to shut everything down for the duration, or would your employees be able to start working again promptly?

Cloud software helps disaster-proof your business, ensuring your employees can work from home or a temporary office if for any reason they cannot work in the office. Cloud providers typically have backup plans for their own servers to protect against disasters, so the software and files you use every day will be available even if an issue strikes one of their data centers. Before you pick a provider, do not hesitate to ask about the company’s disaster recovery plans to make sure you will be covered.

Thinking about moving from On-Prem to Cloud? Contact us today!


Value of Post Go Live Support

Businesses are constantly striving to gain competitive advantage and efficiencies through process improvement and increased productivity. Companies like Oracle, IBM and SalesForce provide the technology and applications to businesses to enable them to automate and improve their performance. Businesses often invest a lot of money to customize these applications to fit their business need.

Once a business has purchased the application whether it is On Premises or Cloud Based, it often looks to an outside vendor to customize the application to their organization’s needs. The vendor uses highly skilled resources who work with the business users and internal IT team to perform functional and technical analysis. The vendor will design and implement the functionality required by the business. Once the implementation of the customization to the application has been completed, the vendor will hand over the customized application to the internal IT team to continue the support and maintenance of the application.

At this point most businesses are faced with the challenge of continuing to support and enhance the customized application as the users become familiar with the application and want to make changes. More and more companies are focusing on their core business and relying on a vendor or a pool of specialized vendors to provide the support services for their IT infrastructure and applications. Today there are many different IT support models that are available. Some of the models for post go live support are as follows:

  1. Time and Material
  2. Retention Model
  3. Managed Services

Time and Material

The Time and Material model may be used when the business has a small IT team or an IT team that is not familiar with the technology that needs to be supported. The vendor’s support resource becomes an extension of the business’s IT team, providing day-to-day support. Businesses often use this model to continue working on enhancements, since they have contracted the support resources for 100% of the time and have to utilize them. This model may continue until the internal IT team has the skills to take over all or part of the support functions from the vendor’s support resources. This model is expensive, as it is almost like hiring a full-time contract resource with the right skillset. The advantage is continuity, as one of the vendor’s resources who worked on the implementation is often the resource who supports the application. The vendor will also provide a backup resource from its pool of resources when the dedicated resource is on vacation or is not available.

Retention Model

The Retention Model provides flexibility when it comes to the budget and could be a long-term option for providing application support. This model is made up of a fixed-fee component and a time and material component.

  1. Fixed fee per month for a fixed number of hours per month
    1. Fixed fee covers a fixed number of support hours per month
    2. The Fixed Fee is use it or lose it and is paid to the Vendor whether or not all the support hours are utilized
    3. It is up to the Business to provide work to cover the fixed hours per month
  2. Time and Material for all support hours above the fixed number of hours per month
    1. Utilized when additional support hours above the fixed number of hours per month are needed by the business
    2. Prior approval must be received from the business for any hours worked above the fixed number of hours per month
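
As a simple illustration with hypothetical numbers: if the agreement covers 40 fixed hours per month and the business uses 55 hours, it pays the fixed monthly fee plus 15 hours at the agreed time-and-material rate; if it uses only 30 hours, the full fixed fee is still owed and the 10 unused hours do not carry over.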

The advantages of the Retention Model are as follows:

  1. Better control over the support budget as the business does not require a dedicated resource for support
  2. Level of Support can be tailored according to the need of the business
    1. Business may start with 80 fixed support hours per month after Go Live
    2. After 3 months the fixed support hours per month may be lowered to 40 hours
  3. Continuity as the vendor has a pool of resources with the skill set to provide support
  4. Different vendor resources can provide support depending on the issues which may be environment related, infrastructure related or application related

The business must constantly prioritize the support work to make sure the vendor’s support resource is fully utilized and is addressing the high-priority issues within the fixed number of hours per month.

Managed Service Model

The Managed Services Model is a broader support model that encompasses environment support, infrastructure support, and application support. This is a more expensive model which covers broad areas of support requiring a diverse skillset and experience. Instead of contracting with different vendors to provide support for the environment, infrastructure, and applications, the business contracts with a single vendor. The single vendor takes on the responsibility for supporting all the areas of support using their own team or specialized partners. The business deals with a single vendor who tailors the Managed Services support with a combination of Fixed Fee and Time & Material options for the different support services based on the needs of the business. This model not only brings together diverse skills from different vendors, but it also brings together specialized monitoring tools and processes from different vendors under a single umbrella. The single vendor takes on the responsibility and overhead of coordinating the work with all the partners who provide support. Billing is also simplified, as the business deals with the single vendor for all billing issues.

Contact us today for more information on the value of post go live support.


New features in WebCenter Enterprise Capture 12c

WebCenter Enterprise Capture 12c was released in late 2015 and with it came the addition of several key new features.  This article explores several of the significant improvements and new features offered by the newest product version.

1.     Release Processes

One of the new features added in WebCenter Enterprise Capture is the concept of release processes in the client profiles.  In 11g, each capture client profile had a single specified process that it was sent to upon release.  In configuration, you could define which process the profile mapped to (a specific Conversion, Recognition or Commit profile), but only one could be specified.  In WebCenter Enterprise Capture 12c, the ability to define multiple release processes was added.  This allows a greater amount of flexibility in the capture workflows that can be created by allowing a user to route a batch to one of any number of predefined processors.  This also reduces the number of capture profiles to be configured because a single profile can be used to route to any number of post-processors.

To choose which release process will be used for a particular batch, a user simply selects from the predefined options in the Release drop down menu.

Figure 1 Example of a “Commit” release process

2.     Unlock feature

In Capture 11g, there was no built-in ability to unlock a batch.  Administrators would have to configure a specific “unlock batch” client profile, but the process was not intuitive for end users.  This resulted in many abandoned locked batches and unnecessary work for system administrators.  In WEC 12c, the Unlock feature is included OOTB and available from every capture profile.  This makes unlocking batches simple.

Figure 2 Unlock batch button

3.     Attachment Types

One of the new features of WEC 12c is the support for attachments and attachment types.  Administrators can define attachment types for a workspace.  Additional documents can be added to a batch as attachments of the main documents.  Separate workflow and commit paths can be defined for attachments.

4.     External document conversion

Another useful feature of WEC 12c is support for the use of external conversion programs for document conversion.  In the definition of a Conversion Job, there is now the ability to specify External Conversion, including a program and command-line parameters to be used.

A common pain point felt by users of WEC 11g was that the Outside In conversion engine used by WEC often struggled when trying to convert PDFs with embedded fonts.  A workaround was to use the Ghostscript conversion engine to convert the documents instead of the native Outside In engine.  However, this could only be accomplished for documents imported via email.  With the new ability to specify an external conversion engine in the document conversion processor, the need for a custom script to do conversions goes away, and external conversion can be used for all documents regardless of the ingestion method.

5.     Desktop client

With the upgraded version comes an upgraded capture client with a lot of new features.  First and foremost, the client is no longer a Java applet but instead a desktop client with a standalone installer.  While the client does use Java, the required libraries now come packaged within the client itself.  The benefit here is two-fold.  First, there is no prerequisite to have a certain version of Java installed.  Second, there is no longer a dependency on a browser that supports Java applets.  Most modern browsers, such as Firefox and Chrome, no longer support Java applets.  Internet Explorer does support Java but requires many setup steps to work correctly.  All this hassle goes away in WEC 12c with the introduction of a desktop client.

6.     Metadata search

One of the newly introduced features of the 12c client is the ability to search for a document within a batch based on metadata.  To use this feature, users enable the “Find Document” option from the batch menu.  Then a search box appears allowing users to search through all the metadata fields in a batch to find matches.

Figure 3 Find Document batch menu option

Figure 4 Metadata search pane

7.     View document in native application

Finally, Capture 12c has added the ability to view non-image documents in their native applications.  This includes PDFs, Word documents, emails, etc.

To use this feature, simply right-click the document and choose “View document in associated application” from the menu.

Figure 5 View document in native application

Contact TekStream for help upgrading to WebCenter Enterprise Capture 12c to make use of these and other great features.
