3 Ways to Migrate Custom Oracle Middleware Applications to the Cloud

Understanding and classifying middleware applications is one of the most critical and complex tasks of any Cloud adoption process. No doubt, your company runs a diverse mix of applications: off-the-shelf products, certainly, but custom-built and legacy applications as well.

Whether you are considering migrating your Oracle solution to Amazon Web Services (AWS), Oracle Cloud Infrastructure (OCI), or another Cloud platform, each of your legacy applications will have its own migration requirements that must be accounted for during the effort.

3 Methods for Migrating Your Middleware Applications to a Cloud Environment

Re-Hosting: Lift & Shift Migrations

The first method for migrating your middleware applications to the Cloud applies to applications that rely on traditional server/compute technologies, or that, given their complexity, won't benefit from refactoring to newer technologies like serverless or microservices.

For these applications, we recommend leveraging the Infrastructure as a Service (IaaS) offerings provided by AWS and OCI (depending on your preferred platform). With these IaaS offerings, you can re-create the compute/servers required to run these applications just as you would in a traditional data center. You can also layer on new Cloud tools and concepts like:

  • On-demand pricing
  • Next-generation networking and security
  • Additional service integrations, like Content Delivery Networks or API gateways

As a note, many Oracle middleware applications will fall into this category. Most off-the-shelf WebLogic applications use stateful sessions for clustering and will require additional effort to integrate with newer Cloud concepts like auto-scaling.

Re-Platform: Migrating Applications to a Managed Platform

For this next method, you're going to focus on applications that can (and should) be moved to a managed platform. AWS has several services available to support the deployment of custom applications built on various tech stacks (Java, PHP, .NET, etc.).

In these instances, AWS takes over the provisioning, management, and auto-scaling of compute and services, along with network compliance. This can significantly reduce operational costs, as companies no longer need to maintain servers, operating systems, networks, and so on. It also eases the migration itself by removing infrastructure components from the mix.

Re-Architect: Recreating an Application for the Cloud

While many applications can be migrated via a "Lift-and-Shift" approach or through a managed platform, others may need to be completely overhauled to function correctly in the Cloud. "Re-thinking" or "re-architecting" these applications lets your team ensure they reach their full potential and realize the benefits of being deployed on the Cloud.

For example, you can explore opportunities to break down monolithic apps into smaller "micro" services and utilize serverless technologies like AWS Lambda, Amazon Simple Notification Service (SNS), or Amazon Simple Queue Service (SQS) to improve performance. At the same time, you can replace traditional Oracle RDBMS data sources with newer concepts like Data Lakes, Object Storage, or NoSQL databases.
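
To make that concrete, here is a minimal sketch (an illustration of the pattern, not a prescribed design) of one extracted "micro" service: an AWS Lambda handler in Python consuming messages from an SQS queue. The process_order function and the message fields are hypothetical placeholders.

import json

# Standard AWS Lambda entry point for an SQS-triggered function. The event
# shape ("Records", each with a "body") is the standard SQS trigger payload.
def handler(event, context):
    for record in event["Records"]:
        message = json.loads(record["body"])  # assumes JSON message bodies
        process_order(message)

def process_order(message):
    # Hypothetical business logic carved out of the monolith.
    print("processing order", message.get("orderId"))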

The Migration Support You Need

Regardless of your Cloud platform of choice, careful consideration needs to be given to how you are going to migrate your legacy middleware applications. You can also use your upcoming migration as an opportunity to audit your applications and determine whether any can be sunset or rolled into a new system or application to drive further efficiency.

Have questions? TekStream has deep experience deploying enterprise-grade Oracle middleware applications both on traditional data centers as well as cloud environments like AWS or OCI. As part of any migration, we utilize that experience to help classify applications and apply best practices to the deployment of those applications in the Cloud.

Are you looking for more insight, tips, and tactics for how best to migrate your legacy Oracle solution to the Cloud? Download our free eBook, "Taking Oracle to the Cloud: Key Considerations and Benefits for Migrating Your Enterprise Oracle Database to the Cloud."

If you’d like to talk to someone from our team, fill out the form below.

Solution-Driven CMMC Implementation – Solve First, Ask Questions Later

We’re halfway through 2020 and we’re seeing customers begin to implement and level up within the Cybersecurity Maturity Model Certification (CMMC) framework. Offering a cyber framework for contractors doing business with the DoD, CMMC will eventually become the singular standard for Controlled Unclassified Information (CUI) cybersecurity.

An answer to the limitations of NIST 800-171, CMMC requires attestation by a Certified Third-Party Assessor Organization (C3PAO). Once CMMC is in full effect, every company in the DoD's supply chain, including Defense Industrial Base (DIB) contractors, will need to be certified to work with the department.

As such, DIB contractors and members of the larger DoD supply chain find themselves asking: when should my organization start the compliance process, and what is the best path to achieving CMMC compliance?

First, it is important to start working toward compliance now. Why?

  • Contracts requiring CMMC certification are expected as early as October, and if you wait to certify until you see an eligible contract, it will be too late.
  • You can currently treat CMMC compliance as an "allowable cost": the cost of becoming compliant (tools, remediation, preparation) can be expensed back to the DoD. The amount of funding allocated to defray these expenses and the allowable thresholds are still unclear, but the overall cost is likely to exceed initial estimates, and, as with any federal program, going back for additional appropriations can be challenging.

As far as the best path to achieving CMMC goes – the more direct, the better.

Understanding Current Approaches to CMMC Compliance

CMMC is new enough that many organizations have yet to go through the compliance process. Broadly, we’ve seen a range of recommendations, most of which start with a heavy upfront lift of comprehensive analysis.

The general process is as follows:

  1. Assess current operations for compliance with CMMC, especially as it relates to its extension of NIST 800-171 standards.
  2. Document your System Security Plan (SSP) to identify what makes up the CUI environment. The plan should describe system boundaries, operating environments, the process by which security requirements are implemented, and relationships with and/or connections to other systems.
  3. Create a logical network diagram of your network(s), including third-party services, remote access methods, and cloud instances.
  4. Compile an inventory of all systems, applications, and services: servers, workstations, network devices, mobile devices, databases, third-party service providers, cloud instances, major applications, and others.
  5. Document Plans of Action and Milestones (POAMs). The POAMs should spell out how system vulnerabilities will be resolved and existing deficiencies corrected.
  6. Execute POAMs to achieve full compliance through appropriate security technologies and tools.

This assessment-first approach, while functional, is not ideal.

In taking the traditional approach to becoming CMMC compliant, the emphasis is put on analysis and process first; the tools and technologies to satisfy those processes are secondary. By beginning with a full compliance assessment, you spend time guessing where your compliance issues and gaps are. And by deprioritizing technology selection, potentially relying on multiple tools, you risk granular processes that worsen the problem of swivel-chair compliance (e.g., having to go to multiple tools and interfaces to establish, monitor, and maintain compliance and the required underlying cybersecurity). The net effect is more work for your compliance and security team when the time comes to architect an integrated, cohesive compliance solution.

Then, the whole process has to be redone every time a contractor's compliance certification comes up for renewal.

Big picture, having to guess at your compliance gaps upfront can lead to analysis paralysis. By trying to analyze so many different pieces of the process and make sure they’re compliant, it is easy to become overwhelmed and feel defeated before even starting.

With NIST 800-171, even though it has been in effect since January 1, 2018, compliance across the DIB has not been consistent or widespread. CMMC is effectively forcing the compliance mandate by addressing key loopholes and caveats in NIST 800-171:

  • You can no longer self-certify.
  • You can no longer rely on applicability caveats.
  • There is no flexibility for in-process compliance.

So, if you’ve been skirting the strictness of compliance previously, know you can no longer do that with CMMC, and are overwhelmed with where to even begin, we recommend you fully dive into and leverage a tool that can be a single source of truth for your whole process – Splunk.

Leverage a Prescriptive Solution and Implementation Consultancy to Expedite CMMC Compliance

Rather than getting bogged down in analysis paralysis, accelerate your journey to CMMC compliance by implementing an automated CMMC monitoring solution like Splunk. Splunk labels itself "the Data-to-Everything Platform"; it is purpose-built to act as a big data clearinghouse for all relevant enterprise data, regardless of context. In this case, as the leading SIEM provider, Splunk is uniquely able to provide visibility into compliance-related events, since the overlap with security-related data is comprehensive.

Generally, the process will begin with ingesting all available information across your enterprise and then implementing automated practice compliance. Through that implementation process, gaps are naturally discovered. If there is missing or unavailable data, processes can then be defined as “gap fillers” to ensure compliance.

The automated practice controls are then leveraged as Standard Operating Procedures (SOPs) that are repurposed into applicable System Security Plans (SSPs), Plans of Action and Milestones (POAMs), and business plans. In many cases, much of the specific content for these documents can be generated from the dashboards that we deliver as a part of our CMMC solution.

The benefits realized by a solution-driven approach, rather than an analysis-driven one, are many:

  1. Starting with a capable solution reduces the overall time to compliance.
  2. Gaps are difficult to anticipate, as they are often not discovered until the source of data is examined (e.g., one cannot presume that data includes a user, an IP address, or a MAC address until the data is exposed). Assumption-driven analysis is cut short.
  3. Automated practice dashboards and the collection of underlying metadata (e.g., authorized ports, machines, users, etc.) can be harvested for document generation.
  4. Having a consolidated solution for overall compliance tracking across all security appliances and technologies provides guidance and visibility to C3PAOs, quelling natural audit curiosity creep, and shortening the attestation cycle.

Not only does this process get you past the analysis paralysis barrier, but it reduces non-compliance risk and the effort needed for attestation. It also helps keep you compliant – and out of auditors’ crosshairs.

Let Splunk and TekStream Get You Compliant in Weeks, Not Months

Beyond the guides and assessments consulting firms are offering for CMMC, TekStream has a practical, proven, and effective solution to get you compliant in under 30 days.

By working with TekStream and Splunk, you’ll get:

  • Installation and configuration of Splunk, the CMMC App, and Premium Apps
  • Pre/post CMMC assessment consulting to ensure you meet or exceed your CMMC level requirements
  • Optional MSP/MSSP/compliance monitoring services to take away the burden of data management, security, and compliance monitoring
  • Ongoing, automated monitoring of each practice, summarized in a central auditing dashboard
  • Comprehensive TekStream ownership of your Splunk instance, including implementation, licensing, support, outsourcing (compliance, security, and admin), and resource staffing

If you’re already a Splunk user, this opportunity is a no brainer. If you’re new to Splunk, this is the best way to procure best-in-class security, full compliance, and an operational intelligence platform, especially when you consider the financial benefit of allowable costs.

If you’d like to talk to someone from our team, fill out the form below.

CMMC Maturity – Understanding What is Needed to Level Up

At its core, the Cybersecurity Maturity Model Certification (CMMC) is designed to protect mission-critical government systems and data and has the primary objective of protecting the government’s Controlled Unclassified Information (CUI) from cyber risk.

CMMC goes beyond NIST 800-171 to require strict adherence to a complex set of standards, an attestation, and a certification by a third-party assessor.

The CMMC framework defines five maturity (or "trust") levels. As you likely know, the certification level your organization needs to reach is largely situational, dependent on the kinds of contracts you currently have and will seek out in the future.

The CMMC compliance process is still so new that many organizations are just prioritizing what baseline level they need to reach. For most, that’s level 3. With that said, there is certainly value to gain from an incremental approach to leveling up.

Why Seek CMMC Level 4 or 5 Compliance, Anyway?

First, let’s define our terms and understand the meaning behind the jump from Level 3 up to 4 or 5. CMMC trust levels 3-5 are defined as:

Level 3: Managed

  • 130 practices (including all 110 from NIST 800-171 Rev. 1)
  • Meant to protect CUI in environments that hold and transmit it
  • All contractors must establish, maintain, and resource a plan that includes their identified domains

Level 4: Reviewed

  • An additional 26 practices
  • Proactive, focusing on the protection of CUI from Advanced Persistent Threats (APTs) and encompassing a subset of the enhanced security requirements from Draft NIST SP 800-171B (as well as other cybersecurity best practices). In Splunk terms, that means a shift from monitoring and maintaining compliance to proactively responding to threats, which puts an emphasis on SOAR tools such as Splunk Phantom to automate security threat response in specific practice categories.
  • All contractors should review and measure their identified domain activities for effectiveness

Level 5: Optimizing

  • An additional 15 practices
  • An advanced and proactive approach to protecting CUI from APTs
  • Requires a contractor to standardize and optimize process implementation across their organization. In Splunk terms, this means expanding to more sophisticated threat identification algorithms, including tools such as User Behavior Analytics.

The benefits of taking an incremental approach and making the jump up to Level 4 (and potentially 5 later) are three-fold:

  1. It can make your bids more appealing. Even if the contracts that you are seeking only require Level 3 compliance, having the added security level is an enticing differentiator in a competitive bidding market.
  2. You can open your organization up to new contracts and opportunities that require a higher level of certification and are often worth a lot more money.
  3. It puts in place the tools and techniques to automatically respond to security-related events. This shortens response times to threats, shortens triage, increases accuracy and visibility, automates tasks that would typically be done manually by expensive security resources, and makes you safer.

Plus, with "allowable costs" in the mix, defraying the spend on compliance back to the DoD gives you an added financial benefit as well.

How Do You Move Up to the Higher CMMC Trust Levels?

Our recommendation is to start small and at a manageable level. Seek the compliance level that matches your current contract needs. As was highlighted earlier, for most, that is Level 3.

By the time you have reached Level 3, you are already using a single technology solution (like Splunk) or a combination of other tools.

Getting to Level 4 and adhering to the additional 26 practices is an incremental process of layering another tool, technique, or technology on top of all your previous work. It's additive.

For TekStream clients, that translates to adding Splunk Phantom to your Splunk Core and Enterprise Security solution. It’s not a massive or insurmountable task, and it is a great way to defray costs associated with manual security tasks and differentiate your organization from your fellow DIB contractors.

TekStream Can Help You Reach the Right Certification Level for You

Ready to start your compliance process? Ready to reach Level 3, Level 4, or even Level 5? Acting now positions you to meet DoD needs immediately and opens the door for early opportunities. See how TekStream has teamed up with Splunk to bring you a prescriptive solution and implementation consultancy.

If you’d like to talk to someone from our team, fill out the form below.

CMMC Response – Managing Security & Compliance Alerts & Response for Maturity Levels 4 and 5

The Cybersecurity Maturity Model Certification (CMMC) is here to stay. The new compliance model brings increased complexity compared to NIST 800-171, and organizations have to be prepared not only to navigate the new process but also to reach the level that makes the most sense for them.

Level 3 (Good Cyber Hygiene, 130 Practices, NIST SP 800-171 + New Practices) is the most common compliance threshold that Defense Industrial Base (DIB) contractors are seeking out. However, there can be significant value in increasing to a Level 4 and eventually a Level 5, especially if you’re leveraging the Splunk for CMMC Solution.

Thanks to the DoD’s “allowable costs” model (where you can defray costs of becoming CMMC compliant back to the DoD), reaching Level 4 offers significant value at no expense to your organization.

Even if you’re not currently pursuing contracts that mandate Level 4 compliance, by using TekStream and Splunk’s combined CMMC solution to reach Level 4, you end up with:

  • A winning differentiator against the competition when bidding on Level 3 (and below) contracts
  • The option to bid on Level 4 contracts worth considerably more money
  • Automated security tasks with Splunk ES & Phantom
  • An excellent security posture with Splunk ES & Phantom

And all of these benefits fall under the “allowable costs” umbrella.

The case for reaching Level 4 is clear, but there are definitely complexities as you move up the maturity model. In this blog, we want to zero in on a specific one: the alert and response setup needed at Level 4 or 5, and how a SOAR solution like Splunk Phantom can get you there.

How Does Splunk Phantom Factor into Levels 4 and 5?

Level 4 is 26 practices above Level 3 and 15 practices below Level 5. Level 4 focuses primarily on protecting CUI and security practices that surround the detection and response capabilities of an organization. Level 5 is centered on standardizing process implementation and has additional practices to enhance the cybersecurity capabilities of the organization.

Both Level 4 and Level 5 are considered proactive, and 5 is even considered advanced/progressive.

Alert and incident response are foundational to Levels 4 and 5, and Splunk Phantom is a SOAR (Security Orchestration, Automation, and Response) tool that helps DIB contractors focus on automating the alert process and responding as necessary.

You can think about Splunk Phantom in three parts:

  1. SOC Automation: Phantom gives teams the power to execute automated actions across their security infrastructure in seconds, rather than the hours it would take manually. Teams can codify workflows into Phantom's automated playbooks using the visual editor or the integrated Python development environment (see the sketch after this list).
  2. Orchestration: Phantom connects existing security tools to help them work better together, unifying the defense strategy.
  3. Incident Response: Phantom’s automated detection, investigation, and response capabilities mean that teams can reduce malware dwell time, execute response actions at machine speed, and lower their overall mean time to resolve (MTTR).
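
As a rough illustration of what "codifying a workflow" looks like, here is a skeleton in the classic Phantom playbook style, using the phantom.act call with on_start and on_finish hooks. The action names ("geolocate ip", "block ip") and asset names are placeholders; substitute whatever actions and assets are configured in your own environment.

import phantom.rules as phantom

def on_start(container):
    # Entry point: fire an enrichment action as soon as an event arrives.
    phantom.act("geolocate ip",
                parameters=[{"ip": "203.0.113.10"}],
                assets=["maxmind"],
                callback=geolocate_done)

def geolocate_done(action, success, container, results, handle):
    if not success:
        phantom.debug("Geolocation failed; leaving the event for an analyst.")
        return
    # On success, chain a containment action at machine speed.
    phantom.act("block ip",
                parameters=[{"ip": "203.0.113.10"}],
                assets=["firewall"])

def on_finish(container, summary):
    phantom.debug("Playbook finished: {}".format(summary))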

The above features of Phantom allow contractors to home in on their ability to respond to incidents.

By using Phantom’s workbooks, you’re able to put playbooks into reusable templates, as well as divide and assign tasks among members and document operations and processes. You’re also able to build custom workbooks as well as use included industry-standard workbooks. This is particularly useful for Level 5 contractors as a focus of Level 5 is the standardization of your cybersecurity operations.

TekStream and Splunk’s CMMC Solution

With TekStream and Splunk’s CMMC Solution, our approach is to introduce as much automation as possible to the security & compliance alerts & response requirements of Levels 4 and 5.

Leveraging Splunk Phantom, we’re able to introduce important automation and workbook features to standardize processes, free up time, and make the process of handling, verifying, and testing incident responses significantly more manageable.

If you’d like to talk to someone from our team, fill out the form below.

Troubleshooting Your Splunk Environment Utilizing Btool

By: Chris Winarski | Splunk Consultant

Btool is a utility created and provided within the Splunk Enterprise download, and when it comes to troubleshooting your .conf files, btool is your friend. From a technical standpoint, btool shows you the "merged" view of what the .conf files on disk contain at the time of execution. HOWEVER, this may not be what Splunk is actually using at that moment: Splunk runs off the settings held in memory, and for changes to a .conf file to move from disk into memory, the Splunk instance must be restarted (or forced to reload its .conf files). This blog focuses primarily on a Linux environment, but if you would like more information on how to go about this in a Windows environment, feel free to inquire below! Here are some use cases for troubleshooting with btool.

Btool checks what is on disk, NOT what Splunk has in memory

Let’s say you just changed an inputs.conf file on a forwarder – Adding a sourcetype to the incoming data:

The next step would be to change directory to $SPLUNK_HOME/bin directory

($SPLUNK_HOME = where you installed splunk, best practice is /opt/splunk)

Now, once in the bin directory, you will be able to use the command:

./splunk btool inputs list

This will output the settings from every inputs.conf file currently saved to that machine, resolved in precedence order, along with their attributes. This merged result is what gets written to memory when Splunk restarts, which is why the running instance must be restarted for our "sourcetype" change above to take effect. If we don't restart the instance, Splunk will have no idea that we edited a .conf file and will not use the added attribute.

The command above shows us that our change was saved to disk, but for Splunk to utilize the attribute, we still have to restart the instance.

./splunk restart

Once restarted, the merged settings that btool reports are what is in memory, and they describe how Splunk is behaving at that given time.

Creating a file with the returned Btool results

The above example simply prints the results to the console, whereas the command below creates a file in your /tmp folder containing all of the returned text for the inputs.conf files in your Splunk instance.

./splunk btool inputs list > /tmp/btool_inputs.txt

Where do these conf files come from?

When running the normal btool command above, we get back ALL the settings in all inputs.conf files for the entire instance; however, we can't tell which inputs.conf file each setting is defined in. Adding the --debug parameter shows exactly which file each setting comes from.

./splunk btool inputs list --debug

Organizing the Btool output for legibility

When the long list of conf settings prints out, everything seems smashed together. Using the 'sed' command, we can pretty it up a bit with some simple regex, indenting every line that does not start a new stanza.

./splunk btool inputs list | sed 's/^\([^\[]\)/   \1/'

There are many other useful ways to utilize btool, such as incorporating it into scripts. If you would like to know more about how to utilize btool in your environment, contact us today!

How to Leverage a Bring-Your-Own-License Model on Oracle Cloud Infrastructure and Amazon Web Services

It’s no secret that Oracle licensing can be complicated. Between the never-ending legal jargon, Core calculations, and usage analysis, Oracle licensing can get complex. Often, it’s navigating these licenses, not the underlying technology, that can halt even the most well-intentioned Oracle Cloud migration efforts.

In this blog post, we’re going to take a closer look at how you can leverage your existing Oracle license to support your Oracle Cloud migration efforts to either Oracle Cloud Infrastructure (OCI) or Amazon Web Services (AWS).

What is a Bring Your Own License Model, Anyway?

Simply put, Bring Your Own License (BYOL) is a licensing model that lets you utilize your current on-premises Oracle license to support your Oracle migration and deployment to the Cloud, often at significant cost savings.

BYOL on OCI

Is your organization leaning toward migrating your legacy Oracle system to OCI? If you have any existing Oracle software licenses for services like Oracle Database, Oracle Middleware, or Oracle Business Intelligence, you can leverage those existing licenses when subscribing to Oracle Platform Cloud Services (Oracle PaaS).

With BYOL, you can leverage existing software licenses for Oracle PaaS subscriptions at a lower cost. As an example, if you already have a perpetual license for Oracle Database Standard Edition, then you can leverage that license to purchase a cloud subscription to Standard Edition Database as a Service at a lower cost.

Total cost of ownership calculations can be complex with this option: you need to weigh your existing support costs, and the added value of a cloud-based, self-healing, self-patching solution, against the cost of buying the solution outright without BYOL. TekStream can help you evaluate these options if you are thinking about leveraging BYOL for your cloud journey.

How Do You Use Your BYOL for Oracle PaaS?

So, how exactly do you use your existing Oracle software license to support your OCI migration needs? It’s easier than you may think:

• Select specific Oracle BYOL options in the Cost Estimator to get your BYOL pricing.

• Apply your BYOL pricing to individual cloud service instances when creating a new instance of your PaaS service. BYOL is the default licensing option during instance creation for all services that support it.

For example, when creating a new instance of Oracle Database Cloud Service using the QuickStart wizard, the BYOL option is applied automatically.

Bring Your Own License to AWS

Oracle can be deployed on AWS using its compute resources (EC2). As with a standard server in your data center today, when using this migration strategy you are responsible for licensing any software running on the instances (including Oracle database, middleware, or any other software).

You can use your existing Oracle licenses to run on AWS. If you choose this licensing approach, it is important to consider a couple of supporting factors.

If you are licensing a product by processor or named users on this platform, you need to consider the Oracle core multipliers referenced in the terms and conditions of your license agreement.
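
As a rough illustration of what that math can look like: Oracle's published cloud licensing policy for authorized cloud environments (including AWS) has counted two vCPUs as one Processor license when hyper-threading is enabled. Treat the sketch below as a hypothetical starting point and confirm the multiplier against your own agreement's terms and conditions.

import math

# Hypothetical example based on Oracle's published authorized-cloud policy
# (two vCPUs = one Processor license with hyper-threading enabled).
# The multiplier in YOUR license agreement controls; verify before relying on it.
def processor_licenses_needed(vcpus, vcpus_per_license=2):
    return math.ceil(vcpus / vcpus_per_license)

print(processor_licenses_needed(16))  # 16-vCPU EC2 instance -> 8 licenses
print(processor_licenses_needed(4))   # 4-vCPU instance -> 2 licenses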

If you are using employee or user-based metrics, you can deploy solutions on AWS with little concern about these issues.

Many Oracle Unlimited and Enterprise License Agreements do not allow usage in AWS. If you are using one of these options for your Oracle licensing, we would recommend reviewing your contracts carefully before deploying these Oracle licenses on AWS.

Is the BYOL Licensing Model Right for You?

Regardless of which Cloud platform you choose (AWS or OCI), a Cloud migration is the perfect opportunity to reexamine your Oracle license structure. Whether you opt for the BYOL licensing model or choose to utilize a new licensing structure, take this opportunity to identify ways to reduce the cost of your overarching licensing structure.

Learn about alternative licensing models by downloading our free eBook, “A Primer on Licensing Options, Issues, and Strategies for Running Oracle CPU-based Licenses on Cloud.”

Need help? TekStream can help demystify the Oracle licensing process. We provide straightforward counsel and, most importantly, identify cost-saving opportunities while still maintaining full licensing compliance.

If you’d like to talk to someone from our team, fill out the form below.

Migrating Your Enterprise-Level Oracle Solution to the Cloud? Key Benefits and Drawbacks of Amazon Web Services and Oracle Cloud Infrastructure

There are a plethora of cloud platforms available to Enterprise-level companies that are exploring options for migrating their current Oracle solution to a cloud environment. While we won’t name them all, a typical shortlist is going to include platforms familiar to us all: Google Cloud Platform, Microsoft Azure, Amazon Web Services, and Oracle’s own Oracle Cloud Infrastructure.

In this blog post, we’re going to break down some of the key benefits and drawbacks of migrating your Oracle solution to two of the giants in the industry – Oracle Cloud Infrastructure (OCI) and Amazon Web Services (AWS).

Key Benefits and Drawbacks of OCI

As a cloud-based platform, OCI offers several essential benefits common to cloud environments, including:

  • More streamlined performance
  • Automatic software updates
  • Scalability
  • Disaster Recovery

Oracle utilizes some of the most advanced technologies to deploy its fully autonomous and scalable Autonomous Data Warehouse and Autonomous Transaction Processing for data warehousing and OLTP workloads, respectively. These technologies support the more advanced Oracle database features such as RAC, Data Guard, Redaction, Encryption, etc.

Another core benefit of choosing OCI: traditional data maintenance and migration utilities like GoldenGate, Data Guard, and RMAN are supported on the Database as a Service offering.

So, how does it differ from AWS? A key differentiator is that the autonomous and advanced features of Oracle databases are only available on OCI.

It can’t be all benefits though; OCI also has its specific drawbacks. Chiefly, OCI services tend to have a high licensing cost, which can make OCI cost-prohibitive for small to medium workloads – and by extension – small and medium businesses.

Also, OCI’s lack of a live chat feature with skilled support personnel can mean a frustrating troubleshooting experience for companies making the migration to OCI.

Key Benefits and Drawbacks of AWS

As a longstanding leader in the Cloud technology space, AWS has built a strong reputation as a trusted cloud partner for thousands of Enterprise companies. Plus, it has one of the most robust cloud-based offerings on the market through the AWS ecosystem.

When it comes to supporting Oracle on a cloud environment, AWS has integrated Oracle databases as part of its main Relational Database Service (Amazon RDS) offering. Amazon RDS is provided as part of the managed service and includes a reasonably comprehensive list of features that complement the base functionality of Oracle.

These features include:

  • Additional monitoring and metrics
  • Managed deployments for software patching and push-button scaling
  • Automated backup

AWS also provides an opportunity for companies to review their Enterprise Edition license, as it delivers similar technologies to Oracle’s Tuning and Diagnostic Packs as part of the base license.

So, what are the drawbacks of using AWS to support your Oracle Cloud migration? The most critical disadvantage is that it can be difficult and expensive to run some of the more robust features found in Oracle Enterprise Edition, including Data Guard, Management Packs, and Advanced Security. Keep this in mind if you are using these additional features.

AWS or OCI, Which Is Right for You?

There is no single right answer. Both platforms have their advantages and their drawbacks when it comes to supporting your business’s cloud-based Oracle needs. The “right” platform will be the one that best supports your specific business criteria.

If at any time you have questions concerning your specific cloud migration needs, please reach out to TekStream. Our team of Oracle experts has years of proven experience navigating the cloud-migration needs of our partners.

We also encourage you to download our eBook, "Taking Oracle to the Cloud: Key Considerations and Benefits for Migrating Your Enterprise Oracle Database to the Cloud" for even more information on how best to approach an Oracle cloud migration.

If you’d like to talk to someone from our team, fill out the form below.

TekStream Solutions Makes Inc. Magazine’s Best Workplaces 2020 List

TekStream has been named to Inc. magazine’s annual list of the Best Workplaces for 2020. Hitting newsstands May 12 in the May/June 2020 issue, and as part of a prominent Inc.com feature, the list is the result of a wide-ranging and comprehensive measurement of private American companies that have created exceptional workplaces through vibrant cultures, deep employee engagement, and stellar benefits.

This year, more than 370,000 employee surveys were distributed to over 2,500 companies for the 2020 Best Workplaces award. "With 99% of our employees stating they were engaged in their work, we are unbelievably proud of the culture we've built here at TekStream. Further, being one of four medium-sized businesses in the State of Georgia to make the list, we are honored for this achievement given the immense talent and number of companies in Metro Atlanta and the surrounding areas," said TekStream Chief Executive Officer, Rob Jansen.

Collecting data from more than 2,500 submissions, Inc. singled out 389 finalists for this year's list. Each nominated company took part in an employee survey, conducted by Quantum Workplace, on topics including trust, management effectiveness, perks, and confidence in the future. Inc. gathered, analyzed, and audited the data, then ranked all the employers using a composite score of survey results. This year, 73.5 percent of surveyed employees were engaged by their work.

“We are always proud of any award received for TekStream’s accomplishments, but being recognized on the 2020 Inc. Best Workplaces list is especially noteworthy as it is reflective of the diverse team we have assembled and the positive experience they are having in executing Recruiting and Technology Deployment solutions. We look forward to continuing to build an environment and culture that is deserving of future recognition,” said TekStream Executive Vice President of Talent Management and Recruiting Services, Mark Gannon.

At TekStream, our culture is built on the following values:

  • Simply put, we’re a family
    • We’re a team consisting of people who are passionate about understanding business needs and driving results. We’re innovators, executors, strategizers, builders, learners, and competitors.
  • We play to win
    • We emphasize working with a sense of urgency and value high performance. We work hard and play hard.
  • Teamwork makes the dream work
    • We seek to inspire, uplift, and ignite the fire within each employee. We value teamwork and recognition for going above and beyond. We empower employees to stretch and grow.
  • Excellence drives us forward
    • We are specialists in what we do and subsequently bring a level of expertise that is second-to-none in the industry.
  • Honestly, it’s the right way or not at all
    • We’re firmly grounded in fundamental, honest business ethics. We’re big believers in transparency; we tell it how it is.

"Impressive revenue growth is certainly important, but it's a real honor to have the employees voice that TekStream is one of the best places to work. Whether we're a 3-person company in 2011 or a 300-person company in 2021, it's the core values and the employees that keep us heading in the right direction," stated Executive Vice President of Sales, Judd Robins.

"TekStream is a great place to work. There are many opportunities provided, from personal development to team bonding to giving back to the community. I feel like I have a lot of flexibility in my job and that personal/family time is valued. I also feel like individual opinions and contributions are valued. Like any organization, we do have some challenges. However, the company does an excellent job of listening to concerns and challenges and working to come to solutions, which is imperative to a successful organization," said one employee surveyed.

Need help on a project? Contact us today!

WFR(ee) Things A Customer Can Do To Improve Extraction

By: William Phelps | Senior Technical Architect

When using Oracle Forms Recognition ("OFR") or WebCenter Forms Recognition ("WFR") with the Oracle Solution Accelerator or Inspyrus, clients often engage consulting companies (like TekStream) to fine-tune the extraction of invoice data.  Depending on the data to be extracted from the invoice, the terms "confidence", "training", and "scripting" are often used in discussing and designing the solution.  While these techniques justifiably have their place, they may be overkill in many situations.

Chances are, if you are reading this article, you are already using WFR, but the extraction isn't as good as desired.  You may have been using it for quite a while, with less-than-optimal results.

In reality, there are several no-cost options that a customer can (and should) pursue before considering ANY changes to a WFR project file or bringing in consulting.  Call it the "don't step over a dollar to pick up a dime" approach.  Many seemingly impossible extraction issues are purely data-related, and in all likelihood these basic steps will be needed anyway as part of any solution.  There is a much greater potential return on investment in simply doing the boring work of data cleanup before engaging consulting.

The areas for free improvement should begin by answering the following questions:

  1. Does the vendor address data found in the ERP match the address for the vendor found on the actual invoice image?
  2. Is the vendor defined in the ERP designated as a valid pay site?
  3. In the vendor data, are intercompany and employee vendors correctly marked/identified?
  4. Do you know the basic characteristics of a PO number used by your company?
  5. Are the vendors simply sending bad quality invoice images?

Vendor Address Considerations

The absolute biggest free boost a customer can get is to actually look at the invoice image for the vendor and compare the address found on the invoice to the information stored in the ERP.  WFR looks at the zip code and address line as key information points, and mismatches in the ERP data will lower the extraction success rate.  This affects vendor extraction for both PO and non-PO invoices.

To illustrate this point at a high level, let's use some basic data tools found within the Oracle database.  The "utl_match" package gives a basic feel for how seemingly minor string differences can affect similarity calculations.

Using utl_match.edit_distance_similarity in a simple query, two strings can be compared as to how similar the first string is to the second.  A higher return value indicates a closer match.

  • This first example shows the result when a string (“expresso”) is compared to itself, which unsurprisingly returns 100.

  • Changing just one letter can affect the calculation in a negative direction. Here, the second letter of the word is changed from an “x” to an “s”.  Note the decrease in the calculation.

  • The case in the words can matter to a degree as well for this comparison. Simply changing the first letter to uppercase will result in a similar reduction.

  • Using the Jaro-Winkler function, which tries to account for data entry errors, the results are slightly better when changing from "x" to "s".

Let’s now move away from theory.  In more of a real-world example, consider the following zip code strings, where the first zip code is a zip + 4 that may be found on the invoice by WFR, and the second zip code is the actual value recorded in the ERP.

In the distance similarity test, the determination is that the strings are 50/50 in resemblance.

However, Jaro-Winkler is a bit more forgiving.  It still registers a difference, but scores the two values as much closer.
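
If you would like to experiment with these comparisons outside the database, here is a rough Python sketch of the same idea.  The normalization mirrors the documented utl_match behavior (similarity as a 0-100 percentage of the longer string), but treat the exact scores as approximate rather than as Oracle's implementation.

def edit_distance(s1, s2):
    # Classic Levenshtein distance via dynamic programming.
    prev = list(range(len(s2) + 1))
    for i, c1 in enumerate(s1, start=1):
        curr = [i]
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            curr.append(min(prev[j] + 1,         # deletion
                            curr[j - 1] + 1,     # insertion
                            prev[j - 1] + cost)) # substitution
        prev = curr
    return prev[-1]

def edit_distance_similarity(s1, s2):
    # Normalized 0-100 similarity, in the spirit of utl_match.
    if not s1 and not s2:
        return 100
    return round(100 * (1 - edit_distance(s1, s2) / max(len(s1), len(s2))))

print(edit_distance_similarity("expresso", "expresso"))   # 100: identical strings
print(edit_distance_similarity("expresso", "espresso"))   # drops with one letter changed
print(edit_distance_similarity("30309-4136", "30309"))    # 50: zip+4 vs. plain zip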

The illustrations above are purely representative and do not reflect the exact process used by WFR to assign “confidence”.  However, it’s a very good illustration to visually highlight the impact of data accuracy.

The takeaway from this ERP data quality discussion should be that small differences in data between what appears on the invoice compared to the data found in the ERP matters.  This data cleanup is “free” in the sense that the customer can (and should) undertake this operation without using consulting dollars.

Both the Inspyrus and Oracle Accelerator implementations of the WFR project leverage a custom vendor view in the ERP.

  • Making sure this view returns all of the valid vendors is critical for correct identification of the vendor.  A vendor that is not found in this view cannot be found by WFR, plain and simple, since the WFR process collects and stores the vendor information for processing.
  • Also, be sure in this view to filter out intercompany and employee vendor records. These vendor types are typically handled differently, and the addresses of these kinds of vendors typically appear as the bill-to address on an invoice.  Your company address appearing multiple times on the invoice can lead to false positives.
  • In EBS, there is a concept of “pay sites”. A “pay site” is where the vendor/vendor site combination is valid for accepting payments and purchases.  Be sure to either configure the vendor/vendor site combination as a pay site, or look to remove the vendor from the vendor view.

PO Number Considerations

On a similar path, take a good look at your purchase order number information.  WFR operates on the concept of looking for string patterns that may/may not be representative of your organization’s PO number structure.  For example, when describing the characteristics of your company’s PO numbers, these are some basic questions you should answer:

  • How long are our PO numbers? 3 characters? 4 characters? 5 or more characters? A mix?  What is that mix?
  • Do our PO numbers contain just digits? Or letters and digits? Other special characters?
  • Do our PO numbers start with a certain sequence? For example, do our PO numbers always start with 2 random letters? Or two fixed letters like “AB”? Or three characters like “X2Z”?

Answering this seemingly basic set of questions allows WFR to be configured to consider only the valid combinations.

  • By discarding the noise candidates, better identification and extraction of PO number data can occur.
  • More accurate PO number extraction can lead to more efficient line-item data extraction, since the PO data from the ERP can be leveraged and paired, and to better vendor extraction, since the vendor can be set based on the PO number.

Avoid trying to be too general with this exercise.  Trying to cast too wide of a net will actually make things worse.  Simply saying “our PO numbers are all numbers 7 to 10 digits long” will result in configurations that pick up zip codes, telephone numbers, and other noise strings. If the number of variations is too many, concentrate on vendors using the 80/20 rule, where 80% of the invoices come from 20% of the vendor base.
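
As a quick illustration of the point, suppose the answers to the questions above were "two fixed letters 'AB' followed by exactly six digits" (a purely hypothetical scheme).  A few lines of Python show why that tight pattern beats a loose "7 to 10 digits" rule:

import re

TIGHT = re.compile(r"^AB\d{6}$")     # hypothetical company PO scheme
LOOSE = re.compile(r"^\d{7,10}$")    # "any 7-10 digit number"

# A real PO, a zip+4 (digits only), a phone number, and a malformed PO.
for candidate in ["AB123456", "303094136", "4045551234", "AB12345"]:
    print(candidate,
          "tight:", bool(TIGHT.match(candidate)),
          "loose:", bool(LOOSE.match(candidate)))
# The zip and the phone number match the loose pattern but not the tight one,
# which is exactly the noise that drags extraction accuracy down.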

General Invoice Quality

Now, one might think “I cannot tell the vendor what kind of invoice to send.”  That’s not an accurate statement at all.  If explained correctly, and provided with a proper incentive, the vendor will typically work to send better invoices.  WFR is very forgiving, but not perfect, and looking at the items in the following list will help.

  • Concentrate initially on the vendors who send in high volumes of invoices.
  • Make sure the invoices are good-quality images with no extra markings covering key data like PO numbers, invoice numbers, dates, total amounts, etc.
  • Types of marks could be handwriting, customs stamps, tax identification stamps, mailroom stamps, or other non-typed or machine-generated characters. Dirty rollers on scanners can leave a line across the image.

Hopefully, this article gives you an idea of the free things that can be done to increase the efficiency of WFR.

Want to learn more? Contact us today!