Spring Cleaning Your Content:
Essential Content Audit Techniques and Questions
By: Seth Ely | Solutions Analyst
It happens to almost everyone. Spring cleaning season comes around and you decide it’s time to organize the garage, attic, or basement. Once you undertake this organization project, you may find items that are completely obsolete (Pentium II computer parts), items that you forgot you had (an ab roller), and items that you have been looking for but couldn’t find (your high school yearbook). Even though we own these items and most likely moved them to their current location ourselves, our knowledge and understanding of the things we are managing can be somewhat flawed. A similar dynamic is often true of organizations that attempt to employ a comprehensive content strategy or structure content that was previously unstructured.
For an Enterprise Content Management strategy to be effective, models for security, metadata, and workflow need to be created that accommodate the existing content and associated processes within an organization. However, a frequent problem in creating scalable Content Models is that the breadth and depth of the content to be managed is not fully known or understood.
In these cases, it is important to perform an Enterprise Content Audit. The audit is designed to produce a high-level list of the types of content in the organization and to capture details about how that content is used. This exercise provides direct inputs to the Content Model that will be created as part of the overall Content Strategy.
The Enterprise Content Audit can logically be broken into two main parts, Content Inventory and Content Analysis, which are described below.
Content Inventory

When looking into the types of content that a particular organization utilizes, the source systems can vary widely, from legacy content systems to shared drives to email.
It is important to have a thorough profile of the content that exists within the organization. That said, there is no single way to take inventory of the content. There is a continuum of detail, from a full inventory, to a sample inventory, to a set of disparate examples, any of which can be part of the inventory process.
The ideal is for the analyst to have access to the source systems and locations that currently house content. This allows automated processes to be used to profile the content and obtain various metrics that can inform the Content Analysis.
If the content is exposed via a consumption site, the site can be indexed with a crawler to provide information about the presentation layer for the content. If there is a legacy system, techniques such as a database dump or an export can yield the desired information. In cases where content lives on individual workstations, in email, etc., it may only be possible to gather example files.
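Where the analyst does have direct file-system access, even a short script can produce useful inventory metrics such as file counts and volume by type. The sketch below is one minimal approach, assuming a Python environment and read access to the location being profiled; the function name and output format are illustrative, not part of any specific toolset.

```python
from collections import Counter
from pathlib import Path

def profile_content(root):
    """Tally file counts and total bytes by extension under a root folder."""
    counts, sizes = Counter(), Counter()
    for path in Path(root).rglob("*"):
        if path.is_file():
            ext = path.suffix.lower() or "(none)"
            counts[ext] += 1
            sizes[ext] += path.stat().st_size
    return counts, sizes

# Example: profile the current working directory.
counts, sizes = profile_content(".")
for ext, n in counts.most_common():
    print(f"{ext}: {n} files, {sizes[ext]} bytes")
```

Even this simple profile can reveal surprises, such as large pockets of media files or obsolete formats that no stakeholder mentioned in interviews.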
The most important thing is to turn over enough rocks that the analyst has an accurate picture of the types of content that are present and can glean ancillary information about the content. This is the same process that happens when we start looking through boxes in our basements; by actually looking in the boxes, we learn things we never would have known by relying solely on our memory and perceptions.
These learnings can then be used as a framework to drive the deeper content analysis process. In the absence of this step, there is a substantial risk that rocks will be overturned only after the Models have been established, introducing substantial rework.
Content Analysis

As an output of the content inventory, there should be a high-level list of categories or groupings of content. For each of these groupings, a number of questions can help define the Content Model and other specifications for a Content Management Implementation.
Below is a sample set of questions that can be used to elicit the information needed to create the full content model (metadata, security, workflow) and other specifications for a Content Management Implementation. For each of these questions, both the as-is and the to-be states need to be taken into consideration. Some of these questions can be partially answered from the Content Inventory; others require stakeholder input.
- Where are these currently stored? (migration, integration)
- Who has to access these? (security)
- What is this content used for? What information do you use to find these? (metadata)
- How is this content currently organized? (metadata)
- Where do they go to access these? (information architecture, consumption)
- Who can edit these? (security)
- Is there an approval process for these? (workflow)
- How long do you keep these? (retention)
- How many of these currently exist? (migration)
- How many of these are created each month? (performance)
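One lightweight way to capture the answers consistently is a structured record per content grouping, with one field per question area. The sketch below is purely illustrative: the field names mirror the questions above, and the "Invoices" values are hypothetical examples, not data from any real audit.

```python
from dataclasses import dataclass, field

@dataclass
class ContentGrouping:
    """Hypothetical record capturing audit answers for one content type."""
    name: str
    current_location: str                               # migration, integration
    readers: list = field(default_factory=list)         # security: who can access
    editors: list = field(default_factory=list)         # security: who can edit
    finding_fields: list = field(default_factory=list)  # metadata used to find content
    approval_required: bool = False                     # workflow
    retention_years: int = 0                            # retention
    existing_count: int = 0                             # migration sizing
    monthly_volume: int = 0                             # performance sizing

# Hypothetical example of one completed record.
invoices = ContentGrouping(
    name="Invoices",
    current_location="finance shared drive",
    readers=["Finance", "Audit"],
    editors=["Accounts Payable"],
    finding_fields=["invoice number", "vendor", "date"],
    approval_required=True,
    retention_years=7,
    existing_count=120_000,
    monthly_volume=1_500,
)
print(invoices.name, invoices.retention_years)
```

Capturing every grouping in the same shape makes gaps obvious (an unanswered question shows up as a default value) and gives the downstream modeling exercise a uniform input.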
It makes sense to capture the analysis details on the basis of the content inventory. Once we have these details, we can begin the exercise of creating the models in the areas highlighted above. The business/functional user does not need to fully understand these concepts initially; this process creates a model that allows the system to be designed according to the functional directives expressed through the interview process. This is the most efficient and accurate way to establish requirements for a content-driven project.