Are checklists useful in document automation evaluations? I came across yet another vendor checklist today and cringed a bit. Solution vendors try to make purchasing products easy for prospective customers. That is a noble effort, but checklists can be overused and overly simplistic.
Admittedly, most vendors of technology solutions provide checklists. We do it too (even at Parascript) because it helps us accomplish several things at once:
- Provide a list of capabilities to supply procurement staff with sufficient information for an RFP;
- Educate potential buyers on what key components are important; and (hopefully)
- Position a solution as superior to others by identifying key features that other vendors don’t provide.
When and How to Use a Checklist
But to the end user who has a bona fide need to select one option over another, are checklists that useful anymore? Maybe the answer is “yes” when a prospective customer is deciding between multiple levels of a particular vendor’s offering; think “Bronze,” “Silver” and “Gold” packages. However, using a checklist to compare one solution against another at a meaningful level is increasingly pointless. For document automation in particular, it is even more so, because to really determine the efficacy of document automation you need to understand the most important attribute of all: the precision with which it automates key document-oriented tasks. A checklist does not do a bit of good for that purpose.
Admittedly, you can use a checklist to narrow down candidate solutions, because you may have specific functional needs that do not hinge on precision. Functionality such as the availability of a Web Service for integration or support of a particular OS can easily be identified and compared. The need for a thin browser client doesn’t require precision measurement to rule a candidate in or out. Single sign-on? That’s easy.
So let’s modify the position a bit and create two categories of comparison, one that can make use of a checklist and one that requires another way to evaluate technology solutions.
Document Automation: Feature-based Evaluation
Platform Support
Most product brochures list the operating systems they support. That’s easy. Support for multiple points of capture requires identifying whether a document automation solution can fulfill your multi-channel needs, such as the ability to monitor one or more shared network directories, FTP sites, email accounts, etc. Does the solution support documents captured via a smartphone? All of these capabilities will have their variations, yet for the most part you can weigh one against the other via brochures.
Deployment Support
A harder challenge is deciphering whether a solution meets your deployment needs. This requires a deeper understanding of how the solution is architected. Is the solution cloud-friendly? What does the vendor mean by that? You’ll have to define what you mean first. If it is simply the ability to use browser-based interfaces, it is better to measure those capabilities more discretely. If you’re looking for a solution that can be hosted in Azure or AWS, almost any piece of software can be. These factors need to be defined by how you plan to use the software. Need container support? Want real multi-tenant support? Be sure you know what that means to you, and that the vendor agrees with your definition.
Export Options
Can data and documents be exported via an API or Web Service? Database? File system? These are fairly straightforward options that do not necessarily need an extra level of detail at first. However, for the APIs/Web Services, make sure you clearly define how you want to integrate and at what level of control because different vendors offer different ranges of capabilities with their integration points.
Pre-built Integrations
This is a fairly straightforward measure, or so it would seem. You can certainly ask for a list of all pre-built integrations, but how those integrations actually work, and what capabilities they provide, are another matter. Be careful to define what you really mean.
Data Verification Options
The ability to automate verifying and/or correcting data can also be determined in a straightforward manner. Apart from providing workflows and user interfaces that allow staff to review data easily and quickly, you can also verify whether the system provides capabilities such as database look-ups and “intra-document verification,” such as comparing one or more data fields to each other. There will be more detail in the various scenarios that can be supported, so it is always good practice to evaluate this capability against specific use cases.
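To make this concrete, here is a minimal sketch, in Python, of the two kinds of checks mentioned above: a reference look-up and an intra-document cross-check. The field names, vendor list and tolerance are hypothetical illustrations, not taken from any particular product.

```python
# Minimal sketch of two verification rules: a database-style look-up and an
# "intra-document" cross-check of extracted fields. All values are hypothetical.
KNOWN_VENDORS = {"Acme Supply Co", "Globex Corporation"}

extracted = {
    "vendor_name": "Acme Supply Co",
    "line_item_amounts": [100.00, 250.50, 49.50],
    "invoice_total": 400.00,
}

problems = []

# Look-up rule: the vendor must already exist in a reference list or database.
if extracted["vendor_name"] not in KNOWN_VENDORS:
    problems.append("vendor_name not found in vendor master list")

# Intra-document rule: the line items must sum to the stated invoice total.
if abs(sum(extracted["line_item_amounts"]) - extracted["invoice_total"]) > 0.01:
    problems.append("line items do not add up to invoice_total")

print("route to manual review" if problems else "passes automated verification", problems)
```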
Measurement-based Evaluation
Next we get to the level of capabilities that really cannot be compared without actually testing the software in a more scientific manner.
Document Classification and Separation
It is fine to identify whether a vendor provides automated classification and whether it can do so based upon textual data or visual elements such as logos. Rules-based classification can also be a checklist item. However, identifying that a vendor offers document classification is only the beginning. To determine whether the system can deliver the level of automation you require, you need to test it with your own labeled data. Why? Because vendors implement document classification and separation in many different ways. One vendor might use a single classification technology while another may blend several. Unfortunately, you cannot compare a list of algorithms.
So in this scenario, you need documents that cover your specific use case and the output you require. For example, if you have a mortgage document automation use case, select several document types, each with several hundred examples, along with their true classifications. Run these through the vendor system and compare your expected results with what the system produced. Only this level of testing will reveal the real capability of each system.
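As an illustration only, the scoring step of that exercise can be as simple as the following Python sketch, which compares a system’s predicted classes against your labeled ground truth and reports per-type and overall accuracy. The document IDs and class names are hypothetical placeholders for your own test set.

```python
# Minimal sketch: compare a vendor system's predicted document classes against
# your labeled ground truth for a mortgage use case. All values are hypothetical.
from collections import defaultdict

# (document_id, true_class, predicted_class) gathered from your test run
results = [
    ("doc_0001", "deed_of_trust", "deed_of_trust"),
    ("doc_0002", "promissory_note", "deed_of_trust"),
    ("doc_0003", "appraisal_report", "appraisal_report"),
    # ... several hundred examples per document type
]

totals = defaultdict(int)
correct = defaultdict(int)
for doc_id, truth, predicted in results:
    totals[truth] += 1
    if predicted == truth:
        correct[truth] += 1

for doc_type in sorted(totals):
    accuracy = correct[doc_type] / totals[doc_type]
    print(f"{doc_type}: {accuracy:.1%} ({correct[doc_type]}/{totals[doc_type]})")

overall = sum(correct.values()) / sum(totals.values())
print(f"overall accuracy: {overall:.1%}")
```

Running the same labeled set through each candidate system and comparing these numbers tells you far more than any feature list.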
Data Extraction
The precision with which a system can locate and reliably extract data is crucial to any document automation endeavor, yet most enterprises stop at asking whether the system supports structured or unstructured data extraction scenarios. Questions such as “does the system support regular expressions?” are worthless until you actually put a system through its paces.
Simply asking about the supported extraction techniques does not provide adequate insight into whether a system can meet your needs. We have seen many prospective customers take two systems, import documents and then compare the amount of output. If System A provides more data, it must be the better solution. Wrong. Just as with document classification, you need hundreds if not thousands of samples along with the actual data values. Let each system extract the data and then compare the results against what the output should have been to really determine the best solution.
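A minimal sketch of that comparison, assuming you have labeled truth values for each field: rather than counting how much data a system outputs, score it field by field against the known values. The invoice fields and values below are hypothetical stand-ins for your own samples.

```python
# Minimal sketch: score extraction quality field by field against known truth
# values instead of measuring output volume. All values are hypothetical.
truth = {
    "inv_0001": {"invoice_number": "INV-1001", "total": "1,250.00", "date": "2021-03-15"},
    "inv_0002": {"invoice_number": "INV-1002", "total": "87.40", "date": "2021-03-16"},
}
extracted = {
    "inv_0001": {"invoice_number": "INV-1001", "total": "1,250.00", "date": "2021-03-18"},
    "inv_0002": {"invoice_number": "INV-1002", "total": "87.40"},  # date missed entirely
}

checked = correct = 0
for doc_id, fields in truth.items():
    for field, expected in fields.items():
        checked += 1
        if extracted.get(doc_id, {}).get(field) == expected:
            correct += 1

print(f"field-level accuracy: {correct}/{checked} = {correct / checked:.1%}")
```

Here a system that returned more fields but got the dates wrong would score lower than one that returned fewer, correct fields, which is exactly the distinction a volume comparison misses.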
Thresholding
Now we get into the swamp of evaluating systems: namely, the ability to tell good data from bad. Without this ability, you get zero unattended automation. We regularly work with enterprises that spent a lot of money to implement “advanced capture” yet must still run all of their data through manual verification. Any system can output data, but the ability to reliably identify accurate data is not common. While many vendors state that they provide confidence scores, what is often unknown is whether those scores are reliable enough to enable straight through processing.
The way to test whether you can achieve unattended automation at any level is to take your document classification or data extraction results along with the corresponding confidence scores and order them by those scores. Then identify whether there is a specific score that can be used as a threshold to govern when data can be considered accurate or inaccurate. If there are too many errors above that threshold (typically more than 3%), you will need to manually verify all of your data.
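Sketched below, under the assumption that you have each result’s confidence score and a flag for whether it was correct, is one way to run that exercise in Python: sort by score and look for the lowest threshold whose error rate stays within your target (3% here). The sample scores are hypothetical; in practice you would use hundreds or thousands of scored results.

```python
# Minimal sketch of the thresholding exercise: sort results by confidence and
# find the lowest score at which everything at or above it stays within a
# target error rate. The (confidence, is_correct) pairs are hypothetical.
records = [
    (0.99, True), (0.97, True), (0.95, True), (0.93, False), (0.91, True),
    (0.88, True), (0.85, True), (0.82, False), (0.78, True), (0.61, False),
]

MAX_ERROR_RATE = 0.03
records.sort(key=lambda r: r[0], reverse=True)

best_threshold = None
errors = 0
for count, (score, is_correct) in enumerate(records, start=1):
    if not is_correct:
        errors += 1
    if errors / count <= MAX_ERROR_RATE:
        best_threshold = score  # lowest score so far that keeps errors within target

if best_threshold is None:
    print("No usable threshold: all data would need manual verification.")
else:
    automated = sum(1 for score, _ in records if score >= best_threshold)
    print(f"threshold {best_threshold:.2f} automates {automated}/{len(records)} items")
```

The fraction of items at or above the usable threshold is your realistic straight through processing rate; if that fraction is tiny, the vendor’s confidence scores are not doing much for you.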
Unmeasurable Features
This last category unfortunately receives the most attention because it is often the domain of artificial intelligence and machine learning. The reality is that it doesn’t matter whether a vendor employs something like deep learning or natural language processing (NLP) if you cannot directly associate these technologies with a specific use case and understand whether they will help you. You have to ask questions about the outcomes they deliver that benefit your organization, such as: “How will NLP help me with my document classification needs?” and “What will deep learning do to improve straight through processing?”
What Do You Achieve with These Capabilities?
It is all about what you want to achieve with these capabilities. Think of them as inert until they are applied to a given task: they are meaningless until you apply a use case with its associated requirements and then measure the results. All too often, vendors hype up the AI quotient without providing concrete information on how it actually helps an organization other than “it learns.” Focus on the end result, not on whether a system uses the latest machine learning algorithm.
###
If you found this article interesting, you may find this ebook useful: Data Science with Advanced Capture.