
In the drive to improve productivity and business performance, one approach with great untapped potential is to harness metadata, machine learning, and process automation together in enterprise solutions smart enough to optimize themselves.

Sound like science fiction? It shouldn't. We already have the necessary components. What we need to do is assemble them into solutions that can learn and improve with each iteration. Such solutions have powerful potential to make people better at their jobs in a multitude of ways -- by automating mundane tasks, reducing errors, and even detecting fraud, to name just a few.

Let's start with metadata, or information that describes data to make it easier to find, categorize, and use. We encounter metadata routinely without necessarily thinking about it as such. In a simple Word document, for example, you can click on "File," then "Info," to see the author, the size of the file, the number of pages, when it was created and last modified, and more. All of that information is metadata about the particular document at hand.
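To make this concrete, here is a minimal Python sketch that reads the same kind of metadata programmatically, assuming the open-source python-docx library; the file name "report.docx" is just a placeholder.

```python
# A minimal sketch of reading a Word document's metadata programmatically.
# Assumes the python-docx library; "report.docx" is a placeholder file name.
import os
from docx import Document

path = "report.docx"
props = Document(path).core_properties  # the document's built-in metadata

print("Author:       ", props.author)
print("Created:      ", props.created)
print("Last modified:", props.modified)
print("File size:    ", os.path.getsize(path), "bytes")
```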

Traditionally, metadata has involved rule-based or structured information such as filenames, keywords, character counts, and the like. Over the last several years, however, there has been an explosion of unstructured data, such as photos, videos, and sound files, that traditional metadata techniques could not describe. That has changed with the emergence of new artificial intelligence and machine learning algorithms.

Take image recognition. It is essentially impossible to write a set of deterministic rules that allows a computer to correctly identify every instance of something – dogs, cats, oranges, or whatever – in a database of random photos. For decades, computer scientists have postulated an alternative, machine learning, that would essentially mimic how humans learn: by processing vast amounts of this type of information to identify significant patterns and exceptions.

This approach was stymied for years by the need for massive compute power and data storage, but both capabilities have been transformed in the last several years. A third critical capability has also emerged rapidly: machine-learning algorithms and neural networks that can make hunches about the meaning of, say, a video or a picture, test those hunches against real-world data, and use the results to refine their assumptions – in other words, systems that become steadily smarter and more accurate. With this kind of progress, image recognition has become capable of impressive feats – for example, identifying images of dogs with high accuracy, and even identifying a dog's breed more often than not.
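To see how accessible this capability has become, here is a minimal sketch that classifies a photo with a pretrained model from the torchvision library; "photo.jpg" is a placeholder, and the model's ImageNet label set happens to include many individual dog breeds.

```python
# A minimal sketch of off-the-shelf image recognition with a pretrained
# torchvision model. "photo.jpg" is a placeholder file name.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT  # pretrained ImageNet weights
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()          # resizing/normalization the model expects
batch = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)    # turn raw scores into probabilities
top_prob, top_class = probs.max(dim=1)
label = weights.meta["categories"][top_class.item()]
print(f"{label}: {top_prob.item():.1%}")   # e.g. "golden retriever: 91.3%"
```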

This type of capability is useful in a variety of scenarios. For example, imagine you work for an elected official and your job requires you to track and record every time your boss is quoted in a news story, whether in a print publication, on a mainstream news outlet like NBC or CNN, on social media, or on any other significant platform. Besides capturing each quote or appearance, you also have to catalog the topic or substance of each one to create an accurate record of your boss's stance on any of a host of different issues.

To accomplish this today, you have to monitor dozens of news outlets and events, capture each instance, edit the text or footage to remove extraneous material, apply tags for every potential query, summarize the content, store all of this in relevant databases, archive it, and so on. That's a lot of manual work. You want to capture every relevant event, but you have no good way to know how close you're coming to that goal. And you're always under time pressure, which inevitably increases errors – and errors can have any number of negative consequences.

Instead, what if you could use image recognition software to automatically scour the airwaves and internet for text and videos of your elected official, analyze the content, create relevant metadata, then use the metadata to initiate an automatic process that executes all of the steps you've always had to take manually, from capture to archiving?
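In code, the skeleton of such a pipeline might look like the sketch below. Every name in it – the recognizer, classifier, and archive objects and their methods – is a hypothetical placeholder for real recognition, tagging, and storage services, not an actual library API.

```python
# A hypothetical sketch of the capture-to-archive pipeline described above.
# The recognizer, classifier, and archive collaborators are illustrative
# placeholders, not real library APIs.
from dataclasses import dataclass, field

@dataclass
class Mention:
    source: str        # outlet or platform where the quote appeared
    timestamp: str     # when it aired or was published
    excerpt: str       # the quote itself, trimmed of extraneous material
    topics: list = field(default_factory=list)  # tags for later queries

def process_item(item, recognizer, classifier, archive):
    """Run one captured article or clip through every step once done by hand."""
    if not recognizer.mentions_official(item):        # face/voice/name recognition
        return None                                   # not about your boss; skip it
    excerpt = recognizer.extract_relevant_span(item)  # remove extraneous material
    topics = classifier.tag(excerpt)                  # generate topic metadata
    mention = Mention(item.source, item.timestamp, excerpt, topics)
    archive.store(mention)                            # trigger downstream automation
    return mention
```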

This approach combines our three key components – metadata, machine learning, and intelligent process automation – in a solution that executes quickly, automatically, and accurately, and improves over time. It doesn't just make you faster or more efficient; it also reduces errors, improves policy compliance, and ensures that new material is discovered and captured comprehensively.

Here's another example: Imagine you're buying auto insurance and the carrier requires your driver's license as proof of identity. You hand over your license; they photocopy it, then manually extract the number, your date of birth, address, sex, height, weight, eye color, and so forth. They enter all this information as metadata in their customer records management system. That's a lot of time, effort, and opportunity for error.

Optical character recognition has been useful in such scenarios, but OCR systems get stuck if, say, the license is upside down – something that a machine-learning system can be trained to overcome. And OCR, by definition, cannot recognize pictures; by contrast, machine-learning software can easily pick out faces and even learn to identify individuals by comparing facial geometry.
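For the orientation problem specifically, a small detection step already goes a long way. Here is a minimal sketch using the open-source pytesseract wrapper around the Tesseract OCR engine; "license.jpg" is a placeholder file name.

```python
# A minimal sketch of orientation-aware OCR, assuming the pytesseract
# wrapper around Tesseract. "license.jpg" is a placeholder file name.
import re
from PIL import Image
import pytesseract

image = Image.open("license.jpg")

# Tesseract's orientation detection reports how far the page is rotated;
# a plain OCR pass would produce gibberish on an upside-down image.
osd = pytesseract.image_to_osd(image)
rotation = int(re.search(r"Rotate: (\d+)", osd).group(1))
if rotation:
    image = image.rotate(-rotation, expand=True)  # turn the image upright

text = pytesseract.image_to_string(image)
print(text)  # raw text: license number, date of birth, address, ...
```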

Alternatively, an intelligent automated system might let you use your phone to capture an image of your driver's license and email it to the insurance carrier's customer service representative. In this scenario, instead of extracting data manually, the agent would feed the image into a machine-learning-enabled system that could accurately capture all of the data on the license, verify that the license hasn't expired, and trigger an automated process to create or update a record in the relevant system. With this method, the data capture and recognition could be continuously trained and refined to reduce error rates to levels far below those of today's manual processes.

Scenarios like this only scratch the surface of what is now possible with systems that combine metadata, machine learning, and intelligent process automation. Crucially, such systems aren't static; they can be trained to act with increasing autonomy to ingest large amounts of metadata, trigger automated processes, observe and analyze the results, and further refine the underlying processes. It's this continuous interaction between the automated processes and the machine learning system that enables steady improvement and, ultimately, process optimization.

Such solutions are not yet commonplace, but they are becoming more so as the component technologies mature and – especially – as vendors drive no-code integration methods that allow business analysts to create their own solutions without IT involvement. Such self-service solutions are powerful tools for helping people get better at their jobs.

Alain Gentilhomme is the Chief Technology Officer at Nintex, the recognized global leader in Workflow and Content Automation (WCA), where he brings more than 25 years of product development, product management, and technical leadership experience.