Reframing Deep Learning

Thank you to Deep Analysis for sharing their Analyst Notes on our site. Deep Analysis provides information and process management research and advisory services.

By Dan Lucarini / June 1, 2021

Once hailed as the great hope of AI and our liberator from human toil, deep learning seems to be losing its luster these days. As deep learning moves rapidly from its origins as a cool science project and ultimate geek tool into our daily work and personal lives, many are concerned about the dark side of this technology. Deep learning is sometimes called the “black box” of AI: a closed and mysterious system of neural networks that issues decisions without accountability or auditability. This cloak of secrecy and unknowability quite frankly scares a lot of people.

At Deep Analysis (no relation at all to “deep” learning), we spend our days and sometimes evenings talking about AI and deep learning with technologists and business users. In these discussions, the concerns raised are less about scary anti-social threats and more about the actual utility of deep learning for tedious, workaday processes.

There are three seemingly immutable truths about deep learning that stand in the way of its effective use in intelligent process automation (IPA). The first truth is that deep learning works best when it can learn by experience from millions of samples. A well-known example is how a computer can learn to tell whether or not a photo contains a cat. Most business processes, on the other hand, involve unstructured or semi-structured business documents of incredible variety and diversity, with nowhere near that volume of samples; in some cases, the total available sample size may only be in the tens of thousands. This leads some IPA and cognitive capture vendors to argue that deep learning will never be commercially viable for applications such as invoice processing, mortgage loan files, or contract intelligence. Better, the argument goes, to stick with machine learning.

The second truth is that deep learning can quickly run up enormous compute costs. There is a reason why Google, Microsoft, Amazon, and IBM were the first to market with deep learning algorithms the rest of us could rent for our projects: the former three are members of the trillion-dollar market cap club, while the latter bet the company on Watson. They can all afford the vast computing and data acquisition bills needed to generate their deep learning models. Who else can?

The third truth is that you need the cleverest data scientists and programmers to create deep learning models. With the cost of a data scientist skyrocketing, this again seems like a game at which only the largest and wealthiest companies can hope to compete.

Not true any longer, on all three counts. In the past two months, we have spoken with several small software vendors who bring deep learning power into intelligent process automation. These innovative companies have demonstrated to us their pre-trained models for common business documents such as healthcare claims forms, lending documents, invoices, contracts, and general company records, using neural networks that train on as few as 50,000 samples. The models are handed to end users, who then have a running head start on training with their own samples. As the models run in production, the learning is further refined and the lessons are applied to the next batch.
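
To make the pattern concrete, here is a minimal sketch of this kind of transfer learning. The vendors' actual stacks, models, and document labels were not disclosed in our briefings, so everything below is an illustrative assumption: we use the open-source Hugging Face transformers library, a generic BERT backbone, and made-up document labels and snippets. A model pre-trained on general text gets a new classification head and is fine-tuned on a comparatively small set of labeled documents:

    # Minimal transfer-learning sketch (illustrative; library, model, labels,
    # and samples are our assumptions, not a specific vendor's stack).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    LABELS = ["invoice", "claim_form", "contract", "lending_doc", "company_record"]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(LABELS)
    )

    # Hypothetical customer samples: thousands, not millions, because the
    # pre-trained backbone already encodes general language structure.
    texts = ["INVOICE #4411  Amount due: $1,200  Net 30",
             "CLAIM FORM  Patient ID 88-302  Date of service 05/12/2021"]
    labels = torch.tensor([0, 1])  # indices into LABELS

    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    model.train()
    for _ in range(3):  # a few passes over the small labeled set
        loss = model(**batch, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

The point of the pattern is that the expensive learning happened upstream at pre-training time; the customer's labeled documents only need to teach the model the last mile of the classification task.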

That takes care of the hurdles of the enormous data set and the colossal cost. What about the huge skill-set hurdle? The software we’ve seen is, in each case, user-friendly enough to be operated by a business analyst with no data science training and no programming skills. One company refers to its users as “data shepherds”: anyone capable of labeling and tagging unstructured data, whether records managers, subject matter experts of all kinds, data privacy managers, business analysts, compliance officers, or legal staff. Another company created what we’re tempted to call “Deep Learning for Dummies,” with step-by-step instructions that walk a novice user through building a sustainable AI model to sort the scanned mail or something equally prosaic.

Move over, machine learning: here come the deep learning algorithms. We predict that deep learning models will disrupt the status quo of document classification over the next 12 to 24 months, as customers discover that they can train an AI classifier with as few as five samples and deploy it in a matter of hours, without the need for Amazon, Google, Microsoft, or IBM, and without the massive compute costs and data sets traditionally associated with deep learning. Time will tell if we are right or not, but change is on the horizon.
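
How can five samples possibly be enough? One plausible recipe (our illustration, not a specific vendor's method) skips gradient training entirely: embed the handful of labeled documents with a pre-trained encoder, average the embeddings per class, and assign each new document to the nearest class centroid. A sketch, assuming the sentence-transformers library and made-up snippets:

    # Few-shot classification sketch (illustrative assumptions throughout:
    # the encoder, the labels, and the sample snippets are all made up).
    import numpy as np
    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small pre-trained encoder

    # A handful of labeled snippets per class stands in for the "five samples."
    few_shot = {
        "invoice": [
            "INVOICE #9203  Amount due: $1,200  Terms: Net 30",
            "Remit payment to Acme Corp. Invoice total: $480.00",
        ],
        "contract": [
            "This Agreement is entered into by and between the parties below.",
            "The term of this Contract shall commence on the effective date.",
        ],
    }

    # One centroid per class: the mean embedding of its labeled samples.
    centroids = {label: encoder.encode(texts).mean(axis=0)
                 for label, texts in few_shot.items()}

    def classify(document: str) -> str:
        """Return the label whose centroid is nearest (cosine) to the document."""
        vec = encoder.encode([document])[0]
        return max(centroids, key=lambda label: float(
            np.dot(vec, centroids[label])
            / (np.linalg.norm(vec) * np.linalg.norm(centroids[label]))))

    print(classify("Please find attached invoice #771 for services rendered."))

If something like this is behind the products we saw, the hours-not-months deployment claim is easy to believe: all of the heavy, costly learning happened when the encoder was pre-trained, long before the customer showed up.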

