Sanjay Srivastava, CommunityVoice
The next time you finish your commute home, think for a moment. Do you remember the color of the traffic lights you saw on your way?
Most people don’t retain this level of detail. In fact, our brains receive up to 11 million inputs per second but can process only around 40. The human mind has evolved to execute the right thinking at the right time — bite-size chunks of cognitive decision-making that address life’s routine tasks.
Our most advanced artificial intelligence (AI) systems don’t match the human brain’s processing power. But like our brains, they rely on highly specialized, cognitive algorithms that are brought to bear at the right moment for the parts of the process value chain they automate. Just as our brains subconsciously handle the subtasks required to navigate an intersection, these specialized AI services form the building blocks of AI applications.
At the highest level, enterprise AI applications are made from granular algorithms orchestrated together. These algorithms are built by training compute engines with data and contextualized domain expertise. Increasingly, many of these algorithms will be pre-trained for specific tasks using labeled data, ultimately increasing efficiency, reducing risks and accelerating widespread AI adoption.
Enterprises need AI informed by context.
CXOs at large enterprises around the world say that AI adoption is a question of when — not if. According to a recent AI 360 survey, a quarter of senior executives plan to fundamentally reimagine their business model or significantly transform their business processes using AI by the end of 2021.
However, a few hurdles to AI deployment remain, including where to start, what to build and how to execute. For most enterprises, large horizontal AI platforms alone will not deliver the results they seek.
By definition, AI compute engines are horizontal, scalable and use-case agnostic; to be effective, they must be tuned to specific scenarios. An AI solution that reports adverse events for pharmacovigilance is different from one that predicts risk in a lending portfolio. Both can use the same core AI platform, but they are trained with different datasets, semantic understanding, process knowledge and associated ontologies.
In fact, AI applications deliver value only when contextualized to and trained with relevant data sets from the use case they are automating.
Picture a layer of fertile soil from which many tall trees grow. The soil is like an AI compute engine — the base from which more specialized AI applications can emerge. The trees are like pre-trained AI accelerators: one elm tuned to a specific use case, another fir trained on data labeled in a specific domain’s context. As each accelerator is tuned more closely to its use case, the trees grow taller and stronger. AI applications are the forest: the combined value of the trees.
AI will become increasingly modular.
Almost all technologies eventually evolve into more modular components. Take the earliest computers: They were monolithic with limited options. Today, your teenage child can build their own computer from memory chips and graphics processors, stacking them like Lego blocks.
Software is also increasingly modularized. Most cloud-native applications are built as microservices — constructed from interchangeable building blocks that can be separately optimized and updated. This type of architecture makes applications more flexible, scalable and resilient and more open to innovation.
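The interchangeable-building-block idea can be made concrete with a small sketch. Here, two hypothetical components implement the same interface, so either can be optimized or replaced without touching the code that uses them; all names are illustrative, not a real library's API.

```python
# A minimal sketch of interchangeable building blocks: two components
# implement one shared interface, so either can be swapped in without
# changing the caller. All names here are hypothetical.
from typing import Protocol


class TextExtractor(Protocol):
    def extract(self, document: str) -> str: ...


class SimpleExtractor:
    """Baseline implementation: returns the document unchanged."""
    def extract(self, document: str) -> str:
        return document


class UppercaseExtractor:
    """Alternative implementation, optimized and updated independently."""
    def extract(self, document: str) -> str:
        return document.upper()


def run_pipeline(extractor: TextExtractor, document: str) -> str:
    # The caller depends only on the interface, not on any implementation.
    return extractor.extract(document)


print(run_pipeline(SimpleExtractor(), "invoice #123"))     # invoice #123
print(run_pipeline(UppercaseExtractor(), "invoice #123"))  # INVOICE #123
```

Because the caller sees only the interface, each block can be versioned and deployed on its own schedule — the property that makes microservice-style architectures flexible and resilient.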
In the same fashion, enterprises can use modular pre-trained AI accelerators to quickly assemble the components of end-to-end processes, automating complex decisions. For example, accounts payable (AP) remains a highly labor-intensive and manual workflow, due to variation in invoice formats, complex decisioning in the approval process and widespread use of unstructured data recorded on paper. When large enterprises reengineer AP processing, they can use multiple pre-trained accelerators to speed up AI deployment. For example, one pre-trained accelerator might disambiguate tables and extract information from paper invoices, while another turns unstructured data into structured data sets.
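The AP workflow above can be sketched as a composition of accelerators, each handling one step. The two "accelerators" below are stand-in stubs, not real trained models — the point is only the shape of the pipeline.

```python
# A hedged sketch of the AP idea: each "accelerator" is a small,
# separately built step, and the workflow is just their composition.
# Both steps here are illustrative stubs, not real models.
from typing import Callable


def extract_invoice_fields(raw_text: str) -> dict:
    """Stand-in for an accelerator that pulls fields from a paper invoice."""
    lines = [line for line in raw_text.splitlines() if ":" in line]
    return dict(line.split(":", 1) for line in lines)


def normalize_fields(fields: dict) -> dict:
    """Stand-in for an accelerator that structures the extracted data."""
    return {k.strip().lower(): v.strip() for k, v in fields.items()}


# Compose the accelerators into an end-to-end AP step.
PIPELINE: list[Callable] = [extract_invoice_fields, normalize_fields]


def process_invoice(raw_text: str) -> dict:
    result = raw_text
    for step in PIPELINE:
        result = step(result)
    return result


invoice = "Vendor: Acme Corp\nAmount: 120.50\nDue Date: 2021-06-30"
print(process_invoice(invoice))
# {'vendor': 'Acme Corp', 'amount': '120.50', 'due date': '2021-06-30'}
```

Swapping in a better extraction accelerator means replacing one entry in the pipeline, leaving the rest of the workflow untouched.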
Pre-trained AI accelerators streamline development.
Pre-trained AI accelerators allow enterprises to rapidly and modularly build applications while avoiding time-consuming customization and curation. For example, invoice processing actually contains many subtasks, such as parsing a note on an invoice's detail line. The human brain performs many of these tasks almost automatically, but it’s very difficult to train one AI algorithm to perform all the subtasks without compromising accuracy.
Instead, pre-trained AI accelerators provide a separate algorithm for each subtask. One can be tuned to identify and extract information from the correct fields on a balance sheet, while another can assign scores to semantically extracted text in a footnote. Each individual accelerator is tuned to perform its specific subtask with high accuracy — and together, they work in concert to measure a lending portfolio’s risk. In essence, pre-trained AI accelerators perfect individual tasks, allowing large applications to become the sum of disaggregated tasks.
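The subtask decomposition described above can be illustrated with a toy example: one hypothetical accelerator scores balance-sheet fields, another scores footnote text, and a combiner aggregates them into a portfolio risk score. The scoring rules are placeholders, not real trained models.

```python
# A sketch of subtask decomposition: each accelerator handles one narrow
# task, and a combiner aggregates their outputs. The scoring rules below
# are illustrative placeholders, not real models.

def score_balance_sheet(fields: dict) -> float:
    """Subtask 1: score leverage from extracted balance-sheet fields."""
    leverage = fields["liabilities"] / fields["assets"]
    return min(leverage, 1.0)  # cap the contribution at 1.0


def score_footnotes(footnote: str) -> float:
    """Subtask 2: crude semantic score of footnote text."""
    risk_terms = {"litigation", "impairment", "default"}
    words = set(footnote.lower().split())
    return len(words & risk_terms) / len(risk_terms)


def portfolio_risk(fields: dict, footnote: str) -> float:
    """Combine subtask scores; each accelerator stays independently tunable."""
    combined = 0.7 * score_balance_sheet(fields) + 0.3 * score_footnotes(footnote)
    return round(combined, 3)


score = portfolio_risk(
    {"assets": 1000.0, "liabilities": 600.0},
    "Pending litigation may result in impairment charges.",
)
print(score)  # 0.62
```

Because each subtask has its own function, an enterprise can retune the footnote scorer without retraining or revalidating the balance-sheet scorer — the accuracy benefit the article attributes to disaggregated tasks.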
Explainability and high data density are also critical.
To unlock AI’s true potential, enterprises also need to consider the explainability of their applications. Otherwise, they are left with a black box, which can hinder AI adoption and pose regulatory challenges.
Implementing AI with traceability is a way to address the black box problem. For example, a commercial bank can use AI to ingest information from extensive balance sheets, convert the unstructured data to structured data and then calculate a risk score. That bank should be able to explain the score by providing traceability and visibility into the application’s reasoning path.
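One simple way to realize the traceability described above is to have every step of the scoring workflow append to an audit trail. The sketch below assumes a hypothetical record format and scoring rule; it is not a real bank's method, only an illustration of the pattern.

```python
# A minimal sketch of traceable scoring: each step records what it did,
# so the final score can be explained after the fact. The record format
# and the scoring rule are hypothetical.

def traceable_risk_score(raw_record: str) -> tuple[float, list[str]]:
    trail: list[str] = []

    # Step 1: convert the unstructured record into structured fields.
    fields = {}
    for part in raw_record.split(","):
        key, value = part.split("=")
        fields[key.strip()] = float(value)
    trail.append(f"parsed fields: {sorted(fields)}")

    # Step 2: compute the score from the structured fields.
    ratio = fields["debt"] / fields["income"]
    trail.append(f"debt-to-income ratio = {ratio:.2f}")

    score = min(ratio, 1.0)
    trail.append(f"final score = {score:.2f} (ratio capped at 1.0)")
    return score, trail


score, trail = traceable_risk_score("debt=45000, income=90000")
for step in trail:
    print(step)
```

When a regulator or customer asks why a score was assigned, the trail provides the reasoning path rather than a black-box answer.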
Data density is also important. AI works well — with fewer opportunities for bias — when it has comprehensive data sets to work with. When faced with limited data, enterprises can use tools like computational linguistics, which decipher meaning and extract data based on context, rather than mathematics. This approach can be very effective when challenged by minimal data sets.
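As a toy illustration of extracting meaning from context rather than from a trained model, a rule keyed to a trigger phrase can pull out a value with no training data at all. This pattern-based rule is a stand-in for the richer computational-linguistics tooling the article describes.

```python
# A hedged illustration of context-based extraction: a rule keyed to the
# linguistic context ("total of" / "amount of") pulls out the value that
# follows it, with no training data required. The rule is illustrative.
import re
from typing import Optional


def extract_amount(sentence: str) -> Optional[str]:
    # Context rule: a currency amount following "total of" or "amount of".
    match = re.search(r"(?:total|amount) of \$?([\d,]+(?:\.\d{2})?)", sentence)
    return match.group(1) if match else None


print(extract_amount("The invoice shows a total of $1,250.00 due in June."))
# 1,250.00
```

Rules like this trade coverage for precision, which is exactly the trade-off that helps when data is too sparse to train a statistical model without bias.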
As technology trends toward modularization, pre-trained AI accelerators — fueled by data and informed by deep, contextual expertise — could become a powerful tool for enterprises. At the same time, explainability and data density will also be important elements to consider to truly unlock AI’s potential and enact digital transformation on a global scale.