The AI Technology Stack Powering Autonomous Machines & Services

Adam Fisher
Nov 28, 2016


Vertical markets such as automotive, agriculture and healthcare have long been impervious, if not indifferent, to many of the new technology trends sweeping the corporate business world. The absence of early adopters, the dependence on proprietary technology systems, and uncomfortably low product margins made these markets unappealing to the cutting-edge technology providers that could have transformed them. While each of these vertical markets is distinct, the rise of autonomous machines and services, which perform activities far more efficiently than humans ever could, will disrupt the current way of doing business in all of them, whether on the road, in the field or in the hospital. Autonomous software based on advanced neural networks and deep learning will rapidly find its way into existing business and consumer products, but its potential to catapult traditional vertical industries into the 21st century is what has us most excited. Rather than resist the inevitable, previously conservative players in non-tech verticals are beginning to engage with startups to be among the first to leverage the promise of autonomous technology.

Instead of looking at just one vertical opportunity, I prefer to focus on the broader trend of autonomous machines and services, which are generally based on the same underlying Artificial Intelligence (AI) technology stack regardless of the vertical market. Although it’s not entirely obvious when we encounter it, AI-powered services already surround us. AI powers many free services we take for granted, whether it is product and music recommendation features, voice command services, facial recognition, real-time translation or virtual digital assistants. But AI’s predictive and learning capabilities are not simply for consumer thrills. They are enlisted by law enforcement to uncover terrorist cells, by banks to detect fraud, and even by Kepler astronomers to decide where we are likely to discover Earth-like exoplanets.

It’s worth noting that the rise of autonomous machines and services is not only the culmination of decades of research into AI but also the confluence of other established technology trends of the last decade, including big data collection, mobile broadband, cloud computing and SaaS. Irrespective of application, each new autonomous service opportunity has three essential parts: Data Collection & Connectivity, Big Data Cloud, and Deep Learning/AI.

Let’s take a look at each part of this Autonomous Technology Stack.

1. Data Collection & Connectivity

Because deep learning demands massive, annotated data sets to train and learn, there is an opportunity, and increasingly a race, to map, measure and monitor everything from physical surroundings and genetic code, to business interactions and machine logs. New and proprietary methods for collecting and building such data sets are essential if autonomous machines and services are to proliferate.

Interestingly, there are already billions of powerful, connected sensors capturing real-time data generated by humans and their electronic possessions, by nature and the environment, and by computers and machines at an ever-accelerating rate. Whether it’s sequencing genomes or 3D scanning cities and molecules, we are generating incredible amounts of valuable data at negligible cost. But while the rapid commoditization and proliferation of such sensors may have enabled the autonomous age, there is still a critical need for advanced sensors to feed autonomous machines with ever more precise and detailed data.

Bessemer has made three key investments in this area: Oryx, Vayyar and Prospera Technologies. Fully autonomous vehicles will require depth sensors with a range and precision that is an order of magnitude greater than the market can provide today. Oryx uses nano-antennas to build a highly detailed visual representation of a car’s environment, as I explained in this recent post.

Vayyar developed a radar-on-a-chip, which enables super low-cost 3D imaging for a wide variety of applications, including medical, automotive, manufacturing and industrial. Vayyar’s technology enhances professionals’ ability to collect large amounts of data at a fraction of the traditional price, bringing that capability to the masses. Finally, Prospera Technologies is applying computer vision to the terabytes of image data it collects from greenhouses and crop fields. While it has not developed its own sensor, its ability to capture customer data gives the company a proprietary edge in its autonomous agronomist service.

2. Big Data Cloud

Neither big data infrastructure nor the public cloud was designed specifically to support autonomous services, but both are now critical elements in the AI technology stack. All of the sensor data described above needs to be synthesized, uploaded and ingested into large databases and distributed computing systems (Hadoop/Spark/Redshift) to identify patterns, trends, and associations. Much of this big data processing is handled by the public cloud vendors, but there are still significant challenges in curating, securing, anonymizing and sharing such large and complex data sets, especially when specific regulatory and industry needs come into play. This explains our investment in Otonomo, which collects data from different automakers to make it useful to service and application providers. And as powerful as the cloud is, a lot of AI processing is still done locally on servers and embedded devices. However, much of this will eventually shift to the cloud.
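
To make this concrete, here is a rough sketch of what that ingestion-and-pattern-finding step can look like in Spark. The bucket paths, schema and field names below are hypothetical placeholders for illustration, not any particular company’s pipeline.

```python
# Minimal PySpark sketch: ingest raw sensor readings and surface simple patterns.
# The file paths and field names are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sensor-ingest").getOrCreate()

# Load raw sensor readings (e.g., uploaded from vehicles or field cameras).
readings = spark.read.json("s3://example-bucket/sensor-readings/*.json")

# Aggregate per device and hour to spot trends and anomalies.
hourly = (
    readings
    .withColumn("hour", F.date_trunc("hour", F.col("timestamp")))
    .groupBy("device_id", "hour")
    .agg(
        F.avg("value").alias("avg_value"),
        F.stddev("value").alias("stddev_value"),
        F.count("*").alias("num_readings"),
    )
)

# Persist the aggregates for downstream model training or dashboards.
hourly.write.mode("overwrite").parquet("s3://example-bucket/aggregates/hourly/")
```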

3. Deep Learning and AI

As previously mentioned, it is the advent of deep-learning software, which attempts to mimic the numerous layers of neural activity in the brain, that has the most profound implications for AI. These artificial neural networks allow algorithms to find patterns in enormous piles of digital information and make predictions with considerable accuracy and minimal human intervention. Large data sets are used to train deep learning software, enabling it to learn and improve over time.
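
As a rough illustration of that training loop, here is a minimal Keras sketch on synthetic data; the layer sizes, labels and data are made up for the example and are not drawn from any production model.

```python
# Minimal sketch of training a small neural network on synthetic data with Keras.
import numpy as np
from tensorflow import keras

# Synthetic "sensor" data: 10,000 examples with 32 features and a binary label.
x_train = np.random.rand(10_000, 32).astype("float32")
y_train = (x_train.sum(axis=1) > 16).astype("float32")  # toy labeling rule

# A stack of layers loosely mirrors the "layers of neural activity" described above.
model = keras.Sequential([
    keras.Input(shape=(32,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Training on labeled examples is what lets the network "learn" the patterns;
# accuracy improves as more data is seen.
model.fit(x_train, y_train, epochs=5, batch_size=128, validation_split=0.1)
```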

While deep learning started in university research labs, it received a considerable boost when AI moved into the web-scale data centers of Google, Microsoft, Facebook, IBM, Baidu and Amazon. These companies have unprecedented storage capacity, computing power and the ability to fund non-commercial, forward-looking projects. More recently, each has attempted to draw developers to its own cloud platform, similar to what happened with big data infrastructure. To that end, they have released open-source deep learning libraries, adding to the dozen or so open-source frameworks developed in academia.

Of course, new applications and services that employ deep learning will still need their own proprietary algorithms tailored to their particular use case. Combined with specialized or proprietary data sets, this is where startups will develop and apply their own IP. For instance, companies that use computer vision to identify and extract detailed data from images have a unique advantage over companies that rely on limited data from a single source. Prospera Technologies is one example of a company utilizing the latest in computer vision and deep learning to deliver an autonomous agronomy solution, which helps farmers and producers extract more yield out of their land with fewer resources.

Finally, not all of the big data infrastructure in the cloud runs on standard CPU servers. The leap forward in autonomous machines and services would not have been possible without re-purposing powerful graphics processors from Nvidia, whose chips can process AI algorithms more than 10x as fast as a typical CPU. This includes both cloud-based GPUs and “on-board” GPUs. However, as AI data sets continue to grow in size and complexity, we see an opportunity on the hardware side. Just as graphics processors emerged to offload heavy video graphics rendering from generic CPUs, we expect to see dedicated AI processors enter the market to facilitate the most computationally intensive AI tasks.
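
For a sense of what offloading work to the GPU looks like in practice, here is a small PyTorch sketch that runs the same matrix multiplication on the CPU and, if one is available, on the GPU; the matrix size is arbitrary and the actual speedup will vary by hardware and workload.

```python
# Rough sketch comparing the same matrix multiplication on CPU vs. GPU with PyTorch.
import time
import torch

size = 4096
a = torch.randn(size, size)
b = torch.randn(size, size)

# CPU baseline.
start = time.time()
_ = a @ b
cpu_seconds = time.time() - start

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()   # make sure the copy to the GPU has finished
    start = time.time()
    _ = a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the GPU kernel to complete
    gpu_seconds = time.time() - start
    print(f"CPU: {cpu_seconds:.3f}s, GPU: {gpu_seconds:.3f}s")
else:
    print(f"CPU: {cpu_seconds:.3f}s (no GPU available)")
```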

Autonomous machines and services are already a part of our lives, but will only grow in importance and sophistication over the next decade. As is typical in large, secular technology trends, there are multiple enablers propelling growth forward: in this case, data collection & connectivity, big data infrastructure & cloud, and AI/deep learning frameworks. Few markets will be immune to the coming wave, particularly because AI-powered solutions are finally within reach of startups.

Bessemer is eager to fund and work with companies that are developing innovative solutions across this AI technology stack, such as Oryx, Otonomo and Prospera. As an investor, it would be foolish of me to try to anticipate all of the vertical opportunities in the autonomous machines and services space, but there will be many more startups that tie the stack together to deliver a truly disruptive AI service. If you are a founder building on this AI technology stack, please reach out to my partners and me.
