
A 9 minute Intro to
Edge Computing

Although it is hardly a secret, a steep rise in the number of connected devices around us is set to change the way we live, work, and interact with technology. By 2025, forecasts indicate there will be as many as 75 billion smart devices globally, ushering in a new era of hyper-connectivity. These devices will not only collect data, but also produce and process information directly on the products closest to their users: on the edge. Increased functionality and computing power on the edge is already changing the way companies design and build products, from intelligent construction-site video surveillance to oil rig maintenance tracking. In this follow-up to our 6 Minute Intro to AI, we'll unpack popular edge terminology, identify key use cases, and outline what's next in the world of edge computing.
Read on!
507ZB

(507,000,000,000,000 GB) Volume of data that will be generated by edge devices in 2019 alone (Cisco, 2018)

10000×

Increase in the amount of data processed by IoT and edge devices by 2025 (BBVA, 2018)

$6.7B

Edge computing market size by 2022 (CB Insights, 2018)

75B

Number of IoT devices that will be installed globally by 2025 (Statista, 2018)

What exactly is the ‘Edge’?

Edge computing refers to applications, services, and processing performed outside of a central data center and closer to end users. The definition of “closer” falls along a spectrum and depends highly on networking technologies used, the application characteristics, and the desired end user experience.

While edge applications do not need to communicate with the cloud, they may still interact with servers and internet-based applications. Many of the most common edge devices feature physical sensors and actuators (temperature sensors, lights, speakers), and moving computing power closer to them in the physical world makes sense. Do you really need to rely on a cloud server when asking your lamp to dim the lights? With collection and processing power now available on the edge, companies can significantly reduce the volume of data that must be moved to and stored in the cloud, saving themselves time and money in the process.

Image recognition and video streaming are just the tip of the iceberg, and companies across a range of industries can likewise leverage local computing power. Security camera companies, for example, struggle to use cloud-based solutions because streaming real-time data and video to the cloud is prohibitively expensive. Autonomous cars need offline functionality on the road, while AR/VR gaming companies maintain their brand credibility by keeping their products resilient to lag. Google's soon-to-be-released cloud-streaming game service has suffered from well-documented latency issues in early trials.

The stakes are high

With edge computing set to change the way we live and work, it is critical for companies to understand what is at stake for their business models, customer experiences, and workforces. Edge computing impacts three distinct dimensions: Reliability, Privacy, and Latency, each with profound implications for companies and consumers alike. Additionally, the convergence of edge computing and artificial intelligence is unlocking new opportunities for companies in 2020 and beyond.


Reliability

A primary motivator driving edge computing's adoption is the need for robust and reliable technology in "hard to reach" environments. Many industrial and maintenance businesses simply cannot rely on internet connectivity for mission-critical applications. Wearables must also be resilient enough to perform without 4G. For these use cases and many more, offline reliability makes all the difference.

McKinsey & Company recently identified varied connectivity and data mobility, together with the need for real-time decision making, as the two factors driving edge computing's adoption across a range of new use cases. Imagine an IoT-powered heart monitor that needs to notify both the patient and the patient's physician when it detects a problem requiring immediate action. In this environment, milliseconds matter. Cloud-based systems introduce a degree of uncertainty that embedded systems on the edge help to resolve. Recording and processing data locally is inherently more reliable.

For a voice assistant, reliability is determined by how often it correctly recognizes what a user asks of it. You can measure it with a simple success rate: out of all the requests made, how often did the assistant recognize and perform the intended action, and under which conditions?
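That success rate can be computed directly from a test log. The sketch below is purely illustrative: the field names and the `success_rate` helper are assumptions for the example, not part of any real assistant API. Tagging each utterance with its recording conditions lets you break the rate down by environment.

```python
# Hypothetical sketch: measuring a voice assistant's success rate
# from a log of test utterances. Field names are illustrative.

def success_rate(interactions):
    """Fraction of utterances where the assistant performed the
    intended action."""
    if not interactions:
        return 0.0
    correct = sum(1 for i in interactions if i["recognized"] == i["intended"])
    return correct / len(interactions)

# Example: four test utterances recorded under different conditions
log = [
    {"intended": "lights_on", "recognized": "lights_on", "noise": "quiet"},
    {"intended": "lights_on", "recognized": "lights_on", "noise": "tv_on"},
    {"intended": "set_timer", "recognized": "play_music", "noise": "tv_on"},
    {"intended": "set_timer", "recognized": "set_timer", "noise": "quiet"},
]
print(success_rate(log))  # 0.75
```

Filtering the log by the `noise` field before calling `success_rate` gives the "under which conditions" part of the measurement.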

Privacy

Protecting privacy is both a potential asset and a risk for businesses in a world where data breaches occur regularly: in the first half of 2017 alone, nearly 2 billion records were lost or stolen. Consumers have become wary that their smart speakers (and the people behind them) are always listening. Companies largely reliant on cloud technology have been scrutinized for what they know about users and what they do with that information.

To be relevant, AI needs to use personal information; to do that while still maintaining privacy, it must run on the edge.

In the healthcare market, privacy is a requirement under HIPAA. Consider that every hospital bed has roughly 20 sensors. A recent IBM study found that data breaches cost the healthcare industry three times more than any other sector. As the number of sensors collecting and processing data continues to expand, privacy represents real value for healthcare companies looking to balance innovation with protection of patient data.

Edge computing helps to alleviate some of these concerns by bringing processing and collection into the environment(s) where the data is produced. The leading voice assistants on the market today, for example, systematically centralize, store, and learn from every interaction end users have with them. Their records include raw audio data and the outputs of all algorithms involved, attached to logs of all actions taken by the assistant. The latest research and innovations also suggest that interactions are set to become significantly smoother and more relevant based on additional information about end users' tastes, contacts, habits, etc., as is currently being explored by Google and Snips alike.

This creates a paradox for voice companies and others that rely on the cloud. For AI-powered voice assistants to be relevant and useful, they must know more personal information about their users. Moving processing power to the edge is the only way to offer the same level of performance without compromising on privacy.

Latency

In the simplest terms, latency refers to the time between an action and a response. You may have experienced latency when using a smartphone if you notice a slight delay in the time it takes to open an app after touching the icon on your screen. In a recent interview with VentureBeat, Synaptics CEO Rick Bergman identified latency as a primary motivator for moving voice recognition and other computing power to the edge.

Indeed, for many industrial use cases, there is more at risk than a poor user experience and making users wait. For manufacturing companies, mission-critical systems cannot afford the delay of sending information to off-site cloud databases. Cutting power to a machine a split second too late is the difference between avoiding and incurring physical damage.

When the computing is on the edge, latency just isn’t an issue. Customers and workers won’t have to wait while data is sent to and from a cloud server. Their maintenance reports, shipping lists or error logs are recorded and tracked in real time.

Latency can sound like an abstract concern. "How is latency really going to make a difference for my customers?" manufacturers may ask. CloudPing offers a real-time latency checker that tells you exactly how long it takes your browser to ping one of the many AWS servers around the world. Can your customers wait 300 ms for a response?
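You can also measure this yourself, in the spirit of CloudPing. The sketch below times a single HTTP round trip; the endpoint shown in the comment is just an example, so substitute whatever server or region your product actually talks to.

```python
# A minimal sketch of measuring round-trip latency to a server.
# Uses only the standard library; the example URL is an assumption.

import time
import urllib.request

def round_trip_ms(url, timeout=5):
    """Time a single HTTP round trip to `url`, in milliseconds."""
    start = time.perf_counter()
    urllib.request.urlopen(url, timeout=timeout).read(1)
    return (time.perf_counter() - start) * 1000.0

# e.g. round_trip_ms("https://dynamodb.eu-west-1.amazonaws.com")
# A cloud round trip commonly lands in the tens to hundreds of
# milliseconds; an on-device lookup takes microseconds.
```

A single sample is noisy, so in practice you would take the median of several calls before drawing conclusions.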

20%

Percentage of endpoint IoT devices that will have local machine learning by 2022 (Arm, 2018)

1.2GB

Volume of data generated by every person on the planet per day by 2020 (Gartner)

45%

Percentage of all data created by IoT devices that will be stored and processed on the edge (IDC, 2018)

Trends

Edge drives hyper-personalized experiences

From personalized recommendations on e-commerce sites to Spotify's Discover Weekly music playlists, consumers now expect a unique experience tailored to their exact tastes. Companies are also finding ways to deliver personalization without requiring your personal data as an input. Content recommendation startup Canopy offers a customized experience without sending customer data to a server. Edge computing unlocks entirely new possibilities for personalization by processing data directly in the local environment. This unique proximity to end users means companies can use embedded computing resources to craft behavior-based messaging, offers, and experiences without compromising on privacy or relying on cloud connectivity.

High-bandwidth IT cloud
budgets will move to the edge

The rapid rise in raw data produced and collected by IoT devices has created a bandwidth problem. A virtually limitless stream of data created by IoT and edge devices is often sent to the cloud for analysis and machine learning. Beyond the obvious energy consumption and performance implications, sending this amount of data to a server can be extremely costly. By indiscriminately transferring massive volumes of data, companies are racking up their IT bills. To account for this, companies will increasingly devote more budget to the edge to avoid "spamming" the cloud with frequent updates, especially as local computing and processing becomes more powerful.
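The idea of not "spamming" the cloud can be made concrete with a simple edge-side filter. The sketch below is purely illustrative (the function name and threshold are assumptions, not a real API): a device samples a sensor continuously but uploads only readings that deviate from a running baseline, keeping routine data local.

```python
# Hypothetical sketch of edge-side filtering for a temperature
# sensor: upload only readings that deviate from a running mean,
# instead of streaming every sample to the cloud.

def filter_for_upload(readings, threshold=2.0):
    """Return only readings that differ from the running mean by
    more than `threshold`; everything else stays on the device."""
    to_upload = []
    mean = None
    for i, r in enumerate(readings):
        if mean is not None and abs(r - mean) > threshold:
            to_upload.append(r)
        # incremental running mean over all samples seen so far
        mean = r if mean is None else mean + (r - mean) / (i + 1)
    return to_upload

samples = [21.0, 21.2, 20.9, 27.5, 21.1, 21.0]  # one anomalous spike
print(filter_for_upload(samples))  # [27.5]
```

Out of six samples, only the one anomaly leaves the device: a roughly 80% reduction in upstream traffic in this toy case, which is exactly the bandwidth saving the trend describes.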

Local computing power
becomes the norm

We are living in a centralized world, whether we think about it that way or not. Every time you turn on your mobile phone or open a SaaS application, you are essentially engaging with an interface that represents what is occurring on a cloud server. In his 2016 talk "The End of Cloud Computing", Andreessen Horowitz's Peter Levine outlined a vision for the future of edge computing. "Your car is basically a data center on wheels. A drone is a data center on wings," Levine quipped. Nearly three years later, Levine's words couldn't be more prophetic. With more and more applications capable of functioning in local environments thanks to innovations in edge computing, decentralization is becoming far more than just a trendy buzzword.

Snips is AI on the Edge

It is now ordinary to have a cloud-powered microphone in our direct environment, whether it’s in our pocket, home, or vehicle. Naturally, privacy and security concerns are rising with such centralized, ubiquitous voice interfaces. At Snips, we’ve been working on privacy-preserving voice interfaces on the edge since our company’s founding in 2013. We process voice data at most one hop away from where it was produced. Some people have a more flexible definition of edge, but we like to think of it in terms of the following use case: if your coffee machine records your voice asking for a double espresso, at worst your voice data will be processed within the local network of your house, but never outside. For us, edge is either embedded or within your local network.

Embedded Natural Language Voice Recognition —
No Strings Attached

We used to think that a compromise needed to be made between on-device privacy and cloud-level performance. But you can now run a voice assistant on the edge that outperforms cloud alternatives in practically all use cases, whether for smart lights and other small-vocabulary applications, or more extensive shopping lists and large music libraries. At Snips, we've done this with speech contextualization and data generation, without trading off performance.

Why are we sharing this?

Snips has spent years working hard to push what is possible on the edge further than ever before. We've built AI-powered voice technology that allows companies to easily embed powerful voice interfaces into their products and devices. As edge computing becomes a more visible part of our daily lives, we believe it is important to share our insight into the basics.