It was quite easy to walk away from the Consumer Electronics Show (CES) 2020 disappointed with the AI capabilities on display, even though the technology was omnipresent at the show. There were plenty of robots, ranging from Samsung’s Ballie to dishwashing and barista robots, as well as smart TVs and healthcare and fitness devices – all highlighting the use of AI. However, the ubiquity of AI across the show in some sense diluted the term’s meaning, as most of the exhibits were repeats from previous years or “half-baked” concepts.
At the same time, without advances in deep learning we wouldn’t have Amazon Alexa or Google Home, the two breakout products that have dominated CES for the past few years. Deep learning enables machines to parse human speech and provide intelligent responses. Similarly, deep learning techniques enable cameras to detect objects, weeds, faces, people, lane markings, and stop signs. Vision and language understanding challenged researchers and engineers for many decades, and in that context today’s AI capabilities are truly remarkable.
However, we have only scratched the surface with AI. While vision AI has garnered much of the recent attention, look for breakthroughs in AI for language and audio processing. AI use cases and products still have some maturing to do in the business-to-consumer (B2C) sector, but business-to-business (B2B) sectors like retail are expected to break away and scale adoption. New markets like synthetic content are also around the corner thanks to generative AI models, giving rise to new entrants and pioneering business models.
The major AI themes at CES 2020 illustrated progress:
Enterprise AI is gaining ground on the back of consumer AI: Continuing last year’s trend, companies like John Deere increased their presence at CES, showcasing their AI capabilities. John Deere provided a comprehensive view of its AI-infused weed sprayer, powered by an expanding data strategy built around imaging and tagging agricultural data, including crops, insects, and weeds. In the same vein, Bosch, Doosan Bobcat, and Omron highlighted their enterprise AI push across the retail, construction, and manufacturing sectors.
Retailers are expected to infuse AI across the frontend and backend: Shopper analytics uses in-store cameras to track and analyze shopper behavior in order to optimize product placement and store layout, personalize the shopping experience, and prevent shoplifting. In Omdia’s conversations with AI vendors, in-store shopper analytics was frequently cited as one of the most desired enterprise AI use cases. Similarly, smart-mirror technology for clothes fitting, makeup, and hairstyling was a common theme across several exhibitors.
AI for audio processing and acoustics is the next frontier: Directional audio processing and filtering has seen AI-driven breakthroughs, allowing devices like Amazon Alexa and Google Home to pick out voice commands in noisy households. The same AI acoustics technology is now showing up in many other applications, such as smart hearing aids, and in sectors including manufacturing, automotive, construction, and HVAC. In fact, IBM has its own AI Acoustics Insight program.
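The directional pickup described above builds on classical array-processing ideas that learned models then refine. A minimal sketch of the non-learned baseline, delay-and-sum beamforming, is below; the sample rate, microphone spacing, and function names are assumptions for illustration, not any vendor’s implementation.

```python
import numpy as np

# Illustrative delay-and-sum beamformer for a 2-microphone array (a sketch of
# the classical baseline behind directional pickup; all constants are assumed).

FS = 16_000           # sample rate in Hz (assumed)
MIC_SPACING = 0.1     # distance between the two microphones in meters (assumed)
SPEED_OF_SOUND = 343.0

def steering_delay_samples(angle_deg: float) -> int:
    """Inter-microphone delay (in whole samples) for a source at angle_deg
    from broadside (0 degrees = directly in front of the array)."""
    delay_s = MIC_SPACING * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    return int(round(delay_s * FS))

def delay_and_sum(mic0: np.ndarray, mic1: np.ndarray, angle_deg: float) -> np.ndarray:
    """Time-align mic1 to mic0 for the given look direction, then average.
    Signals arriving from angle_deg add coherently; others partially cancel."""
    d = steering_delay_samples(angle_deg)
    aligned = np.roll(mic1, d)  # integer-sample alignment keeps the sketch simple
    return 0.5 * (mic0 + aligned)

# Toy demo: a 440 Hz tone arriving from 30 degrees reaches mic1 before mic0.
t = np.arange(FS) / FS
source = np.sin(2 * np.pi * 440 * t)
d = steering_delay_samples(30.0)
mic0, mic1 = source, np.roll(source, -d)       # mic1 leads mic0 by d samples

steered = delay_and_sum(mic0, mic1, 30.0)      # beam aimed at the source
off_axis = delay_and_sum(mic0, mic1, -30.0)    # beam aimed away from it

# The steered beam retains more signal energy than the off-axis beam.
print(np.mean(steered**2) > np.mean(off_axis**2))  # → True
```

In production smart speakers this baseline is combined with learned components (for example, neural noise suppression and wake-word models), but the geometry-driven alignment shown here is what makes the pickup directional in the first place.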
Synthetic content marketplace could shake up multiple industries: Generative AI is a family of techniques used to artificially generate images, audio, voice, or any other kind of data. Companies like Typecast provide AI-generated voices for news organizations, while DataGen supplies synthetic data for building AI applications ranging from robots, drones, and virtual and augmented reality (VR/AR) to autonomous cars. Neon aims to provide virtual synthetic humans: life-size avatars complete with a simulated brain and emotional capabilities. However, deepfakes represent a worrying outcome of generative AI. While the technology is still in its early stages, it could completely transform media & entertainment, gaming, and the AI data life cycle itself.
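The core idea behind synthetic data can be shown with a toy example: fit a probabilistic model to real samples, then draw brand-new samples from it. The sketch below uses a simple Gaussian fit purely for illustration; vendors such as DataGen rely on deep generative models and 3D simulation, and the variable names and example measurements here are assumptions.

```python
import numpy as np

# Toy illustration of the generative idea behind synthetic data: learn a
# distribution from real observations, then sample new, never-observed points.
# (A Gaussian stands in for the deep generative models vendors actually use.)

rng = np.random.default_rng(0)

# Stand-in "real" dataset, e.g. (height_cm, weight_kg) measurements (assumed).
real = rng.multivariate_normal(mean=[170, 70], cov=[[60, 30], [30, 90]], size=500)

# "Training": estimate the distribution's parameters from the real data.
mu = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# "Generation": draw synthetic samples that follow the same statistics
# without duplicating any real record.
synthetic = rng.multivariate_normal(mu, cov, size=1000)

print(synthetic.shape)  # → (1000, 2)
# Sanity check: the synthetic sample mean should land near the fitted mean.
print(np.abs(synthetic.mean(axis=0) - mu).max() < 2.0)
```

The practical appeal is the same at any scale: once a model captures the data distribution, additional labeled examples become cheap, which is why synthetic data is attractive for training vision systems for robots, VR/AR, and autonomous cars.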