The Idea of Smell-O-Vision Has Been Around for Over a Century. AI May Finally Make It Work


Since the early 1900s, the entertainment industry has been attempting to pair the experience of smell with video entertainment.

In 1916, the Rivoli Theater in New York City introduced scents into the theater during a movie called The Story of Flowers. In 1933, the Rialto Theater installed an in-theater smell system. Hans Laube developed a technique called Scentovision, which was introduced at the 1939 World’s Fair. A decade ago, Japanese researchers were also exploring “Smell-O-Vision” for home TVs, working on a television that used vaporizing gel pellets and emitted air streams from each corner of the screen into the living room.

However, none of these efforts took off, primarily because they didn’t work very well: we have never been able to recreate the world’s smells accurately or at scale, largely because we have never been able to digitally capture them.

This doesn’t mean the fragrance and scent industry hasn’t been robust and growing, but it’s a very different task to create a singular fragrance for a consumer product than to develop something akin to a “smell printer” that emits scents on command. The latter requires a comprehensive digital understanding of scent molecules, something that has only recently become possible.

The digital understanding of the world of smells has accelerated in recent years, and one company leading the way is Osmo, a startup that has raised $60 million in funding. Osmo is led by Alex Wiltschko, an ex-Googler who received his PhD in olfactory neuroscience from Harvard in 2016. Wiltschko, who led a group at Google that spent five years using machine learning to predict how different molecules will smell, founded Osmo in early 2023 with the mission of “digitizing smell to improve the health and well-being of human life” by “building the foundational capabilities to enable computers to do everything our noses can do.”

Osmo employed AI to explore the connection between molecular structure and the perception of smell, demonstrating that a machine can predict scents with remarkable accuracy. They developed a machine-learning model using graph neural networks (GNNs), trained on a dataset of 5,000 known compounds, each labeled with descriptive smells like “fruity” or “floral.” This model was then tested on 400 novel compounds, selected to be structurally distinct from anything previously studied or used in the fragrance industry, to see how well it could predict their scents compared to human panelists.
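The pipeline described above, representing a molecule as a graph of atoms, passing information between neighboring atoms, and reading out odor descriptors, can be sketched in miniature. The toy graph, the untrained weights, and the descriptor names below are all illustrative stand-ins, not Osmo's actual model or data:

```python
# A toy sketch of the graph-neural-network idea behind odor prediction:
# a molecule is a graph of atoms, each atom repeatedly averages in its
# neighbors' features ("message passing"), the atom vectors are pooled
# into one molecule vector, and a linear readout scores odor descriptors.
# All numbers here are made up for illustration.

DESCRIPTORS = ["fruity", "floral", "woody"]

def message_pass(node_feats, adjacency, rounds=2):
    """One feature list per atom; average each atom with its neighbors."""
    feats = [list(f) for f in node_feats]
    for _ in range(rounds):
        new_feats = []
        for i, f in enumerate(feats):
            group = [f] + [feats[j] for j in adjacency[i]]
            new_feats.append([sum(col) / len(group) for col in zip(*group)])
        feats = new_feats
    return feats

def readout(feats, weights):
    """Mean-pool the atom features, then score each descriptor linearly."""
    pooled = [sum(col) / len(feats) for col in zip(*feats)]
    return {name: sum(w * x for w, x in zip(ws, pooled))
            for name, ws in zip(DESCRIPTORS, weights)}

# A tiny stand-in "molecule": three atoms in a chain, two features each
# (imagine a crude encoding of element type and bond count).
node_feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
adjacency = {0: [1], 1: [0, 2], 2: [1]}
weights = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]  # toy, untrained

scores = readout(message_pass(node_feats, adjacency), weights)
print(scores)
```

A real model of this kind learns the weights from thousands of labeled molecules, so that the pooled vector ends up encoding the structural patterns that correlate with how a compound smells.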

The model’s capabilities were further challenged in an “adversarial” test, where it had to predict scents for molecules that were structurally similar but smelled different. Osmo’s model correctly predicted scents 50% of the time in this difficult scenario. Additionally, the model was able to generalize well beyond the original training data, assessing other olfactory properties like odor strength across a massive dataset of 500,000 potential scent molecules.

The Principal Odor Map (POM) created by Osmo’s model outperformed human panelists in predicting the consensus scent of molecules, a significant advance in olfactory science. In many cases, it showed, AI can predict smells from molecular structure better than individual human experts.

We’ve been able to digitally capture and categorize other sensory categories, such as vision, which has led to massive new industry value creation in robotics and autonomous vehicles. The biggest leaps have been a result of machine learning models, and now we’re seeing another massive leap forward in capabilities and product innovation through the application of generative AI.

One potential application Wiltschko describes is “teleporting scent,” where we’ll be able to capture a smell from one part of the world and digitally transfer it to another. To do this, he envisions a world where a local AI-guided molecular sensor could instantly identify the molecular makeup of any scent. From there, his odor map can create what is essentially a formula ready for teleportation without significant manual intervention by scent experts.

This idea, quickly recreating scents with AI from a digital framework, could lay the foundation for what film and TV makers have long dreamed of: technology that can recreate odors and smells at scale. In other words, we may finally enter a world where Smell-O-Vision becomes a reality. The potential for video entertainment, virtual reality, and other experiences in food service, travel, and more would no doubt lead to a multitude of new applications, much like we’ve seen over the past couple of decades with advances in computer and machine vision.


