How sound can model the world

by Brenden Burgess


Researchers from the Massachusetts Institute of Technology (MIT) and the MIT-IBM Watson AI Lab are exploring ways to use spatial acoustic information to help machines better represent their environment. The scientists have developed a machine-learning model that can capture how any sound in a room travels through the space, allowing the model to simulate what a listener would hear at different locations.
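To make the idea concrete, here is a minimal sketch (not the authors' actual model) of what such a learned acoustic function might look like: a tiny neural network that maps an emitter position and a listener position in a room to a predicted acoustic response. The random weights, layer sizes, and the `acoustic_field` function are all hypothetical illustrations; in the real system the network would be trained on audio recorded in the room.

```python
import numpy as np

# Hypothetical illustration: treat the room's acoustics as a learned
# function that maps (emitter position, listener position) to a scalar
# acoustic response. Weights are random here; in practice they would
# be fit to recordings made in the room.
rng = np.random.default_rng(seed=0)
W1 = rng.normal(scale=0.5, size=(4, 32))  # input: emitter (x, y) + listener (x, y)
b1 = np.zeros(32)
W2 = rng.normal(scale=0.5, size=(32, 1))
b2 = np.zeros(1)

def acoustic_field(emitter_xy, listener_xy):
    """Predict a loudness-like response for one emitter/listener pair."""
    x = np.concatenate([emitter_xy, listener_xy])  # 4-dim input
    h = np.tanh(x @ W1 + b1)                       # one hidden layer
    out = h @ W2 + b2
    return float(1.0 / (1.0 + np.exp(-out)))       # squash to (0, 1)

# "Simulate what a listener would hear at different locations": fix the
# sound source and query the field at several listener positions.
source = np.array([1.0, 1.0])
for listener in [np.array([0.0, 0.0]),
                 np.array([2.0, 3.0]),
                 np.array([5.0, 1.0])]:
    print(listener, acoustic_field(source, listener))
```

Because the field can be queried at any continuous position, a trained version of such a function can render the sound of a room from viewpoints where no microphone was ever placed, which is the capability the article describes.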

By accurately simulating the acoustics of a scene, the system can learn the basic 3D geometry of a room from sound recordings alone. The researchers use the acoustic information their system captures to build an accurate visual representation of the room, much as humans use sound to judge the properties of their physical environment.

Beyond its potential applications in virtual and augmented reality, this technique could help AI agents better understand the world around them. As Yilun Du, a graduate student in the Department of Electrical Engineering and Computer Science (EECS) and co-author of a paper describing the model, puts it: "By modeling the acoustic properties of the sound in its environment, an underwater exploration robot could sense things that are farther away than it could with vision alone."

"Most researchers have only focused on modeling vision so far. But as human beings, we have multimodal perception. Not only is vision important, sound is also important. I think this work opens up an exciting research direction on better using sound to model the world," Du says.

Read more about how sound can be used to model the world at https://news.mit.edu/2022/sound-model-ai-1101
