Associate Professor of Computer Science Doug James and his team are modeling the vibrations of virtual objects and simulating the acoustic waves those vibrations produce in the surrounding air. According to the team, these physics-based simulations faithfully reproduce the sounds the modeled objects would make in the scenarios studied. Just as physically based visual behaviors have been built into games and animation for years, the researchers say, acoustic models can now be brought into the picture as well.
Most of the sounds generated by water are actually produced by tiny air bubbles that form in the liquid. As the water moves, these bubbles become trapped near its surface, where surface tension squeezes them and forces them to contract. Eventually the pressure of the compressed air overcomes the surface tension and the bubbles expand again. This continual cycle of compression and expansion behaves like a harmonic oscillator and radiates sound waves into the air.
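The harmonic-oscillator picture above corresponds to the classic Minnaert model of bubble acoustics, in which a bubble of a given radius rings at a characteristic resonant frequency and its sound decays like a damped sinusoid. The sketch below is an illustration of that textbook model, not the Cornell team's actual method; the damping constant and physical parameters are assumed round-number values.

```python
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101325.0, rho=998.0):
    """Resonant frequency (Hz) of an air bubble in water (Minnaert model).

    gamma: heat capacity ratio of air, p0: ambient pressure (Pa),
    rho: water density (kg/m^3).
    """
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

def bubble_waveform(radius_m, duration_s=0.05, rate=44100, damping=60.0):
    """One bubble 'chirp': an exponentially damped sinusoid at the
    bubble's resonant frequency (damping constant is an assumed value)."""
    f0 = minnaert_frequency(radius_m)
    n = int(duration_s * rate)
    return [math.exp(-damping * i / rate) * math.sin(2.0 * math.pi * f0 * i / rate)
            for i in range(n)]

# A 1 mm bubble resonates at roughly 3 kHz in this model.
print(minnaert_frequency(0.001))
```

A millimeter-scale bubble ringing in the low kilohertz range is why running water sounds the way it does: each bubble contributes a brief, pitched "plink" whose frequency rises as the bubble gets smaller.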
These vibrations are the focus of the Cornell modeling effort. The method starts by calculating the geometry of the water, determining where bubbles should be located and how they would move in any given situation. This data is then used to derive the expected vibrations, which in turn are used to calculate the resulting sound waves.
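The final mixing step of such a pipeline can be pictured as summing each bubble's short damped oscillation into one audio track. The sketch below assumes a hypothetical list of bubble events, each with a start time, resonant frequency, and amplitude; the actual Cornell simulation derives these from the fluid geometry rather than taking them as input.

```python
import math

RATE = 44100  # samples per second

def damped_sine(freq_hz, n, damping=80.0, rate=RATE):
    """One bubble's contribution: an exponentially decaying sinusoid."""
    return [math.exp(-damping * i / rate) * math.sin(2.0 * math.pi * freq_hz * i / rate)
            for i in range(n)]

def mix_bubbles(events, total_s=0.5, rate=RATE):
    """Sum bubble events (start seconds, frequency Hz, amplitude)
    into a single mono track."""
    out = [0.0] * int(total_s * rate)
    for start_s, freq_hz, amp in events:
        start = int(start_s * rate)
        n = min(len(out) - start, int(0.05 * rate))  # clip each chirp to 50 ms
        for i, s in enumerate(damped_sine(freq_hz, n)):
            out[start + i] += amp * s
    return out

# Hypothetical bubble events, purely for illustration.
track = mix_bubbles([(0.00, 3300.0, 0.8), (0.12, 2100.0, 0.5), (0.30, 4800.0, 0.6)])
print(len(track))
```

Overlapping many such events per second, with frequencies drawn from the simulated bubble sizes, is what turns individual "plinks" into the continuous babble of flowing water.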
While the Cornell research focuses on various behaviors of water, the study also examines the sounds of large groups of objects in close proximity, such as the shaking of a bin of Lego blocks or a bag full of shells. The study also models the sounds of shattering glass and the resulting clatter of falling shards.
The current calculations still take many hours to perform and require a compact, localized water source. However, James and his colleagues are confident they can extend their technique to handle real-time calculations and to process larger bodies of water with more complex motions and sounds.
TFOT has previously reported on other methods to model sound – you are welcome to read our article on a device that translates sound waves into vibrations that can be interpreted by hearing-impaired people, a new hearing aid that relies on advanced algorithms modeling the way the inner ear interprets sound waves, a touch table that turns movements on the table into different sounds, and the Yamaha TENORI-ON digital instrument that turns patterns of LED lights into sound.
You can read more about the research at Cornell in a paper to be presented at the ACM SIGGRAPH conference in August 2009, and watch videos of different water effects on the Cornell page devoted to the research.
Icon image credit: Wikimedia Commons user – Henningklevjer