sonifying molecules
Over the past few years, we’ve been exploring sound as a sensory channel for understanding the physics of real-time interactive molecular dynamics simulations. In the molecular sciences, sound is a vastly underutilized channel for representing data, partly because audio representational standards are less well defined than those for graphics. Depicting an atom using a sphere is an arbitrary decision, but it’s intelligible because an atom and a sphere are both objects which are spatially delimited. Defining clearly delimited objects in the audio realm is not as straightforward, either spatially or compositionally. For example, it is difficult to imagine what constitutes an ‘atomistic’ object in a piece of audio design or music.
Our work suggests that sound is best utilized for representing properties which are non-local: potential energy, electrostatic energy, local temperature, strain energy, etc. The non-locality of these properties makes them extremely difficult to visualize using conventional graphical rendering strategies (and even if effective strategies existed, they would likely lead to significant visual congestion). This video shows a sonification of the potential energy of 17-alanine, a small protein which can easily be manipulated to explore a number of different states.
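To make this kind of mapping concrete, here is a minimal parameter-mapping sketch in Python (an illustration only, not the pipeline used in our published work): a hypothetical potential-energy trace, sampled once per simulation frame, is interpolated up to audio rate and used to drive the pitch of a sine oscillator, so that rising energy is heard as rising pitch.

```python
# A minimal parameter-mapping sonification sketch (illustrative, not the
# authors' actual pipeline): a slowly sampled potential-energy trace is
# interpolated up to audio rate and mapped to the pitch of a sine oscillator.
import numpy as np

def sonify_energy(energy, sim_rate=30.0, sr=44100, f_lo=220.0, f_hi=880.0):
    """Map an energy time series (arbitrary units) to a pitch contour."""
    energy = np.asarray(energy, dtype=float)
    # Normalise the energy trace to [0, 1] so it can drive a frequency range.
    e = (energy - energy.min()) / (np.ptp(energy) + 1e-12)
    # Interpolate the control signal from simulation rate up to audio rate.
    t_ctrl = np.arange(len(e)) / sim_rate
    t_audio = np.arange(0, t_ctrl[-1], 1.0 / sr)
    e_audio = np.interp(t_audio, t_ctrl, e)
    # Higher potential energy -> higher pitch.
    freq = f_lo + (f_hi - f_lo) * e_audio
    phase = 2 * np.pi * np.cumsum(freq) / sr
    return 0.5 * np.sin(phase)

# Example: sonify a made-up energy trace (one value per simulation frame).
audio = sonify_energy(np.random.randn(300).cumsum())
```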
Another approach we have taken to molecular sonification is scanned synthesis, in which a dynamic model of a physical system (which typically updates at sub-audio rates) is scanned along an arbitrary path at the audio rates required to render sound. The scanned synthesis approach translates the low-update-rate geometric data produced by a molecular simulation into audio. It recognizes that atoms and molecules are in constant motion, with vibrations and structural fluctuations occurring across a range of time-scales and corresponding length-scales.
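As a rough illustration of how scanned synthesis can be applied to simulation output (the function names and the toy "simulation" below are illustrative assumptions, not our published implementation), the sketch treats the per-atom displacements along an arbitrary path through the structure as a wavetable: the table is refreshed at the slow simulation rate and scanned at audio rate with linear interpolation to generate samples.

```python
# A minimal scanned-synthesis sketch under simplifying assumptions:
# displacements of atoms along a path through the structure form a wavetable
# that is updated at the slow simulation rate and scanned at audio rate.
import numpy as np

def scanned_synthesis(frames, scan_freq=110.0, sim_rate=30.0, sr=44100):
    """frames: (n_frames, n_atoms) array of e.g. per-atom displacements."""
    frames = np.asarray(frames, dtype=float)
    n_frames, n_atoms = frames.shape
    n_samples = int(n_frames / sim_rate * sr)
    out = np.zeros(n_samples)
    phase = 0.0
    for i in range(n_samples):
        # Pick the wavetable from the current (slowly updating) frame.
        table = frames[min(int(i / sr * sim_rate), n_frames - 1)]
        # Scan the table at audio rate with linear interpolation.
        pos = phase * n_atoms
        j = int(pos) % n_atoms
        frac = pos - int(pos)
        out[i] = (1 - frac) * table[j] + frac * table[(j + 1) % n_atoms]
        phase = (phase + scan_freq / sr) % 1.0
    # Remove DC offset and normalise so the result is audible.
    out -= out.mean()
    return out / (np.abs(out).max() + 1e-12)

# Example: a toy "simulation" of 64 atoms vibrating over 300 frames.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
frames = np.array([np.sin(3 * t + 0.1 * k) for k in range(300)])
audio = scanned_synthesis(frames)
```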
The audio file here offers a sense of what these microscopic oscillations might sound like when rendered as audio. This work is part of a broader effort to develop generalised molecular sonification methods which can accompany molecular visualisations, and which are faithful to the underlying simulation data while also considering their aesthetic and musical form.
PUBLICATIONS
T. J. Mitchell, A. J. Jones, M. B. O’Connor, M. D. Wonnacott, D. R. Glowacki, J. Hyde, “Towards molecular musical instruments: interactive sonifications of 17-alanine, graphene and carbon nanotubes,” AM ’20: Proceedings of the 15th International Conference on Audio Mostly, 2020, doi: 10.1145/3411109.3411143
R. E. Arbon, A. J. Jones, L. Bratholm, T. Mitchell, D. R. Glowacki, “Sonifying stochastic walks on biomolecular energy landscapes,” Proceedings of the International Conference on Auditory Display 2018 (ICAD ’18), arXiv:1803.05805
J. Hyde, T. Mitchell, D. R. Glowacki, “Molecular Music: repurposing a mixed quantum-classical model as an audiovisual instrument,” Proceedings of the 17th International Generative Art Conference (GENArt 2014), Rome, Italy
D. R. Glowacki, M. O’Connor, G. Calabró, J. Price, P. Tew, T. Mitchell, J. Hyde, D. P. Tew, D. J. Coughtrie, S. McIntosh-Smith, “A GPU-accelerated immersive audiovisual framework for interactive molecular dynamics using consumer depth sensors,” Faraday Discussions, 169, 2014, 63–89, open access