NanoSense – Interactive Haptic-Sonification of Atomic Force Microscopy Data

In the NanoSense lab with Prof. Robert Magerle, Mathias Loew, Dr. Martin Dehnert, Dr. Stephen Barrass, and Dr. Andreas Otto.

I have just finished a visit as a Guest Researcher in the NanoSense project at the Technical University of Chemnitz. During the visit I was part of the growing team of interdisciplinary researchers led by Prof. Robert Magerle in the Department of Physics and Prof. Alexandra Bendixen in Perceptual Psychology. The aim of the project is to explore multimodal perception of Atomic Force Microscopy data from studies of collagen and other soft materials.

In the first stage of my visit I developed a software framework for the multimodal interface using the open source CHAI3D toolkit for 3D visualisation and haptics, which also includes the OpenAL sound API. This initial NanoSense prototype provides perceptual mappings from force data to haptic force, together with simple pitch-based parameter-mapping sonifications.
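As a rough illustration of what a pitch-based parameter mapping can look like, here is a minimal sketch (not the actual NanoSense code): a normalised force sample is mapped to a frequency on an exponential scale, so equal steps in the data produce perceptually even pitch steps. The function name and frequency range are my own assumptions.

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: map a normalised AFM force sample (0..1) to a
// pitch in Hz on an exponential (perceptually even) scale.
double forceToPitch(double force, double fMin = 220.0, double fMax = 880.0)
{
    double f = std::clamp(force, 0.0, 1.0);   // guard the input range
    return fMin * std::pow(fMax / fMin, f);   // exponential interpolation
}
```

With the default range, a force of 0 maps to 220 Hz, a force of 1 to 880 Hz, and the midpoint lands at the geometric mean of 440 Hz rather than the arithmetic mean, which matches how pitch is heard.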

NanoSense workbench prototype with Omni over a flexible screen with a mono speaker directly underneath.

After trialling the “hello world” pitch-based sonification prototype, I introduced the NanoSense team to other sonification methods such as audification, multi-parameter mapping, model-based sonification, and stream-based sonification. This led to further discussions about the features of Atomic Force Microscopy (AFM) datasets that were important or interesting in collagen studies, and to the introduction of task-specific sonic metaphors, context markers, and other ideas that required a wide range of sound synthesis techniques. OpenAL is a gaming API and does not provide the sound synthesis techniques needed to support more general sonifications. The need for perceptually coherent integration of the sonification with the 1 kHz haptics loop ruled out offline synthesis tools like Csound, as well as server-based synthesizers like Pd or SuperCollider. I chose the Synthesis ToolKit in C++ (STK), a set of open source audio signal processing and algorithmic synthesis classes designed for rapid development of music synthesis and audio processing software, with an emphasis on cross-platform functionality, real-time control, ease of use, and educational example code.
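The integration constraint here is worth spelling out: the haptics loop runs at 1 kHz while audio is rendered in blocks at the sample rate, so parameter updates must cross between the two threads without locks or glitches. The sketch below (my own illustration, not the project's framework; all names are hypothetical) shows one common pattern: the haptic thread writes a target frequency into an atomic, and the audio callback reads it when rendering each block.

```cpp
#include <atomic>
#include <cmath>
#include <vector>

// Hedged sketch of a lock-free bridge between a 1 kHz haptics loop and an
// audio synthesis callback. The haptic thread writes a target frequency;
// the audio thread reads it and renders a block of sine samples.
class HapticSonifier {
public:
    explicit HapticSonifier(double sampleRate) : sampleRate_(sampleRate) {}

    // Called from the 1 kHz haptics loop.
    void setFrequency(double hz) { targetHz_.store(hz, std::memory_order_relaxed); }

    // Called from the audio callback; fills one block of samples.
    void render(std::vector<double>& block)
    {
        const double kTwoPi = 6.283185307179586;
        const double hz = targetHz_.load(std::memory_order_relaxed);
        const double inc = kTwoPi * hz / sampleRate_;
        for (double& s : block) {
            s = std::sin(phase_);
            phase_ += inc;
            if (phase_ > kTwoPi) phase_ -= kTwoPi;   // keep phase bounded
        }
    }

private:
    double sampleRate_;
    double phase_ = 0.0;
    std::atomic<double> targetHz_{440.0};
};
```

In a real STK-based framework the sine oscillator would be replaced by richer synthesis classes, but the thread-crossing pattern stays the same.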

In the second iteration of the NanoSense framework I integrated STK into CHAI3D to allow a broad range of interactive sonification techniques to be explored and trialled in the project. This framework enables a systematic Sonic Information Design process involving the experimental evaluation of sonic metaphors for soft and hard materials using computational models of physical plates and bars, musical instruments, and granular synthesis in STK. Information issues to consider include the need to represent ratio data with both positive and negative values. AFM datasets consist of amplitude and phase data, together with derived datasets. Sonic issues include polarity, scaling, categorical perception, perceptual streaming, temporal relations, and multimodal attention.
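To make the polarity and scaling issues concrete, here is one possible (hypothetical) mapping for ratio data with positive and negative values: the magnitude drives pitch on an exponential scale, while the sign drives a separate perceptual dimension such as stereo position, so that positive and negative regions can be heard as distinct streams. All names and parameter choices below are my own assumptions for illustration.

```cpp
#include <algorithm>
#include <cmath>

struct SonicParams {
    double pitchHz;  // magnitude mapped to pitch
    double pan;      // sign mapped to stereo position: -1 left, +1 right
};

// Hypothetical bipolar mapping for signed ratio data (e.g. derived AFM
// phase contrast): magnitude -> pitch, polarity -> spatial position.
SonicParams mapBipolar(double value, double maxAbs,
                       double fMin = 220.0, double fMax = 880.0)
{
    double mag = std::min(std::abs(value) / maxAbs, 1.0);   // normalise magnitude
    SonicParams p;
    p.pitchHz = fMin * std::pow(fMax / fMin, mag);          // exponential pitch scale
    p.pan = (value < 0.0) ? -1.0 : 1.0;                     // polarity cue
    return p;
}
```

Splitting magnitude and polarity across two dimensions avoids the ambiguity of a single pitch axis, where a listener cannot tell whether a mid-range tone means a small positive or a small negative value.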