Design Engineering
Showcase 2020

Haptaesthesia: Development of a Bi-Directional Interface for the Control of Sound Through Physical Shape and Texture

Design Engineering MEng
Dr Lorenzo Picinali
Space Invaders

Music is a physical experience. Yet digital music creation has largely relied on relatively static interfaces with no connection to their means of generating sound, and these offer little of the non-auditory feedback, such as vibration, that is present in traditional musical instruments and central to their use. This project has explored ways in which music can be represented and interacted with via physical, tangible means, as well as how forms of active feedback, like vibration, can be incorporated into such interactions to allow greater engagement and control. The final prototypes — produced from home with accessible materials — have demonstrated the musical potential of such interfaces.

Audio extracts created with the prototypes developed can be heard at SoundCloud.

Contextual Introduction Video

Demo Video


When I was a child, my mother would occasionally suggest I draw my emotions, giving structure to the abstract and rendering the invisible visible. This project has applied a similar process to the invisible form of sound/music, by aiming to render it tangible. As ‘synaesthesia’ refers to the mapping of one sense to another, the term ‘haptaesthesia’ was coined for this project to denote how the sense of touch can evoke sound.

The goal has also been to devise interactions with the sound’s embodiment that control the sound in a manner that’s intuitive to certain individuals, or, at least, that can be adapted and learned by others. This goal is pertinent in our digital age, where much of the world of atoms around us — including tools for creating music — is increasingly replaced with digital bits of information that we access through the (physically) fixed portals of computers and ‘smart’ devices (see Contextual Introduction video).

Following a hands-on, explorative design process constrained to the home, a framework for controlling sound through physical shape/texture and receiving apt active feedback has been developed with relatively low cost materials: kinetic sand (a combination of sand and a silicone oil that enables it to be sculpted as though wet), a Microsoft Kinect depth camera, audio transducers (a.k.a. shakers) that supply vibrational feedback, and a computer running ‘Max’.

This system’s potential was demonstrated via prototypes that control aspects of additive synthesis and a wavetable oscillator (see Demo Video and below). While only two of many possible forms they could have taken, these prototypes draw on the project’s broad primary and secondary research, including consultation with experts and an online survey of 74 participants, to inform their functionality. They have been largely validated from home via discussion with professional musicians. Crucially, they have also helped me create music (hear further extracts at SoundCloud). It is hoped these prototypes may be recreated and adapted to expand musical expression for others, too.

Additional Prototype Details

The prototypes are summarised in the Demo Video. A key component is the depth-matrix processing in the software ‘Max’ (using its visual programming languages ‘Max’, ‘MSP’ and ‘Jitter’), which supplies usable data from the Kinect to ‘plug and play’ into sound generation and processing code. This code has been made open source for others to use and adapt, and includes a GUI for users to adjust its settings.
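To illustrate the kind of data this processing supplies, here is a minimal Python sketch of reducing a raw depth frame to a small normalised matrix with per-cell roughness indices. The actual processing lives in the open-source ‘Max’ code; the working depth range, grid size and standard-deviation roughness measure below are illustrative assumptions, not the project’s exact mapping:

```python
import numpy as np

def process_depth_frame(depth_mm, d_min=500.0, d_max=900.0, grid=(6, 8)):
    """Downsample a raw Kinect depth frame (in mm) to a small normalised
    matrix in [0, 1], plus a simple per-cell 'roughness' index.
    d_min/d_max define a hypothetical working range above the sand."""
    h, w = depth_mm.shape
    gh, gw = grid
    # Crop so the frame divides evenly into grid cells
    depth = depth_mm[: h - h % gh, : w - w % gw].astype(float)
    cells = depth.reshape(gh, depth.shape[0] // gh, gw, depth.shape[1] // gw)
    means = cells.mean(axis=(1, 3))
    # Normalise: a nearer (higher) sand surface gives a larger value
    matrix = np.clip((d_max - means) / (d_max - d_min), 0.0, 1.0)
    # Roughness: depth variation within each cell, normalised to the range
    roughness = cells.std(axis=(1, 3)) / (d_max - d_min)
    return matrix, roughness
```

A flat sand surface would yield a constant matrix and near-zero roughness, while sculpted ridges raise both the local matrix values and the roughness indices.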

The two prototypes are designed to exemplify ways this system can use physical shape/texture to create sound.

  1. Wavetable Oscillator: the values in the depth-matrix are scanned through and selected to construct a periodic wave. As the matrix is 2-D, it is scanned both horizontally and vertically, at rates that determine the pitch frequency; these rates can also be set to differ slightly based on the roughness indices, creating various interference patterns.
  2. Additive Synthesis (see image below): this involves combining individual sine waves of various frequencies (called ‘partials’ — the lowest of which is the pitch frequency, and the others ‘overtones’) to create sounds of differing timbre, or sonic quality. A key part of the timbre is also how the loudness of each partial varies over time — its ‘amplitude envelope’. This second prototype splits the depth-matrix into six horizontal bands, each of which determines the amplitude envelope of a partial. The particular set of partials (i.e. which other frequencies accompany the pitch frequency) can be set manually or derived from the roughness indices of the depth-matrix; the mapping in this latter case draws on findings from the survey conducted for the project, which suggested that users with a more technical understanding of sound generally associate rougher textures with higher and/or non-harmonically-related partial frequencies.
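The scanning scheme of the first prototype can be sketched in Python as follows. The real implementation runs in ‘Max’; the fixed detune factor and the simple phase-accumulator table lookup here are illustrative assumptions standing in for the roughness-driven rate offsets described above:

```python
import numpy as np

def wavetable_from_matrix(matrix):
    """Build a single-cycle wavetable by scanning the depth matrix
    row by row, recentred from [0, 1] to [-1, 1]."""
    return 2.0 * matrix.flatten(order="C") - 1.0

def render_wavetable(matrix, pitch_hz=110.0, detune=1.003, sr=44100, dur=1.0):
    """Mix a horizontal and a vertical scan of the matrix, the vertical
    one read at a slightly different rate so the two readings drift
    against each other and produce interference patterns."""
    t = np.arange(int(sr * dur))
    h_table = wavetable_from_matrix(matrix)      # horizontal scan
    v_table = wavetable_from_matrix(matrix.T)    # vertical scan

    def osc(table, freq):
        phase = (t * freq / sr) % 1.0            # phase accumulator in [0, 1)
        idx = (phase * len(table)).astype(int)   # nearest-sample table lookup
        return table[idx]

    return 0.5 * (osc(h_table, pitch_hz) + osc(v_table, pitch_hz * detune))
```

Reshaping the sand changes the wavetable itself, so the harmonic content of the tone follows the sculpted surface directly.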
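The band-to-envelope mapping of the second prototype can likewise be sketched in Python. The harmonic partial ratios, the linear breakpoint interpolation and the fixed duration below are illustrative assumptions (the prototype can instead derive the partial set from the roughness indices, as noted above):

```python
import numpy as np

def render_additive(matrix, f0=220.0, ratios=(1, 2, 3, 4, 5, 6),
                    sr=44100, dur=2.0):
    """Map six horizontal bands of the depth matrix to the amplitude
    envelopes of six sine-wave partials at f0 * ratio."""
    assert matrix.shape[0] % len(ratios) == 0
    band_h = matrix.shape[0] // len(ratios)
    n = int(sr * dur)
    t = np.arange(n) / sr
    out = np.zeros(n)
    for i, ratio in enumerate(ratios):
        # Average each band down to one row of envelope breakpoints,
        # then interpolate those breakpoints across the note's duration
        band = matrix[i * band_h:(i + 1) * band_h].mean(axis=0)
        env = np.interp(np.linspace(0, len(band) - 1, n),
                        np.arange(len(band)), band)
        out += env * np.sin(2 * np.pi * f0 * ratio * t)
    return out / len(ratios)  # keep the mix within [-1, 1]
```

Sculpting more material into a given band thus raises that partial’s loudness over the corresponding portion of the note.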

Other prototypes were created in the development process — highlights are shown at the end of the Demo Video. Alongside prototyping, research was also key. Influential elements from research into shape- and texture-sound mapping, existing tangible and/or digital music interfaces, and how different users understand texture and sound are outlined below.

