SoundFace - technicals


At first glance, SoundFace is just a table. Look closer, though, and it's much more than that. Let's take a closer look…


In order to visualise the graphics you need three elements:

We chose this model because it can render very large images at very short distances.

The opacity provided by the frosting lets the user see, on the plexiglass sheet, what the engine underneath the surface projects.



Now, let’s consider the infra-red light spotlights.

We placed four of them inside the table structure so that the infra-red light properly illuminates the objects on the surface.


The infra-red light, after bouncing off the objects on the surface, is picked up by the camera.

The camera is a modified PS3 Eye. We removed its IR filter and replaced it with a filter that blocks visible light. Thus, only infra-red light can be detected by the camera.

We also changed the PS3 Eye lens, fitting it with a wide-angle one. The new lens allows peripheral vision, so we can reduce the distance between the plexiglass sheet and the camera.


Infra-red light domain: reception of the image describing the surface status.
Visible light domain: projection of a graphic interface matching the surface status.

For more details about infra-red lighting, take a look at this site:

To confine the infra-red illumination within the table structure, we use four blackout fabric sheets (one per side). This way, external visible light won’t interfere with the lighting and sensing inside the table.

As a precaution, we mounted two fans to keep a proper airflow inside the table: one pushes hot air out and the other pulls fresh air in.

Before settling on the ‘4 IR spotlights + camera’ solution, we experimented with other lighting-and-sensing setups. First, we tried a surveillance IR camera with an integrated IR spotlight circuit (used in Prototype #2). Then we tried building a dedicated circuit around a 555 timer IC, driving some IR LEDs as a pulsating IR spotlight.
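For reference, the pulse rate of a 555 in astable mode follows a standard formula; the component values in this little sketch are hypothetical, not the ones we actually tried:

```python
# Pulsating IR spotlight idea: a 555 timer in astable mode switches the LEDs
# on and off at a frequency set by two resistors and a capacitor:
#   f = 1.44 / ((R1 + 2*R2) * C)
# Component values below are hypothetical, for illustration only.
R1 = 1_000   # ohms
R2 = 4_700   # ohms
C = 100e-9   # farads (100 nF)

f = 1.44 / ((R1 + 2 * R2) * C)
print(f"pulse frequency: {f:.0f} Hz")  # about 1385 Hz with these values
```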


Due to many problems arising from these setups, we finally decided to implement the current solution.


The software structure relies on a client-server architecture.

By “client” we mean an application requesting a service; by “server”, an application that can supply it. Basically: the client asks the server for a service; the server identifies the client and supplies the service; finally, the client confirms to the server that the service was received.
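The exchange above can be sketched in a few lines of Python. This is a toy example, not SoundFace code: the “service” here is simply upper-casing whatever the client sends, and both ends run in the same process.

```python
import socket
import threading

# Toy client-server exchange: the server upper-cases the client's request.
HOST = "127.0.0.1"

# Server side: bind and listen before the client connects.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind((HOST, 0))              # port 0: let the OS pick a free port
PORT = srv.getsockname()[1]
srv.listen(1)

def serve_one():
    conn, addr = srv.accept()          # the server identifies the client
    with conn:
        request = conn.recv(1024)      # the client's request...
        conn.sendall(request.upper())  # ...and the supplied service

threading.Thread(target=serve_one, daemon=True).start()

# Client side: ask for the service and receive the reply.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect((HOST, PORT))
cli.sendall(b"hello server")
reply = cli.recv(1024)
cli.close()
srv.close()

print(reply.decode())  # HELLO SERVER
```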

client & server

There are two main communication protocols between client and server. They both fulfil specific tasks:

Here’s a table showing some everyday activities in which the two protocols are involved (source: Wikipedia):

Application                  Application-layer protocol   Transport-layer protocol
Access to a remote terminal  telnet                       TCP
File transfer                FTP                          TCP
Audio/video streaming        RTSP/RTP                     TCP (commands) + UDP (stream)
Remote file server           NFS                          typically UDP
VoIP                         SIP, H.323, others           typically UDP
Network management           SNMP                         typically UDP
Routing protocol             RIP                          typically UDP
Name resolution              DNS                          typically UDP

TCP and UDP regulate the data transport; other protocols specify the format and type of the data to be sent and received. Among the others, we focused on these two:

In more detail, here are the TUIO v1.1 specs:


This is the structure of a TUIO message: [src] / [alive] / [set] / [fseq]

in detail:

The TUIO protocol was developed by M. Kaltenbrunner, T. Bovermann, R. Bencina, and E. Costanza; it grew out of the Reactable project at Universitat Pompeu Fabra (Barcelona) and is mainly implemented in fiducial-marker and computer-vision interactive applications.

The TUIO protocol is at the core of reacTIVision, a server application that takes care of creating and dispatching properly formatted messages to interactive applications.
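To make the [src] / [alive] / [set] / [fseq] structure more concrete, here is a minimal sketch of how a TUIO “alive” message is encoded on the wire. TUIO rides on OSC (OpenSound Control), and reacTIVision sends it over UDP to port 3333 by default. This is not reacTIVision code, and a real client would use an OSC/TUIO library rather than packing bytes by hand:

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC requires."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args) -> bytes:
    """Encode a single OSC message (strings, int32 and float32 only)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode())
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        elif isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# A TUIO 1.1 "alive" message for the 2D object profile: it lists the
# session IDs (here 1 and 2) of the objects currently on the surface.
alive = osc_message("/tuio/2Dobj", "alive", 1, 2)

# reacTIVision dispatches such messages over UDP, port 3333 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(alive, ("127.0.0.1", 3333))
sock.close()
```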


It’s time to dive deeper into fiducial markers. Fiducial markers are symbols/images that a computer-vision system can recognise easily and unambiguously.

A brief history of fiducial evolution:

[Figure: ARToolKit fiducials]

[Figure: d-touch fiducials]


Let’s now investigate the physical structures that transfer data between a server and a client.

Here are some examples:


The latter is our case! In our project, reacTIVision, acting as the server, dispatches TUIO messages to our custom application, which is the client.


We use reacTIVision on the server side, so we focused on developing the client. We wanted the client’s main task to be turning TUIO messages into images, giving the user visual feedback.

We could choose among several tools to accomplish this; below we’ll go through the pros and cons of each to explain how we reached our final choice. Before doing that, one piece of the implementation is still missing: we wanted the visual feedback to match the sound production. Instead of using pre-recorded samples, we experimented with procedural audio: every sound is synthesised in real time by the client. To examine procedural audio in depth, see Andy Farnell’s site.
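As a tiny illustration of the procedural-audio idea (not the actual SoundFace code), here each block of samples is computed on the fly from synthesis parameters, which the client could change in response to TUIO events:

```python
import math

# Procedural audio sketch: samples are synthesised block by block, on demand,
# instead of being read from a pre-recorded file.
SAMPLE_RATE = 44100  # samples per second

def sine_block(freq, num_samples, phase=0.0, amp=0.5):
    """Synthesize one block of a sine tone; returns (samples, next_phase)."""
    step = 2 * math.pi * freq / SAMPLE_RATE
    samples = [amp * math.sin(phase + step * n) for n in range(num_samples)]
    return samples, phase + step * num_samples

# When an object on the table changes a parameter (say, rotation -> pitch),
# the next block simply uses the new frequency: nothing is pre-recorded.
block1, phase = sine_block(freq=440.0, num_samples=64)
block2, _ = sine_block(freq=660.0, num_samples=64, phase=phase)
```

Passing the phase from one block to the next keeps the waveform continuous across parameter changes, which avoids audible clicks.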


Programming languages for graphics
Processing: both a language and an IDE, developed at MIT by Ben Fry and Casey Reas. It's very adaptable and easy to learn; however, if you need low-level control, it's probably not the tool for you.
OpenFrameworks: a set of utility libraries that combines Processing's ease of use with the completeness of C/C++. Just like Processing, it's a community-developed project available for the main OSes (Linux, OSX, Windows, ...).
Programming languages for sound
Pure Data: a programmable DSP environment developed by Miller Puckette and maintained by the community. You can program Pure Data directly inside its IDE using graphical tools, or you can embed it inside your C program.
SuperCollider: a scripting language used for real-time sound generation and manipulation. Its internal architecture is based on a client-server structure.

We opted for the OpenFrameworks + PureData + TUIO setup.

To link these three tools together, we used ofAddons. An ofAddon is a plugin that embeds extra functionality into the OpenFrameworks core; this way we could add new features to our code.

More precisely, we used:


Here’s the Github repository where you can find the source code of the project.

To go further

Kaltenbrunner M., Bovermann T., Bencina R., Costanza E., "TUIO: A Protocol for Table-Top Tangible User Interfaces", Proceedings of the 6th International Workshop on Gesture in Human-Computer Interaction and Simulation (GW 2005), Vannes, France, 2005. link
Kaltenbrunner M., Bencina R., "reacTIVision: A Computer-Vision Framework for Table-Based Tangible Interaction", Proceedings of the First International Conference on Tangible and Embedded Interaction (TEI07), Baton Rouge, Louisiana, 2007. link
Wright M., Freed A., Momeni A., "OpenSound Control: State of the Art 2003", Proceedings of the 3rd Conference on New Interfaces for Musical Expression (NIME 03), Montreal, Canada, 2003. link
Kaltenbrunner M., "reacTIVision and TUIO: A Tangible Tabletop Toolkit", Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS2009), Banff, Canada. link
Bencina R., Kaltenbrunner M., "The Design and Evolution of Fiducials for the reacTIVision System", Proceedings of the 3rd International Conference on Generative Systems in the Electronic Arts (3rd Iteration 2005), Melbourne, Australia. link



If you found this article useful and you liked it, please leave a comment below: let us know what you think about it, we'd really appreciate it. Thank you very much and, as always, stay tuned for more to come!