Faces2Voices (offline version)

interactive installation, facial recognition, AI-generated sound

A collaboration with Nikita Prudnikov, AI expert and musician
Faces2Voices is an interactive installation that uses face recognition technology and AI-synthesized sound to create a generative multichannel sound composition: a choir that evolves over time with the contributions of the people who take part in the project. The project is based on a study in which scientists used a neural network to generate an image of a person's face from a sample of their speech. The artists reverse this process, using AI to generate imaginary human voices from the output of facial recognition algorithms.

In Faces2Voices, the artists take a critical approach to AI-based technologies, using procedural rhetoric to explore topics such as privacy, data scraping, and algorithmic regulation.

Visitors to the show have two options. They can listen to the multichannel sound installation, or they can contribute to the project: the AI recognises their face, synthesizes an imaginary voice, and adds it to the chorus.
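The interactive loop described above (a face is recognized, an imaginary voice is synthesized from it, and the voice joins the chorus) could be sketched roughly as follows. Everything here is an illustrative assumption, not the artists' actual implementation: the function names, the sample rate, and the mapping from a face embedding to voice parameters are all invented for the sketch, and a simple vibrato sine wave stands in for the AI voice synthesis.

```python
import numpy as np

SAMPLE_RATE = 16000  # Hz (assumed for this sketch)

def face_to_voice_params(embedding: np.ndarray) -> dict:
    """Deterministically map a face embedding to synthesis parameters.

    The mapping is arbitrary; the point is that the same face always
    yields the same imaginary voice.
    """
    pitch = 110.0 + 220.0 * (float(embedding.mean()) % 1.0)     # fundamental, Hz
    vibrato = 2.0 + 6.0 * (abs(float(embedding.std())) % 1.0)   # vibrato rate, Hz
    return {"pitch": pitch, "vibrato": vibrato}

def synthesize_voice(params: dict, seconds: float = 2.0) -> np.ndarray:
    """Generate a simple vibrato sine 'voice' as a stand-in for AI synthesis."""
    t = np.linspace(0.0, seconds, int(SAMPLE_RATE * seconds), endpoint=False)
    phase = (2 * np.pi * params["pitch"] * t
             + 0.5 * np.sin(2 * np.pi * params["vibrato"] * t))
    return np.sin(phase).astype(np.float32)

chorus: list[np.ndarray] = []

def add_visitor(embedding: np.ndarray) -> None:
    """One recognized face contributes one new imaginary voice to the chorus."""
    chorus.append(synthesize_voice(face_to_voice_params(embedding)))

# Example: a random 128-dim vector standing in for a face recognizer's output.
add_visitor(np.random.default_rng(0).normal(size=128))
```

In a real installation the embedding would come from a face recognition model and the voice from a generative audio model; the structure, one new voice per recognized visitor appended to a growing multichannel mix, stays the same.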

We reflected on Speech2Face, the study in which researchers trained a neural network to reconstruct an image of a person's face from a short recording of their speech.