Google is trying to recreate the phenomenon of synesthesia using artificial intelligence

A new Google project called "Play a Kandinsky", run jointly with the Georges Pompidou National Center for Art and Culture in Paris, aims to use artificial intelligence to recreate the complex phenomenon of synesthesia. Put simply, synesthesia is a condition in which stimulation of one sense produces sensations in another: a visual image, for example, begins to "sound". The artist Wassily Kandinsky is considered one of the most important visionaries in this field; his work is built on the combination of image and sound.

Only a small number of people experience synesthesia first-hand, so teaching a neural network to reproduce it is in many ways an experiment. There are no established standards or rules, and no obvious parallels or patterns to draw on, yet that is precisely what the new neural network is expected to find. Google's Transformer architecture was taken as the basis, and musician Antoine Bertin and NSDOS took part in training it.

The neural network analyzes Kandinsky's original paintings, such as the 1925 canvas "Yellow, Red, Blue", which the artist painted under the influence of his synesthesia. It then tries to assign sounds to individual elements so that the viewer becomes a listener and, as the artist intended, hears music in the interweaving of lines and shades of the picture. To Kandinsky, red sounded like a violin, while yellow echoed the blare of a trumpet; what other viewers see and hear, however, is left to the AI.
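
To make the underlying idea concrete, here is a minimal sketch of a color-to-instrument mapping in the spirit of Kandinsky's associations. The color thresholds, the instrument table, and the region format are illustrative assumptions for demonstration only, not a description of Google's actual pipeline.

```python
# Toy sketch: map the dominant color of an image region to an instrument
# timbre, loosely following Kandinsky's associations (red -> violin,
# yellow -> trumpet). All values here are assumptions for illustration.

from typing import List, Tuple

RGB = Tuple[int, int, int]

# Hypothetical reference colors and their instruments.
COLOR_TO_INSTRUMENT = [
    ((255, 0, 0), "violin"),     # red
    ((255, 255, 0), "trumpet"),  # yellow
    ((0, 0, 255), "cello"),      # blue (assumed association)
]

def nearest_instrument(pixel: RGB) -> str:
    """Pick the instrument whose reference color is closest to the pixel."""
    def dist(a: RGB, b: RGB) -> int:
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, instrument = min(COLOR_TO_INSTRUMENT, key=lambda item: dist(item[0], pixel))
    return instrument

def region_to_instrument(region: List[RGB]) -> str:
    """Average the region's color, then map the average to an instrument."""
    n = len(region)
    avg = tuple(sum(channel) // n for channel in zip(*region))
    return nearest_instrument(avg)

if __name__ == "__main__":
    # A toy "painting" split into two regions: one mostly red, one mostly yellow.
    red_patch = [(250, 20, 10), (240, 30, 25), (255, 5, 0)]
    yellow_patch = [(250, 240, 10), (255, 250, 0), (245, 235, 20)]
    for name, patch in [("red patch", red_patch), ("yellow patch", yellow_patch)]:
        print(f"{name}: {region_to_instrument(patch)}")
```

The real project maps far richer features of the painting than average color, but the sketch captures the basic gesture: visual elements are translated into sonic ones according to a learned or hand-chosen correspondence.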