Bernstein Center for Computational Neuroscience Tübingen


What you say is what you see.

September 15, 2011 - Cluster D, Project D1

New publication by Cesare Parise in Experimental Brain Research.

How did our lexicon develop? What is the link between the sound of a word and its meaning? Traditionally, scientists claimed that such a link is purely arbitrary and based exclusively on social convention. New research from the University of Trento, the University of Oxford, and the Max Planck Institute for Biological Cybernetics is now dramatically challenging this view. The findings have been published in Experimental Brain Research.

“There are natural constraints between the meaning of a word and its sound,” claims Cesare Parise of the University of Oxford and the Max Planck Institute. “That is why most people throughout the world would immediately agree that something called mal simply can’t be smaller than something called mil.”

Cesare Parise and Francesco Pavani set out to examine whether human vocalizations are automatically affected by what the speaker is looking at.

They sat volunteers in front of a computer screen and asked them to vocalize the letter “a” whenever a figure appeared on the screen. The figures varied in size, shape, and lightness.

An analysis of the volunteers’ vocalizations revealed a surprising set of regularities: participants systematically modulated their vocalizations depending on what they were looking at! In particular, they were louder in response to bright than to dark figures, and louder in response to spiky than to rounded figures. Moreover, their vocalizations were sharper for spiky than for rounded figures.
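In acoustic terms, “loudness” and “sharpness” correspond to simple, measurable properties of a recorded signal. As a purely illustrative sketch, and not the authors’ actual analysis pipeline, the Python snippet below computes two such measures from a hypothetical mono WAV recording: root-mean-square amplitude as a proxy for loudness, and the spectral centroid, an amplitude-weighted mean frequency that rises as a sound gets “sharper”.

```python
# Illustrative sketch only: two simple acoustic measures of a vocalization.
# Assumes a mono recording; the file name "vocalization_a.wav" is hypothetical.
import numpy as np
from scipy.io import wavfile

def loudness_rms(samples: np.ndarray) -> float:
    """Root-mean-square amplitude: a simple proxy for vocal loudness."""
    samples = samples.astype(np.float64)
    return float(np.sqrt(np.mean(samples ** 2)))

def spectral_centroid(samples: np.ndarray, rate: int) -> float:
    """Amplitude-weighted mean frequency: higher values sound 'sharper'."""
    spectrum = np.abs(np.fft.rfft(samples.astype(np.float64)))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

rate, samples = wavfile.read("vocalization_a.wav")
print(f"RMS loudness:      {loudness_rms(samples):.1f}")
print(f"Spectral centroid: {spectral_centroid(samples, rate):.1f} Hz")
```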

“These results are amazing because the volunteers were explicitly instructed to vocalize a meaningless letter in the most natural way, without worrying about what was on the screen,” says Parise. “Imagine what would happen if they actually wished to communicate what they were looking at!”

This research opens new perspectives on the development of oral language, and raises the intriguing question of whether there might exist not only a universal grammar common to all languages, but also some universal aspects of the lexicon. This may explain why, for example, Japanese speakers can tell, at better than chance levels, whether a Native American word they have never heard before is the name of a fish or a bird.

“It has long been known that people tend to pair meaningless heard words with specific visual shapes, as in the famous example in which the word takete is more likely to describe a spiky object than the word maluma,” says Francesco Pavani of the University of Trento. “The surprising and novel aspect of our research, however, is that even a spontaneous and completely arbitrary vocalization changes according to the shape we are looking at.”

In the future, it might even be possible to use the same technique to investigate whether other animals automatically modulate their vocalizations in a similar fashion.

The potential applications of these results are manifold. In a clinical setting, the natural mappings between sound and meaning highlighted by Parise and Pavani could be exploited to develop more effective treatments for people with developmental or acquired speech disorders. Moreover, the team believes that knowing the right sound for a given concept might be a powerful marketing tool for finding the best-fitting name for new products.

Publication:
Parise C & Pavani F (In press) Evidence of sound symbolism in simple vocalizations. Experimental Brain Research.

Contacts:
Cesare Parise: cesare.parise(AT)tuebingen.mpg.de
Francesco Pavani: francesco.pavani(AT)unitn.it


Sponsored by the Federal Ministry of Education and Research