What could this mean? It suggests that native speakers learn to disregard visual information in preference for audio information. It also suggests that training CI users might help them recognize the visual cues of tonal languages: if you like, to lip-read the tones.
Improvements in CIs are clearly needed, and scientists are busy trying to improve how the devices convey changes in tone, or pitch. Xin Luo of Purdue University in West Lafayette, Indiana, in collaboration with researchers from the House Research Institute, a hearing research centre in Los Angeles, has figured out how to make CIs produce pitch changes that better reflect the smooth variations of prosody.
To understand how, we need to know a little about how the cochlea senses pitch, and how CIs try to replicate this. The cochlea contains a coiled membrane, which is stimulated in different regions by different sound frequencies – low at one end, high at the other, rather like a keyboard. A CI creates a crude approximation of this continuous pitch-sensing device, using a few electrodes (typically 16–22) to excite different nerve endings and producing a small set of pitch steps instead of the normal smooth pitch slope. Luo and colleagues have devised a way of sweeping the signal from one electrode to the next such that pitch changes seem gradual instead of jumpy.
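The idea of sweeping a signal between neighbouring electrodes can be sketched in a few lines of code. This is only an illustration of the general interpolation principle, not Luo's actual (and more sophisticated) algorithm: the function name, the electrode count, and the characteristic frequencies assigned to each contact are all assumptions made for the example.

```python
# Illustrative sketch of sweeping stimulation between two neighbouring
# CI electrodes. All names and numbers here are assumptions for the
# example, not the actual implant algorithm.

def steered_pitch(target_hz, electrode_hz):
    """Split stimulation between the two electrodes whose characteristic
    frequencies bracket target_hz, returning (electrode_index, weight)
    pairs. electrode_hz lists each contact's frequency, low to high,
    rather like keys on a keyboard."""
    if target_hz <= electrode_hz[0]:
        return [(0, 1.0)]
    if target_hz >= electrode_hz[-1]:
        return [(len(electrode_hz) - 1, 1.0)]
    for i in range(len(electrode_hz) - 1):
        lo, hi = electrode_hz[i], electrode_hz[i + 1]
        if lo <= target_hz <= hi:
            # Shift current smoothly from the lower to the upper contact
            # as the target frequency rises between them.
            w = (target_hz - lo) / (hi - lo)
            return [(i, 1.0 - w), (i + 1, w)]

# A hypothetical 16-electrode array spanning speech frequencies.
electrodes = [250 * 1.25 ** i for i in range(16)]
print(steered_pitch(300, electrodes))
```

As the target frequency glides from one contact's frequency to the next, the weights change gradually, so the perceived pitch can move in a smooth slope rather than jumping between the 16-odd fixed steps.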
The cochlea can also identify pitches by, in effect, “timing” successive acoustic oscillations to figure out the frequency. CIs can simulate this method of pitch discrimination too, but only for frequencies up to about 300 Hertz, the upper limit of a bass singing voice. Luo and colleagues say that a judicious combination of these two pitch-sensing methods, enabled by signal-processing circuits in the implant, could one day improve pitch perception for users. That may at last allow them to capture more of the emotion-laden prosody that exists within every sentence we speak.
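The timing-based method can be sketched simply: measure the interval between successive oscillation peaks and invert it to get a frequency. The 300 Hz cut-off and the function name below are illustrative assumptions, not a description of any real implant's signal processing.

```python
# Illustrative sketch (not a real CI algorithm): estimating pitch from
# the timing of successive oscillation peaks, with the roughly 300 Hz
# ceiling up to which implants can convey pitch this way.

TEMPORAL_LIMIT_HZ = 300.0  # assumed upper bound for timing-based pitch

def temporal_pitch(peak_times_s):
    """Estimate frequency from the average interval between successive
    peaks (in seconds); return None above the timing-coding limit."""
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    if not intervals:
        return None
    freq = 1.0 / (sum(intervals) / len(intervals))
    return freq if freq <= TEMPORAL_LIMIT_HZ else None

# Peaks 10 ms apart give roughly 100 Hz, within range;
# peaks 2 ms apart imply roughly 500 Hz, beyond the limit.
print(temporal_pitch([0.00, 0.01, 0.02, 0.03]))
print(temporal_pitch([0.000, 0.002, 0.004]))
```

Below the limit, this timing estimate could refine the coarse place-based pitch steps of the electrode array; above it, the implant would have to rely on place coding alone.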