Fundamentals of Neuroscience/Hearing
Goals
- To learn how auditory information becomes encoded into neural signals
- To learn how those signals are processed and interpreted
The Ear
The ultimate task of the ear is to translate the vibration of air, which is all sound is, into nerve impulses. This imposing task requires three distinct intermediate steps: first, the vibration of air must be converted into the vibration of fluid; second, the mechanical vibration of that fluid must be detected by nerve cells; and lastly, the results of that detection must be sent deeper into the brain for processing.
For the first step, vibrating air is funneled through the ear canal to the eardrum (also known as the tympanic membrane), causing the eardrum to vibrate in turn. The eardrum is connected to a chain of small bones called the ossicles, which, because of their attachment to the eardrum, move in proportion to the original air vibrations. The opposite end of the ossicles is attached to another membrane belonging to a spiraled, fluid-filled structure called the cochlea. Hence the motion of the ossicles, driven by the eardrum's response to outside air, becomes fluid vibrations in the cochlea, completing the first stage.
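One reason this eardrum-to-ossicles-to-cochlea chain matters is impedance matching: the eardrum's large area focuses sound pressure onto the much smaller oval window, and the ossicular lever adds further mechanical advantage. The sketch below estimates the resulting pressure gain using commonly cited textbook values; the exact areas and lever ratio vary by source and are assumptions here, not measurements from this text.

```python
import math

# Approximate middle-ear parameters (textbook values; exact figures vary by source)
TYMPANIC_AREA_MM2 = 55.0    # effective area of the eardrum
OVAL_WINDOW_AREA_MM2 = 3.2  # area of the oval window membrane
OSSICLE_LEVER_RATIO = 1.3   # mechanical advantage of the ossicular lever

def middle_ear_pressure_gain():
    """Pressure amplification from eardrum to cochlear fluid."""
    area_ratio = TYMPANIC_AREA_MM2 / OVAL_WINDOW_AREA_MM2
    return area_ratio * OSSICLE_LEVER_RATIO

gain = middle_ear_pressure_gain()
gain_db = 20 * math.log10(gain)
print(f"pressure gain: ~{gain:.0f}x ({gain_db:.0f} dB)")  # roughly 22x, about 27 dB
```

Without this amplification, most sound energy would simply reflect off the fluid boundary instead of entering the cochlea.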
Nestled within the cochlea are sensory hair cells, which hold the key to the next two stages of the process. Pressure waves in the cochlear fluid deflect the hair-like stereocilia that protrude from these cells. Hair cells are mechanoreceptors, meaning they convert mechanical stimuli into nerve impulses; the deflection of the stereocilia acts like a tactile stimulus, so the hair cells turn the motion of cochlear fluid into electrical signals. These signals then leave the ear via the auditory nerve.
Auditory Processing
Before the signals from the ears reach any cortical areas, they first pass through lower-level brain areas for basic processing. Areas in the brainstem determine the location of a sound by comparing the slightly different inputs the two ears receive, and perform other rough preprocessing similar to what visual signals undergo before reaching their respective cortical regions.
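One of the "slightly different inputs" the brainstem compares is arrival time: a sound from one side reaches the nearer ear a fraction of a millisecond before the farther one. A common geometric approximation of this interaural time difference (Woodworth's spherical-head model) is sketched below; the head radius is an assumed average value, not a figure from this text.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature
HEAD_RADIUS = 0.0875    # m; assumed average adult head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth approximation of the interaural time difference (ITD).

    azimuth_deg: sound direction, 0 = straight ahead, 90 = directly to one side.
    Returns the arrival-time difference between the two ears, in seconds.
    """
    theta = math.radians(azimuth_deg)
    # Path difference = r*(theta + sin(theta)) for a sphere; divide by sound speed.
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 90):
    print(f"{az:>2} deg -> {interaural_time_difference(az) * 1e6:.0f} microseconds")
```

Even the maximum difference, for a sound directly to one side, is only about two thirds of a millisecond, which is why this comparison happens in specialized, fast brainstem circuits.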
The primary auditory cortex is located along the inner portion of the temporal lobes and, along with the adjacent secondary auditory cortex, has the task of extracting meaning from sound signals. One important quality of a sound is its frequency, perceived as pitch. The brain's primary method for determining frequency is to identify which region of the cochlea a signal mainly originated from: the stiff base of the cochlea responds to very high pitches, while the more flexible cochlear apex responds to low pitches.
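This place-based frequency map can be made quantitative. Greenwood's frequency-position function, a standard empirical fit for the human cochlea (not part of this text, and the constants below are Greenwood's published values), assigns a best frequency to each position along the basilar membrane, from the low-pitch apex to the high-pitch base:

```python
# Greenwood (1990) frequency-position function for the human cochlea:
#   f(x) = A * (10 ** (a * x) - k)
# where x is the fractional distance from the apex (0) to the base (1).
A, a, k = 165.4, 2.1, 0.88  # constants from Greenwood's human fit

def place_to_frequency(x):
    """Best frequency (Hz) at fractional cochlear position x (0 = apex, 1 = base)."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> {place_to_frequency(x):,.0f} Hz")
```

The endpoints of this map land near 20 Hz at the apex and roughly 20 kHz at the base, matching the conventional range of human hearing.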
While the auditory cortices are adept at deciphering many of the perceptual qualities of sound, more sophisticated functions like identification and the attribution of meaning require other brain areas. For example, Wernicke's area is vital for comprehending spoken language, while other regions have been implicated in the many attributes that give rise to the experience of music.