What is the function and mechanism of hearing in humans?

Hearing is a vital sense that allows humans to perceive and interact with the world around them. It plays a crucial role in communication, spatial awareness, and overall well-being. The process of hearing involves a complex series of functions and mechanisms that work together to convert sound waves into meaningful information for the brain to interpret. In this article, we will explore the function and mechanism of hearing in humans, including the anatomy of the ear, the process of sound transmission, and how the brain interprets and processes auditory information. Understanding these processes is essential in appreciating the remarkable ability of the human ear and its role in our daily lives.

Schematic diagram of the human ear


Hearing, or auditory perception, is the ability to perceive sound by detecting vibrations (changes in the pressure of the surrounding medium over time) through an organ such as the ear.



Sound may be heard through solid, liquid, or gaseous matter. It is one of the traditional five senses; partial or total inability to hear is called hearing loss.

In humans and other vertebrates, hearing is performed primarily by the auditory system: mechanical waves, known as vibrations, are detected by the ear and transduced into nerve impulses that are perceived by the brain (primarily in the temporal lobe). Like touch, audition requires sensitivity to the movement of molecules in the world outside the organism. Both hearing and touch are types of mechanosensation.


Hearing Mechanism

There are three main components of the human ear: the outer ear, the middle ear, and the inner ear.


Outer Ear

The outer ear includes the pinna, the visible part of the ear, as well as the ear canal which terminates at the eardrum, also called the tympanic membrane. The pinna serves to focus sound waves through the ear canal toward the eardrum. Because of the asymmetrical character of the outer ear of most mammals, sound is filtered differently on its way into the ear depending on what vertical location it is coming from. This gives these animals the ability to localize sound vertically. The eardrum is an airtight membrane, and when sound waves arrive there, they cause it to vibrate following the waveform of the sound.


Middle Ear

The middle ear consists of a small air-filled chamber located medial to the eardrum. Within this chamber are the three smallest bones in the body, known collectively as the ossicles: the malleus, incus, and stapes (sometimes referred to colloquially as the hammer, anvil, and stirrup, respectively). They aid in the transmission of vibrations from the eardrum to the inner ear. The purpose of the ossicles is to overcome the impedance mismatch between air and the cochlear fluid by providing impedance matching; without it, most of the sound energy would simply be reflected at the air-fluid boundary.
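The impedance matching described above can be illustrated with a back-of-the-envelope calculation. The figures used here are commonly cited textbook approximations, not values from this article: an effective eardrum area of about 55 mm², a stapes footplate area of about 3.2 mm², and an ossicular lever ratio of about 1.3.

```python
import math

# Approximate anatomical values (textbook figures, assumptions for this sketch)
eardrum_area_mm2 = 55.0     # effective vibrating area of the tympanic membrane
footplate_area_mm2 = 3.2    # area of the stapes footplate at the oval window
lever_ratio = 1.3           # mechanical advantage of the ossicular lever

# Pressure is force per unit area: funneling the force collected over the
# large eardrum onto the much smaller footplate, combined with the lever,
# multiplies the pressure delivered to the cochlear fluid.
pressure_gain = (eardrum_area_mm2 / footplate_area_mm2) * lever_ratio
gain_db = 20 * math.log10(pressure_gain)

print(f"pressure gain ~{pressure_gain:.1f}x ({gain_db:.0f} dB)")
```

With these values the middle ear provides roughly a 22-fold pressure gain (about 27 dB), which is the right order of magnitude to compensate for the air-to-fluid impedance mismatch.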

Also located in the middle ear are the stapedius and tensor tympani muscles which protect the hearing mechanism through a stiffening reflex. The stapes transmits sound waves to the inner ear through the oval window, a flexible membrane separating the air-filled middle ear from the fluid-filled inner ear. The round window, another flexible membrane, allows for the smooth displacement of the inner ear fluid caused by the entering sound waves.


Inner Ear

The inner ear consists of the cochlea, a spiral-shaped, fluid-filled tube. It is divided lengthwise by the organ of Corti, the main organ of mechanical-to-neural transduction. The organ of Corti rests on the basilar membrane, a structure that vibrates when waves from the middle ear propagate through the cochlear fluid. The basilar membrane is tonotopic: each frequency has a characteristic place of resonance along it. Characteristic frequencies are high at the basal entrance to the cochlea and low at the apex. Basilar membrane motion causes depolarization of the hair cells, specialized auditory receptors located within the organ of Corti. While the hair cells do not themselves produce action potentials, they release neurotransmitter at synapses with the fibers of the auditory nerve, which does produce action potentials. In this way, the patterns of oscillation on the basilar membrane are converted into spatiotemporal patterns of firing that transmit information about the sound to the brainstem.
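The tonotopic place-to-frequency mapping can be sketched with the Greenwood function, a widely used empirical model of the human cochlea (the constants below are Greenwood's published human fit, not values from this article):

```python
def greenwood_frequency(x):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, where x = 0 is the apex and x = 1 is the base.
    Uses Greenwood's human fit: f = A * (10**(a*x) - k)."""
    A, a, k = 165.4, 2.1, 0.88   # published constants for the human cochlea
    return A * (10 ** (a * x) - k)

# Low frequencies resonate at the apex, high frequencies at the base
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: ~{greenwood_frequency(x):.0f} Hz")
```

The endpoints of the model land near 20 Hz at the apex and about 20 kHz at the base, matching the nominal limits of human hearing.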



The sound information from the cochlea travels via the auditory nerve to the cochlear nucleus in the brainstem. From there, the signals are projected to the inferior colliculus in the midbrain tectum. The inferior colliculus integrates auditory input with limited input from other parts of the brain and is involved in subconscious reflexes such as the auditory startle response.

The inferior colliculus in turn projects to the medial geniculate nucleus, a part of the thalamus where sound information is relayed to the primary auditory cortex in the temporal lobe. Sound is believed to first become consciously experienced at the primary auditory cortex. Around the primary auditory cortex lies Wernicke's area, a cortical area involved in interpreting sounds that is necessary for understanding spoken words.

Disturbances (such as stroke or trauma) at any of these levels can cause hearing problems, especially if the disturbance is bilateral. In some instances it can also lead to auditory hallucinations or more complex difficulties in perceiving sound.


Hearing Tests

Hearing can be measured by behavioral tests using an audiometer. Electrophysiological tests of hearing can provide accurate measurements of hearing thresholds even in unconscious subjects. Such tests include the auditory brainstem response (ABR), otoacoustic emissions (OAE), and electrocochleography (ECochG). Technical advances in these tests have allowed hearing screening for infants to become widespread.


Defense Mechanism

The hearing structures of many species have defense mechanisms against injury. For example, the muscles of the middle ear (e.g. the tensor tympani muscle) in many mammals contract reflexively in reaction to loud sounds which may otherwise injure the hearing ability of the organism.


Hearing Loss

There are three main types of hearing loss:

  • Conductive hearing loss
  • Sensorineural hearing loss
  • Mixed hearing loss

There are defined degrees of hearing loss:

  • Mild hearing loss – People with mild hearing loss have difficulty keeping up with conversations, especially in noisy surroundings. The quietest sounds that people with mild hearing loss can hear with their better ear are between 25 and 40 dB HL.
  • Moderate hearing loss – People with moderate hearing loss have difficulty keeping up with conversations when they are not using a hearing aid. On average, the quietest sounds heard by people with moderate hearing loss with their better ear are between 40 and 70 dB HL.
  • Severe hearing loss – People with severe hearing loss depend on powerful hearing aids. However, they often rely on lip-reading even when using their hearing aids. The quietest sounds heard by people with severe hearing loss with their better ear are between 70 and 95 dB HL.
  • Profound hearing loss – People with profound hearing loss are very hard of hearing and rely mostly on lip-reading and sign language. The quietest sounds heard by people with profound hearing loss with their better ear are 95 dB HL or higher.
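The degree boundaries above translate directly into a simple classifier. This is only a sketch based on the dB HL ranges listed; the function name and the "normal" label for thresholds below 25 dB HL are illustrative assumptions:

```python
def degree_of_hearing_loss(threshold_db_hl):
    """Classify hearing loss from the quietest audible level (dB HL)
    in the better ear, using the boundaries given above."""
    if threshold_db_hl < 25:
        return "normal"       # below the mild range (assumed label)
    if threshold_db_hl < 40:
        return "mild"
    if threshold_db_hl < 70:
        return "moderate"
    if threshold_db_hl < 95:
        return "severe"
    return "profound"

print(degree_of_hearing_loss(30))   # mild
print(degree_of_hearing_loss(80))   # severe
```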



Causes of hearing loss include:

  • Heredity
  • Congenital conditions
  • Presbycusis
  • Acquired
    • Noise-induced hearing loss
    • Ototoxic drugs and chemicals
    • Infection



Hearing Protection

Hearing protection is the use of devices designed to prevent noise-induced hearing loss (NIHL), a type of post-lingual hearing impairment. The various means used to prevent hearing loss generally focus on reducing the levels of noise to which people are exposed. One way this is done is through environmental modifications such as acoustic quieting, which may be achieved with as basic a measure as lining a room with curtains, or as complex a measure as employing an anechoic chamber, which absorbs nearly all sound. Another means is the use of devices such as earplugs, which are inserted into the ear canal to block noise, or earmuffs, which cover a person's ears entirely.



Hearing Aids

Hearing aids are electronic devices that amplify sound so that a person with hearing loss can hear it. Although they can substantially improve hearing, uptake of these devices remains low. Typically, people first seek help from a professional such as an audiologist only once they feel their hearing has become severely poor; many are initially reluctant to accept that they are losing their hearing, which negatively affects their attitude toward hearing aids. Familiarity with the devices and consultation with professionals help people feel comfortable about using them.


Hearing Underwater

Hearing thresholds and the ability to localize sound sources are reduced underwater in humans, but not in aquatic animals such as whales, seals, and fishes, which have ears adapted to process waterborne sound.

Some research suggests that underwater hearing in humans occurs mainly through bone conduction, with poor localization. This is related to differences between the speed of sound in water and in air, and to the blocking of the normal air-conducted sound path.
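One reason localization suffers underwater can be sketched numerically: the brain localizes sound partly from the interaural time difference (ITD), and because sound travels roughly four times faster in water than in air, the available time differences shrink accordingly. The speeds and the assumed ear-to-ear distance below are typical textbook values, not figures from this article:

```python
speed_air = 343.0      # m/s, speed of sound in air at ~20 C
speed_water = 1482.0   # m/s, typical speed of sound in water
head_width = 0.18      # m, an assumed ear-to-ear distance

# Maximum interaural time difference: a sound arriving from the side
# travels the full head width farther to reach the far ear.
itd_air = head_width / speed_air * 1e6     # microseconds
itd_water = head_width / speed_water * 1e6

print(f"max ITD in air:   {itd_air:.0f} microseconds")
print(f"max ITD in water: {itd_water:.0f} microseconds")
```

With these assumptions the maximum ITD drops from roughly 525 µs in air to about 120 µs in water, compressing one of the main cues the auditory system uses for localization.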


In Vertebrates


A cat can hear high-frequency sounds up to two octaves higher than a human.


Not all sounds are normally audible to all animals. Each species has a range of normal hearing for both amplitude and frequency. Many animals use sound to communicate with each other, and hearing in these species is particularly important for survival and reproduction. In species that use sound as a primary means of communication, hearing is typically most acute for the range of pitches produced in calls and speech.


Frequency Range

Frequencies capable of being heard by humans are called audio or sonic. The range is typically considered to be between 20 Hz and 20,000 Hz. Frequencies higher than audio are referred to as ultrasonic, while frequencies below audio are referred to as infrasonic. Some bats use ultrasound for echolocation while in flight. Dogs are able to hear ultrasound, which is the principle behind "silent" dog whistles. Snakes sense infrasound through their jaws, and baleen whales, giraffes, dolphins, and elephants use it for communication. Some fish hear more sensitively thanks to a well-developed bony connection between the ear and the swim bladder; this arrangement, which functions as a kind of built-in hearing aid, appears in species such as carp and herring.
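The band definitions above amount to a simple partition of the frequency axis. A minimal sketch, using the nominal 20 Hz and 20,000 Hz limits from the text (the function name and example frequencies are illustrative):

```python
def classify_frequency(hz):
    """Place a frequency relative to the nominal human audible band."""
    if hz < 20:
        return "infrasonic"
    if hz <= 20_000:
        return "audio"
    return "ultrasonic"

print(classify_frequency(5))        # infrasonic (e.g. elephant rumbles)
print(classify_frequency(440))      # audio (concert pitch A)
print(classify_frequency(45_000))   # ultrasonic (e.g. bat echolocation)
```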


In Invertebrates

Vertebrates are not the only group of animals that have hearing. Some insects have hearing organs as well (e.g. the long-horned grasshopper, the lubber grasshopper, and the cicada); they use sound as a form of communication.

Body hairs that vibrate in response to sound waves are widespread among insects. Owing to resonance, certain hairs vibrate most strongly at a specific sound frequency, which depends on the stiffness and length of the hair. This is why some caterpillar species have evolved hairs that resonate with the buzzing of wasps, warning them of the presence of natural enemies. Similarly, male mosquitoes have hairs on their antennae that resonate with the flight sound of females of their own species, enabling them to detect potential mates.

Some insects possess a tympanal organ: an "eardrum" that covers an air-filled chamber, often on the legs. As in vertebrate hearing, the tympanal membrane vibrates in response to sound waves; receptors on its inner surface transduce the oscillations into electrical signals and send them to the brain. Several groups of flying insects that are preyed upon by echolocating bats can perceive ultrasound emissions this way and reflexively practice ultrasound avoidance.



The basilar membrane of the inner ear spreads out different frequencies: high frequencies produce a large vibration at the end near the middle ear (the “base”), and low frequencies a large vibration at the distant end (the “apex”). Thus the ear performs a sort of frequency analysis, roughly similar to a Fourier transform. However, the nerve pulses delivered to the brain contain both rate-versus-place and fine temporal structure information, so the similarity is not strong.
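The frequency analysis described above can be imitated in a few lines of code: a discrete Fourier transform decomposes a composite signal into its component tones, just as the basilar membrane separates them by place. A minimal sketch using a naive DFT (the two test tones and sample rate are arbitrary choices for illustration):

```python
import cmath
import math

fs, n = 8000, 800            # sample rate (Hz) and number of samples
tones = (440.0, 1000.0)      # two component tones, like two places of resonance

# Synthesize the composite signal: a sum of two sine waves
x = [sum(math.sin(2 * math.pi * f * t / fs) for f in tones) for t in range(n)]

def dft_magnitudes(signal):
    """Magnitude of each DFT bin up to the Nyquist frequency (naive O(n^2))."""
    n = len(signal)
    return [abs(sum(s * cmath.exp(-2j * math.pi * k * t / n)
                    for t, s in enumerate(signal)))
            for k in range(n // 2)]

mag = dft_magnitudes(x)

# Bin k corresponds to frequency k * fs / n; the two largest bins recover
# the component tones, analogous to the two points of maximal vibration
# along the basilar membrane.
peaks = sorted(sorted(range(len(mag)), key=lambda k: mag[k])[-2:])
print([k * fs / n for k in peaks])   # [440.0, 1000.0]
```

As the text notes, the analogy is loose: the cochlea's output also carries fine temporal structure in the spike trains, which a magnitude spectrum like this discards.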
