Even in the most anodyne pop song, there’s usually an awful lot going on. There may be more than one instrument playing at any moment, and each is typically playing different notes at different times, perhaps in different rhythms. We have to distinguish what is melody from what is accompaniment. To make matters worse, there are probably words being sung over it all, which we have to string together into what we hope are coherent sentences. We have to take the music apart, to identify each strand, and then weave it back together both melodically, to deduce structure in the sequence of notes, and harmonically, to hear chords and harmonies in notes sounded simultaneously. It sounds like an awesome task, and yet we do it apparently without effort.
This ability is less surprising than it might seem once you recognize that our everyday world is like this too: it is filled with noises, overlapping, masking one another, vying for our attention. That’s why we have been equipped, probably both by evolution and by experience, with the mental tools needed to decode complex sound. We’ve got a similar capacity to decode visual information too: we turn the patches of colored light that hit our eyes into a world of specific objects, textures and shapes. In both cases, our minds seem to use some simple rules of thumb for making a good guess at what it is we’re perceiving—for turning raw sensation into a model of the physical world. Some of the most important of these rules are called the Gestalt principles, and they are ways of grouping together sensory input to deduce patterns and relationships. Music doesn’t just take advantage of these rules; it is generally designed to accommodate them, to make it easier for us to perceive structure and coherence. We can see the Gestalt principles being applied in music centuries before they were explicitly described, for example in the way Johann Sebastian Bach arranged his melodic lines so that we can keep track of several voices at once. They are in many ways the key to our musical brains.