Toward New Heights

Music continues to evolve along with us. What have been the most notable changes in music so far?

Written music and its origins

Before written music, people learned songs through memorization, which of course had its disadvantages – musical knowledge could only be passed on from person to person, face-to-face.

The 9th century saw the first iteration of written music. ‘Neumes’ were marks written on top of lyrics to show how a song’s melody moved. This system was crude, a basic reminder of the details one already knew about the song.

But neumes served their purpose. At the time, only a small portion of the population knew how to read, and musical knowledge was still passed on through oral tradition. Additionally, most songs of the day were sacred chants, which used simple melodies that moved in a stepwise style, employing only a narrow range of pitches.

Over time, music evolved, and notation changed alongside it. Guido d’Arezzo is credited with the next major development in music notation: the musical stave – a set of parallel horizontal lines – which took the guesswork out of determining note pitches. Further changes would refine this notation system, including the addition of rhythmic details, the use of accidentals, and the concept of clefs and key signatures.

The impact of musical notation

Imagine a time in history when there was no musical notation or recorded music. The only way you could hear a song was if it was played in front of you, and the only way to learn it was by listening. Everything was based on memory, and some great songs may have been lost in time simply because they weren’t passed on to someone else. A big part of why we can still listen to the works of great classical composers like Beethoven, Bach, and Mozart is musical notation.

Even with recorded music, musical notation still plays an important role today. It remains a convenient way for composers to document their work without actually recording it, and it allows musicians to play songs they’ve never encountered before. Sight-reading sheet music lets a musician get a feel for a piece down to its finest details. This is because musical notation provides a common language: with it, music can be preserved, learned, and shared by different people playing different instruments.

When the ratios wouldn’t line up

‘Equal temperament’ refers to the widely used tuning system of dividing an octave into twelve equally sized steps. Though imperfect – and a contentious topic in some music circles – equal temperament resolves a handful of problems musicians had grappled with long before Vincenzo Galilei advocated for it in the 16th century.

To understand why equal temperament matters, let’s go back in time to Pythagoras, who translated musical intervals into mathematical ratios of sound frequencies. Pythagorean tuning highlighted the pure, clean sound of the ‘perfect fifth’ interval, but it wasn’t all that perfect. Because it favored the 3:2 ratio of the perfect fifth, other intervals, like the major third, sounded awful.

At the time, musicians skirted around the issue by simply avoiding the major third. But over time, they began experimenting with different tuning systems to find a workaround for the out-of-tune notes. At some point, keyboards with split black keys were even built to reflect the difference in frequency between the notes of an enharmonic pair, such as G♯ and A♭. Had this tuning system prevailed, we’d have 31 notes in an octave instead of the standard 12!
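To see the mismatch in numbers, here’s a minimal Python sketch (purely illustrative, not drawn from any historical source) that stacks twelve pure 3:2 fifths against seven 2:1 octaves, then compares the resulting Pythagorean major third with the pure 5:4 third.

```python
# Minimal sketch: stacking pure ratios never closes the circle of fifths.

# Twelve pure fifths (3:2) should land exactly seven octaves (2:1) higher,
# but they overshoot by the so-called Pythagorean comma.
twelve_fifths = (3 / 2) ** 12          # ~129.746
seven_octaves = 2 ** 7                 # 128
comma = twelve_fifths / seven_octaves
print(f"Pythagorean comma: {comma:.5f} (about 1.4% sharp)")

# The Pythagorean major third is four pure fifths up, folded back into one octave.
pythagorean_third = (3 / 2) ** 4 / 4   # 81/64 = 1.265625
pure_third = 5 / 4                     # 1.25
print(f"Pythagorean third: {pythagorean_third:.4f} vs pure third: {pure_third:.4f}")
```

That roughly 1.4% overshoot has to be hidden somewhere in any tuning built from pure ratios – and wherever it was hidden, an out-of-tune interval appeared.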

Twelve evenly spaced notes

A standard piano octave nowadays uses the same key for G♯ and A♭, thanks to equal temperament. And it’s not just a manageable number of pitches we have to be grateful for. The various tuning systems musicians experimented with before equal temperament all sought to address the issue of ‘wolf tones’ – the undesirable, dissonant tones and intervals instrumentalists avoided.

But a scale that avoided one wolf tone might produce another in its stead. These systems inevitably favored certain keys over others, compromising the overall sound. Regardless of the tuning system used, there was always some combination of notes that composers had to avoid.

With equal temperament, all intervals became equally spaced. This meant that most intervals were no longer acoustically pure, because each had to become slightly narrower or wider to fit the system. Equal temperament sacrificed the pure intervals we would have heard in earlier systems. At the same time, however, composers no longer had to avoid certain intervals like the plague, because wolf tones had been greatly reduced.
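For a concrete picture of the compromise, the short sketch below computes equal-tempered intervals from the uniform semitone ratio of 2^(1/12) and compares them with the pure ratios; the A4 = 440 Hz reference is simply an assumed starting point.

```python
# Minimal sketch: equal temperament spaces every semitone by the same ratio,
# so twelve identical steps close the octave exactly.
SEMITONE = 2 ** (1 / 12)   # ~1.05946

A4 = 440.0                 # assumed reference pitch in Hz
for steps, name in [(4, "major third"), (7, "perfect fifth"), (12, "octave")]:
    ratio = SEMITONE ** steps
    print(f"{name:>13}: ratio {ratio:.4f} -> {A4 * ratio:7.2f} Hz above A4")

# Compare with the pure ratios the ear favors:
print(f"pure fifth 3/2 = {3/2:.4f}  vs tempered fifth = {SEMITONE ** 7:.4f}")
print(f"pure third 5/4 = {5/4:.4f}  vs tempered third = {SEMITONE ** 4:.4f}")
```

The tempered fifth lands within a fraction of a percent of 3:2, and the tempered third sits noticeably wider than 5:4 – but the error is spread evenly across all twelve keys, so no single interval becomes a wolf.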

An easy favorite

In 1709, Bartolomeo Cristofori invented the piano. Originally named gravicembalo col piano e forte (“harpsichord with soft and loud”) before being shortened to “piano,” the instrument would replace the harpsichord and clavichord.

What set the piano apart from its predecessors was its ability to play both soft and loud at the instrumentalist’s will, whereas the other two produced sound at a set volume. The piano used a hammer-and-lever mechanism that struck the corresponding string as softly or as forcefully as the player’s finger hit the key; in this regard, it is considered a percussive instrument. This versatility would pave the way for the use of dynamics in music.

Another unique feature of the piano is its impressive range, seven octaves on average. Access to a wide range of notes on a single instrument meant that composers could write for various instruments using just the piano.

The invention of the piano also impacted hobbyists and beginning musicians. As the piano became a staple in many Western households, it was an easy and popular choice for young children as a first instrument.

The beginnings of music on record

It’s hard to imagine a world without recorded music, but that was the reality until 1877, when Thomas Edison revealed the phonograph to the public. Using a thin sheet of foil, a needle, and a horn, the phonograph recorded sound by etching grooves onto a foil-wrapped cylinder. The cylinder was placed onto a carriage, which moved it forward as the music played. As the phonograph’s needle, or stylus, traced the grooves, their patterns were converted back into vibrations, which exited the phonograph’s horn as sound or music.

The phonograph would change both the ways in which musicians made music and how the rest of the world listened to it. The advent of recorded music meant many things. First, it gave people control over what they listened to. No longer restricted to just live music, they could buy a record and play a song on repeat if they wanted to – “The music you want, whenever you want it,” as the Victor Talking Machine Company claimed in its ads. The ability to listen to music on demand allowed fans to study music in detail, and also to listen to it alone, whereas before, it was solely a social activity.

The burgeoning of an industry

The public quickly embraced the phonograph and the concept of recorded music, and the music industry was happy to oblige this fervent demand. By 1915, Americans spent a whopping $60 million each year on phonographs and records.

As much as the phonograph changed how listeners interacted with music, it also shook the industry. Early records could only fit around three minutes of audio, so musicians adjusted. These days, popular music averages three minutes, a holdover from the days of the phonograph.

As much as people were happy to splurge on records, they also wanted an idea of what they were buying – a reasonable request, and one the industry accommodated with the creation of music genres. Categorizing records into distinct styles helped the industry market its artists effectively and efficiently. In turn, music fans welcomed the idea, forming their identities around their chosen genres.

The phonograph was far from perfect. It accommodated a mere three minutes of audio, and its quality was wanting. But one cannot deny that it was a pioneer in many ways, and that we music fans are better for it.

Soaring through the airwaves

Within two decades of the phonograph’s release, inventor Guglielmo Marconi developed a device that could transmit electromagnetic waves across large distances. In November 1920, the first commercial radio broadcast was made from Pittsburgh, Pennsylvania, announcing the presidential election results in its inaugural show and beating traditional print media to the news. Since then, radio has evolved into a source not just of live news and commentary, but of entertainment for the masses.

Many predicted that radio would kill the phonograph, and for a time, the record industry seemed to think it would. Record labels kept their stars’ music off the air – that is, until the music industry found a way to appease both parties by updating copyright laws and funneling licensing fees from radio stations to copyright owners.

As a free medium readily accessible to most American households, radio provided recording artists with exposure to a wider audience, especially as they were starting out. In time, airplay became an integral part of a record’s success. Billboard would acknowledge this in 1958, when it consolidated airplay data into its Billboard Hot 100 chart metrics.

It’s how you use it

Transformative innovations often draw flak for supposedly “ruining” entire industries, and Auto-Tune has not been exempt. Created by Andy Hildebrand, Auto-Tune compares vocals against a reference to identify where a singer is off-pitch. It then digitally nudges the off-key notes to the correct pitch, resulting in “polished” vocals.
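As a rough illustration of that correction step – a simplified sketch, not Hildebrand’s actual algorithm – the snippet below snaps a detected vocal frequency to the nearest note of the twelve-tone equal-tempered scale; real pitch correction also has to detect the pitch in the first place and glide toward the target at a configurable speed.

```python
import math

# Rough sketch of the pitch-correction idea (not the actual Antares algorithm):
# snap a detected vocal frequency to the nearest equal-tempered semitone.

A4 = 440.0  # assumed reference pitch in Hz

def snap_to_semitone(freq_hz: float) -> float:
    """Return the frequency of the nearest note in 12-tone equal temperament."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    nearest = round(semitones_from_a4)
    return A4 * 2 ** (nearest / 12)

# A singer aiming for C5 (~523.25 Hz) but landing noticeably flat:
sung = 510.0
print(f"sung {sung:.1f} Hz -> corrected to {snap_to_semitone(sung):.2f} Hz")
```

In a real plug-in, the pitch is shifted toward this target gradually; making the shift near-instant is what produces the famously robotic effect.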

The technology has received vitriol for allowing second-class vocalists to “cheat” on records by making artists sound better than they are – at least that’s what critics say. To them, the performance feels fake, and the voice sounds robotic, a product of a machine and not actual talent.

Despite all the hate it gets, Auto-Tune has endured. Cher’s “Believe” put it on producers’ radars in 1998. Most pop music we hear today will have used Auto-Tune in some way, shape, or form, whether we notice it or not.

Some of the best records to have come out in the decades since Auto-Tune launched make use of the effect – Rihanna’s “Diamonds,” Kanye West’s “Heartless,” and Imogen Heap’s “Hide and Seek.” In the hands of a subpar producer, Auto-Tune makes for songs that sound trite and prepackaged. But to the geniuses who understand it, it makes music fresh and exciting.

In our pockets and straight to our ears

The phonograph drew horrified comments about how it allowed individuals to listen to music alone. Imagine the reactions when, in 1891, the French engineer Ernest Mercadier invented the earliest version of headphones. With headphones, not only could we enjoy music solo, we also had “absolute control over our audio-environment, allowing us to privatize our public spaces.”

Besides blocking out distractions from the outside world, headphones enhance our enjoyment of music. They let us hear the finer details and textures of a song, as the sound plays directly into our ears, minimizing any influence from our surroundings.

Indeed, recent technology has dramatically changed the way we consume music. In 1963, Philips developed the cassette tape. In 1979, Sony’s Walkman allowed us to listen to the tapes wherever we went. Not only could we enjoy music in our private worlds, we could now choose where that would be.

Since then, technology has progressed rapidly. Compact discs overtook cassette tapes, only to be replaced by MP3 files. MP3 players – most famously, the iPod – condensed far more songs into a palm-sized format. Nowadays, even those are near-obsolete. Our songs live on our phones now, if not in the cloud.

The Internet and its disruptions

In the early 2000s, the rise of peer-to-peer file-sharing platforms like Napster posed a big threat to record sales as online music piracy grew. The Internet was both a bane and a boon to the music industry, though. Alongside music piracy, information sharing thrived, providing record labels with more avenues to promote their artists to an ever-wider audience. The Internet also provided a space for fans to build a community around their favorite musicians, strengthening their fandoms.

As the early Internet gave way to Web 2.0, social media prevailed, empowering aspiring musicians to share their music through platforms like YouTube and SoundCloud. In response, the music industry kept a close watch, “discovering” and signing stars like Justin Bieber and Billie Eilish.

Beyond new promotional channels and talent pools, widespread access to stable Internet has also made streaming music the norm. Casual fans no longer purchase records. They pay a monthly subscription to Spotify for access to a library of over 80 million songs. As the world evolves, so too shall music. But where exactly it’s headed, the industry will have to play it by ear.
