Music is always changed by the technology of its time. From the gramophone to vinyl records to MP3s, innovation in society has always produced innovation in music. As technology progresses at an exponential rate, we often find ourselves wondering where exactly the future will take us.
Vocaloid
Likely not the first, but certainly one of the most prominent, instances of the blending of music and computers is Vocaloid, a Japanese singing-voice synthesizer. It uses a vast library of sounds recorded by voice actors and singers to synthesize singing. Users of the software input the melody and the lyrics themselves, producing a complete, fully viable song. The voice packages are released under a name and persona, like Hatsune Miku, giving users the ability to effectively crowdsource a digital artist.
Niconico, a Japanese video-sharing platform, became a hotspot for these songs, and before long elaborate music videos were being made to accompany them. They made their way onto YouTube and, with that, into the collective history of the internet. Vocaloid has maintained a large, cult-like following ever since, with Hatsune Miku even appearing as a hologram at international concerts.
Holograms
The use of holograms isn’t limited to Vocaloid. In 2012, Tupac appeared on stage at Coachella for the first time since his death in 1996, with the projected Tupac rapping alongside Dr. Dre and Snoop Dogg, both friends of the real man. The performance made waves across the world. The hologram wasn’t a hologram in the technical sense, but a stage trick in which an image is projected onto an angled, semi-transparent screen (a modern take on the Pepper’s ghost illusion). The result is a 2D “phantom” that makes the person appear, however intangibly, to be standing on stage. Regardless of the technicalities, the Tupac project opened up discussions about the possibility of “reviving” other dead artists. This did indeed come to fruition: Maria Callas, Buddy Holly, and Michael Jackson (to name only a few) had their own holograms projected on stage in the years that followed.

A hologram of Whitney Houston was slated to perform at the 2016 finale of The Voice, but her family withdrew consent after finding the experience overwhelming. Although she did eventually get her own hologram tour, which began on February 25th, 2020, the emotions her family felt are echoed throughout the music community. Many find the idea of posthumous concerts disrespectful at best, while those in charge of them say such shows are simply a glimpse into the future of entertainment.
Artificial Intelligence (AI)
Now, as the world tumbles forward into the future, the next big step seems to be artificial intelligence. Recently, Spotify announced the “Alone with Me” AI experience, a chance for fans to speak with a virtual version of Abel Tesfaye of The Weeknd. Powered by the individual’s listening data, the virtual Abel walks users through their history with his music, such as the first song they listened to and the one they play the most. It isn’t perfect, but it’s a start.
Many artists are beginning to see this period as “the end of human art,” given AI’s stunning ability to learn from and replicate existing artists. It likely won’t be long until AI can make works of its own, independent of the artists it has learned from. With this possibility, however, comes a host of difficult legal questions. Who will own the rights to AI-created music? To what extent does copyright law apply to the music produced?
Right now, legal experts don’t have many answers. If a particular AI is trained solely on one artist’s music and outputs a unique but stylistically similar song, it’s hard to say whether that artist has any right to the earnings it generates. There’s a reason stylistic similarity isn’t typically punished under copyright law: artists inspire each other constantly.
Genres are simply the product of someone trying something new and others being inspired by it. If an AI creates a song that contains significant samples of an artist’s work, that may be another story. For now, the lines are unclear, but the future is not: AI is here to stay.
There are positive outcomes in the relationship between AI and music as well. Thanks to a program called Amper, those in need of stock music no longer have to go looking. The user sets parameters such as genre and tempo and has a simple piece of music generated within minutes. Amper’s music has already been widely used, from commercials to podcasts to YouTube videos. A test run by the company found that users could not differentiate between the AI-generated songs and similar music made by humans. Creators like those behind Amper are now looking toward “personal soundscapes,” music that changes with the listener. Everything from the weather to the speed of the listener’s heartbeat will contribute to the direction the music takes. It could potentially be used to soothe people with OCD and anxiety or to help those with insomnia find rest.
There are a lot of questions about music’s future that don’t yet have answers. Some are a little concerning, but many simply point to untapped potential. From devices that interpret the sounds plants make to fully immersive virtual reality, there’s no doubt that technological innovation is on the edge of something incredible. It might be time to accept that the old and the new no longer have to clash. Even as music evolves, it will always acknowledge its roots and the musicians who create it.