Google has been at the forefront of artificial intelligence (AI) research for years, and its latest system, MusicLM, is another remarkable result of that work. MusicLM can generate music in any genre from nothing more than a text description. However, Google has decided not to release the system publicly yet, citing the ethical issues it raises.
MusicLM is not the first generative AI system for music, but it is the first to create music with "high-fidelity" and compositional complexity. Trained on a dataset of 280,000 hours of music, the system can generate songs that are consistent with a given description. MusicLM can also build on existing melodies, whether they are hummed, sung, played on an instrument, or described in text.
MusicLM can also turn a series of written descriptions into a musical "story" or narrative, generating music that unfolds over time. The system can likewise be directed by a picture and a caption, or asked to produce music "played" by a specific kind of instrument in a particular style. Although MusicLM can technically synthesize vocals, the results are subpar, with issues such as distorted samples.
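MusicLM has no public API, so the sketch below is purely structural rather than an interface to the real system: it shows how a "story" could be expressed as an ordered list of text descriptions, each conditioning one segment of audio, with a placeholder model standing in for the generator. Every name in it (PlaceholderMusicModel, render_story, the sample rate, the example prompts) is an assumption for illustration only.

```python
import numpy as np

SAMPLE_RATE = 16000  # arbitrary rate for this sketch, not MusicLM's

# A "story" is just an ordered list of (text description, duration in seconds).
story = [
    ("calm meditation music with soft pads", 10),
    ("an energetic wake-up jingle", 10),
    ("driving electronic music for a run", 10),
]

class PlaceholderMusicModel:
    """Stand-in for a text-to-music model (MusicLM has no public API).
    Each prompt just produces a sine tone so the script runs end to end."""
    def generate(self, text: str, duration_s: float) -> np.ndarray:
        freq = 220 + 20 * (hash(text) % 10)  # derive a pitch from the prompt text
        t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
        return 0.2 * np.sin(2 * np.pi * freq * t)

def render_story(model, prompts):
    """Generate one clip per description and concatenate them into a single track."""
    return np.concatenate([model.generate(text, secs) for text, secs in prompts])

track = render_story(PlaceholderMusicModel(), story)
print(f"Rendered {track.shape[0] / SAMPLE_RATE:.0f} s of audio")
```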
One of Google's key concerns about MusicLM is that the songs it produces may reproduce copyrighted material from its training data. In an experiment, the researchers found that about one percent of the generated music directly copied the songs on which the system was trained. That figure was significant enough to make the company hesitate to release MusicLM in its current form, since it could lead to copyright issues.
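The article does not describe Google's exact matching procedure, but as a rough, hedged illustration of how such an audit could work in principle, the sketch below fingerprints clips with mean MFCC vectors and counts generated clips whose nearest training clip exceeds a similarity threshold. The file lists, threshold, and helper names are assumptions for illustration, not Google's method.

```python
import numpy as np
import librosa

def clip_embedding(path: str, sr: int = 22050) -> np.ndarray:
    """Summarize a clip as the mean of its MFCC frames (a crude audio fingerprint)."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def memorization_rate(generated_paths, training_paths, threshold=0.99):
    """Fraction of generated clips whose closest training clip exceeds the threshold."""
    train = [clip_embedding(p) for p in training_paths]
    flagged = 0
    for p in generated_paths:
        g = clip_embedding(p)
        if max(cosine(g, t) for t in train) >= threshold:
            flagged += 1
    return flagged / max(len(generated_paths), 1)
```

A real audit would use far more robust matching than mean MFCCs, but the structure is the same: embed, compare against the training set, and report the fraction of near-duplicates.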
These risks must be addressed, and more work is needed to prevent the misappropriation of creative content. AI-generated music raises ethical and legal challenges that the industry must confront as the technology continues to develop, and ensuring that it is used fairly for both composers and users is critical.
This is not the first time AI-generated music has sparked legal controversy. In 2020, Jay-Z's company filed copyright complaints against the YouTube channel Vocal Synthesis after it used AI to produce Jay-Z renditions of songs such as Billy Joel's "We Didn't Start the Fire." The videos were initially taken down, but YouTube later decided the takedown requests were "incomplete" and restored them.
Eric Sunray, currently a legal intern at the Music Publishers Association, has argued in a whitepaper that AI music generators like MusicLM may infringe copyright by constructing "coherent audio tapestries" from the compositions ingested during training.
Although many listeners are impressed by the quality of the AI-generated samples Google has released, the ethical and legal questions surrounding AI-generated music cannot be ignored. It may take time before there is clarity on how such music can be used in a way that is fair to all parties involved, and in the meantime the industry must keep working toward a solution.
In conclusion, MusicLM is a remarkable AI system with the potential to change the music industry as we know it. Before it can be released to the public, however, the ethical and legal issues surrounding AI-generated music must be resolved. As the technology continues to evolve, it is critical that the industry finds a way to use AI-generated music so that everyone benefits.