The Official Student Paper of Riverside Poly High School

Suno AI: The ChatGPT of Music

Nov 4, 2024

Written By: Connor Julian, Staff Writer

ALGORITHM: Generative AI in music production, most notably Suno AI, has raised concerns about creativity and the nature of art itself.

Music and technology have been intertwined since the early days of recording. Their complex relationship has transformed the way we create, produce, and listen to music. From early mechanical instruments to digital audio workstations, each innovation has redefined the boundaries of what’s possible in sound. Today we face a new dilemma: generative AI in music. With its ability to compose, edit, and remix tracks, AI raises many questions about the nature of art, the role of human creativity, and the ethics surrounding AI-generated musical works.

Some of the old-school technology used for music recording.

Music technology has been revolutionized in many stages. In the early 20th century, analog recording allowed musicians to reach wide audiences via records and radio. The 1940s introduced magnetic tape, enabling multi-track recording, which allowed for greater creativity by layering separate instrument and vocal tracks. By the 1980s, digital technology became central to music production. Synthesizers, drum machines, and MIDI (Musical Instrument Digital Interface) were vital tools, while software like Pro Tools in the 1990s digitized editing, mixing, and mastering. 

In the 21st century, however, technology has moved beyond just aiding human creativity—it now participates in the creative process itself. 

The home page of ChatGPT.

Generative AI is now being utilized in various aspects of music production. One of its most widespread applications is in digital effects and audio processing. AI can assist with stem separation—isolating individual tracks from a mix, such as vocals or instruments. This is particularly useful for remixing or remastering songs where the original stems might not be available. Tools like Vocalremover.org and LALAL.AI, online AI services that separate vocals from instruments, have made it easier for producers to manipulate sound with precision.
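Modern services like LALAL.AI rely on trained neural networks, but the basic idea of pulling a vocal out of a mix predates AI. A minimal sketch of the classic "center-channel cancellation" trick: lead vocals are usually mixed equally into both stereo channels, so subtracting one channel from the other cancels them while leaving differently-panned instruments audible. The sample values below are made up for illustration.

```python
# Toy sketch of pre-AI vocal removal via center-channel cancellation.
# Vocals mixed equally into both stereo channels cancel out when the
# channels are subtracted; AI stem separators learn far more than this.

def remove_center(left, right):
    """Subtract channels sample-by-sample to cancel centered audio."""
    return [l - r for l, r in zip(left, right)]

# Hypothetical 4-sample stereo clip: instruments panned differently,
# with the same vocal signal added equally to both channels.
instr_left  = [0.2, -0.1, 0.4, 0.0]
instr_right = [0.1,  0.3, -0.2, 0.5]
vocal       = [0.5,  0.5, 0.5, 0.5]

left  = [i + v for i, v in zip(instr_left, vocal)]
right = [i + v for i, v in zip(instr_right, vocal)]

karaoke = remove_center(left, right)
print(karaoke)  # vocal cancels; only the instrument difference remains
```

The limits of this trick (it also erases anything else mixed to the center, like bass or kick drum) are exactly why learned source separation took over.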

AI is also involved in the creation of virtual instruments, where machine learning algorithms analyze thousands of audio samples to generate new, synthetic sounds that mimic real instruments or create entirely novel ones. These models can now analyze music to suggest harmonies, chord progressions, and even generate melodies in real-time, helping producers overcome creative blocks.
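The statistical idea behind melody suggestion can be sketched in a few lines: learn which note tends to follow which from example melodies, then sample new sequences from those learned transitions. This first-order Markov chain over invented toy melodies is an assumption for illustration only; real tools use deep neural networks trained on enormous music corpora.

```python
import random
from collections import defaultdict

# Toy sketch of data-driven melody suggestion: count note-to-note
# transitions in example melodies, then sample new melodies from them.
# Real systems use deep learning, not a first-order Markov chain.

training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["C", "D", "E", "G", "E"],
    ["G", "E", "D", "C", "C"],
]

# Tally which notes have been observed following each note.
transitions = defaultdict(list)
for melody in training_melodies:
    for cur, nxt in zip(melody, melody[1:]):
        transitions[cur].append(nxt)

def generate(start="C", length=5, seed=0):
    """Sample a new melody by walking the learned transitions."""
    rng = random.Random(seed)
    notes = [start]
    while len(notes) < length:
        notes.append(rng.choice(transitions[notes[-1]]))
    return notes

print(generate())
```

Because the model only ever emits transitions it has seen, its output always "sounds like" the training data, which is the same property (at vastly larger scale) that lets generative models stay within a genre's conventions.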

The home page of SunoAI, advertising the different methods of AI use.

Among the many AI-driven music tools, Suno AI has gained significant attention. Suno AI is a deep learning model that creates audio compositions by learning from vast datasets of music. What makes it unique is its ability to generate complete tracks, from melody to percussion, all while considering genre-specific nuances. With just a few simple inputs, such as a written genre prompt, it can generate a fully fleshed-out composition within seconds. Artists without advanced production skills can now produce music that sounds polished and professional. This accessibility has opened the door for hobbyists and beginners to explore music production without the steep learning curve typically associated with DAWs or traditional music theory. At the same time, Suno's increasing adoption has drawn widespread criticism for automating what was once seen as a deeply human craft.

The way AI models are trained in music differs significantly from how text-based generative AI systems like ChatGPT are trained. Text-based AI relies on vast databases of written language, analyzing patterns, grammar, and semantics to generate coherent text. These systems are fed a wide range of material, including books, articles, and websites, learning to predict the next word in a sequence and thereby generate human-like text responses. Audio-based AI, on the other hand, deals with more complex, multi-dimensional data. Sound consists of frequency, amplitude, and timbre, all of which vary continuously over time. Generative models like Suno analyze vast libraries of music, breaking songs down into components like tempo, key, chord progressions, and instrumentation. They learn the relationships between these elements, allowing the AI to generate original compositions. The data requirements are significantly higher for audio than for text, making music-generating AI models far more intensive to train.
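The scale difference is easy to make concrete. Digital audio is a dense stream of amplitude samples: at CD quality, one second of a single channel is 44,100 numbers, while a whole sentence of text is only a handful of word tokens. A minimal stdlib sketch, generating one second of a pure 440 Hz tone (the note A4) as sampled amplitudes:

```python
import math

# Why audio data is heavier than text: one second of CD-quality sound
# is 44,100 amplitude samples per channel, while a full sentence of
# text is only a few dozen tokens.

SAMPLE_RATE = 44_100   # samples per second (CD quality)
FREQ = 440.0           # pitch of the note A4, in hertz
AMPLITUDE = 0.8        # peak level on a -1.0..1.0 scale

one_second = [
    AMPLITUDE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
    for n in range(SAMPLE_RATE)
]

sentence_tokens = "the quick brown fox jumps over the lazy dog".split()

print(len(one_second))       # 44100 samples for one second of audio
print(len(sentence_tokens))  # 9 tokens for a whole sentence
```

And this sine wave is the simplest possible sound; real music layers many frequencies and timbres on top of each other, which is what makes the training data so much richer and heavier than text.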

The rise of AI in music has led to a broader conversation about the nature of art itself. Traditionally, art has been viewed as a reflection of human experience, shaped by emotion, culture, and creativity. But when machines start to generate music, we are forced to reconsider whether these creations still qualify as art. Is a track generated by Suno AI any less valid than one created by a human composer? Does the involvement of AI devalue the creative process?

There is no easy answer. Some argue that AI is a tool that can augment human creativity, and that AI-generated music is simply a new form of art—one that reflects 21st century musical creativity. Others argue that AI-generated music lacks the emotional depth and personal connection that human-made music carries. Moreover, concerns about the dehumanization of art persist. As AI becomes more capable of generating original compositions, there is a fear that human musicians may become obsolete, or that the unique value of human creativity will be overshadowed by machines’ efficiency. The ethical questions surrounding AI in music are not just about creativity, but also about ownership, credit, and the very definition of what it means to create.

ghostwriter977 promoting “Heart On My Sleeve”

One of the most prominent cases of AI-generated music entering the mainstream is the story of ghostwriter977. In 2023, this anonymous producer used generative AI to create a track featuring AI versions of famous musicians, including Drake and The Weeknd. The song, titled “Heart on My Sleeve,” was initially mistaken for a legitimate collaboration between the two artists and went viral before being taken down due to copyright issues. Ghostwriter977’s use of AI blurred the line between human and machine-made art. The song’s viral success demonstrated that AI-generated content could rival the popularity of human-made music, while also sparking debate about intellectual property and artists’ authenticity. Can a song created by an algorithm that mimics famous artists be considered legitimate? More importantly, should AI be allowed to profit off of imitating living artists?

Generative AI has introduced extensive new possibilities in music, from enhancing production techniques to creating entirely new compositions. However, as with any technological advancement, it also raises important ethical dilemmas. As AI continues to evolve, society will need to consider the implications for musicians, artists, and the future of creativity itself. Whether AI-generated music is considered innovative or destructive, a tool or a fuse, art or merely a product of algorithms—one thing is clear: the relationship between humans and machines in music is just beginning.
