
The Emergence of AI in Music

Guest Blog Feature by Professor Justin Paterson (University of West London) and Dr Kirsten Hermes (University of Westminster)

Contents: Intro | Ideation | Shortcomings | Staying in the driver’s seat | AI music creation & songwriting | AI music production | AI performance & lyrics | The debate | Conclusion

Introduction

The music industry is experiencing a wave of transformation with the advent of artificial intelligence (AI). From creation and songwriting to production and performance, AI is reshaping how music is made and consumed. This article explores the exciting developments in AI-driven music, highlights innovative tools like Songzap, and delves into the debate on whether AI is a boon or bane for musicians and the creative industries.

As AI continues to spread across creative sectors, many artists, musicians, and designers are grappling with a fundamental question: is AI a threat to human creativity, or could it become an essential collaborator in the creative process? A growing body of research is exploring how creatives can work with AI systems not as replacements, but as partners in their artistic exploration. While AI can produce impressive works, it cannot reflect back upon them as humans do, or critically assess its own outputs. Most artists prefer AI to work in assistive or collaborative roles, rather than having it generate a finished piece of work.

From Blank Canvas to Brainstorm Buddy

Imagine starting a project with a blank screen or silent studio. It’s exhilarating – but it can also be intimidating. Generative AI (GAI) tools like Suno for music or Midjourney for visuals are changing that. AI systems can generate sounds, melodies, images, and even videos in seconds, giving creatives something to react to and build upon. Rather than stifling originality, these AI tools help jumpstart ideation – often the most difficult part of the process. AI lets artists quickly iterate through style combinations and test out any kind of crazy or off-beat idea in a risk-free and immediate way. It’s also great for collaborations between humans: instead of trying to describe a sound or image in words, an AI-generated mood-board can quickly align everyone’s expectations.

AI Shortcomings

However, there are limits. Today’s AI tools can feel fragmented and overly specialized, each handling just one small, specific task. It often takes multiple AI apps with different workflows to finish a project such as an album release, complete with marketing materials and social-media content. Switching between tools disrupts creative ‘flow’, a state where concentration and enjoyment peak. Also, many tools rely on short text commands (called ‘prompts’), which don’t always suit visual or audio-based thinkers.

Another issue: lack of control. While GAI can generate outputs fast, it’s still hard to make fine-grained edits. For example, you might like part of a track generated by Suno but want to change just the strings or adjust the rhythm. Often, you’d need to rebuild the entire thing from scratch, and even then, the prompt might generate a different response – inefficient and frustrating.

Ownership is also a hot topic. Legally, most AI-generated works aren’t protected by copyright unless heavily edited by a human creator. Creatively, many artists feel disconnected from AI content if they haven’t played an active role in shaping it. Traditional theories argue that the creator’s identity and effort are embedded in the final work through their labour. When this is replaced by a machine, the ‘sense’ of authorship is lost as well.

Staying in the Creative Driver’s Seat

So, what’s the sweet spot? As we said, artists prefer using AI as a smart assistant that helps with ideation, skill-building, refining concepts, and reducing repetitive tasks. Think of AI as your behind-the-scenes team – your sketch artist, draft composer, or digital assistant – while you remain the creative director.

Here are some quick tips for getting started with AI collaboratively:

  • Bring AI into the right phase of your project. AI is great for early exploration and rough drafts, but less so for detailed development – at least for now.
  • Combine strengths. Let AI churn out fast ideas, then infuse them with your own style, experience, and taste.
  • Choose tools that work with your platform and that generate file types you can work with in your preferred creative software.
  • Don’t be afraid to edit AI outputs or use them as stepping stones. Your version doesn’t have to follow the AI’s lead – it can spark something entirely new.
  • Stay critical. Be mindful of the biases baked into training data and prompt-based systems. Adjust and challenge them.
  • Remember that AI can hallucinate. Some outputs are great, others not so much – you are the director, choosing what gets used and what misses the mark.
  • Always keep your ethics cap on – think about what moral right you have to use GAI material and claim ownership of it!

AI in Music Creation and Songwriting

AI has made significant strides in music creation and songwriting. Tools like Suno and Udio are at the forefront, enabling artists to generate melodies, harmonies, stems and lyrics with the help of sophisticated algorithms, and they can respond to user-uploaded material. These AI systems analyze vast amounts of musical data to create compositions ranging from classical to contemporary genres. The outputs are increasingly remarkable and can be indistinguishable from the work of well-produced human bands.

A major problem is that, at present, many of these tools scrape the Internet for training material – music created by human artists – and repurpose it without credit or remuneration. A petition signed by luminaries such as Sir Elton John, Kate Bush, Sir Paul McCartney and Sir Simon Rattle expresses great concern about this; according to the Guardian, “they are outraged at UK government plans to weaken copyright laws, and are urging greater transparency, control and financial remuneration to counter the challenge of AI”. The petition, which has over 50,000 signatures at the time of writing, is entitled “The unlicensed use of creative works for training generative AI is a major, unjust threat to the livelihoods of the people behind those works, and must not be permitted.” The three major record companies – Universal Music Group, Sony Music Group and Warner Music Group – are also opposed to these plans.

The UK government’s stance is driven by a desire to develop the UK AI sector into a world leader, which it wishes to empower. However, it acknowledges that the UK creative industries are also a great strength, and argues that it is “supporting right holders’ control of their content and ability to be remunerated for its use.” It is perhaps the rigour and extent to which this can be done that is generating such pushback.

In parallel, the responsible AI (RAI) movement is gaining momentum, for instance more broadly via the RAI Institute, which is supported by Shell and KPMG amongst others. In music, DAACI have attracted Innovate-UK funding to “establish a compliance mechanism for […] the dialogue surrounding transparency, attribution, and ethics in generative music and AI.” RT60’s Songzap – also Innovate-UK funded – takes an RAI approach, using bespoke in-house training data and engaging educational approaches, and offering a user-friendly interface that allows musicians to experiment with different styles and structures, fostering creativity and innovation.

Songzap’s AUTOZAP feature allows AI session players to automatically accompany a musician’s recordings

RAI apart, here is a list of some of the AI music-creation tools currently available:

  • Suno can produce high-quality, professional-sounding tracks with impressive vocals.
  • Udio, which grew out of the Google DeepMind team, has a similar feature set, and also lets you download basic music videos with karaoke-style lyrics as subtitles.
  • AIVA offers a similar approach, even for sophisticated compositions, and has three price plans linked to who retains copyright in its outputs – with the free plan, AIVA retains copyright, while with the pro plan it passes to the user.
  • Soundraw is popular among content creators for its ease of use and customization options, allowing users to tweak audio blocks and adjust instrument intensity.
  • Vochlea Dubler2 offers pitch tracking and generates harmony and chords, allowing musicians to create complex musical arrangements effortlessly.
  • Quosmo Timbre Transfer changes the input timbre to another timbre, providing unique sound transformations for creative exploration.
  • Scaler provides a vast catalogue of chord sets and also generates bass-lines, phrases and melodies, making it a powerful tool for composers. It is not principally AI driven, but does use AI to suggest chords.
  • Orb Tempus can create chords from everything, offering a versatile tool for music composition.
  • OpenAI MuseNet can blend styles and generate music in various genres, showcasing the versatility of AI in music creation.
  • BandLab’s SongStarter generates multitrack AI audio, providing a starting point for new musical projects.
  • Melody Sauce generates melodies, assisting composers in creating catchy tunes.
  • Instacomposer creates multi-part arrangements, streamlining the composition process.
  • Google Magenta offers various generation and manipulation tools within Ableton Live, enhancing music production capabilities.
  • Band in a Box makes backing tracks based on chord progressions, supporting musicians in live performances and practice.
  • Boomy creates complete songs, making music creation accessible to everyone.
  • Loudly generates full songs, catering to a wide range of musical styles.
  • Music Flow creates songs based on user input, providing a personalized music creation experience.
  • Soundful generates songs and provides stems, offering flexibility in music production.
  • DadaBots is an open-source tool for generating metal, funk, and rock music.
  • Musico uses browser-based control for interactive AI-music creation.
  • CoSo by Splice selects samples from Splice and matches them based on user input, streamlining the music-creation process.
  • Splash generates full music tracks based on text prompts, making it easy to create music without any musical background.
  • Audio Cipher generates music based on text prompts, offering a unique approach to music creation.
  • Riffusion generates music from text prompts using a diffusion model that operates on spectrogram images.
  • Music21 Composer is an AI-driven code-based tool that enables programmatic generation, study, and manipulation of musical scores (see the sketch below).
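
To illustrate the code-based end of this spectrum, here is a minimal sketch using the open-source music21 Python library that underlies such tools: it builds a short chord progression programmatically and asks the library to estimate its key. The progression and file name are just example choices.

```python
# A short progression built and analysed programmatically with music21
# (pip install music21).
from music21 import chord, stream

progression = ["C4 E4 G4", "A3 C4 E4", "F3 A3 C4", "G3 B3 D4"]  # I-vi-IV-V in C

score = stream.Stream()
for voicing in progression:
    c = chord.Chord(voicing.split())
    c.quarterLength = 2.0  # each chord lasts a half note
    score.append(c)

print(score.analyze("key"))   # estimated key, e.g. "C major"
score.show("text")            # text view of the score structure
# score.write("midi", "progression.mid")  # optionally export as MIDI
```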

These tools are revolutionizing the way music is created, offering new possibilities for both experienced artists and novices to explore and create equally sophisticated music. However, the commercial landscape is complicated. We are sometimes told that we can sell music that we make with these tools, but the legal landscape is changing and will likely fluctuate for some years before stabilizing. What litigation might I yet be exposed to if I sync a composition with even a modest AI-generated component to Netflix and several years hence, some newly invented bot detects what I did and reports it? Netflix will not be happy, the rights holders will not be happy, the artists will not be happy, and they will probably all want some – or all – of what I have earnt from it, and quite possibly, quite a lot more besides.

Something that surprised me – as a very experienced music producer and composer – was the sense of ownership that I felt upon creating music with text prompts. This increased when I manipulated it, much in the same way as using a sample. The thing is, although I might kid myself, just because I sample something does not and must not make it – entirely? – mine…

ChatGPT just told me:

The first widely known legal case involving sampling in music was Grand Upright Music, Ltd. v. Warner Bros. Records Inc. in 1991. This landmark case involved Gilbert O’Sullivan suing rapper Biz Markie for unauthorized use of a sample from O’Sullivan’s 1972 song “Alone Again (Naturally)” in Biz Markie’s track “Alone Again”.

The court ruled in favour of O’Sullivan, and the judge famously opened the ruling with “Thou shalt not steal.” This case set a major precedent for the music industry, making it clear that sampling without permission was a violation of copyright law. After this case, sample clearance became a crucial (and often expensive) part of music production.

RAI approaches are the only long-term solution.

AI in Music Production

The production side of music has also seen remarkable advancements due to AI. AI-powered software can handle tasks such as mixing, mastering, and sound design with precision and efficiency. This technology not only speeds up the production process but also ensures high-quality output.
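
As a concrete, if simplified, illustration of automated mastering, the sketch below uses Matchering, an open-source reference-mastering library (rule-based signal matching rather than a neural model, and unaffiliated with the commercial tools listed below), to match a mix to a reference track. The file names are placeholders.

```python
# Automated reference mastering with the open-source Matchering library
# (pip install matchering). It matches the loudness, frequency balance and
# stereo width of the target mix to those of a chosen reference recording.
import matchering as mg

mg.process(
    target="my_mix.wav",              # the unmastered mix (placeholder path)
    reference="reference_track.wav",  # a commercially mastered reference
    results=[mg.pcm16("my_mix_mastered_16bit.wav")],  # 16-bit WAV output
)
```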

  • Beatoven allows users to create custom soundtracks by selecting genres, moods, and instruments, making it a versatile tool for various projects.
  • Landr Studio leverages AI to master tracks and albums, making near-professional-grade production accessible without pro mastering studio fees.
  • Endlesss is a collaborative music creation platform that uses AI to help musicians jam together in real-time, regardless of their location.
  • Orb Producer Suite offers AI-driven composition tools that assist in generating chord progressions, melodies, and basslines, enhancing the creative workflow for producers.
  • TC Helicon VoiceLive is a live effects processor that offers live harmony, enhancing vocal performances with real-time effects.
  • LifeScore takes an RAI approach – “AI-powered music generation in service of artists and rights holders” – adapting existing music to allow for continuous musical development.
  • Captain Chords uses AI to generate chord progressions and melodies.
  • Magenta Studio – inside Ableton Live – expands on MIDI ideas or generates new ones from scratch.
  • RipX is a DAW based around AI with integrated stem separation and clean-up, and internal note editing (an open-source stem-separation sketch follows this list).
  • SpectraLayers by Steinberg focuses on unmixing and audio repair with a layer-based workflow.
  • iZotope RX is a sophisticated spectrogram-based audio-restoration suite with many specialized modules.
  • Aiode is an RAI tool that provides virtual musicians, modelled upon consenting participants, that can respond to uploaded original content.
  • Sonarworks VoiceAI is a voice transformer that can modify a vocal performance into one of a number of virtual humans or instruments, as well as having an AI double-tracking mode.
  • Synthesizer V Studio generates singing voices with a DAW-type piano roll into which lyrics and performance expression can be edited.
  • ACE Studio offers a similar piano-roll workflow for generating singing voices, and can also clone voices.
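
As promised above, here is a minimal stem-separation sketch using Spleeter, Deezer’s open-source separation model, as a stand-in for the proprietary engines inside tools like RipX and SpectraLayers. The file paths are placeholders.

```python
# Split a mixed track into vocals, drums, bass and other stems using Spleeter
# (pip install spleeter). The pretrained "4stems" model is downloaded
# automatically on first use.
from spleeter.separator import Separator

separator = Separator("spleeter:4stems")  # vocals / drums / bass / other
separator.separate_to_file(
    "my_track.wav",   # placeholder input path
    "stems_output/",  # a sub-folder of WAV stems is written here
)
```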

These tools are democratizing music production, allowing independent artists and hobbyists to achieve polished results without needing expensive equipment or extensive technical knowledge.

AI in Music Performance and Lyrics

AI is only just starting to make waves in the realm of live human performance, but from holographic concerts to real-time interactive experiences, it is already transforming how audiences engage with live music.

  • Holographic Performances: AI, combined with holographic technology, is bringing legendary artists back to life on stage. Shows like the Elvis Presley Evolution (although perhaps he’s just not actually dead) and ABBA’s digital avatars offer fans a chance to experience performances from artists who are no longer with us. These performances use AI to analyze and recreate the artist’s movements, voice, and stage presence, providing an immersive experience that bridges the past and present.
  • Interactive Concerts: AI is enabling concerts to become more interactive and personalized. For example, AI can analyse audience reactions in real-time and adjust the music or visuals accordingly, creating a dynamic and responsive performance. This technology allows for a more intimate connection between the performer and the audience, making each concert a unique experience.
  • Performance Enhancement: AI tools are also being used to enhance live performances by providing musicians with real-time feedback and assistance. For instance, AI can help musicians stay in tune, keep time, and even suggest improvisations during a performance. This can elevate the quality of live music and help performers reach new levels of mastery.
  • Lyrics: Large language models like ChatGPT are extremely good at writing lyrics in prescribed styles – albeit with the above ethical concerns around training sets (a minimal prompting sketch follows this list). Are they truly creative though? Current technology is evolving with a focus on that…
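
As a minimal sketch of prompting an LLM for lyrics in a prescribed style, the example below uses the OpenAI Python SDK; the model name and prompt are just illustrative choices, and any chat-capable model would do.

```python
# Ask a chat LLM for a verse and chorus in a prescribed style.
# Requires `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; substitute as appropriate
    messages=[
        {"role": "system", "content": "You are a lyricist. Write singable, rhyming lyrics."},
        {"role": "user", "content": "Write one verse and a chorus about city rain, "
                                    "in a melancholic 1990s Britpop style."},
    ],
)

print(response.choices[0].message.content)
```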

The Debate: Is AI Good or Bad for Musicians?

The rise of AI in music has sparked a lively debate (way beyond just music). On one hand, AI offers numerous benefits:

  • Enhanced Creativity: AI tools can inspire musicians by providing new ideas and perspectives, pushing the boundaries of traditional music-making. For example, AI can suggest unexpected chord progressions or melodic variations that a human might not have considered.
  • Accessibility: AI democratizes music production, allowing independent artists and hobbyists to create high-quality music without needing expensive equipment or extensive technical knowledge. This opens up opportunities for a more diverse range of voices and styles in the music industry.
  • Efficiency: AI can handle repetitive and time-consuming tasks, freeing up musicians to focus on the creative aspects of their work. For instance, AI can automate the mixing and mastering process, ensuring a polished final product with less manual effort.

However, there are also concerns:

  • Authenticity: Some argue that AI-generated music lacks the emotional depth and authenticity of human-created compositions. Music is often seen as an expression of human experience and emotion, and there is a fear that AI might not be able to replicate this aspect fully.
  • Job Displacement: There is a fear that AI could replace human musicians and producers, leading to job losses in the industry. As AI becomes more capable, there is a concern that it might reduce the demand for human talent in certain areas of music production.
  • Ethical Considerations: The use of AI in music raises questions about copyright and ownership, especially when AI-generated works are indistinguishable from those created by humans. Who owns the rights to a piece of music created by an AI? What about joint human-AI creation? How should royalties be distributed? These are complex issues that need to be addressed as AI continues to evolve.

Helmuts Bems, CEO of Sonarworks, offers a summary of the impact of AI on industry stakeholders:

  • Producers and Composers – Winners in the AI Era: Producers and composers will benefit significantly from AI, enhancing their productivity and creativity. They can deliver more content independently.
  • Labels – Legal Battles: Labels are challenging generative AI companies over IP rights. Success in maintaining royalties could offer long-term opportunities, while failure could pose existential threats.
  • Streaming Services – Opportunities and Risks: AI can reduce costs and increase content for personalization, but real-time generative models could threaten the streaming business model, leading to competition from big tech.
  • Live Shows – Positive Impact: AI can enhance live shows with new creative possibilities and adaptive music tailored to audience reactions. However, virtual concerts with AI-generated artists could pose competition.
  • Professional Musicians – Reduced Demand: AI may decrease the need for human musicians in commercial industries, leading to lower royalties and visibility for independent musicians.
  • Hobbyist Musicians – Accessibility and Creativity: AI makes music creation more accessible and enjoyable for hobbyists, fostering creativity. However, it might discourage learning traditional instruments.
  • Studio Engineers – Efficiency vs Demand: AI can automate technical processes, increasing efficiency but reducing demand for human engineers. The role may shift to curating and refining AI-generated music.
  • Equipment Companies – Shift in Demand: AI tools will expand solutions offered by equipment companies, shifting demand towards AI-driven tools and potentially reducing reliance on traditional hardware.
  • Software Companies – Market Growth: AI will drive the development of AI-driven tools, reshaping traditional workflows. AI-assisted tools could become a significant market, impacting traditional production software.
  • Publishers – Opportunities and Challenges: AI can expand catalogues and optimize distribution, but market oversaturation and legal challenges may disrupt traditional revenue models.

Conclusion

The emergence of AI in music is a double-edged sword. While it brings unprecedented opportunities for creativity and accessibility, it also poses challenges that need to be addressed. Tools like Songzap exemplify the positive potential of AI, helping musicians explore new horizons and streamline their workflows. In the realm of performance, AI is creating new ways for audiences to experience music, from holographic concerts to interactive live shows. As with Songzap, the key lies in finding a balance where AI complements human creativity rather than replacing it, ensuring that the music industry continues to thrive in this new era.

The AI music sector is fast-moving but also volatile, as illustrated by the 2019 acquisition of leading UK startup Jukedeck by Chinese giant ByteDance (parent company of TikTok) – just one of many changes among the key players.

AI isn’t killing creativity – it’s redefining it. As with any tool, its value depends on how you use it. When creatives maintain their artistic intentions while embracing AI’s strengths, the results may be more imaginative and efficient than either could achieve alone. In a time when technology is advancing quickly, the most powerful creative combination may not be human or machine – but both, together.
