The music industry is tuning into the capabilities of deepfake audio technology after the song “Heart on My Sleeve” went viral for using AI to create realistic imitations of the voices of Drake and The Weeknd. While the artists’ record label swiftly removed the unauthorised AI-generated rendition from streaming platforms, not all musicians are opposed to deepfakes. Commenting on the AI hit, Canadian musician Grimes embraced the technology and encouraged her fans to compose music using her voice. She subsequently released her own AI tool to generate vocals that mimic her tone, offering a 50/50 split of royalties derived from any tracks using her AI-generated voice. Within a day, however, Grimes backtracked and placed restrictions on using her voice for “toxic” lyrics, asking fans not to use the tool to create “Nazi anthems” or tracks about “baby murder” (if only she had read our earlier blog post, which highlighted the need for celebrities to protect their reputation when making deepfake licensing deals).

Grimes’ embrace of fan-created vocal soundalikes raises interesting legal issues. While she has threatened to use copyright law to take down any deepfake songs with “toxic” lyrics, in the UK her ability to do so is uncertain: she may not own the copyright in any resulting track (ownership that would otherwise allow her to have it removed), and even if she did, enforcing her “acceptable-use” rules may not be straightforward. As explored below, Grimes’ offer highlights some of the limitations of the law in dealing with deepfake content.

Would UK law treat Grimes as the owner of any soundalike sound recording?

We recently wrote about the legality of vocal imitation, noting that in the UK there is no intellectual property right which directly protects a voice, although in some circumstances a performer may be able to rely on contractual protections or the law of passing off, provided they can show sufficient reputation and “goodwill” in their voice. The same legal principles apply whether a soundalike vocal performance is created by an AI vocal synthesiser or through ordinary human vocal imitation. Musicians such as Grimes may therefore have some difficulty in controlling soundalike vocal performances in the UK.

Although Grimes could argue that she is the copyright owner of any sound recording created using her AI tool, a user of her software might have a competing claim to ownership. Under UK law, the author and first owner of the copyright in a sound recording is its producer: “the person by whom the arrangements necessary for the making of the sound recording are undertaken”. Past cases have focused on identifying the person who instigated the relevant recording, assumed financial responsibility for it, or organised the activities necessary for its creation. When using Grimes’ tool, a user records or uploads their own a cappella vocal performance, which the software then processes into a soundalike performance; the resulting sound recording is made available to download and can be incorporated into a musical composition by the user. The user’s active involvement suggests they could be considered the person who makes the necessary arrangements by instigating the recording, although it could be argued that Grimes’ contribution of the tool was equally necessary. Where a soundalike performance is created using third-party software (rather than Grimes’ tool), it is highly unlikely that Grimes could assert any copyright interest in the recording.

Can Grimes enforce her acceptable-use rules for the use of her voice?

Unless she owns intellectual property rights in the soundalike vocal performance (for example, through a valid assignment of the copyright under her tool’s terms of use), Grimes may find it difficult to enforce her acceptable-use rules through mechanisms such as copyright takedown notifications. Where creators use Grimes’ tool to generate recordings, she may be able to impose contractual restrictions giving effect to those rules. Where soundalike vocal performances are generated using third-party software, however, Grimes will again have difficulty enforcing her policies.

As discussed in our previous blog post, artists may be able to rely on defamation law where a convincing deepfake of their voice is used without permission, in a way that may bring them into disrepute and is likely to cause serious harm to their reputation. Musicians such as Grimes may therefore be able to take action against particularly objectionable deepfakes (such as “Nazi anthems”), but it will be much harder for them to pursue the defamation route in respect of content that is distasteful without being seriously defamatory (such as tracks with a political message).

One practical solution may be for her to “watermark” the output of her AI tool, so that fakes can be identified more easily. She could also embed copyright works that she does own in the output (through sampling, for example), making the takedown process simpler. A rough illustration of the watermarking idea follows.
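By way of illustration only, and not as a description of how Grimes’ tool actually works, the sketch below hides a short identifier in the least significant bits of a 16-bit WAV file. The file names and the mark are hypothetical, and real audio watermarking schemes are considerably more robust (surviving compression and re-recording), but the underlying principle, embedding a recoverable mark in the output itself, is the same.

```python
# Illustrative sketch only: embed a short identifier in the least
# significant bits of a 16-bit PCM WAV file. Production watermarking
# schemes are far more robust; this simply shows the principle.
import wave


def embed_watermark(in_path: str, out_path: str, mark: str) -> None:
    """Write `mark` (as UTF-8 bits) into the low bits of the audio samples."""
    with wave.open(in_path, "rb") as src:
        params = src.getparams()
        frames = bytearray(src.readframes(src.getnframes()))
    if params.sampwidth != 2:
        raise ValueError("sketch assumes 16-bit PCM audio")

    # Flatten the mark into a bit stream, least significant bit first.
    bits = [(byte >> i) & 1 for byte in mark.encode("utf-8") for i in range(8)]
    for i, bit in enumerate(bits):
        pos = i * 2  # low byte of each little-endian 16-bit sample
        if pos >= len(frames):
            raise ValueError("audio too short to carry this watermark")
        frames[pos] = (frames[pos] & 0xFE) | bit

    with wave.open(out_path, "wb") as dst:
        dst.setparams(params)
        dst.writeframes(bytes(frames))


# Hypothetical file names and mark, for illustration only.
embed_watermark("soundalike.wav", "soundalike_marked.wav", "GRIMES-AI-v1")
```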

Does the law need to be changed to address deepfake vocal impersonations?

While Grimes welcomes the creation of deepfake vocal imitations, most artists are likely to be more protective of their distinctive voices. For example, in April last year, Equity, the UK trade union for performers and entertainment professionals, launched the “Stop AI Stealing the Show” campaign, calling on the government to strengthen performers’ rights in response to the rise of AI across the entertainment industry.

Although law reform proposals are emerging to address the broader issue of deepfakes, vocal impersonation is not currently in the legislative “spotlight”. For example, the proposed UK Online Safety Bill regulates deepfake content, but as currently drafted it addresses only the narrower issue of pornographic deepfakes. Ofcom also recently addressed the use of deepfakes in broadcast programming, suggesting that while the technology poses challenges, the existing obligations within the Broadcasting Code are capable of protecting broadcast audiences from the harms of synthetic media.

Introducing transparency obligations on deepfake creators, designed to alert consumers when content is synthetic, could be a promising solution, and could reduce the reputational harm to artists of being associated with offensive deepfake content. Transparency is a key guiding principle in the UK’s recently released AI white paper, and it is also the approach taken in the EU, where the proposed AI Act features transparency obligations requiring deepfake creators to disclose that their content has been artificially manipulated. As currently drafted, however, this obligation may not apply fully to outputs which engage the creator’s right to freedom of expression or the right to freedom of the arts. A rough sketch of what such a machine-readable disclosure might look like follows.
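By way of illustration only, a tool vendor could attach a machine-readable disclosure to each generated file. The sidecar format below is invented for this sketch (real provenance schemes, such as C2PA manifests, are richer and cryptographically signed), and the file and tool names are hypothetical.

```python
# Illustrative sketch only: attach a "synthetic content" disclosure to a
# generated audio file as a JSON sidecar. The manifest fields are invented
# for illustration; real provenance standards are richer and signed.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def write_disclosure(audio_path: str, tool_name: str, subject: str) -> Path:
    """Write a JSON sidecar declaring the file as AI-generated."""
    audio = Path(audio_path)
    manifest = {
        "file": audio.name,
        # Hash ties the disclosure to this exact file's contents.
        "sha256": hashlib.sha256(audio.read_bytes()).hexdigest(),
        "synthetic": True,
        "generator": tool_name,
        "voice_of": subject,  # the imitated artist
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar = audio.with_suffix(".disclosure.json")
    sidecar.write_text(json.dumps(manifest, indent=2))
    return sidecar


# Hypothetical file and tool names, for illustration only.
write_disclosure("soundalike_marked.wav", "hypothetical-vocal-tool", "Grimes")
```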