Data Science and AI

Untrainable: Building a Silent Cacophony with HarmonyCloak

A visual depiction of a robot taking in information and creating music.


We've watched AI develop visual generation capabilities that, while imperfect, have fundamentally changed how we think about digital imagery. In parallel, audio AI has quietly become just as powerful—and is emerging as a threat to the music industry. Musicians now face a growing challenge: how to prevent their voices from being stolen.

This is the second article in Untrainable, a series originally published on LinkedIn by the Young Data Science Working Group exploring how creators are fighting back against generative AI systems that have "learned" from their work without consent. Follow the Data Science Actuaries on LinkedIn to stay updated on the latest articles.

The purpose of HarmonyCloak is simple yet powerful: embed an inaudible protective signal inside music files. This signal is imperceptible to human listeners but can confuse AI models, making the music effectively unusable for training. In short, it’s like putting a “digital cloak” over a song — listeners enjoy it as usual, but AI models cannot learn from it.

How to quietly overpower the music

HarmonyCloak exploits a gap between human hearing and AI perception. Humans naturally ignore masked sounds, but AI models don’t filter them out. The result is a form of digital camouflage: people hear clean music, while AI hears distorted data.

But how does it actually work?

Step 1 — Break the song into tiny slices: The track is divided into very short time segments (about 10 milliseconds). This allows the cloak to adjust to each moment’s content.
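This slicing step can be sketched in a few lines. The function below is purely illustrative (the names, 44.1 kHz sample rate, and non-overlapping frames are assumptions for the sketch, not details from the HarmonyCloak paper):

```python
import numpy as np

def frame_signal(signal, sample_rate=44100, frame_ms=10):
    """Split a 1-D audio array into non-overlapping ~10 ms frames."""
    frame_len = int(sample_rate * frame_ms / 1000)  # samples per frame
    n_frames = len(signal) // frame_len
    # Drop any trailing samples that do not fill a whole frame
    return signal[:n_frames * frame_len].reshape(n_frames, frame_len)

audio = np.random.randn(44100)  # one second of placeholder audio
frames = frame_signal(audio)
print(frames.shape)  # (100, 441)
```

At 44.1 kHz, a 10 ms frame is 441 samples, so one second of audio yields 100 frames that the cloak can then treat independently.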

Step 2 — Identify the dominant note or tone: For MIDI files, the system detects the loudest note in each slice. For raw audio, it identifies the strongest frequency at that instant.
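For raw audio, finding the strongest frequency in a slice is a standard spectral-peak lookup. A minimal sketch, assuming a mono frame and using an FFT magnitude spectrum (the paper's actual detector may differ):

```python
import numpy as np

def dominant_frequency(frame, sample_rate=44100):
    """Return the strongest frequency (in Hz) within a short audio frame."""
    spectrum = np.abs(np.fft.rfft(frame))          # magnitude spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]              # frequency of the peak bin

sr = 44100
t = np.arange(441) / sr                 # one 10 ms frame
frame = np.sin(2 * np.pi * 1000 * t)    # a pure 1 kHz tone
print(dominant_frequency(frame, sr))    # 1000.0
```

Note the trade-off: a 10 ms frame gives only ~100 Hz frequency resolution, which is why the method only needs the *dominant* tone per slice, not a full transcription.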

Step 3 — Decide how much protective signal to add: Using psychoacoustic masking — a property of human hearing — the cloak determines how much signal can be hidden.

  • Loud notes can hide stronger protection.
  • Quiet passages allow less protection.
  • The cloak is always softer than the music itself.
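A real psychoacoustic model uses frequency-dependent masking curves, but the core idea above (louder content hides more perturbation) can be sketched with a simple energy-based budget. The 20 dB margin and the function name here are illustrative assumptions, not values from the paper:

```python
import numpy as np

def perturbation_budget(frame, margin_db=20.0):
    """Maximum perturbation amplitude for this frame: its RMS level
    scaled down by margin_db, so louder frames hide a stronger cloak."""
    rms = np.sqrt(np.mean(frame ** 2))
    return rms * 10 ** (-margin_db / 20)

loud = np.ones(441) * 0.5
quiet = np.ones(441) * 0.05
print(perturbation_budget(loud) > perturbation_budget(quiet))  # True
```

Because the budget scales with the frame's own level, the cloak is guaranteed to stay a fixed number of decibels below the music, matching the third bullet above.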

Step 4 — Add the Invisible ‘Cloak’: An optimisation process inserts a carefully calculated perturbation pattern. To listeners, nothing changes. But to an AI model, the track now looks confusing and inconsistent, disrupting its learning process.
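The optimisation in Step 4 can be pictured as gradient *ascent* on a surrogate model's training loss, with the perturbation clipped back inside the masking budget at every step. This is a toy sketch: `loss_grad` stands in for the gradient supplied by a real surrogate generative model, and the step count and learning rate are arbitrary:

```python
import numpy as np

def cloak_frame(frame, loss_grad, budget, steps=50, lr=0.01):
    """Grow a perturbation that raises a surrogate model's training loss,
    while keeping every sample of it within the inaudibility budget."""
    delta = np.zeros_like(frame)
    for _ in range(steps):
        delta += lr * loss_grad(frame + delta)   # ascend the model's loss
        delta = np.clip(delta, -budget, budget)  # stay below the mask
    return frame + delta

frame = np.zeros(8)
cloaked = cloak_frame(frame, lambda x: np.ones_like(x), budget=0.1)
```

The clipping step is what ties Steps 3 and 4 together: however hard the optimiser pushes, the perturbation can never exceed what the psychoacoustic mask allows.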

Step 5 — Test human perception: The final cloaked track is tested to ensure that:

  • Human listeners cannot hear any difference.
  • The cloak survives typical audio compression (e.g., MP3 or AAC).

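One simple objective metric for the first check is the signal-to-noise ratio of the cloak relative to the music: the higher it is, the less audible the perturbation. Real evaluation would pair this with listening tests and codec round-trips; this sketch only shows the SNR calculation:

```python
import numpy as np

def snr_db(original, cloaked):
    """SNR of the cloaked track in dB; higher means a less audible cloak."""
    noise = cloaked - original
    return 10 * np.log10(np.sum(original ** 2) / np.sum(noise ** 2))

t = np.arange(441) / 44100
original = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone
cloaked = original + 0.001               # a tiny, constant perturbation
print(snr_db(original, cloaked))         # well above 40 dB
```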
HarmonyCloak in action

HarmonyCloak has already seen real use within the music industry, with artists applying it to protect their own intellectual property and creativity. And as AI grows more capable of imitation, applications of HarmonyCloak are expanding beyond its original purpose of protecting music that has already been released.

Illustration of the threat model where the attacker scrapes music posted online (from the HarmonyCloak paper)

Some extensions of HarmonyCloak and audio-focused AI poisoning include:

  1. Record labels: A label shares preview tracks with journalists under embargo—protected so leaks can’t feed generative AI. Archived demos from legacy artists remain safe during catalog digitisation.
  2. Music Libraries & Licensing Platforms: Stock music services (e.g., AudioJungle, Epidemic Sound) cloak tracks so they retain licensing value. Film/TV production music libraries safeguard cues from being cloned into “AI background music.”
  3. Streaming Platforms: Spotify or YouTube could cloak uploaded songs automatically, ensuring user-generated and professional content alike can’t be copied into training datasets.
  4. Legal & Compliance: An artist in a lawsuit can demonstrate they applied HarmonyCloak to prevent AI scraping, strengthening their legal position.

In short, HarmonyCloak is not just a technical filter—it could become a new standard layer of digital rights management (DRM) for the AI era, working across individual creators, industry stakeholders, and large platforms.

Limitations and considerations

While HarmonyCloak offers a reactive shield for musicians to protect their creativity from being exploited in AI training, it is not a cure-all. Like any protective technology, its effectiveness depends on context, implementation, and the ever-evolving landscape of AI development. Below are key factors artists, labels, and platforms should weigh when deciding how — and where — to deploy HarmonyCloak.

Limitations of HarmonyCloak

HarmonyCloak represents a clever fusion of psychoacoustics and digital security. By embedding imperceptible signals into music, it gives artists a way to protect their work from being misused in AI training. As AI technology evolves, so must protective tools like HarmonyCloak — but today, it offers musicians a vital step toward regaining control of their creative output.

In the next instalment of Untrainable, we'll explore how creators are confusing the AI bots that automatically crawl and respond to online content. From scrambled video captions to hidden prompt injection, writers are learning to make their work unreadable to automated systems while keeping it perfectly clear for humans.

As #DataScienceActuaries, we’re always looking for another data set to wrangle into something fun using our unique blend of data and actuarial skills. If you have any interesting ideas and want to get involved, join the Data Science Actuaries page or reach out to any of our members.

Further reading

HarmonyCloak: Making Music Unlearnable for Generative AI https://mosis.eecs.utk.edu/publications/meerza2024harmonycloak.pdf

Lichao Sun: Finding harmony in the age of artificial intelligence https://engineering.lehigh.edu/research/resolve/volume-1-2025/lichao-sun-finding-harmony-age-artificial-intelligence

Analysing the tools of resistance against AI-generated content (ironically, image was AI-generated via ChatGPT)

About the authors
Ean Chan headshot
Ean Chan
Ean is a Senior Manager within EY's Actuarial Services team, with experience in Life Insurance, Data Analytics and AI, primarily concentrating on Health and Human Services clients. As chair of the Institute's Young Data Analytics Working Group and member of the Data Science and AI Practice Committee, Ean is dedicated to driving progress in the actuarial field by augmenting our expertise with the latest data science, AI and machine learning methodologies.
Scott Teoh