Jelly – Digital Agency
The rise of generative artificial intelligence is reshaping the creative landscape. Text, images, music, and even entire videos can now be generated from simple prompts. But one question remains unanswered: who is the author of a work created by a machine?
More than a legal debate, this is an ethical and cultural issue that crosses creative boundaries and challenges the business models of the creative industries. What at first looked like promising innovation now carries risks of appropriation and devaluation of human authorship that are too real to be ignored.
AI as a tool or as an author?
The distinction between using AI as a tool or considering it the true creator of a work has profound implications. Currently, most legal systems, including those of the European Union and the United States, establish that only humans can hold copyright. AI, however sophisticated, has no legal personality or consciousness.
Thus, credit usually falls to the human user: the person who provides the input, adjusts parameters, or chooses the final output. But is that level of control sufficient to justify authorship? And what if the human contribution is minimal?
Recent cases that raise questions
As generative AI tools become more accessible, controversial cases are multiplying. These situations not only expose legal loopholes but also reveal the growing imbalance between technological innovation and the protection of creative rights.
1. AI-generated books: authorship or operation?
In 2025, there were numerous reports of users publishing entire books on Amazon KDP, created with tools such as Sudowrite or Writesonic. With a simple prompt – ‘write a historical novel with a female protagonist who travels back in time to the 19th century’ – hundreds of pages are generated in minutes.
Critics point out that this constitutes a kind of ‘manufacturing of intellectual authority’: the user presents themselves as the author, but the creation is almost entirely algorithmic, built on models trained on decades of literature written by others. The AI learns to write by reading real authors and never cites them.
2. The music industry and the explosion of deepfake music
On platforms such as TikTok and Spotify, AI-generated music with voices imitating real artists began to appear. In 2023, the song ‘Heart on My Sleeve’ went viral by simulating the voices of Drake and The Weeknd, but was quickly removed after complaints from Universal Music Group.
In 2025, tools such as Suno AI and Udio made it possible to create entire tracks with replicated vocal identities. This created a new legal dilemma: is the voice an authorial asset? How do you protect a sound identity in a world where anyone can replicate it with software?
3. Fashion, design and algorithmic plagiarism
Visual design is also being profoundly affected. In 2025, a Spanish designer saw her textile pattern, created years earlier, replicated almost in its entirety in a garment created with Midjourney. The brand claimed that the design was ‘originally generated’. The AI, as always, explained nothing.
This case is symptomatic of a broader phenomenon: AI does not create from scratch; it remixes what already exists. And when models are trained on vast, unregulated datasets, we are faced with a system that normalises plagiarism disguised as efficiency.
These examples show that the problem is not the technology itself, but who controls it and how it is used. Currently, large technology platforms concentrate power, exploit creative data without transparency, and offer tools that benefit end users while making the original authors invisible.
Innovation is being built on layers of unrecognised creativity. As Carlos A. Scolari pointed out, ‘implicit consumers become invisible producers,’ and this has never been more literal than it is now.
What is being done?
Several international organisations are trying to keep pace with innovation. The U.S. Copyright Office has already made a clear statement: works created exclusively by artificial intelligence are not eligible for copyright registration unless there is significant creative intervention by a human.
In Europe, the European Parliament is discussing the AI Act and legislative proposals that include a ‘mandatory AI training licence’. This measure would require platforms to financially compensate authors whose works are used to train generative models—an important step, but still far from implementation.
The World Intellectual Property Organisation (WIPO) warns of the urgency of creating global recognition and compensation mechanisms, lest the entire creative ecosystem be weakened in the long term.
Creativity is not at risk, but creative justice is. In an era when AI can generate content indistinguishable from human work, it is more important than ever to define who is entitled to authorship, remuneration and recognition.
At Jelly, this is not just a legal issue, it is a strategic one. Because communicating with impact in the digital world of 2025 also means being on the right side of ethical innovation.