Generative AI: Exploring ownership and authorship
Generative AI continues to be the most talked-about technology, and one can’t help but wonder who owns the work created by generative AI models. In this article, we’ll try to answer this question as best we can, but first, let’s take a step back to understand how generative AI fits into the greater scheme of things.
If you imagine artificial intelligence as a pie, then generative AI is a slice of that pie. And as the name suggests, generative AI models generate novel content. They can do this because they are trained on large datasets of examples, which they draw on to create images, videos, music, and even convincing text.
Yet, at the same time, the output still depends strongly on human input, since generative AI models rely on a prompting process: someone has to write a set of instructions to guide the algorithm’s creative output towards a desired outcome. And who is more qualified to prompt the machines than the humans who have been doing the creative work all along?
The upside of this collaborative interplay between human creator and generative AI model is that it allows for iterative experimentation: the human refines and adjusts the results over successive prompts, as the sketch below illustrates. It is therefore unlikely that generative AI will completely replace human content creators in the near future.
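To make that prompting loop concrete, here is a minimal sketch using the OpenAI Python client. The model name, prompts, and setup are illustrative assumptions rather than anything from a specific product; any prompt-driven generative model follows the same pattern of instruction, output, and human refinement.

```python
# A minimal sketch of prompt-driven, iterative text generation.
# Assumes the OpenAI Python client; the model name and prompts are
# illustrative only -- any prompt-based generative model works the same way.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# Step 1: the human supplies a creative brief as a prompt.
messages = [
    {"role": "user", "content": "Write a two-line tagline for a fair-trade coffee brand."}
]
first_draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(first_draft.choices[0].message.content)

# Step 2: the human reviews the draft and steers the model with a refinement.
messages.append({"role": "assistant", "content": first_draft.choices[0].message.content})
messages.append({"role": "user", "content": "Make it warmer and drop any clichés."})
second_draft = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(second_draft.choices[0].message.content)
```

Whether the human effort captured in those prompts amounts to enough creative input to establish authorship is exactly the kind of question the law now has to grapple with.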
But to whom does the final product belong?
Well, that depends on where you are in the world: under US law, a work can be copyrighted only if it is the product of human authorship, and most countries worldwide follow similar practices. In South Africa, however, the answer is less straightforward.
The South African Copyright Act states that the author of a computer-generated work is the person who made the arrangements necessary for the creation of the work. So this could potentially be the developer of the AI, right?
On the other hand, one could argue that the user of the AI model is the person responsible for creating the work. But how do you establish whether the author’s efforts in creating the work were sufficient?
Because generative AI is so new, there simply aren’t clear answers to these questions at this stage.
A major complication lies in how generative AI models are trained. A model designed to generate text-based content, for example, needs to be trained on an enormous amount of text-based data to produce rich results. These large datasets often consist of work that was painstakingly crafted by creative humans, and without legal and ethical frameworks in place to protect these artists, their work is being used to train AI algorithms without their consent and without any compensation.
What’s more, a generative AI model cannot draw inspiration from art the way a human artist does without outright plagiarising the original work: algorithms can mash together influences, but they cannot create something that is truly new and original.
Transparency and proper attribution have always been the best policy for brands and businesses that rely on external sources for content. And luckily, some companies are showing responsible practice. Adobe, for example, trained its generative AI model Firefly exclusively on images it holds the rights to, and even offers to indemnify customers who use its AI tools against future claims.
It’s clear that generative AI has enormous potential to change our lives for the better, but it is equally undeniable that ethical content generation has to be prioritised.