Artificial intelligence (AI) is reaching new heights, but its popularity also raises concerns. The technology is advancing rapidly, often faster than users can grasp its consequences, and legislation has yet to catch up. As this continues, the complex relationship between AI and intellectual property (IP) becomes increasingly important to consider.
Generative AI, in particular, could have significant implications for IP rights and laws. Tools like ChatGPT are growing at a record pace but face accusations of plagiarism. In other cases, questions of AI’s own IP rights arise. Organisations and government agencies must consider these issues before investing further in AI.
How Generative AI Works
Understanding the relationship between AI and intellectual property starts with learning how AI works. Generative models like ChatGPT — now the fastest-growing consumer application in history — are famous for their ability to generate new content instead of simply analysing existing data. However, it’s important to note that they can’t produce anything out of thin air.
AI learns by looking for patterns and relationships in data. Generative models then use this learning to reproduce, summarise or fuse content to meet users’ requests. Even though the end product may not exactly match any existing work, AI cannot produce truly original ideas. It can only reproduce or combine what it’s learned from other sources.
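To make that idea concrete, the toy sketch below is a hypothetical illustration, not how ChatGPT actually works. It trains a word-level Markov chain: it records which words follow which in its training text, then generates new sentences by recombining the pairs it has seen. Real generative models are vastly larger neural networks, but the same principle applies: the output can only be assembled from patterns present in the training data.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which words follow which in the training text."""
    words = text.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=10):
    """Generate text by recombining word pairs seen during training."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break  # the model never saw this word followed by anything, so it cannot continue
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# Tiny training corpus standing in for the billions of words real models ingest
corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Run the sketch on a different corpus and the output changes completely, underlining how strongly the result depends on what the model was trained on.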
The data sets required to produce these results are also massive. OpenAI trained GPT-3, the model ChatGPT was originally built on, on roughly 300 billion tokens of text from books, web articles and other sources. More training data results in more versatile models, but it also complicates the matter of IP.
Complications With AI and Intellectual Property
Generative AI can be highly useful. It can automate routine tasks like customer outreach or research and provide helpful jumping-off points to inspire artists or company leaders. However, the way it works introduces significant IP concerns if businesses don’t use it carefully.
Accidental Copyright Infringement
Because AI training data sets are so large, they often contain copyrighted material gathered without the original creators’ consent. You could argue that some AI use cases fall under fair use of this material, such as using AI to summarise a copyrighted work for research purposes. Profiting from AI-generated content is another matter.
Imagine someone using generative AI to create a piece of digital art they then sell. The AI program didn’t technically create anything new but pieced together images and patterns it learned from existing art. Even if the artist didn’t intend to steal anyone’s IP, the AI might closely mimic copyrighted material, leading to copyright issues.
Even if the content AI generates is different enough from its sources to be considered a new work under EU law, there’s still the issue of licensing. If the original artists whose work trained the AI model never consented to that use, the AI user is profiting from another person’s work without permission.
Who Owns AI-Generated Material?
Ownership is another murky area of the relationship between AI and intellectual property. If a person or business uses AI to create content or develop a patentable invention, who owns the rights to the final product?
In 2019, the European Patent Office (EPO) refused two patent applications that listed an AI program as the inventor. The EPO based its decision on the grounds that an inventor must be a human being, not a machine. Under that precedent, an AI model itself cannot own IP, but does that mean the owner is the end user or the parties that created the data used to train the model?
The most straightforward solution would be to grant ownership to the end user. However, because AI automates much of the process, these parties can’t reasonably claim they came up with these ideas. Because AI often recycles ideas from other sources, it may be more ethical to assign ownership to the people behind its training data. In a database of millions of data points, though, it’s hard to say who specifically that is.
Security and Privacy Issues
AI may also jeopardise the privacy and security of some intellectual property. Training data sets are tempting targets for cybercriminals because they contain so much information in one place. If these databases include sensitive details like trade secrets and patents, that vulnerability becomes even more concerning.
As more industries like manufacturing embrace AI, they become attractive targets for hackers looking for valuable IP, making stronger cybersecurity paramount. The manufacturing sector already had the highest share of cyberattacks of any industry in 2021. If these companies also hold vast amounts of data from other parties to train AI, a single breach could affect hundreds or even thousands of outside organisations.
Many training databases gather copyrighted or trademarked information without the owners’ knowledge. As a result, a breach could expose the sensitive IP of parties who never knew it was vulnerable and had no control over the database holding it.
Balancing AI and IP
In light of these risks, organisations should approach AI and its impact on IP carefully. These issues don’t necessarily mean businesses should avoid AI altogether, but they emphasise the importance of increased governance and regulatory guidance over the practice.
Responsibility starts with AI developers. They should avoid using copyrighted material when training AI models, or obtain the informed consent of the rights holders if they do. In some cases, it may be best to use synthetic data, which preserves the statistical patterns of real data without containing any actual records, reducing both privacy and copyright infringement risks.
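As a rough sketch of the synthetic-data idea, with made-up numbers and a deliberately simple model, the example below fits per-column statistics to a small “real” dataset and samples new records from them. Production tools model the data far more carefully and can add formal privacy guarantees, but the principle is the same: the synthetic rows mimic the overall statistics without copying any individual record.

```python
import numpy as np

# Hypothetical "real" records: (age, annual spend) pairs we do not want to expose directly
real_data = np.array([
    [34, 1200.0],
    [29,  950.0],
    [45, 2100.0],
    [52, 1800.0],
    [38, 1400.0],
])

rng = np.random.default_rng(seed=42)

# Fit a simple per-column Gaussian to the real data
means = real_data.mean(axis=0)
stds = real_data.std(axis=0)

# Sample synthetic records that follow the same rough statistics
# but do not correspond to any real individual
synthetic_data = rng.normal(loc=means, scale=stds, size=(5, 2))

print("Real column means:     ", means)
print("Synthetic column means:", synthetic_data.mean(axis=0))
```

This toy version treats each column independently, so it ignores correlations between fields; real synthetic-data generators go to considerable lengths to preserve those relationships while still protecting the source records.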
Companies and users implementing AI should consider how it may use other people’s IP. If they can’t verify that a tool wasn’t trained on protected information, they should avoid it. Disclosing when a product or service used any AI-generated content may also help boost transparency.
As AI adoption grows, government agencies should review these concerns and enact new requirements accordingly. Copyright and trademark laws should adopt AI clauses, specifying who owns AI-generated content and requiring developers to meet certain disclosure and licensing standards.
Consider AI IP Issues Today to Prevent Future Problems
Generative AI is still young, but it’s growing quickly. Businesses, individuals and governments must ask questions about its ethical use now before it creates more complicated issues in the future.
AI and intellectual property have a complicated relationship, so there likely aren’t any easy answers to these problems. However, if organisations start working toward solutions today and bring more people into the conversation, positive change will come sooner.