
The Top Generative AI Legal Issues To Consider

Generative AI is an exciting technology with lots of potential—and risk. Learn more about the possible legal issues surrounding GenAI—and where we go from here.


Generative AI can quickly produce drafts for email, social media posts, blogs, and even images. It can be useful—but it isn’t without risk. Depending on how you use these technologies, your company could encounter disappointed clients, data breaches, and even legal issues.

This doesn’t mean you need to completely avoid generative AI. It just means you need to be aware of the risk—so you can plan ways to use AI safely, legally, and ethically.

Understanding generative AI

First off, we need to explain generative artificial intelligence (GenAI).

GenAI is a class of computer programs that users can interact with through written or visual instructions, called prompts.

There are a number of different machine learning models that we can group under the umbrella of “generative AI”, including:

  • Generative pretrained transformers (GPT), which are trained on large data sets and use probability to predict, word by word, the response a user is most likely to expect.
  • Generative adversarial networks (GAN), which pair two AI systems that work in tandem: a generator creates outputs meant to look realistic, while a discriminator tries to tell the generated results apart from real examples. GANs powered many early AI image generators. (A minimal code sketch of this generator-versus-discriminator loop appears after this list.)
  • Variational autoencoders (VAE), which compress complex data into simpler latent codes and then decode from those codes to generate new, similar data.
  • Normalizing flow models, which modify simple probability distributions to create more complex ones and generate new data points.
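
To make the GAN idea concrete, here's a minimal, illustrative PyTorch sketch of a generator and discriminator trained against each other on toy numeric data. All of the sizes, data, and hyperparameters below are invented for demonstration; real image-generation GANs are vastly larger and more elaborate.

    # Minimal GAN training loop sketch (toy data; illustrative only)
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2  # made-up toy sizes

    # Generator: maps random noise to a candidate "fake" data point
    generator = nn.Sequential(
        nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim)
    )
    # Discriminator: scores how likely an input is to be real data
    discriminator = nn.Sequential(
        nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1)
    )

    loss_fn = nn.BCEWithLogitsLoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(64, data_dim) * 0.5 + 2.0  # stand-in for real data
        fake = generator(torch.randn(64, latent_dim))

        # Train the discriminator to separate real samples from generated ones
        d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Train the generator to fool the discriminator
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()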

Generative pretrained transformers may sound the most familiar to many people, as they're the basis behind ChatGPT and other free AI chat tools like Bard and Claude. These tools use natural language processing (NLP) to create AI outputs that, while entirely probability-based, read much like natural human language.
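
Under the hood, these models assign a probability to every word that could come next and then sample from that distribution, one word (or token) at a time. Here's a toy Python sketch of that sampling step; the vocabulary and scores are made up for illustration, whereas a real GPT computes them with billions of learned parameters.

    # Toy next-word sampling (hypothetical scores; not a real model)
    import math
    import random

    def softmax(scores):
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    # Imagined model scores for words that could follow "The contract is"
    vocab = ["binding", "void", "signed", "purple"]
    scores = [2.1, 1.3, 1.7, -3.0]

    probs = softmax(scores)
    next_word = random.choices(vocab, weights=probs, k=1)[0]
    print(list(zip(vocab, [round(p, 3) for p in probs])), "->", next_word)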

With the help of a machine learning engineer, you could create your own GPT model, too—as well as a GAN, VAE, or normalizing flow model.  

Not all “AI-powered” tools use GenAI, though. Since ChatGPT launched in late 2022, many software platforms have begun touting their AI capabilities. While these companies may use machine learning algorithms and principles in their products, that doesn't make their tools generative AI.

Intellectual property concerns

Many consumer-facing products, like ChatGPT, have been trained with data that’s publicly available on the internet. This raises questions about intellectual property—and experts in American and international copyright law are still working to figure out the answers.

Can AI output count as plagiarism or copyright infringement?

Because publicly available generative AI models are often trained on internet data, their algorithms may begin to recreate patterns, structures, and sentences that are close to the source material. Some claim these outputs can amount to copyright infringement, or to plagiarism if a source's quote or idea is reproduced without attribution.

Right now, there aren’t clearly defined ways to defend against or detect generative AI copyright infringement and plagiarism—but some claim that using tools that provide sources, like Bing Chat, can help.

There's also the matter of whether AI companies have the appropriate permissions to use copyrighted works when training their large language models (LLMs), and of who—or what—holds the copyright on new, original AI outputs.

These are questions that courts and government agencies are working to figure out.

In August 2023, NPR reported that The New York Times was considering taking OpenAI to court for including the publication's articles in ChatGPT training data. Comedian Sarah Silverman also joined a class-action lawsuit against OpenAI in 2023, alleging that the company used an online copy of her memoir without permission while training its GPT LLM.

The outcome of these and similar disputes related to GenAI could continue to shape how businesses can legally and effectively use AI in their work, as well as whether the fair use doctrine protects the companies that train AI models.

Data privacy and generative AI

Intellectual property isn't the only question of rights that arises around the use of generative AI systems. Many people also wonder who is allowed to store and use the data that's been fed into an AI tool.

Generative AI data collection

Some AI companies use their customers' conversations with a tool to further train their LLMs. Whether this happens can vary based on which tool, and which version of the tool, you use.

According to data protection company Cyberhaven, as much as 11% of the data that workers feed into ChatGPT may be confidential company information, which could expose a company to risks like data breaches and the loss of intellectual property rights.

Every generative AI tool's data collection policies are different, so it's worth reviewing them before you share anything sensitive.

How AI stacks up against privacy laws

If your company's operations are affected by privacy laws like HIPAA, GDPR, or CCPA, you may face additional complications when using GenAI.

Some sources claim that doctors who use ChatGPT to consolidate notes or conduct research could be violating patient privacy laws.

If you work in a regulated industry or location, you may want to consider taking specific steps before incorporating GenAI into your work or onto your devices.

Misinformation and deepfakes

Let’s say you manage to work out the legal issues and you move forward with feeding data into a GenAI tool—can you trust the results?

AI hallucinations

Generative AI tools have been shown to “hallucinate,” producing factually incorrect answers that read as definitive statements. This happens because of the predictive nature of AI algorithms: a hallucinated response can sound just as confident as an accurate one.

When using GenAI, you may want to remember that generative AI tools aren't sentient, can't experience feelings like confidence, and aren't able to second-guess or fact-check themselves like a person. Human involvement is an important part of responsible AI use.

Sharing and distributing fake content

Generative AI can also be used to create and distribute fake content. This can include:

  • Factually inaccurate articles someone has prompted the AI to write
  • Deepfake videos that look and sound like a real or well-known person, but aren’t
  • Images showing people, places, and scenes in various combinations that never actually occurred

Once this content is online, it can easily be picked up and shared by people and publications that aren’t aware it’s fake—so fact-checking and verifying the information your business uses (or shares) is important.

Liability and accountability

Because generative AI is such a new technology, several key legal questions around AI are still in development. These include:

  • The proper use of data in training AI
  • Consequences of distributing fake information
  • Using someone else’s likeness to create AI-generated works
  • What constitutes AI plagiarism
  • Whether and under what circumstances AI outputs count as original work

You may want to seek legal advice if you’d like to incorporate AI into your business and have concerns about legal implications—such as how to safely use the technology in regulated industries.

Preparing for the future

There’s still a lot that’s uncertain about the use of GenAI. Lawyers, judges, court systems, and even national governments are actively exploring how people and businesses can responsibly and safely use AI.

In early 2023, for example, Italy temporarily banned the use of ChatGPT while regulators worked with OpenAI to confirm that the company’s data collection policies met specific standards.

Meanwhile, the U.S. Patent and Trademark Office (USPTO) continues to publish reports on how AI and intellectual property laws intersect.

There are also a number of organizations, such as Harvard's Berkman Klein Center for Internet & Society, the University of Toronto's Centre for Ethics, and the Future of Humanity Institute at the University of Oxford, working to produce useful resources on how our society can use and develop AI in a manner that supports humanity, privacy, creativity, and ownership.

Navigate the AI landscape with expert help

If you’re concerned about how you can best implement AI technology to support your work, Generative AI pros can help you better understand the tools you’ve selected and establish frameworks for use. It’s a smart step to take when exploring how AI fits into your operations.

Disclosure: Upwork is an OpenAI partner, giving OpenAI customers and other businesses direct access to trusted expert independent professionals experienced in working with OpenAI technologies.

Upwork does not control, operate, or sponsor the other tools or services discussed in this article, which are only provided as potential options. Each reader and company should take the time to adequately analyze and determine the tools or services that would best fit their specific needs and situation.


Author Spotlight

Emily Gertenbach
B2B SEO Content Writer & Consultant

Emily Gertenbach is a B2B writer who creates SEO content for humans, not just algorithms. As a former news correspondent, she loves digging into research and breaking down technical topics. She specializes in helping independent marketing professionals and martech SaaS companies connect with their ideal business clients through organic search.
