Generative AI: Legal Landscape, Key Lawsuits, & Risk Mitigation Strategies

Jonathan Selby

General Manager; Technology Practice Lead

From art and entertainment to healthcare and finance, generative AI is rapidly expanding to meet growing demand across industries. However, its usage is also prompting necessary discussions around intellectual property, data privacy, and bias. The existing legal framework for generative AI is still in its infancy as we learn more about the technology’s risks.

While challenges loom ahead, generative AI also has countless opportunities to develop further and keep benefiting every industry under the sun. Let’s explore the sector’s evolving legal landscape and review valuable risk management insights for founders who want to grow their ideas safely and responsibly.

The Legal Landscape of Generative AI

Generative AI refers to technology that uses algorithms to create new content from existing data, closely resembling human-like logic and thought processes. It can rapidly create text, images, and even music when prompted, propelling its integration into workflows across a variety of industries.

While the technology has been in development for some time, the late-2022 release of ChatGPT, OpenAI’s flagship product, catapulted generative AI’s evolution and mainstream success.

Its capabilities seem endless — among its flurry of use cases, it can generate educational content for teachers and review code for software developers. As a result, investing in this technology is becoming necessary for those looking to stay competitive.

However, generative AI remains two steps ahead of the regulations that govern it. The existing legal framework is a patchwork of laws and regulations created before AI became mainstream, and it does little to address the technology’s complexities surrounding liability, intellectual property, data protection, and ethics.

Despite the technology’s rampant growth, countries are trying their best to adapt: the US has recently moved to establish an AI task force and introduce foundation model transparency requirements. At the same time, Europe continues to refine its recently passed AI Act, and Asian countries are focusing on regulating AI in the financial sector.

Key Generative AI Lawsuits and Claims

As legal frameworks evolve, issues surrounding AI are now being legally challenged. Recent lawsuits highlight concerns related to the data used to train the systems and draw attention to the ambiguity of the current legal framework governing generative AI.

Who Was Involved: Stability AI, Midjourney, DeviantArt
What Happened: In January 2023, a group of visual artists filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt. The three plaintiffs alleged the defendants used their creative content to train their generative AI image-creation tools without permission, citing a list of violations against the companies, including copyright infringement and unfair competition.

Who Was Involved: Nicholas Basbanes, Nicholas Gage, OpenAI, Microsoft
What Happened: In a similar court battle, authors Nicholas Basbanes and Nicholas Gage claimed that OpenAI and Microsoft fed their AI models with the authors’ copyrighted material without proper licensing. The case fanned the flames of the debate around using copyrighted content in AI training while questioning how much creative control authors retain over their works.

Who Was Involved: George Carlin’s estate, Dudesy
What Happened: Lastly, early last year, the estate of the late comedian George Carlin took legal action against media company Dudesy after it aired an hour-long comedy special mimicking the comedian’s distinctive voice and style. Carlin’s estate alleged that the AI-generated program infringed on his right of publicity and his copyrighted work.

Risk Management Strategies for Generative AI Companies

For AI companies to innovate and grow successfully, they must adopt an effective risk management strategy that protects IP rights, user data, and ethical development.

A good place to start is creating best practices for protecting and respecting IP rights when developing and deploying generative AI technologies, such as identifying proprietary IP through consistent audits of all datasets and algorithms. This process should also ensure that all third-party data used to train the system is appropriately licensed and that its use is documented.
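
As a minimal sketch of what such a dataset audit could look like in practice, the snippet below checks a hypothetical licensing manifest (a "datasets.json" file your team would maintain alongside each training corpus). The file name and the required fields are illustrative assumptions, not an industry standard.

```python
# Minimal sketch of a dataset-provenance audit. Assumes a hypothetical
# "datasets.json" manifest maintained alongside each training corpus;
# the required fields below are illustrative, not a standard.
import json
from pathlib import Path

REQUIRED_FIELDS = {"name", "source", "license", "license_evidence", "reviewed_on"}

def audit_manifest(path: str) -> list[str]:
    """Return a list of licensing problems found in the manifest."""
    problems = []
    for entry in json.loads(Path(path).read_text()):
        missing = REQUIRED_FIELDS - entry.keys()
        if missing:
            problems.append(f"{entry.get('name', '<unnamed>')}: missing {sorted(missing)}")
        elif entry["license"].strip().lower() in {"", "unknown", "unlicensed"}:
            problems.append(f"{entry['name']}: no documented license")
    return problems

if __name__ == "__main__":
    for issue in audit_manifest("datasets.json"):
        print("FLAG:", issue)
```

Entries flagged by a check like this are exactly the ones you would want resolved, and documented, before the data ever reaches a training run.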

On the other hand, global data protection laws are evolving to safeguard user information, further prompting generative AI companies to keep their data secure from bad actors. Protecting sensitive data and informing users of these practices is essential to fostering the ethical use of AI, instilling public confidence, and securing a positive long-term impact on your company.

Because you can never be too careful, you can also rely on insurance solutions to offer financial protection against potential lawsuits and claims. Let’s explore key coverages.

Intellectual Property (IP) Insurance

IP insurance offers financial coverage for companies accused of infringing copyright, as well as for litigation they pursue when their own IP is infringed upon. Covering the financial burden of these cases is essential, as this litigation is notoriously costly and, without proper coverage, can financially crush a company that is still growing.

Errors & Omissions (E&O) Insurance

E&O insurance, also known as professional liability or “malpractice” insurance, provides coverage for companies accused of providing inadequate work or service. This insurance is critical for an AI company that develops or sells software. If your customer encounters a technical or human error issue with your system, your company can be held liable for any loss they’ve incurred.

Cyber Liability Insurance

Cyber liability insurance offers well-rounded and far-reaching coverage against cyber incidents related to electronic activities. The spike in attacks in recent years has shown insurers the ways a company can face extensive financial damage stemming from internet-based and IT infrastructure-based risks.

From extortion losses in ransom attacks to public relations expenses due to reputational damage, cyber liability insurance can provide personalized coverage for a list of cyber-related risks that can come in many forms.

Product Liability Insurance

Product liability insurance is built to protect companies that manufacture, deploy, or sell products. If a product causes harm or injury, this insurance provides financial protection for legal fees, settlements, or judgments related to a claim.

D&O Insurance

Directors & Officers (D&O) insurance is a type of liability insurance that covers the personal assets of corporate directors and officers, as well as company assets, if they are sued for alleged wrongful acts while managing the company. D&O insurance is nuanced and provides three layers of coverage that can benefit a growing company at every stage.

One example is its financial coverage for expenses incurred during a regulatory investigation. Because generative AI companies are subject to many changing regulations, D&O insurance can cover defense costs and penalties arising from regulatory investigations and compliance issues.

Navigating Regulatory Compliance

It can be challenging to keep up with regulatory and compliance changes as they evolve alongside technology. This is even more so for AI companies, which are required to keep up with both advances in their industry and changes to the law. However, it’s not impossible to stay ahead of the curve.

The influx of information can be overwhelming, so it’s ideal to dedicate staff and resources to continuously researching and implementing developments in laws and policy. By focusing on the industries you serve, you are better positioned to stay informed of regulatory changes specific to your customers’ sector.

Be detailed and concise with your records. In the event of an insurance audit or a lawsuit, you want to effectively demonstrate your data sources, the consent you obtained, and any compliance measures you implemented. Implement this alongside transparent reporting practices for AI operations, including data usage, algorithmic decisions, and impact assessments. Doing so can save you time and money in the long run.
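
One hedged illustration of what this record-keeping could look like is a structured audit log with one JSON record per AI operation. The field names and the "ai_audit.log" path below are assumptions made for the sketch, not requirements of any regulation.

```python
# Sketch of structured audit logging for AI operations (data usage,
# model decisions, consent references). One JSON record per event;
# field names and the "ai_audit.log" path are illustrative assumptions.
import datetime
import json
import logging

logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")
audit_log = logging.getLogger("ai_audit")

def log_ai_event(model_version: str, data_sources: list[str],
                 consent_ref: str, outcome: str) -> None:
    """Append one auditable record describing a single AI operation."""
    audit_log.info(json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "data_sources": data_sources,
        "consent_reference": consent_ref,
        "outcome": outcome,
    }))

# Example: record which licensed corpus and consent batch backed a generation.
log_ai_event("gen-model-v1.3", ["licensed_corpus_2023"], "consent-batch-0917", "content_generated")
```

Append-only records like these are what let you reconstruct, months later, which data and consent actually sat behind a disputed output.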

Moreover, prioritize data practices and policies that align with privacy regulations. Start by implementing strong security measures like data encryption, access controls, and continuous monitoring to mitigate the far-reaching impacts of cybersecurity attacks. Involve your front lines by offering training on policies and security practices so employees understand their value.
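
For a small, hedged example of encryption at rest, the snippet below uses Fernet symmetric encryption from the widely used cryptography package. In a real deployment the key would come from a secrets manager and the decrypt path would sit behind access controls; this sketch only notes that in comments.

```python
# Minimal sketch of encrypting a sensitive user record at rest using
# Fernet (symmetric encryption) from the "cryptography" package.
# Key handling is simplified for illustration; in practice the key
# should come from a secrets manager, never be generated inline.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # illustration only: load from a secrets manager instead
cipher = Fernet(key)

record = b'{"user_id": 42, "email": "user@example.com"}'
token = cipher.encrypt(record)   # persist only the ciphertext
restored = cipher.decrypt(token) # decrypt only inside trusted, access-controlled services
assert restored == record
```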

Lastly, actively participate in your industry’s policy discussions and advocacy efforts. Whether it’s participating in public consultations, providing expertise to an advisory committee, or joining a research project, become an active participant in the discussions that continue to build the legal framework that will impact the responsible creation and use of generative AI — and your company’s future.

Preparing for the Future

It’s safe to say the demand for AI-generated content will continue to grow, as will the number of artists, writers, and content creators who seek to protect their work from unauthorized use in training systems. Staying on top of the latest industry trends, from new use cases to those put into question, will be vital for a company’s success.

As litigation continues to draw clearer lines, it will become easier for companies to adapt their operations to new laws, provided they can make swift changes.

This is why creating and enforcing an adaptive risk management strategy is paramount. A proactive approach will safeguard the company’s interests and contribute to the responsible and ethical development of AI technologies.

Amid sudden changes and new applications, generative AI companies can’t go it alone. At Founder Shield, we help your high-growth company navigate the complexities of the evolving legal framework so you can focus on growing your business.
