From art and entertainment to healthcare and finance, generative AI is rapidly expanding to meet growing demand across industries. However, its use is also prompting necessary discussions around intellectual property, data privacy, and bias. The legal framework governing generative AI is still in its infancy as we learn more about the technology’s risks.
While challenges loom ahead, generative AI is also facing countless opportunities to develop and continue benefiting every industry under the sun. Let’s explore the sector’s evolving legal landscape and review valuable risk management insights for founders to grow their ideas safely and responsibly.
The Legal Landscape of Generative AI
Generative AI refers to technology that uses algorithms to create new content from existing data, closely mimicking human logic and thought processes. It can rapidly generate text, images, and even music when prompted, propelling its integration into workflows across a variety of industries.
While the technology has been in the works for some time, the late-2022 release of ChatGPT, OpenAI’s flagship chatbot, catapulted generative AI into the mainstream.
Its capabilities seem endless — among its flurry of use cases, it can generate educational content for teachers and review code for software developers. As a result, investing in this technology is becoming necessary for those looking to stay competitive.
However, generative AI remains two steps ahead of the regulations that govern it. The existing legal framework is a patchwork of laws and regulations created before AI became mainstream, which does little to address the technology’s complexities surrounding liability, intellectual property, data protection, and ethics.
Despite the technology’s rampant growth, countries are trying their best to adapt: The US recently established an AI task force and introduced legislation aimed at foundation model transparency. At the same time, Europe continues to refine its recently passed AI Act, and several Asian countries are focusing on regulating AI in the financial sector.
Key Generative AI Lawsuits and Claims
As legal frameworks evolve, issues surrounding AI are now being legally challenged. Recent lawsuits highlight concerns related to the data used to train the systems and draw attention to the ambiguity of the current legal framework governing generative AI.
| Who Was Involved? | What Happened? |
|---|---|
| Stability AI, Midjourney, DeviantArt | In January 2023, a group of visual artists filed a class-action lawsuit against Stability AI, Midjourney, and DeviantArt. The three plaintiffs alleged the defendants used their creative content to train their generative AI image-creation tools without permission, citing a list of violations against the companies, including copyright infringement and unfair competition. |
| Nicholas Basbanes, Nicholas Gage, OpenAI, Microsoft | In a similar court battle, authors Nicholas Basbanes and Nicholas Gage claimed that OpenAI and Microsoft trained their AI models on the authors’ copyrighted material without proper licensing. This case fanned the flames of the debate around using copyrighted content in AI training while questioning what degree of creative control authors retain over their works. |
| George Carlin’s estate, Dudesy | Lastly, early last year, the estate of the late comedian George Carlin took legal action against media company Dudesy. Following the airing of an hour-long comedy special that mimicked his unique voice and comedy style, Carlin’s estate alleged that the AI-generated program infringed on the comedian’s right of publicity and copyrighted work. |
Risk Management Strategies for Generative AI Companies
For AI companies to innovate and grow successfully, they must adopt an effective risk management strategy that protects IP rights, user data, and ethical development.
A good place to start is creating best practices for protecting and respecting IP rights in developing and deploying generative AI technologies, like identifying proprietary IP through consistent audits of all datasets and algorithms. This process should also ensure all third-party data used to teach the system is appropriately licensed and its use documented.
Meanwhile, global data protection laws are evolving to safeguard user information, further prompting generative AI companies to keep their data secure from bad actors. Protecting sensitive data and informing users of these practices is essential to fostering the ethical use of AI, instilling public confidence, and creating a positive long-term impact for your company.
Because you can never be too careful, you can also rely on insurance solutions to offer financial protection against potential lawsuits and claims. Let’s explore key coverages.
Intellectual Property (IP) Insurance
IP insurance offers financial coverage both for companies accused of infringing copyright and for litigation to enforce their own IP when it is infringed upon. Covering the financial burden of these cases is essential, as IP litigation is notoriously costly and, without proper coverage, can financially crush a company that is still growing.
Errors & Omissions (E&O) Insurance
E&O insurance, also known as professional liability or “malpractice” insurance, provides coverage for companies accused of providing inadequate work or service. This insurance is critical for an AI company that develops or sells software. If your customer encounters a technical or human error issue with your system, your company can be held liable for any loss they’ve incurred.
Cyber Liability Insurance
Cyber liability insurance offers well-rounded, far-reaching coverage against incidents arising from electronic activities. The spike in attacks in recent years has shown insurers how much financial damage a company can suffer from internet-based and IT infrastructure-based risks.
From extortion losses in ransom attacks to public relations expenses due to reputational damage, cyber liability insurance can provide personalized coverage for a list of cyber-related risks that can come in many forms.
Product Liability Insurance
Product liability insurance is built to protect companies that manufacture, deploy, or sell products. If a product causes harm or injury, this insurance provides financial protection for legal fees, settlements, or judgments related to a claim.
D&O Insurance
Directors & Officers (D&O) insurance is a type of liability insurance that provides coverage for the personal assets of corporate directors and officers and company assets if they are sued for alleged wrongful acts while managing the company. D&O insurance is nuanced and provides three layers of coverage that can benefit any growing company at every stage.
One example is its financial coverage for expenses incurred during a regulatory investigation. Because generative AI companies are subject to many changing regulations, D&O insurance can cover defense costs and penalties arising from regulatory investigations and compliance issues.
Preparing for the Future
It’s safe to say the demand for AI-generated content will continue to grow, as will the number of artists, writers, and content creators seeking to protect their work from unauthorized use in training systems. Staying on top of the latest industry trends, from new use cases to those being challenged in court, will be vital to a company’s success.
As litigation continues to draw legal lines, companies that can make swift changes will find it easier to adapt their operations to new laws.
This is why creating and enforcing an adaptive risk management strategy is paramount. A proactive approach will safeguard the company’s interests and contribute to the responsible and ethical development of AI technologies.
Amid sudden changes and new applications, generative AI companies can’t navigate this journey alone. At Founder Shield, we help your high-growth company navigate the complexities of the evolving legal framework so you can focus on growing your business.