Emerging Risks for the AI Industry and How to Prepare

Jonathan Selby, General Manager, Founder Shield

Understanding the AI Industry Landscape 

AI has touched almost every industry in the past year: gaming studios are building avatars with AI, a major actors’ union has agreed to let AI perform voice acting in video games, and the technology is even being used to predict the outcomes of legal cases. The world is settling into a new normal of AI everywhere, while the industry itself grapples with rampant growth and seemingly endless potential; according to Statista, it is expanding at almost 16% annually.

This skyrocketing surge has been driven by companies like OpenAI, whose generative AI chatbot, ChatGPT, broke internet records when it launched in late 2022 thanks to its human-like responses. The company’s text-to-image tool, DALL-E, now generates pictures that often require more than a double take to tell whether an image is real or fake.

Google also launched its own tool, Gemini (formerly Bard), which integrates with Google apps, and Elon Musk joined the AI race with Grok, a generative chatbot available to premium users on X (formerly Twitter).

Several startups are also putting up a fight against these tech giants, each revolutionizing its own category. Jasper.ai, for example, shook up the marketing space with dedicated content generation, and Midjourney has become a worthy rival to DALL-E. While these tools target broad audiences, platforms like Timely, Hugging Face, and Memora Health are enabling smarter scheduling, easier app development, and simpler healthcare task management, respectively.

Identifying Emerging Risks in AI

Like any groundbreaking technology, AI hasn’t come without risks. Most of these tools are trained on enormous amounts of data, and each company uses its own algorithms to process inputs and produce outputs. Since many of these algorithms learn in unsupervised ways, it’s difficult to tell how they select and weight data, and whether human input introduces bias.
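
Explainability tooling offers one practical response to that opacity. Below is a minimal sketch, using scikit-learn’s permutation importance, of how a team might probe which inputs a model actually relies on; the dataset, feature names, and model choice are all hypothetical stand-ins, not a prescription.

```python
# A minimal sketch of probing an opaque model with permutation importance.
# The dataset, feature names, and model choice are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 4))                # synthetic stand-in data
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # outcome driven by two features

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in accuracy: large drops reveal
# which inputs the model leans on, even when its internals are opaque.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["age", "income", "zip_code", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```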

Ethical and Privacy Concerns

One of the most prevalent AI industry risks is its ethical use. To start, the material used to train AI largely stems from original works: books, news articles, essays, poems, and a myriad of visual art. Unfortunately, the technology is still largely unregulated, and there is no formal framework limiting the use of original works for training purposes.

This is exactly what the New York Times is currently battling with Microsoft and OpenAI in federal court: the newspaper alleges that both companies illegally used its original articles to train ChatGPT without paying for the content. Many others have filed similar lawsuits, and some even question whether a generated image is an original piece at all or merely a stitched-together composite of other original visuals.

Stemming from the same issue, people are concerned about the use of their personal data to train algorithms. A UN Special Rapporteur on the right to privacy has reported that transparency and explainability around personal data usage help keep companies away from unethical practices. While further regulations take shape, AI companies can do their part by keeping users informed about which datasets feed their algorithms and which are deliberately excluded.
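
One lightweight way to operationalize that disclosure is a machine-readable record published alongside the model. The sketch below is an illustrative assumption, not a formal standard; every field name and value is invented.

```python
# A minimal sketch of a machine-readable training-data disclosure.
# The fields and example values are assumptions, not an established schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class DatasetDisclosure:
    name: str
    contains_personal_data: bool
    consent_basis: str           # e.g. "user opt-in", "licensed", "public domain"
    excluded_sources: list[str]  # data deliberately kept out of training

disclosure = DatasetDisclosure(
    name="support-chat-corpus-v2",
    contains_personal_data=True,
    consent_basis="user opt-in",
    excluded_sources=["minors' accounts", "payment records"],
)
print(json.dumps(asdict(disclosure), indent=2))
```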

Another major issue facing AI is potential bias in its training material and algorithms. UnitedHealthcare is currently facing a lawsuit from patients alleging that the AI it used to filter claims denied those of senior Medicare members based on a model with a 90% error rate. Discriminatory outcomes like these have serious consequences for users’ lives and can permanently damage a company’s reputation.
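
A basic disparate-impact screen can catch cases like this before they reach real people. Here is a minimal sketch in pandas; the column names, toy data, and the four-fifths threshold (a common screening heuristic, not a legal standard) are illustrative assumptions.

```python
# A minimal sketch of a disparate-impact check on automated claim decisions.
import pandas as pd

claims = pd.DataFrame({
    "age_group": ["<65", "<65", "65+", "65+", "65+", "<65"],
    "approved":  [1,     1,     0,     0,     1,     1],
})

# Compare approval rates across groups; a ratio well below 1.0 for any group
# is a signal to audit the model before its decisions affect real patients.
rates = claims.groupby("age_group")["approved"].mean()
ratio = rates.min() / rates.max()
print(rates, f"\napproval-rate ratio: {ratio:.2f}")
if ratio < 0.8:  # "four-fifths rule", often used as a screening heuristic
    print("Potential disparate impact -- flag for human review.")
```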

Technology Misuse and Security Threats

AI has proven to be a powerful tool with positive results, such as automating workflows and lessening the burden of repetitive tasks. However, it can also produce harmful content on behalf of malicious users. Google, for example, recently reported on the technology’s ability to make cyberattacks considerably smarter, lowering the bar for hackers to obtain sensitive information.

AI is also deceptively good at mimicking people’s likenesses, including their facial features and voices, which has been exploited for harmful purposes like deepfakes. Public figures such as Taylor Swift and US President Joe Biden have already had their likenesses used to scam people or spread fake news.

AI systems also work like sponges, absorbing all the information they receive to become more knowledgeable and useful. That same deep learning capability is a double-edged sword: any personal data fed into these systems could be exposed in a data breach. While AI companies are racing to keep their systems safe, a successful cyberattack could put a flood of personal information at risk of theft.

Regulatory and Compliance Risks

As AI evolves, so do its regulations and the need to stay compliant. AI companies should work just as hard at implementing incoming frameworks as they do at advancing their technology. As the risks above show, new frameworks are necessary for AI to thrive and truly aid its users rather than harm them, and managing risk is crucial in this context. Companies that adapt to such regulations quickly will be better positioned to succeed.

That is no easy task when AI platforms operate worldwide and different jurisdictions impose different rules. This is where leaders must equip themselves with agile compliance teams that can keep pace and ensure their services remain safe and lawful.

Impact on Businesses and Consumers

Sam Altman, co-founder and CEO of OpenAI, has said that, when used responsibly, AI can make the world much better and more abundant. He is also aware of the risks the technology poses and urges leaders to pay close attention to them. AI has the power to improve efficiency, accuracy, and productivity at work, but used incorrectly, it can be deeply detrimental to people and businesses.

As AI is increasingly used for content creation, HR processes, legal cases, and many other important tasks, companies must carefully vet their providers and services to avoid costly, even embarrassing, oversights. For example, leaders don’t want talent recruiters discriminating against candidates because of faulty AI programming, or AI-powered healthcare systems generating harmful recommendations for patients.

Left unmanaged, these risks can erode a company’s reputation and credibility, create poor customer experiences, damage individuals’ public images through deepfakes, fuel cyberattacks and scams, and spread misinformation.

Risk Management Strategies for AI Companies

Amid accusations of bias, discrimination, privacy violations, and a lack of transparency, AI companies are under attack from all sides. They may be driving the future of business and creating scalable solutions, but they are clearly not free of regulatory and legal risk. Consider these three risk management strategies for building a robust, agile business:

Develop Ethical Frameworks

To build ethical AI systems and ensure transparency in AI operations, companies should consider adopting ethical frameworks. These structures let business leaders define an AI risk management approach and establish a methodology for managing AI and machine learning (ML) models in production.

Yet 89% of respondents to a 2023 Deloitte survey said their company does not have (or they are unsure whether it has) ethical principles governing emerging technology products.

Creating these responsible frameworks and principles starts with companies familiarizing themselves with the emerging technology and aligning it with their mission and values. Business leaders can then look to AI practitioners to implement an AI Center of Excellence (CoE), a team of experts that consistently assesses the AI lifecycle. Pilot programs also create time to discuss the ethical and legal risks of AI.

Most importantly, these frameworks are not created once and left to gather dust. Companies must update them frequently to maintain transparency with users, fair and impartial outputs, and higher-quality services and products.

Invest in Cybersecurity and Data Protection

AI systems face novel security vulnerabilities on top of conventional cybersecurity threats, and secure AI development spans many stages: design, development, deployment, and operation and maintenance. Safeguarding the data used in AI applications therefore requires a multi-layered approach combining data encryption, strict access controls, and adversarial training.
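
As one concrete layer, personal data can be encrypted before it ever enters a training pipeline. The sketch below uses the widely available `cryptography` package; the record contents are hypothetical, and real deployments would pull keys from a managed key service rather than generating them inline.

```python
# A minimal sketch of encrypting sensitive records at rest before training.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, fetched from a KMS, never hardcoded
fernet = Fernet(key)

record = b"patient_id=4821;diagnosis=..."  # hypothetical sensitive record
token = fernet.encrypt(record)             # store only the ciphertext

# Decrypt only inside the access-controlled training environment.
assert fernet.decrypt(token) == record
```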

Investing in AI-powered cybersecurity solutions can also equip IT teams at AI companies with the tools to safeguard the business and fend off cyberattacks. Teams can deploy real-time monitoring, leverage penetration testing platforms with AI capabilities, and use ML to protect against malware.
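
For a flavor of what ML-driven monitoring can look like, here is a minimal sketch using scikit-learn’s isolation forest to flag anomalous traffic; the features, numbers, and thresholds are all hypothetical.

```python
# A minimal sketch of ML-based monitoring: an isolation forest learns what
# "normal" traffic looks like, then flags outliers. All values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Each row: [requests_per_minute, failed_logins, bytes_transferred]
normal_traffic = rng.normal(loc=[60, 1, 5_000], scale=[10, 1, 500], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_traffic)

suspicious = np.array([[400, 25, 90_000]])  # login burst plus a large transfer
print(detector.predict(suspicious))         # -1 means "anomaly" -- raise an alert
```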

Navigate Regulatory Compliance

Companies can work with legal experts to anticipate and adapt to regulatory changes, such as the European Parliament and Council’s AI Act, which is expected to come into force in 2025.

It’s important to be familiar with other established standards and procedures, too. For example, some companies, including Nike and Walmart, have adopted the Data & Trust Alliance’s Algorithmic Bias Safeguards for Workforce and its Responsible Data & AI Diligence for M&A.

Staying current on standards and evolving AI regulation demonstrates that an AI company takes ownership of security outcomes for its customers and remains informed and compliant. Attending industry events won’t go amiss either, such as last year’s Web Summit, where speakers discussed AI regulation.

Preparing for the Future With Insurance

Insurance is the number one risk mitigator because it provides a financial safety net. For developers in particular, it encourages experimenting with bolder ideas, knowing they’re shielded from the worst outcomes.

Consider autonomous vehicles, powered by sensors, cameras, and AI. Even with sophisticated algorithms in place, accidents remain a possibility, so liability insurance for autonomous vehicles can cover the resulting damages, protecting both companies and potential victims.

Some insurance companies have recognized how autonomous cars could make life safer and want to play a key part in bringing the technology to the roads. AXA, for example, pressed the UK government for clear legislation so the technology could be rolled out, integrated into daily life, and, most importantly, insured safely.

It’s worth researching which insurance brokers and underwriters are focused on evolving and personalizing their AI insurance policies and offerings. Ongoing dialogue among stakeholders is also key to keeping companies informed about relevant insurance options and guarding against emerging AI risks.

In general, here are a handful of recommended insurance policies for AI companies:

  • General liability — a policy that covers personal or property damage occurring on business premises. 
  • Tech errors and omissions (E&O) — a policy that protects businesses offering a technology service or product from the financial fallout of errors, omissions or negligence.  
  • Cyber liability — a policy protecting companies from third-party lawsuits related to cyberattacks, data breaches, etc. 
  • Product liability — a policy that covers legal fees, court costs, or settlements resulting from product liability claims. 
  • Directors & officers (D&O) liability insurance — a policy that covers the compensation costs of claims made against directors and officers.
  • Intellectual property (IP) insurance — a policy designed to protect businesses in the event of IP infringement claims.

In the evolving AI industry, it’s difficult to stay on top of every risk. That’s where Founder Shield comes in, helping AI companies stay proactive with risk management and insurance while encouraging continuous adaptation.
