by Eunice Buhler / September 22, 2023
As G2’s General Counsel, it’s my job to help build and protect the company, so it’s likely no surprise that generative AI is top of mind for me (and lawyers everywhere!).
While AI presents an opportunity for organizations, it also poses risks. And these risks raise concerns for all business leaders, not only legal departments.
With so much information out there, I recognize these waters can be difficult to navigate. So, to help get to the crux of these concerns and boil them down into a helpful guide for all business leaders, I recently sat down with some of the top minds in the AI space for a round-table discussion in San Francisco.
There, we discussed the changing landscape of generative AI, the laws affecting it, and what this all means for how our businesses operate.
We agreed that, yes, generative AI tools are revolutionizing the way we live and work. But we also agreed that there are several legal factors businesses should consider as they embark on their generative AI journeys.
Based on that discussion, here are seven things to consider when integrating AI into your company.
Your first task is to identify whether you're working with an artificial intelligence company or a company that uses AI. An AI company creates, develops, and sells AI technologies, with AI as its core business offering. Think OpenAI or DeepMind.
On the other hand, a company that uses AI integrates AI into its operations or products but doesn't create the AI technology itself. Netflix's recommendation system is a good example of this. Knowing the difference is pivotal, as it shapes the complexity of the legal terrain you need to navigate and determines which laws apply to you.
G2 lays out the key AI software available in this developing field. With a bird’s-eye view of the possible tools, you can make better decisions about which is right for your business.
Keep an eye on developments in the law, as generative AI regulations are on the horizon. Legislation is advancing rapidly in the US, UK, and Europe, and litigation involving AI is actively being decided. Keep in touch with your attorneys for the latest updates.
You can tell a lot about a company by its terms of use. What does a company value? How do they handle the relationship with their users or customers? The terms of use can serve as a litmus test.
OpenAI, for instance, explicitly states in its usage policies that its technology shouldn't be used for harmful, deceptive, or otherwise unethical applications. Bing Chat requires users to comply with laws prohibiting offensive content or behavior. Google Bard, meanwhile, focuses on data security and privacy in its terms, highlighting Google's commitment to protecting user data. Evaluating these terms is essential to ensuring your business aligns with the AI partner's principles and legal requirements.
We compared the terms of use and privacy policies of several key generative AI players to help us determine which AI tools would work best for our company’s risk profile and recommend you do the same.
Between your company and the AI company, who owns the input? Who owns the output? Will your company data be used to train the AI model? How does the AI tool process personally identifiable information, and to whom does it send it? How long will the input or output be retained by the AI tool?
Answers to these questions inform the extent to which your company will want to interact with the AI tool.
When using generative AI tools, it’s paramount to understand the extent of your ownership rights to the data you put into the AI and the data derived from it.
From a contractual perspective, the answers depend on the agreement you have with the AI company. Always ensure that the terms of use or service agreements detail the ownership rights clearly.
For example, OpenAI takes the position that, as between the user and OpenAI, the user owns all inputs and outputs. Google Bard, Microsoft’s Bing Chat, and Jasper Chat similarly grant full ownership of input and output data to the user but simultaneously reserve a broad license to use AI-generated content in a multitude of ways.
Anthropic’s Claude, by contrast, grants ownership of input data to the user but only “authorizes users to use the output data.” Anthropic also reserves a license, but only “to use all feedback, ideas, or suggested improvements users provide.” In short, the contractual terms you enter into vary widely across AI companies.
AI's ability to generate unique outputs creates questions about who has intellectual property (IP) protections over those outputs. Can AI create copyrightable work? If so, who is the holder of the copyright?
The law is not entirely clear on these questions, which is why it's crucial to have a proactive IP strategy when dealing with AI. Consider whether it is important for your business to enforce IP ownership of the AI output.
Presently, jurisdictions are divided in their views on copyright ownership for AI-generated works. On one hand, the U.S. Copyright Office takes the position that AI-generated works, absent any human involvement, cannot be copyrighted because they are not authored by a human.
Note: The US Copyright Office is currently accepting public comment on how copyright laws should account for ownership with regard to AI-generated content.
Source: Federal Register
For AI-generated works created in part by human authorship, the U.S. Copyright Office takes the position that the copyright will only protect the human-authored aspects, which are ‘independent of’ and ‘do not affect’ the copyright status of the AI-generated material itself.
On the other hand, UK law provides that AI output can be owned by a human or business, and that the AI system can never be the author or owner of the IP. Clarifications from many global jurisdictions are pending and are a must-watch for business lawyers, as a significant increase in litigation over output ownership is anticipated in the next few years.
Privacy is another vital area to consider. You need to know where your data is stored, whether it's protected adequately, and if your company data is used to feed the generative AI model.
Some AI companies anonymize data and do not use it to improve their models, while others might. It's essential to establish these points early on to avoid potential privacy breaches and to ensure compliance with data protection laws.
Broadly speaking, today’s privacy laws generally require companies to do a few key things: be transparent about what personal data they collect and how they use it, obtain appropriate consent, honor user rights such as access and deletion, and limit collection and retention to what is necessary.
From a technical perspective, the way AI models are built makes it extremely difficult to separate out personal information, so full compliance with these laws is practically challenging. Privacy laws are constantly changing, and we certainly expect the advent of AI to inspire further changes to them.
If your company operates in the European Union, compliance with the General Data Protection Regulation (GDPR) becomes critical. The GDPR maintains strict regulations concerning AI, focusing particularly on transparency, data minimization, and user consent. Non-compliance could result in hefty fines, so it's essential to understand and adhere to these regulations.
The European Union’s proposed Artificial Intelligence Act (AIA) is a new legal framework aimed at regulating the development and use of AI systems. Like the GDPR, it would apply to any AI company doing business with EU citizens, even if the company is not domiciled in the EU.
The AIA regulates AI systems based on a classification system that measures the level of risk the technology could pose to the safety and fundamental rights of humans.
The risk levels include: unacceptable risk, high risk, limited risk, and minimal risk.
Both AI companies and companies integrating AI tools should consider making their AI systems compliant from the start by incorporating the AIA’s requirements during the development phase of their technology.
The AIA is expected to take effect by the end of 2023, with a two-year transition period for companies to become compliant. Failure to comply could result in fines of up to €33 million or 6% of a company’s global income (steeper than the GDPR, under which noncompliance is penalized at the greater of €20 million or 4% of a company’s global income).
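To make those “greater of” penalty ceilings concrete, here is a minimal sketch in Python. The revenue figure is hypothetical, and the caps simply plug in the AIA and GDPR numbers cited above; this is an illustration of the fine structure, not legal guidance.

```python
def max_fine(fixed_cap_eur: float, pct_of_revenue: float, global_revenue_eur: float) -> float:
    """Penalty ceilings are the greater of a fixed amount or a share of global income."""
    return max(fixed_cap_eur, pct_of_revenue * global_revenue_eur)

# Hypothetical company with €2 billion in global annual income (illustrative only)
revenue = 2_000_000_000
aia_ceiling = max_fine(33_000_000, 0.06, revenue)   # AIA figures cited above
gdpr_ceiling = max_fine(20_000_000, 0.04, revenue)  # GDPR figures cited above

print(f"AIA ceiling:  €{aia_ceiling:,.0f}")   # AIA ceiling:  €120,000,000
print(f"GDPR ceiling: €{gdpr_ceiling:,.0f}")  # GDPR ceiling: €80,000,000
```

For a company of any meaningful size, the percentage prong dominates, which is why these ceilings scale with revenue rather than sitting at a fixed number.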
Lastly, your company's officers and directors have fiduciary duties to act in the best interest of the company. Nothing new there. What is new, however, is that their fiduciary duties can extend to decisions involving generative AI.
There is added responsibility for the board to ensure the company’s ethical and responsible use of the technology. Officers and directors must consider potential legal and ethical issues, the impact on the company's reputation, and financial implications when working with AI tools.
Officers and directors should be fully informed about the risks and benefits of generative AI before making decisions. In fact, many companies are now appointing chief AI officers whose responsibility is to oversee the company’s strategy, vision, and implementation of AI.
AI will significantly impact these fiduciary duties. With the rise of AI, leaders will need to keep up with the technology to ensure they're making the best decisions for the company. For instance, they might need to use AI tools to help analyze data and predict market trends; if they ignore these tools and make poor decisions, they could be seen as not fulfilling their duties.
As AI becomes more prevalent, officers and directors will need to navigate new ethical and legal challenges, like data privacy and algorithmic bias, to ensure they are managing the company in a responsible and fair manner. So, AI is adding a new layer of complexity to what it means to be a good company leader.
Just last month, two new pieces of generative AI legislation were introduced in Congress. First, the No Section 230 Immunity for AI Act, a bill that aims to deny generative AI platforms Section 230 immunity under the Communications Decency Act.
Note: Section 230 immunity generally insulates an online computer service from liability for third-party content hosted on its site and generated by its users. Opponents of this bill argue that because the users are providing the input, they are the content creators, not the generative AI platform.
Alternatively, proponents of the bill argue that the platform provides information that generates the output in response to the user’s input, making the platform a co-creator of that content.
The proposed bill could have a huge effect: it could hold AI companies liable for content that users generate with their AI tools.
The second policy, the SAFE Innovation Framework for AI, focuses on five policy objectives: Security, Accountability, Foundations, Explainability, and Innovation. Each objective aims to balance the societal benefits of generative AI against the risks of societal harm, including significant job displacement, misuse by adversaries and bad actors, supercharged disinformation, and bias amplification.
Continue to look out for new laws on generative AI, as well as pronouncements on how its deployment interacts with existing laws and regulations. HIPAA, for example, is not an AI law, but it will need to work alongside generative AI regulations.
Note: It is anticipated that the upcoming 2024 election will be pivotal for the generative AI landscape from a regulatory perspective.
While your legal teams will keep you informed, it’s important for all business leaders to have awareness of the issues.
You don’t need to be an expert in all the legal details, but understanding these seven considerations will help you address concerns and know when to turn to legal counsel for expert advice.
When the partnership between AI and business is done right, we can all contribute to the growth and protection of our businesses, speeding innovation while avoiding risk.
Wondering how AI is impacting the legal industry as a whole? Learn more about the evolution of AI and law and what the future holds for the pair.
Eunice is General Counsel at G2 (the company’s first ever!). Entrepreneurialism has been part of Eunice’s life from the start: as a kid, she created a newspaper business; in middle school, she founded a global non-profit; during her gap year, she published a book; and in college, she was immersed in Silicon Valley’s start-up culture at Stanford University. Professionally, she cut her teeth at a high-powered Chicago law firm and eventually traded client services for company-building. She loves using the practice of law to help grow businesses. When she’s not leading G2’s Legal department, she volunteers for numerous charities focused on healthcare, scholarships, and poverty alleviation.