Like many other AI service providers, search giant Google is now providing its customers with limited protection against copyright infringement claims.
Numerous generative AI services utilise neural networks trained on content collected from various sources without consent or compensation. Content creators and authors have filed lawsuits to seek remuneration for the use of their works in these models, which can generate pieces closely resembling or directly copying their unique styles and themes.
Businesses considering the adoption of generative AI are understandably cautious about potential legal liabilities. Microsoft has already stated its commitment to defending customers using its Copilot products against copyright infringement lawsuits. Google has now followed suit with a similar approach.
Neal Suggs, VP of Legal, and Phil Venables, VP of TI Security and CISO at Google Cloud, said in a statement: “If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved. It means that you can expect Google Cloud to cover claims, like copyright infringement, made against your company, regardless of whether they stem from the generated output or Google’s use of training data to create our generative AI models.”
Google’s indemnity offer covers Duet AI, the chatbot integrated with its Workspace apps, as well as the Vertex AI platform, which encompasses AI Search, AI Conversation, AI Text, Multimodal Embeddings, and Visual Captioning software. Also, the code-generating Codey APIs are included under this indemnity policy.
The Alphabet subsidiary is offering protection against allegations that both the training data and the output of its AI systems infringe copyright, rather than covering output claims alone.
Terms to follow
However, this safeguard won’t apply in situations where a user intentionally instructs a model to create content that outright copies someone else’s work.
“You as a customer also have a part to play. For example, this indemnity only applies if you didn’t try to intentionally create or use generated output to infringe the rights of others, and similarly, are using existing and emerging tools, for example to cite sources to help use generated output responsibly,” Suggs and Venables said.
These limitations on indemnification are quite common. For instance, Microsoft’s policy only comes into effect when users utilise the AI safeguards specifically designed to prevent problematic results.
The question of whether AI systems infringe copyright during training or in their output remains unsettled and is still the subject of legal deliberation, particularly in the United States. However, the US Copyright Office has clarified that content generated solely by AI is not eligible for copyright protection.