Anthropic pledges no client data for AI training

Anthropic, a generative artificial intelligence (AI) startup, has promised not to use client data to train its large language models, according to updates to the Claude developer’s commercial terms of service. The company also pledges to support users in copyright disputes.
Anthropic, led by ex-OpenAI researchers, revised its commercial terms of service to clarify its stance. Effective January 2024, the changes state that Anthropic’s commercial customers also own all outputs generated by using its AI models. The Claude developer “does not anticipate obtaining any rights in Customer Content under these Terms.”
In the latter half of 2023, OpenAI, Microsoft and Google pledged to support customers facing legal issues over copyright claims related to the use of their technologies.
Anthropic has made a similar commitment in its updated commercial terms of service, promising to protect customers from copyright infringement claims arising from their authorized use of the company’s services or outputs. Anthropic stated:
“Customers will now enjoy increased protection and peace of mind as they build with Claude, as well as a more streamlined API that is easier to use.”
As part of its legal protection pledge, Anthropic said it will pay for any approved settlements or judgments resulting from infringements by its AI. The terms apply to Claude API customers and to those using Claude through Amazon Bedrock, Amazon’s generative AI development suite.
Related: Google taught an AI model how to use other AI models and got 40% better at coding.
The terms state that Anthropic does not plan to acquire any rights to customer content and does not provide either party with rights to the other’s content or intellectual property by implication or otherwise.
Advanced AI systems like Anthropic’s Claude, GPT-4 and Llama, known as large language models (LLMs), are trained on extensive text data. The effectiveness of an LLM depends on diverse and comprehensive training data, which improves accuracy and contextual awareness by exposing the model to varied language patterns, styles and new information.
Universal Music Group sued Anthropic in October, alleging copyright infringement involving “vast amounts of copyrighted works – including the lyrics to myriad musical compositions” owned or controlled by the publishers.
Anthropic isn’t alone in being the target of such lawsuits. Author Julian Sancton is suing OpenAI and Microsoft for allegedly using nonfiction authors’ work without authorization to train AI models, including ChatGPT.
Magazine: Top AI tools of 2023, weird DEI image guardrails, ‘based’ AI bots: AI Eye