OpenAI lost access to the Claude API this week after Anthropic said the company had violated its terms of service.
In a significant move that highlights the escalating
competition in the AI industry, Anthropic has revoked OpenAI's API access to
its Claude models. The decision comes after Anthropic discovered that OpenAI had
allegedly been using its tools, specifically the highly regarded coding tool Claude
Code, to benchmark and develop its own upcoming GPT-5 model.
Violation of Terms of Service
Anthropic's spokesperson, Christopher Nulty, confirmed that
OpenAI's actions were a "direct violation" of the company's terms of
service. Anthropic's commercial agreement explicitly prohibits customers from
using its services to "build a competing product or service, including to
train competing AI models."
OpenAI's technical staff reportedly used specialized
developer APIs, not the standard chat interface, to connect Claude to its
internal tools. This allowed them to systematically test Claude's performance
in areas like coding, creative writing, and safety responses to compare it
against their own models and inform the development of GPT-5.
OpenAI's Response and Industry Tensions
In its defense, OpenAI stated that evaluating other AI
systems to benchmark progress and improve safety is a standard industry
practice. While expressing respect for Anthropic's decision, OpenAI's chief
communications officer, Hannah Wong, also noted her disappointment, especially
since Anthropic's API access to OpenAI's models remains open.
This incident underscores the growing tensions within the AI
sector, where companies are becoming increasingly protective of their
intellectual property and competitive advantages. This is not the first time
Anthropic has restricted a competitor's access: the company previously cut off
the AI coding startup Windsurf amid rumors of a potential acquisition by
OpenAI.
A History of Divergence
The rivalry between Anthropic and OpenAI is rooted in their
shared origins. Anthropic was founded by a group of former OpenAI researchers,
including siblings Dario and Daniela Amodei, who left due to disagreements over
OpenAI's direction and its shift from a nonprofit to a for-profit entity. Anthropic
was founded as a public benefit corporation with a strong emphasis on AI
safety, which it describes as a "systematic science." This founding
mission and a focus on "Constitutional AI" have set it on a different
path from its former partner.
#Mutesa Techlink