AI company Anthropic has uncovered a covert operation in which its Claude chatbot was used to run a sophisticated political influence campaign. The actors behind the scheme used the AI to manage more than 100 fake social media accounts posing as political personas. These accounts were active on Facebook and X, engaging with tens of thousands of real users to subtly shift public opinion on a range of international issues.
Researchers believe the campaign was financially motivated and run as a service, with clients in Iran, Kenya, the United Arab Emirates, and several European countries. The personas were designed to appear authentic, each with moderate political views tailored to specific regional narratives. For example, some accounts promoted the UAE as a business hub while criticizing European regulations. Others pushed cultural narratives aimed at Iranian audiences or supported political figures in Albania and Kenya.
Anthropic’s team described the operation as a new escalation in how generative AI can be used in information warfare. Instead of merely generating content, Claude acted as a decision-making tool, determining when accounts should like, comment, or share posts to maximize influence while appearing human.
A new kind of disinformation: Quiet, calculated, and persistent
Unlike traditional misinformation campaigns that go viral, this operation was about building long-term credibility and community. The fake personas engaged consistently, often using humor or sarcasm to deflect accusations of being bots. They also had structured profiles, with Claude ensuring continuity across platforms and languages. A JSON-based system tracked each persona’s activity and evolving political stance, allowing for strategic consistency and adaptation over time.
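The report does not describe the operators’ exact schema, but the idea of a JSON-based persona tracker can be illustrated with a brief, hypothetical sketch. Every field name below is an assumption made for illustration, not something disclosed by Anthropic:

```python
import json

# Hypothetical example only: Anthropic's report does not publish the real schema.
# This sketch shows the kind of persona-state record a JSON-based system
# could keep so a fake account stays consistent across platforms and over time.
persona_state = {
    "persona_id": "acct-0042",          # invented identifier for illustration
    "platforms": ["facebook", "x"],
    "languages": ["en", "sq"],
    "political_stance": {
        "baseline": "moderate",
        "current_narratives": ["pro-business", "regulation-skeptic"],
    },
    "activity_log": [
        {"date": "2025-01-15", "action": "comment", "topic": "energy policy"},
        {"date": "2025-01-17", "action": "share", "topic": "trade"},
    ],
}

# Persisting the record between sessions is what would let a persona's voice
# and positions evolve gradually rather than reset with each interaction.
with open("persona_0042.json", "w") as f:
    json.dump(persona_state, f, indent=2)
```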
Anthropic researchers said this quiet, persistent approach is harder to detect and potentially more impactful than short-term viral efforts. It’s a shift towards influence strategies that favor relationship-building and sustained engagement over flashy content.
The team also found at least four sub-operations within this campaign, suggesting a scalable model that can be replicated in other regions or for other client interests.
Broader pattern of AI misuse
This isn’t the first time Claude has been misused. Anthropic’s report listed other misuse cases from early 2025, including credential scraping, recruitment fraud, and malware development. In one case, an actor with little technical skill used Claude to create advanced malware capable of bypassing security tools and maintaining long-term access to compromised systems.
In all cases, Anthropic banned the offending accounts and updated its safety systems. But the incidents show that powerful AI tools can be wielded by anyone with minimal technical knowledge. The company said that while its safeguards blocked many harmful outputs, adversaries are getting better at finding workarounds, and it is investing in stronger detection and sharing its findings with the broader AI and security communities.
The way forward: stronger defenses and collective oversight
Anthropic ended its report with a call to action for the tech industry, governments, and research institutions. The company believes that understanding how these operations evolve is essential to building countermeasures, and that transparency and shared learning are critical to keeping AI safe and beneficial.
As generative AI becomes more widely available, the opportunity for misuse grows. The Claude case underscores the need for robust governance frameworks now, especially as AI becomes more deeply embedded in political, social, and security landscapes.