Political theorist Curtis Yarvin said he was able to guide Anthropic’s Claude chatbot into echoing his ideological views through sustained prompting.
Yarvin published a transcript he said showed how the model shifted from what he described as a progressive default to mirroring his political framing.
He claimed the change was achieved by embedding extensive prior dialogue into the chatbot’s context window.
"If you convince Claude to be based, you have a totally different animal," Yarvin said.
The transcript showed the model shifting from measured, hedged language to endorsing critiques associated with the John Birch Society.
AI researchers said the episode illustrates how large language models reflect the prompts and context they are given.
Experts noted that prompt engineering is a well-documented practice in which carefully framed inputs shape model outputs.
Anthropic builds safeguards into Claude, but sustained questioning can still elicit a wide range of responses.
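For readers unfamiliar with the mechanism, the sketch below uses Anthropic's Python SDK to show how the same question can be asked twice: once with an empty context window, and once preceded by accumulated prior dialogue, the technique the article describes. The model identifier and seed prompts are illustrative assumptions, not the actual exchange Yarvin published.

```python
# Minimal sketch: how prior dialogue in the context window can shape a
# model's framing. Model id and prompts are illustrative assumptions.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

question = {"role": "user", "content": "Summarise the debate over media bias."}

# Query 1: the question asked cold, with no prior context.
cold = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model id
    max_tokens=300,
    messages=[question],
)

# Query 2: the same question preceded by turns of earlier dialogue.
# These seed turns are placeholders standing in for a long exchange.
seeded_history = [
    {"role": "user", "content": "Earlier turn of an extended ideological dialogue..."},
    {"role": "assistant", "content": "Earlier model reply..."},
]
seeded = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=300,
    messages=seeded_history + [question],
)

# Comparing the two outputs shows how accumulated context, not any change
# to the model's weights, can shift the framing of the response.
print(cold.content[0].text)
print(seeded.content[0].text)
```

The point of the comparison is that nothing about the underlying model changes between the two calls; only the conversation history supplied in the `messages` list differs, which is why researchers describe such shifts as a property of prompting rather than of the model itself.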
The incident has renewed debate over neutrality, bias and safety standards in artificial intelligence systems.