
Colombia’s Supreme Court rejected a cassation appeal after running the lawyer’s filing through AI-detection software that concluded the document was largely machine-generated.
The court said the text was analysed using the Winston AI tool, which indicated the submission contained only about 7% human-written content.
“Faced with a well-founded suspicion that the brief submitted by the attorney had not been drafted by the legal professional himself, the court submitted the text to the Winston AI tool,” the ruling stated.
Legal experts later tested the court’s own ruling with the same software and reported that it appeared to contain 93% AI-generated text, fuelling criticism over the reliability of such tools.
The controversy quickly spread across legal circles after attorney Emmanuel Alessio Velasquez posted the results on X, arguing the outcome exposed the “methodological fragility” of using AI detectors as legal evidence.
Further experiments by lawyers revealed similar inconsistencies: historical legal documents and academic papers written before modern AI tools existed were also flagged as AI-generated.
The episode has intensified debate over AI-detection technology, as researchers warn that such tools rely on statistical writing patterns and often misclassify formal legal writing, academic texts and work produced by non-native English speakers.