Artificial Intelligence in Court: ChatGPT Becomes a New “Assistant” for Self-Represented Litigants, but Raises Judges’ Concerns

A growing number of people involved in legal cases are turning to AI chatbots such as ChatGPT for help. Experts note a rise in citizens representing themselves in court (so-called pro se litigants) who use AI to prepare documents, find legal arguments, and even rehearse courtroom appearances. The main reason is the high cost of legal services and the desire to save money.
Supporters argue that AI helps reduce inequality in access to justice: people without lawyers get at least basic help drafting documents and understanding procedure. Critics, however, warn that AI cannot fully grasp legal nuance, often makes mistakes, and "hallucinates," generating references to non-existent laws, court decisions, and quotations.
Such reliance has already led to high-profile incidents. Several U.S. lawyers have submitted AI-prepared filings containing fictitious precedents. Judges responded harshly, issuing fines and demanding written explanations.
One notable example involves journalist Timothy Burke, accused of illegally accessing Fox News materials. His lawyers submitted filings containing false references and quotations generated by AI. The judge ordered the lawyers to explain how the errors occurred and warned of possible disciplinary measures.
Another incident occurred in a New York appellate court, where a litigant appeared remotely using a video avatar created by generative AI. The judge immediately ordered the video stopped and had the litigant speak in person, stating that concealing the AI-generated nature of a courtroom participant is strictly unacceptable.
Legal experts note a serious gap between ordinary citizens' understanding of AI capabilities and actual courtroom norms. Courts require transparency, disclosure of sources, and accountability for submitted documents, yet most AI users are unaware of these requirements.
Research confirms that "hallucinations" by language models remain a serious problem: even state-of-the-art systems frequently cite non-existent legal authorities. Experts warn that relying on AI output without verification is unacceptable, especially in legal matters, where every mistake can carry serious consequences.
As the technology spreads, lawyers and regulators are increasingly discussing rules for AI use in courts. Experts agree that AI can be a useful tool, but only under strict oversight, with transparency and personal responsibility for the results.
In the coming years, AI will likely find its place in legal practice, helping those who cannot afford a lawyer. But the path to official recognition in the courtroom remains long: before AI can be entrusted with legal work, the system must learn to protect the very idea of justice from digital errors.