Supreme Court’s AI Push Still in Pilot Phase; No Formal Policy Yet, Government Tells Parliament
The Supreme Court of India is exploring the use of artificial intelligence (AI) in judicial functioning, but the technology remains confined to controlled pilot projects, with no formal policy or guidelines yet in place for wider adoption. This was confirmed in the Lok Sabha on Friday by Minister of State for Law and Justice Arjun Ram Meghwal, who detailed the judiciary’s cautious approach toward AI.
Meghwal said the Supreme Court’s dedicated AI Committee was formed to examine potential technological advancements, but added that AI is currently used only in areas specified within the eCourts Phase III project plan.
He noted that the judiciary fully recognises the challenges of integrating AI into legal work, including algorithmic bias, linguistic limitations, data privacy vulnerabilities, and the need for manual verification of AI-generated content to avoid errors in judicial decisions.
To ensure safety and reliability, the e-Committee of the Supreme Court has created a Sub-Committee of High Court judges and technical experts to recommend secure connectivity and authentication mechanisms. Its mandate includes strengthening digital infrastructure, protecting sensitive case data, and ensuring reliable service delivery under the ambitious eCourts project.
Among the limited AI tools currently being tested is LegRAA (Legal Research Analysis Assistant), designed to help judges analyse legal documents and perform research more efficiently. Another AI-driven tool, Digital Courts 2.1, provides a comprehensive digital interface for judges, integrating voice-to-text (ASR-SHRUTI) and translation assistance (PANINI) to aid in dictating orders and judgments.
The judiciary has so far reported no systemic bias or unintended outputs during pilot testing of these tools, Meghwal told the House.
The minister also raised concerns about the rising threat of fabricated or morphed digital content being submitted in courts. He said such cases attract relevant provisions of the Information Technology Act, 2000, which cover offences such as identity theft, cheating by impersonation, and transmission of obscene or harmful content. Offences may also be booked under the Bharatiya Nyaya Sanhita, 2023, under provisions covering electronic forgery and falsification of records.
As courts increasingly rely on digital evidence, Meghwal stressed that safeguarding authenticity is critical to preserving the integrity of the justice system.
Our Thoughts
India’s cautious entry into AI-led judicial reform reflects a system trying to innovate without compromising integrity. The Supreme Court’s deliberate decision to limit AI deployment to pilot programmes signals awareness of the risks: algorithmic bias could unintentionally shape judgments, language and translation errors could distort meaning, and data security lapses could weaken trust in the judiciary. What stands out is the recognition that AI tools like LegRAA and Digital Courts 2.1 are assistive technologies—not decision-makers. This distinction is vital in a justice system built on human reasoning and constitutional values.
Equally important is the government’s acknowledgment of rising digital forgery, a threat that grows in parallel with technological integration. As courts rely more on electronic records, the need for strong authentication mechanisms becomes urgent. The creation of a High Court–led Sub-Committee marks a responsible step toward building safeguards before scaling. AI can enhance efficiency, but only if balanced with oversight, transparency and robust checks. India’s judiciary appears willing to innovate—but not at the cost of fairness or public trust.
