Judicial Systems in the Age of Artificial Intelligence Implementation: The U.S. and China

—Dominic Theos (Mentor: Nick Smith)


Abstract

The world has witnessed rapid advances in artificial intelligence (AI) development. In recent years, however, development took a backseat as AI entered the age of implementation (Lee, 2018). Products that once existed only in theory are now being put into practice across various domains. This momentum extends to one of society’s most central pillars: the courtroom.

That a technology so foreign to most people is quietly entering the heart of justice systems across the world seems to deserve far more attention than it is receiving. Existing discourse has examined this technology on a micro-level, analyzing its current use in the United States and the impact on specific case outcomes (Newcomb, 2024). However, less research has focused on macro-level implications, specifically how this technology might reshape the structures and values of political systems themselves.

With a Summer Undergraduate Research Fellowship (SURF) from the Hamel Center for Undergraduate Research, I pursued a set of questions on that topic: How are the world’s leading AI powers, the United States and China (Lee, 2018), implementing this technology in their courtrooms, and which system is better suited to handle the challenges that this integration presents?

Methodology

While this research took place over a summer, my interest in the topic began during Professor Nick Smith’s Intro to Law and Justice course when Kieran Newcomb, conducting his own undergraduate research at the time, gave a guest lecture on AI’s growing role in sentencing. The following year, I took Future of Humanity, which allowed me to focus my studies on the U.S. and Chinese models for AI development and implementation. These experiences drove me to pursue a SURF project of my own.

The first stage of my work consisted of compiling books that would be central to my philosophical research, namely Kai-Fu Lee’s AI Superpowers and Bin Liang’s The Changing Chinese Legal System, 1978–Present: Centralization of Power and Rationalization of the Legal System. Seminal government documents were also necessary to understand the political ideologies behind each country’s actions. However, given the unique nature of AI as a rapidly evolving and increasingly implemented technology, a large portion of my resources consisted of recent articles, academic papers, and official reports on AI implementation. I organized my resources into weekly themes. The first week expanded upon the research I had completed in Future of Humanity, covering AI Superpowers and foundational AI texts. The next few weeks consisted of research specific to the United States, including the actual implementation, its benefits and dangers, and the institutional and cultural context in which this took place. The same was done in the following weeks for China, culminating in Bin Liang’s book on the country’s legal reform.

Working remotely, I accessed the materials online, taking thorough notes that related to the week’s theme. I created one document for each week of research, in which I created subsections devoted to each source I read that week. This organized structure was necessary, as this type of research consists of taking in as much information as possible and using it to not just guide my understanding of the topic but support any arguments I might go on to make. Therefore, I needed to be able to easily access specific notes when writing my research article several months later. 


The author presenting his research as a guest speaker for UNH’s Future Leaders Institute.

Throughout the summer, I constantly refined my research focus. When applying for SURF, I thought I could learn everything there is to know about these two nations and their uses of AI. It did not take me long to realize that this was an ambitious goal for a ten-week span. This meant making changes like identifying only a few important AI systems and prioritizing the most relevant readings. This flexibility was crucial given the summer’s time constraints and my initially broad goal. The more I learned, the more I discovered how much remained to be explored.

The summer concluded as I finalized my comparative analysis and prepared a presentation for the UNH Future Leaders Institute. This, along with a research article that I will develop into my thesis, fulfilled my research goals for the summer. What follows is a summary of my findings, covering how the United States and China approach courtroom AI, and offering an analysis of the implications this technology holds for each political system.

Introduction: Two Competing AI Models

A new cold war has begun. At least, this is what many experts seem to think. When it comes to transformative technologies like nuclear weapons and space travel, advancement is often framed as a race, competition, or even war. AI is no exception. With its promise of unprecedented speed, efficiency, and analytical power, the technology’s advancement has become a central pursuit of states across the globe. Eric Schmidt, former CEO of Google, has even referenced the “AI race between the U.S. and China” (Mao & Patel, 2024). These two nations have emerged as the leaders in this domain, with contrasting approaches to AI development and implementation: the Silicon Valley Model and the Chinese Model (Lee, 2018). These models extend to the courtroom, with both countries seeking to be the template for how to successfully merge this technology with a judicial system. While the two nations’ current courtroom uses of AI differ, they both point to a future of increased implementation, one that China is better equipped to handle. Furthermore, the increased adoption of this technology does not merely favor China’s system, but actively reinforces the authoritarian, centralized structures that define it.

The divide between these nations’ models boils down to two factors: governance approach and data infrastructure. The first is the strategy that the nations use to control AI implementation, and the second is how these nations collect and use data to train AI systems. The United States government, relying on private tech companies, has for the most part remained out of the picture, resulting in a decentralized and market-driven approach. This laissez-faire capitalist approach results in numerous proprietary datasets that are used for privately owned AI models. China, on the other hand, is state directed, with the government actively detailing how the technology will be implemented. Given this approach, China’s data infrastructure is centralized and government regulated (Lee, 2018).

Courtroom AI Implementation

Within this foundational distinction, though, are the concrete differences seen in each nation’s courtroom implementation. To examine these differences, I begin with the American approach. As of now, judges remain at the wheel. The adoption has primarily involved predictive algorithms that assist judges with their sentencing decisions, called risk assessment tools (Picard-Fritsche et al., 2017). These tools are a form of predictive AI: models specifically designed to make predictions about the future from past data. This is done through deep learning, or the process of AI training itself to recognize patterns in a given dataset. For predictive AI, these patterns are statistical, with the system identifying correlations within the relevant data that will be used to inform its future predictions (Mucci, n.d.).

The most well-known risk assessment tool in use in the United States is COMPAS, or Correctional Offender Management Profiling for Alternative Sanctions. This predictive software uses information gathered from public criminal records and a 137-item questionnaire given to the convicted person (Angwin et al., 2016). Using this data, COMPAS produces a risk score on a ten-point scale, grouped into low (1–4), medium (5–7), and high (8–10) risk bands, essentially predicting the likelihood of recidivism, or the chance that an offender will reoffend. The internal parameters used to produce these scores, though, are unknown, sealed behind a black box. Various states across the U.S. use COMPAS, but it is up to each individual court to decide whether to adopt it.
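COMPAS’s internal weighting is proprietary, so only its published decile-to-band mapping can be reconstructed here. The sketch below is purely illustrative and assumes nothing about the hidden model; the function name and structure are my own, not vendor code.

```python
def risk_category(decile_score: int) -> str:
    """Map a COMPAS-style decile score (1-10) to its published risk band.

    Illustrative only: the inputs and weights that produce the decile
    score itself are proprietary, hidden behind the black box.
    """
    if not 1 <= decile_score <= 10:
        raise ValueError("decile score must be between 1 and 10")
    if decile_score <= 4:
        return "low"
    if decile_score <= 7:
        return "medium"
    return "high"
```

The transparency problem lies in how the decile score itself is produced, not in this final, publicly documented bucketing.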

If American judges use AI like a statistical calculator, I found that Chinese courts are building something closer to a copilot. In 2017, the Chinese Communist Party released a seminal document titled the “New Generation Artificial Intelligence Development Plan” (Webster et al., 2017). While this broadly outlined China’s ambition to incorporate AI into their government, it also marked the beginning of their “intelligent court project.” This officially promoted AI applications for uses such as evidence collection, case analysis, and legal document reading and analysis. Simply put, the “intelligent court project” gave courts across China the green light to experiment with AI implementation. A court could choose to use it as an artificial law clerk, or to not even use it at all. Zhou Qiang, the former chief justice of the Supreme People’s Court, stated the nation’s vision plainly: “The ‘intelligent court’ project functions as a key component of judicial reforms in China, as well as a powerful driving force for taking China’s judicial reforms to the next level” (Wang, 2021).

This plan was an invitation to incorporate not only predictive AI, but generative AI as well. While predictive AI like COMPAS uses deep learning purely for statistical pattern recognition, generative AI recognizes broader patterns within massive datasets to understand prompts and generate content (Stryker & Scapicchio, n.d.). The most common form is the large language model (LLM), which generates text-based outputs. Given the size of China, there are countless reported pilot systems in use in courts across the country. Some of these resemble United States tools, and others are far more involved in judicial decision-making. Two prominent examples of the latter are Shanghai’s “206 System” and the intelligent adjudication system of the Shenzhen Intermediate People’s Court.

The 206 System was developed by iFlyTek, China’s leading AI company, in collaboration with the Shanghai High People’s Court. The model provides all-around assistance, from legal document analysis to the detection of contradictions and gaps in evidence (Zheng, 2020). These capabilities demonstrate AI’s potential to perform tasks at the core of judicial work. The Shenzhen system shares this trait. This newly adopted LLM summarizes cases, creates prompts for questioning, and even generates reasoning and judgments (Liu & Li, 2025). This final function is the most significant, because it suggests AI can engage in legal reasoning itself. Shenzhen, often viewed as China’s Silicon Valley for its role as a technology and innovation hub, has been explicitly recognized by China’s Supreme People’s Court for its “AI-assisted system to support the adjudication process, including case filing, hearings and legal document drafting” (China Daily, 2025). In both cases, the judge remains at the wheel, though by a much smaller margin than judges in the U.S.

So, while there is no such thing as a “robot judge,” I believe AI’s role in courtrooms will likely continue expanding in both nations. Kieran Newcomb expressed this in his research on the United States, arguing that as AI advances, it will reach a point where it becomes “a viable candidate to replace clerks and speed up the work of the courts” (Newcomb, 2024). China’s already aggressive implementation only reinforces this likelihood. In the competitive framing of an “AI cold war,” the United States faces pressure to keep pace with Chinese advances in judicial AI. Exactly what this implementation will look like is unknown, but expansion is certain. The question, then, is not whether the United States will adopt generative AI in its courts, but which nation is better equipped to harness this technology’s potential and manage its risks.

AI’s Benefits and Dangers

What is this potential? My answer is threefold: AI is faster, cheaper, and potentially more accurate than humans. Institutions tend to value these qualities. Speed is the most obvious advantage. Both the United States (American Bar Association, 2025) and China (Wang, 2021) face overcrowded judicial systems, with caseloads that grow year after year. This challenge is not unique to these two nations; courts worldwide struggle with similar backlogs. AI, which can produce advanced, humanlike outputs in seconds, seems to offer a solution.

Cost and accuracy are more complex but still apply. It is difficult to find the exact prices for courtroom AI software, because they are rarely publicly listed. COMPAS, however, is estimated to cost around $68,000 for the first year, and $19,000 each following year (Newcomb, 2024). This is significantly less expensive than judges and even undercuts the cost of clerks, whose median salary was approximately $57,000 in 2023 (Bureau of Labor Statistics, 2023). Software costs will likely decrease further, too, as AI becomes more widely adopted. As for accuracy, current courtroom AI systems perform at roughly the same level as humans in measurable tasks like predicting recidivism (Lin et al., 2020), but this is also likely temporary. Given AI’s rapid advancement, systems driven by comprehensive data analysis may soon surpass human accuracy in judicial decision-making.
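For concreteness, the cost figures above can be compared over a multi-year horizon. This is a back-of-the-envelope sketch using only the estimates already cited; the function names are mine, and real procurement and staffing costs vary by jurisdiction.

```python
def compas_cost(years: int, first_year: int = 68_000, renewal: int = 19_000) -> int:
    """Cumulative estimated COMPAS licensing cost (figures from Newcomb, 2024)."""
    return first_year + renewal * (years - 1)

def clerk_cost(years: int, median_salary: int = 57_000) -> int:
    """Cumulative cost of one clerk at the approximate 2023 median salary (BLS)."""
    return median_salary * years

# Over five years, the software's cumulative cost is roughly half
# of a single clerk's salary over the same period.
print(compas_cost(5))  # 68,000 + 4 * 19,000 = 144,000
print(clerk_cost(5))   # 5 * 57,000 = 285,000
```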

These advantages, though, come with dangers. The danger that has garnered the most attention so far is bias. AI is informed by data, so if that data contains bias, the AI, generative or predictive, will reproduce and reinforce it. COMPAS, for example, has faced backlash for this issue, giving Black offenders higher risk scores than otherwise identical white offenders (Larson et al., 2016). The list of technological risks goes on, though, including issues like black box algorithms, where the AI’s output is visible but the internal decision-making process is not, and hallucinations. The latter is an exclusively generative AI issue, where models cite cases or precedent that do not actually exist (Magesh et al., 2024). This occurs because generative AI models produce outputs based on patterns in their training data rather than checking against verified facts, leading them to generate plausible-sounding but fabricated information. As AI advances, these last two issues could diminish, with explainability research making algorithms more transparent, and improved training methods and verification systems reducing hallucinated outputs.
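The bias mechanism can be made concrete with a toy model: a naive predictor that memorizes group-level reoffense rates from historical records will reproduce any skew built into those records. The data below is fabricated purely for illustration, and this model is vastly simpler than COMPAS; it only shows how skewed inputs become skewed outputs.

```python
from collections import defaultdict

# Hypothetical historical records of (group, reoffended). The skew is
# built into the data itself, e.g. by uneven policing, not by anything
# about the individuals being scored.
history = (
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 30 + [("B", False)] * 70
)

def fit_base_rates(records):
    """'Train' by memorizing each group's observed reoffense rate."""
    counts = defaultdict(lambda: [0, 0])  # group -> [reoffenses, total]
    for group, reoffended in records:
        counts[group][0] += reoffended
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

rates = fit_base_rates(history)
# Two otherwise identical people receive different predicted risks
# solely because of their group label in the skewed records.
print(rates["A"])  # 0.6
print(rates["B"])  # 0.3
```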

These types of dangers represent what I call courtroom AI’s micro-level dangers, or harms that manifest on a case-by-case basis, affecting individual litigants and outcomes. When managing these dangers as implementation inevitably increases, China’s centralized model proves better equipped. This advantage begins with governance structure. China’s centralized authority allows rapid regulatory responses from a single source, ideally able to contain technological risks. Furthermore, if these dangers worsened, the government could stop development and implementation through one centrally issued command. While either nation could theoretically take such action, only China has the institutional capacity to uniformly enforce it, as the government directly controls all judicial AI deployment. The United States’ decentralized, market-driven model makes similar intervention nearly impossible. Should the government step in and attempt to regulate courtroom AI, various problems arise. Legislation would have to navigate congressional procedures to address implementation that varies across states, and even then, uniform enforcement would be difficult given the fragmented nature of the AI industry and its many independent actors.

The large difference in the two nations’ data infrastructures also gives China the upper hand. As noted earlier, data infrastructure is a crucial component of training an AI system with deep learning. Many experts, including Kai-Fu Lee, stress that “In deep learning, there’s no data like more data” (Lee, 2018). Strong AI advancement requires large, structured, and accessible datasets. The United States, however, relies on fragmented, proprietary datasets. This fragmentation both slows AI development and complicates efforts to address data-related issues like algorithmic bias. Since bias is embedded across multiple independent datasets, identifying and correcting it becomes much more difficult.

China, by contrast, possesses both a massive population and unprecedented data infrastructure. With 1.4 billion citizens and data collection through hundreds of millions of surveillance cameras, national ID databases, facial recognition networks, mobile phone tracking, and internet monitoring, China maintains a comprehensive digital record of activity within its borders (Chaturvedi, 2020). This creates an ideal environment for the rapid advancement of AI. More significantly, the state’s direct access to this centralized data allows it to identify and address data-related issues, and to intentionally shape AI development according to state objectives.

Macro-level Implications of Courtroom AI

However, my research revealed a much less obvious dimension to this comparative analysis. The increased implementation of AI does not merely align better with China’s system; it actively threatens certain foundations of American democracy itself. This threat of inadvertent systemic changes in the United States is a macro-level danger. One could argue that this is a “danger” only to those who would not welcome such political change, but that argument misses the point. The threat is not that one system is better or worse, but that the changes, which fundamentally contradict core American values, would not occur through deliberate democratic choice. They would occur as an unforeseen consequence of deploying a technology whose implications remain poorly understood.

One example of this emerges from a longstanding value in American culture: privacy. Americans, unlike Chinese citizens, prioritize privacy over potential benefits of less restrictive data practices. According to a 2023 Pew Research Center study, 72 percent of Americans favor increased restrictions on how their data is collected and used (McClain et al., 2023). If the United States were to expand AI implementation in courts, it would require substantial upgrades to data infrastructure, both by collecting more data and making it much more accessible. The public would likely oppose this, so increased expansion would force a democratic society to adopt practices fundamentally at odds with its citizens’ values.

Another example is the imbalance created between the coequal branches of government when AI, if widely implemented and performing optimally, drastically speeds up the judiciary while the legislature and executive remain slow. The War on Drugs offers a clear illustration of this danger. There is a widely held belief that the War on Drugs caused significant negative consequences, including mass incarceration disproportionately affecting Black Americans (Alexander, 2010). Had advanced courtroom AI been integrated during this period, these consequences would have been amplified. Furthermore, were the legislative or executive branch to intervene, it would move significantly more slowly than the already expedited courts. In cases such as this, where a policy results in harmful outcomes, rapid implementation would cause an imbalance between the branches.

Lastly, I found that this technology, which is regarded as a way of helping the judiciary, might actually undermine judicial independence. Judicial independence has deep historical roots, formalized in England in 1701 but dating back to Aristotle in the fourth century BCE (Ervin, 1970). Furthermore, the United States Constitution stresses separation of powers, and, in Article III, Section 1, judges are given protection from other branches. This idea of judicial independence is meant to ensure that judges base their opinions solely on law and fact, without influence from other branches, political groups, or the public. Widespread courtroom AI usage, though, would challenge this. As evidenced by the AI systems described earlier, the technology has the potential to absorb aspects of a judge’s role, even legal reasoning and decision-making. As a result, the companies developing the AI, or the legislature regulating its development, would have some control over the legal recommendations that the system produces. This raises the concern, as noted in scholarship on China, that technology can become “a means to curb judicial autonomy” (Stern et al., 2021). Essentially, anyone with a substantial say in the AI’s development or regulation would, in turn, have judicial influence.

While the future of AI is unknown, the trend so far is that its implementation seems to present these dangers to the United States. China, on the other hand, might welcome the technology’s unique traits. As stated in Bin Liang’s book, China has struggled to legitimize its judiciary without ceding central power. Before reform, “the legal system served as a political instrument to enforce class struggle and state control, not to protect individual rights” (Liang, 2007). The need to modernize this judiciary created pressure to adopt Western notions like transparency, independence from political influence, emphasis on the rule of law, and the protection of individual rights. According to Liang, China’s reform focused on creating a legal system that could achieve some of these traits while still supporting the central party’s control. This proved to be difficult. Rapid AI implementation seems to provide a solution, because having the most technologically advanced court system in the world would bring legitimacy while allowing the central party to maintain its governance by centrally managing the AI. This dynamic places China in an even more advantageous position, because the institutional changes would be embraced by the state.

Conclusion

Ultimately, there is a tension at the heart of the so-called “AI cold war.” The technology’s structural advantages align so thoroughly with authoritarian governance that competing on China’s terms is not a contest the U.S. can win without compromising its own values.

Recent actions by the U.S. government are already beginning to confirm this theory. Federal contracts with companies like Palantir, along with AI deployments by agencies such as DHS and ICE, suggest that the distinction between the Silicon Valley and Chinese models is already breaking down. The U.S. government is increasingly adopting centralized, state-directed approaches to AI implementation—a shift that, as this research demonstrates, threatens key constitutional values.

The best path forward for the U.S., then, might be to abandon its competitive mindset and establish conscious, deliberate regulation of the technology. Such cautionary regulation could even prove more strategically sound than widespread adoption left to market forces or unchecked centralized control. Regardless of how the country chooses to proceed, though, the central issue deserves far more attention than it is currently receiving. AI will not leave the courtroom. Instead, its role will grow. What happens when central government actions are performed not “by the people,” but by algorithms instead? This question needs an answer while AI’s influence in the judicial system can still be meaningfully adjusted.

While scholarship exists on the micro-level dangers of this technology, I hope this research stresses the macro-level implications that are also present. Research like this may not alter the trajectory of AI, but ideally, it can introduce these problems into academic discourse. This is particularly important for a technology as unknown as AI, where the potential impact is massive, yet so little research has been done.

 

As a kid, I’d write stories hoping that one day I could publish my work. Thanks to Dana Hamel and the Hamel Center for Undergraduate Research, I now have. This paper, though, would not have been possible without the people around me. I’ll start with my parents. Whatever I pursue—from basketball growing up to projects like this—you’ve always been fully on board. Thank you for your constant guidance and support. Thank you to Professor Nick Smith, who has been a mentor not just to me but to the countless students and athletes he’s coached and taught. I’m also grateful for Professor Paul McNamara and Professor Sue Siggelakis, who have challenged me since my freshman year, and whose passion for their work has influenced my own. Finally, thank you to my grandparents, my brothers Anthony and Luca, my best friend Lily, and the best dog in the world, Maya: You all keep me going.

 

Works Cited

American Bar Association. (2025). ABA Day edition: Judicial vacancies. Americanbar.org.

Alexander, M. (2010). The new Jim Crow: Mass incarceration in the age of colorblindness. Samuel Dewitt Proctor Conference.

Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016, May 23). Machine bias. ProPublica.

Chaturvedi, A. (2020, May 11). The China way: Use of technology to combat Covid-19. Geospatial World

Ervin, S. J. (1970). Separation of powers: Judicial independence. Law and Contemporary Problems, 35(1), 108. 

Holdsworth, J., & Scapicchio, M. (2024, June 17). What is deep learning? IBM. 

Judicial Law Clerks. (2023, April 25). Bureau of Labor Statistics. 

Larson, J., Mattu, S., Kirchner, L., & Angwin, J. (2016, May 23). How we analyzed the COMPAS recidivism algorithm. ProPublica.

Lee, K.-F. (2018). AI superpowers: China, Silicon Valley, and the new world order. Mariner Books.

Liang, B. (2007). The changing Chinese legal system, 1978–present. Routledge.

Lin, Z., Jung, J., Goel, S., & Skeem, J. (2020). The limits of human predictions of recidivism. Science Advances, 6(7), eaaz0652. 

Liu, J. Z., & Li, X. (2025). How do judges use large language models? Evidence from Shenzhen. Journal of Legal Analysis, 16(1), 235–262. 

Magesh, V., Surani, F., Dahl, M., Suzgun, M., Manning, C., & Ho, D. (2024, May 23). AI on trial: Legal models hallucinate in 1 out of 6 (or more) benchmarking queries. Hai.stanford.edu. 

Mao, W., & Patel, D. (2024). Former Google CEO Eric Schmidt says U.S. trails China in AI development | News | The Harvard Crimson. Thecrimson.com. 

McClain, C., Faverio, M., Anderson, M., & Park, E. (2023, October 18). How Americans view data privacy. Pew Research Center. 

Mucci, T. (n.d.). What is predictive AI? IBM. 

Newcomb, K. (2024, March). The place of artificial intelligence in sentencing decisions. Inquiry Journal.

Newcomb, K. D. (2024). Judging our new judges: Why we must remove artificial intelligence from our courtrooms now. 鶹app Scholars Repository.

Officials seek limited use of AI in judiciary. (2025). Court.gov.cn; China Daily.

Picard-Fritsche, S., Rempel, M., Tallon, J., Adler, J., & Reyes, N. (2017). Demystifying risk assessment: Key principles and controversies. Center for Justice Innovation. 

Stern, R. E., Liebman, B. L., Roberts, M., & Wang, A. Z. (2021). Automating fairness? Artificial intelligence in the Chinese court. Columbia Journal of Transnational Law, 59, 515.

Stryker, C., & Scapicchio, M. (n.d.). What is generative AI? IBM.com. 

Webster, G., Creemers, R., Kania, E., & Triolo, P. (2017). Full translation: China’s “new generation artificial intelligence development plan” (2017). DigiChina. Stanford University. 

Wang, Z. (2021, April 8). China’s e-justice revolution. Judicature.

Zheng, G. G. (2020). China’s grand design of People’s Smart Courts. Asian Journal of Law and Society, 7(3), 1–22.

 

Author and Mentor Bios

Dominic Theos

Dominic Theos grew up in Exeter, New Hampshire, and is a junior at UNH, where he studies philosophy, justice studies, and political science. He works at the Connors Writing Center and serves on the executive board of the UNH Pre-Law Society. Through a Summer Undergraduate Research Fellowship (SURF), Dominic built this research on judicial systems in the age of AI, which will evolve into his senior honors thesis. After he graduates in May 2027, he plans to attend law school. 

Dr. Nick Smith, J.D., is a professor of philosophy and has been teaching at the 鶹app since 2002. Previously a litigator at a major New York law firm and a judicial clerk for the United States Court of Appeals for the Third Circuit, he teaches and writes on issues in law, politics, and society. Dr. Smith published I Was Wrong: The Meanings of Apologies in 2008 and Justice through Apologies: Remorse, Reform, and Punishment in 2014 (both with Cambridge University Press). He has been interviewed by or appeared in many major news outlets, including the New York Times, the Wall Street Journal, NPR, and the BBC, among others. He has mentored many undergraduate researchers and Inquiry authors.

 


Copyright 2026 © Dominic Theos

 
