By Chase Boss, Senior Editor
“Technology can change the facts to which doctrine applies faster than courts can adjust the doctrine.”[1] AI algorithms are bound to be a prime example; this technology may very well alter the application of a right unique to citizens of the Keystone State—the right to reputation.
Introduction
Article I, Section 1 of the Pennsylvania Constitution provides for an “inherent and indefeasible right” to reputation.[2] As the Pennsylvania Supreme Court acknowledged in the nineteenth century, this textual protection places reputation on equal footing with the familiar interests of life, liberty, and property.[3] Federal doctrine, following Paul v. Davis, treats reputational injury as mere stigma, insufficient to trigger due process unless coupled with a change in legal status;[4] Pennsylvania, however, has read Article I, Section 1 to recognize reputation as a fundamental right that cannot be abridged without due process.[5]
This right has generally emerged in disputes as a “belt-and-suspenders” constitutional argument, separate from common-law defamation suits.[6] Over the last half-century, however, the Pennsylvania Supreme Court has treated the right as alive and well, most recently using it to narrow the application of laws that impose official stigma and permanently mark individuals as dangerous.[7]
Harm to reputation will inevitably grow as algorithmic classification technologies expand in reach and scale. Predictive policing systems, person-based “risk” scores, and automated risk assessments now influence, and in some cases effectively determine, decisions about whom to stop and whom to label “high risk.”[8] These systems, though designed merely to classify individuals and recommend police action, operate with little transparency and limited opportunity to challenge improperly assigned labels.[9] A growing body of research by lawyers and ethicists alike seeks to explain how these tools reinforce racial and socioeconomic disparities even as the use of algorithms in policing grows nationwide.
Pennsylvania’s right to reputation may offer a strong framework for assessing and restraining the algorithmic categorization of individuals undertaken without due process. When state and local officials rely solely on automated systems to label individuals as dangerous—and then base actions on an algorithm’s conclusions—the right to reputation may be directly implicated.
The Origin and Development of the Right
Pennsylvania added “reputation” to its Declaration of Rights in 1790, signaling an early and explicit commitment to protecting personal reputation against government action; early records, however, offer little indication as to why “reputation” was included in the final draft.[10] Whatever the drafters’ reasons, the textual choice elevates reputation to the same level as life, liberty, and property.[11]
One major development in reputation rights came in Simon v. Commonwealth, in which the Commonwealth Court ruled that state agencies could not publicly disseminate stigmatizing findings without first giving affected individuals adequate notice and an opportunity to respond.[12] Even government-employment cases like Hunter v. Port Authority demonstrate that a government’s reputation-based judgments implicate due process protections.[13] And the Pennsylvania Supreme Court’s ruling in In re J.B.—which struck down lifetime juvenile sex-offender registration—held that statutes imposing irrebuttable presumptions of dangerousness violate due process when the presumption is not universally true and individuals have no means to rebut it.[14] Collectively, these decisions supply a basic framework: (1) reputation is a fundamental right; (2) stigmatizing classifications require notice and a chance to rebut; and (3) irrebuttable presumptions of dangerousness are constitutionally disfavored.
The Rise of Algorithmic Classification and Changing Reputations
The use of predictive policing tools is likely to introduce new reputational risks. Many tools assign labels such as “high risk,” “dangerous,” or “gang-affiliated” with little to no transparency, verification, or meaningful process for rebutting an incorrect label.[15] Predictive policing platforms are trained on historical arrest data; this practice, unsurprisingly, tends to replicate historical racial disparities and to base recommendations for future policing on those disparities.[16] One researcher, for example, describes how these models have directed officers disproportionately to Black neighborhoods, thereby reinforcing biased crime-data patterns from the past and importing that disparity into the present.[17]
Unfortunately, this suggests a troubling, circular relationship between past and present policing: historically overpoliced communities may generate more enforcement data, which the algorithm interprets as increased criminal activity, prompting still more surveillance.[18] Chicago’s Strategic Subject List (SSL) is illustrative; research on that program highlights the risks of person-based risk scoring.[19] Its risk classifications appeared in police records and influenced patrol decisions, directly affecting how officers interacted with specific individuals.[20]
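To make the mechanism concrete, the following toy simulation (a minimal Python sketch using entirely hypothetical neighborhoods, crime rates, and patrol counts, not a model of any deployed system) shows how a tool that sends patrols wherever the most incidents have been recorded can produce sharply diverging records for areas with identical underlying crime rates.

import random

def simulate_feedback_loop(true_crime_rates, patrols_per_round, rounds=20, seed=0):
    # Toy model: patrols go wherever the most incidents have been *recorded*,
    # and new incidents are recorded only where patrols are sent.
    random.seed(seed)
    recorded = [1 for _ in true_crime_rates]  # one historical record per area
    for _ in range(rounds):
        # The "algorithm": pick the area with the most past records.
        target = max(range(len(recorded)), key=lambda i: recorded[i])
        for _ in range(patrols_per_round):
            if random.random() < true_crime_rates[target]:
                recorded[target] += 1  # observed only because a patrol was present
    return recorded

# Two hypothetical neighborhoods with identical underlying crime rates.
print(simulate_feedback_loop(true_crime_rates=[0.3, 0.3], patrols_per_round=10))
# Typical output: something like [62, 1] -- the area flagged first dominates the data.

Because incidents are recorded only where officers are sent, the area flagged first accumulates nearly all of the data, and the “high risk” designation becomes self-confirming.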
Predictive policing was once described as an “experiment” conducted on communities without informed consent or oversight.[21] AI-driven analytics, while unquestionably useful for agencies seeking to expand their reach and capabilities, enable police to gather and interpret data at a scale previously impossible.[22] Automated decision systems (“ADS”) further encode value judgments rather than objective assessments, and those judgments translate into real-world action.[23]
Applying Pennsylvania’s Right to Reputation to Algorithmic Classification
Simon requires notice and an opportunity to respond before the state disseminates stigmatizing information, a standard that algorithmic systems rarely satisfy: individuals flagged as “high risk” seldom know they have been classified, cannot access the underlying data, and are denied any opportunity to challenge the designation.[24] To be sure, an agency’s internal designation, circulated only among law enforcement and third-party AI providers, may not produce the public-facing reputational damage seen in Simon and its progeny; the effects of the labeling, however, can still be felt.
Under In re J.B., irrebuttable presumptions are unconstitutional when they are not true in all cases; algorithmic labels built on broad correlations operate as modern irrebuttable presumptions. Risk scores generated by opaque models assign risk without offering any way to challenge the designation.[25] Reputational stigma, when combined with tangible consequences such as enhanced surveillance, denial of release, and employment exclusion, may well trigger due-process protections under Pennsylvania law. This is not to say that injury to reputation, standing alone, automatically amounts to a constitutional violation;[26] rather, the violation may arise when the state relies upon, uses, or disseminates false or improper reputational classifications that go on to inflict harm on the individual.
In Pennsylvania, algorithmic classifications should be treated as state action likely to affect individuals’ due process rights adversely, and procedural safeguards should be implemented accordingly. Agencies that use algorithmic tools should provide notice, access to the relevant data (even if limited and subject to trade-secret privilege), and, at a minimum, human review of computer-generated classifications.[27] Judicial review must also be available to address constitutional violations in the criminal context, especially when privately owned algorithmic systems are likely to hinder transparency.[28] When vendors refuse to disclose the model details necessary for due process review,[29] their outputs should not be used in decisions implicating reputational rights.
Pennsylvania’s courts have long maintained that government actions must not be “unduly oppressive” or “patently beyond the necessities of the case,”[30] and algorithmic tools with low accuracy and high bias would appear to fall short of that standard. The Commonwealth should mandate independent audits and bias evaluations before algorithms are used and relied upon in policing contexts.[31] Private models may also need to be prohibited from high-stakes applications where reputational harm is likely.[32]
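As a rough illustration of what such an audit might examine (a hypothetical sketch, not a prescribed methodology or any agency’s actual protocol), the snippet below compares false-positive rates of a “high risk” label across groups using made-up records.

def false_positive_rate(flags, reoffended):
    # Share of people who did NOT reoffend but were flagged "high risk" anyway.
    no_offense = [f for f, r in zip(flags, reoffended) if not r]
    return sum(no_offense) / len(no_offense) if no_offense else 0.0

def audit_by_group(records):
    # records: list of (group, flagged_high_risk, reoffended) tuples.
    rates = {}
    for group in sorted({g for g, _, _ in records}):
        flags = [f for g, f, _ in records if g == group]
        outcomes = [r for g, _, r in records if g == group]
        rates[group] = false_positive_rate(flags, outcomes)
    return rates

# Entirely made-up records for two hypothetical groups, "A" and "B".
sample = [
    ("A", True, False), ("A", False, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]
print(audit_by_group(sample))  # roughly {'A': 0.33, 'B': 0.67}

Divergent error rates of this kind are one signal an independent auditor could flag before a tool is relied upon in policing decisions.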
Article I, Section 1 guarantees all Pennsylvanians an independent constitutional right to protect their reputation; in the approaching era of algorithmic governance, upholding that right requires transparency, fairness, and accountability.
Conclusion
The growth of algorithmic classifications changes how the government sees and treats its citizens. Today’s predictive policing, risk assessments, and other AI tools that assign reputational labels affecting surveillance and detention are error-prone, susceptible to historical racial biases, and can be used without meaningful procedural protections.
However, the Pennsylvania Constitution may offer a strong safeguard against these harms. As algorithmic tools become ingrained in policing, Pennsylvania courts and legislators must decide whether an individual’s right to reputation should serve as a shield against an algorithm’s irrebuttable presumption, which, if left unchallenged, can lead to real and tangible consequences.
[1] Elena Kagan, J., Harvard Law School Class Address (May 22, 2013); see, e.g., U.S. v. Jones, 565 U.S. 400, 427 (2012) (Alito, J., concurring in judgment) (“But technology can change [privacy] expectations. Dramatic technological change may lead to periods in which popular expectations are in flux and may ultimately produce significant changes in popular attitudes”).
[2] Pa. Const. art. I, § 1.
[3] Meas v. Johnson, 39 A. 562, 563 (Pa. 1898).
[4] Paul v. Davis, 424 U.S. 693, 711–12 (1976) (establishing the “stigma-plus” doctrine).
[5] R. v. Dept. of Pub. Welfare, 636 A.2d 142, 149 (Pa. 1994).
[6] See, e.g., Balleta v. Spadoni, 47 A.3d 183 (Pa. Commw. 2012); Hatchard v. Westinghouse Broadcasting Co., 532 A.2d 346 (Pa. 1987); Thomas v. Kane, 2016 WL 6081868 (Pa. Commw. Oct. 17, 2016).
[7] Perhaps most notably, the right to reputation was a vehicle to strike down the Sex Offender Registration and Notification Act (SORNA) as applied to juvenile offenders. See In re J.B., 107 A.3d 1, 19–20 (Pa. 2014).
[8] Sarah Valentine, Impoverished Algorithms: Misguided Governments, Flawed Technologies, and Social Control, 46 Fordham Urb. L.J. 364, 371 (quoting Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, p. 38 (2017)).
[9] Ben Winters, Layered Opacity: Criminal Legal Technology Exacerbates Disparate Impact Cycles and Prevents Trust, 12 J. Nat’l Sec. L. & Pol’y 327, 337–340 (2022); David Freeman Engstrom & Daniel E. Ho, Algorithmic Accountability in the Administrative State, 37 Yale J. on Reg. 800, 821 (describing AI-based tools as “black box[es]” which may erode overall accountability); see also Kim Lyons, Fetterman joins lawmakers’ letter urging DOJ to stop funding so-called ‘predictive’ policing tools, Pa. Capital-Star (Jan. 29, 2024).
[10] Jeremy E. Abay, Simon Says Protect My Reputation: Understanding Pennsylvania’s Constitutional Right to Reputation, 94 Pa. B. A. Q. 68, 71 (Apr. 2023).
[11] Meas, supra note 3.
[12] Simon v. Commonwealth, 659 A.2d 631 (Pa. Commw. 1995); Abay, supra note 10, at 69–70.
[13] Hunter v. Port Auth., 419 A.2d 631 (Pa. Super. 1980).
[14] In re J.B., 107 A.3d 1 (Pa. 2014).
[15] Winters, supra note 9, at 337; Rashida Richardson, Defining and Demystifying Automated Decision Systems, 81 Md. L. Rev. 785, 826–27 (2022).
[16] Namrata Kakade, Sloshing Through the Factbound Morass of Reasonableness: Predictive Algorithms, Racialized Policing, and Fourth Amendment Use of Force, 88 Geo. Wash. L. Rev. 788, 797 (2020).
[17] Renata M. O’Donnell, Challenging Racist Predictive Policing Algorithms Under the Equal Protection Clause, 94 N.Y.U. L. Rev. 544, 561 (2019).
[18] Winters, supra note 9, at 334 n.27; Fleur G. Oké, The Minority Report: How the Use of Data in Law, 63 How. L.J. 87, 105 (discussing the “runaway feedback loops” that occur from predictive policing).
[19] Kakade, supra note 16, at 796.
[20] Id.; see also Matthew D. Zampa, Caught in the Crosshairs: Predictive Policing and the Use of Force, 59 U.S.F. L. Rev. 371, 381 (2025) (“The records . . . are then ‘fed’ to software, which sorts the data, identifies patterns, and returns a result that reflects the department’s preexisting racial biases”).
[21] Elizabeth E. Joh, Police Technology Experiments, 125 Colum. L. Rev. F. 1, 10 (Jan. 31, 2025).
[22] Id. at 41–42; see also Andrew Keane Woods, Robophobia, 93 U. Colo. L. Rev. 51, 51–52 (“We are entering an age when one of the most important policy questions will be how and where to deploy machine decision-makers”).
[23] O’Donnell, supra note 17, at 562; Richardson, supra note 15, at 832.
[24] Winters, supra note 9, at 328; Artificial Intelligence and Criminal Justice Final Report, U.S. Dep’t of Just., p. 65 (Dec. 3, 2024).
[25] Id.
[26] See supra notes 4–5.
[27] Artificial Intelligence and Criminal Justice Final Report, U.S. Dep’t of Just., p. 50 (Dec. 3, 2024) (“Agencies should also ensure a central role for humans in choosing which data to use as inputs, deciding which metric to use for evaluating accuracy . . . and determining which practices to implement in response to predictions” (emphasis added)).
[28] Oké, supra note 18, at 109.
[29] Artificial Intelligence and Criminal Justice Final Report, U.S. Dep’t of Just., p. 65 (Dec. 3, 2024).
[30] Gambone v. Commonwealth, 101 A.2d 634, 637 (Pa. 1954).
[31] Winters, supra note 9, at 347; see generally Artificial Intelligence and Criminal Justice Final Report, U.S. Dep’t of Just. pp. 67–69 (Dec. 3, 2024).
[32] Kakade, supra note 16, at 798.
