Minority Report: Fact or fiction? Can we actually predict violent behavior?

Editorial Assistants: Elisabeth Höhne and Zoey Chapman.

Note: An earlier version of this article was published in the Dutch edition of In-Mind.

More than two decades ago, the film Minority Report offered a visionary glimpse into a future of self-driving cars, facial recognition, and the possibility of predicting crimes before they occur. Some of these technologies have since become reality, but how close are we really to predicting violent behavior? This article reflects on the science of predicting violent behavior, from past to present. 

Figure 1. Criminal or lovely old lady?

The necessity of prediction 

Accurately predicting and preventing violent crimes could save lives. Consider, for example, the tragic cases of women killed by men with a history of severe violence, or of other victims of violent acts. Such tragedies raise the pressing question: could we have prevented these crimes by predicting the perpetrator’s behavior? 

The fascination with predicting criminal behavior is not new. In the 19th century, Italian criminologist Cesare Lombroso introduced the idea of the “born criminal.” He claimed that criminality was genetically determined. According to Lombroso, criminals had several distinctive physical features, such as asymmetrical faces and abnormal earlobes. He drew these conclusions based on research involving thousands of prisoners. Although his theory is now considered pseudoscientific [1], it laid the foundation for the idea that criminal behavior might be predictable. Lombroso’s theory was a product of its time, influenced by the contemporary understanding of genetics and biology. Today, we know that criminality cannot be reduced to physical traits: it involves a complex interplay of biological, psychological, and social factors.

Accuracy of predictions by forensic experts 

Forensic behavioral experts are often asked to assess the risk of reoffending, known as recidivism. Historically, however, the predictive accuracy of their clinical judgment has been around 33%, which is worse than the 50% expected from random guessing [2-4]. This low accuracy raised questions about the effectiveness of forensic assessments and the methods used. 

Forensic psychologists therefore began searching for ways to support and improve predictions [5, 6]. To this end, they gathered information from large numbers of convicted individuals and examined who reoffended and who did not. In essence, they followed Lombroso’s approach, but this time focused on behavioral characteristics rather than physical traits. These studies identified a set of social and behavioral characteristics associated with reoffending [5]. 

Based on these characteristics, checklists called risk assessment instruments were developed. These instruments list the risk factors (as these characteristics are called) and use them to estimate the risk of recidivism [7]. Some risk assessment instruments are designed to predict specific types of criminal behavior, such as violent behavior, sexual offenses, or stalking. There are also instruments tailored for specific types of offenders, such as female offenders or offenders with intellectual disabilities. In practice, these assessments are used not only to estimate the risk of reoffending but also—and especially—to determine how intensively someone should be treated and which risk factors the treatment should address. 
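
To make the logic of such an instrument concrete, here is a minimal, purely illustrative sketch in Python: each risk factor is rated, a weighted sum is computed, and the total is mapped onto a risk band. The factors, weights, and cut-offs are invented for illustration and do not correspond to any existing instrument.

```python
# Purely illustrative sketch of how an actuarial risk assessment instrument
# arrives at a score. Real instruments use validated items, weights, and
# cut-offs; everything below is made up.

RISK_ITEMS = {
    "prior_violent_convictions": 2.0,   # hypothetical weights
    "early_onset_of_offending": 1.5,
    "substance_abuse_problems": 1.0,
    "unstable_employment": 0.5,
}

def score_case(ratings):
    """Rate each item 0 (absent), 1 (partially present), or 2 (clearly present)."""
    total = sum(weight * ratings.get(item, 0) for item, weight in RISK_ITEMS.items())
    if total < 3:
        band = "low"
    elif total < 6:
        band = "moderate"
    else:
        band = "high"
    return total, band

print(score_case({"prior_violent_convictions": 2, "substance_abuse_problems": 1}))
# -> (5.0, 'moderate') under these invented weights and cut-offs
```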

Despite their widespread use, these risk assessment instruments achieve an accuracy of only 60–70% [8]. This means that although they outperform unstructured clinical judgment and are useful for treatment planning and risk management, they are far from perfect. The challenge remains to further refine these instruments to improve their predictive power.

(Neuro)biological predictors 

About a century and a half after Lombroso, scientists are once again increasingly interested in using (neuro)biological features to predict violent behavior. Research has identified certain neurobiological mechanisms associated with aggressive behavior, but translating these findings into predictions for individuals remains challenging due to the complexity of the human brain and ethical considerations [9]. 

Nevertheless, neurobiological evidence is increasingly used in courtrooms. This raises the question: do brain abnormalities automatically explain observed behavior? No, because brain functioning is complex: abnormalities are not always present, and when they are, they are not always detectable. For example, the brain of John Wayne Gacy, better known as the “killer clown,” who raped and murdered at least 33 boys and young men, was examined after his death, and no abnormalities were found. 

Despite numerous studies, a biomarker for predicting aggressive behavior based on brain characteristics has not yet been found. However, research suggests that combining neurobiological data with existing risk factors can improve predictive accuracy [10, 11]. This opens the possibility that future risk assessment instruments will integrate both behavioral and biological data to provide a more holistic approach to risk prediction. Artificial intelligence (AI) may help facilitate this integration. 
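
The kind of incremental gain reported in [10, 11] can be illustrated with a small, entirely simulated sketch: a statistical model is fitted once on conventional risk factors alone and once with an added, hypothetical brain-derived measure, and the predictive accuracy (AUC) of the two versions is compared. The data, feature names, and effect sizes below are invented.

```python
# Simulated illustration: does adding a (hypothetical) neurobiological measure
# to conventional risk factors improve predictive accuracy (AUC)?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
behavioral = rng.normal(size=(n, 3))   # stand-ins for e.g. prior offenses, age at onset, substance use
neuro = rng.normal(size=(n, 1))        # stand-in for a brain-derived measure
# Simulated outcome: both kinds of features carry some signal
logit = 0.8 * behavioral[:, 0] + 0.5 * neuro[:, 0]
reoffended = rng.binomial(1, 1 / (1 + np.exp(-logit)))

auc_base = cross_val_score(LogisticRegression(), behavioral, reoffended,
                           cv=5, scoring="roc_auc").mean()
auc_combined = cross_val_score(LogisticRegression(), np.hstack([behavioral, neuro]),
                               reoffended, cv=5, scoring="roc_auc").mean()
print(f"AUC, risk factors only: {auc_base:.2f}")
print(f"AUC, risk factors + neuro measure: {auc_combined:.2f}")
```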

The use of AI in predicting violence 

Since algorithms using AI are already used for predictive policing, we might not be so far from using them for forensic risk assessment. A well-known example is the system PredPol (Predictive Policing), which has been used in the United States since 2011 to predict where and when crime is likely to occur based on historical crime data [12]. So, while risk assessment instruments and biomarkers focus on who might be a potential offender, predictive policing focuses on where and when a crime might occur. 
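
As a rough intuition for place-based prediction (not PredPol's actual algorithm, which draws on models originally developed for earthquake aftershocks), one can imagine counting historical incidents per map grid cell and flagging the busiest cells for extra attention. The toy sketch below, with invented coordinates, shows only this bare idea.

```python
# Deliberately naive sketch of place-based ("where and when") prediction:
# count past incidents per grid cell and flag the busiest cells as hotspots
# for the next period. Coordinates are invented example data.
from collections import Counter

past_incidents = [(2, 3), (2, 3), (2, 3), (5, 1), (5, 1), (0, 0), (2, 4)]

def top_hotspots(incidents, k=2):
    counts = Counter(incidents)
    return [cell for cell, _ in counts.most_common(k)]

print(top_hotspots(past_incidents))  # -> [(2, 3), (5, 1)]
```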

Another example is Palantir Technologies, which develops advanced data-analysis platforms used by governments and security agencies worldwide to identify potential threats before they materialize. Their best-known platform, Gotham, allows users to integrate and analyze large amounts of data, such as criminal records, social networks, location data, and even camera footage [13]. Through pattern recognition and machine learning, the system attempts to detect signals of increased risk of criminal or violent behavior. 

Are we really going to entrust decisions about the risk of future violence to a computer? To what extent are AI systems error-free (see e.g., [14])? A widely criticized example is the automated risk assessment program COMPAS [15, 16], used by judges in the United States to inform decisions such as early release. According to journalists [17], COMPAS allegedly discriminates on the basis of ethnicity by assigning higher risk scores to people of color than to White defendants. But why would people of certain ethnicities receive higher scores? Investigative journalists suspect that the bias stems from the data used to train the system: if the historical data reflect existing biases, for example because certain groups were policed or convicted more often, the algorithm learns and reproduces exactly those biases [18, 19]. This mechanism is known as “machine bias” [20]. Importantly, however, this criticism has since been refuted in scientific research, and COMPAS’s methodology does not show ethnic bias caused by machine learning [21]. 
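
How biased training data can propagate into biased scores is shown in the small simulation below. Everything in it is invented and has no connection to COMPAS or its data; it only illustrates the general mechanism: the protected attribute itself is not a model input, but a correlated proxy (say, neighborhood) is, and the historical labels are skewed against one group.

```python
# Stylized simulation of "machine bias": skewed historical labels plus a proxy
# feature are enough for a model to assign systematically higher scores to one
# group, even though the underlying risk is identical across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.binomial(1, 0.5, n)                  # protected attribute (not a model input)
risk = rng.normal(size=n)                        # "true" risk, identical across groups
proxy = group + rng.normal(scale=0.5, size=n)    # e.g. neighborhood, correlated with group
# Historical label: a biased process flags group 1 more often at the same risk level
label = rng.binomial(1, 1 / (1 + np.exp(-(risk + 0.8 * group))))

model = LogisticRegression().fit(np.column_stack([risk, proxy]), label)
scores = model.predict_proba(np.column_stack([risk, proxy]))[:, 1]
print(f"mean predicted risk, group 0: {scores[group == 0].mean():.2f}")
print(f"mean predicted risk, group 1: {scores[group == 1].mean():.2f}")
# Group 1 receives higher scores purely because the training labels were biased.
```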

What about human bias, then? It certainly exists as well. We know that professionals also have prejudices [22], although they are often blind to their own biases [23]. For example, forensic experts conducting risk assessments for the defense tend to give lower scores than those assessing risk for the prosecution [24]. Ideally, this could be addressed by creating a so-called “centaur” [25], combining AI and human expertise to reduce the risk of errors and false positives [26]. This could potentially reduce bias and increase safety by bringing together (neuro)technological advances in risk assessment and human judgment. 

Ethical considerations 

Although these technologies show great promise, the use of AI in predicting violence raises ethical questions, particularly concerning privacy, bias, and accountability [27]. AI researchers point to the importance of “fairness-aware machine learning,” in which developers explicitly check and correct for biases in the data and algorithms used to train AI [28]. To foster trust, it is essential that such systems are developed and used in a transparent and fair manner. The European Union’s AI Act is a step in the right direction for regulating the use of AI and preventing misuse. This legislation sets strict requirements for transparency, safety, and reliability in AI systems, helping to protect individual rights and strengthen trust in these technologies. 
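
What such a check might look like in its simplest form: compute an error metric (here, the false-positive rate) separately per group and compare. The data, threshold, and metric choice below are invented for illustration; in practice several competing fairness criteria exist and they cannot always be satisfied simultaneously.

```python
# Minimal sketch of a fairness-aware check: compare false-positive rates
# across groups at a chosen decision threshold. All data here are simulated.
import numpy as np

def false_positive_rate(y_true, flagged):
    negatives = y_true == 0
    return flagged[negatives].mean()

rng = np.random.default_rng(2)
n = 1000
y_true = rng.binomial(1, 0.3, n)                     # whether the person actually reoffended
group = rng.binomial(1, 0.5, n)
# Simulated risk scores that are (unfairly) shifted upward for group 1
scores = np.clip(0.3 * y_true + 0.1 * group + rng.normal(0.35, 0.15, n), 0, 1)

flagged = (scores >= 0.5).astype(int)                # "high risk" decision
for g in (0, 1):
    fpr = false_positive_rate(y_true[group == g], flagged[group == g])
    print(f"group {g}: false-positive rate {fpr:.2f}")
# A developer noticing a large gap could adjust thresholds or reweight the
# training data -- two of several debated mitigation strategies.
```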

How much privacy are we willing to sacrifice for a safer society? Ask yourself: if you had lost a loved one to a crime, would that trade-off be easier to make? Would every measure that could help prevent another innocent life from being lost feel justified? It is not easy to strike a balance between collective safety and individual rights to privacy and autonomy.

Ethical concerns arise when technologies such as facial recognition, location analysis, and even behavioral prediction could be applied without our consent. How long are data such as facial or location information stored? Who has access to these data, and how is misuse prevented? These questions touch on fundamental rights such as the right to privacy. Additionally, increased surveillance can easily lead to a “slippery slope” in which the boundaries of privacy are pushed ever further. Privacy experts warn that if we allow such data to be routinely collected and analyzed today, we may find ourselves in a society tomorrow where personal freedom is severely restricted [29]. 

Perhaps the most controversial ethical question is: should we intervene preventively if a risk is assessed as high? And how accurate must a system be before we treat someone as a potential offender? What if we can predict behavior reasonably well? Is it ethical to arrest someone before they have committed a crime? In the film Minority Report, the viewer sees what happens when the system does not work properly. Predicting the future carries enormous responsibility. That is why ethicists emphasize that decisions based on algorithmic and AI-driven predictions of violence must always involve human judgment and oversight [30]. 

Conclusion 

Can we predict violent behavior? Certain behavioral characteristics allow us to make a reasonable estimate of the risk, and combining these with (neuro)biological measures improves risk assessment to some extent [9, 31]. But we are far from the scenario depicted in Minority Report. There are simply too many individual variables that can affect these estimates. So, although scientific progress has improved our accuracy, researchers still do not have all the pieces of the puzzle. The hope now lies in the so-called centaur: a combination of AI and human input to piece together the full puzzle of environmental, (neuro)biological, and behavioral characteristics and maximize predictive accuracy. We stand today at a crossroads where science, technology, and ethics converge. How we meet this challenge will determine what our future looks like: will relying on these forms of prediction become reality, or will it remain fiction? 

Bibliography 

[1] J. Berveling, "'My God, here is the skull of a murderer!' Physical appearance and violent crime," J. Hist. Neurosci., vol. 30, no. 2, pp. 141–154, Apr.–Jun. 2021, doi: 10.1080/0964704X.2020.1789937. 

[2] American Psychiatric Association, Clinical aspects of the violent individual (Task Force Report No. 8). Washington, DC, USA: American Psychiatric Association, 1974.   

[3] B. J. Ennis and T. R. Litwack, "Psychiatry and the presumption of expertise: Flipping coins in the courtroom," Calif. Law Rev., vol. 62, no. 3, pp. 693–752, 1974. 

[4] J. Monahan, The clinical prediction of violent behavior. Rockville, MD, USA: National Institute of Mental Health, 1981.  

[5] D. A. Andrews and J. Bonta, The psychology of criminal conduct, 4th ed. Newark, NJ, USA: LexisNexis, 2006.  

[6] D. A. Andrews, J. Bonta, and S. J. Wormith, "The recent past and near future of risk and/or need assessment," Crime Delinq., vol. 52, no. 1, pp. 7–27, Jan. 2006, doi: 10.1177/0011128705281756. 

[7] M. G. T. Ogonah, A. Seyedsalehi, D. Whiting, and S. Fazel, "Violence risk assessment instruments in forensic psychiatric populations: A systematic review and meta-analysis," Lancet Psychiatry, vol. 10, no. 10, pp. 780–789, Oct. 2023, doi: 10.1016/S2215-0366(23)00256-0. 

[8] J. L. Viljoen, L. M. Vargen, D. M. Cochrane, M. R. Jonnson, I. Goossens, and S. Monjazeb, "Do structured risk assessments predict violent, any, and sexual offending better than unstructured judgment? An umbrella review," Psychol. Public Policy Law, vol. 27, no. 1, pp. 79–97, Feb. 2021, doi: 10.1037/law0000299. 

[9] J. D. van Dongen, Y. Haveman, C. S. Sergiou, and O. Choy, "Neuroprediction of violence and criminal behavior using neuro-imaging data: From innovation to considerations for future directions," Aggress. Violent Behav., vol. 80, Art. no. 102008, 2024, doi: 10.1016/j.avb.2024.102008. 

[10] C. Delfin, H. Krona, P. Andiné, E. Ryding, M. Wallinius, and B. Hofvander, "Prediction of recidivism in a long-term follow-up of forensic psychiatric patients: Incremental effects of neuroimaging data," PLOS ONE, vol. 14, no. 5, Art. no. e0217127, May 2019, doi: 10.1371/journal.pone.0217127. 

[11] J. Zijlmans et al., "The predictive value of neurobiological measures for recidivism in delinquent male young adults," J. Psychiatry Neurosci., vol. 46, no. 2, pp. E271–E280, Mar. 2021, doi: 10.1503/jpn.200103. 

[12] K. Lum and W. Isaac, "To predict and serve?," Significance, vol. 13, no. 5, pp. 14–19, Oct. 2016, doi: 10.1111/j.1740-9713.2016.00960.x.  

[13] L. Ulbricht and S. Egbert, "In Palantir we trust? Regulation of data analysis platforms in public security," Big Data Soc., vol. 11, no. 3, pp. 1–15, Sep. 2024, doi: 10.1177/20539517241255108. 

[14] T. Greene, G. Shmueli, J. Fell, C. F. Lin, and H. W. Liu, "Forks over knives: Predictive inconsistency in criminal justice algorithmic risk assessment tools," J. R. Stat. Soc. A., vol. 185, no. 2, pp. S692–S723, Dec. 2022, doi: 10.1111/rssa.12966. 

[15] T. Brennan and W. Dieterich, "Correctional Offender Management Profiles for Alternative Sanctions (COMPAS)," in Handbook of recidivism risk/needs assessment tools,  J. P. Singh, D. G. Kroner, J. S. Wormith, and Z. Hamilton, Eds. Hoboken, NJ, USA: Wiley, 2018, pp. 49–75.  

[16] T. Räz, "Reliability gaps between groups in COMPAS dataset," arXiv: 2308.15243, Aug. 2023, doi: 10.48550/arXiv.2308.15243. 

[17] P. Patalay, "COMPAS: Unfair algorithm?," Medium, Nov. 2023. [Online]. Available: https://medium.com/@lamdaa/compas-unfair-algorithm-812702ed6a6a 

[18] T. Douglas, J. Pugh, I. Singh, J. Savulescu, and S. Fazel, "Risk assessment tools in criminal justice and forensic psychiatry: The need for better data," Eur. Psychiatry, vol. 42, pp. 134–137, May 2017, doi: 10.1016/j.eurpsy.2016.12.009. 

[19] J. Dressel and H. Farid, "The accuracy, fairness, and limits of predicting recidivism," Sci. Adv., vol. 4, no. 1, Art. no. eaao5580, Jan. 2018, doi: 10.1126/sciadv.aao5580. 

[20] J. Angwin, J. Larson, S. Mattu, and L. Kirchner, "Machine bias—There’s software used across the country to predict future criminals. And it’s biased against Blacks," ProPublica, May 2016. [Online]. Available: https://www.propublica.org/article/machine-bias-risk-assessments-in-crim...

[21] S. L. Desmarais and S. A. Zottola, "Violence risk assessment: Current status and contemporary issues," Marq. L. Rev., vol. 103, no. 3, pp. 793–817, 2020. 

[22] L. F. Meyer and A. M. Valenca, "Factors related to bias in forensic psychiatric assessments in criminal matters: A systematic review," Int. J. Law Psychiatry, vol. 75, Art. no. 101681, Mar.–Apr. 2021, doi: 10.1016/j.ijlp.2021.101681. 

[23] P. A. Zapf, J. Kukucka, S. M. Kassin, and I. E. Dror, "Cognitive bias in forensic mental health assessment: Evaluator beliefs about its nature and scope," Psychol. Public Policy Law, vol. 24, no. 1, pp. 1–10, Feb. 2018, doi: 10.1037/law0000153. 

[24] L. A. Guarnera, D. C. Murrie, and M. T. Boccaccini, "Why do forensic experts disagree? Sources of unreliability and bias in forensic psychology evaluations," Transl. Issues Psychol. Sci., vol. 3, no. 2, pp. 143–152, Jun. 2017, doi: 10.1037/tps0000114. 

[25] S. Saghafian and L. Idan, "Effective generative AI: The human-algorithm centaur," Harv. Data Sci. Rev., vol. 5 (Special Issue), Dec. 2024, doi: 10.1162/99608f92.19d78478. 

[26] I. Hefetz, "Mapping AI-ethics' dilemmas in forensic case work: To trust AI or not?," Forensic Sci. Int., vol. 350, Art. no. 111807, Sep. 2023, doi: 10.1016/j.forsciint.2023.111807. 

[27] A. V. Papachristos, "The promises and perils of crime prediction," Nat. Hum. Behav., vol. 6, no. 8, pp. 1038–1039, Aug. 2022, doi: 10.1038/s41562-022-01373-z. 

[28] C. Ferrara, G. Sellitto, F. Ferrucci, F. Palomba, and A. De Lucia, "Fairness-aware machine learning engineering: How far are we?," Empir. Softw. Eng., vol. 29, no. 1, Art. no. 9, Nov. 2024, doi: 10.1007/s10664-023-10402-y. 

[29] A. Acquisti, L. Brandimarte, and G. Loewenstein, "Privacy and human behavior in the age of information," Science, vol. 347, no. 6221, pp. 509–514, Jan. 2015, doi: 10.1126/science.aaa1465. 

[30] C. Cath, S. Wachter, B. Mittelstadt, M. Taddeo, and L. Floridi, "Artificial Intelligence and the 'good society': The US, EU, and UK approach," Sci. Eng. Ethics, vol. 24, no. 2, pp. 505–528, Apr. 2018, doi: 10.1007/s11948-017-9901-7. 

[31] D. Watts et al., "Predicting criminal and violent outcomes in psychiatry: A meta-analysis of diagnostic accuracy," Transl. Psychiatry, vol. 12, no. 1, Art. no. 470, Nov. 2022, doi: 10.1038/s41398-022-02214-3. 

Figure Source

Figure 1: https://www.bing.com/images/create
