#242 Measuring Automated Influence: Between Empirical Evidence and Ethical Values
Automated influence, delivered by digital targeting technologies such as targeted advertising, digital nudges, and recommender systems, has attracted significant interest from empirical researchers, on the one hand, and critical scholars and policymakers, on the other. In this paper, we argue for closer integration of these efforts. Critical scholars and policymakers, who focus primarily on the social, ethical, and political effects of these technologies, need empirical evidence to substantiate and motivate their concerns. However, existing empirical research on the effectiveness of these technologies (or lack thereof) neglects other morally relevant effects, effects that can be felt regardless of whether the technologies “work” in the sense of fulfilling the promises of their designers. Drawing on the ethics and policy literature, we enumerate a range of questions calling for empirical analysis (the outline of a research agenda bridging these fields) and issue a call to action for more empirical research that takes these urgent ethics and policy questions as its starting point.
#276 Blacklists and Redlists in the Chinese Social Credit System: Diversity, Flexibility, and Comprehensiveness
The Chinese Social Credit System (SCS, 社会信用体系) is a novel digital socio-technical credit system that aims to regulate societal behavior through reputational and material devices. Scholarship on the SCS has offered a variety of legal and theoretical perspectives, but little is known about its actual implementation. Here, we provide the first comprehensive empirical study of digital blacklists (listing “bad” behavior) and redlists (listing “good” behavior) in the Chinese SCS. Based on a unique data set of reputational blacklists and redlists in 30 Chinese provincial-level administrative divisions (ADs), we show the diversity, flexibility, and comprehensiveness of the SCS listing infrastructure. First, our results demonstrate that the Chinese SCS unfolds in a highly diversified manner: we find differences in accessibility, interface design, and credit information across provincial-level SCS blacklists and redlists. Second, SCS listings are flexible: during the COVID-19 outbreak, we observe the swift addition of blacklists and redlists that strengthen compliance with coronavirus-related norms and regulations. Third, the SCS listing infrastructure is comprehensive: overall, we identify 273 blacklists and 154 redlists across provincial-level ADs. Our blacklist and redlist taxonomy highlights that the SCS listing infrastructure prioritizes law enforcement and industry regulation; we also identify redlists that reward political and moral behavior. Our study substantiates the enormous scale and diversity of the Chinese SCS and puts the debate on its reach and societal impact on firmer ground. Finally, we initiate a discussion of the ethical dimensions of data-driven research on the SCS.
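To make the taxonomy-style counts concrete, the following is a minimal sketch of how lists might be tallied by type and policy domain. It is purely illustrative: the records, provinces, and domain labels are invented placeholders, not the paper's data set or code.

```python
# Hypothetical tally of SCS lists by type and policy domain.
# All records below are invented examples, not the study's data.
from collections import Counter

records = [
    {"province": "Beijing", "type": "blacklist", "domain": "law enforcement"},
    {"province": "Beijing", "type": "redlist", "domain": "moral behavior"},
    {"province": "Hubei", "type": "blacklist", "domain": "industry regulation"},
    {"province": "Hubei", "type": "blacklist", "domain": "COVID-19 compliance"},
]

# Count lists overall by type, then by (type, domain) pairs.
by_type = Counter(r["type"] for r in records)
by_domain = Counter((r["type"], r["domain"]) for r in records)

print(by_type)                  # Counter({'blacklist': 3, 'redlist': 1})
print(by_domain.most_common())  # most frequent (type, domain) pairs first
```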
#200 Blind Justice: Algorithmically Masking Race in Charging Decisions
A prosecutor’s decision to charge or dismiss a criminal case is a particularly high-stakes choice. There is concern, however, that these judgments may suffer from explicit or implicit racial bias, as do many other decisions in the criminal justice system. To reduce potential bias in charging decisions, we designed a system that algorithmically redacts race-related information from free-text case narratives. In a first-of-its-kind initiative, we deployed this system at a large American district attorney’s office to help prosecutors make race-obscured charging decisions, where it was used to review many incoming felony cases. We report on the design, efficacy, and impact of our tool for aiding equitable decision-making. We demonstrate that our redaction algorithm accurately obscures race-related information, making it difficult for a human reviewer to guess the race of a suspect while preserving the other information in the case narrative. In the jurisdiction we study, we found little evidence of disparate treatment in charging decisions even prior to deployment of our intervention; thus, as expected, our tool did not substantially alter charging rates. Nevertheless, our study demonstrates the feasibility of race-obscured charging and, more generally, highlights the promise of algorithms for bolstering equitable decision-making in the criminal justice system.
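The abstract does not specify how the redaction works, so the following is a minimal sketch of the general idea, not the authors' system: a lexicon-based masker that replaces race-signaling terms with neutral placeholders. The term list and placeholders are hypothetical, and a deployed system would need far broader coverage (names, neighborhoods, physical descriptors, and other proxies for race).

```python
# Illustrative sketch of race-obscuring text redaction (NOT the deployed system).
# Assumes a simple lexicon of race-signaling patterns mapped to placeholders.
import re

# Hypothetical lexicon; real coverage would be much broader.
RACE_TERMS = {
    r"\b(black|white|asian|hispanic|latino|latina)\b": "[RACE]",
    r"\b(dark|light)[- ]skinned\b": "[SKIN TONE]",
}

def redact(narrative: str) -> str:
    """Replace race-signaling terms with neutral placeholders."""
    for pattern, placeholder in RACE_TERMS.items():
        narrative = re.sub(pattern, placeholder, narrative, flags=re.IGNORECASE)
    return narrative

if __name__ == "__main__":
    text = "Officers stopped a dark-skinned male near the Hispanic grocery store."
    print(redact(text))
    # -> "Officers stopped a [SKIN TONE] male near the [RACE] grocery store."
```

A keyword lexicon alone would miss indirect proxies, which is presumably why the paper evaluates whether human reviewers can still guess a suspect's race after redaction.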