Oral Sessions

Oral Session 1, October 21, 9:15 am – Algorithmic Implications of Regulations
120. How Should AI Decisions Be Explained? Requirements for Explanations from the Perspective of European Law
Benjamin Fresz, Elena Dubovitskaya, Danilo Brajovic, Marco Huber and Christian Horz
215. Proxy Fairness under the European Data Protection Regulation and the AI Act: A Perspective of Sensitivity and Necessity
Ioanna Papageorgiou
203. You Still See Me: How Data Protection Supports the Architecture of AI Surveillance
Rui-Jie Yew, Lucy Qin and Suresh Venkatasubramanian
341. Do Responsible AI Artifacts Advance Stakeholder Goals? Four Key Barriers Perceived by Legal and Civil Stakeholders
Anna Kawakami, Daricia Wilkinson and Alexandra Chouldechova
Oral Session 2, October 21, 11:45 am – Large Language Model Alignment
111. Learning When Not to Measure: Theorizing Ethical Alignment in LLMs
William Rathje
256. A Qualitative Study on Cultural Hegemony and the Impacts of AI
Venetia Brown, Retno Larasati, Aisling Third and Tracie Farrell
455. PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models
Ahmed Agiza, Mohamed Mostagir and Sherief Reda
453. Legal Minds, Algorithmic Decisions: How LLMs Apply Constitutional Principles in Complex Scenarios
Carolina Camassa and Camilla Bignotti
Oral Session 3, October 21, 2:15 pm – Excluded Knowledges and Openness
150. What Makes An Expert? Reviewing How ML Researchers Define “Expert”
Mark Diaz and Angela Smith
57. Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance
Mohammad Tahaei, Daricia Wilkinson, Alisa Frik, Michael Muller, Ruba Abu-Salma and Lauren Wilcox
39. Decolonial AI Alignment: Openness, Viśeṣa-Dharma, and Including Excluded Knowledges
Kush R. Varshney
66. The Origin and Opportunities of Developers’ Perceived Code Accountability in Open Source AI Software Development
Sebastian Clemens Bartsch, Moritz Lother, Jan-Hendrik Schmidt, Martin Adam and Alexander Benlian
Oral Session 4, October 21, 3:45 pm – Governance and Implications
40. Pay Attention: a Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance
Franck Michel and Fabien Gandon
492. Acceptable Use Policies for Foundation Models
Kevin Klyman
227. An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence
Daniel Carpenter and Carson Ezell
336. The Societal Implications of Open Generative Models Through the Lens of Fact-Checking Organizations
Robert Wolfe and Tanushree Mitra
Oral Session 5, October 21, 5:00 pm – Responsible AI Tools and Transparency
228. Foundation Model Transparency Reports
Rishi Bommasani, Kevin Klyman, Shayne Longpre, Betty Xiong, Sayash Kapoor, Nestor Maslej, Arvind Narayanan and Percy Liang
447. Co-designing an AI Impact Assessment Report Template with AI Practitioners and AI Compliance Experts
Edyta Bogucka, Marios Constantinides, Sanja Šćepanović and Daniele Quercia
23. How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance
Kevin Wei, Carson Ezell, Nick Gabrieli and Chinmay Deshpande
397. The Ethico-Politics of Design Toolkits: Responsible AI Tools, From Big Tech Guidelines to Feminist Ideation Cards
Tomasz Hollanek
Oral Session 6, October 22, 11:45 am – Biases in Foundation Models I
317. Examining the Behavior of LLM Architectures Within the Framework of Standardized National Exams in Brazil
Marcelo Sartori Locatelli, Matheus Prado Miranda, Igor Joaquim Costa, Matheus Torres Prates, Victor Thome, Mateus Zaparoli, Tomas Lacerda, Adriana Pagano, Eduardo Rios Neto, Wagner Meira Jr. and Virgilio Almeida
359. Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval
Kyra Wilson and Aylin Caliskan
8. A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems
Jessy Xinyi Han, Andrew Miller, S. Craig Watkins, Christopher Winship, Fotini Christia and Devavrat Shah
323. Automate or Assist? The Role of Computational Models in Identifying Gendered Discourse in US Capital Trial Transcripts
Andrea W Wen-Yi, Kathryn Adamson, Nathalie Greenfield, Rachel Goldberg, Sandra Babcock, David Mimno and Allison Koenecke
Oral Session 7, October 22, 2:15 pm – Human-AI Relationships
79. Perception of experience influences altruism and perception of agency influences trust in human-machine interactions
Mayada Oudah, Kinga Makovi, Kurt Gray, Balaraju Battu and Talal Rahwan
367. What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users
Jana Schaich Borg and Hannah Read
200. Beyond Interaction: Investigating the Appropriateness of Human-AI Assistant Relationships
Arianna Manzini, Geoff Keeling, Lize Alberts, Shannon Vallor, Meredith Ringel Morris and Iason Gabriel
234. Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse
Borhane Blili-Hamelin, Leif Hancox-Li and Andrew Smart
Oral Session 8, October 22, 3:45 pm – Algorithms
86. Fairness in Reinforcement Learning: A Survey
Anka Reuel and Devin Ma
496. Nothing Comes Without Its World – Practical Challenges of Aligning LLMs to Situated Human Values through RLHF
Anne Arzberger, Stefan Buijsman, Maria Luce Lupetti, Alessandro Bozzon and Jie Yang
32. Algorithmic Decision-Making under Agents with Persistent Improvement
Tian Xie, Xuwei Tan and Xueru Zhang
487. When and Why is Persuasion Hard? A Computational Complexity Result
Zach Wojtowicz
Oral Session 9, October 22, 5:00 pm – Evaluating Risks and Harms
418. ExploreGen: Large Language Models for Envisioning the Uses and Risks of AI Technologies
Viviane Herdel, Sanja Šćepanović, Edyta Bogucka and Daniele Quercia
147. Red-Teaming for Generative AI: Silver Bullet or Security Theater?
Michael Feffer, Anusha Sinha, Wesley Deng, Zachary Lipton and Hoda Heidari
146. Gaps in the Safety Evaluation of Generative AI
Maribeth Rauh, Nahema Marchal, Arianna Manzini, Lisa Anne Hendricks, Ramona Comanescu, Canfer Akbulut, Tom Stepleton, Juan Mateos-Garcia, Stevie Bergman, Jackie Kay, Conor Griffin, Ben Bariach, Iason Gabriel, Verena Rieser, William Isaac and Laura Weidinger
433. Operationalizing content moderation “accuracy” in the Digital Services Act
Johnny Wei, Frederike Zufall and Robin Jia
Oral Session 10, October 23, 10:45 am – Biases in Foundation Models II
483. LLM Voting: Human Choices and AI Collective Decision-Making
Joshua C. Yang, Damian Dalisan, Marcin Korecki, Carina I. Hausladen and Dirk Helbing
424. Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis
Chahat Raj, Anjishnu Mukherjee, Aylin Caliskan, Antonios Anastasopoulos and Ziwei Zhu
426. Understanding Intrinsic Socioeconomic Biases in Large Language Models
Mina Arzaghi, Florian Carichon and Golnoosh Farnadi
400. Identifying Implicit Social Biases in Vision-Language Models
Kimia Hamidieh, Haoran Zhang, Walter Gerych, Thomas Hartvigsen and Marzyeh Ghassemi