Oral Sessions

Oral Session 1, October 21, 9:15 am – Algorithmic Implications of Regulations
[120] How Should AI Decisions Be Explained? Requirements for Explanations from the Perspective of European Law
[215] Proxy Fairness under the European Data Protection Regulation and the AI Act: A Perspective of Sensitivity and Necessity
[203] You Still See Me: How Data Protection Supports the Architecture of AI Surveillance
[341] Legal and Civil Perspectives on Responsible AI Artifacts: Current Limitations and Paths Towards Facilitating Multi-Actor Communication
Oral Session 2, October 21, 11:45 am – Large Language Model Alignment
[111] Learning When Not to Measure: Theorizing Ethical Alignment in LLMs
[256] A Qualitative Study on Cultural Hegemony and the Impacts of AI
[455] PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models
[453] Legal Minds, Algorithmic Decisions: How LLMs Apply Constitutional Principles in Complex Scenarios
Oral Session 3, October 21, 2:15 pm – Excluded Knowledges and Openness
[150] What Makes An Expert? Reviewing How ML Researchers Define “Expert”
[57] Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance
[39] Decolonial AI Alignment: Openness, Viśeṣa-Dharma, and Including Excluded Knowledges
[66] The Origin and Opportunities of Developers’ Perceived Code Accountability in Open Source AI Software Development
Oral Session 4, October 21, 3:45 pm – Governance and Implications
[40] Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance
[492] Acceptable Use Policies for Foundation Models
[227] An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence
[336] The Societal Implications of Open Generative Models Through the Lens of Fact-Checking Organizations
Oral Session 5, October 21, 5:00 pm – Responsible AI Tools and Transparency
[228] Foundation Model Transparency Reports
[447] Co-designing an AI Impact Assessment Report Template with AI Practitioners and AI Compliance Experts
[23] How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance
[397] The Ethico-Politics of Design Toolkits: Responsible AI Tools, From Big Tech Guidelines to Feminist Ideation Cards
Oral Session 6, October 22, 11:45 am – Biases in Foundation Models I
[317] Examining the Behavior of LLM Architectures Within the Framework of Standardized National Exams in Brazil
[359] Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval
[8] A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems
[323] Automate or Assist? The Role of Computational Models in Identifying Gendered Discourse in US Capital Trial Transcripts
Oral Session 7, October 22, 2:15 pm – Human-AI Relationships
[79] Perception of experience influences altruism and perception of agency influences trust in human-machine interactions
[367] What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users
[200] Beyond Interaction: Investigating the Appropriateness of Human-AI Assistant Relationships
[234] Unsocial Intelligence: An Investigation of the Assumptions of AGI Discourse
Oral Session 8, October 22, 3:45 pm – Algorithms
[86] Fairness in Reinforcement Learning: A Survey
[496] Nothing Comes Without Its World – Practical Challenges of Aligning LLMs to Situated Human Values through RLHF
[32] Algorithmic Decision-Making under Agents with Persistent Improvement
[487] AI & The Economics of Persuasion: A Computational Hardness Result
Oral Session 9, October 22, 5:00 pm – Evaluating Risks and Harms
[418] ExploreGen: Large Language Models for Envisioning the Uses and Risks of AI Technologies
[147] Red-Teaming for Generative AI: Silver Bullet or Security Theater?
[146] Gaps in the Safety Evaluation of Generative AI
[433] Operationalizing Content Moderation “Accuracy” in the Digital Services Act
Oral Session 10, October 23, 10:45 am – Biases in Foundation Models II
[483] LLM Voting: Human Choices and AI Collective Decision-Making
[424] Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis
[426] Understanding Intrinsic Socioeconomic Biases in Large Language Models
[400] Identifying Implicit Social Biases in Vision-Language Models