Oral Session 1, October 21, 9:15 am – Algorithmic Implications of Regulations
120 | How Should AI Decisions Be Explained? Requirements for Explanations from the Perspective of European Law
215 | Proxy Fairness under the European Data Protection Regulation and the AI Act: A Perspective of Sensitivity and Necessity
203 | You Still See Me: How Data Protection Supports the Architecture of AI Surveillance |
341 | Legal and Civil Perspectives on Responsible AI Artifacts: Current Limitations and Paths Towards Facilitating Multi-Actor Communication |
Oral Session 2, October 21, 11:45 am – Large Language Model Alignment
111 | Learning When Not to Measure: Theorizing Ethical Alignment in LLMs |
256 | A Qualitative Study on Cultural Hegemony and the Impacts of AI |
455 | PoliTune: Analyzing the Impact of Data Selection and Fine-Tuning on Economic and Political Biases in Large Language Models |
453 | Legal Minds, Algorithmic Decisions: How LLMs Apply Constitutional Principles in Complex Scenarios |
Oral Session 3, October 21, 2:15 pm – Excluded Knowledges and Openness
150 | What Makes An Expert? Reviewing How ML Researchers Define “Expert” |
57 | Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance |
39 | Decolonial AI Alignment: Openness, Viśeṣa-Dharma, and Including Excluded Knowledges |
66 | The Origin and Opportunities of Developers’ Perceived Code Accountability in Open Source AI Software Development |
Oral Session 4, October 21, 3:45 pm – Governance and Implications
40 | Pay Attention: A Call to Regulate the Attention Market and Prevent Algorithmic Emotional Governance
492 | Acceptable Use Policies for Foundation Models |
227 | An FDA for AI? Pitfalls and Plausibility of Approval Regulation for Frontier Artificial Intelligence |
336 | The Societal Implications of Open Generative Models Through the Lens of Fact-Checking Organizations |
Oral Session 5, October 21, 5:00 pm – Responsible AI Tools and Transparency
228 | Foundation Model Transparency Reports |
447 | Co-designing an AI Impact Assessment Report Template with AI Practitioners and AI Compliance Experts |
23 | How Do AI Companies “Fine-Tune” Policy? Examining Regulatory Capture in AI Governance |
397 | The Ethico-Politics of Design Toolkits: Responsible AI Tools, From Big Tech Guidelines to Feminist Ideation Cards |
Oral Session 6, October 22, 11:45 am – Biases in Foundation Models I
317 | Examining the Behavior of LLM Architectures Within the Framework of Standardized National Exams in Brazil |
359 | Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrieval |
8 | A Causal Framework to Evaluate Racial Bias in Law Enforcement Systems |
323 | Automate or Assist? The Role of Computational Models in Identifying Gendered Discourse in US Capital Trial Transcripts |
Oral Session 7, October 22, 2:15 pm – Human-AI Relationships
79 | Perception of experience influences altruism and perception of agency influences trust in human-machine interactions |
367 | What Is Required for Empathic AI? It Depends, and Why That Matters for AI Developers and Users |
200 | Beyond Interaction: Investigating the Appropriateness of Human-AI Assistant Relationships |
234 | Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse |
Oral Session 8, October 22, 3:45 pm – Algorithms
86 | Fairness in Reinforcement Learning: A Survey |
496 | Nothing Comes Without Its World – Practical Challenges of Aligning LLMs to Situated Human Values through RLHF |
32 | Algorithmic Decision-Making under Agents with Persistent Improvement |
487 | AI & The Economics of Persuasion: A Computational Hardness Result |
Oral Session 9, October 22, 5:00 pm – Evaluating Risks and Harms
418 | ExploreGen: Large Language Models for Envisioning the Uses and Risks of AI Technologies |
147 | Red-Teaming for Generative AI: Silver Bullet or Security Theater? |
146 | Gaps in the Safety Evaluation of Generative AI |
433 | Operationalizing content moderation “accuracy” in the Digital Services Act |
Oral Session 10, October 23, 10:45 am – Biases in Foundation Models II
483 | LLM Voting: Human Choices and AI Collective Decision-Making |
424 | Breaking Bias, Building Bridges: Evaluation and Mitigation of Social Biases in LLMs via Contact Hypothesis |
426 | Understanding Intrinsic Socioeconomic Biases in Large Language Models |
400 | Identifying Implicit Social Biases in Vision-Language Models |