Poster Sessions

Poster Session 1, October 21, 6:00 pm
28  What’s Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from a Perspective of Approximate Justice
43  Reflection of its Creators: Qualitative Analysis of General Public and Expert Perceptions of Artificial Intelligence
48  Habemus a Right to an Explanation: so What? – A Framework on Transparency-Explainability Functionality and Tensions in the EU AI Act
77  A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness
80  Quantifying gendered citation imbalance in computer science conferences
93  PPS: Personalized Policy Summarization for Explaining Sequential Behavior of Autonomous Agents
101  On the Trade-offs between Adversarial Robustness and Actionable Explanations
108  “I don’t see myself represented here at all”: User Experiences of Stable Diffusion Outputs Containing Representational Harms across Gender Identities and Nationalities
123  Racial and Neighborhood Disparities in Legal Financial Obligations in Jefferson County, Alabama
134  Estimating Environmental Cost Throughout Model’s Adaptive Life Cycle
136  The Impact of Responsible AI Research on Innovation and Development
143  Public Attitudes on Performance for Algorithmic and Human Decision-Makers
174  “Democratizing AI” and the Concern of Algorithmic Injustice
178  Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech Recognition for Children vs. Adults
213  CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models
233  Introducing the AI Governance and Regulatory Archive (AGORA): An Analytic Infrastructure for Navigating the Emerging AI Governance Landscape
241  Enhancing Equitable Access to AI in Housing and Homelessness System of Care through Federated Learning
258  The Problems with Proxies: Making Data Work Visible through Requester Practices
261  Scaling Laws Do Not Scale
285  Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation
287  Compassionate AI for Moral Decision-Making, Health, and Well-Being
288  Formal Ethical Obligations in Reinforcement Learning Agents: Verification and Policy Updates
296  Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation
306  Contributory injustice, epistemic calcification and the use of AI systems in healthcare
334  Mitigating urban-rural disparities in contrastive representation learning with satellite imagery
339  ML-EAT: A Multilevel Embedding Association Test for Interpretable and Transparent Social Science
345  Observing Context Improves Disparity Estimation when Race is Unobserved
371  Algorithmic Fairness From the Perspective of Legal Anti-discrimination Principles
389  Reducing Biases towards Minoritized Populations in Medical Curricular Content via Artificial Intelligence for Fairer Health Outcomes
395  Tracing the Evolution of Information Transparency for OpenAI’s GPT Models Through a Biographical Approach
415  MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks
417  Social Scoring Systems for Behavioral Regulation: An Experiment on the Role of Transparency in Determining Perceptions and Behaviors
420  Trustworthy Social Bias Measurement
434  A Conceptual Framework for Ethical Evaluation of Machine Learning Systems
442  The PPOu Framework: A Structured Approach for Assessing the Likelihood of Malicious Use of Advanced AI Systems
450  Epistemic Injustice in Generative AI
451  Disengagement through Algorithms: How Traditional Organizations Aim for Experts’ Satisfaction
461  Decoding Multilingual Moral Preferences: Unveiling LLM’s Biases Through the Moral Machine Experiment
465  Stable Diffusion Exposed: Gender Bias from Prompt to Image
472  Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents
Poster Session 2, October 22, 6:00 pm
26  Strategies for Increasing Corporate Responsible AI Prioritization
29  On Feasibility of Intent Obfuscating Attacks
31  Non-linear Welfare-Aware Strategic Learning
45  Gender in pixels: pathways to non-binary representation in Computer Vision
60  Responsible Reporting for Frontier AI Development
64  Coordinated Disclosure for AI: Beyond Security Vulnerabilities
89  The supply chain capitalism of AI: A call to (re)think algorithmic harms and resistance
95  APPRAISE: a Governance Framework for Innovation with Artificial Intelligence Systems
106  Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation
110  Individual Fairness in Graphs Using Local and Global Structural Information
117  A Human-in-the-Loop Fairness-Aware Model Selection Framework for Complex Fairness Objective Landscapes
118  Algorithms and Recidivism: A Multi-disciplinary Systematic Review
124  Introducing ELLIPS: an ethics-centered approach to research on LLM-based inference of psychiatric conditions
137  Face the Facts: Using Face Averaging to Visualize Gender-by-Race Bias in Facial Analysis Algorithms
145  A Relational Justification of AI Democratization
153  SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata
163  What’s Your Stake in Sustainability of AI?: An Informed Insider’s Guide
165  AI debates aren’t binary — they’re plural
175  Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach
189  Beyond Participatory AI
196  PICE: Polyhedral Complex Informed Counterfactual Explanations
201  Fairness in AI-Based Mental Health: Clinician Perspectives and Bias Mitigation
244  Foundations for Unfairness in Anomaly Detection – Case Studies in Facial Imaging Data
247  Medical AI, Categories of Value Conflict, and Conflict Bypasses
251  Lessons from clinical communications for AI systems
253  Virtual Assistants Are Unlikely to Reduce Patient Non-Disclosure
267  Uncovering the gap: Challenging the Agential Nature of AI Responsibility Problems
275  Hidden or Inferred: Fair Learning-To-Rank With Unknown Demographics
280  LLMs and Memorization: On Quality and Specificity of Copyright Compliance
294  Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art
299  Annotator in the Loop: A Case Study of In-Depth Rater Engagement to Create a Prosocial Benchmark Dataset
311  Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors
342  Representation Bias of Adolescents in AI: A Bilingual, Bicultural Study
355  On The Stability of Moral Preferences: A Problem with Computational Elicitation Methods
399  As an AI Language Model, “Yes I Would Recommend Calling the Police”: Norm Inconsistency in LLM Decision-Making
414  LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins
419  Why Am I Still Seeing This: Measuring the Effectiveness of Ad Controls and Explanations in AI-Mediated Ad Targeting Systems
457  AIDE: Antithetical, Intent-based, and Diverse Example-Based Explanations
470  Estimating Weights of Reasons using Metaheuristics: A Hybrid Approach to Machine Ethics
490  Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community
Poster Session 3, October 23, 9:00 am
5  What to Trust When We Trust Artificial Intelligence
19  Are Large Language Models Moral Hypocrites? A study based on Moral Foundations
78  Breaking the Global North Stereotype: A Global South-centric Benchmark Dataset for Auditing and Mitigating Biases in Facial Recognition Systems
83  Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust
113  Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations
132  All Too Human: Understanding and Mitigating the Risk from Anthropomorphic AI
139  Representation Magnitude has a Liability to Privacy Vulnerability
164  Afrofuturist Values for the Metaverse
191  Anticipating the risks and benefits of counterfactual world simulation models
206  Surviving in Diverse Biases: Unbiased Dataset Acquisition in Online Data Market for Fair Model Training
214  Public vs Private Bodies: Who Should Run Which Advanced AI Audits and Evaluations? Evidence from Nine Case Studies of High-Risk Industries
229  Ecosystem Graphs: Documenting the Foundation Model Supply Chain
252  How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies
277  Human-Centered AI Applications for Canada’s Immigration Settlement Sector
295  Compute North vs. Compute South: The Uneven Possibilities of Compute-based AI Governance Around the Globe
322  Interpretations, Representations, and Stereotypes of Caste within Text-to-Image Generators
324  Ontology of Belief Diversity: A Community-Based Epistemological Approach
332  Algorithm-Assisted Decision Making and Racial Disparities in Housing: A Study of the Allegheny Housing Assessment Tool
340  On the Pros and Cons of Active Learning for Moral Preference Elicitation
344  A Model- and Data-Agnostic Debiasing System for Achieving Equalized Odds
347  Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits
360  Legitimating Emotion Tracking Technologies in Driver Monitoring Systems
373  Automating Accountability Mechanisms in the Judicial System Using LLMs: Opportunities and Challenges
374  AI Failure Loops in Feminized Labor: Understanding the Interplay of Workplace AI and Occupational Devaluation
388  Dataset Scale and Societal Consistency Mediate Facial Impression Bias in Vision-Language AI
392  Not Oracles of the Battlefield: Safety Considerations for AI-Based Military Decision Support Systems
413  Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety
427  Measuring Human-AI Value Alignment in Large Language Models
431  Sponsored is the New Organic: Implications of Sponsored Results on Quality of Search Results in the Amazon Marketplace
444  Navigating Governance Paradigms: A Cross-Regional Comparative Study of Generative AI Governance Processes & Principles