Poster Session 1, October 21, 6:00 pm
28 | What’s Distributive Justice Got to Do with It? Rethinking Algorithmic Fairness from a Perspective of Approximate Justice
43 | Reflection of its Creators: Qualitative Analysis of General Public and Expert Perceptions of Artificial Intelligence |
48 | Habemus a Right to an Explanation: so What? – A Framework on Transparency-Explainability Functionality and Tensions in the EU AI Act |
77 | A Formal Account of Trustworthiness: Connecting Intrinsic and Perceived Trustworthiness |
80 | Quantifying gendered citation imbalance in computer science conferences |
93 | PPS: Personalized Policy Summarization for Explaining Sequential Behavior of Autonomous Agents |
101 | On the Trade-offs between Adversarial Robustness and Actionable Explanations |
108 | “I don’t see myself represented here at all”: User Experiences of Stable Diffusion Outputs Containing Representational Harms across Gender Identities and Nationalities |
123 | Racial and Neighborhood Disparities in Legal Financial Obligations in Jefferson County, Alabama |
134 | Estimating Environmental Cost Throughout Model’s Adaptive Life Cycle |
136 | The Impact of Responsible AI Research on Innovation and Development |
143 | Public Attitudes on Performance for Algorithmic and Human Decision-Makers |
174 | “Democratizing AI” and the Concern of Algorithmic Injustice |
178 | Kid-Whisper: Towards Bridging the Performance Gap in Automatic Speech Recognition for Children VS. Adults |
213 | CIVICS: Building a Dataset for Examining Culturally-Informed Values in Large Language Models |
233 | Introducing the AI Governance and Regulatory Archive (AGORA): An Analytic Infrastructure for Navigating the Emerging AI Governance Landscape |
241 | Enhancing Equitable Access to AI in Housing and Homelessness System of Care through Federated Learning |
258 | The Problems with Proxies: Making Data Work Visible through Requester Practices |
261 | Scaling Laws Do Not Scale |
285 | Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation |
287 | Compassionate AI for Moral Decision-Making, Health, and Well-Being |
288 | Formal Ethical Obligations in Reinforcement Learning Agents: Verification and Policy Updates |
296 | Simulating Policy Impacts: Developing a Generative Scenario Writing Method to Evaluate the Perceived Effects of Regulation |
306 | Contributory injustice, epistemic calcification and the use of AI systems in healthcare |
334 | Mitigating urban-rural disparities in contrastive representation learning with satellite imagery |
339 | ML-EAT: A Multilevel Embedding Association Test for Interpretable and Transparent Social Science |
345 | Observing Context Improves Disparity Estimation when Race is Unobserved |
371 | Algorithmic Fairness From the Perspective of Legal Anti-discrimination Principles |
389 | Reducing Biases towards Minoritized Populations in Medical Curricular Content via Artificial Intelligence for Fairer Health Outcomes |
395 | Tracing the Evolution of Information Transparency for OpenAI’s GPT Models Through a Biographical Approach |
415 | MoJE: Mixture of Jailbreak Experts, Naive Tabular Classifiers as Guard for Prompt Attacks |
417 | Social Scoring Systems for Behavioral Regulation: An Experiment on the Role of Transparency in Determining Perceptions and Behaviors |
420 | Trustworthy Social Bias Measurement |
434 | A Conceptual Framework for Ethical Evaluation of Machine Learning Systems |
442 | The PPOu Framework: A Structured Approach for Assessing the Likelihood of Malicious Use of Advanced AI Systems |
450 | Epistemic Injustice in Generative AI |
451 | Disengagement through Algorithms: How Traditional Organizations Aim for Experts’ Satisfaction |
461 | Decoding Multilingual Moral Preferences: Unveiling LLM’s Biases Through the Moral Machine Experiment |
465 | Stable Diffusion Exposed: Gender Bias from Prompt to Image |
472 | Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents |
Poster Session 2, October 22, 6:00 pm
26 | Strategies for Increasing Corporate Responsible AI Prioritization |
29 | On Feasibility of Intent Obfuscating Attacks |
31 | Non-linear Welfare-Aware Strategic Learning |
45 | Gender in pixels: pathways to non-binary representation in Computer Vision |
60 | Responsible Reporting for Frontier AI Development |
64 | Coordinated Disclosure for AI: Beyond Security Vulnerabilities |
89 | The supply chain capitalism of AI: A call to (re)think algorithmic harms and resistance |
95 | APPRAISE: a Governance Framework for Innovation with Artificial Intelligence Systems |
106 | Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation |
110 | Individual Fairness in Graphs Using Local and Global Structural Information |
117 | A Human-in-the-Loop Fairness-Aware Model Selection Framework for Complex Fairness Objective Landscapes |
118 | Algorithms and Recidivism: A Multi-disciplinary Systematic Review |
124 | Introducing ELLIPS: an ethics-centered approach to research on LLM-based inference of psychiatric conditions |
137 | Face the Facts: Using Face Averaging to Visualize Gender-by-Race Bias in Facial Analysis Algorithms |
145 | A Relational Justification of AI Democratization |
153 | SoUnD Framework: Analyzing (So)cial Representation in (Un)structured (D)ata |
163 | What’s Your Stake in Sustainability of AI?: An Informed Insider’s Guide |
165 | AI debates aren’t binary — they’re plural |
175 | Do Generative AI Models Output Harm while Representing Non-Western Cultures: Evidence from A Community-Centered Approach |
189 | Beyond Participatory AI |
196 | PICE: Polyhedral Complex Informed Counterfactual Explanations |
201 | Fairness in AI-Based Mental Health: Clinician Perspectives and Bias Mitigation |
244 | Foundations for Unfairness in Anomaly Detection – Case Studies in Facial Imaging Data |
247 | Medical AI, Categories of Value Conflict, and Conflict Bypasses |
251 | Lessons from clinical communications for AI systems |
253 | Virtual Assistants Are Unlikely to Reduce Patient Non-Disclosure |
267 | Uncovering the gap: Challenging the Agential Nature of AI Responsibility Problems
275 | Hidden or Inferred: Fair Learning-To-Rank With Unknown Demographics |
280 | LLMs and Memorization: On Quality and Specificity of Copyright Compliance |
294 | Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art |
299 | Annotator in the Loop: A Case Study of In-Depth Rater Engagement to Create a Prosocial Benchmark Dataset |
311 | Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors |
342 | Representation Bias of Adolescents in AI: A Bilingual, Bicultural Study |
355 | On The Stability of Moral Preferences: A Problem with Computational Elicitation Methods |
399 | As an AI Language Model, “Yes I Would Recommend Calling the Police”: Norm Inconsistency in LLM Decision-Making |
414 | LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins |
419 | Why Am I Still Seeing This: Measuring the Effectiveness of Ad Controls and Explanations in AI-Mediated Ad Targeting Systems |
457 | AIDE: Antithetical, Intent-based, and Diverse Example-Based Explanations |
470 | Estimating Weights of Reasons using Metaheuristics: A Hybrid Approach to Machine Ethics |
490 | Misrepresented Technological Solutions in Imagined Futures: The Origins and Dangers of AI Hype in the Research Community |
Poster Session 3, October 23, 9:00 am
5 | What to Trust When We Trust Artificial Intelligence |
19 | Are Large Language Models Moral Hypocrites? A study based on Moral Foundations |
78 | Breaking the Global North Stereotype: A Global South-centric Benchmark Dataset for Auditing and Mitigating Biases in Facial Recognition Systems |
83 | Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust |
113 | Human vs. Machine: Behavioral Differences Between Expert Humans and Language Models in Wargame Simulations |
132 | All Too Human: Understanding and Mitigating the Risk from Anthropomorphic AI |
139 | Representation Magnitude has a Liability to Privacy Vulnerability |
164 | Afrofuturist Values for the Metaverse |
191 | Anticipating the risks and benefits of counterfactual world simulation models |
206 | Surviving in Diverse Biases: Unbiased Dataset Acquisition in Online Data Market for Fair Model Training |
214 | Public vs Private Bodies: Who Should Run Which Advanced AI Audits and Evaluations? Evidence from Nine Case Studies of High-Risk Industries |
229 | Ecosystem Graphs: Documenting the Foundation Model Supply Chain |
252 | How Are LLMs Mitigating Stereotyping Harms? Learning from Search Engine Studies |
277 | Human-Centered AI Applications for Canada’s Immigration Settlement Sector |
295 | Compute North vs. Compute South: The Uneven Possibilities of Compute-based AI Governance Around the Globe |
322 | Interpretations, Representations, and Stereotypes of Caste within Text-to-Image Generators |
324 | Ontology of Belief Diversity: A Community-Based Epistemological Approach |
332 | Algorithm-Assisted Decision Making and Racial Disparities in Housing: A Study of the Allegheny Housing Assessment Tool |
340 | On the Pros and Cons of Active Learning for Moral Preference Elicitation |
344 | A Model- and Data-Agnostic Debiasing System for Achieving Equalized Odds |
347 | Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits |
360 | Legitimating Emotion Tracking Technologies in Driver Monitoring Systems |
373 | Automating Accountability Mechanisms in the Judicial System Using LLMs: Opportunities and Challenges |
374 | AI Failure Loops in Feminized Labor: Understanding the Interplay of Workplace AI and Occupational Devaluation |
388 | Dataset Scale and Societal Consistency Mediate Facial Impression Bias in Vision-Language AI |
392 | Not Oracles of the Battlefield: Safety Considerations for AI-Based Military Decision Support Systems |
413 | Vernacularizing Taxonomies of Harm is Essential for Operationalizing Holistic AI Safety |
427 | Measuring Human-AI Value Alignment in Large Language Models |
431 | Sponsored is the New Organic: Implications of Sponsored Results on Quality of Search Results in the Amazon Marketplace |
444 | Navigating Governance Paradigms: A Cross-Regional Comparative Study of Generative AI Governance Processes & Principles |