
Invited talks

January 27th, 8:50 AM – 9:40 AM

How We Talk About AI (And Why It Matters)

Ryan Calo (University of Washington School of Law)
Chair: Vincent Conitzer
Abstract: How we talk about artificial intelligence matters. Not only do our rhetorical choices influence public expectations of AI; they also implicitly make the case for or against specific government interventions. Conceiving of AI as a global project to which each nation can contribute, for instance, suggests a different course of action than understanding AI as a “race” America cannot afford to lose. And just as inflammatory terms such as “killer robot” aim to catalyze limits on autonomous weapons, so too do the popular terms “ethics” and “governance” subtly argue for a lesser role for government in setting AI policy. How should we talk about AI? And what’s at stake with our rhetorical choices? This presentation explores the interplay between claims about AI and law’s capacity to channel AI in the public interest.

Ryan Calo is the Lane Powell and D. Wayne Gittinger Associate Professor at the University of Washington School of Law. He is a faculty co-director (with Batya Friedman and Tadayoshi Kohno) of the University of Washington Tech Policy Lab, a unique, interdisciplinary research unit that spans the School of Law, Information School, and Paul G. Allen School of Computer Science and Engineering. Professor Calo holds courtesy appointments at the University of Washington Information School and the Oregon State University School of Mechanical, Industrial, and Manufacturing Engineering.

January 27th, 5:40 PM – 6:30 PM

Guiding and Implementing AI

Susan Athey (Stanford University, Graduate School of Business and Economics)
Chair: Gillian Hadfield
Abstract: This talk will provide theoretical perspectives on how organizations should guide and implement AI in a way that is fair and that achieves the organization’s objectives. We first consider the role of insights from statistics and causal inference for this problem. We then extend the framework to incorporate considerations about designing the rewards for AI and human decision-makers, and designing tasks and authority to optimally combine AI and humans to achieve the most effective incentives.

Susan Athey is The Economics of Technology Professor at Stanford Graduate School of Business. She received her bachelor’s degree from Duke University and her Ph.D. from Stanford, and she holds an honorary doctorate from Duke University. She previously taught in the economics departments at MIT, Stanford, and Harvard. In 2007, Professor Athey received the John Bates Clark Medal, awarded by the American Economic Association to “that American economist under the age of forty who is adjudged to have made the most significant contribution to economic thought and knowledge.” She was elected to the National Academy of Sciences in 2012 and to the American Academy of Arts and Sciences in 2008. Professor Athey’s research focuses on the intersection of machine learning and econometrics, marketplace design, and the economics of digitization. She advises governments and businesses on marketplace design and platform economics, having served as consulting chief economist to Microsoft for a number of years, and serves on the boards of Expedia, Lending Club, Rover, Ripple, and Turo.

January 28th, 8:50 AM – 9:40 AM

Specifying AI Objectives as a Human-AI Collaboration Problem

Anca Dragan (UC Berkeley, EECS)
Chair: Vincent Conitzer
Abstract: Estimation, planning, control, and learning are giving us robots that can generate good behavior given a specified objective and set of constraints. What I care about is how humans enter this behavior-generation picture, and I study two complementary challenges: 1) how to optimize behavior when the robot is not acting in isolation, but needs to coordinate or collaborate with people; and 2) what to optimize in order to get the behavior we want. My work has traditionally focused on the former, but more recently I have been casting the latter as a human-robot collaboration problem as well (where the human is the end-user, or even the robotics engineer building the system). Treating it as such has enabled us to use robot actions to gain information; to account for human pedagogic behavior; and to exchange information between the human and the robot via a plethora of communication channels, from external forces that the person physically applies to the robot, to comparison queries, to defining a proxy objective function.

Anca Dragan is an Assistant Professor in EECS at UC Berkeley, where she runs the InterACT lab. Her goal is to enable robots to work with, around, and in support of people. Anca did her PhD in the Robotics Institute at Carnegie Mellon University on legible motion planning. At Berkeley, she helped found the Berkeley AI Research Lab, is a co-PI for the Center for Human-Compatible AI, and has been honored with a Sloan Fellowship, the NSF CAREER Award, the Okawa Award, MIT’s TR35, and an IJCAI Early Career Spotlight.

January 28th, 4:30 PM – 5:20 PM

The Value of Trustworthy AI

David Danks (Carnegie Mellon University, Dept. of Philosophy)
Chair: Shannon Vallor
Abstract: There is an increasing number of calls for “AI that we can trust,” but rarely with any clarity about what ‘trustworthy’ means or what kind of value it provides. At the same time, trust has become an increasingly important and visible topic of research in the AI, HCI, and HRI communities. In this talk, I will first unpack the notion of ‘trustworthy’, from both philosophical and psychological perspectives, as it might apply to an AI system. In particular, I will argue that there are different kinds of (relevant, appropriate) trustworthiness, depending on one’s goals and modes of interaction with the AI. There is not just one kind of trustworthy AI, even though trustworthiness (of the appropriate type) is arguably the primary feature that we should want in an AI system. Trustworthiness is both more complex and more important than standardly recognized in public calls to action (and this analysis connects and contrasts in interesting ways with others).

David Danks is the L.L. Thurstone Professor of Philosophy & Psychology, and Head of the Department of Philosophy, at Carnegie Mellon University. He is also an adjunct member of the Heinz College of Information Systems and Public Policy, and the Center for the Neural Basis of Cognition. His research interests are at the intersection of philosophy, cognitive science, and machine learning, using ideas, methods, and frameworks from each to advance our understanding of complex, interdisciplinary problems. In particular, Danks has examined the ethical, psychological, and policy issues around AI and robotics in transportation, healthcare, privacy, and security. He has received a McDonnell Foundation Scholar Award, an Andrew Carnegie Fellowship, and funding from multiple agencies.