Abstract There's a phrase among engineers: "You don't build a bridge to fall down." Codes of ethics have existed for decades to guide roboticists, programmers, and academics in their work and to help them prioritize safety concerns regarding what they build. But as with the introduction of any new technology, Autonomous and Intelligent Systems (A/IS) have introduced new societal issues that engineers must account for in the design and proliferation of their work. Specifically, A/IS deeply affect human emotion, agency, and identity (via the sharing of human data) in ways no technology has ever done before. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems was created to address key ethical issues in A/IS, such as accountability, transparency, and algorithmic bias, and to recommend ideas for potential Standards based on these technologies. The IEEE P7000™ series of standards projects under development represents a unique addition to the collection of over 1,300 global IEEE standards and projects. Whereas more traditional standards focus on technology interoperability, safety, and trade facilitation, the P7000 series addresses specific issues at the intersection of technological and ethical considerations. Like their technical counterparts, the P7000 standards empower innovation across borders and enable societal benefit. Standards provide a form of "soft governance" that can be utilized for policy as well as for technology design and manufacture. This means that where these (or similar) Standards are being launched by the engineering or technological community, it is imperative that thought leaders from the robotics and A/IS communities join the effort. Along with social scientists and philosophers, these Working Groups also include corporate and policy leaders, to best facilitate discussion of how to move forward on these issues with pragmatic, values-driven design Standards that can help set the modern definition of innovation in the Algorithmic Age.
Paul Bello is the director of the Interactive Systems Section at the U.S. Naval Research Laboratory, and the former director of the Cognitive Science and AI program at the Office of Naval Research. He received his Ph.D. in cognitive science in 2005 from Rensselaer Polytechnic Institute, where his thesis helped lay the groundwork for a now-blossoming logicist approach to machine ethics. At ONR, Bello spearheaded an effort to expand funding and visibility for issues pertaining to the development of artificial moral agents and the study of human moral cognition. In his current role at NRL, Bello co-directs the ARCADIA research program: an ambitious effort to explore the relationships between attention, (self-)consciousness, and agency by building an attention-centric architecture.
Abstract The ostensible target for much of the work in machine ethics is to develop action-selection routines for intelligent agents that flexibly incorporate norms as soft constraints so as to guide their behavior. For now, let us call this the "Context of Deliberation" (CD). CD can be contrasted with the "Context of Judgment" (CJ), where an agent is deciding whether and how blame should be apportioned in a situation S that elicits norms N_S, given interactions between agents A_1, …, A_n. Building a system capable of judgment in CJ is just as important as building a system that can flexibly decide and act in CD. More to the point, part of flexibly choosing how to act with respect to norms may involve anticipating how such actions will be evaluated by others in CJ. Because my lab is interested primarily in human-machine interaction, our efforts will consist of getting a system to reason about how a human observer might apportion blame in various scenarios. Such judgments can then be put to use in choosing if, when, and how to act. Such a seemingly simple thing that most of us do every day involves the coordination of a dizzying array of capacities, ranging from perception up through higher-order cognition. The critical conclusion to draw is that machine ethics is not just a matter of formalism, or even of normative ethics, but demands an approach grounded in cognitive architecture. In this talk, I present first steps toward building a cognitive architecture capable of simultaneously operating in CD and CJ, using judgments generated in the latter to inform action-selection in the former, all while engaging in ongoing moment-by-moment perception and action.
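As a minimal sketch of the core idea, assume norms enter CD as soft constraints and that the CJ side supplies a predicted-blame score for each candidate action; the agent then maximizes utility minus predicted blame. All names, norms, and weights below are hypothetical illustrations, not the ARCADIA architecture or any published system:

```python
# Hypothetical sketch: blame-sensitive action selection.
# The norms, utilities, and blame model are illustrative stand-ins only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Norm:
    name: str
    violated_by: Callable[[str], bool]  # does this action violate the norm?
    severity: float                      # penalty weight if violated

def predicted_blame(action: str, norms: List[Norm]) -> float:
    """CJ: estimate how much blame an observer would assign to `action`."""
    return sum(n.severity for n in norms if n.violated_by(action))

def choose_action(utilities: dict, norms: List[Norm], blame_weight: float = 1.0) -> str:
    """CD: pick the action maximizing utility minus predicted blame.

    Norms act as soft constraints: they penalize actions rather than forbid them."""
    return max(utilities,
               key=lambda a: utilities[a] - blame_weight * predicted_blame(a, norms))

norms = [Norm("no-deception", lambda a: a == "lie", severity=5.0)]
utilities = {"lie": 4.0, "tell_truth": 2.0}
print(choose_action(utilities, norms))  # -> "tell_truth": blame outweighs the utility gain
```

Because blame is a weighted penalty rather than a hard filter, a sufficiently high-utility action could still be chosen despite violating a norm, which is one way of capturing the "flexible" norm-following the abstract describes.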
Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. He is author of the forthcoming book, Army of None: Autonomous Weapons and the Future of War, to be published in April 2018.
From 2008 to 2013, Mr. Scharre worked in the Office of the Secretary of Defense (OSD), where he played a leading role in establishing policies on unmanned and autonomous systems and emerging weapons technologies. Mr. Scharre led the DoD working group that drafted DoD Directive 3000.09, establishing the Department's policies on autonomy in weapon systems. He also led DoD efforts to establish policies on intelligence, surveillance, and reconnaissance (ISR) programs and directed energy technologies, and was involved in the drafting of policy guidance in the 2012 Defense Strategic Guidance, the 2010 Quadrennial Defense Review, and Secretary-level planning guidance. His most recent position was Special Assistant to the Under Secretary of Defense for Policy.
Prior to joining OSD, Mr. Scharre served as a special operations reconnaissance team leader in the Army's 3rd Ranger Battalion and completed multiple tours in Iraq and Afghanistan. He is a graduate of the Army's Airborne, Ranger, and Sniper Schools and Honor Graduate of the 75th Ranger Regiment's Ranger Indoctrination Program.
Mr. Scharre has published articles in The New York Times, Foreign Policy, Politico, Proceedings, Armed Forces Journal, Joint Force Quarterly, Military Review, and in academic technical journals. He has presented at the United Nations, NATO Defence College, Chatham House, National Defense University, and numerous other defense-related conferences on robotics and autonomous systems, defense institution building, ISR, hybrid warfare, and the Iraq war. He has appeared as a commentator on CNN, MSNBC, NPR, the BBC, and Swiss and Canadian television. Mr. Scharre is a term member of the Council on Foreign Relations. He holds an M.A. in Political Economy and Public Policy and a B.S. in Physics, cum laude, both from Washington University in St. Louis.
Abstract Militaries around the world are racing to build robotic systems with increasing autonomy. What will happen when a Predator drone has as much autonomy as a Google car? Should machines be given the power to make life-and-death decisions in war? Paul Scharre, a former Army Ranger and Pentagon official, will speak about his forthcoming book, Army of None: Autonomous Weapons and the Future of War. Scharre will explore the technology behind autonomous weapons and the legal, moral, ethical, and strategic dimensions of this evolving technology.
Abstract Opinions are divided about robots. Some people would love to have robots at home and are pushing roboticists to work harder and faster. Others are much more reluctant and see robots as a threat to their jobs, indeed to their freedom and to humanity in general. While science fiction movies have fed this fear with famous examples, some scientists, and even some roboticists, have also contributed to creating this anxiety. Either because they overpromised about their work or because they were unclear in their communication, they have been sawing off the branch that we are all sitting on together. Of course, it is necessary to make people dream about our future robots, but we have to find a way to avoid giving false hope (because that is unethical) and creating unfounded fears (because that is suicidal).