Ethics against the emergence of killer robots - Interview with Kristinn R. Thórisson

With the rapid development of AI technology, ethical issues about its use have become more prominent. One of the most emphasized is the use of AI in military operations. Military institutions have funded a great number of AI-technology projects, ranging from human-supervised killing machines such as the notorious drones to fully autonomous sentry guns.

The Icelandic Institute for Intelligent Machines (IIIM) has put its foot down against the militarization of AI technology by introducing an ethics policy stating that the institute will not accept military funding or develop any kind of harmful technology.


Dr. Kristinn R. Thórisson is the founder and Managing Director of IIIM and co-founder of Reykjavik University’s artificial intelligence (AI) laboratory. Kristinn has been researching AI for 25 years, in academia and industry. He has founded and led several startups and consulted on technology and business for NASA and British Telecom. Recently his €2M EU-funded HUMANOBS project resulted in a new kind of AI that can learn complex tasks by observation through self-programming. Kristinn has authored numerous scientific papers and sits on the editorial board of the Journal of Artificial General Intelligence and the LNCS Transactions on Computational Collective Intelligence. Kristinn holds a Ph.D. from the MIT Media Lab. He is a two-time recipient of the Kurzweil Award for AI research.


Jón Bragi Pálsson interviewed Dr. Kristinn R. Thórisson


Why did IIIM decide to create its own ethics policy, and why do you think ethics policies are important for science and technology institutions?

If one of an institute’s key aims is an improved society, and the institute is a public non-profit one, its existence is in large part justified by its ability to achieve that aim. It would be in such an institute’s own interest, and the public’s, to be crystal clear about how it intends to achieve its aims and goals. IIIM is a non-profit with precisely such an aim, and our Ethics Policy for Peaceful Research and Development is part of our effort to communicate this intent to our collaborators, and to the world at large, in a prominent and publicly accessible manner.

The policy, which we published on our website this past August, states our top-level goal explicitly, in very clear terms, and outlines our ethical compass by stating what some might call the obvious – that we will not participate in any project that breaks, or intends to break, the law, or that infringes on human rights in any way.

Then we take it one step further. Historically, going back to the first half of the 1900s, research in artificial intelligence has to a considerable extent been funded through military grants; this is certainly the case in the United States, arguably the strongest developer of AI technology to date. A significant part of IIIM's development and research involves advanced control technologies and high-tech information systems, including artificial intelligence. In our opinion, funding for AI research and software applications directed at war-related goals is already over-abundant. With clear evidence of abuse of power by military forces around the world, past and present, coupled with significant recent advances towards what have been called “killer robots” – autonomous weapons of various kinds that can algorithmically decide who should live and who should die – it is easy to foresee a very bleak future, one that is clearly at odds with the mission and goals of IIIM. With a large percentage of AI research being funded directly or indirectly by military money, we felt it important to take a counter-measure by fully renouncing funding from military sources.

One purpose of an ethics policy is to help us “do the right thing”, and be clear to others that we are doing the right thing – and what we mean by “the right thing”. I am of the opinion – and I am not alone – that researchers can and should take responsibility for the new knowledge they produce, and actively oppose and counteract potential abuse. This can of course be done in many ways, e.g. through active information disclosure to the government, or simply on the World Wide Web for everyone to see. Our way has been to develop the Ethics Policy for Peaceful R&D. It tells the world what we consider to be the “right thing” at this point in time, given our work, aims and goals.


What are the core elements of IIIM’s ethics policy?

The policy has three parts. The first is a codification of the goals and aims of the institute, outlining in broad strokes the rules that we believe will help achieve them. The second states explicitly the place the institute holds in society – its social context, especially with respect to R&D that has a history of creating unrest and destabilization. Since AI is increasingly being mentioned in such contexts, this is a timely action in our opinion. The third operationalizes the rules in a way that allows adherence to them to be measured. We do this with the 5/15 rule: if a potential project involves a collaborator that has received military funding for more than 15% of its operations over the past 5 years, we will not consider the project.
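
Concretely, the 5/15 rule lends itself to a simple mechanical check. The following is a minimal Python sketch of such a screening check, included purely for illustration; the class, function, and field names are hypothetical assumptions, not part of IIIM's actual vetting process.

    # Hypothetical illustration of the 5/15 rule: a collaborator is screened
    # out if military funding exceeded 15% of its operations over the past
    # 5 years. All names and structures here are assumptions.
    from dataclasses import dataclass

    MAX_MILITARY_SHARE = 0.15  # the "15" in the 5/15 rule
    WINDOW_YEARS = 5           # the "5" in the 5/15 rule

    @dataclass
    class FundingRecord:
        year: int
        total_funding: float     # collaborator's total funding that year
        military_funding: float  # portion received from military sources

    def passes_5_15_rule(records: list[FundingRecord], current_year: int) -> bool:
        """True if military funding is at most 15% of the collaborator's
        operations over the past WINDOW_YEARS."""
        window = [r for r in records if current_year - r.year < WINDOW_YEARS]
        total = sum(r.total_funding for r in window)
        military = sum(r.military_funding for r in window)
        if total == 0:
            return True  # no funding in the window; nothing to screen
        return military / total <= MAX_MILITARY_SHARE

    # Example: 600,000 of 5,000,000 (12%) over five years passes the rule,
    # even though one individual year was at 20%.
    records = [FundingRecord(2010 + i, 1_000_000, 100_000) for i in range(5)]
    records[0].military_funding = 200_000
    print(passes_5_15_rule(records, 2014))  # True

Whether a collaborator's funding mix can be measured this cleanly in practice is a separate question; the point is that the rule is stated precisely enough to be checked.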


Your ethics policy is licensed under Creative Commons Attribution, which allows other institutions to adopt it. Has any other AI or technology institution adopted your ethics policy, and do you think the policy will influence other research institutions to create their own?

It is a bit of a surprise to see how few research labs have a publicly stated ethics policy. One of our hopes is that others will adopt policies along similar lines, and we have released ours under a share-alike Creative Commons license to make it easy for others to adopt ours and adapt it to their needs.

So far we are not aware of any institution, group or company that has adopted it, but whether others use ours or create their own from scratch, we would be happy either way. Maybe ours will be an inspiration. I truly hope that others will follow suit – even if it takes time. The influx of AI technology is just beginning, so it’s not too late. Among those showing great interest in our policy is the group in the UK working towards a global agreement on banning killer robots – the International Committee for Robot Arms Control (ICRAC.net). Another is the Future of Life Institute in the US. If IIIM can help such efforts reduce the potential risks stemming from advanced technologies like AI and lower the risk of an AI arms race – or even just make people think twice about these matters – then we would be very happy.


Do you think there is an increasing risk of AI institutions getting involved in ethically controversial projects, such as the making of autonomous weapons, due to the financial gains of military funding?

Yes. As already mentioned, the involvement of the military-industrial complex in basic and applied AI research, in both industry and academia, is the norm rather than the exception. This trend will probably not slow down unless we actively oppose it: there is now a clearer-than-ever benefit to introducing autonomy into every nook and cranny of the battlefield, so why should it slow down now? One mantra that researchers use to appease themselves when working on weapons and deadly technologies is: “I just make this stuff – I’m not the one pulling the trigger.” But if you want to take an active role in protecting the world from the various dangers and perils that humanity is likely to face in the near future, such a laissez-faire attitude will not go very far.

Every researcher – and in fact every citizen – has a responsibility to actively steer the use of the knowledge they create towards the betterment of this world, for the benefit of humanity. Reducing the risk of conflict is one of those things. If all our policy does is make some people think twice about undertaking a project in the gray zone, then that’s a start. If it leads to a widespread discussion about the use and abuse of scientific knowledge, I will be even happier. Rather than war and unrest, I think the majority of the Earth’s population ultimately wants peace, and if that majority voices its opinion to influence those in power, instead of taking a laissez-faire approach, there is great potential to reduce tension and conflict in the world, making it a better place for all.