(1) Governments worldwide should invest in developing — and keeping — home-grown talent and expertise in AI.
AI expertise must not reside in only a small number of countries — or solely within narrow segments of the population — as there is a danger that countries could become dependent on the expertise currently concentrated in the US and China.
(2) Corporations, foundations and governments should allocate funding to develop and deploy AI systems with humanitarian goals.
The humanitarian sector could benefit from such systems, which might, for example, improve response times in emergencies. Since such systems are unlikely to be immediately profitable for the private sector, a concerted effort is needed to develop them on a not-for-profit basis.
(3) It should not be left to technical experts to understand the benefits — and limitations — of AI.
Better education and training on what AI is — and what it is not — should be made as broadly available as possible. Those developing the technologies would benefit from a greater understanding of the underlying ethical goals.
(4) Developing strong working relationships between public and private AI developers, particularly in the defence sector, is critical.
Since much of the innovation is taking place in the commercial sector, ensuring that intelligent systems charged with critical tasks can carry them out safely — and ethically — will require openness between different types of institutions.
(5) Clear codes of practice are necessary to ensure that the benefits of AI can be shared widely while at the same time the risks are well-managed.
In developing these codes of practice, policymakers and technology experts should understand how regulating artificially intelligent systems may differ from regulating arms or trade flows, while also drawing relevant lessons from those models.
(6) Developers and regulators should pay particular attention to the question of human–machine interfaces.
Artificial and human intelligence are fundamentally different, and interfaces between the two must be designed carefully and reviewed constantly to avoid misunderstandings that, in many applications, could have serious consequences.
The findings of this article are based on a Chatham House report ‘Artificial Intelligence and International Affairs: Disruption Anticipated’ by M. L. Cummings, Heather M. Roff, Kenneth Cukier, Jacob Parakilas and Hannah Bryce.