Highlights from the debate Ethical AI: Actioning a Nordic Stronghold Together, arranged by the Nordic Council of Ministers (NCM) and the Association of Nordic Engineers (ANE).
Mikkel Flyverbom, Professor and Director of Digital Transformations and Management at Copenhagen Business School, says the starting point for identifying a Nordic approach to artificial intelligence (AI) should not be what is technically possible, but rather the kind of world we would like to live in.
“Today, we see clear differences in the use of artificial intelligence across the world. In China and the USA, the use of AI is seen as problematic in terms of ethics, accountability and justice. The money and ways of thinking in Silicon Valley differ from what we in the Nordics and Europe would like to pursue. Going after a Silicon Valley approach is really flipping things around,” said Mikkel.
“Instead, we in the Nordics should ask which problems need solving and what our ambitions are for justice and fairness towards humankind, an open society and a transparent public sector,” Mikkel said at Techfestival 2019 in Copenhagen, in a debate titled Ethical AI: Actioning a Nordic Stronghold Together, arranged by NCM and ANE.
Based on Nordic and EU core values, we may have better resources for ensuring the ethical use of AI than regions where individual rights are weaker. If so, that could be a competitive advantage.
A remark from the audience
However, Mikael Anneroth, an expert on the human and societal perspectives of information and communications technologies at Ericsson Research, said it would be dangerous to think that we in the Nordics are the ones approaching ethics in the “right” way.
“MIT surveyed 2.3 million people about how a self-driving car should decide between two bad outcomes, revealing three distinct value bases. So, we need a global discussion, bringing in the USA and Asia, to find the common denominators for what is ethical,” said Mikael.
He reminded the audience that while it is not possible to make AI systems that are ethical in themselves, we can develop algorithms that take ethical considerations into account to minimise or prevent unintended harm.
AI can be understood as the intelligence attributed to computing systems that display intelligent behaviour by perceiving, learning and drawing conclusions.
“Today’s AI systems are very narrow, doing specific tasks and not always that well. A bottleneck for developing ethical AI is a lack of awareness among engineers about the future impact on societies, the workforce and new business models. There is a felt need in the AI industry to maintain trustworthiness, but there is still a long way to go before the mitigation of possible negative side effects becomes part of the DNA of engineering culture,” said Mikael.
“We lack a toolkit for assessing future AI solutions ethically, so it is not enough to just issue public and corporate guidelines.”