The European Commission has set out its key requirements for trustworthy Artificial Intelligence, announcing a large-scale pilot phase for feedback from stakeholders and building an international consensus for human-centric AI.

Building on the work of the group of independent experts appointed in June 2018, the Commission has launched a pilot phase to ensure that the ethical guidelines for Artificial Intelligence (AI) development and use can be implemented in practice. The AI strategy aims to increase public and private investments to at least €20bn annually over the next decade, making more data available, fostering talent and ensuring trust.

The EC said that Artificial Intelligence can benefit a wide range of sectors, such as healthcare, energy consumption, car safety, farming, climate change and financial risk management. It can also help to detect fraud and cybersecurity threats, and enables law enforcement authorities to fight crime more efficiently. However, AI also brings new challenges for the future of work, and raises legal and ethical questions.

The Commission is taking a three-step approach: setting out the key requirements for trustworthy Artificial Intelligence, launching a large-scale pilot phase for feedback from stakeholders, and working on international consensus building for human-centric Artificial Intelligence.

  1. Seven essentials for achieving trustworthy Artificial Intelligence

Trustworthy Artificial Intelligence should respect all applicable laws and regulations, as well as a series of requirements; specific assessment lists aim to help verify the application of each of the key requirements:

Human agency and oversight: Artificial Intelligence systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.

Robustness and safety: Trustworthy Artificial Intelligence requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of Artificial Intelligence systems.

Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.

Transparency: The traceability of Artificial Intelligence systems should be ensured.

Diversity, non-discrimination and fairness: Artificial Intelligence systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.

Societal and environmental well-being: Artificial Intelligence systems should be used to enhance positive social change and to foster sustainability and ecological responsibility.

Accountability: Mechanisms should be put in place to ensure responsibility and accountability for Artificial Intelligence systems and their outcomes.

  2. Large-scale pilot with partners

In summer 2019, the Commission will launch a pilot phase involving a wide range of stakeholders. Companies, public administrations and organisations can sign up to the European AI Alliance and receive notification when the pilot starts. In addition, members of the AI high-level expert group will help present and explain the guidelines to relevant stakeholders in Member States.

  3. Building international consensus for human-centric Artificial Intelligence

The Commission wants to bring this approach to AI ethics to the global stage because technologies, data and algorithms know no borders. To this end, the Commission said it will strengthen cooperation with like-minded partners such as Japan, Canada or Singapore and continue to play an active role in international discussions and initiatives including the G7 and G20. The pilot phase will also involve companies from other countries and international organisations.

Matt Walmsley, EMEA director at Vectra, said: “It’s pleasing to see the EU commission moving from planning and debate towards pilot activity around securing trustworthy AI for us all. This goes beyond just technical dimensions as society’s design and use of AI, along with its data selection, for example, have significant influence on AI’s decisions and outcomes.

“The timing is good, we’re ahead of the curve as the majority of AI in operational use today is applied to specific focused tasks, rather than “general” AI which we have a tendency to anthropomorphise a la science fiction. In this context, applied AI, whilst opaque in its specific processing and “decision” making, is primarily providing decision support and automating repetitive tasks rather than the wholesale replacement of humans, or finalising decisions imposed upon us.

“AI is here to stay and it’s right and prudent that we’re thinking about its use and impact on us all and starting to develop frameworks and controls to help ensure AI is a positive influence and creator of value.”