Introducing Nokia's 6 Pillars of Responsible AI
By William Kennedy, Lab Leader, AI Research Lab
In 2014, Amazon sought to streamline its job recruiting through artificial intelligence (AI). A team of machine-learning specialists developed algorithms that could automatically identify the most qualified candidates among a vast pool of resumes, allowing Amazon to efficiently target the best recruits for any given role. There was only one problem: The program discriminated against women. Amazon’s computer models examined historical patterns in submitted resumes – which were dominated by male applicants – and ultimately concluded that men would make better recruits than women. In short, the algorithms taught themselves the very bias they were designed to eliminate.
Amazon shut down the program after it discovered its flaws, but this is certainly not the only instance in which a well-intentioned application of AI has produced an unintended and unethical result. AI is radically disrupting the way new value is created. Companies rely on AI to turn massive troves of data into scalable solutions. But if they do so blindly or shortsightedly, they expose themselves to reputational and even legal risk.
Furthermore, failing to consider AI ethics will increasingly invite regulatory action. With its AI Act, the EU is laying down harmonized rules on AI. As a result, some uses of AI, such as those for social scoring and biometric identification in public spaces, will be deemed to have unacceptable risks and will be prohibited. Other uses, such as critical-infrastructure management, recruitment tools and medical devices, will be subject to strict controls and monitoring.
It’s becoming clear that we as an industry need to adopt well-defined processes and practices that ensure AI systems comply with strict responsible AI principles.
The responsible six
Nokia has defined six principles that should guide all AI research and development in the future. We believe these principles should be applied the moment any new AI solution is conceived and then enforced throughout its development, implementation and operation stages. These principles not only reflect the future of AI standards but also comprehensively account for our industry’s renewed focus on environmental sustainability, social responsibility and good governance.
We call these principles the 6 Pillars of Responsible AI. They are:
- Fairness: AI systems must be designed in ways that maximize fairness, non-discrimination and accessibility. All AI designs should promote inclusivity by correcting both unwanted data biases and unwanted algorithmic biases; a minimal fairness-metric sketch follows this list.
- Safety, Reliability and Security: AI systems should cause no direct harm and always aim to minimize indirect harmful behavior. They must be reliable in that they always perform as intended, and they must be secure against misuse and tampering by unauthorized parties.
- Privacy: By design, AI systems must respect privacy by providing individuals with agency over their data and the decisions made with it. AI systems must also respect the integrity of the data they use.
- Transparency: AI systems must be explainable and understandable. They should allow for human agency and oversight by producing outputs that are comprehensible to the average stakeholder. These outputs should be auditable and traceable (see the audit-logging sketch after this list) to create trust, which will ultimately drive acceptance and wider use of AI systems.
- Sustainability: AI systems should attempt to be both societally sustainable, by empowering society and democracy, and environmentally sustainable, by reducing the energy required to train and run these systems; a back-of-envelope footprint estimate follows this list. We should ideally gravitate toward an industry-defined and recognized measure for environmental impact.
- Accountability: AI systems should be developed and deployed through consultation and collaboration with all stakeholders such that true accountability becomes possible. We must ensure that the long-term effects of any AI application are understandable by all stakeholders, and that those stakeholders are empowered to act if any proposed change would adversely affect the application. If an AI system deviates from its intended results, then we need to have policies in place to ensure those deviations are detected, reported and remedied.
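To make the fairness pillar concrete, here is a minimal sketch of one common check from the fairness literature: the demographic parity difference, the gap in positive-outcome rates between groups. Everything in it, data and group labels included, is synthetic and illustrative; it is not a Nokia-prescribed metric.

```python
# Minimal sketch of a demographic parity check on model outputs.
# All data is synthetic; the metric is illustrative, not prescriptive.

def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rates across groups (0 means parity)."""
    rates = {}
    for g in set(groups):
        group_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(group_preds) / len(group_preds)
    return max(rates.values()) - min(rates.values())

# Synthetic screening outcomes in which the model favors group "A".
predictions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_difference(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.80 vs. 0.20 -> 0.60
```

A gap near zero does not prove a system is fair, but a large one is a useful early warning that the training data or the model has absorbed an unwanted bias.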
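Auditability and traceability under the transparency pillar can start with something as simple as recording every decision with enough context to reconstruct it later. The sketch below assumes a hypothetical append-only JSON-lines log and illustrative field names; Nokia has not defined a schema.

```python
# Sketch of per-decision audit logging; field names are illustrative.

import json
import time
import uuid

def log_decision(model_version, features, prediction, explanation,
                 path="audit_log.jsonl"):
    """Append one traceable record for a single model decision."""
    record = {
        "decision_id": str(uuid.uuid4()),  # unique handle for later review
        "timestamp": time.time(),
        "model_version": model_version,    # ties the output to one model build
        "features": features,              # inputs the decision was based on
        "prediction": prediction,
        "explanation": explanation,        # human-readable rationale
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Illustrative usage, with a hard-coded decision standing in for a real model:
log_decision(
    model_version="screening-model-1.4.2",  # hypothetical version string
    features={"experience_years": 6, "certifications": 2},
    prediction="shortlist",
    explanation="experience and certifications above configured thresholds",
)
```

Because each record carries its own identifier and model version, a questionable output can be traced back to the exact inputs and model build that produced it.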
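For the sustainability pillar, a first step toward a recognized measure of environmental impact is a back-of-envelope estimate of training energy and emissions. The figures below are placeholder assumptions, not measurements of any Nokia system, and real grid carbon intensity varies widely.

```python
# Back-of-envelope training footprint estimate; all inputs are placeholders.

def training_footprint(gpu_count, avg_power_watts, hours, kg_co2_per_kwh):
    """Estimate energy (kWh) and emissions (kg CO2) for one training run."""
    energy_kwh = gpu_count * avg_power_watts * hours / 1000.0
    emissions_kg = energy_kwh * kg_co2_per_kwh
    return energy_kwh, emissions_kg

# Example: 8 GPUs averaging 300 W over a 72-hour run on a ~0.4 kg/kWh grid.
energy, co2 = training_footprint(gpu_count=8, avg_power_watts=300,
                                 hours=72, kg_co2_per_kwh=0.4)
print(f"~{energy:.0f} kWh, ~{co2:.0f} kg CO2")  # ~173 kWh, ~69 kg CO2
```

Even a rough estimate like this makes training runs comparable, which is a prerequisite for the industry-wide measure the pillar calls for.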
Building trust through responsibility
Defining the 6 Pillars is only the first step toward addressing the challenges faced in creating new AI value. In the coming months, Nokia Bell Labs researchers will explore concrete examples related to the 6 Pillars in more detail in a series of blog posts. We will describe the proactive steps we are taking in each to build responsible technology that will enable the world to act together.
As companies across our industry are now learning, the issue of AI ethics is no mere academic curiosity. It has become a strategic business necessity. If companies don’t develop a clear plan for achieving responsible AI, they risk alienating the public and reinforcing notions of “malevolent AI” popularized in the media. A failure to internalize these principles could prompt governments, legislators and regulators to act, possibly in a prohibitive manner, despite the clear potential of AI systems to benefit business, society and humanity.
Responsible AI, however, is more than a business imperative. Rather than being perceived as a set of rules and guidelines that limit innovation, these 6 Pillars should be seen as an opportunity for Nokia to become a leader in ethical business practices in our industry. By embracing responsible AI, we are starting down a path toward building AI systems that offer a true competitive advantage and helping shape the AI conversation for decades to come.