The rapidly increasing use of Artificial Intelligence (AI) is accompanied by a number of potential risks related to human rights. Focusing on relevant areas such as the right to privacy, non-discrimination, fair trials and freedom of expression, the OECD’s Rashad Abelson looks at how OECD guidance can be applied to AI systems throughout their lifecycle and provide a framework for considering these issues holistically.
With legislation on the horizon that will require companies in all sectors to implement human rights due diligence using OECD standards, companies in the digital sector and users of their products will have to grapple with how these rules may apply to them.
Voluntary standards for human rights due diligence have been in place for over a decade, since governments adopted expectations for all companies to identify and address human rights impacts through the OECD Guidelines for Multinational Enterprises and the UN Guiding Principles on Business and Human Rights. The OECD also developed the Due Diligence Guidance for Responsible Business Conduct as a practical tool to help companies implement due diligence and meet expectations under the Guidelines.
Governments are now drafting mandatory due diligence laws, including in the European Union, Norway and the Netherlands, after independent studies found that voluntary standards fell short in promoting widespread uptake of due diligence. EU Commissioner Reynders announced in 2020 that the EU would develop a directive on mandatory cross-sectoral due diligence based on OECD standards.
Recent high-profile legal cases, civil society reports, and allegations brought to National Contact Points for Responsible Business Conduct (NCPs), a government-based non-judicial grievance mechanism that addresses potential violations of the Guidelines, suggest that a wide range of companies may be exposed under the legislation, spanning software designers, data collectors, telecommunications providers, cloud services, investors and vendors. Examples include: French executives of technology firms charged with “complicity in acts of torture” for selling surveillance equipment used to spy on political dissidents; a Swedish firm allegedly selling phone hacking equipment to the Myanmar military; and an NCP case in Switzerland involving a financial institution’s relationship with a Chinese firm developing AI-based surveillance technology used to track Uyghurs in China.
Preparing for new legislation
So how can companies in the technology sector get ready for this legislation? While no due diligence guidance exists specifically for technology companies, the OECD Due Diligence Guidance for Responsible Business Conduct provides a solid foundation for implementing due diligence and should be companies’ first reference point. Sector-specific resources on responsibly developing and selling technology can complement these due diligence efforts. Examples include the OECD Principles on Artificial Intelligence, the Foundational Papers from the OHCHR B-Tech Project, and the United States Department of State Guidance on transactions involving surveillance equipment.
Technology companies can deploy these resources in combination to roll out the six steps of the OECD Due Diligence Guidance, though the sector presents some unique challenges that technology companies will need to grapple with in order to implement due diligence meaningfully.
Identifying and assessing impacts (Step 2) would include mapping and risk-assessing business relationships, but also assessing the product itself to determine its potential for misuse or negative side effects. AI illustrates the challenge: the range of companies involved in an AI system’s lifecycle can be more complex than in a physical supply chain.
Technology companies should orient due diligence around their position in the product lifecycle, their business relationships, and the nature of the product or service (Steps 2-3). Defining the exact division of due diligence responsibilities in this context merits further discussion among companies, civil society organisations, and policy makers. The OECD organised similar discussions for the minerals, garment, and agriculture supply chains, leading to detailed recommendations on the division of due diligence responsibilities across the supply chain. For example:
Companies at early stages in the lifecycle (design, data, and models) could ask:
- Where does the data originate? Are there any biases in the data? (A simple illustrative check is sketched after these lists.)
- Who will likely use the product and for what purpose?
- Are there safeguards to uphold relevant standards and prevent misuse?
Companies selling AI products could ask:
- Was the product designed and assembled according to RBC standards?
- Is the product being sold directly to the end-user or to another distributor?
- Are there laws in the recipient country allowing for (or preventing) abuse of the technology (e.g. counter terrorism laws that unduly restrict freedom of expression or allow for arbitrary surveillance)?
- Can licenses be revoked or the product be disabled if misuse occurs?
End-user due diligence could include asking:
- Does the product have a dual use that is harmful?
- Has the product been altered in any way that may increase its potential RBC risks through resale channels?
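To make the data-bias question above concrete, here is a minimal sketch of one check a design- or data-stage company might run on a tabular training set. The file name, the protected-attribute column ("gender") and the outcome column ("approved") are all hypothetical, and a single ratio is no substitute for a full human rights impact assessment.

```python
# Illustrative sketch of a basic training-data bias check (Step 2).
# Assumes a tabular dataset with a protected-attribute column and a
# binary favourable-outcome column; both names are hypothetical.
import pandas as pd

def outcome_rates_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Compare group representation and favourable-outcome rates."""
    summary = df.groupby(group_col).agg(
        share_of_data=(outcome_col, lambda s: len(s) / len(df)),
        favourable_rate=(outcome_col, "mean"),
    )
    # Each group's favourable-outcome rate relative to the most favoured
    # group. Values well below 1.0 flag the dataset for closer human review
    # (the US "four-fifths rule" uses 0.8 as a rough threshold).
    summary["disparate_impact"] = (
        summary["favourable_rate"] / summary["favourable_rate"].max()
    )
    return summary

df = pd.read_csv("training_data.csv")  # hypothetical file
print(outcome_rates_by_group(df, group_col="gender", outcome_col="approved"))
```

A low ratio does not prove discrimination, but it is the kind of signal that should trigger closer scrutiny before a model is trained on the data.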
Companies should track due diligence efforts (Step 4) through internal or third-party reviews. Many legislative proposals contain accountability requirements to ensure companies comply with their privacy programmes, and some include self-audits or third-party audits. Auditing machine-learning systems remains a difficult and still relatively nascent process.
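As one illustration of what such an audit can involve, the sketch below compares a model’s true-positive rates across demographic groups, sometimes called an “equal opportunity” check. The evaluation data is a placeholder; a real audit would combine several quantitative metrics with qualitative review.

```python
# Illustrative fragment of a machine-learning audit (Step 4): comparing
# true-positive rates (TPR) across demographic groups. All data below is
# hypothetical.
import numpy as np

def true_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Share of actual positives that the model correctly predicts."""
    positives = y_true == 1
    return float(y_pred[positives].mean()) if positives.any() else float("nan")

def tpr_by_group(y_true, y_pred, groups):
    """Per-group TPR plus the largest gap between any two groups."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {g: true_positive_rate(y_true[groups == g], y_pred[groups == g])
             for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

# Placeholder evaluation set: true labels, model predictions, group labels.
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1])
groups = np.array(["group_a", "group_a", "group_b",
                   "group_b", "group_a", "group_b"])

rates, gap = tpr_by_group(y_true, y_pred, groups)
print(rates, gap)  # a large gap is a signal to investigate, not a verdict
```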
Transparency and public reporting on due diligence efforts (Step 5) are also key to their effectiveness and to earning public trust, particularly for controversial products (e.g. surveillance technology) and when the buyer has a track record of violating people’s digital rights.
The business case for human rights due diligence
Technology companies are expected to address actual or potential negative impacts regardless of the approach policy makers take to promote implementation of international standards. Beyond simply being the right thing to do, implementing these due diligence steps can help companies comply with existing legislation on related topics (e.g. labour, environment, data protection, dual-use exports, taxation and anti-corruption). The expectations set out in the OECD Principles on AI also closely track each step of the OECD Due Diligence Guidance.
In a recent analysis of the benefits of RBC and supply chain due diligence, companies reported improved public perception, greater ability to retain and attract talent, increased productivity, higher shareholder returns, reduced stock price volatility, and improved investor satisfaction. Many companies also reported increased revenues through access to markets, higher sales volumes and price premiums driven by consumer awareness of social responsibility. These benefits can extend to technology companies rolling out AI innovations in fields such as surveillance, security, law enforcement, public justice, social networks and advertising.

The use of AI raises a number of human rights issues across a broad range of applications. A chapter in the 2021 edition of the OECD Business and Finance Outlook examines potential human rights impacts in the areas of the right to privacy, non-discrimination, fair trials and freedom of expression. It then provides practical guidance to help mitigate these risks by reviewing national, international, business-led and multi-stakeholder initiatives tackling some of these issues. In particular, the chapter illustrates how the OECD Due Diligence Guidance for Responsible Business Conduct can be applied to AI systems throughout their lifecycle and provide a framework for considering these issues holistically.