Created in partnership with the Helpdesk on Business & Human Rights
Digital and Emerging Technologies and Human Rights
Overview
What are Digital and Emerging Technologies?
As business and society undergo digital transformation, both established and emerging digital technologies carry their own human rights risks and implications for businesses’ human rights approaches.
- Digital technologies include widely adopted tools underpinning business operations and critical infrastructure. These include:
- Cloud computing, digital telecommunications, search engines, big data analytics, Internet of Things (IoT), cybersecurity systems, basic drones and industrial robots for commercial uses, blockchain for traceability and mobile technology.
- Pervasive Artificial Intelligence (AI) systems such as predictive analytics, Natural Language Processing systems (e.g. customer service chatbots) and machine learning.
- Emerging technologies are characterized by rapid development and uncertainty in trajectory and impact. These include:
- Generative AI models, autonomous decision-making and agentic AI systems such as advanced robotics and autonomous vehicles, emotion-recognition software, algorithms used in policing or credit risk assessments, cryptocurrencies, synthetic data generation and digital-twin modelling (i.e. creating data to train other AI systems or to model supply chains).
- The capacity of these technologies to take independent decisions introduces safety, liability and oversight challenges. Many are still untested at scale, with reliability, bias and security risks not yet fully understood. Given the rapid pace of adoption, for the purposes of human rights due diligence (HRDD) a technology should be considered emerging where the regulatory environment is still nascent and its human rights impacts are uncertain.
What is AI?
Artificial Intelligence (AI) is a broad term encompassing technologies like machine learning, deep learning, natural language processing and generative AI. The EU, the Council of Europe, the US, the UN and other jurisdictions use the OECD’s definition (2024) of an AI system.
AI’s applications are varied and include the following:
- Searching for and understanding information (e.g. Gemini, ChatGPT, Copilot)
- Smart devices and voice assistants
- Digital companions used in gaming, dating and elderly care
- The content social media users see on their platforms
- Government systems including welfare, policing, criminal justice, healthcare and banking
- Natural language processing to automate tasks, enable conversations with chatbots and accelerate decision making.
What is Generative AI (GenAI)?
According to the OECD, GenAI “is a category of AI that can create new content such as text, images, videos, and music. It gained global attention in 2022 with text-to-image generators and Large Language Models (LLMs). While GenAI has the potential to revolutionize entire industries and society, it also poses critical challenges and considerations that policymakers must confront.”
GenAI not only creates significant opportunities for business but also introduces legal risks, including Intellectual Property (IP) and data ownership issues. These have been extensively researched by the European Union Intellectual Property Office (EUIPO) and the World Intellectual Property Organization (WIPO).
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
OECD (2024)
What is the Internet of Things (IoT)?
The Internet of Things (IoT) refers to a network of physical devices and objects that can be monitored, controlled, or interacted with via the Internet, either automatically or with human involvement (OECD, 2023). These devices, often called “endpoints,” are uniquely identifiable and can exchange data with each other in real time. IoT application domains span all major economic sectors, notably including: health, education, agriculture, transportation, manufacturing and electric grids (OECD, 2016).
The IoT includes overlapping technologies such as AI, which is increasingly integrated into connected devices.
Examples of IoT devices include:
- Wearable fitness trackers that monitor heart rate, steps and sleep patterns.
- Connected vehicles that share data for navigation, maintenance and safety.
- Smart home assistants like voice-controlled speakers that manage lighting, appliances and schedules.
The key value of the IoT is its ability to gather, store, and share data about the environments and assets it tracks. This data can make processes more measurable and manageable, leading to increased efficiency. However, the IoT is also exposed to security and privacy risks, notably including: device vulnerabilities, surveillance and data misuse.
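To make the data-exchange model concrete, the sketch below shows how a single IoT endpoint (here, a hypothetical wearable fitness tracker) might report a reading to a remote service as a small JSON telemetry message. This is a minimal illustration only: the device identifier, ingest URL and field names are assumptions for the example, not part of any specific product or standard.

```python
import json
import urllib.request
from datetime import datetime, timezone

# Illustrative only: a uniquely identifiable endpoint sending one telemetry
# reading to a hypothetical collection service over HTTP.
DEVICE_ID = "wearable-0001"                      # hypothetical unique endpoint ID
INGEST_URL = "https://telemetry.example.com/v1"  # hypothetical ingest endpoint

def send_reading(heart_rate_bpm: int, steps: int) -> int:
    """Package one sensor reading as JSON and POST it to the service."""
    payload = {
        "device_id": DEVICE_ID,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "heart_rate_bpm": heart_rate_bpm,
        "steps": steps,
    }
    request = urllib.request.Request(
        INGEST_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:  # fails without a real server
        return response.status

if __name__ == "__main__":
    print(send_reading(heart_rate_bpm=72, steps=8400))
```

Even this trivial example shows why the privacy and security risks noted above arise: the device identifier, timestamp and health readings together constitute personal data, which should only be collected and shared with appropriate consent and safeguards.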
OECD AI Principles
The OECD AI principles (adopted 2019, updated 2024) provide the first intergovernmental standard on AI. The principles emphasize innovation, trustworthiness and respect for human rights and democratic values:
- Inclusive growth, sustainable development and well-being “This Principle highlights the potential for trustworthy AI to contribute to overall growth and prosperity for all – individuals, society, and planet – and advance global development objectives.”
- Respect for the rule of law, human rights and democratic values, including fairness and privacy “AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.”
- Transparency and explainability “This principle is about transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.”
- Robustness, security and safety “AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.”
- Accountability “Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the OECD’s values-based principles for AI.”
Section 2 of the AI Principles complements the five principles with policy recommendations on research and development, AI ecosystems, governance, workforce and international cooperation.
Connection Between Technology and Human Rights
- Given technology’s integration across business operations, understanding its human rights impacts is essential. The United Nations Guiding Principles on Business and Human Rights (UNGPs) highlight businesses’ responsibility to respect human rights throughout their activities, operations and relationships, as emphasized by Principles 13 and 17. For example, a 2025 study from the University of Melbourne revealed that AI hiring systems, which are increasingly used by employers across industries, create serious risks of algorithmic discrimination against marginalized groups.
- Digital technologies can both enhance and undermine human rights. Technological progress will fundamentally transform societies and bring enormous benefits including better protection of individual rights. In recent decades, mobile and cloud technologies have improved access to information, healthcare and education. Now, emerging technologies – particularly AI – are reshaping how businesses operate, manage workers and oversee global supply chains.
- As technology increasingly influences how people work, access services and engage in society, it presents opportunities to advance the UN Sustainable Development Goals (SDGs). AI and digital tools provide pathways to improve accessibility, enhance transparency and enable civic mobilization aligned with the SDGs.
- However, these same systems can reinforce bias and compromise privacy and labour rights, worsening existing harms and introducing complex risks to fundamental rights. The pervasive nature of the digital transformation means that virtually every business will be connected in some way to technology-related human rights impacts.
- Every business therefore has a responsibility to understand how technology affects the rights of all individuals connected to its operations and relationships. This responsibility goes beyond employees and customers; it includes supply chain workers and communities impacted by business decisions. It spans the entire technology lifecycle, from development and deployment to ongoing use and eventual disposal.
What is the Dilemma?
Businesses face a double-edged challenge in the age of digital and emerging technologies: the imperative to adopt AI and advanced systems to maintain competitive advantage, while ensuring these technologies respect human rights as they are embedded within business operations.
Technology companies may focus human rights efforts primarily on development and deployment, overlooking their downstream responsibilities. Yet under the UN Guiding Principles on Business and Human Rights (UNGPs), businesses are expected to respect human rights throughout their operations and business relationships. This includes understanding and addressing how technologies affect not only workers and suppliers, but also end users and communities.
To meet their responsibilities under UNGPs, businesses must go beyond compliance, particularly while technological developments outpace regulation. Companies should implement robust, risk-based human rights strategies that include due diligence, stakeholder engagement and ongoing monitoring. Leveraging technology, including AI itself, can help identify and mitigate risks, but responsible deployment requires careful planning. The pressure to innovate quickly may be in conflict with the time needed for meaningful consultation and impact assessment.
Negative Impacts on Human Rights from Technology
Digital technologies have transformative potential for businesses and societies, but the rapid advancement of tools like generative AI without robust regulation poses serious risks to individual rights and freedoms.
Issues such as algorithmic bias, mass surveillance and privacy erosion have profound human rights implications. The competitive nature of the AI industry may also compromise safety. These challenges disproportionately impact vulnerable groups, raising ethical and operational concerns for companies.
Technology has the potential to adversely impact a range of human rights, including but not limited to:
- Right to privacy and freedom of expression (UDHR, Articles 12 and 19; ICCPR, Articles 17 and 19; OECD, AI Principle 1.2): AI systems’ reliance on large volumes of personal and biometric data raises concerns about unlawful surveillance, data misuse and lack of informed consent, especially in workplaces and public spaces. Further, AI-generated bots can influence social media, news and advertising with misinformation or manipulated narratives, while algorithmic echo chambers and harassment can cause self-censorship, particularly among human rights defenders, journalists and activists. The legal landscape is evolving to address emerging privacy threats: the EU’s General Data Protection Regulation (GDPR) offers privacy protections, while the EU AI Act introduces risk-based obligations, including prohibitions on ‘unacceptable risk’ practices from 2025, with high-risk AI system obligations coming into force in 2026 and 2027.
- Rights to non-discrimination, equality, freedom from targeting and persecution, and minority rights (UDHR, Articles 2 and 7; ICCPR, Articles 2, 26 and 27; ICERD; 1992 UN Declaration on Minorities): AI systems can reinforce human biases through both flawed algorithm design and the use of biased training data, compounding the exclusion and misrepresentation of minorities. Surveillance technologies may be used to target individuals based on identifying markers such as ethnicity, religion or political affiliation, enabling systematic discrimination and persecution of minority populations. Racial and gender minorities, LGBTQ+ individuals and other groups who are targeted offline are also more likely to be victims of online harassment. However, the potential impacts of discriminatory technology may be felt more broadly, influencing the delivery of critical services like policing and healthcare. Discriminatory outcomes in areas such as hiring, criminal sentencing and service delivery have been widely documented, perpetuating systemic inequalities.
- Right to work and just and favourable conditions of work; freedom from forced labour (UDHR, Article 23; ICESCR, Articles 6 and 7; OECD, AI Principle 1.1; ILO Conventions): Data labelling and annotation work, which enables AI systems, is often low-paid and insecure, particularly in the Global South. Technological advancements have also enabled the rise of platforms relying on “gig workers”, which may improve access to work. However, gig workers face higher job insecurity and fewer opportunities for professional development, and often lack basic labour protections. Workers in electronics supply chains are also frequently exploited and work in dangerous conditions, from raw material extraction to the recycling of e-waste. The extraction of transition minerals essential for digital technologies (including cobalt and lithium) can be linked to armed conflict, environmental destruction and exploitation including child labour. People in the Global South are disproportionately at risk. In both manufacturing and extraction, automation could render many jobs obsolete without safety nets in place, replacing entire categories of jobs and disrupting workforces and supply chains.
- Right to an adequate standard of living, health and a clean environment (UDHR, Article 25; ICESCR, Articles 11 and 12; UNGA Resolution 76/300; Human Rights Council Resolution 48/13): The race for AI dominance and rapid growth in digital services pose risks to health and the environment on multiple fronts. AI’s climate impact is uncertain: while the IEA forecasts data centre emissions rising from 180 Mt to 300 Mt between 2025 and 2035, AI applications could simultaneously reduce emissions from the energy sector by more than that amount. Outcomes will depend on AI adoption rates, business incentives and regulatory responses.
- Impacts on the environment are not limited to GHG emissions. Toxic waste is generated throughout the lifecycle of electronic devices – and informal recycling practices in the Global South expose workers and communities to heavy metals and carcinogens. Mineral extraction for technologies can disproportionately harm Indigenous communities with little accountability.
- AI adoption also carries risks to healthcare. The pressure to quickly adopt emerging technologies may compromise inclusive design of services, raising concerns about biases in training data and resulting health outcomes. Data privacy risks in health applications can deter individuals from seeking care or sharing sensitive information necessary for diagnosis and treatment, undermining the effectiveness of digital health solutions. Marginalized groups may receive inferior digital care replacing in-person services, creating a two-tiered system that widens existing healthcare inequities.
- Policies designed to protect human rights must include safeguards against the negative consequences of being constantly connected. For workers in the digital economy, the expectation of constant connectivity erodes boundaries between work and personal life, contributing to burnout, stress-related illnesses and mental health challenges.
- Right to life and personal security (UDHR, Article 3; ICCPR, Article 6; OECD, AI Principle 1.4): The misuse of digital identity systems and biometric databases can increase the risk of identity theft or even targeted violence by state or non-state actors. In the UK, London’s Metropolitan Police drew concern from rights groups, including Amnesty International, over its decision to more than double its use of live facial recognition, a technology criticized for potential racial biases because it is less accurate in scanning the faces of People of Colour. AI is also increasingly being used to support decisions that directly affect human life, such as in lethal autonomous weapons systems (LAWS) and autonomous vehicles. Digital platforms can also be exploited to incite violence, coordinate attacks and spread extremist ideologies.
- Right to participate in cultural life, education and share in scientific advancement (UDHR, Article 27; ICESCR, Article 15; UDHR, Article 26; ICESCR, Articles 13 and 14): The benefits of technology are not equally shared, with digital divides reinforcing existing social and economic inequalities along lines of geography, income, age and education. Technological solutions are typically designed for high-connectivity contexts and able-bodied users, systematically excluding those in low-bandwidth environments, rural areas or people with disabilities. This limits their ability to participate fully in increasingly digitalized societies.
- Limited access to devices, reliable internet, and digital skills can worsen educational inequalities for students from low-income and under-resourced communities. The use of surveillance tools including monitoring software, facial recognition and behaviour tracking can damage trust, limit free expression and deepen the digital divide by normalizing invasive data practices for vulnerable students.
Vulnerable groups
Vulnerable groups bear the brunt of negative impacts from technology, facing heightened risks of exclusion, exploitation and injustice. Businesses need to explicitly consider these groups in their human rights due diligence, as their exposure to harm is higher. Companies should also consider how the same technology can affect different groups differently. Examples of people at particular risk include, but are not limited to:
- People of Colour and Indigenous Peoples, who are often underrepresented or misrepresented in datasets, leading to serious human rights violations. Businesses should implement inclusive data auditing to ensure representation and prevent discrimination.
- Women and girls, who often lack equitable access to the internet and may be discriminated against through AI and algorithmic gender biases. Companies should conduct gender impact assessments on AI systems to identify and mitigate biases.
- Low-income and digitally underserved communities, who may be excluded from digital services or decision-making processes. Businesses should develop inclusive digital services that cater to the needs of underserved populations.
- Migrant and refugee populations, who are more likely to be subject to surveillance and profiling. Companies should ensure data protection policies prevent profiling and surveillance of vulnerable populations when planning, designing and procuring data processing initiatives.
- People with disabilities are often excluded due to lack of inclusive data or system design. Technology developers should aim to increase accessibility of mainstream solutions alongside specialized assistive technologies. For example, companies developing autonomous vehicles must ensure they are accessible to people with disabilities; otherwise, mainstream AVs could increase transportation discrimination and create even higher barriers to employment. The OECD has reported that ‘mainstreaming’ accessible technology also reduces the cost of offering specialized solutions, creating more sustainable business models.
- Children and older adults, who may be more vulnerable to exploitation or exclusion, require robust data protection and privacy measures to safeguard their rights. These measures align with the UN Convention on the Rights of the Child (CRC) and the UN Principles for Older Persons, which both emphasize the protection of these vulnerable age groups.
- Human Rights defenders, whose beliefs and activism may be targeted. Companies should ensure that data practices do not facilitate surveillance or targeting of activists, in order to align with UNGPs. The OSCE Office for Democratic Institutions and Human Rights (ODIHR) publishes Guidelines on the Protection of Human Rights Defenders.
- Individual users and consumers of AI, who may be subject to opaque decision-making and a lack of meaningful recourse. Businesses should provide transparent and explainable AI systems to ensure users understand decision-making processes (see the sketch following this list). This adheres to the OECD Principles on Artificial Intelligence, which advocate for transparency, accountability, and inclusiveness in AI systems.
- Citizens of autocratic or oppressive regimes, where digital tools may be used for state surveillance and control. Companies should avoid the use of tools that could exacerbate these issues by conducting thorough human rights due diligence in line with the UNGPs.
- People in conflict zones or humanitarian crises, who might be targeted or misrepresented by AI-powered surveillance, misinformation or autonomous weapons. Companies must avoid using or procuring data that could be used for surveillance and ensure informed consent is genuine, given the power imbalances in conflict settings. Engaging local stakeholders including humanitarian organisations will help businesses to understand specific risks. Companies should consult UNDP guidance for heightened human rights due diligence (hHRDD) for business in conflict‑affected contexts.
- Workers in precarious or gig economy roles, who may be exposed to lower wages, limited career growth, algorithmic management and lack of transparency of conditions. Businesses must ensure fair wages, career growth opportunities and transparency in working conditions. This is supported by the ILO’s “Decent Work Agenda,” which emphasizes the importance of fair and equitable working conditions.
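As a minimal illustration of the transparency recommendation above, the sketch below returns a per-factor explanation alongside each automated decision, so an affected person can see how an outcome was reached. The features, weights and threshold are illustrative assumptions only, not a real hiring or credit model.

```python
# Minimal sketch: returning a per-factor explanation alongside an automated
# decision, so an affected user can see why the system reached its outcome.
# Features, weights and threshold below are illustrative assumptions.
FEATURE_WEIGHTS = {
    "years_of_experience": 2.0,
    "relevant_certifications": 1.5,
    "assessment_score": 0.5,
}
DECISION_THRESHOLD = 10.0

def decide_with_explanation(applicant: dict) -> dict:
    """Score an applicant and report each factor's contribution."""
    contributions = {
        feature: weight * applicant.get(feature, 0.0)
        for feature, weight in FEATURE_WEIGHTS.items()
    }
    total = sum(contributions.values())
    return {
        "decision": "shortlist" if total >= DECISION_THRESHOLD else "reject",
        "total_score": total,
        "threshold": DECISION_THRESHOLD,
        "contributions": contributions,  # shown to the user on request
    }

if __name__ == "__main__":
    result = decide_with_explanation(
        {"years_of_experience": 3, "relevant_certifications": 2, "assessment_score": 8}
    )
    print(result)
```

In practice, explanations need to be meaningful to non-technical users and paired with a route to contest the decision, consistent with the OECD transparency and accountability principles.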
What is the gig economy?
“Gig work” involves short-term, task-based employment typically facilitated by digital platforms and algorithms underpinned by AI. Workers are usually self-employed and paid per task rather than a salary. The model offers flexibility for platforms, consumers and workers, allowing on-demand labour and varied work schedules. Where gig work is common – along with other non-traditional forms of employment – this is considered a “gig economy”.
Technological advancements have enabled the rise in digital platforms that make use of this innovative business model. To capture the benefits of the gig economy while protecting workers and consumers, businesses need to review and update policies with this model in mind.
Gig-economy-specific issues include the following:
- Algorithmic inequalities: The use of algorithms to manage gig workers often leads to unfair practices that undermine decent wages, labour rights and access to remedy.
- Lack of traditional employment benefits: Gig-economy workers may have more difficulty accessing healthcare and retirement benefits compared to those in traditional employment. According to a 2025 report, 51% of surveyed riders and drivers for food delivery and ride-hailing apps have risked their health and safety, with 42% reporting physical pain resulting from their work.
- Work-Life Balance: Flexibility is a main stated benefit of gig work, but irregular hours, low pay and a lack of benefits create significant work-life balance challenges. A 2025 report revealed that 3 out of 4 surveyed riders and drivers for food delivery and ride-hailing apps have felt anxiety over the potential for their income to drop.
- Freedom of Association: Many jurisdictions classify gig-economy workers as ‘independent contractors’, something that often precludes them from joining unions or engaging in collective bargaining according to an ILO report.
Definition & Legal Instruments
Definition
There is no single, universally agreed legal definition of digital and emerging technologies. Regulation generally refers to technologies used to create, store, process, and communicate information, including those expected to become mainstream within the next five to ten years. In this section we explore definitions and regulations of high-risk AI and digital rights.
Digital rights refer to the application of national and international human rights law in digital contexts. The concept has gained prominence due to rapid technological advancement, the increasing digitization of civic life and work – including the rise of the gig economy – and the erosion of offline freedoms. At the same time, the widening digital divide underscores how access to technology is closely tied to inequality and exclusion. In its 2022 Declaration on Digital Rights and Principles, the EU committed to a secure, safe and sustainable digital transformation that puts people at the centre.
The OECD defines AI systems as machine-based systems that, for “human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments” and notes that “AI systems are designed to operate with varying levels of autonomy”.
- The definition of “AI” differs between jurisdictions and some have not defined AI comprehensively. The EU AI Act’s definition (Art. 3) is based on the OECD’s, but they are not identical.
Many emerging regulations define high-risk AI systems as those involved in consequential decision-making or sensitive personal data processing.
- The EU AI Act (Chapter 3, Art. 6) identifies high-risk AI through use cases that pose serious risks to health, safety, or fundamental rights, including applications in critical infrastructure, law enforcement, and democratic processes, and imposes strict pre-market obligations.
- Colorado’s SB 205 – the first comprehensive US AI legislation – characterizes high-risk AI as systems that substantially influence consequential decisions.
- California defines high-risk automated decision systems (ADS) as those that assist or replace human discretion in decisions with significant legal or social impact – such as access to housing, education, employment, credit, healthcare, and justice.
- In contrast, Texas’s HB 149 adopts a narrower, intent-based approach, requiring proof of discriminatory intent for liability in AI deployment.
AI incidents and hazards
The OECD (2024) provides definitions that organizations can use to describe the problems and failures of AI systems. An event where the development or use of an AI system results in actual harm is termed an AI incident, while a potentially harmful event is termed an AI hazard. ‘Harm’ includes, among others, violations of human rights or a breach of obligations under applicable law intended to protect fundamental, labour and intellectual property rights.
Legal Instruments and Initiatives
Governments are rapidly enacting AI-specific legislation. The EU AI Act imposes strict requirements on high-risk AI systems, including algorithmic transparency and human oversight, while Colorado’s AI Act requires deployers to prevent algorithmic discrimination. Other jurisdictions, such as Japan and ASEAN nations, have adopted voluntary AI guidelines, promoting business-led initiatives. Singapore’s Model AI Governance Framework also emphasizes voluntary best practices.
The UN Working Group on Business and Human Rights has highlighted that current AI legislation fails to adequately protect human rights under the UN Guiding Principles, especially concerning generative AI, which remains largely unregulated. This lack of specific regulation is particularly concerning for human rights advocates, as it leaves offline and digital rights unprotected and lacks mechanisms for remedy. The UN has also noted that the views of the Global South are insufficiently incorporated into both binding and voluntary frameworks.
Given this fast-evolving landscape, businesses must monitor developments in relevant jurisdictions. This list is not exhaustive.
Binding legislation
The following legislation has been passed with the aim of protecting human rights by imposing requirements on developers of technologies, as well as on deployers, users, importers, distributors and supply chain participants.
EU AI Act
- Entered into force: August 2024
- Prohibited practices banned: February 2, 2025 (e.g., social scoring, manipulative AI).
- Transparency obligations for general-purpose AI: August 2025
- General provisions applicable: August 2026
- High-risk system obligations: August 2027 (e.g., conformity assessments, FRIA)
- Focus: harmonize EU regulatory approach to AI in order to protect against harmful effects and support innovation.
The EU AI Act is modelled on European product legislation, supplemented with a risk-based approach and explicit human rights safeguards. It is underpinned by principles of proportionality based on risk, transparency and accountability, fairness and non-discrimination, prevention of harm, data privacy and security, safety and trustworthiness, and the need for human oversight. It applies directly to all EU Member States, without the need for national legislation (in most cases).
Key human rights obligations in AI regulations
- Risk-based classification of AI systems – The EU AI Act classifies AI systems by four risk levels, with stricter obligations for high-risk applications, such as those affecting employment, critical infrastructure or law enforcement. AI systems that pose a clear threat to rights are banned completely (such as emotion recognition in workplaces). Colorado’s AI Act (2024) also classifies AI systems by risk. Other US state laws, such as California’s AI Transparency Act, focus primarily on high-risk systems without broader categorization.
- Fundamental Rights Impact Assessments (FRIA) – Under the EU AI Act, deployers of high-risk AI systems must assess potential impacts on privacy, non-discrimination, human dignity, and other fundamental rights (Article 27).
- Transparency and disclosure – Users must be informed when interacting with AI systems (e.g. EU AI Act, Article 52).
- Data quality standards – Training Data for high-risk systems must meet quality standards and be reviewed for bias (EU AI Act, Article 10).
- Human oversight – High-risk AI systems must include mechanisms for effective human monitoring and intervention (EU AI Act, Article 14).
- Data protection – The EU AI Act aligns with the General Data Protection Regulation (GDPR), reinforcing protections around personal data, especially sensitive personal data.
EU AI Pact
The EU Artificial Intelligence (AI) Pact provides a two-pillar model of support to companies needing to comply with the EU AI Act. Pillar I supports companies preparing to comply by facilitating knowledge transfer and best practice sharing between stakeholders, including webinars hosted by the AI Office. Pillar II fosters the sharing of early or voluntary applications of the EU AI Act’s high-risk system obligations, ahead of their entry into force in August 2027.
Council of Europe AI Framework Convention
- Opened for signature: September 5, 2024
- Focus: Ensure AI activities are consistent with human rights, democracy, and the rule of law.
The AI Framework Convention is the first-ever international legally binding treaty focused on AI and human rights. It aims to ensure that activities within the lifecycle of AI systems are fully consistent with human rights, democracy and the rule of law, while being conducive to technological progress and innovation. On 5 September 2024, the Convention was signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino, the UK, the US, Israel and the EU. Following its entry into force, the treaty is open for accession by other non-member States.
China
China’s Interim Measures for the Management of Generative Artificial Intelligence Services (“AI Measures”)
- Effective: 2023
- Focus: generative AI service providers and users in China
The AI Measures [1] apply to entities providing services in mainland China regardless of where they are incorporated. Chinese government agencies can take action against foreign companies if any Generative AI services offered within the country violate local regulations. The AI Measures do not categorize AI systems according to risk, although some will be subject to stricter scrutiny by authorities, such as service providers with “social mobilization capabilities” (Art. 17), offering AI products perceived as having the potential to support social movements challenging state power.
Service providers must comply with requirements on data processing, data labelling, data training, content moderation and complaints and reporting mechanisms. Generative AI service providers must also “uphold the core socialist values and refrain from generating content that incites the subversion of state power” (Art. 4(i)) and take steps to prevent discrimination and not infringe on privacy or personal information rights, or harm the physical or mental health of others.
China’s Measures for Identifying Artificial Intelligence-Generated and Synthetic Content
- Effective: 1 September 2025
To complement the AI Measures and other fundamental laws on AI, the labeling law [2] imposes both implicit (i.e. in metadata not easily perceived by users) and explicit (i.e. text and images visible to users) identification requirements on service providers for AI-generated content.
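As a rough illustration of what an “implicit” label could look like in practice, the sketch below embeds machine-readable provenance fields in a generated PNG image using the Pillow library; an “explicit” label would additionally appear as visible text or a watermark on the content itself. The field names and values are illustrative assumptions, not the exact identifiers prescribed by the Chinese measures or any other standard.

```python
# Minimal sketch of "implicit" provenance labelling: embedding machine-readable
# metadata in a generated image file. Field names below are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, provider: str) -> None:
    """Write the image with metadata marking it as AI-generated."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical field name
    metadata.add_text("generation_provider", provider)  # hypothetical field name
    image.save(path, pnginfo=metadata)

def read_ai_label(path: str) -> dict:
    """Return any text metadata embedded in the PNG."""
    with Image.open(path) as img:
        return dict(getattr(img, "text", {}))

if __name__ == "__main__":
    img = Image.new("RGB", (64, 64), color="white")  # stand-in for generated content
    save_with_ai_label(img, "labelled_output.png", provider="example-genai-service")
    print(read_ai_label("labelled_output.png"))
```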
China’s National Standards on Generative AI
- Effective: 1 November 2025.
- Focus: three national standards aimed at enhancing the security and governance of generative AI.
- Recommended national standard Generative Artificial Intelligence Data Annotation Security Specification [3] (2024) aims to improve the safety and security of GenAI systems, defined to include preventing disinformation but also censoring content that criticizes Communist Party rule. The requirements state that training data should be diverse, but diversity is not defined.
- Recommended national standard Security Specification for Generative Artificial Intelligence Pre-training and Fine-tuning Data [4] (2024) aims to establish safety rules for developers of GenAI models, including not only protecting individual safety and preventing disinformation, but also censoring content that criticizes Communist Party rule.
- Technical Documentation of National Technical Committee 260 on Cybersecurity of Standardization Administration of China: Basic Security Requirements for Generative Artificial Intelligence Service [5] (2024) establishes specific oversight processes for Chinese AI companies, focusing on safety risks, which include algorithmic bias, disclosure of personal information and censorship.
There are other laws and regulations in China that do not directly seek to regulate AI, but that may affect the development or use of AI.
China’s Artificial Intelligence Security Governance Framework
On 15 September 2025, China published Framework 2.0 of its ‘Artificial Intelligence Security Governance Framework’ [6], in response to rapid developments in AI technology and applications since the release of Framework 1.0 in September 2024. The updated Framework reflects changes in risks, introduces a guide to categorize and grade AI risks, and adjusts prevention and governance measures. Though not legally binding, the technical governance document is likely to be highly influential, not only on eventual regulation in China but also on AI system standards globally.
United Kingdom
UK Online Safety Act
- Effective: 2023
- Focus: Protecting children and adults online, managing risks from harmful content.
Instead of regulating AI directly, the UK has tasked sector-specific regulators with interpreting and applying a “principles-based framework” to the development and use of AI in their sectors. One relevant piece of legislation is the UK Online Safety Act (2023), which aims to protect children and adults online by requiring providers to identify, mitigate and manage the risks of harm from illegal and harmful content and activity, while protecting users’ rights to freedom of expression and privacy and ensuring transparency and accountability.
United States
No federal law on AI exists as of October 2025. Many states are in the process of developing their own AI regulations, with some already in force:
Colorado AI Act (CAIA)
- Effective: May 2024
- Focus: Duties for developers and deployers of high-risk AI systems, compliance starting 1 February 2026.
Colorado enacted the first comprehensive AI legislation in the US, the SB 205 Colorado AI Act (CAIA), in May 2024. The Act creates duties for all developers and deployers of high-risk AI systems in Colorado, with compliance starting from 1 February 2026. The Act takes a risk-focused approach, particularly on bias and discrimination. The CAIA is both the first and the most far-reaching AI legislation in the US, requiring deployers of all AI systems to notify consumers that they are interacting with AI. The Act also imposes obligations relating to transparency, disclosures, risk assessment and mitigation, governance and impact assessment for developers and deployers of high-risk AI systems. The Colorado AI Act may emerge as a template for other states’ regulation – some are already considering bills that would impose safeguards against bias by AI systems.
California has passed several AI-related laws focused on transparency and consumer protection:
California AB 2013 (Generative AI Training Data Transparency Act)
- Effective: 1 January 2026
- Focus: Documentation and disclosures by developers of generative AI systems.
In September 2024, California enacted AB 2013, the ‘Generative AI Training Data Transparency Act’, which requires documentation and disclosures by developers of generative AI systems, including detailed information on the data used to train models. These requirements take effect on 1 January 2026, but the Act applies retroactively to generative AI systems released since 1 January 2022, creating obligations for most AI systems.
California SB 942 (AI Transparency Act)
- Effective: 1 January 2026
- Focus: Transparency measures and licensing practices for commonly used AI systems.
SB 942 California AI Transparency Act (effective 1 January 2026) mandates transparency measures and licensing practices for commonly used AI systems, with penalties including fines.
California AB 3030 (Health Care Services; AI Act)
- Effective: 1 January 2025
- Focus: Transparency and disclosure requirements for health care providers using generative AI.
AB 3030 Health Care Services; AI Act (already in effect) targets health care providers using generative AI, emphasizing transparency and disclosure requirements.
California AB 2355 (Political Reform Act of 1974)
- Effective: 1 January 2025
- Focus: Disclaimers for AI-generated political ads.
AB 2355 Political Reform Act of 1974: Political Advertisements: Artificial Intelligence requires disclaimers for AI-generated political ads created by political committees.[6]
California Automated Decision-Making Technologies (ADMT) Rules
- Effective: 1 January 2026
- Focus: Consumer rights to opt out of automated decision-making in housing, employment, and healthcare.
In July 2025, California adopted rules on automated decision-making technologies (ADMT), under the California Consumer Privacy Act (CCPA) effective 1 January 2026. These rules grant consumers the right to opt out of automated decision-making in areas such as housing, employment, and healthcare. Businesses must provide pre-use notices, ensure transparency about the data used, and conduct risk assessments. Full compliance is required by 1 January 2027.
Texas Responsible AI Governance Act (TRAIGA)
- Effective: 1 January 2026
- Focus: Preventing discrimination and behavioural manipulation in AI systems.
In June 2025, Texas passed the Texas Responsible AI Governance Act (TRAIGA) (HB 149), which takes effect on 1 January 2026. While initially a broad attempt to prevent discrimination, the final version significantly narrowed its scope and has limited obligations for the private sector. While TRAIGA prohibits AI systems used for behavioural manipulation and unlawful discrimination, it adopts a narrower, intent-based approach and thereby offers lower protections for individuals than Colorado’s law.
United States Uyghur Forced Labor Prevention Act (UFLPA)
- Effective: June 2022
- Focus: prohibiting import of goods made with forced labour into the US
The United States Uyghur Forced Labor Prevention Act (UFLPA) (2022) requires importers to provide evidence that goods have not been made using forced labour. The legislation targets goods produced or manufactured wholly or partly in the Xinjiang region of China, products made by entities on the UFLPA entity list, or goods made with raw materials sourced through forced labour. The vast majority of the value of shipments detained under the UFLPA has been linked to electronics (more than $3.1 billion).
Other relevant legislation
Existing data protection frameworks like EU General Data Protection Regulation (GDPR), regulation of platforms such as the EU Digital Services Act (DSA) and reporting requirements under EU Non-Financial Reporting Directive (2018) and EU Corporate Sustainability Reporting Directive (2024) also apply to AI systems.
Voluntary initiatives
- OECD AI Principles, the first intergovernmental standard on AI, were initially adopted in 2019 and updated in May 2024. The Principles guide AI actors in their efforts to develop trustworthy AI and provide policymakers with recommendations for effective AI policies. The EU, the Council of Europe, the United States, the United Nations and other jurisdictions have made a political commitment to adhere to the Principles. These adherents use the OECD’s definition of an AI system and lifecycle.
- The UN’s Global Digital Compact, introduced in response to Member States’ Declaration A/RES/75/1, aims to advance an open, secure and human-centred digital future. Its AI Panel and Dialogue is an intergovernmental initiative to foster a Global Dialogue on AI Governance, centred on international human rights frameworks and the UN SDGs, to address risks and power imbalances associated with AI.
- The UN’s Charter of Human Rights and Principles for the Internet (2022) from the Internet Rights and Principles Dynamic Coalition connects human rights law and norms with rights-based aspirations for the online environment. It has applications for different stakeholders including businesses interested in digital rights and governance.
- The International Principles on the Application of Human Rights to Communications Surveillance (the “Necessary and Proportionate Principles” or “13 Principles”), from the Electronic Frontier Foundation and a coalition of NGOs, show how existing human rights law applies to modern digital surveillance; the 13 Principles have been endorsed by over 600 organizations since 2013.
- Other relevant frameworks include:
- The EU European Declaration on Digital Rights and Principles (2022) presents the EU’s vision for digital transformation, providing a reference framework for citizens and guiding the EU and Member States.
- The European Commission’s High-Level Expert Group on AI’s Ethics Guidelines for Trustworthy AI (updated 2021).
- The EU EOM Guidelines for Observing Online Election-Related Content (Updated, March 2025).
Key resources
- EU, Ethics guidelines for trustworthy AI
- EU, Assessment List for Trustworthy Artificial Intelligence (ALTAI) is a practical tool that translates the Ethics Guidelines into an accessible and dynamic (self-assessment) checklist. The checklist can be used by developers and deployers of AI who want to implement the key requirements in practice.
- Digital Infrastructures of Sustainability Regulation (DigiChain) is a research project between the University of Amsterdam (UvA) and the Asser Institute exploring the role of technology in sustainability regulation, particularly in value chain due diligence.
Contextual Risk Factors
In the rapidly evolving landscape of technology and human rights, companies must understand their responsibilities under the UN Guiding Principles on Business and Human Rights (UNGPs), including in the realm of digital rights and business. They must also understand the risk factors that increase the likelihood of human rights impacts from technology development, deployment and end-use. To address these risks, businesses should adapt responsible business conduct to include HRDD of AI and emerging technologies, in accordance with the UNGPs.
- Degree of automated decision-making: How often AI systems act autonomously and what kinds of decisions they make shapes the level of human rights risk an organization must manage. For example, algorithmic management systems that determine gig workers’ wages may create power structures with limited oversight, restricting workers’ access to remedy and grievance mechanisms. The OECD AI Principles emphasize human-centered values and fairness in AI systems, while the EU AI Act classifies AI systems by risk level, requiring human oversight, transparency and accuracy safeguards.
- Existing governance mechanisms: Companies with strong existing human rights due diligence (HRDD) processes and governance will be better positioned to prevent and address technology-related human rights risks. Businesses should adapt HRDD strategies and activities to ensure responsible business conduct of emerging technologies and ethical use of AI.
- Underlying data architectures: The code and source data used to programme AI systems can significantly influence human rights risks. For example, hiring algorithms trained on historical employment data may perpetuate past discrimination by learning patterns that favoured certain demographics, discriminating against qualified candidates from underrepresented groups (a simple selection-rate audit is sketched after this list).
- Extent to which new technologies can change business models: Across industries, businesses are adapting their strategic frameworks to leverage emerging technologies, transforming the risks that various sectors face. The OECD AI Principles emphasize inclusive growth and sustainable development in AI deployment, recognizing these transformative impacts. For example, an e-commerce retail company may not employ workers within physical store locations, but instead have the majority of their employees working in warehouse or delivery roles, changing the human rights and occupational risks that workers are exposed to.
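To make the data-architecture risk above concrete, the sketch below audits a system’s outputs by comparing selection rates across demographic groups and flagging large gaps. The sample data, group labels and the 0.8 (“four-fifths”) threshold are illustrative assumptions; a real assessment would involve far more than a single ratio.

```python
# Minimal sketch of a selection-rate audit for a hiring (or similar) system:
# compare how often each group receives a positive outcome and flag large gaps.
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, was_selected) pairs taken from the system's outputs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest-rate group, if below threshold."""
    best = max(rates.values())
    return {group: round(rate / best, 2) for group, rate in rates.items() if rate / best < threshold}

if __name__ == "__main__":
    sample = (
        [("group_a", True)] * 40 + [("group_a", False)] * 60
        + [("group_b", True)] * 20 + [("group_b", False)] * 80
    )
    rates = selection_rates(sample)
    print("Selection rates:", rates)                       # {'group_a': 0.4, 'group_b': 0.2}
    print("Below threshold:", disparate_impact_flags(rates))  # {'group_b': 0.5}
```

A flagged gap does not by itself prove discrimination, but it is the kind of signal that should trigger deeper review of the training data and model design as part of HRDD.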
Industry-specific Risk Factors
Human rights impacts from digital technologies extend far beyond the companies that design and deploy them, creating interconnected risks across the entire value chain, from mineral extraction to end-user applications. The globalized and fragmented nature of technology development and deployment creates shared responsibility among stakeholders – for both the risks and opportunities of technology for human rights. The below analysis focuses on some key sectors across the technology value chain, highlighting sector-specific human rights risk factors from technology. (Cross-sector opportunities are explored in the Positive Impacts section in the Overview; the focus of this guidance is on the prevention of human rights risks by business).
While not exhaustive, the following summary demonstrates the breadth and severity of human rights risks associated with technology raw materials, manufacturing, financing, and digital services. This should help businesses start to understand the nature of the risks they could be associated with through their operations and business relationships.
Mining and Extractive Industry
Mining sector-specific human rights risk factors include the following:
- Conflict affected areas: In conflict-affected and high-risk areas (CAHRAs), there is an elevated risk for environmental and human rights harms in the mining and extractive industry. In CAHRAs there are risks of violence linked to resource extraction, minerals funding armed conflict, and community unrest near mining sites. The main challenge is the supply chain risk since companies throughout the value chain can be associated with materials from CAHRAs.
- Weak governance: poor regulatory oversight, weak rule of law and limited institutional capacity in mining jurisdictions can increase the risk of labour exploitation and community displacement. The UN Guiding Principles on Business and Human Rights (UNGPs) recognize that some of the worst human rights abuses involving business occur in regions where there is competition for resources, high levels of corruption, and weak corporate accountability.
- Occupational health and safety, poor working conditions: People working in the mining and extractive industry face significant risks including environmental, chemical, and physical risks stemming from respiratory hazards, unsafe equipment, explosions, and fires.
- Complexity of supply – brokers and artisanal mining: Many minerals necessary for emerging technologies including cobalt and tantalum are mined through complex critical mineral supply chains. Informal artisanal and small-scale mining (ASM) operations can evade government regulations and leave workers vulnerable to exploitation, unfair pay, dangerous working conditions, abuse, and child labour.
- Environmental destruction: The UN Environment Programme emphasizes that natural resource extraction has serious impacts on the environment and human health. Environmental damages may result in violations of the right to a Clean, Healthy, and Sustainable Environment. Notable challenges include climate change, biodiversity loss, and pollution. For example, copper is often extracted through open-pit mining that can devastate ecosystems, contaminate water sources, and leave lasting scars on landscapes.
- Indigenous Peoples: Indigenous Peoples are vulnerable to the impacts of land dispossession, particularly when there is a lack of Free, Prior, and Informed Consent (FPIC) in mining projects (UNDRIP, Article 32).
- Forced displacement and land grabbing: In mineral-rich regions, there is a risk of forced displacement and land grabbing due to the mining and extractive industry. Indigenous Peoples are especially vulnerable to being victims of land grabbing.
- Child labour: According to the ILO, approximately 1 million children are affected by child labour in the mining and extractive sector, primarily in artisanal and small-scale mining (ASM). Cobalt, a key resource for the green transition and rechargeable batteries, is often sourced from the Democratic Republic of the Congo where tens of thousands of children work at mine sites. In the mining sector, most work performed by children is hazardous and categorized as the worst form of child labour. Risks include unstable tunnels collapsing, exposure to toxic gases, psychosocial risks, sexual assault, and physical assault.
Renewable Energy – Transition and Critical Minerals
Moving away from fossil fuels requires increased mining of transition minerals, which are directly linked with human rights violations. Risks include hazardous working conditions and child labour, and are compounded by weak governance.
The International Energy Agency (IEA) forecasts that demand for transition minerals will surge 3.5 times by 2050. These minerals are crucial in the transition to net zero and in achieving the SDGs.
- For example, lithium is essential for batteries powering electric vehicles and renewable energy storage
- Copper underpins global connectivity through electrical wiring, internet infrastructure and power grids
This complex relationship between transition and critical minerals and achieving the SDGs has been highlighted by UN Secretary-General António Guterres. Significant investments, adequate laws and robust relationships will be required between states and extractive companies to make sure human rights are effectively protected.
Helpful Resources
- BSR, AI and Human Rights in Extractives: This resource contains a recommendations section (p.11-12) that provides businesses with actions they can take to mitigate human rights harms related to AI usage in the extractives sector.
- OECD, Handbook on Environmental Due Diligence in Mineral Supply Chains: This handbook supports businesses in the extractives sector with the implementation of their human rights and environmental principles in line with OECD standards.
- OECD, How to address bribery and corruption risks in mineral supply chains: This resource answers frequently asked questions and explains in simple language what actions companies should take to address human rights risks in their minerals value chains.
- OECD, Human Rights Due Diligence for Digital Technology Use: Artificial Intelligence
- OECD, Practical actions for companies to identify and address the worst forms of child labour in mineral supply chains. Practical guidance for companies to help them identify, mitigate and account for the risks of child labour in their mineral supply chains, developed to build on the due diligence framework of the OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High Risk Areas.
Software Development
Software sector-specific human rights risk factors are distinct and may be overlooked in traditional frameworks. Unlike hardware manufacturers, software companies must account for how their products are used, misused or weaponized across diverse contexts – including enabling surveillance, reinforcing algorithmic bias or facilitating online harm. Software developers (who design AI systems, apps, and algorithms) may directly cause or contribute to harms through design choices, while platform providers (who host third-party content and services) may be linked to harms through how others use their infrastructure.
Under the EU AI Act, developers[1] bear primary responsibility for system design and compliance, while deployers (companies using AI systems) must ensure proper implementation and monitoring. However, software companies do not just face legal compliance obligations under the AI Act; they also have a responsibility under the UNGPs to proactively assess and mitigate these risks, design for safety and remediate harm. Companies should tailor their due diligence processes to digital technologies and to their specific role as developer, platform or deployer.
Software specific risk factors include the following:
- Discrimination and bias: Vulnerable groups, including children, elderly adults, ethnic minorities, gender minorities and sexual minorities, face additional risks for digital discrimination during software development. Specific risks include algorithmic bias in AI models, exclusion from technology design and discrimination in automated decision-making. Additionally, children are particularly at risk of AI exploitation.
- Digital autonomy, privacy and expression: Digital tools can create risks to both privacy and freedom of expression. Tracking features and analytics can become intrusive without adequate consent mechanisms. Surveillance-enabling tools, such as facial recognition modules or device-level monitoring, can be misused in ways that violate rights. Partnerships with government clients can introduce additional risks if safeguards are not built into the design.
- Exploitative business models: Software business models, whose incentives may conflict with human rights objectives, pose significant risks. Surveillance-based advertising models that micro-target vulnerable users can cause mental and physical health harms, while algorithms may amplify biased content, misinformation, online hate and child abuse material. However, alternative models are emerging – open-source software projects and subscription-based platforms demonstrate that human rights-centered approaches can be commercially viable.
- Right to life and personal security: Advances in weapons technology, including AI-enabled autonomous weapons capable of making targeting decisions, pose significant risks to the right to life and personal security.
Financial Institutions
Institutional investors, including asset owners and asset managers, have significant influence over how technology companies are run. They can encourage companies to build human rights protections into their operations, product design, raw-material sourcing and business relationships. As AI grows quickly, expectations for financial institutions to invest responsibly are rising.
The World Economic Forum notes that responsible AI governance supports long-term value, trust and stability. AI is now a major business risk: in 2025, 73% of S&P 500 companies reported at least one important AI-related risk, compared with only 12% in 2023. Reputational damage from bias, misinformation, and privacy lapses can significantly erode investor confidence and portfolio value. The investor community has called on tech companies to uphold digital rights, expressing concern over insufficient corporate accountability.
Investors throughout the technology lifecycle must ensure their investments do not cause, contribute to, or become directly linked to human rights harms by conducting risk-based due diligence under the UNGPs.
Key finance sector-specific human rights risk factors include:
- Investment in discriminatory AI systems: Financing companies that develop or deploy AI perpetuating bias in lending, hiring, or service provision. Investors may be linked to algorithmic discrimination, creating regulatory liability under emerging AI laws and significant reputational risks.
- Financing surveillance technologies: Supporting companies whose products enable authoritarian governments or corporations to violate privacy rights and suppress civil liberties. Investors may contribute to human rights abuses through export of dual-use technologies.
- Funding job-displacing automation: Investing in AI development without ensuring adequate Just Transition measures, worker retraining programs or social safety nets. Investors may be linked to labor rights violations and community economic disruption.
- Enabling disinformation platforms: Supporting AI-powered platforms that amplify misinformation, threatening democratic processes and social cohesion. This creates both human rights concerns and systemic risks to market stability that can affect portfolio value.
Helpful Resources
- Business and Human Rights Resource Centre, Navigating the surveillance technology ecosystem: A human rights due diligence guide for investors: This guide assists investors of all sizes, types and geographies to navigate the surveillance technology ecosystem and strengthen their human rights due diligence.
- UK Government, The Mitigating ‘Hidden’ AI Risks Toolkit: This toolkit is designed for individuals and teams responsible for implementing AI tools and services and those involved in AI governance.
- UN B-Tech, Human Rights Risks In Tech: Engaging And Assessing Human Rights Risks Arising From Technology Company Business Models: This tool aims to equip investors to (1) accurately assess technology companies’ policies and procedures for addressing business model-related human rights risks; and (2) encourage technology companies to adopt approaches to such human rights risks that align with their responsibility to respect human rights.
Corporate AI Principles
The majority of large tech companies developing generative AI have published AI Principles to guide the responsible development and deployment of their products. These frameworks typically include commitments to non-discrimination, transparency, accountability and safety.
Given the potential scale and influence of generative AI across public and private sectors and services, clear and actionable human rights commitments within these principles have the potential to significantly mitigate risks and drive positive impact. Applied effectively, such principles help companies embed human rights protections into their goods and services.
Company AI principles identified by UN B-Tech that explicitly reference human rights include:
- Google’s AI principles, which include a commitment to not design or deploy “technologies whose purpose contravenes widely accepted principles of international law and human rights.”
- Salesforce’s AI principles, which pledge to “safeguard human rights and protect the data we are entrusted with.”
- NEC’s AI principles, which state that their purpose is to “prevent and address human rights issues arising from AI utilization” and to “guide our employees to recognize respect for human rights as the highest priority in each and every stage of our business operations.”
Since the launch of the Collective Impact Coalition for Digital Inclusion (“Digital CIC”) by the World Benchmarking Alliance, 19 of the 200 companies evaluated in the 2023 Digital Inclusion Benchmark have announced their inaugural AI principles. In the 2023 Progress Report, the Digital CIC found that 26% of the 200 companies in scope of the benchmark had adopted ethical AI principles. Three companies – Deutsche Telekom, Microsoft and Telefónica – achieved the highest possible score in the Benchmark. Their leading practices include disclosure of Human Rights Impact Assessments (HRIAs) conducted on their development and use of AI tools. The findings indicate some strong practices but also widespread gaps in companies’ transparency and protection of rights in their AI principles.
Technology Hardware Manufacturers
Technology hardware manufacturing carries serious human rights risks at multiple points in the supply chain. Migrant workers in manufacturing hubs are more vulnerable to exploitation, including forced labour and unsafe working conditions. The sector’s reliance on high-risk raw materials also links manufacturers to impacts on Indigenous communities, exploitation of women and children and environmental degradation. Weak contractual oversight in globalized supply chains can further compound these risks where human rights controls are inadequate.
Technology hardware manufacturing sector-specific human rights risk factors include:
- Forced labour: is a significant risk in hardware manufacturing, occurring both in mineral extraction and component manufacturing, including debt bondage of migrant workers in electronics supply chains and forced labour in cobalt and lithium mining – minerals essential for renewable energy infrastructure that supports digital technology systems.
- Occupational health and safety: risks arise when workers are exposed to toxic chemicals in semiconductor production, are provided with inadequate personal protective equipment (PPE), or suffer long-term health impacts from handling hazardous materials without proper safeguards.
- Indigenous land rights violations: occur when mining operations displace Indigenous communities without obtaining Free, Prior and Informed Consent, destroying traditional livelihoods and damaging cultural sites that are central to Indigenous identity.
- Gender-based discrimination and harassment: can be common in manufacturing, including severe human rights abuses such as pregnancy testing for female workers, unfair pay structures and workplace sexual harassment – which can all be made harder to prevent by complex and opaque supply chain practices.
- Freedom of Association and collective bargaining: is often restricted in electronics manufacturing, where fragmented contract labour systems leave temporary or agency workers unable to unionize or join workplace associations, undermining their ability to advocate for fair working conditions.
Key Sectors Deploying Emerging Technologies
While developers bear primary responsibility for human rights impacts from system design, deployers (companies using AI systems) must ensure proper implementation and monitoring of systems to avoid and mitigate human rights impacts. The complexity and severity of human rights risks arising from the use of technology will depend on the company’s use cases, the location and number of its users, and the nature of its goods and services. Enterprises should conduct human rights due diligence (HRDD) that considers user behaviour and use cases across relevant sectors. State users and private-sector users each present distinct risk profiles and due-diligence expectations, including heightened risks where state use may affect civic freedoms or vulnerable groups.
Governments
Governments worldwide are rapidly integrating digital technologies, including artificial intelligence, across critical public sectors such as healthcare, education, social welfare, and law enforcement. While these tools promise greater efficiency and accessibility, their deployment often lacks adequate safeguards and human rights due diligence (HRDD). According to a 2025 report from the UN Working Group on Business and Human Rights, states are increasingly using AI systems without proper oversight, resulting in negative impacts across key sectors.
Government-specific human rights risk factors include the following:
- Surveillance and privacy: Governments risk violating the right to privacy when using technology for phone-tapping and other state surveillance systems.
- Freedom of expression: The right to freedom of expression can be violated when governments use technology to censor content and suppress dissent, including through algorithmic repression on social media, internet blackouts and other censorship technologies.
- Conflict-affected areas: In regions impacted by conflict (CAHRAs), governments may pose significant human rights risks through the use of technology to enable repression, enforce mass surveillance, and impose internet shutdowns that silence dissent and restrict access to information.
- Law enforcement & counter-terrorism: Governments risk serious human rights violations through the deployment of spyware, excessive surveillance technologies, and predictive policing systems that reinforce systemic discrimination, racial profiling and harmful stereotypes.
- Algorithmic bias: Algorithmic bias in government systems, amplified by flawed human oversight and weak institutional accountability, can lead to discriminatory outcomes that disproportionately impact marginalized communities.
Healthcare
With the growing use of digital platforms to deliver essential services, the healthcare sector must recognize and address additional human rights risks, including safeguarding patient privacy, ensuring equitable access to digital services and preventing algorithmic bias in medical decision-making tools.
Healthcare sector-specific human rights risk factors include the following:
- Human autonomy: Digital health systems may pose risks to patient autonomy, particularly where informed digital consent mechanisms are unclear or inconsistently applied.
- Privacy: The use of health apps and digital platforms increases exposure to privacy violations, including data breaches and misuse of sensitive health information, with implications for regulatory compliance and reputational harm.
- Discrimination & inequality: Algorithmic decision-making in healthcare risks reinforcing existing health disparities, raising concerns around digital health equity and potential discriminatory outcomes.
- Access to healthcare: Limited connectivity and digital infrastructure contribute to a digital access divide, with broadband inequality and telehealth exclusion affecting equitable access to health services.
The World Health Organization (WHO) released guidance in 2024 that addresses the risks and benefits of AI in the healthcare sector and advises organizations developing, deploying or using AI in a healthcare setting to introduce mandatory post-release auditing and impact assessments, including for data protection and human rights.
Education
The education sector is being profoundly impacted by emerging technologies such as artificial intelligence, posing both risks and benefits for the accessibility and quality of education. The European Commission has classified AI systems intended to assess students, or that may impact children’s personalized education or cognitive and emotional development, as high-risk and subject to compliance obligations.
Education sector-specific human rights risk factors include:
- Equitable access to education: As of 2024, nearly one-third of the world’s population lacks internet access, deepening the digital divide and increasing risks of inequality where technology is used in education. Vulnerable groups, including girls, people with disabilities, rural populations and marginalized communities are especially impacted.
- Privacy: Continuously collecting, aggregating and analyzing data is at the core of machine learning and AI, which can pose serious data privacy risks for users, particularly children. Additionally, while AI-driven tools that infer a learner’s emotional state can help a teacher manage their classroom, they also create privacy concerns, especially when the analysis takes place in proprietary systems.
- Discrimination and inequality: AI-driven tools can create learner profiles to predict academic performance and identify students for early interventions; however, this approach can lead to discrimination against underrepresented populations, as AI often draws inferences from features including gender, ethnic or cultural background and socio-economic status. Using AI in this way can introduce or reinforce biases that widen existing equity gaps.
However, emerging technologies can also be used to increase access to education for those with internet access and smart devices, personalize learning and assist educators with management and organization. Educational product and service providers should consider evaluating the specific human rights risks they may pose through their use of emerging technologies. The University of Waterloo in Canada has created a free AI Human Rights Impact Assessment tool designed for educators wishing to assess an AI application for use in educational settings, which may also be helpful to developers and deployers seeking to assess their risks.
FinTech
As demand for digital financial services grows, new human rights risks are emerging in the FinTech sector, including weak data protections, algorithmic discrimination in credit and insurance decisions and predatory digital lending practices that disproportionately impact vulnerable users.
FinTech sector-specific human rights risk factors include the following:
- Data collection: FinTech platforms increasingly rely on intrusive financial profiling and consumer data surveillance, often sharing sensitive information with financial data brokers without user awareness or control. AI may also lead to incremental disclosure of protected data that could breach agreed usage terms, such as algorithms reading facial features during video calls.
- Discrimination: Algorithmic bias in lending can lead to credit score discrimination where exclusionary fintech models deny access to financial services for marginalized groups based on biased or flawed data.
- Privacy: Many FinTech apps and platforms risk engaging in data-sharing without consent. Weak encryption in payment apps exposes users to identity theft, fraud and unauthorized access to financial records.
- Illicit activities: Without robust oversight, FinTech systems risk being exploited for money laundering or used to bypass regulations, creating terrorist financing loopholes that threaten global security.
The FinTech industry can also provide opportunities to address the SDGs through digital financial inclusion, for instance through increased access to mobile banking or microcredit. In Kenya, the spread of mobile money lifted 1 million households out of extreme poverty from 2008 to 2014, the equivalent of 2% of the population.
Helpful resources
- UN Working Group on Business and Human Rights, Artificial Intelligence Procurement and Deployment: ensuring alignment with the Guiding Principles on Business and Human Rights: This report provides clear guidance on implementing the UNGPs and respecting human rights in the procurement and deployment of AI systems.
- UN B-Tech (Business and Human Rights in Technology), Identifying and Assessing Human Rights Risks related to End Use: This tool provides guidance for enterprises working to embed respect for human rights in the business of technology. It includes guidance for enterprises to conduct HRDD for related end users and end-use scenarios.
Due Diligence Considerations
This section outlines practical steps for human rights due diligence in technology to assist technology companies and companies using technology with identifying, preventing, and addressing human rights risks in their operations, products, and value chains. These steps are aligned with the UN Guiding Principles on Business and Human Rights (UNGPs) and tailored to the technology sector’s specific context, including risks linked to product misuse, algorithmic bias, surveillance and labour rights in hardware supply chains. Further information on the UNGPs is provided in the ‘Key Human Rights Due Diligence Frameworks’ section below.
Each step is detailed below, with guidance tailored to the technology sector’s operating realities and risk profile. In the identification of risks, impacts, and appropriate actions (including remedy), meaningful stakeholder engagement is a core element of human rights due diligence.

What do we mean by stakeholder engagement in the tech and human rights context?
Meaningful stakeholder engagement is a core element of human rights due diligence. In the tech sector, it refers to ongoing, two-way dialogue with individuals or groups – especially those most severely or potentially impacted by a company’s operations, products, or services. To ensure stakeholder engagement is meaningful and effective, it should be timely, accessible, appropriate and safe for stakeholders, and reflect the severity of risks or impacts. It may involve consultations, hearings, or other participatory processes, and in some cases, may also be a right in itself. Prioritising the most affected stakeholders helps ensure that engagement is focused, inclusive, and effective.
Why does human rights due diligence matter for technology and human rights?
Effective human rights due diligence is essential to help prevent and mitigate harms from technology and ensure company operations and supply chains align with international standards. As digital technologies increasingly shape many aspects of society, businesses and governments face growing responsibilities to ensure these tools do not negatively impact human rights. Currently, many companies are not conducting sufficient human rights due diligence in the design and deployment of technology, and particularly of AI systems.
Additionally, digital tools can assist companies in completing human rights due diligence. For example, a company can provide grievance mechanisms through the use of digital tools such as online grievance portals.
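Digital grievance tools vary widely in design. As a purely illustrative sketch (not a recommended architecture), the Python snippet below shows how a minimal anonymous grievance intake record might be structured; the field names, categories and defaults are assumptions for the example.

```python
# Minimal sketch of an anonymous digital grievance intake record.
# Field names and categories are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import uuid

@dataclass
class Grievance:
    category: str                   # e.g. "privacy", "working_conditions" (assumed categories)
    description: str                # free-text account from the reporter
    anonymous: bool = True          # default to anonymity to reduce retaliation risk
    contact: str | None = None      # stored only if the reporter opts in
    reference: str = field(default_factory=lambda: str(uuid.uuid4())[:8])
    received_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def submit(grievance: Grievance, register: list) -> str:
    """Log the grievance and return a reference code the reporter can use to follow up."""
    register.append(grievance)
    return grievance.reference

register: list[Grievance] = []
ref = submit(Grievance(category="privacy",
                       description="Monitoring software records personal messages."),
             register)
print(f"Grievance received; follow-up reference: {ref}")
```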
See further information on this above in the Overview, Contextual Risks and Industry Specific Risk sections.
What should I do about human rights risks from technology?
Understanding your legal and compliance responsibilities
Companies should evaluate their legal and compliance responsibilities which will vary depending on where they are operating, what kind of technology they are using and what they are using it for. For example, companies operating in the EU must conduct a Fundamental Rights Impact Assessment (FRIA) prior to deploying a high-risk AI system according to Article 27 of the AI Act. In the case of Colorado’s SB24-205 law, deployers of a high-risk AI system must follow key due diligence steps starting in February 2026, including implementing a risk management policy and program, conducting a risk assessment and disclosing the discovery of any algorithmic discrimination to the Attorney General within 90 days of discovery. For more information, see Legal Instruments and Initiatives.
Fundamental due diligence – companies beginning due diligence journey
All businesses should design, plan and implement due diligence in accordance with the UNGPs. Fundamental due diligence is relevant for companies starting to consider their responsibilities, taking first steps, integrating due diligence into existing processes and reviewing activities to align with good practice.
Intermediate due diligence – companies with some systems, looking to deepen alignment
Once companies have established foundational human rights commitments and processes, they should systematically embed due diligence across operations, build internal capacity and accountability mechanisms, and expand due diligence beyond initial implementation to address more complex value chain risks. This includes all companies operating in the technology sector.
Advanced due diligence – companies ready to embed human rights across technology governance
Advanced due diligence is aimed at companies that already have significant due diligence experience and are interested specifically in how technology is transforming the HRDD landscape. This will typically include companies active in the technology sector, such as software developers or service providers to governments.
When should a company conduct human rights due diligence of technology goods and services?
Due diligence should be integrated throughout the lifecycle of technology, taking place early and often, from the ideation phase through product design, development, deployment and decommissioning. This includes Minimum Viable Products. Due diligence is not a one-off exercise and should be ongoing, and companies should demonstrate the ability to learn iteratively through their due diligence activities.

Key Human Rights Due Diligence Frameworks
Several human rights frameworks describe the due diligence steps that businesses should ideally implement to address human rights issues. The primary framework is the UN Guiding Principles on Business and Human Rights (UNGPs). Launched in 2011, the UNGPs offer guidance on how to implement the United Nations “Protect, Respect and Remedy” Framework, which establishes the respective responsibilities of Governments and businesses — and where they intersect.
The UNGPs set out how companies, in meeting their responsibility to respect human rights, should put in place due diligence and other related policies and processes, which include:
- A publicly available policy setting out the company’s commitment to respect human rights;
- Assessment of any actual or potential adverse human rights impacts with which the company may be involved across its entire value chain;
- Integration of the findings from their impact assessments into relevant internal functions/processes — and the taking of effective action to manage the same;
- Tracking of the effectiveness of the company’s management actions;
- Reporting on how the company is addressing its actual or potential adverse impacts; and
- Remediation of adverse impacts that the company has caused or contributed to.
The steps outlined below follow the UNGPs framework and can be considered a process which a business looking to start implementing human rights due diligence processes can follow.
Additionally, the OECD Guidelines on Multinational Enterprises define the elements of responsible business conduct, including human and labour rights.
Another important reference document is the ILO Tripartite Declaration of Principles concerning Multinational Enterprises and Social Policy (MNE Declaration), which contains the most detailed guidance on due diligence as it pertains to labour rights. These instruments, articulating principles of responsible business conduct, draw on international standards enjoying widespread consensus.
Key Resources
Companies can seek specific guidance on this and other issues relating to international labour standards from the ILO Helpdesk for Business. The ILO Helpdesk assists company managers and workers who want to align their policies and practices with principles of international labour standards and build good industrial relations. It has a specific section on forced labour.
Additionally, the SME Compass offers guidance on the overall human rights due diligence process by taking businesses through five key due diligence phases. The SME Compass has been developed in particular to address the needs of SMEs but is freely available and can be used by other companies as well. The tool, available in English and German, is a joint project by the German Government’s Helpdesk on Business & Human Rights and the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH.
The B-Tech project, a UN initiative, offers guidance to support companies in implementing the UNGPs in the technology sector. The B-Tech project includes guidance and communities of practice that aim to mainstream human rights due diligence into the design and development of AI tools and products. For example, in 2023 B-Tech published “Advancing Responsible Development and Deployment of Generative AI,” a paper that provides practical recommendations for how lawmakers, businesses and civil society can leverage the UNGPs to foster practices capable of addressing human rights risks and impacts from AI.
1. Develop a Policy Commitment on Technology and Human Rights

UNGP Requirements
As per the UNGPs, a human rights policy should be:
- “Approved at the most senior level” of the company;
- “Informed by relevant internal and/or external expertise”;
- Specific about company’s “human rights expectations of personnel, business partners and other parties directly linked to its operations, products or services”;
- “Publicly available and communicated internally and externally to all personnel, business partners and other relevant parties”; and
- “Reflected in operational policies and procedures necessary to embed it throughout the business”.
Businesses must establish a clear and public policy commitment to respecting human rights in all technology-related activities. This commitment should be embedded across governance structures and operational processes, serving as the foundation for responsible innovation and ethical digital transformation. Companies should ensure that human rights are embedded into AI policies, integrating human rights principles in procurement, deployment and decision making.
SME considerations
SMEs may face resourcing challenges. They should embed a shared commitment to human rights, while larger tech providers should support SMEs in identifying, mitigating and remediating impacts. See this guidance from UNDP for SMEs on human rights due diligence (although not tech specific, it contains many tailored recommendations for smaller businesses).
Fundamental policy commitment
At a minimum, companies should update their existing policies, such as supplier codes of conduct and data protection policies, to ensure they are consistent with technological advancements.
If you do not already have one, develop a standalone human rights policy that reflects your commitment to identifying and mitigating human rights impacts across operations and supply chains from technologies.
Conduct a targeted risk assessment to ensure actual and potential human rights issues related to your use of technology are included in your human rights policies. For instance, if you identify child labour risks in procurement of hardware, include provisions on responsible sourcing and conflict minerals.
Intermediate policy commitment
Review your existing Human Rights Policy and ensure it reflects actual and potential impacts from the use and development of current and emerging technologies. Conduct a gap assessment comparing your commitments with current practices to identify your priority value chain areas.
Based on this assessment, select one or two high-impact policies – such as procurement or supplier management – and update them to address identified risks. For example, if significant AI investments are planned, ensure procurement policies include criteria for ethical sourcing and responsible technology use.
Advanced policy commitment
Actively integrate human rights considerations into all relevant technology related policies and processes. This means conducting a thorough review and regular updates of existing frameworks, to ensure they reflect evolving ethical standards and societal expectations. Businesses should take a holistic view of their operational structure – particularly those operating under a parent-child or decentralised model. A cross-functional review is essential to embed human rights principles across codified technology-related processes and decision-making pathways. This includes areas such as technology development, procurement, product development and data governance.
For example, Synamedia, a video solutions company, published their Human Rights and Technology Policy Statement which includes direct references to human rights due diligence and accountability.
Corporate AI Governance
Responsible AI governance:
- A lack of existing, agreed standards is leading to fragmented approaches to company AI governance, increasing the risk of human rights impacts.
- Due to the rapid pace of AI development, corporate governance frameworks need to be flexible and reviewed regularly.
- The Bank of England recommends that AI practitioners establish governance frameworks at both the enterprise level, to set overarching standards, and at the execution level, where those standards are applied to specific use cases.
Helpful resources
- United Nations, Human Rights Due Diligence: An Interpretive Guide: This guide provides an overview of the UNGPs and due diligence stages, including aspects such as leverage and stakeholder engagement.
- United Nations, Human Rights Due Diligence for Digital Technology Use: This guidance provides a practical introduction to HRDD to assist in the design, development, and implementation of human rights due diligence for digital technology use.
- United Nations, The Feasibility of Mandating Downstream Human Rights Due Diligence: Reflections from Technology Company Policies
- Herbert Smith Freehills Kramer, Business Human Rights in the Tech Sector
- Deloitte, Human Rights Due Diligence in the Modern Era
- UNDP, Human Rights Due Diligence Handbook for Small and Medium-Sized Enterprises: This guidance provides clarity on the various requirements of human rights due diligence, with a focus on SMEs.
2. Assess Actual and Potential Human Rights Impacts from Technology

UNGP Requirements
The UNGPs note that impact assessments:
- Will vary in complexity depending on “the size of the business enterprise, the risk of severe human rights impacts, and the nature and context of its operations”;
- Should cover impacts that the company may “cause or contribute to through its own activities, or which may be directly linked to its operations, products or services by its business relationships”;
- Should involve “meaningful consultation with potentially affected groups and other relevant stakeholders” in addition to other sources of information such as audits; and
- Should be ongoing.
Impact assessments should look at both actual and potential impacts, i.e. impacts that have already manifested or could manifest. This contrasts with a risk assessment, which looks only at potential impacts and may not satisfy all of the above criteria.
Businesses should assess both potential and actual human rights impacts, factoring in the severity and likelihood of risks. For technology products and services, assessments should cover the full lifecycle, from production to end-use, to address upstream and downstream human rights implications.
Understanding your Risk Universe
A risk universe is the full spectrum of potential human rights risks a company could be connected to, directly or indirectly, through its operations, products, services or business relationships. A company should conduct a risk universe mapping exercise to identify all relevant risk areas before any prioritization or assessment takes place. In the context of technology, this means broadening the scope of your human rights impact assessment (HRIA) to include technology-related risks. When mapping their risk universe, a company should apply a gender lens and consider how vulnerable groups may be more significantly impacted.
To identify its risk universe, a company can take steps including: data collection, evaluating the context in which it uses technology, conducting meaningful stakeholder consultation (including with business partners), examining any recorded actual impacts and identifying its relationship to the risk. To understand its connection to a human rights impact, a company should ask whether it is directly causing or contributing to the impact, whether its business partners are involved, or whether the impact is external and beyond the control of both the company and its partners.
Enterprises should map risks to their value chains, either early in their risk universe assessment or in their value chain mapping. While companies should start by considering their direct impacts, they should also consider downstream and end-use impacts. For example, the disposal of technology can affect human rights including privacy, health and safety, protection from hazardous working conditions and the right to a clean, healthy and sustainable environment.
1.1 Companies developing or deploying technology at scale
Companies that deploy and develop technology at scale may be contributing to or causing human rights impacts related to their direct actions and operations.
For example, the risk universe of a phone manufacturer may include the conditions of workers in its factories and pollution of local communities near operations and shipping facilities.
1.2 Companies using technology
Companies using technology in their operations may be contributing to or causing human rights impacts through their usage of technology.
For example, the risk universe of a clothing company that sells online and markets through social media may include risks like data privacy for customers and harmful marketing targeting vulnerable demographics including adolescents.
Assessing Risks – Potential Impacts
A potential impact assessment seeks to identify the extent of the adverse impacts that a company may have on rightsholders. A company should assess and prioritize its potential human rights impacts based on the severity and likelihood of each. Potential impacts should be addressed through appropriate preventive measures.
2.1. Companies developing or deploying technology at scale
Businesses that develop or deploy technology at scale, such as AI systems, cloud infrastructure, hardware manufacturing, or biometric tools, should proactively assess how their products and operations may detrimentally impact human rights. A company should evaluate risks related to privacy, surveillance, discrimination and misuse of technology by end-users.
For example, a company developing facial recognition software should assess the risk of racial bias in its algorithms and consider restricting sales to clients with strong human rights protections.
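As a hedged illustration of this kind of assessment, and assuming per-group evaluation results are already available, the sketch below compares false match rates across demographic groups and flags disproportionate error rates for review; the group labels, figures and the disparity threshold are hypothetical.

```python
# Illustrative check of error-rate disparity across demographic groups.
# The counts, group labels and the 1.25x disparity threshold are assumptions for the example.
test_results = {
    # group: (false_matches, total_comparisons) from a held-out evaluation set
    "group_a": (12, 10_000),
    "group_b": (31, 10_000),
    "group_c": (15, 10_000),
}

rates = {group: fm / total for group, (fm, total) in test_results.items()}
baseline = min(rates.values())

for group, rate in rates.items():
    ratio = rate / baseline if baseline else float("inf")
    flag = "REVIEW" if ratio > 1.25 else "ok"
    print(f"{group}: false match rate {rate:.4%} ({ratio:.2f}x lowest) -> {flag}")
```

A disparity flag of this kind would prompt further investigation and stakeholder engagement rather than serving as a pass/fail test on its own.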
2.2. Companies using technology
Enterprises using technology, whether for internal operations, customer engagement, or supply chain management, should assess how their technology and data practices may impact human rights. Related risks include surveillance, data privacy and digital exclusion.
For example, a company using employee tracking software should assess whether constant monitoring infringes on a worker’s privacy and dignity and adjust their practices accordingly.
Assessing Risks – Actual Impacts
An actual impact assessment analyses precisely what impacts a company is having on rightsholders. For example, a digital technology company may be violating its users’ right to privacy. Unlike potential impacts, actual impacts have already arisen and should be ended or managed through corrective or remedial measures.
A Human Rights Impact Assessment (HRIA) is an instrument for examining policies, legislation, projects and programs to identify and measure their potential and actual human rights impacts. The primary goal of an HRIA is to prevent harmful impacts and maximize positive impacts for a given project, program or policy. The Danish Institute for Human Rights has created a human rights impact assessment guidance for digital activities designed for businesses and other users of digital technologies.
3.1. Companies developing or deploying technology at scale
Companies that develop or deploy technology at scale should assess the actual human rights impacts their products and operations are having on individuals and communities. Key assessment actions include conducting HRIAs to identify and measure impacts and engage impacted stakeholders to understand their lived experiences.
For example, a social media company whose algorithms amplify discriminatory content should revise its software and work with impacted parties to determine the extent of impacts and remediate harms.
3.2. Companies using technology
Enterprises using technology should regularly assess how their digital tools are affecting rights holders. Key assessment actions include monitoring how technology is used in practice, identifying consequences and taking action to mitigate harm.
For example, a retail company using digital time-tracking for employees finds that the system is logging hours inaccurately, leading to underpayment. The company should assess the harm, correct the system and compensate impacted workers accordingly.
Prioritisation – Assessing Likelihood and Severity
In their potential and actual risk assessments, a company should prioritise assessing and addressing human rights risks based on i) the likelihood they will occur, and ii) the severity of the impact.
When prioritising where to start, a company should first assess risks within their own operations and use-cases, especially if that company operates in the tech sector. Then, a company can evaluate their upstream risks from suppliers and downstream risks from end-users.
To assess their risks, a company can use internal sources of data, external sources of data, or some combination of both.
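As a minimal sketch of the prioritisation step, the snippet below scores hypothetical risks on 1-5 likelihood and severity scales and ranks them; the example risks, the scores and the weighting that emphasises severity are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative prioritisation of human rights risks by likelihood and severity.
# Risk names and 1-5 scores are hypothetical; severity is weighted more heavily here,
# reflecting the emphasis on severity when prioritising (an assumption of this sketch).
risks = [
    {"risk": "Algorithmic bias in hiring tool", "likelihood": 4, "severity": 4},
    {"risk": "Forced labour in component supply chain", "likelihood": 2, "severity": 5},
    {"risk": "Excessive employee monitoring", "likelihood": 5, "severity": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * (r["severity"] ** 2)  # weight severity more heavily

for r in sorted(risks, key=lambda item: item["score"], reverse=True):
    print(f"{r['score']:>3}  {r['risk']}  (L={r['likelihood']}, S={r['severity']})")
```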
Positive Impacts of Technology in Assessing your Risks
While technology raises human rights risks, it also provides opportunities for companies to improve their HRDD assessments and more easily identify potential and actual human rights impacts throughout their value chains. If your company is conducting a double materiality assessment while preparing for legal compliance, for example with the EU Corporate Sustainability Reporting Directive (CSRD), then technological developments including AI provide opportunities for positive impact in strengthening your HRDD assessment process.
When used responsibly, digital qualitative tools can strengthen the rigour and responsiveness of Human Rights Impact Assessment (HRIA) processes. Companies can mobilize tools such as Self-Assessment Questionnaires (SAQs) and Natural Language Processing (NLP) programs to collect and analyse supplier data.
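As one hedged example of such analysis, the sketch below triages free-text SAQ answers with simple keyword matching, a lightweight stand-in for fuller NLP tooling; the risk themes, keywords and sample answers are invented for illustration, and flagged answers would still require human review.

```python
# Minimal keyword-based triage of supplier self-assessment (SAQ) free-text answers.
# A stand-in for fuller NLP analysis; themes, keywords and sample answers are illustrative.
import re

RISK_TERMS = {
    "forced labour": ["recruitment fee", "passport retention", "debt"],
    "working hours": ["overtime", "7 days", "double shift"],
    "privacy": ["cctv", "biometric", "monitoring"],
}

def flag_response(text: str) -> list[str]:
    """Return the risk themes whose keywords appear in a supplier's answer."""
    lowered = text.lower()
    return [theme for theme, terms in RISK_TERMS.items()
            if any(re.search(re.escape(term), lowered) for term in terms)]

answers = [
    "Workers occasionally pay a recruitment fee to local agents.",
    "All overtime is voluntary and paid at a premium rate.",
]
for answer in answers:
    print(flag_response(answer), "-", answer)
```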
Stakeholder Engagement in Impact Assessments
Conducting meaningful stakeholder engagement through methods including interviews with rights-holders, duty-bearers and relevant parties is an integral part of assessing actual and potential human rights impacts. Through stakeholder engagement, a company should aim to gather as many viewpoints as possible and piece them together, consulting internal and external parties to consolidate a view of the company’s potential and actual risks. Internal engagement is important and companies should consult with their internal technical experts and programmers to understand the impact of their digital services. A company may need to hire an independent third-party expert to support the risk assessment process. For example, a technology company may need to hire an expert in AI and human rights to examine their operational impact.
The design of stakeholder engagement should reflect heightened risks for vulnerable groups affected by automated decisions, thereby aligning with the fairness and oversight expectations of the OECD AI Principles.
Companies should use the information gleaned from stakeholder engagement to drive purposeful assessments. For example, in 2023 Microsoft commissioned an independent human rights impact assessment examining its licensing of cloud services, including AI, to United States government agencies following concerns raised by shareholders about human rights abuses, particularly against individuals who identify as Black, Indigenous, and People of Colour (BIPOC).
Responsible Governance, Strategy, and Product Design
A thorough risk assessment can reveal strategic, governance, and product design gaps that may require fundamental reconfigurations of how a business is structured and managed. Businesses should use the insights from risk assessments to rethink oversight, policy, and operational responsibilities across teams and embed human rights into their operations and structures.
Collective and Collaborative Action
Through their risk assessment processes, companies should strive to identify peers and other value chain actors with whom to collaborate, such as suppliers, or through industry associations. Companies may also consider a collaborative HRIA: a joint risk assessment process undertaken by project-affected people and a company, with potential involvement from the host government and other stakeholders. For example, Nestlé had a ten-year HRIA partnership with the Danish Institute for Human Rights.
Questions for reflection during impact assessments
- How can we undertake meaningful engagement with stakeholders, while recognizing the engagement fatigue that arises from duplicative due diligence by companies operating independently?
- How are employee rights, notably including privacy rights, impacted by digital monitoring practices?
- How are hiring practices being impacted by technology (including AI)? Are there potential or actual human rights risks in hiring practices, including but not limited to the right to privacy and the right to non-discrimination?
Other forms of impact assessments related to digital activities
- Data protection impact assessment
- Ethical impact assessment
- Technology impact assessment
- Algorithmic impact assessment
- ‘Futures Thinking’ Methodology
- Safety impact assessment
Helpful Resources
- The Danish Institute for Human Rights, Key Principles for Human Rights Impact Assessment of Digital Business Activities (2023)
- Ontario Human Rights Commission, Human Rights AI Impact Assessment, 2024
- The Alan Turing Institute, A risk-based approach to assessing and mitigating adverse impacts developed for the Council of Europe’s Framework Convention
- The Digital Rights Check, A web-based tool that is designed to help staff working on development projects to assess the potential human rights impacts of their digital projects or project components.
3. Integrate and Take Action to Address Impacts

UNGP Requirements
As per the UNGPs, effective integration requires that:
- “Responsibility for addressing [human rights] impacts is assigned to the appropriate level and function within the business enterprise” (e.g. senior leadership, executive and board level); and
- “Internal decision-making, budget allocations and oversight processes enable effective responses to such impacts”.
Taking action to address and correct human rights impacts is the most important step of any due diligence cycle. The actions a company should take will depend on the outcomes of its risk and impact assessments and stakeholder engagement exercises. Given the varied uses of technologies, companies should tailor their approaches to the specific risks, impacts, contexts and stakeholders involved. The UNGPs require that companies take appropriate action to address their impacts, with actions varying according to a company’s level of causation, the extent of its leverage and the likelihood and severity of the impact.
Impacts in your development and deployment of technology
Technology companies face unique and extensive human rights risks due to the nature and widespread availability of their products and services. Across their hardware and software development and deployment, technology companies should take action to ensure they are respecting human rights. Additionally, companies should look inward at human rights impacts on their employees due to technology, especially linked to privacy, security and discrimination risks.
- Strategy: Technology driven business models may be reliant on advertising revenue, which could heighten human rights risks by incentivizing practices that prioritize engagement or profit over user wellbeing. Companies should critically assess their business models, incentive systems and KPIs to ensure they are consistent with respect for human rights and do not put stakeholders at risk.
- Algorithms: A Council of Europe study highlighted that algorithms and automated data processing techniques can adversely impact a range of human rights, including privacy and data protection, fair trial and due process, freedom of expression, freedom of assembly and association, the prohibition of discrimination and free elections. Technology service providers should take action to monitor the adverse social and human rights impacts that their algorithms may be having. Actions may include increasing human oversight in automated processing (see the sketch after this list), providing users with reporting mechanisms for harmful content and rewriting code to reduce discriminatory and biased outcomes.
- Privacy and Security: Technology companies may be at risk of impacting the rights to privacy and security, particularly as software services increasingly manage sensitive personal and financial information for their users. Companies should take action on privacy and security risks by improving their data protection measures, providing users with accessible information about how their data will be processed, stored, used and shared, and managing their business partnerships to maintain data privacy in line with legal requirements. For example, Microsoft provides a privacy statement accessible online that is easy to read and details what personal data is collected, how it is used and how it is shared.
- Data centres: According to a 2025 Verisk Maplecroft analysis, the majority of the world’s top data centre hubs face an array of heat-related risks. Data centres are vital for many technology companies’ operations; however, they can also detrimentally impact water and energy access in many areas. Technology companies can take action by ‘greening’ their data centres to improve efficiency and reduce operational expenditure. Bahnhof, a Swedish Internet service provider, pumps excess heat from its data centres to nearby buildings, thereby reducing energy waste. Technology companies can also take action to mitigate environmental impacts from their data centres, such as Google’s water project that replenished 64% of its 2024 freshwater consumption.
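Illustrating the ‘increasing human oversight’ action noted under Algorithms above, the following is a minimal sketch in which automated decisions are routed to a human reviewer when model confidence is low or the potential impact on the affected person is high; the thresholds, fields and sample decisions are assumptions for the example.

```python
# Hedged sketch: route automated decisions to human review when model confidence is low
# or the decision's potential impact on the affected person is high. Values are assumed.
from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str        # e.g. "approve", "deny", "remove_content"
    confidence: float   # model confidence between 0 and 1
    impact: str         # "low", "medium" or "high" impact on the affected person

CONFIDENCE_FLOOR = 0.85      # below this, a human must review (assumed threshold)
HIGH_IMPACT = {"high"}       # impact levels that always require human review

def needs_human_review(decision: AutomatedDecision) -> bool:
    return decision.confidence < CONFIDENCE_FLOOR or decision.impact in HIGH_IMPACT

queue = [
    AutomatedDecision("u-101", "deny", confidence=0.92, impact="high"),
    AutomatedDecision("u-102", "approve", confidence=0.71, impact="low"),
    AutomatedDecision("u-103", "approve", confidence=0.97, impact="low"),
]
for decision in queue:
    route = "human review" if needs_human_review(decision) else "automated"
    print(f"{decision.subject_id}: {decision.outcome} -> {route}")
```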
Impacts in your value chain
A) If you have a direct relationship (such as a direct supplier)
Supply chain contracts:
- Supplier contracts represent a powerful form of leverage available to buyers seeking to improve human rights performance among their suppliers. As documented in Harvard Kennedy School research, contractual relationships provide a formal mechanism through which companies can set clear expectations, establish accountability measures and create incentives for improved human rights outcomes.
- It is important to note that contracts should be proportionate and feasible, particularly when working with SMEs. Companies have a responsibility not to impose unreasonable or prohibitively expensive contractual obligations that could harm smaller business partners’ viability.
Minerals and materials
- Companies that have identified human rights risks in their minerals or metals supply chains should implement due diligence aligned with OECD Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High-Risk Areas
- Consider conducting or commissioning third party independent audits of mining operations and smelters or processors
- Explore participation in industry schemes (e.g. Responsible Minerals Initiative)
- Engage directly with suppliers on relevant issues such as working conditions, health and safety, or outsourced labour risks
Software and web service providers
- Buyers can require transparency on data handling practices, algorithmic decision making and content moderation policies
- Include provisions addressing potential discriminatory impacts, privacy violations and freedom of expression concerns
- Mandate regular human rights impact assessments (HRIA)
- Specifically for AI developers, buyers should use procurement requirements to mandate responsible AI practices including bias testing, explainability and human oversight mechanisms (a minimal screening sketch follows this list)
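A minimal sketch of such a procurement screen follows, assuming a buyer tracks whether a vendor has supplied each required responsible-AI artefact; the artefact names and the sample submission are invented for the example.

```python
# Hedged sketch: screen an AI vendor's procurement submission for responsible AI artefacts
# a buyer might require. The artefact names and sample submission are assumptions.
REQUIRED_ARTEFACTS = [
    "bias_testing_report",
    "explainability_statement",
    "human_oversight_procedure",
    "human_rights_impact_assessment",
]

vendor_submission = {
    "bias_testing_report": True,
    "explainability_statement": True,
    "human_oversight_procedure": False,
    "human_rights_impact_assessment": False,
}

missing = [a for a in REQUIRED_ARTEFACTS if not vendor_submission.get(a, False)]
if missing:
    print("Hold procurement pending:", ", ".join(missing))
else:
    print("All required responsible AI artefacts provided.")
```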
B) If you don’t have a direct relationship (such as a tier 2 supplier or an end-user)
Even without a direct commercial relationship, enterprises have multiple avenues to influence human rights performance throughout the value chain. Strategies include:
- Engaging intermediaries – i.e. working with tier 1 suppliers to cascade standards down the supply chain
- AI and technology-specific procurement requirements – use procurement processes to require responsible practices
- End-user engagement – implement “know your customer” procedures for high-risk applications, provide clear guidance on responsible use in terms of service, establish reporting mechanisms for misuse of products
- Multi-stakeholder initiatives and industry collaboration
- Public advocacy and transparency
Responsible use of leverage
Where companies lack direct control over an impact, they should use their leverage to encourage third parties, such as supply chain actors or customers, to uphold human rights standards. Companies can create leverage by proactively building influence, for example through contracts, incentives or collaboration, in order to influence other actors to address human rights risks or impacts.
Leverage should be used constructively and disengagement should be a last resort after attempts to influence positively have failed. Companies should balance their leverage with responsibility not to cause economic harm to vulnerable business partners. We also encourage businesses to maintain transparency on their own practices, which increases their own credibility and influence with other actors in the value chain.
Stakeholder Engagement – Taking Action
Stakeholder engagement is a crucial lens for companies to employ when taking action on human rights – it is context dependent and will vary in process depending on factors including the type of company and human rights impacted.
Meaningful stakeholder engagement must be relevant and timely. Additionally, a company needs to act on the information it has collected through stakeholder engagement, and consult and inform stakeholders about these actions. The findings and perspectives from stakeholder engagement should be effectively incorporated into business operations.
For example, a company may conduct a survey wherein employees share concerns about the use of a workplace monitoring software to track productivity which is perceived to violate privacy. This company could then conduct actions including a worker consultation on acceptable monitoring, updating the privacy policy to improve transparency on monitoring and over the longer term involving worker representatives in the development of policies and governance.
Helpful resources
- UN Guiding Principles on Business and Human Rights (Principles 19-22 on leverage)
- OECD Due Diligence Guidance for Responsible Business Conduct
- Sector-specific guidance (OECD Minerals Guidance, UN B-Tech Project resources on technology and human rights)
Companies developing new technologies
Companies developing new technologies can:
- Stress-test and, as necessary, improve the design of technologies in ways that demonstrably minimise the risks of severe human rights harms, rather than optimising only for revenue.
- Scrutinise plans for testing and expansion in new markets, with a focus on whether the business model exacerbates human rights risks in the local context.
- Engage in collective action with peers, professional associations, customers, civil society and government to develop and implement rights-respecting standards of business conduct and technological design.
- Explore opportunities to contribute to the development of laws and regulations aimed at increasing human rights protections.
- Develop terms of service that limit who can use your AI.
Companies using new technologies
Companies using new technologies can:
- Implement digital literacy initiatives to ensure workers have the skills to thrive and exercise their human rights – see example or here.
- Review performance incentives for top management and key functions to reward actions that prevent or mitigate human rights harms.
- Evaluate how technologies will be used in different contexts along their supply chains and consider how this may result in varying human rights impacts across contexts.
- Conduct due diligence of the supply chain aligned with international guidance.
Helpful Resources
- BSR, Sales Partners and Human Rights Due Diligence in the Technology Sector, 2022
- UN OHCHR B-Tech, Taking Action to Address Human Rights Related to End-Use, 2020
4. Track Performance on Technology and Human Rights

UNGP Requirements
As per the UNGPs, tracking should:
- “Be based on appropriate qualitative and quantitative indicators”; and
- “Draw on feedback from both internal and external sources, including affected stakeholders” (e.g. through grievance mechanisms).
What are the benefits of tracking performance?
Tracking performance is critical for effective human rights due diligence, as emphasised in the OECD Guidelines for Multinational Enterprises, which align with the UNGPs. Recent analyses suggest many large EU firms still lack robust tracking of the effectiveness of their human rights actions.
In the technology context, monitoring progress against KPIs on human rights helps companies understand the impacts of their digital products, services and procurement decisions on rightsholders.
This includes risks such as:
- Digital surveillance
- Algorithmic bias
- Online harm
Tracking performance enables companies to:
- Identify strengths, weaknesses and unintended consequences
- Highlight systemic issues requiring policy or process changes
- Share best practices across the enterprise
As the UNDP notes: “what gets measured gets managed.” Tracking is essential for improving outcomes and ensuring accountability.
What methods should a company use to track technology and human rights performance?
Companies can use a range of methods to track their performance on technology and human rights, including:
Internal tracking
SMART targets and KPIs
Companies should set SMART (Specific, Measurable, Achievable, Relevant, Time-bound) targets that reflect both the effort to reduce risks and the extent to which those risks affect rightsholders (likelihood and impact). Companies can do this by developing human rights KPIs that link directly to their technology usage, development and/or deployment. (Basic to intermediate due diligence)
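As a simple illustration of KPI tracking, the sketch below compares hypothetical technology-related human rights KPIs against their targets; the indicators, targets and current values are invented for the example.

```python
# Illustrative tracking of technology-related human rights KPIs against SMART targets.
# Indicator names, targets and current values are hypothetical.
kpis = [
    {"indicator": "High-risk AI systems with completed HRIA", "unit": "%", "target": 100, "current": 80},
    {"indicator": "Monitoring-related grievances resolved within 30 days", "unit": "%", "target": 90, "current": 72},
    {"indicator": "Hardware suppliers screened for forced labour indicators", "unit": "%", "target": 95, "current": 97},
]

for kpi in kpis:
    status = "on track" if kpi["current"] >= kpi["target"] else "action needed"
    print(f"{kpi['indicator']}: {kpi['current']}{kpi['unit']} vs target "
          f"{kpi['target']}{kpi['unit']} -> {status}")
```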
Worker engagement
Businesses should engage directly with workers to understand human rights impacts. Workers should be able to report violations safely and anonymously. Digital tools can support this, as discussed in the Remedy section. (Intermediate due diligence)
Internal audit
Companies should conduct regular internal reviews of technology use (e.g. monitoring tools, algorithmic decision-making) to assess compliance with human rights policies and risk mitigation. (Intermediate due diligence)
External tracking
Supplier Audits and SAQs
A company should aim to track technology and human rights impacts along its value chain. Therefore, conducting supplier audits and Self-Assessment Questionnaires (SAQs) is a crucial step. (Intermediate to advanced due diligence). Example questions include:
- Has the use of workplace monitoring technologies been changed to uphold employee privacy?
- Have identified biases in recruiting algorithms been addressed?
Human Rights Impact Assessments (HRIAs)
HRIAs can be used to assess the efficacy of actions to improve human rights outcomes and inform future risk identification. (Advanced)
Stakeholder Engagement
Enterprises should build ongoing, meaningful dialogue with stakeholders and ensure rightsholders are kept informed of progress and performance. To promote accurate performance tracking, companies should ensure that the mechanisms they use are as anonymous and safe as possible to prevent incorrect results or retribution against those raising human rights complaints. (Intermediate to advanced)
Governance and integration
Assigning responsibility
Companies should assign clear roles and responsibilities for collecting data and tracking performance, and ensure KPIs are well-defined. All internal stakeholders should be informed and trained on the meaning and relevance of information to the company’s human rights and technology policies and commitments. (Basic to Intermediate due diligence)
Safe and anonymous mechanisms
For companies developing or deploying technology, ensuring that performance tracking mechanisms are safe, anonymous and trusted is essential to uncovering real human rights risks — especially those that may be hidden due to fear of retaliation or reputational concerns. (Intermediate due diligence)
Feedback loops
Tracking is only meaningful if it leads to action. Feedback loops ensure that insights from performance tracking are translated into policy updates, training, product redesigns and systemic change. (Advanced due diligence)
5. Communicate Performance on Technology and Human Rights

UNGP Requirements
As per the UNGPs, regular communications of performance should:
- “Be of a form and frequency that reflect an enterprise’s human rights impacts and that are accessible to its intended audiences”;
- “Provide information that is sufficient to evaluate the adequacy of an enterprise’s response to the particular human rights impact involved”; and
- “Not pose risks to affected stakeholders, personnel or to legitimate requirements of commercial confidentiality”.
Transparent communication on your management of technology-related human rights risks builds trust with stakeholders, demonstrates accountability and supports continuous improvement. Clear reporting helps users, workers, investors and communities understand how their rights might be impacted and what your business is doing about it. It also enables meaningful feedback and dialogue, allowing you to identify blind spots and benchmark progress. Silence or vague statements increase reputational risk and can erode stakeholder confidence, while proactive communication – even about the challenges you face – shows commitment to responsible technology use.
Internal communications
Effective internal communication ensures everyone in your organisation understands their role in respecting human rights, while developing, procuring, or using technology. This includes (but is not limited to):
- Policy dissemination – Ensure all staff understand your commitments to technology and human rights, including responsible AI use, data protection, surveillance limits and supply chain standards.
- Training and capacity building – Equip product developers, procurement teams, HR and leadership with knowledge to recognise and address human rights risks in technology management.
- Cross-functional information sharing – Share relevant findings from human rights impact assessments with product, legal, procurement and senior leadership so insights can be integrated into decision making.
- Internal feedback mechanisms – Create channels for employees to raise concerns about the human rights impacts of technology, without fear of retaliation.
- Knowledge transfer –
- For technology companies, ensure non-technical staff understand how systems work and support them to recognise human rights implications.
- For corporate users of technology, ensure technical teams understand human rights implications.
Regular, transparent communication helps to build accountability and embed human rights due diligence across operations.
External communications
Through effective external communication, businesses can demonstrate accountability to rightsholders – including users, workers and communities – who may have their rights impacted by technology. Transparent and proactive disclosures enable stakeholders to engage meaningfully with your approach. Key principles of effective external communication:
- Ensure transparency and accountability for AI – Openly communicate the capabilities, limitations and human rights impact of AI systems. Ensure these systems are explainable and that your procurement and deployment processes are fair and non-discriminatory.
- Tailor to different stakeholders – Affected communities need accessible information about how technology impacts them, while investors need information on risk management and civil society may need technical detail to assess your approach.
- Use multiple formats – Examples include: stakeholder consultations, accessible summaries, online dialogues, community meetings or grievance mechanism responses. You can also report on human rights impacts from technology in your CSRD report, website, public sustainability report, or in an annual Communication on Progress, which reports on implementation of the Ten Principles of the UN Global Compact.
- Be substantive and specific – Explain assessment processes, as well as specific impacts, the actions you’re taking to prevent or mitigate them (whether directly or indirectly) and how you’re measuring effectiveness. Generic statements on “taking human rights seriously” are not sufficient – stakeholders need concrete information to evaluate your approach.
- Consider the impact of disclosures on rights – While maximising transparency is important, it should be balanced against protecting the right to privacy and avoiding disclosures that could help malicious actors circumvent safeguards (for example, detailed explanations of content moderation techniques). Commercial confidentiality should not be used as an excuse to evade the duty of transparency – aim to maximise transparency on your accountability for human rights issues. A good example of a disclosure is that provided by OpenAI about safety challenges with its GPT-4 model.
- Address the knowledge gap (for technology companies) – Technology companies understand their products and services far better than their customers, users, or communities. Use clear, accessible language to explain what your technology does, how rights have been considered and how impacts have been prevented or mitigated. This transparency is essential for informed consent and user trust.
- Be timely and responsive – Communicate proactively about salient risks, not just in annual reports. When concerns are raised by affected stakeholders, respond with updates on investigations and progress. Acknowledging and sharing challenges honestly builds credibility more than claims of perfection or denial.
External communication should create a dialogue which improves your human rights performance and empowers stakeholders to understand, challenge and inform your approach.
Stakeholder Engagement
Meaningful stakeholder engagement is a key component of any human rights due diligence process. The OECD Guidelines for Multinational Enterprises on Responsible Business Conduct describe stakeholder engagement as an “interactive process of engagement with relevant stakeholders, through, for example, meetings, hearings, or consultation proceedings.” Relevant stakeholders include any persons or groups, or their legitimate representatives, who have rights or interests that may be affected by adverse impacts associated with the enterprise’s operations, products, or services.
Helpful Resources
6. Remedy and Grievance Mechanisms

UNGP Requirements
As per the UNGPs, remedy and grievance mechanisms should include the following considerations:
- “Where business enterprises identify that they have caused or contributed to adverse impacts, they should provide for or cooperate in their remediation through legitimate processes”.
- “Operational-level grievance mechanisms for those potentially impacted by the business enterprise’s activities can be one effective means of enabling remediation when they meet certain core criteria.”
According to UN Guiding Principle 31, to ensure their effectiveness, grievance mechanisms should be:
- Legitimate: “enabling trust from the stakeholder groups for whose use they are intended, and being accountable for the fair conduct of grievance processes”
- Accessible: “being known to all stakeholder groups for whose use they are intended, and providing adequate assistance for those who may face particular barriers to access”
- Predictable: “providing a clear and known procedure with an indicative time frame for each stage, and clarity on the types of process and outcome available and means of monitoring implementation”
- Equitable: “seeking to ensure that aggrieved parties have reasonable access to sources of information, advice and expertise necessary to engage in a grievance process on fair, informed and respectful terms”
- Transparent: “keeping parties to a grievance informed about its progress, and providing sufficient information about the mechanism’s performance to build confidence in its effectiveness and meet any public interest at stake”
- Rights-compatible: “ensuring that outcomes and remedies accord with internationally recognized human rights”
- A source of continuous learning: “drawing on relevant measures to identify lessons for improving the mechanism and preventing future grievances and harms”
- Based on engagement and dialogue: “consulting the stakeholder groups for whose use they are intended on their design and performance, and focusing on dialogue as the means to address and resolve grievances”
What is your responsibility for providing remedy to victims of human rights violations?
The right to an effective remedy for human rights violations is a core principle of international law. In the context of digital and emerging technologies, providing remedy is a vital part of human rights due diligence. Rapid technological change increases risks for individuals across value chains. Businesses, whether tech providers or corporate users, may cause, contribute to, or be linked to harms.
- When harms arise from a company’s products or services, it is essential to clearly establish who holds responsibility for providing remedy. This may include not only the company itself, but also technology developers, service providers and end-users such as governments or healthcare institutions.
- While UN Guiding Principle 31 outlines criteria for effective remedy, it is ultimately rightsholders who determine whether those remedies are meaningful.
- The UNGPs recommend that all business enterprises (including technology companies) “establish or participate in effective operational-level grievance mechanisms” for affected individuals and communities to enable early and direct resolution of grievances arising from adverse human rights impacts. Many types of mechanisms operated by technology companies are relevant to business respect for human rights, even though they may not have been created specifically as human rights grievance mechanisms.
- Company-based grievance mechanisms include employment-related grievance mechanisms, general compliance “hotlines”, consumer or user complaints processes, terms of service enforcement processes, intellectual property-related processes, disability tech support services, systems for handling privacy related issues and queries (such as “right to be forgotten” processes), systems for monitoring and enforcing community conduct standards (including content moderation for digital and internet companies) and responsible sourcing alert systems.
- Companies must review their existing channels and grievance mechanisms to assess their effectiveness (see the effectiveness criteria above), their role in preventing future harm and their provision of effective remedy to affected persons. The UN’s B-Tech project publishes foundational papers on Designing and implementing effective company-based grievance mechanisms and Access to remedy and the technology sector: basic concepts and principles, which technology companies should refer to.
How can you plan for effective and inclusive remediation?
Effective remediation begins with strategic planning that centres the needs and perspectives of affected rightsholders. Companies should develop remediation plans tailored to the type of harm, affected stakeholders, and company context (whether a technology developer, corporate user, or both). These plans should guide employees in responding effectively when harms are identified.
- Establish accessible grievance mechanisms: Develop operational level complaint systems that offer individuals timely and direct channels to address AI-related harms and abuses.
- Rightsholder-centric design is fundamental: Affected groups should be meaningfully consulted in the design and evaluation of remediation mechanisms. This ensures mechanisms address actual needs and are accessible and trusted by those they’re meant to serve.
- Gender sensitivity: Remediation processes must account for gender-specific harms such as discrimination and harassment, recognising that impacts vary across different groups of women and gender-diverse people. Collecting gender-disaggregated data also helps tailor responses to diverse needs.
- Complexity of harms: Technology-related harms present unique challenges due to their complexity: unlike some human rights violations (such as forced labour or child labour), technology harms may be diffuse, indirect, or systemic, making causation and appropriate remediation less straightforward.
- Systemic risks: Companies should assess not only actual harms but also systemic risks – the potential for widely deployed technologies (like recruitment algorithms or facial recognition software) to cause harm at scale, affecting large numbers of people with similar discriminatory or privacy-violating impacts.
How to effectively implement grievance mechanisms for existing and emerging technologies?
Grievance mechanisms are essential tools for identifying and remediating human rights harms. They enhance the robustness of a company’s efforts to assess and address impacts and signal to stakeholders that the company takes concerns seriously. Best practices for technology and human rights grievance mechanisms include:
- Multi-channel access: Beyond conventional hotlines, companies should leverage digital tools such as worker voice technology that allows workers to submit grievances in real-time through text messages or apps without fear of reprisal. Multiple channels ensure accessibility for diverse stakeholders.
- Inclusive design: Mechanisms must be accessible to all affected stakeholders – including workers, suppliers, users and community members – whose experiences, literacy levels, languages and access to technology vary widely. Design should remove barriers to reporting, including language, digital literacy and fear of retaliation.
- Trust and timeliness: A well-functioning grievance mechanism builds trust by responding consistently and within specified timeframes, demonstrating that complaints are taken seriously and patterns of misconduct are monitored and addressed.
- Recognising limitations: Company-led mechanisms have inherent limitations. When large numbers of rightsholders are impacted, or when power imbalances make internal mechanisms insufficient, companies should pursue external or collaborative approaches. This may include working with worker organisations, civil society, or independent ombudspersons to ensure adequate remedy is available to affected rightsholders.
Technology-specific challenges
Emerging and digital technologies introduce distinct challenges for remediation that require specialised approaches:
- Algorithmic accountability: When harm is caused by automated decisions rather than human judgment, accountability becomes harder to establish. Companies need clear ways to identify responsibility, explain how the system made its decision, prevent similar harms and provide meaningful remedy even when the technology is complex or opaque. Effective remediation may involve both technical fixes, such as retraining models or adjusting decision thresholds (a minimal sketch of a threshold adjustment follows this list), and traditional remedy processes. Transparency and accountability are central to this, and are reflected in the EU AI Act, including Article 71 on the EU Database for high-risk AI systems and Article 74 on market surveillance and control of AI systems.
- Multi-actor involvement: Technology ecosystems are often complex, with multiple companies contributing to harm through interconnected products and services (for example, a cloud provider, platform operator and AI developer may all play roles in a single harmful outcome). Safeguarding access to remedy in these contexts requires coordination across actors to determine shared responsibility, avoid gaps in remediation and ensure affected people can access remedy without having to navigate complex corporate relationships.
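The sketch below is a minimal, purely illustrative example of the “adjusting decision thresholds” fix referred to in the first point above: group-specific score thresholds are set so that selection rates are closer across groups. The scores and thresholds are hypothetical assumptions; real remediation would require careful analysis, documentation and engagement with affected rightsholders.

```python
# Hypothetical model scores for candidates in two groups.
scores = {
    "group_a": [0.91, 0.72, 0.65, 0.58, 0.40],
    "group_b": [0.81, 0.60, 0.52, 0.45, 0.33],
}

def selection_rate(values, threshold):
    """Share of candidates whose score meets or exceeds the threshold."""
    return sum(score >= threshold for score in values) / len(values)

# A single threshold of 0.6 produces unequal selection rates across groups...
single = {group: selection_rate(v, 0.6) for group, v in scores.items()}

# ...whereas group-specific thresholds (chosen by hand here) narrow the gap.
adjusted_thresholds = {"group_a": 0.6, "group_b": 0.5}
adjusted = {group: selection_rate(v, adjusted_thresholds[group]) for group, v in scores.items()}

print("Single threshold:   ", single)    # e.g. {'group_a': 0.6, 'group_b': 0.4}
print("Adjusted thresholds:", adjusted)  # e.g. {'group_a': 0.6, 'group_b': 0.6}
```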
Legal and ecosystem cooperation
Companies must work within broader legal and institutional frameworks to ensure comprehensive remedy for affected rightsholders:
- Legal cooperation: In cases of severe harm, legal proceedings may be necessary. Companies should cooperate in good faith with investigations and judicial proceedings, providing relevant information and supporting accountability processes.
- Avoiding remedy interference: Companies should not restrict access to judicial mechanisms through contractual provisions such as mandatory arbitration clauses, liability waivers, or contractual exclusivity requirements that prevent affected individuals from seeking remedy through courts or other channels. Such restrictions undermine the right to remedy and should be removed from contracts and terms of service.
- Remedy ecosystem approach: No single remediation mechanism, whether company-led, judicial, or state-based, can address all harms. An ecosystem approach is essential, with companies cooperating across judicial, state-based and non-state mechanisms (including industry initiatives, multi-stakeholder forums and community-based processes) to deliver different forms of remedy suited to different types of harm. This may include financial compensation, policy changes, public acknowledgment, restoration of services, or guarantees of non-repetition.
- Shared responsibility: Remediation should be cooperative. Even technology providers who have not directly caused harm should seek to support remediation efforts when their technology is implicated, using leverage over business relationships to ensure affected people can access remedy.
What about stakeholder engagement?
The broad reach of technology means that its harms can be far reaching and affect entire communities. Meaningful stakeholder engagement is therefore more critical than ever to ensure the success of implementing meaningful remediation and grievance mechanisms. Strategic approaches include:
- Early and ongoing engagement: Companies should allocate adequate time and resources to consult with those potentially impacted by technology.
- Digital engagement tools: Enterprises can seek to employ digital technologies in at least part of stakeholder engagement. Platforms like Ulula and &Wider offer digital surveys and participatory tools that help overcome barriers to accessibility and literacy.
- Risk mitigation: While digital tools can enhance engagement, companies should assess and mitigate any human rights risks these tools themselves may pose.
Helpful Resources
- UN OHCHR B-Tech, Access to remedy and the technology sector: basic concepts and principles
- UN OHCHR B-Tech, Designing and implementing effective company-based grievance mechanisms
- UN Environment Programme, Remedy: This resource includes information about types of remedy and provides guidance about the role of financial institutions in remedy
Case Studies
Further Guidance
Further guidance on technology and human rights includes:
Overview:
- OECD, AI Principles: These principles provide the first intergovernmental standard on AI and emphasize innovation, trustworthiness and respect for human rights and democratic values.
- UN B-Tech, The practical application of the UN Guiding Principles on Business and Human Rights to the activities of technology companies including activities related to AI: This report analyses the application of the UNGPs to the activities of technology companies.
Industry-Specific Risk Factors:
- Mining and Extractive Industry
- BSR, AI and Human Rights in Extractives: This resource contains a recommendations section (p.11-12) that provides businesses with actions they can take to mitigate human rights harms related to AI usage in the extractives sector.
- OECD, How to address bribery and corruption risks in mineral supply chains: This resource answers frequently asked questions and explains in simple language what actions companies should take to address human rights risks in their minerals value chains.
- OECD, Handbook on Environmental Due Diligence in Mineral Supply Chains: This handbook supports businesses in the extractives sector with the implementation of their human rights and environmental principles in line with OECD standards.
- OECD, Practical actions for companies to identify and address the worst forms of child labour in mineral supply chains: Practical guidance for companies to help them identify, mitigate and account for the risks of child labour in their mineral supply chains, developed to build on the due diligence framework of the OECD Due Diligence Guidance for Responsible Supply Chains of Minerals from Conflict-Affected and High Risk Areas.
- OECD, 2025, Human Rights Due Diligence for Digital Technology Use: Artificial Intelligence
- Software Development
- The Danish Institute for Human Rights and the German Society for International Cooperation, The Digital Rights Check: A web-based tool that is designed to help staff working on development projects to assess the potential human rights impacts of their digital projects or project components.
- UN B-Tech, The practical application of the UN Guiding Principles on Business and Human Rights to the activities of technology companies including activities related to AI: This report analyses the application of the UNGPs to the activities of technology companies.
- Financial Institutions
- Business and Human Rights Resource Centre, Navigating the surveillance technology ecosystem: A human rights due diligence guide for investors: This guide assists investors of all sizes, types and geographies to navigate the surveillance technology ecosystem and strengthen their human rights due diligence.
- UK Government, The Mitigating ‘Hidden’ AI Risks Toolkit: This toolkit is designed for individuals and teams responsible for implementing AI tools and services and those involved in AI governance.
- UN B-Tech, Human Rights Risks In Tech: Engaging And Assessing Human Rights Risks Arising From Technology Company Business Models: This tool aims to equip investors to (1) accurately assess technology companies’ policies and procedures for addressing business model-related human rights risks; and (2) encourage technology companies to adopt approaches to such human rights risks that align with their responsibility to respect human rights.
- UN Global Compact, Responsible Investing into AI: This webinar explores how investment in AI can enable human rights harms, the responsibility of investors to conduct HRDD, and how HRDD can benefit investors while contributing to human rights protections.
- Technology Hardware Manufacturers
- ILO, ILO Helpdesk for business on international labour standards: The ILO Helpdesk assists company managers and workers who want to align their policies and practices of international labour standards and build good industrial relations.
- Sectors deploying emerging technologies
- European Parliament, Addressing AI Risks in the Workplace: Workers and Algorithms. This guidance provides perspectives from both employers and trade unions about AI and evaluates how emerging AI technologies fit into existing EU laws.
- OECD, Guidance for Multinational Enterprises: Chapter 9 (p.46-p.48) “Science, Technology and Innovation” provides guidelines for enterprises to conduct human rights due diligence related to emerging technologies.
- Partnership on AI, Data Enrichment Sourcing Guidelines: This guidance lists five key, worker-centric guidelines that AI practitioners should follow when setting up a project involving data enrichment.
- UK Government, The Mitigating ‘Hidden’ AI Risks Toolkit: This toolkit is designed for individuals and teams responsible for implementing AI tools and services and those involved in AI governance.
- UN Working Group on Business and Human Rights, Artificial Intelligence Procurement and Deployment: ensuring alignment with the Guiding Principles on Business and Human Rights: This report provides clear guidance on implementing the UNGPs and human rights in the procurement and deployment of AI systems.
- UN B-Tech Project, Advancing Responsible Development and Deployment of Generative AI: This resource provides practical recommendations for how lawmakers, businesses and civil society can leverage the UNGPs to foster practices capable of addressing human rights risks and impacts from AI.
- UN B-Tech (Business and Human Rights in Technology), Identifying and Assessing Human Rights Risks related to End Use: This tool provides guidance for enterprises working to embed respect for human rights in the business of technology. It includes guidance for enterprises to conduct HRDD for related end users and end-use scenarios.
Due Diligence Considerations:
General due diligence guidance:
- OECD, Guidance for Multinational Enterprises: Chapter 9 (p.46-p.48) “Science, Technology and Innovation” provides guidelines for enterprises to conduct human rights due diligence related to emerging technologies.
- SME Compass, Due Diligence Implementation Made Easy: The SME compass offers guidance on the overall due diligence process by taking business through five key due diligence phases. It is available in both English and German.
- UN, Human Rights Due Diligence: An Interpretive Guide: This guide provides an overview of the UNGPs and due diligence stages, including aspects such as leverage and stakeholder engagement.
- UN, Human Rights Due Diligence for Digital Technology Use: This guidance provides a practical introduction to HRDD to assist in the design, development, and implementation of human rights due diligence for digital technology use.
- UNDP, Human Rights Due Diligence Handbook for Small and Medium-Sized Enterprises: This guidance provides clarity on the various requirements of human rights due diligence, with a focus on SMEs.
Step 1: Develop a Policy Commitment
- United Nations, Human Rights Due Diligence: An Interpretive Guide: This guide provides an overview of the UNGPs and due diligence stages, including aspects such as leverage and stakeholder engagement.
- United Nations, Human Rights Due Diligence for Digital Technology Use: This guidance provides a practical introduction to HRDD to assist in the design, development, and implementation of human rights due diligence for digital technology use.
- United Nations, The Feasibility of Mandating Downstream Human Rights Due Diligence: Reflections from Technology Company Policies
- Herbert Smith Freehills Kramer, Business Human Rights in the Tech Sector
- Deloitte, Human Rights Due Diligence in the Modern Era
- UNDP, Human Rights Due Diligence Handbook for Small and Medium-Sized Enterprises: This guidance provides clarity on the various requirements of human rights due diligence, with a focus on SMEs.
Step 2: Assess Actual and Potential Impacts
- The Danish Institute for Human Rights, Human rights impact assessment of digital activities: Practical guidance for businesses on how to conduct human rights impact assessment of digital activities.
- The Danish Institute for Human Rights, Key Principles for Human Rights Impact Assessment of Digital Business Activities: This guidance outlines key criteria for human rights impact assessments of digital projects, products and services.
- Ontario Human Rights Commission, Human Rights AI Impact Assessment, 2024
- The Alan Turing Institute, Human Rights, Democracy, and the Rule of Law Impact Assessment for AI Systems (HUDERIA): A risk-based approach to assessing and mitigating adverse impacts developed for the Council of Europe’s Framework Convention
- The Digital Rights Check: A web-based tool that is designed to help staff working on development projects to assess the potential human rights impacts of their digital projects or project components.
- UN B-Tech, Identifying and Assessing Human Rights Risks related to End Use: This tool provides guidance for enterprises working to embed respect for human rights in the business of technology. It includes guidance for enterprises to conduct HRDD for related end users and end-use scenarios.
- UN B-Tech, Human Rights Risks In Tech: Engaging And Assessing Human Rights Risks Arising From Technology Company Business Models: This tool aims to equip investors to (1) accurately assess technology companies’ policies and procedures for addressing business model-related human rights risks; and (2) encourage technology companies to adopt approaches to such human rights risks that align with their responsibility to respect human rights.
Step 3: Integrate and Take Action to Address Impacts
- BSR, Sales Partners and Human Rights Due Diligence in the Technology Sector, 2022
- European Parliament, Addressing AI Risks in the Workplace: Workers and Algorithms. This guidance provides perspectives from both employers and trade unions about AI and evaluates how emerging AI technologies fit into existing EU laws.
- UN OHCHR B-Tech, Taking Action to Address Human Rights Related to End-Use, 2020
Step 4: Track Performance
- The Danish Institute for Human Rights, Human Rights Indicators for Business
Step 5: Communicate Performance
- United Nations, The UN Guiding Principles Reporting Framework
Step 6: Remedy and Grievance Mechanisms
- UN OHCHR B-Tech, Access to remedy and the technology sector: basic concepts and principles
- UN OHCHR B-Tech, Designing and implementing effective company-based grievance mechanisms
- UN Environment Programme, Remedy: This resource includes information about types of remedy and provides guidance about the role of financial institutions in remedy