Created in partnership with the Helpdesk on Business & Human Rights
Digital and Emerging Technologies and Human Rights
Overview
What are Digital and Emerging Technologies?
The digital transformation of business and society encompasses both established digital technologies and rapidly evolving emerging technologies, each of which presents distinct characteristics and implications for human rights.
- Digital technologies include widely adopted tools underpinning business operations and critical infrastructure. These include:
- Cloud computing, digital telecommunications, search engines, big data analytics, Internet of Things (IoT), cybersecurity systems, basic drones and industrial robots for commercial uses, blockchain for traceability and mobile technology.
- Pervasive Artificial Intelligence (AI) systems such as predictive analytics, Natural Language Processing systems (e.g. customer service chatbots) and machine learning.
- Emerging technologies are characterized by rapid development and uncertainty in trajectory and impact. These include:
- Generative AI models, autonomous decision-making and agentic AI systems such as advanced robotics and autonomous vehicles, emotion-recognition software, algorithms used in policing or credit risk assessments, cryptocurrencies, synthetic data generation and digital-twin modelling (i.e. creating data to train other AI systems or to model supply chains).
- The capacity of these technologies to take independent decisions introduces safety, liability and oversight challenges. Many are still untested at scale, with reliability, bias and security risks not yet fully understood. Given the rapid pace of adoption, for the purposes of human rights due diligence (HRDD) a technology should be considered emerging while the regulatory environment is still nascent and its human rights impacts remain uncertain.
What is AI?
Artificial Intelligence (AI) is a broad term encompassing technologies like machine learning, deep learning, natural language processing and generative AI. The EU, the Council of Europe, the US, the UN and other jurisdictions use the OECD’s definition (2024) of an AI system.
AI’s applications are varied and include the following:
- Searching for and understanding information (e.g. Gemini, ChatGPT, Copilot)
- Smart devices and voice assistants
- Digital companions used in gaming, dating and elderly care
- The content social media users see on their platforms
- Government systems including welfare, policing, criminal justice, healthcare and banking
- Natural language processing to automate tasks, enable conversations with chatbots and accelerate decision making.
What is Generative AI (GenAI)?
According to the OECD, GenAI “is a category of AI that can create new content such as text, images, videos, and music. It gained global attention in 2022 with text-to-image generators and Large Language Models (LLMs). While GenAI has the potential to revolutionize entire industries and society, it also poses critical challenges and considerations that policymakers must confront.”
GenAI not only creates significant opportunities for business but also introduces legal risks, including Intellectual Property (IP) and data ownership issues. These have been extensively researched by the European Union Intellectual Property Office (EUIPO) and the World Intellectual Property Organization (WIPO).
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
OECD (2024)
What is the Internet of Things (IoT)?
The Internet of Things (IoT) refers to a network of physical devices and objects that can be monitored, controlled, or interacted with via the Internet, either automatically or with human involvement (OECD, 2023). These devices, often called “endpoints,” are uniquely identifiable and can exchange data with each other in real time. IoT application domains span all major economic sectors, notably including health, education, agriculture, transportation, manufacturing and electric grids (OECD, 2016).
The IoT includes overlapping technologies such as AI, which is increasingly integrated into connected devices.
Examples of IoT devices include:
- Wearable fitness trackers that monitor heart rate, steps and sleep patterns.
- Connected vehicles that share data for navigation, maintenance and safety.
- Smart home assistants like voice-controlled speakers that manage lighting, appliances and schedules.
The key value of the IoT is its ability to gather, store, and share data about the environments and assets it tracks. This data can make processes more measurable and manageable, leading to increased efficiency. However, the IoT is also exposed to security and privacy risks, notably including device vulnerabilities, surveillance and data misuse.
OECD AI Principles
The OECD AI Principles (adopted 2019, updated 2024) provide the first intergovernmental standard on AI. The principles emphasize innovation, trustworthiness and respect for human rights and democratic values:
- Inclusive growth, sustainable development and well-being “This Principle highlights the potential for trustworthy AI to contribute to overall growth and prosperity for all – individuals, society, and planet – and advance global development objectives.”
- Respect for the rule of law, human rights and democratic values, including fairness and privacy “AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.”
- Transparency and explainability “This principle is about transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.”
- Robustness, security and safety “AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.”
- Accountability “Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the OECD’s values-based principles for AI.”
Section 2 of the AI Principles complements the five principles with policy recommendations on research and development, AI ecosystems, governance, workforce and international cooperation.
Responsible AI
According to the OECD, responsible AI “covers a broad range of practices related to the responsible development and deployment of AI systems, such as developing guiding principles, conducting risk / impact assessments, establishing data collection protocols, refining model development practices, performing technical audits, and following transparency and disclosure practices, among others.”
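As one illustration of what a technical audit might involve in practice, the sketch below compares selection rates across demographic groups in the outputs of a screening model, a common first check for potential disparate impact. It is a minimal sketch only: the record structure, group labels and the 0.8 threshold (drawn from the “four-fifths rule” used in some fairness guidance) are illustrative assumptions, not prescribed by the OECD.

```python
# Minimal, illustrative bias check: compare selection rates across groups
# in the outputs of an automated screening tool. Field names and the 0.8
# threshold are assumptions for illustration only.
from collections import defaultdict

def selection_rates(records):
    """Return the share of positive ("selected") outcomes per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record["group"]] += 1
        positives[record["group"]] += int(record["selected"])
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate (1.0 means parity)."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values()), rates

if __name__ == "__main__":
    # Hypothetical screening outcomes from an AI hiring tool (illustrative only).
    outcomes = [
        {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
        {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
        {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
    ]
    ratio, rates = disparate_impact_ratio(outcomes)
    print("Selection rates by group:", rates)
    if ratio < 0.8:  # assumed "four-fifths" threshold; flag for human review
        print(f"Potential disparate impact: ratio {ratio:.2f} is below 0.8")
```

Running a check of this kind periodically on real outcomes, and investigating any group whose ratio falls below the chosen threshold, is one concrete way to operationalize the risk and impact assessments described above; it does not replace broader human rights due diligence or stakeholder engagement.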
Connection Between Technology and Human Rights
- Given technology’s integration across business operations, understanding its human rights impacts is essential. The United Nations Guiding Principles on Business and Human Rights (UNGPs) highlight businesses’ responsibility to respect human rights throughout their activities, operations and relationships, as emphasized in Principles 13 and 17. For example, a 2025 study from the University of Melbourne revealed that AI hiring systems, which are increasingly used by employers across industries, create serious risks of algorithmic discrimination against marginalized groups.
- Digital technologies can both enhance and undermine human rights. Technological progress will fundamentally transform societies and bring enormous benefits including better protection of individual rights. In recent decades, mobile and cloud technologies have improved access to information, healthcare and education. Now, emerging technologies – particularly AI – are reshaping how businesses operate, manage workers and oversee global supply chains.
- As technology increasingly influences how people work, access services and engage in society, it presents opportunities to advance the UN Sustainable Development Goals (SDGs). AI and digital tools provide pathways to improve accessibility, enhance transparency and enable civic mobilization aligned with the SDGs.
- However, these same systems can reinforce bias and compromise privacy and labour rights, potentially worsening existing harms and introducing complex risks to fundamental rights. The pervasive nature of the digital transformation means that, inevitably, every business will in some way be connected to technology-related human rights impacts.
- Every business therefore has a responsibility to understand how technology affects the rights of all individuals connected to its operations and relationships. This responsibility goes beyond employees and customers; it includes supply chain workers and communities impacted by business decisions. It spans the entire technology lifecycle, from development and deployment to ongoing use and eventual disposal.
What is the Dilemma?
Businesses face a double-edged challenge in the age of digital and emerging technologies: the imperative to adopt AI and advanced systems to maintain competitive advantage, while simultaneously ensuring that human rights are respected as these technologies are embedded within business operations.
Technology companies may focus human rights efforts primarily on development and deployment, overlooking their downstream responsibilities. Yet under the UN Guiding Principles on Business and Human Rights (UNGPs), businesses are expected to respect human rights throughout their operations and business relationships. This includes understanding and addressing how technologies affect not only workers and suppliers, but also end users and communities.
To meet their responsibilities under the UNGPs, businesses must go beyond compliance, particularly while technological developments outpace regulation. Companies should implement robust, risk-based human rights strategies that include due diligence, stakeholder engagement and ongoing monitoring. Leveraging technology, including AI itself, can help identify and mitigate risks, but responsible deployment requires careful planning. The pressure to innovate quickly can conflict with the time needed for meaningful consultation and impact assessment.
Negative Impacts on Human Rights from Technology
Digital technologies hold transformative potential for businesses and societies, but the rapid advancement of tools like generative AI, without robust regulation, poses serious risks to individual rights and freedoms.
Issues such as algorithmic bias, mass surveillance and privacy erosion could have profound human rights implications. The competitive nature of the AI industry may also compromise safety. These risks disproportionately affect vulnerable groups, raising ethical and operational concerns for companies. Such risks can undermine consumer trust, brand reputation and compliance, as detailed in the Impacts on Business section.
Technology has the potential to adversely impact a range of human rights, including but not limited to:
- Right to privacy and freedom of expression (UDHR, Articles 12 and 19; ICCPR, Articles 17 and 19; OECD, AI Principle 1.2): AI systems require large volumes of personal and biometric data, raising concerns about unlawful surveillance, data misuse and lack of informed consent. This can undermine individual freedoms, especially in workplaces and public spaces, through real-time tracking, voice and biometric monitoring and opaque data practices. In the EU context, the General Data Protection Regulation (GDPR) provides privacy protections, while the EU AI Act introduces risk-based obligations: prohibitions on ‘unacceptable risk’ systems began applying on 2 February 2025, and obligations for high-risk AI systems come into force between August 2026 and August 2027. This evolving legal landscape aims to address emerging threats to privacy rights. Online harassment and the spread of misinformation can suppress freedom of expression and lead to self-censorship, particularly by human rights defenders, journalists and activists. Content moderation algorithms can suppress legitimate forms of expression, both intentionally and unintentionally, while also failing to address genuinely harmful content. The proliferation of AI-generated bots floods social media, news and advertising with misinformation or manipulated narratives. Algorithmic echo chambers and a lack of platform safeguards can limit free decision-making. These factors can fundamentally undermine democratic processes, as critiqued by prominent civil society organizations. Access Now expressed alarm in October 2025 when news broke that the United States had reactivated its contract with a spyware vendor whose tools can covertly access encrypted apps and have previously been used to target journalists.
- Right to non-discrimination and equality (UDHR, Articles 2 and 7; ICCPR, Articles 2 and 26; ICERD; OECD, AI Principles 1.1 and 1.2): AI systems can reinforce human biases through biased datasets and flawed algorithm design. Digital abuse and harassment remain persistent issues, particularly impacting gender minorities, racial minorities, LGBTQ+ individuals and other groups who face disproportionate targeting online. Discriminatory outcomes in areas such as hiring, policing, criminal sentencing and service delivery have been widely documented, perpetuating systemic inequalities.
- Right to be free from targeting and persecution and minority rights (ICCPR, Article 27; 1992 Minorities Declaration; OECD, AI Principles 1.1 and 1.2): Surveillance technologies may be used to target individuals based on identifying markers such as ethnicity, religion or political affiliation, enabling systematic discrimination and persecution of minority populations. Non-representative datasets used to build algorithms and decision-making models systematically exclude or misrepresent minority experiences. This produces two main types of bias: algorithmic bias, from models trained on skewed data, and societal bias, which causes technology to overlook or misrepresent certain groups.
- Right to work and just and favourable conditions of work; freedom from forced labour (UDHR, Article 23; ICESCR, Articles 6 and 7; OECD, AI Principle 1.1; ILO Conventions): Technological advancements have enabled the rise of platforms making use of “gig workers”, which may improve access to work. However, job insecurity for gig workers is higher and opportunities for professional development are lower. Gig workers often lack basic labour protections and are more likely to be surveilled. Data labelling and annotation, the work that enables AI systems, is often low-paid and insecure, particularly in the Global South. Workers in electronics supply chains are frequently exploited and work in dangerous conditions, from raw material extraction to e-waste recycling. The extraction of transition minerals essential for digital technologies (including cobalt and lithium) can be linked to armed conflict, environmental destruction and exploitation, including child labour. People in the Global South are disproportionately at risk. Beyond extraction and manufacturing, automation could render entire categories of jobs obsolete without safety nets in place, disrupting workforces and supply chains. These issues raise concerns about whether, without responsible governance, the shift to AI-driven economies will be fair and inclusive or will worsen existing inequalities.
- Right to health (UDHR, Article 25; ICESCR, Article 12): Over-reliance on technology in critical systems, from healthcare to infrastructure, creates single points of failure that can risk lives when systems fail or are compromised. The expectation of constant connectivity and availability erodes boundaries between work and personal life, contributing to burnout, stress-related illnesses and mental health challenges, particularly for workers in the digital economy who find themselves unable to truly disconnect. The deployment of digital health technologies raises concerns around whether these tools are developed with meaningful input from diverse patient populations and healthcare workers. Data privacy risks in health applications can deter individuals from seeking care or sharing sensitive information necessary for diagnosis and treatment. There is also a risk that inferior digital healthcare services may replace essential in-person care, particularly for marginalized populations who already face barriers to accessing quality healthcare, potentially creating a two-tiered system where technology substitutes for human care instead of supporting it.
- Right to an adequate standard of living and a clean environment (UDHR, Article 25; ICESCR, Article 11; UNGA Resolution 76/300; Human Rights Council Resolution 48/13): The training and operation of large language models (LLMs) and AI systems requires enormous computing power, causing significant environmental impacts. These include substantial carbon dioxide emissions and increased water consumption for data centre cooling, as highlighted in reports on Big Tech companies’ investments in AI threatening progress toward net zero targets. The production of electronic devices also generates toxic waste throughout their lifecycle, and informal recycling practices in the Global South expose workers and communities to hazardous materials, including heavy metals and carcinogens. Currently, environmental considerations and the ensuing human rights impacts on affected communities, including Indigenous Peoples whose lands are exploited for mineral extraction, are inadequately addressed in technology development and deployment decisions. According to the IEA, data centres accounted for around 1.5% of the world’s electricity consumption in 2024, a figure set to more than double to around 945 TWh by 2030.
- Right to life and personal security (UDHR, Article 3; ICCPR, Article 6; OECD, AI Principle 1.4): The misuse of digital identity systems and biometric databases can increase the risk of identity theft or even targeted violence by state or non-state actors. In the UK, London’s Metropolitan Police drew concern from rights groups, including Amnesty International, over its decision to more than double its use of live facial recognition. This technology has been criticized for potential racial bias, as it is less accurate in scanning the faces of people of colour. AI is also increasingly being used to support decisions that directly affect human life, such as in lethal autonomous weapons systems (LAWS) and autonomous vehicles. Digital platforms can also be exploited to incite violence, coordinate attacks and spread extremist ideologies.
- Right to participate in cultural life, education and scientific advancement (UDHR, Articles 26 and 27; ICESCR, Articles 13, 14 and 15): The benefits of technology are not equally shared, with digital divides reinforcing existing social and economic inequalities along lines of geography, income, age, and education. Technological solutions are typically designed for high-connectivity contexts and able-bodied users, systematically excluding those in low-bandwidth environments, rural areas, or persons with disabilities. This limits their ability to participate fully in increasingly digitalized societies. Limited access to devices, reliable internet, and digital skills may widen the educational gap for students from low-income and under-resourced communities. The use of surveillance tools including monitoring software, facial recognition and behaviour tracking in schools erodes trust, restricts free expression and widens the digital divide by normalizing invasive data practices for vulnerable students.
Vulnerable groups
Certain groups bear the brunt of negative impacts from technology, facing heightened risks of exclusion, exploitation and injustice. Businesses need to explicitly consider these groups in their human rights due diligence, as their exposure to harm is higher. Companies should also consider how the same technology can affect different groups differently. Examples of people at particular risk include, but are not limited to:
- People of Colour and Indigenous Peoples, who are often underrepresented or misrepresented in datasets, leading to serious human rights violations. Businesses should implement inclusive data auditing to ensure representation and prevent discrimination.
- Women and girls, who often lack equitable access to the internet and may be discriminated against through AI and algorithmic gender biases. Companies should conduct gender impact assessments on AI systems to identify and mitigate biases.
- Low-income and digitally underserved communities, who may be excluded from digital services or decision-making processes. Businesses should develop inclusive digital services that cater to the needs of underserved populations.
- Migrant and refugee populations, who are more likely to be subject to surveillance and profiling. Companies should ensure data protection policies prevent profiling and surveillance of vulnerable populations when planning, designing and procuring data processing initiatives.
- People with disabilities are often excluded due to lack of inclusive data or system design. Technology developers should aim to increase accessibility of mainstream solutions alongside specialized assistive technologies. For example, companies developing autonomous vehicles must ensure they are accessible to people with disabilities; otherwise, mainstream AVs could increase transportation discrimination and create even higher barriers to employment. The OECD has reported that ‘mainstreaming’ accessible technology also reduces the cost of offering specialized solutions, creating more sustainable business models.
- Children and older adults, who might be more vulnerable to exploitation or exclusion, require robust data protection and privacy measures to safeguard their rights. These measures align with the UN Convention on the Rights of the Child (CRC) and the UN Principles for Older Persons, which both emphasize the protection of these vulnerable age groups.
- Human rights defenders, whose beliefs and activism may be targeted. Companies should ensure that data practices do not facilitate surveillance or targeting of activists, in order to align with the UNGPs. The OSCE Office for Democratic Institutions and Human Rights (ODIHR) publishes Guidelines on the Protection of Human Rights Defenders.
- Individual users and consumers of AI, who may be subject to opaque decision-making and a lack of meaningful recourse. Businesses should provide transparent and explainable AI systems to ensure users understand decision-making processes. This adheres to the OECD Principles on Artificial Intelligence, which advocate for transparency, accountability, and inclusiveness in AI systems.
- Citizens of autocratic or oppressive regimes, where digital tools may be used for state surveillance and control. Companies should conduct thorough human rights due diligence in line with the UNGPs to avoid providing or deploying tools that could exacerbate these issues.
- People in conflict zones or humanitarian crises, who might be targeted or misrepresented by AI-powered surveillance, misinformation or autonomous weapons. Companies must avoid using or procuring data that could be used for surveillance and ensure informed consent is genuine, given the power imbalances in conflict settings. Engaging local stakeholders including humanitarian organisations will help businesses to understand specific risks. Companies should consult UNDP guidance for heightened human rights due diligence (hHRDD) for business in conflict‑affected contexts.
- Workers in precarious or gig economy roles, who may be exposed to lower wages, limited career growth, algorithmic management and a lack of transparency about working conditions. Businesses must ensure fair wages, career growth opportunities and transparency in working conditions. This is supported by the ILO’s “Decent Work Agenda,” which emphasizes the importance of fair and equitable working conditions.
What is the gig economy?
“Gig work” involves short-term, task-based employment typically facilitated by digital platforms and algorithms underpinned by AI. Workers are usually self-employed and paid per task rather than a salary. The model offers flexibility for platforms, consumers and workers, allowing on-demand labour and varied work schedules. Where gig work is common – along with other non-traditional forms of employment – this is considered a “gig economy”.
Technological advancements have enabled the rise in digital platforms that make use of this innovative business model. To capture the benefits of the gig economy while protecting workers and consumers, businesses need to review and update policies with this model in mind.
Gig-economy-specific issues include the following:
- Algorithmic inequalities: The use of algorithms to manage gig workers often leads to unfair practices that undermine decent wages, labour rights and access to remedy.
- Lack of traditional employment benefits: Gig-economy workers may have more difficulty accessing healthcare and retirement benefits compared to those in traditional employment. According to a 2025 report, 51% of surveyed riders and drivers for food delivery and ride-hailing apps have risked their health and safety, with 42% reporting physical pain resulting from their work.
- Work-life balance: Gig-economy workers have varied work-life balance, with flexibility as a main stated benefit. However, irregular hours, low pay and a lack of benefits can create significant challenges. A 2025 report revealed that 3 out of 4 surveyed riders and drivers for food delivery and ride-hailing apps have felt anxiety over the potential for their income to drop.
- Freedom of Association: Many jurisdictions classify gig-economy workers as ‘independent contractors’, something that often precludes them from joining unions or engaging in collective bargaining according to an ILO report.