The parallels of AI technology: between digital authoritarianism and activism

  • kclhrp2023
  • Mar 26
  • 7 min read

By Sincere Leung


The presence of data-driven algorithms and artificial intelligence technologies in our lives has never been greater. In an age where our personal information has become a transactional currency, our right to privacy and even our fundamental human rights come into question when these tools are used by authoritarian governments to conduct surveillance and defuse resistance. This article explores the mechanisms of such tools, and their legal implications and impact, through the lens of human rights.

 

Scope of digital authoritarianism

Technological advancements have allowed authoritarian governments to incorporate AI-powered tools that exert control over opposition, the spread of information, and the public narrative. AI-powered facial recognition technology has been adopted in state-wide surveillance systems in China to monitor political dissidents and marginalized communities, with direct links to human rights violations against the Uyghur Muslim minority in Xinjiang and to the suppression of Hong Kong activists involved in the 2019 pro-democracy movement[1]. Beyond mass surveillance, autocrats fabricate online discourse in an effort to defuse dissenting opinions and sway public perception of government performance. Studies have shown that autonomous bots run social media accounts that mimic genuine online discourse while spreading government propaganda[2]. Their ability to manufacture large volumes of content automatically drowns out dissenting voices, fabricating an online space in which support for the authorities appears overwhelming and unquestioned. A recent study on digital authoritarianism highlights how regimes integrate Technology, Regulation, Influence and Adaptive Dynamics, coined the TRIAD framework, to consolidate control across the Middle East, from Iran's use of facial recognition technology to enforce compliance with hijab laws during the 2022 demonstrations, to the UAE's use of spyware to hack dissidents' phones and collect data[3]. Artificial intelligence thus performs a range of functions: surveillance, censorship and disinformation, reshaping authoritarian governance and its impact on human rights.

 

Human Rights Implications

The damage AI-powered tools add to authoritarian practices is unprecedented and multifaceted. The integration of artificial intelligence into surveillance, censorship and propaganda enables the further erosion of civil and human rights by increasing the precision and scale of these activities[4]. Online censorship enhanced by sophisticated AI tools shows the clear intention of authoritarian governments to prevent coordination and collective action[5]. Authoritarian governments also tend to share censorship technology with like-minded states, compounding technology's growing impact on freedom of expression and the safety of regional activists. It was reported that the Chinese company Geedge Networks has been supplying surveillance networks to Pakistan, a deal that has attracted further prospective clients such as Myanmar, Ethiopia and Kazakhstan[6]. Moreover, the impact of these technologies extends to established democracies. Where the manipulative, surveilling and controlling practices of AI-powered systems are adopted on the online platforms of democratic countries, these tools have the potential to harm democracy: censorship and official disinformation sabotage institutional accountability, and both practices are positively related to autocratization[7].

 

Human Rights Council-appointed experts have called for urgent and strict regulatory boundaries for technologies that claim to recognize physical attributes, software that authoritarian governments have adopted to suppress the work of human rights defenders and journalists, often under the guise of national security or counter-terrorism measures[8]. Experts have also noted that surveillance technology justified in the name of counter-terrorism is enabled by private actors, and have called for private companies that supply AI tools for the systematic human rights abuses of authoritarian states to be held accountable[9]. It is evident that collective action by industry and states is imperative to curb the commodification of data-driven systems as surveillance and propaganda tools by authoritarian governments, while activists and civilians continue to risk violations of their human rights.

 

Current regulatory measures and obstacles

Given the growing trend of 'digital authoritarianism' adopting AI-driven technologies that put activists and civilians at risk of human rights violations, regulation is still catching up to govern the misuse of AI tools not only in industry but also in government institutions. The use of AI technology is bound by international human rights law, including the Charter of the United Nations and the Universal Declaration of Human Rights; domestic laws apply to its private-sector use concerning data privacy, alongside regional legislation tailored to the use and development of AI[10]. However, there is no binding international instrument governing AI technology. International initiatives and reports play a major role in shaping governance frameworks intended to guide the development of, and global approach towards, AI, such as UNESCO's Recommendation on the Ethics of AI[11]. The 2024 integrated partnership between the Global Partnership on AI (GPAI) and the OECD AI Principles represents a renewed effort to reach consensus on the key definitions and concepts of AI, and on a regulatory framework that transcends national borders[12]. International efforts to establish a global regulatory framework have so far amounted to attempts at reaching consensus on the ethics, definitions and regulation of AI. Nonetheless, the non-binding nature of these initiatives is a fundamental constraint, and vague, broad definitions of the principles governing AI usage further delay global consensus.

 

The EU AI Act is the most ambitious attempt to regulate AI to date. Article 5 sets out prohibited AI practices, including social scoring and certain forms of biometric surveillance[13]. However, the Act contains a national security exemption that excludes AI systems placed on the EU market exclusively for military, defence or national security purposes from its provisions. The exemption is justified by Article 4(2) TEU, which recognizes that Member States have sole responsibility for national security, among other provisions of public international law[14]. Critics are concerned that the vague boundaries of what constitutes 'national security' risk creating a digital rights-free zone, particularly affecting migrants and marginalized groups[15]. The precedent set by this 'national security exception' to comprehensive AI regulation leaves the regulatory future of AI tools used for 'national security' purposes open to challenge and interpretation. Liberal democracies are not immune to this threat either: in the United States, a 2023 GAO report found that federal agencies had deployed facial recognition services from military vendors without proper authorization[16]. The use of AI tools in the military has a propensity to push even established democratic institutions towards authoritarian-leaning surveillance practices, a phenomenon demanding the vigilance of regulators worldwide.

 

Future of digital authoritarianism: using AI against autocracy

Recent developments reflect the importance of open conversation on the regulatory future of AI, especially its use by government institutions. Despite AI's negative impact on democracy, technology itself is not the enemy. AI tools have been used by activists and journalists in authoritarian regimes to dodge government censorship and surveillance, streamline the recording of evidence of human rights violations, and tailor messages for maximum impact. Journalists in the MENA region have been using open-source intelligence (OSINT) to conduct democratic journalistic investigations, independently verifying and analyzing information with reverse image search and analysis tools[17]. Tools like Geneva circumvent in-network censorship, such as that deployed in China, India and Kazakhstan, by exploiting bugs in censors that rely only on either the client or the server side, allowing activists to browse and share information in an encrypted space[18]. HURIDOCS has integrated machine learning into Uwazi, the organization's open-source documentation platform, which allows users to expand access to databases and to classify and analyze information more efficiently, helping activists organize, analyze and verify evidence of human rights violations[19].

 

The examples articulated in this blog post demonstrate a fundamental truth: technology is inherently neither a force for good nor for evil. Just as authoritarian governments fold AI technology into their repressive practices, activists can harness AI strategically to level the playing field against those actors, disrupting surveillance and amplifying resistance. Artificial intelligence is an inevitable part of the future, and humans remain the decisive factor in what we apply these technologies to and how. With mindful usage and comprehensive regulation, AI technology can be leveraged for good to combat the ongoing global human rights crisis.



[1] Kaskina, R. and Cvetkovska, A. (2024). Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights. Available at: https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450(SUM01)_EN.pdf.

[2] King, G. (2017). How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument. Harvard.edu. Available at: https://gking.harvard.edu/50c.

[3] Beidollahkhani, A. (2025). From predicting dissent to programming power; analyzing AI-driven authoritarian governance in the Middle East through TRIAD framework. Democratization, 1–24. https://doi.org/10.1080/13510347.2025.2576527

[4] Kaskina, R. and Cvetkovska, A. (2024). Artificial intelligence (AI) and human rights: Using AI as a weapon of repression and its impact on human rights. Available at: https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450(SUM01)_EN.pdf.

[5] Vesteinsson, K. and Baker, G. (2025). As Authoritarians Invest in Online Censorship, Democracies Must Meet the Challenge. Freedom House. Available at: https://freedomhouse.org/article/authoritarians-invest-online-censorship-democracies-must-meet-challenge.

[6] Global Voices East Asia (2025). How a Chinese company exports the Great Firewall to autocratic regimes. Global Voices Advox. Available at: https://advox.globalvoices.org/2025/09/18/how-a-chinese-company-exports-the-great-firewall-to-autocratic-regimes/.

[7] Maerz, S. F. (2026) ‘How practices of digital authoritarianism harm democracy’, Democratization, 33(1), pp. 163–190. doi: 10.1080/13510347.2025.2553826.

[8] Rightscon 2023 - Costa Rica (2023). New and emerging technologies need urgent oversight and robust transparency: UN experts. OHCHR. Available at: https://www.ohchr.org/en/press-releases/2023/06/new-and-emerging-technologies-need-urgent-oversight-and-robust-transparency.

[9] UN News (2023). Counter-terrorism ‘rhetoric’ used to justify rise of surveillance technology: human rights expert | UN News. [online] news.un.org. Available at: https://news.un.org/en/story/2023/03/1134552.

[10] IAPP Research and Insights (2026). Global AI Law and Policy Tracker. Available at: https://assets.contentstack.io/v3/assets/bltd4dd5b2d705252bc/blt34a8e3844fb44942/global_ai_law_policy_tracker.pdf .

[11] UNESCO (2024). Ethics of Artificial Intelligence. UNESCO. Available at: https://www.unesco.org/en/artificial-intelligence/recommendation-ethics.

[12] OECD.AI Political Observatory (2019). About the Global Partnership on Artificial Intelligence (GPAI) - OECD.AI. OECD.AI. Available at: https://oecd.ai/en/about/about-gpai.

[13] European Union (2025). The EU Artificial Intelligence Act. The Artificial Intelligence Act. Available at: https://artificialintelligenceact.eu/.

[14] European Union (2024). Regulation - EU - 2024/1689 - EN - EUR-Lex. Europa.eu. Available at: https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng.

[15] James, N. (2024). EU’s AI Act fails to set gold standard for human rights - European Disability Forum. European Disability Forum. Available at: https://www.edf-feph.org/publications/eus-ai-act-fails-to-set-gold-standard-for-human.

[16] Wright, C.N. (2023). Facial recognition technology: Federal agencies' use and related privacy protections. U.S. Government Accountability Office. Statement of Candice N. Wright, Director, Science, Technology Assessment, and Analytics.

[17] Walid Al-Saqaf (2024). OSINT is democratizing investigative journalism. Deutsche Welle. Available at: https://akademie.dw.com/en/open-source-intelligence-is-democratizing-investigative-journalism-in-the-middle-east-and-north-africa/a-69952860.

[18] censorship.ai (2023). Geneva: Evolving Censorship Evasion. [online] censorship.ai. Available at: https://geneva.cs.umd.edu/.  

[19] HURIDOCS (2025). Using machine learning in Uwazi to support human rights documentation work. HURIDOCS. Available at: https://huridocs.org/2025/04/using-machine-learning-in-uwazi-to-support-human-rights-documentation-work/.
