Navigating the Complex Landscape of AI, Human Rights, and Regulation: Insights from Recent Events

In a rapidly evolving world, the intersection of artificial intelligence (AI), human rights, and regulation has never been more critical.

Introduction

A recent event, organized by B-TECH and the Global Network Initiative (GNI), featured Dr. Sebastian Smart, CAJI (Centre for Access to Justice & Inclusion) Research Fellow in Access to Justice, Inclusion, & Technology. Dr. Smart's presentation and the ensuing discussions shed light on the complex dynamics at play, particularly in the context of Latin America, where regulatory frameworks are rapidly evolving.

Panel 1: Mapping AI Regulations and Human Rights

Dr. Smart unveiled a forthcoming report that meticulously maps the regulatory landscape in Latin America. This comprehensive analysis, set to be published by the end of the year, delves into AI regulations across nine countries in the region. The panel was followed by a discussion of current European regulatory frameworks, including the EU AI Act and the Corporate Sustainability Due Diligence Directive (CSDDD).

Key Highlights from Panel 1:

  • A Shift Towards Mandatory Regulation in Latin America: In 2023, there was a notable shift towards more comprehensive, mandatory AI regulation, spurred by the increased accessibility and widespread use of large-scale AI models like ChatGPT.
  • Focus on Human Rights and Transparency: Most Latin American regulations emphasize human rights and transparency standards, making significant strides in clarifying business expectations. However, the report identified crucial gaps, such as the lack of reference to human rights due diligence and limited attention to the full AI value chain.
  • European Context: Similar issues were raised in the European context, where references to the UN Guiding Principles on Business and Human Rights (UNGPs) and to stakeholders are not yet well established in the regulations currently under discussion.

Panel 2: Policy Coherence for AI Regulation

The second panel focused on the UNGPs and policy coherence in AI regulation. Key takeaways included:

  • Recognizing Global Power Dynamics: The panel stressed the importance of recognizing power imbalances in global AI regulatory discussions, where the voices of the global majority are often marginalized.
  • Rights-Based Regulation: Effective AI regulation should go beyond a risk-based approach and incorporate key rights like the right to explanation, non-discrimination, and transparency.
  • Sectorial Approach: Rather than regulating AI as a monolithic entity, there is a growing call to adopt a sectorial approach, considering the unique requirements of different industries like finance and healthcare.
  • European AI Act: Ongoing political negotiations to pass the EU AI Act have brought advances, including mandatory human rights impact assessments, enhanced protections for affected individuals, and more democratic processes.

Panel 3: Risk Assessment Methodologies for AI

The third panel tackled risk assessment methodologies for AI, with a focus on generative AI and the importance of human rights-based assessments. Key insights included:

  • Addressing Current Harms: While existential risks were considered, the primary emphasis was on addressing current harms, especially those affecting vulnerable groups.
  • Call for More HRIAs: Civil society organizations stressed the need for more human rights impact assessments (HRIAs), recognizing that even imperfect assessments can be valuable for litigation.
  • Examination of Key Processes: The discussion identified critical processes, such as how companies co-opt public services and the impacts of generative AI on vulnerable groups, particularly in cases of deepfakes and sextortion.
  • Faster HRIAs: Recognizing the lengthy nature of comprehensive HRIAs, participants emphasized the need for faster assessments and clear prioritization criteria.

Launch of Freedom House's Freedom on the Net Report

Dr. Smart also attended the launch of Freedom House's Freedom on the Net report, which highlighted the repressive potential of AI. Key findings included the increasing sophistication and accessibility of AI-based tools for manipulating text, audio, and imagery.

The report underscored that while AI has not replaced older methods of information control, it is a powerful tool for governments to stifle free expression. The ensuing discussion emphasized the need to focus on real, current risks rather than existential ones.

The discussion placed particular emphasis on the African context, where debates have shifted from digital infrastructure and a view of technology as an enabler of participation and connection to a vision of technology as a double-edged sword. This is particularly evident in repressive regimes that use sophisticated AI-driven tools to target opponents.

Conclusion

The events offered valuable insights into the dynamic relationship between AI, human rights, and regulation. They underscored the importance of ensuring that AI regulation is rooted in human rights principles and that it keeps pace with evolving technology. As AI continues to shape our world, a focus on protecting human rights and addressing real-world issues is paramount. The future of AI, human rights, and regulation will undoubtedly be a complex yet vital journey for all stakeholders.