Human rights in the digital environment: navigating the interplay of global policy and technology in AI

On 14 and 15 November, I had the privilege of representing the Centre for Access to Justice & Inclusion (CAJI) in a series of meetings in Geneva that brought together representatives from civil society organisations, academia, companies and governments to discuss issues related to human rights in the digital environment, with a particular focus on AI. One of the key questions was how to navigate the different forums and how to decide where to focus attention. While that is usually a strategic decision that each stakeholder needs to make for itself, here I want to offer some suggestions on how to close the gap between the aims of different stakeholders, the technical aspects of AI, and how to operationalise those aims in these forums. I also explore, in a very preliminary way, some of the challenges and opportunities, most of which were highlighted during a workshop delivered by Responsible AI, Law, Ethics and Society.

In the last few weeks alone, we have seen a series of new normative frameworks for artificial intelligence: the signing of the Bletchley Declaration, several initiatives announced by U.S. Vice-President Harris, the launch of the UN High-Level Advisory Body on AI, the publication of the G7 Code of Conduct and Principles, a separate U.S. Executive Order on AI, and China’s Global AI Governance Initiative. And these are not the only initiatives around the world. Global Partners Digital has mapped over 50 multilateral, multistakeholder, state-led or private initiatives to regulate artificial intelligence. This massive proliferation of forums and initiatives makes it difficult to follow them all and to select which should be the focus of any given stakeholder. Governments, civil society organisations and companies all have different resources and capacities, and knowing how to navigate these forums is particularly important for stakeholders from the Global Majority. In what follows, I take a guided tour through the intricate landscape of global policy and technology in artificial intelligence, exploring the nexus of human rights, technological advancement, and the challenges of operationalising policy goals.

Bridging the gap between foundational principles and technology

Depending on the stakeholder, there will be certain values that need to be represented in different forums. Some stakeholders will pursue policy goals such as strengthening democracy or safety; others will aim to advance human-centric values and human rights as core principles that should shape the trajectory of AI development. Beyond the optimisation of resources, let us suppose that these are some of the foundational principles (inclusivity, privacy and democracy, among others) that different stakeholders would like to see reflected in the core discussions and forums on AI. The question then is how to translate these broad values into actionable policies that affect how citizens, consumers and businesses operate on a global scale.

Technology, for its part, represents the means to achieve policy goals. From foundation models to generative AI (as seen in the previous blog piece), the landscape is vast and evolving. Understanding the building blocks and technical aspects, such as training data and machine learning techniques, is crucial to navigating the different approaches and initiatives that have been developed at a global scale. The bridge between policy goals and technology lies in the operationalisation phase. Here we face challenges and opportunities that include interfacing with existing laws, technical governance, market dynamics, jurisdictional issues, and the quest for interoperability. As we strive to make policy goals actionable, we need to explore the implications of legislation and regulation, and the role of international bodies like the UN.

Challenges and opportunities

At the national level, we can observe a series of challenges and opportunities for the regulation of artificial intelligence. Firstly, there is the issue of political coherence and how new regulations or norms would interface with existing laws. In this context, it is fundamental for any stakeholder who wants to engage in discussions of AI regulation to clearly understand the existing legal frameworks, including, among others, data protection and copyright law, and how they influence the development and deployment of AI models. Secondly, stakeholders may face barriers in terms of technical language. On this point, multistakeholder involvement and cooperation are essential. This is something we reviewed extensively in the previous blog post, but frameworks like the NIST AI Risk Management Framework may also be useful, for example, for policymakers. Finally, it is important to highlight that at the national level, different stakeholders also need to navigate market dynamics. That means, among other things, understanding how antitrust considerations may become crucial as major players in the AI supply chain wield significant power.

At the international level, we find another set of challenges and opportunities. Firstly, there are jurisdictional issues: it is not always easy to navigate different approaches to regulation across jurisdictions. The G7 Hiroshima process is quite explicit on this point, stating that “Different jurisdictions may take their own unique approaches to implementing these actions in different ways”, an open door for fragmentation in the pursuit of international AI regulation. There is also the question of how different processes can interoperate with one another while avoiding duplication. In addition, at the international level we observe a clear geopolitical challenge: most of the regulatory forums and initiatives have been developed in the Global North, mainly Europe and the US. Yet there are some signs of a potential shift, for example through the G20, with the 2023 New Delhi Leaders’ Declaration, and through the BRICS Institute of Future Networks and its AI Study Group.

How these international efforts communicate with national processes is another challenge. How public demands, domestic regulations, and national security concerns shape AI policy will depend heavily on diplomacy and collaboration in negotiations with different stakeholders. International forums such as the United Nations should therefore be important players. It is in already established international forums that Global South/Global Majority concerns could be heard in the AI policy discourse, and that diplomatic avenues and collaborations for addressing challenges on a global scale could be developed.

Conclusion

It becomes evident that the interplay between global policy and technology in AI is a multifaceted journey. Navigating the layers of policy goals, technology, and operationalisation requires a nuanced understanding of the challenges and opportunities involved. The convergence of international efforts and the role of organisations like the UN will shape the future of AI governance; the task is to ensure a balanced and inclusive approach to the development and deployment of artificial intelligence technologies.

Sebastian Smart

Human rights in the digital environment #1: Principles and trade-offs of algorithm design in social programs

Human rights in the digital environment #2: Demystifying large language models and foundation models in AI