Global Governance and Inclusion

Reframing the Global Debate Around AI

This post looks at some of the lessons we’ve learned over the last year in our work as part of the Ethics and Governance of AI Initiative, a collaboration of the Berkman Klein Center and the MIT Media Lab.

As part of our work on governance and inclusion with the Ethics and Governance of Artificial Intelligence Initiative, we recently met with regulators from a Pacific island nation who described the chaos of each agency’s IT department making monumental decisions about procuring AI technologies without a basic understanding of the underlying technology and without knowing how to ensure these systems are fair, reliable, and accurate. Similarly, during a discussion of emerging AI issues held as part of our collaboration with Harvard Law School’s Cyberlaw Clinic, a state law enforcement official remarked, “I guarantee you that no one in my office is thinking about these issues.” Policymakers all over the world are trying to address these emerging AI technologies, yet they face a substantial information asymmetry.

Our work in this first year has given us the opportunity to speak directly with policymakers and institutions across the world who are grappling with how to unlock the societally beneficial aspects of AI technologies while mitigating their risks. From small countries to multilateral international organizations, from the Global North to the Global South, and from regions leading the development and implementation of AI technologies to those that face more basic challenges such as knowledge infrastructure gaps, we see both a recognition of the challenges and a willingness to work toward solutions. As policymakers and regulators have asked us for guidance, we have the opportunity to reframe the policy debate around AI and, hopefully, to shape the trajectory of the technology itself.

Although a handful of government AI strategies and draft principles have recently emerged, we still lack a consensus definition of AI. Many of these strategies treat AI as if it were a single technology rather than several distinct ones. Moreover, the actual impact of AI-based technologies on the digital economy and society at large, and even how to measure it, remains unclear. Although new AI technologies are increasingly touching every part of society, from schools, to homes, to hospitals, to industry, and beyond, policymakers currently have a limited knowledge base to draw from when deciding how to leverage the opportunities and address the challenges these next-generation technologies bring. Over this past year, we have worked to better understand the needs of global policymakers and have made strides toward addressing those needs.

Here are three lessons we have learned in our work, along with takeaways for those working on the ethics and governance of AI.

1. Today, it remains unclear what the actual impact of AI-based technologies will be on the digital economy and society at large, and it is even less clear how we can measure such changes across contexts. In our work, we have learned that the right metrics and instruments are not yet readily available to measure AI’s societal impact. In response, we have facilitated initial efforts at academic institutions that show promise in helping to measure AI’s impacts.

It is impossible for policymakers to effectively address challenges that they do not yet understand. When it comes to AI, there is much policymakers do not understand about these technologies, in part because of information asymmetries between the public and private sectors, the highly technical nature of AI, and the sheer pace of technological development. But even beyond those issues, large questions about AI’s societal impacts remain: How will it affect the future of labor? How will it affect underrepresented communities? How will it change social interactions and societal norms? How will it change education? How do we encourage individuals from underrepresented populations to explore and pursue careers in the field of AI, through both educational programs and informal learning? These and many other questions are currently unanswered, largely because we are not yet tracking and studying the relevant metrics. Traditional measurements such as GDP, inflation, and unemployment cannot fully capture the range of impacts that AI is likely to have on society. We need new approaches to adequately describe those impacts.

To that end, we have played an important role in facilitating a new collaboration between Tsinghua University in China and the AI Index, a project within Stanford’s One Hundred Year Study on Artificial Intelligence (AI100), to begin developing a coordinated set of social impact measurements across the United States and China. Without this coordination, two of the countries that will presumably be most affected by AI in the near future would develop divergent and incompatible systems for measuring those impacts. We believe that such measurements will better enable us to respond to some of the most significant impacts of AI across these countries, while also helping us understand how AI’s impacts differ in two very different political, social, and economic contexts.

2. Without thoughtful intervention, AI has the potential to exacerbate structural, economic, social, and political imbalances and to further reinforce inequalities. In our work, we have learned that existing and emerging governance frameworks must be designed to enable underrepresented communities to participate meaningfully in how AI technologies are designed, developed, deployed, and evaluated. In response, we have forged new partnerships that enable underrepresented communities to have an impact on AI, not just be impacted by it.

Throughout the last year, we have consistently heard from policymakers and leaders from around the world that questions of diversity and inclusion are some of their most pressing AI governance concerns (see here, here, and here). These concerns take different forms. Some policymakers are concerned about the negative impacts of AI technologies developed from datasets and by engineers who do not reflect the diversity of their constituents. Other policymakers want to ensure that their own engineers and computer scientists will be well prepared to compete in the global AI marketplace. Still others want to ensure that their national technology companies will find relevance (and success) in very different societies and markets around the world. Policymakers have received the message that diversity and representation can have significant impacts on the effectiveness and legitimacy of AI technologies, but now they want to know what steps they should take.

In November 2017, we co-hosted the Global Symposium on AI and Inclusion, which brought together nearly 200 policymakers, academics, activists, technologists, and civil society and business leaders from 70 countries. The symposium initiated a reframing of the AI and inclusion debate. Traditionally, within the diversity and inclusion debate, gaps are addressed by inviting underrepresented communities into conversations about AI or by throwing money at the problem in the hopes of spurring participation. Instead, at the Symposium we explored how to advance inclusion through a “self-determination” approach that offers agency and space for underrepresented individuals to represent themselves in a way that makes sense to them — at every stage of AI technical development: design, development, deployment, and evaluation — and to guide the systems and policies that affect them.

We have translated the lessons learned from the Symposium to better help policymakers address inclusion issues with respect to AI, working side-by-side with them to develop contextually appropriate responses. For example, we have created an AI-specific educational workshop for state attorneys general in the United States, a model that in the coming year could be extended to help train decision makers in the social and criminal justice system in other parts of the world, including judges across Latin America. We are also advising on the development of academic research programs on AI policy in Thailand and Singapore.

3. Bridging the knowledge gap between different sectors, experiences, norms, and cultures is complex because stakeholders may come to the table without a shared, common language about AI technologies or governance. They may also have different levels of comfort with AI technologies, different levels of resources available to invest in governance processes, or a lack of trust. For societies to benefit from AI technologies, global policymakers and governance systems will need long-term capacity-building efforts. In response, we have provided direct guidance to policymakers through the United Nations and other international organizations.

When faced with difficult and novel challenges, global policymakers often look to each other; international organizations like the ITU and the United Nations have typically served as clearinghouses, facilitating that important knowledge exchange. With AI, however, many of these organizations have found themselves searching for answers as well. By creating a playbook for policymakers and regulators and educating them about the range of tools at their disposal, we have an opportunity to shape the debate and influence the development of AI governance worldwide.

As the UN System seeks to enhance its own capacity to address AI issues, we have been invited to share our own research and expertise. For example, we directly shaped the discussion around “data for good” at the ITU’s recent AI for Good Summit, culminating in the publication of a “Data Commons” framework. We have also created a framework to help guide regulators in responding to AI issues, which will be published at the ITU’s Global Symposium for Regulators this summer.

Looking Ahead

Policymakers are seeking guidance on how to address some of the most pressing and vexing technical and societal challenges of our day. We have a tremendous opportunity to help reshape that debate, and we have already begun. Our work has laid a foundation by establishing baselines for capacity development, inclusive governance, and impact measurement. This necessary first step now allows us to focus on some of the hard governance questions that will need to be addressed in specific areas such as government procurement and use of AI, autonomous vehicles and weapons, data collection and privacy, and more.

The development, application, and capabilities of AI-based systems are evolving rapidly, leaving largely unanswered a broad range of important short- and long-term questions related to the social impact, governance, and ethical implications of these technologies and practices. Over the past year, the Berkman Klein Center and the MIT Media Lab, as anchor institutions of the Ethics and Governance of Artificial Intelligence Fund, have initiated projects in areas such as social and criminal justice, media and information quality, and global governance and inclusion. These projects aim to provide guidance to decision-makers in the private and public sectors, to engage in impact-oriented pilot projects that bolster the use of AI for the public good, and, at the same time, to build an institutional knowledge base on the ethics and governance of AI, foster human capacity, and strengthen interfaces with industry and policymakers. Over this initial year, we have learned a lot about the challenges and opportunities for impact. This snapshot provides a brief look at some of those lessons and how they inform our work going forward.
