Expanding our commitments to countering violent extremism online

September 20, 2022

Today, Microsoft is expanding our commitments and contributions to the Christchurch Call, a critical multistakeholder initiative to eliminate terrorist and violent extremist content online.

These new commitments are part of our wider work to advance the responsible use of AI and are focused on empowering researchers, increasing transparency and explainability around recommender systems, and promoting responsible AI safeguards. We’re supporting these commitments with a new $500,000 pledge to the Christchurch Call Initiative on Algorithmic Outcomes to fund research on privacy-enhancing technologies and the societal impact of AI-powered recommendation engines.

Meaningful progress after the Christchurch tragedy

Three years ago, after the horrific attack at two mosques in Christchurch, New Zealand, Prime Minister Jacinda Ardern called on government, industry and civil society leaders to come together to find meaningful solutions to address the growing threat of terrorist and extremist content online. Two months after the tragedy, Prime Minister Ardern and French President Emmanuel Macron established the Christchurch Call to Action, creating a community that has grown to include 120 governments, online service providers and civil society organizations to take forward this important and difficult work.

Important progress has been made, but as events like this year’s shooting in Buffalo, New York, make painfully clear, there is more work to do. That’s why it is critical for industry leaders to join the Christchurch Call 2022 Leaders’ Summit in New York today.

As a founding supporter of the Christchurch Call, Microsoft has committed to industry’s nine steps to tackle terrorist and violent extremist content. In the three years since the Call was formed, we have worked with industry, civil society and governments to advance these commitments, including through the Global Internet Forum to Counter Terrorism (GIFCT). Working together, we have made strides towards tackling these online harms and demonstrating the power of multistakeholder models in addressing complex, societal problems. Today’s meeting provides an opportunity for the community to come together, to take stock of our progress and – most critically – look to the future.

One important area that requires more attention is understanding how technology can contribute to the spread of harmful content, particularly through AI systems that recommend content. These systems create significant benefits, helping people process ever-growing volumes of information and become more creative and productive. Examples include helping people reduce energy consumption, helping students identify learning resources, and helping farmers anticipate weather conditions to improve crop production. Yet this same technology can also play a role in the spread of harmful content.

In recent months, Prime Minister Ardern has highlighted these challenges and spoken eloquently about the need for stronger action. As she has indicated, it is not easy to delineate the risks of this technology. But, given what’s at stake, we need to address these risks head on. The potential harms are wide-ranging and diffuse, and evolving technology interacts with social challenges in increasingly complex ways. The path forward must include research through meaningful multistakeholder collaborations across industry and academia, built in part on greater transparency from industry about how these systems work.

To advance these goals, Microsoft commits to the following next steps:

Empowering researchers

We need effective partnerships to enable industry and the research community to dig into key questions. To help with this critical endeavor, we pledge to provide:

Support for the Christchurch Call Initiative on Algorithmic Outcomes: We are joining a new partnership with Twitter, OpenMined and the governments of New Zealand and the United States to research the impact of AI systems that recommend content. The partnership will explore how privacy-enhancing technologies (PETs) can drive greater accountability and understanding of algorithmic outcomes, starting with a pilot project as a “proof of function.”
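To give a sense of the kind of technique the initiative will explore, here is a minimal sketch of one widely used PET, differential privacy, which can let a platform share aggregate statistics about recommender behavior with outside researchers without exposing any individual user. The function names and figures below are hypothetical illustrations, not the initiative’s actual methodology.

```python
# Illustrative sketch of one privacy-enhancing technique (differential
# privacy). All names and figures are hypothetical examples, not the
# Christchurch Call initiative's actual design.
import numpy as np

def dp_count(flags: list[bool], epsilon: float = 1.0) -> float:
    """Release a noisy count under epsilon-differential privacy.

    A counting query has sensitivity 1 (one user changes the count by
    at most 1), so Laplace noise with scale 1/epsilon ensures no single
    user's record measurably changes the published statistic.
    """
    true_count = sum(flags)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g., how many users in a research sample were shown a flagged item
sample = [True, False, False, True, True]  # hypothetical per-user flags
print(dp_count(sample, epsilon=0.5))
```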

Advancing transparency

We are also taking steps to increase transparency and user control for recommender systems developed at Microsoft. Specifically, we are:

Launching new transparency features for Azure Personalizer: To help advance understanding around recommender systems, we are launching new transparency features for Azure Personalizer, a service that offers enterprise customers generally applicable recommender and decision-making functionality they can embed in their own products. The new functionality will inform customers of the most important attributes that influenced a recommendation and the relative weight of each attribute. Customers can pass this functionality on to their end users, helping them understand why, for example, a particular article or product was shown to them and what these systems are being used for.
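As a hedged sketch of how a customer might surface this to end users, the snippet below turns per-attribute influence scores into a plain-language “why am I seeing this?” message. The data shapes and names are illustrative only, not the actual Personalizer API; see the service documentation for the real request and response formats.

```python
# Hedged sketch: turning Personalizer-style attribute weights into a
# user-facing explanation. The FeatureScore shape is illustrative only,
# not the service's actual response format.
from dataclasses import dataclass

@dataclass
class FeatureScore:
    feature: str   # attribute the model considered, e.g. "topic=finance"
    weight: float  # relative influence on this recommendation

def explain(action_id: str, scores: list[FeatureScore], top_n: int = 3) -> str:
    """Build a 'why am I seeing this?' message from the most
    influential attributes behind a recommendation."""
    top = sorted(scores, key=lambda s: abs(s.weight), reverse=True)[:top_n]
    reasons = ", ".join(f"{s.feature} ({s.weight:+.2f})" for s in top)
    return f"'{action_id}' was recommended mainly because of: {reasons}"

# Hypothetical attribution data for one recommended article
scores = [
    FeatureScore("topic=renewable-energy", 0.61),
    FeatureScore("reading-history=long-form", 0.24),
    FeatureScore("time-of-day=morning", 0.08),
]
print(explain("article-1138", scores))
```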

Advancing transparency at LinkedIn: LinkedIn continues to take important steps to foster transparency and explainability in its use of AI recommender systems. These include an ongoing series of educational content about its feed – such as what content shows up, how its algorithms work, and how members can tailor and personalize their content experience. LinkedIn has also shared perspectives and insights on its engineering blog about its approach to Responsible AI, how it integrates fairness into its AI products, and how it builds transparent and explainable AI systems.

Continuing to build out safeguards for responsible AI

The current discussion around recommender systems highlights the importance of thinking deeply about AI system design and development. Human beings make many choices about the use cases to which AI systems are put and the goals those systems will serve. For example, with an AI system like Azure Personalizer, which recommends content or actions, the system owner decides which actions to observe and reward and how to embed the system in a product or operational process, and those decisions ultimately shape the system’s potential benefits and risks, as the sketch below illustrates.
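Here is a minimal sketch of the rank-and-reward loop behind a Personalizer-style recommender. Everything in it is a hypothetical stand-in rather than the service’s implementation; the point is to show where the owner’s choices enter: which actions exist, what context is observed, and what behavior counts as a reward.

```python
# Minimal sketch of a rank-and-reward loop. All names are hypothetical
# stand-ins; this is not Azure Personalizer's implementation.
import random

ACTIONS = ["news-article", "how-to-video", "podcast"]  # owner-chosen action set

def rank(context: dict) -> str:
    """Stand-in for a rank call: pick an action for this context.
    (A real system learns this policy; here we choose at random.)"""
    return random.choice(ACTIONS)

def observed_reward(clicked: bool, seconds_on_item: float) -> float:
    """Owner-defined reward: what counts as a 'good' outcome.
    This single choice shapes what the system optimizes for."""
    return 1.0 if clicked and seconds_on_item > 30 else 0.0

context = {"time_of_day": "morning", "device": "mobile"}  # observed context
choice = rank(context)
reward = observed_reward(clicked=True, seconds_on_item=42.0)
print(choice, reward)  # the reward feeds back so the policy can learn
```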

At Microsoft, we continue to build out our responsible AI program to help ensure that all AI systems are used responsibly. We recently published our Responsible AI Standard and our Impact Assessment template and guide to share what we are learning from this process and help inform the broader discussion about responsible AI. In the case of Personalizer, we have published a Transparency Note to help our customers better understand how the technology works, the considerations relevant to choosing a use case, and the important characteristics and limitations of the system. We look forward to continuing this important work so that the benefits of AI can be realized responsibly.

Looking ahead

We know we have more work to do to help create a safe, healthy online ecosystem and ensure the responsible use of AI and other technology. Today’s Christchurch Call Leaders’ Summit is an important step on this journey and a timely reminder that no company, government or group can do this alone. As I’ve been reminded by today’s discussion, we also need to hear from young people. The 15 young men and women in our Council for Digital Good Europe tell us that, while young people may be at risk from online hate and toxic online ecosystems, they also have the passion, idealism and determination to help shape healthier online communities. In a world where the impact of technology is increasingly linked to the fundamental health of democracy, we owe young people our best efforts to help them build a safer future.
