
OpenAI And Microsoft Join UK's International Coalition To Safeguard AI Development

London; February 2026: Leading tech firms OpenAI and Microsoft are the latest to join an initiative spearheaded by the UK’s AI Security Institute (AISI), encouraging trust and public confidence in AI as it rewires public services and drives national renewal.

Announced by Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan as the AI Impact Summit in India draws to a close today (Friday 20 February), the news bolsters the work of AISI's Alignment Project, which was first announced last summer.

Some £27 million will now be made available through the fund, supporting research to ensure AI systems work as they are supposed to, with £5.6 million coming from OpenAI and additional support from Microsoft and others.

Consolidating the United Kingdom's position as a world leader in frontier AI research, today also sees the first Alignment Project grants awarded to 60 projects across 8 countries, with a second round due to open this summer.

AI alignment refers to the effort of steering advanced AI systems to reliably act as we intend them to, without unintended or harmful behaviours. It involves developing methods that prevent such unsafe behaviours as AI systems become more capable. Progress on alignment will boost confidence and trust in AI, ultimately supporting the adoption of systems which are increasing productivity, slashing medical scan times for patients, and unlocking new jobs for communities up and down the country.

Without continued progress in alignment research, increasingly powerful AI models could act in ways that are difficult to anticipate or control, which could pose challenges for global safety and governance.

UK Deputy Prime Minister, David Lammy, said: “AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset. We’ve built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology. The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort”.

UK AI Minister, Kanishka Narayan, said: “We can only unlock the full power of AI if people trust it – that’s the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on. With fresh backing from OpenAI and Microsoft, we’re supporting work that’s crucial to ensuring AI delivers its huge benefits safely, confidently and for everyone”.

Kanishka Narayan further added: "Alignment is crucial for the security of advanced AI systems and their long-term adoption across all walks of life. It is about making sure AI models operate as they should, even as their capabilities rapidly evolve. With the rise of AI systems that can perform increasingly complex tasks, there is a growing global consensus that AI alignment is one of the most urgent technical challenges of our era".

Beyond OpenAI and Microsoft, AISI's Alignment Project is supported by an international coalition including:

  • Canadian Institute for Advanced Research (CIFAR)
  • Australian Department of Industry, Science and Resources’ AI Safety Institute
  • Schmidt Sciences
  • Amazon Web Services (AWS)
  • Anthropic
  • AI Safety Tactical Opportunities Fund
  • Halcyon Futures
  • Safe AI Fund
  • Sympatico Ventures
  • Renaissance Philanthropy
  • UK Research and Innovation (UKRI)
  • Advanced Research and Invention Agency (ARIA)

Mia Glaese, VP of Research at OpenAI, said: “As AI systems become more capable and more autonomous, alignment has to keep pace. The hardest problems won’t be solved by any one organisation working in isolation; we need independent teams testing different assumptions and approaches. Our support for the UK AI Security Institute’s Alignment Project complements our internal alignment work and helps strengthen a broader research ecosystem focused on keeping advanced systems reliable and controllable as they’re deployed in more open-ended settings”.

As home to world-leading AI companies and research institutions, and 4 of the world's top 10 universities, the UK is uniquely positioned to lead global efforts to build AI that we can have confidence in. The Alignment Project builds on AISI's international leadership, ensuring leading researchers from the UK and collaborating partners can shape the direction of the field and drive progress on safe AI that behaves predictably. The Project combines grant funding for research, access to compute infrastructure, and ongoing academic mentorship from AISI's own leading scientists in the field.

The Alignment Project advisory board includes:

  • Yoshua Bengio, Full Professor at Université de Montréal and founder and scientific advisor of Mila – Quebec AI Institute.
  • Zico Kolter, Professor and Head of Machine Learning Department at Carnegie Mellon University.
  • Shafi Goldwasser, Research Director for Resilience, Simons Institute, UC Berkeley.
  • Andrea Lincoln, Assistant Professor of Computer Science, Boston University.
  • Buck Shlegeris, Chief Executive Officer, Redwood Research.
  • Sydney Levine, Research Scientist, Google DeepMind.
  • Marcelo Mattar, Assistant Professor of Psychology and Neural Science at New York University.

The Alignment Project: an international, cross-sector coalition offering funding of up to £1 million per project to advance the field of alignment.

Transformative AI has the potential to deliver unprecedented benefits to humanity, from medical breakthroughs and sustainable energy to solving the global housing crisis. But this future depends on ensuring powerful AI systems reliably act as we intend them to, without unintended or harmful behaviours. Without advances in alignment research, future systems risk operating in ways we cannot fully understand or control, with profound implications for global safety and security.

The Alignment Project is a coalition of government, industry, and philanthropic funders fostering collaboration and providing financial support for AI research. The Alignment Project is a global fund of over £15 million, dedicated to accelerating progress in AI alignment research. Our aim is to promote the development of advanced AI systems that are safe, reliable, and beneficial to society. We provide funding of up to £1 million to researchers from across disciplines.

Team Maverick.
