UK’s AI Safety Summit and US executive order usher in an era of AI regulation

This is the week when governments around the world perceptibly moved the conversation forward on the broader artificial intelligence (AI) landscape, including capabilities, risks, sustainability and trust, as well as the standardisation of watermarks on AI-generated content. Between US President Joe Biden signing an executive order, the Bletchley Declaration at the UK AI Safety Summit, and the G7 agreement on a code of conduct for AI companies, the first definitive steps have been taken towards AI regulation. AI companies seem to be willing partners in this process, at least for now.

For representational purposes only. (Getty Images/iStockphoto)

The push for regulation and quality controls was needed, at a time when the capabilities of general-purpose AI models (called frontier AI) are beginning to exceed what today's most advanced AI models can do, and more specialised AI models are also adopting broader tools. Guidelines are being pieced together to keep tabs on how AI systems are tested for safety and accuracy, and to ensure user data privacy isn't ignored when training data is gathered for AI systems.

UK’s pitch for safe AI leadership

The UK’s AI Safety Institute, the first of its kind globally, will evaluate the risks of AI models both before and after they are released. The institute has the support of other countries as well as major AI companies, including OpenAI and DeepMind. Global governmental cooperation is something UK Prime Minister Rishi Sunak had pitched for earlier this year. At London Tech Week this summer, he cited how AI doesn’t respect traditional national borders, necessitating global cooperation between nations and labs.

“I believe the achievements of this summit will tip the balance in favour of humanity. Because they show we have both the political will and the capability to control this technology and secure its benefits for the long-term,” said Sunak, at the AI Safety Summit.

It will be a collaborative effort with the Alan Turing Institute, the UK’s national institute for data science and AI, to weigh in on risks including bias and misinformation, and even the risk of humans losing control of an AI system entirely. What was reserved for sci-fi films until a few years ago is a plausible scenario now. “The support of international governments and companies is an important validation of the work we’ll be carrying out to advance AI safety and ensure its responsible development,” says AI Safety Institute chair Ian Hogarth.

The Bletchley Declaration, marking the end of the AI Safety Summit and signed by 28 countries and the European Union, including the UK, India, the US, China, France, Germany, Canada, Israel, Saudi Arabia, Ukraine, Spain and Singapore, spoke specifically about opportunities and risks from AI. There is concern that as AI models continue to develop at a rapid pace, risks may emerge that are difficult to pre-empt or predict, since capabilities may not be understood beforehand.

“Particular safety risks arise at the ‘frontier’ of AI, understood as being those highly capable general-purpose AI models, including foundation models, that could perform a wide variety of tasks – as well as relevant specific narrow AI that could exhibit capabilities that cause harm – which match or exceed the capabilities present in today’s most advanced models,” says the declaration statement.

If development, testing and results remain unchecked, there is a risk that AI systems’ intentions become misaligned with those of society at large, with domains including cybersecurity and biotechnology particularly exposed.

Ahead of the summit, AI companies including Anthropic, Amazon, DeepMind, Meta, Microsoft and OpenAI published their safety policies for the first time. This transparency may prove helpful in understanding the gaps between what companies do and what global regulations would like them to do.

US sets the ball rolling on AI regulation

When US President Joe Biden signed the long-promised ‘Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence’ on Tuesday, it was amply clear a new chapter in AI evolution was underway. AI companies are now required to share safety test results with the government before those models are released for commercial use.

There will also be security standards that need to be followed and verified, particularly for the use of AI in society-facing implementations such as healthcare and education.

To simplify the generational improvement aspect with a hypothetical example: OpenAI would need to share these test results with federal agencies before releasing GPT-5.0 for use by enterprises, consumers and Microsoft (for its Bing chatbot). This process is necessitated by the significant jumps in capability that each new generation of AI models potentially brings, be it chatbots, image generators or larger AI models.

Case in point: a new AI system that is believed to be better than OpenAI’s GPT models at certain tasks. Brenden M. Lake of New York University and Marco Baroni of the Catalan Institution for Research and Advanced Studies (ICREA) published research in October describing an AI system that allows machines to communicate with humans in a more human-like way than most present AI systems can manage.

The US’ AI guidelines have also standardised digital watermarking for AI-generated visual content. Some AI companies are already attaching creator and owner information to AI-generated content. Just last month, Adobe announced Content Credentials, a “nutrition label” for each AI-generated image from its Firefly model. Details included will be the creator’s name, the date and the edits made.
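As a rough illustration of the kind of information such a label could carry, here is a minimal sketch of a hypothetical provenance record for an AI-generated image. The field names and the `ProvenanceLabel` structure are assumptions made for this example; they are not Adobe's or the C2PA's actual Content Credentials schema.

```python
# Hypothetical provenance "nutrition label" for an AI-generated image.
# Field names are illustrative only, not the actual Content Credentials schema.
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class ProvenanceLabel:
    creator: str        # who made or requested the image
    created_on: str     # ISO date the image was generated
    generator: str      # AI model that produced the image
    edits: list         # human-readable list of edits applied

label = ProvenanceLabel(
    creator="Jane Doe",
    created_on=date(2023, 10, 10).isoformat(),
    generator="Adobe Firefly",
    edits=["generated from text prompt", "cropped", "colour adjusted"],
)

# Serialise the label so it could be embedded in, or shipped alongside, the image file.
print(json.dumps(asdict(label), indent=2))
```

In practice, schemes such as Content Credentials cryptographically bind this kind of metadata to the image file itself, so that viewers can verify who created the content and how it was altered.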
