The UK risks falling behind the European Union’s strides in ensuring AI safety unless immediate measures are taken, caution members of the Commons Technology Committee.
The UK government will host an international AI summit at the beginning of November.
In response to inquiries from the BBC, government representatives stated they are open to considering additional actions if deemed necessary.
However, they did not say whether they supported the rapid introduction of a new law. Instead, a spokesperson pointed to the upcoming summit and an initial £100 million investment in a task force aimed at promoting the safe development of AI models.
The UK government asserted that this is “the largest dedicated funding to AI safety by any government globally”.
If legislative action isn’t introduced during the King’s Speech on November 7th, the earliest possible implementation of such laws would be in 2025, according to a report published by the committee on Thursday.
The report argues that delaying legislation for two years could leave the UK lagging behind other regulatory frameworks, such as the EU AI Act, which could establish itself as the prevailing standard and prove difficult to displace.
The situation might mirror the trajectory of data protection regulations, where UK laws followed the EU’s lead, the report suggests.
While the government’s white paper on AI regulation acknowledges the potential need for new laws, Rishi Sunak has previously contended that initially, “much of this can likely be achieved without legislation”.
A cornerstone of his strategy is the November summit, which the government claims will mark the “world’s first major global summit on AI safety”.
The committee insists that the summit should include a diverse range of countries, including China.
The report also outlines twelve “challenges” that the UK government must confront, encompassing issues such as:
Bias: AI tools used in employment might associate women’s names with stereotypically female roles.
Privacy: AI applications have the potential to controversially identify individuals, such as in police use of live facial recognition systems.
Employment: AI systems are poised to replace certain jobs, necessitating addressing the resulting economic impact.
Use of Copyrighted Material: training AI systems on copyrighted material, especially for generative AI, raises questions of permission and compensation.
Generative AI can now produce new works in the style of renowned artists, actors, and musicians. However, achieving this requires extensive training on copyrighted content. Many creators argue that AI should not be trained on their works without proper authorization and recompense.
Efforts to establish a voluntary agreement that grants AI firms access to copyrighted material while supporting artists are underway, as noted in the report.
A planned copyright exemption for AI firms was discarded by the UK government in February.
The potential for AI to mimic individuals could also be exploited for spreading misinformation, committing fraud, or deceiving voice-recognition security systems in banks, according to MPs.
This report follows a warning from the National Cyber Security Centre, released on Wednesday, stating that large language models, the category of AI powering popular chatbots, cannot be entirely safeguarded against attacks designed to coerce them into malicious actions. The centre noted that there are currently “no failsafe measures” to eliminate this risk.
MPs generally support the government’s approach to AI safety, which avoids establishing a new AI regulator and instead assigns oversight responsibilities to existing regulatory bodies based on AI functions.
Some individuals who provided testimony to the committee, including Hugh Milward of Microsoft UK, prefer this approach over the EU’s model, which he characterized as “an example of what not to do”.
However, Milward also cautioned against overburdening UK legislation: attempting to address every issue in a single piece of legislation, he warned, risks making it overly complex and unwieldy.