
The UK government has officially launched a research and funding initiative to improve “systemic AI safety”, which will award grants of up to £200,000 to researchers working to make the technology safer.

The Systemic Safety Grants Programme is being run by the UK’s AI Safety Institute (AISI) in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, part of UK Research and Innovation (UKRI). The first phase of the scheme is expected to fund around 20 projects from an initial pot of £4m.

Additional funding will be made available as further phases are launched, with a total of £8.5m earmarked for the scheme.

Established in the run-up to the UK AI Safety Summit in November 2023, the AISI is tasked with examining, evaluating and testing new types of AI, and is now collaborating with its US counterpart to share capabilities and build common approaches to AI safety testing.

Focused on how society can be protected from a range of AI-related risks, including deepfakes, misinformation and cyber attacks, the grants programme will build on the AISI’s work by boosting public trust in the technology and placing the UK at the heart of “responsible and reputable” AI development.

Important risks

The research will also aim to identify the critical risks of frontier AI adoption in key sectors such as healthcare and energy services, as well as potential solutions that could be developed into long-term tools to address risks in these areas.

Digital secretary Peter Kyle said his “focus is on accelerating the adoption of AI across the country so that we can kickstart growth and improve public services”. Central to that plan, he added, is boosting public confidence in the innovations that are already bringing about real change.

“That’s where this grants programme comes in,” he said. “By drawing on a range of expertise from both academia and industry, we are supporting the research that will ensure that as we roll out AI systems across our economy, they can be reliable at the point of delivery.”

The programme’s opening phase will aim to deepen understanding of the challenges AI is likely to pose to society in the near future. UK-based organisations will be able to apply for the grant funding via a dedicated website.

Projects will also be able to include international partners, boosting collaboration between AI researchers and developers while strengthening the shared global approach to the safe deployment and development of the technology.

The deadline for proposals is 26 November 2024, with successful applicants to be confirmed by the end of January 2025 before being fully funded in February.

“This grants programme allows us to advance a greater understanding of the emerging topic of systemic AI safety,” said AISI chair Ian Hogarth. It will concentrate on identifying and reducing the risks posed by AI deployment in specific sectors that could have an impact on society, such as deepfakes or the potential for AI systems to fail unexpectedly.

By bringing together research from a variety of disciplines and backgrounds in this effort, he added, the aim is to build up empirical evidence of where AI models might pose risks, so that a balanced approach to AI safety can be developed for the global common good.

The Department for Science, Innovation and Technology’s (DSIT) detailed press release outlining the funding scheme also reiterated Labour’s pledge to pass highly targeted legislation for the few companies creating the most powerful AI models, adding that the government would ensure “a proportionate approach to regulation rather than new blanket rules on its use”.

In May 2024, the AISI announced it had opened its first international office in San Francisco to make further inroads with leading AI companies headquartered there, such as Anthropic and OpenAI.

In the same announcement, the AISI also publicly released its AI model safety testing results for the first time.

It found that none of the five publicly available large language models (LLMs) tested were able to complete more complex, time-consuming tasks without humans overseeing them, and that all of them remain highly vulnerable to simple “jailbreaks” of their safeguards. It also found that some models will produce harmful outputs even without dedicated attempts to circumvent these safeguards.

However, the AISI said the models were able to complete basic to intermediate cyber security challenges, and that some demonstrated a PhD-equivalent level of knowledge in chemistry and biology (meaning their responses to PhD-level questions were comparable to those given by PhD-level experts).
