2024-09-13 16:29:08
www.ft.com
OpenAI’s latest models have “meaningfully” increased the risk that artificial intelligence will be misused to create biological weapons, the company has acknowledged.
The San Francisco-based group announced its new models, known as o1, on Thursday, touting their new abilities to reason, solve hard maths problems and answer scientific research questions. These advances are seen as a crucial breakthrough in the effort to create artificial general intelligence — machines with human-level cognition.
OpenAI’s system card, a tool to explain how the AI operates, said the new models had a “medium risk” for issues related to chemical, biological, radiological and nuclear (CBRN) weapons — the highest risk rating the company has ever assigned to its models. OpenAI said this meant the technology had “meaningfully improved” the ability of experts to create bioweapons.
AI software with more advanced capabilities, such as the ability to perform step-by-step reasoning, poses an increased risk of misuse in the hands of bad actors, according to experts.
Yoshua Bengio, a professor of computer science at the University of Montreal and one of the world’s leading AI scientists, said that if OpenAI now represented “medium risk” for chemical and biological weapons “this only reinforces the importance and urgency” of legislation such as a hotly debated bill in California to regulate the sector.
The measure — known as SB 1047 — would require makers of the most costly models to take steps to minimise the risk their models were used to develop bioweapons. As “frontier” AI models advance towards AGI, the “risks will continue to increase if the proper guardrails are missing”, Bengio said. “The improvement of AI’s ability to reason and to use this skill to deceive is particularly dangerous.”
These warnings came as tech companies including Google, Meta and Anthropic are racing to build and improve sophisticated AI systems, as they seek to create software that can act as “agents” that assist humans in completing tasks and navigating their lives.
These AI agents are also seen as potential moneymakers for companies that are battling with the huge costs required to train and run new models.
Mira Murati, OpenAI’s chief technology officer, told the Financial Times that the company was being particularly “cautious” with how it was bringing o1 to the public because of its advanced capabilities, although the product will be widely accessible to ChatGPT’s paid subscribers and to programmers via an API.
She added that the model had been tested by so-called red-teamers — experts in various scientific domains who have tried to break the model — to push its limits. Murati said the current models performed far better on overall safety metrics than their predecessors.
OpenAI said the preview model “is safe to deploy under [its own policies and] rated ‘medium risk’ on [its] cautious scale, because it doesn’t facilitate increased risks beyond what’s already possible with existing resources”.
Additional reporting by George Hammond in San Francisco