An AI ‘godfather’ is raising a red flag about AI agents

  • Talk of AI agents is everywhere at Davos. AI pioneer Yoshua Bengio warned against them.
  • Bengio said AGI-powered agents could lead to “catastrophic scenarios.”
  • Bengio is researching how to build non-agent systems to keep agents in check.

Artificial intelligence pioneer Yoshua Bengio was at the World Economic Forum in Davos this week with a message: AI agents can end badly.

The topic of AI agents – artificial intelligence that can act independently of human input – has been one of the buzziest at this year’s gathering in snowy Switzerland. The event has drawn a collection of pioneering AI researchers to debate where AI is headed next, how it should be governed, and when we might see signs of machines that can reason as well as humans — a milestone known as artificial general intelligence (AGI).

“All catastrophic scenarios with AGI or superintelligence happen if we have agents,” Bengio told BI in an interview. He said he believes it is possible to achieve AGI without building agent systems.

“All AI for science and medicine, all the things people are interested in, are not agents,” Bengio said. “And we can continue to build more powerful systems that are not agents.”

Bengio, a Canadian research scientist whose early research in deep learning and neural networks laid the groundwork for the modern AI boom, is considered one of the “godfathers of AI” along with Geoffrey Hinton and Yann LeCun. Like Hinton, Bengio has warned against the potential harms of AI and called for collective action to mitigate the risks.

After two years of AI experimentation, businesses are recognizing the tangible return on investment that AI agents could provide, and agents could enter the workforce in a meaningful way as early as this year. OpenAI, which does not have a presence at this year's Davos, this week unveiled an AI agent that can surf the web for you and perform tasks such as making restaurant reservations or adding groceries to your cart. Google has previewed a similar tool of its own.

The problem Bengio sees is that people will continue to build agents no matter what, especially as competing companies and countries worry that others will get to agent AI before them.

“The good news is that if we build non-agent systems, they can be used to control agent systems,” he told BI.

One way would be to build more sophisticated "monitors" that can oversee agent systems, though that would require significant investment, Bengio said.

He also called for national regulation that would prevent AI companies from building agent models without first proving the system would be safe.

“We can advance our science of safe and capable AI, but we need to acknowledge the risks, understand scientifically where they come from, and then make the technological investment to make it happen before it’s too late, and we build things that can destroy us,” Bengio said.

“I want to raise a red flag”

Before speaking to BI, Bengio appeared on a panel about AI safety with Google DeepMind CEO Demis Hassabis.

“I want to raise a red flag. This is the most dangerous path,” Bengio told the audience when asked about AI agents. He pointed to the ways AI can be used for scientific discovery, such as DeepMind’s breakthrough in protein-structure prediction, as examples of how AI can still be powerful without being an agent. Bengio said he believes it is possible to get to AGI without giving the AI agency.

“It’s a bet, I agree,” he said, “but I think it’s a worthwhile bet.”

Hassabis agreed with Bengio that measures should be taken to mitigate the risks, such as cybersecurity protections or experimenting with agents in simulations before releasing them. That would only work if everyone agreed to build them the same way, he added.

“Unfortunately, I think there is an economic gradient, beyond science and workers, that people want their systems to be agents,” Hassabis said. “When you say ‘recommend a restaurant,’ why wouldn’t you want the next step, which is to book the table?”
