
Geoffrey Hinton fields questions from scholars, students during academic talk on responsible AI

'Godfather of AI' asks: Will Digital Intelligence Replace Biological Intelligence?
""

Geoffrey Hinton, a University Professor Emeritus of computer science who has been dubbed the “Godfather of AI,” delivers an academic talk about artificial intelligence in U of T’s Convocation Hall (photo by Johnny Guatto)

Does artificial intelligence actually understand? Would knowing more about its inner workings help to keep it in check? Could AI come up with the law of gravity if it hadn’t yet been devised?

These were among the questions that professors and students put to Geoffrey Hinton during a recent event at the University of Toronto’s 1,730-seat Convocation Hall.

The University Professor Emeritus of computer science and “godfather of AI” was there to deliver an academic talk about – and take queries on – the key differences between biological and digital intelligences, whether large language models such as ChatGPT understand what they are doing and whether digital intelligence will ultimately replace biological intelligence.


“My guess is that they will take over – they'll be much, much more intelligent than people ever were,” said Hinton, who added that humanity was likely “just a passing stage” in intelligence’s evolution.

“That's my best guess and I hope I'm wrong.”

The Oct. 27 event was co-hosted by units of U of T’s Faculty of Arts & Science in collaboration with partner institutes at the university.

Hinton’s talk came amid a flurry of AI-related developments. Three days earlier, Hinton, fellow Turing Award winner Yoshua Bengio and 22 other AI experts, including U of T professors Gillian Hadfield, Tegan Maharaj and Sheila McIlraith, published a paper calling for governments and Big Tech firms to take action on the issue, including by devoting one-third of their AI research and development budgets to AI safety. And on Oct. 30, U.S. President Joe Biden signed an executive order on artificial intelligence.

""
Hinton took questions from audience members, many of them professors and students (photo by Johnny Guatto)

“AI is poised to transform how we live and work,” said Professor Melanie Woodin, dean of the Faculty of Arts & Science, after she summarized the seminal work Hinton did on deep learning neural networks with the help of his graduate students.

“At this pivotal moment when we consider the opportunities and risks of AI, who better to guide us in this conversation than Dr. Hinton himself?”

Hinton, who is also a cognitive scientist, explained why he has grown alarmed about the technology’s risks after spending decades developing it to better understand how the human mind works. In short: rapid advances in large language models such as OpenAI’s ChatGPT and Google’s PaLM, coupled with the scaling advantages that digital intelligences enjoy because they can be copied and share information.

And he warned that neural networks’ learning capacity is likely to grow even further as more sources of information, including video, are incorporated into their training. “They could also learn much faster if they manipulated the physical world,” he said.

He finished his presentation by suggesting AI chatbots may even be capable of subjective experience – a concept that is tied up with ideas about consciousness and sentience. “The reason I believe that is because I think people are wrong in their analysis of what subjective experience is,” he said.

""
Left to right: Sheila McIlraith, Geoffrey Hinton, Gillian Hadfield and Melanie Woodin (photo by Johnny Guatto)

The talk was followed by a lengthy Q-and-A session co-ordinated by McIlraith, a professor in the department of computer science and a faculty member at the Vector Institute, where Hinton is chief scientific adviser. McIlraith said she hoped the event would inspire attendees to “help chart a course toward a future where digital and biological intelligence both enrich the human experience.”

Scholars – both professors and students – in fields ranging from philosophy to cognition probed Hinton’s thinking and, in some cases, his conclusions.

Shalev Lifshitz, a fourth-year undergraduate student in computer science who is doing AI research in McIlraith’s group at U of T and the Vector Institute, got into a back-and-forth discussion with Hinton about whether tools like ChatGPT ever truly understand what they are doing (Hinton says yes).

“I’m on the fence – I was on the fence before – but I thought he made very interesting points,” Lifshitz said immediately following the event. “I think it depends on what the definition of ‘understanding’ is. I’m not clear on that yet.”

Others, like Jennifer Nagel, a professor in the department of philosophy at U of T Mississauga, wondered if future AI might find us interesting or special “in a way that would make the best and brightest artificial intelligences take our side.”

""
Scholars in fields ranging from philosophy to cognition probed Hinton’s thinking during the Q-and-A (photo by Johnny Guatto)

“I mean, for me to be an interesting conversational partner with you right now, I don't even have to be smarter than you … I just have to have some knowledge that you don't have – or even just some way of looking at a problem that you find interesting,” she said.

Hinton was also asked to give advice to students studying in the field.

“Work on AI safety,” he said, noting that top researchers such as OpenAI co-founder Ilya Sutskever, a U of T alumnus, and Roger Grosse and David Duvenaud – both associate professors of computer science at the university and Vector Institute faculty members – are all working on the subject.

For many, the event was simply a rare chance to hear directly from a world-renowned researcher whose work has already forever changed our lives.

Guijin Li, a PhD student in biomedical engineering, said she was keenly interested in Hinton’s thoughts on AI development and jumped at the chance to hear him in person.

“It’s a once-in-a-lifetime opportunity.”

—with files from Mariam Matti
