# U of T experts tackle questions about AI safety, ethics during panel discussion

*By Kyle Coulter and Jovana Jankovic | October 2, 2024*

Topic: Our Community
Tags: Schwartz Reisman Institute for Technology and Society; The Institute for the History and Philosophy of Science and Technology; Artificial Intelligence; Centre for Ethics; Department of Computer Science; Department of Philosophy; Faculty of Arts & Science; Vector Institute; Victoria College

*From left: U of T’s Roger Grosse, Sedef Kocak, Sheila McIlraith and Karina Vold take part in a panel discussion on AI safety (photo by Duane Cole)*
Subheadline: “We should be building AI systems that promote human flourishing – that allow human beings to live with dignity and purpose, and to be valued contributors to society”

What does safe artificial intelligence look like? Could AI go rogue and pose an existential threat to humanity? How have media portrayals of AI influenced people’s perceptions of the technology’s benefits and risks?

These were among the pressing questions tackled by four experts at the University of Toronto and its partner institutions – in disciplines ranging from computer science to philosophy – during a recent panel discussion on AI safety.

**Sheila McIlraith**, a professor in U of T’s department of computer science in the Faculty of Arts & Science and a Canada CIFAR AI Chair at the Vector Institute, said the notion of AI safety means different things to different people.

“Computer scientists often think about safety-critical systems – the types of systems that we’ve built to send astronauts to the moon or control our nuclear power plants – but AI safety is actually quite different,” said McIlraith, an associate director at U of T’s [Schwartz Reisman Institute for Technology and Society](https://srinstitute.utoronto.ca) (SRI).

“For me personally, I have a higher bar, and I really think we should be building AI systems that promote human flourishing – that allow human beings to live with dignity and purpose, and to be valued contributors to society.”

The event, hosted by SRI in partnership with the [Vector Institute](https://vectorinstitute.ai), the [Institute for the History & Philosophy of Science & Technology](https://ihpst.utoronto.ca), the [Centre for Ethics](https://ethics.utoronto.ca) and [Victoria College](https://www.vic.utoronto.ca), invited McIlraith and her fellow panelists to discuss how AI technologies can be aligned with human values in an increasingly automated world.

They also discussed how risks surrounding the technology can be mitigated in different sectors.

*Karina Vold, the event’s moderator, underscored the challenge of building safe AI systems in an uncertain world (photo by Duane Cole)*

Moderator **Karina Vold**, an assistant professor in the Institute for the History & Philosophy of Science & Technology in the Faculty of Arts & Science, noted that because AI systems operate “in a world filled with uncertainty and volatility, the challenge of building safe and reliable AI is not easy and mitigation strategies vary widely.”
She then asked the panel to share their thoughts on the portrayal of AI in popular culture.

“The media devotes more attention to different aspects of AI – the social, philosophical, maybe even psychological,” said **Sedef Kocak**, director of AI professional development at the Vector Institute.

“These narratives are important to help show the potential fears, as well as the positive potential of the technology.”

*The discussion touched on several topics related to AI safety (photo by Duane Cole)*

**Roger Grosse**, an associate professor in U of T’s department of computer science in the Faculty of Arts & Science and a founding member of the Vector Institute, said that safety concerns around AI are rooted not merely in science and pop culture, but also in philosophy.

“Many people think that the public’s concerns regarding AI risks come from sci-fi, but I think the early reasoning regarding AI risks actually has its roots in philosophy,” said Grosse, who also holds the Schwartz Reisman Chair in Technology and Society.

“If we’re trying to reason about AI systems that don’t yet exist, we don’t have the empirical information, and don’t yet know what their design would be. What we can do is come up with various thought experiments. For example, what if we designed an AI that has some specific role, and all of the actions that it takes are in service of the role?

“For the last decade, a lot of the reasons for being concerned about the long-term existential risks really came from this careful philosophical reasoning.”

The discussion also touched on the dangers of AI models misaligning themselves, how to guard against bias in the training of large language models, and how to ensure that AI models with potentially catastrophic capabilities are safeguarded.

“This [safeguarding] is an area where new research ideas and principles will be required to make the case,” said Grosse. “Developers saying, ‘Trust us’ is not sufficient. It’s not a good foundation for policy.”

While the discussion addressed the potential harms and risks of AI, the panelists also shared their optimism about how AI can be wielded for the greater good – with Grosse noting AI offers the promise of making knowledge more widely accessible, and Kocak focusing on the myriad benefits for industries.

**Watch the Sept. 10 conversation:** [YouTube video](https://www.youtube.com/embed/Z1EqkTrotHE?si=xCuaVunRk0e7YDDt)
# Four AI trends to watch in 2024

*By Jovana Jankovic and Daniel Browne | January 19, 2024*

Topic: Our Community
Tags: Schwartz Reisman Institute for Technology and Society; Munk School of Global Affairs & Public Policy; Artificial Intelligence; Faculty of Applied Science & Engineering; Faculty of Arts & Science; Faculty of Information; Faculty of Law; Global; Graduate Students; Research & Innovation; Rotman School of Management; U of T Mississauga

*AI was a hot topic at this week’s annual meeting of the World Economic Forum in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket via Getty Images)*
Subheadline: “The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions”

As artificial intelligence continues to develop rapidly, the world is watching with excitement and apprehension – as evidenced by the [AI buzz in Davos this week at the World Economic Forum’s annual meeting](https://www.washingtonpost.com/technology/2024/01/18/davos-ai-world-economic-forum/).

University of Toronto researchers are using AI to [advance scientific discovery](/news/u-t-receives-200-million-grant-support-acceleration-consortium-s-self-driving-labs-research) and [improve health-care delivery](https://tcairem.utoronto.ca/), [exploring how to mitigate potential harms](/news/who-owns-your-face-scholars-u-t-s-schwartz-reisman-institute-explore-tech-s-thorniest-questions) and finding new ways to ensure the technology [aligns with human values](/news/achieving-alignment-how-u-t-researchers-are-working-keep-ai-track).

“The advancement of AI is moving quickly, and the year ahead holds a lot of promise but also a lot of unanswered questions,” says **Monique Crichlow**, executive director of the Schwartz Reisman Institute for Technology and Society (SRI). “Researchers at SRI and across the university are tackling how to build and regulate AI systems for safer outcomes, as well as the social impacts of these powerful technologies.”

“From health-care delivery to accessible financial and legal services, AI has the potential to benefit society in many ways and tackle inequality around the world. But we have real work to do in 2024 to ensure that happens safely.”

As AI continues to reshape industries and challenge many aspects of society, here are four emerging themes U of T researchers are keeping their eyes on in 2024:

---

### 1. AI regulation is on its way
*U.S. Vice-President Kamala Harris applauds as U.S. President Joe Biden signs an executive order on the safe, secure and trustworthy development and use of artificial intelligence on Oct. 30, 2023 (photo by Brendan Smialowski/AFP/Getty Images)*

As a technology with a wide range of potential applications, AI has the potential to impact all aspects of society – and regulators around the world are scrambling to catch up.

Set to pass later this year, the [*Artificial Intelligence and Data Act*](https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act) (AIDA) is the Canadian government’s first attempt to comprehensively regulate AI. Similar attempts by [other governments](https://srinstitute.utoronto.ca/news/global-ai-safety-and-governance) include the European Union’s [*AI Act*](https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence) and the [*Algorithmic Accountability Act*](https://www.congress.gov/bill/117th-congress/house-bill/6580/text) in the United States.

But [there is still much to be done](https://srinstitute.utoronto.ca/news/ai-regulation-in-canada-is-moving-forward-heres-what-needs-to-come-next).

In the coming year, legislators and policymakers in Canada will tackle many questions, including what counts as fair use when it comes to training data and what privacy means in the 21st century. Is it illegal for companies to train AI systems on copyrighted data, as [a recent lawsuit](https://www.cbc.ca/news/business/new-york-times-openai-lawsuit-copyright-1.7069701) from the *New York Times* alleges? Who owns the rights to AI-generated artworks? Will Canada’s new privacy bill sufficiently [protect citizens’ biometric data](https://srinstitute.utoronto.ca/news/to-guarantee-our-rights-canadas-privacy-legislation-must-protect-our-biometric-data)?

On top of this, AI’s entry into other sectors and industries will increasingly affect and transform how we regulate other products and services. As **Gillian Hadfield**, a professor in the Faculty of Law and the Schwartz Reisman Chair in Technology and Society, policy researcher **Jamie Sandhu** and Faculty of Law doctoral candidate **Noam Kolt** explore in [a recent policy brief for CIFAR](https://srinstitute.utoronto.ca/news/cifar-ai-insights-policy-regulatory-transformation) (formerly the Canadian Institute for Advanced Research), a focus on regulating AI through its harms and risks alone “obscures the bigger picture” of how these systems will transform other industries and society as a whole. For example: are current car safety regulations adequate to account for self-driving vehicles powered by AI?

### 2. The use of generative AI will continue to stir up controversy
*Microsoft Bing Image Creator is displayed on a smartphone (photo by Jonathan Raa/NurPhoto/Getty Images)*

From AI-generated text and pictures to videos and music, use of generative AI has exploded over the past year – and so have questions surrounding issues such as academic integrity, misinformation and the displacement of creative workers.

In the classroom, teachers are seeking to understand how [education is evolving in the age of machine learning](https://magazine.utoronto.ca/campus/education-is-evolving-in-the-age-of-ai/). Instructors will need to find new ways to embrace these tools – or perhaps opt to reject them altogether – and students will continue to discover new ways to learn alongside these systems.

At the same time, AI systems [created more than 15 billion images last year](https://journal.everypixel.com/ai-image-statistics) by some counts – more than in the entire 150-year history of photography. Online content will increasingly lack human authorship, and some researchers have proposed that by 2026 [as much as 90 per cent of internet text could be generated by AI](https://thelivinglib.org/experts-90-of-online-content-will-be-ai-generated-by-2026/). Risks around disinformation will increase, and new methods to label content as trustworthy will be essential.

Many workers – including writers, translators, illustrators and designers – are worried about job losses. But a tidal wave of machine-generated text could also have negative impacts on AI development itself. In a recent study, **Nicolas Papernot**, an assistant professor in the Edward S. Rogers Sr. department of electrical and computer engineering in the Faculty of Applied Science & Engineering and an SRI faculty affiliate, and his co-authors found [training AI on machine-generated text led to the system becoming less reliable](/news/training-ai-machine-generated-text-could-lead-model-collapse-researchers-warn) and subject to “model collapse.”
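The intuition behind model collapse can be illustrated with a deliberately tiny simulation – a hypothetical sketch for intuition only, not the setup used in the study. Each generation fits a simple model (here, just a Gaussian) to its training data, and the next generation is trained only on samples drawn from that fit:

```python
import numpy as np

rng = np.random.default_rng(0)

def collapse_demo(n_generations=15, n_samples=50):
    """Toy illustration of recursive training on synthetic data."""
    data = rng.normal(loc=0.0, scale=1.0, size=n_samples)  # "human" data
    for gen in range(n_generations):
        mu, sigma = data.mean(), data.std()    # "train" this generation
        print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
        # The next generation never sees real data again:
        data = rng.normal(mu, sigma, size=n_samples)

collapse_demo()
```

Because no fresh human-written data enters the loop, the fitted standard deviation drifts as a multiplicative random walk with a slight downward bias, so the model tends to lose the spread – and especially the tails – of the original distribution. Model collapse in large neural networks is more complicated, but the feedback mechanism is the same.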
### 3. Public perception and trust of AI is shifting

*A person walks past a temporary AI stall in Davos, Switzerland (photo by Andy Barton/SOPA Images/LightRocket/Getty Images)*

Can we trust AI? Is our data secure?

Emerging research on public trust of AI is shedding light on changing preferences, desires and viewpoints. **Peter Loewen** – the director of the [Munk School of Global Affairs & Public Policy](https://munkschool.utoronto.ca/), SRI’s associate director and the director of the Munk School’s [Policy, Elections & Representation Lab](https://munkschool.utoronto.ca/pearl) (PEARL) – is developing an index measuring public perceptions of and attitudes towards AI technologies.

Loewen’s team conducted a representative survey of more than 23,000 people across 21 countries, examining attitudes towards regulation, AI development, perceived personal and societal economic impacts, specific emerging technologies such as ChatGPT and the use of AI by government. They plan to release their results soon.

Meanwhile, 2024 is being called [“the biggest election year in history,”](https://www.forbes.com/sites/siladityaray/2024/01/03/2024-is-the-biggest-election-year-in-history-here-are-the-countries-going-to-the-polls-this-year/?sh=6c930f8265f9) with more than 50 countries headed to the polls, and [experts expect interference and misinformation to hit an all-time high](https://foreignpolicy.com/2024/01/03/2024-elections-ai-tech-social-media-disinformation/) thanks to AI. How will citizens know which information, candidates and policies to trust?

In response, some researchers are investigating the foundations of trust itself. **Beth Coleman**, an SRI research lead and an associate professor at U of T Mississauga’s Institute of Communication, Culture, Information and Technology and the Faculty of Information, is leading [an interdisciplinary working group](https://srinstitute.utoronto.ca/news/call-for-applicants-trust-working-group) on the role of trust in interactions between humans and AI systems, examining how trust is conceptualized, earned and maintained in our interactions with the pivotal technology of our time.

### 4. AI will increasingly transform labour, markets and industries
*A protester in London holds a placard during a rally in Leicester Square (photo by Vuk Valcic/SOPA Images/LightRocket via Getty Images)*

**Kristina McElheran**, an assistant professor in the Rotman School of Management and an SRI researcher, and her collaborators may have recently found [a gap between AI buzz in the workplace and the businesses actually using it](https://www.nbcnews.com/data-graphics/wide-gap-ais-hype-use-business-rcna127210) – but there remains a real possibility that labour, markets and industries will undergo massive transformation.

U of T researchers who have published books on how AI will transform industry include Rotman faculty members **Ajay Agrawal**, **Joshua Gans** and **Avi Goldfarb**, whose [*Power and Prediction: The Disruptive Economics of Artificial Intelligence*](https://www.predictionmachines.ai/power-prediction) argues that “old ways of doing things will be upended” as AI predictions improve; and the Faculty of Law’s **Benjamin Alarie** and **Abdi Aidid**, who propose in [*The Legal Singularity: How Artificial Intelligence Can Make Law Radically Better*](https://utorontopress.com/9781487529420/the-legal-singularity/) that AI will improve legal services by increasing ease of access and fairness for individuals.

In 2024, institutions – public and private – will create more guidelines and rules around how AI systems can or cannot be used in their operations, and disruptors will challenge the hierarchy of the current marketplace.

The coming year promises to be transformative for AI as it continues to find new applications across society.
Experts and citizens must stay alert to the changes AI will bring and continue to advocate that ethical and responsible practices guide the development of this powerful technology.

# Research shows decision-making AI could be made more accurate when judging humans

*By Jovana Jankovic and Alexander Bernier | May 23, 2023*

Topic: Breaking Research
Tags: Schwartz Reisman Institute for Technology and Society; Artificial Intelligence; Equity; Machine Learning; Research & Innovation

*(photo by wildpixel/iStock)*
hreflang="en">Research &amp; Innovation</a></div> </div> <div class="field field--name-field-subheadline field--type-string-long field--label-above"> <div class="field__label">Subheadline</div> <div class="field__item">New study by researchers from οand MIT suggests that clearly labelling data might help reduce bias</div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>A new study from researchers at the University of Toronto and the Massachusetts Institute of Technology (MIT) is challenging conventional wisdom on human-computer interaction and reducing bias in AI.</p> <p>The paper, which was <a href="https://www.science.org/doi/10.1126/sciadv.abq0701">published this month</a> in the journal <em>Science Advances</em>, demonstrates empirical evidence on the relationship between the methods used to label the data that trains machine learning (ML) models and the performance of those models when applying norms.</p> <p>MIT PhD student <strong><a href="https://aparna-b.github.io/researcher/">Aparna Balagopalan</a></strong>, a <strong><a href="https://www.youtube.com/watch?v=-WaikW7aSp0">graduate of U of T</a></strong>'s masters program in applied computing, is lead author, with co-authors <strong><a href="https://srinstitute.utoronto.ca/who-we-are/#gillian-hadfield-bio">Gillian Hadfield</a></strong>, director of U of T’s <a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology</a> (SRI), Schwartz Reisman Chair in Technology and Society, CIFAR AI Chair, and a professor of law and strategic management in the Faculty of Law; <a href="https://www.cs.toronto.edu/~madras/">David Madras</a>, a PhD student in the <a href="http://learning.cs.toronto.edu/">Machine Learning Group</a> at the department of computer science in the Faculty of Arts &amp; Science and the Vector Institute; research assistant <a href="https://ca.linkedin.com/in/david-yang-1986b8b1">David H. Yang</a>, a graduate student in the applied computing program in the Faculty of Arts &amp; Science; <a href="https://healthyml.org/marzyeh/">Marzyeh Ghassemi</a>, a faculty affiliate at SRI and an assistant professor at MIT; and Dylan Hadfield-Menell, an assistant professor at MIT.</p> <p>Much of the scholarship in this area presumes that calibrating AI behaviour to human conventions requires value-neutral, observational data from which AI can best reason toward sound normative conclusions. 
But the new research suggests that labels explicitly reflecting value judgments, rather than the facts used to reach those judgments, might yield ML models that assess rule adherence and rule violation in a manner that humans would deem acceptable.

To reach this conclusion, the authors conducted experiments to see how individuals behaved when asked to provide factual assessments as opposed to when asked to judge whether a rule had been followed.

*From left to right: MIT PhD student Aparna Balagopalan, SRI Director Gillian Hadfield and SRI Faculty Affiliate Marzyeh Ghassemi (supplied photos)*

For example, one group of participants was asked to label dogs that exhibited certain characteristics – namely, those that were large, not well groomed, or aggressive. Another group of participants was instead asked whether the dogs shown to them violated a building pet code predicated on those same characteristics, rather than assessing the presence or absence of specific features.

The first group was asked to make a factual assessment – and the second, a normative one.

Hadfield says the researchers were surprised by the findings.

“When you ask people a normative question, they answer it differently than when you ask them a factual question,” she says.

Human participants in the experiments were more likely to recognize (and label) a factual feature than the violation of an explicit rule predicated on that feature.

The results of these experiments showed that ML models trained on normative labels achieve higher accuracy in predicting human normative judgments. Essentially, they are better at predicting how humans would judge. It follows that automated judgment systems trained on factual labels – which is how several existing systems are being built – are likely overpredicting rule violations.

The implications of the research are significant. Not only does it show that reasoning about norms is qualitatively different from reasoning about facts, but it also has important real-world ramifications.

“People could say, ‘I don’t want to be judged by a machine – I want to be judged by a human,’ given that we’ve got evidence to show that the machine will not judge them properly,” Hadfield says.

“Our research shows that this factor has a bigger effect [on an ML model’s performance] than things like model architecture, label noise and subsampling – factors that are often looked to for errors in prediction.”

Ensuring that the data used to train decision-making ML algorithms mirrors the results of human judgment – rather than simple factual observation – is no small feat.
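To make the labelling distinction concrete, here is a minimal synthetic sketch – invented features, invented annotator behaviour, and not the study’s data or code. Two classifiers see identical “dog” features, but one learns from factual feature labels while the other learns from normative judgments by annotators who, as the study found, are more reluctant to declare a violation:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000

# Invented binary "dog" features: [large, poorly groomed, aggressive].
p_feature = [0.3, 0.4, 0.2]
X = rng.binomial(1, p_feature, size=(n, 3))

# The building pet code is violated if any flagged feature is present.
rule = X.max(axis=1)

# Factual annotators label the features themselves (assumed accurate);
# for simplicity, we train directly on the rule applied to those labels.
factual_y = rule

# Normative annotators judge "does this dog break the rule?" and are
# assumed (invented behaviour) to excuse 60% of borderline dogs
# showing only a single flagged feature.
borderline = X.sum(axis=1) == 1
excused = borderline & (rng.random(n) < 0.6)
normative_y = np.where(excused, 0, rule)

factual_model = LogisticRegression().fit(X, factual_y)
normative_model = LogisticRegression().fit(X, normative_y)

X_test = rng.binomial(1, p_feature, size=(2000, 3))
print("violations flagged by factual-label model:  ",
      factual_model.predict(X_test).mean())
print("violations flagged by normative-label model:",
      normative_model.predict(X_test).mean())
```

Because the normative annotators wave through many borderline single-feature dogs, the model trained on their judgments learns a more forgiving decision boundary, while the model trained on factual labels flags violations far more often (here roughly 66 versus 21 per cent of test dogs, by construction of the synthetic data) – the overprediction effect the study describes.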
Proposed approaches include ensuring that the training data used to reproduce human assessments is collected in an appropriate context.

To this end, the authors recommend that the creators of trained models and datasets supplement those products with clear descriptions of the approaches used to tag the data – taking special care to establish whether the tags record facts perceived or judgments applied.

“We need to train on and evaluate normative labels. We have to pay the money for normative labels, and probably for specific applications. We should be a lot better at documenting that labelling practice. Otherwise, it’s not a fair judgment system,” Hadfield says.

“There’s a ton more research we need to be doing on this.”

The study was funded by the Schwartz Reisman Institute for Technology and Society and the Vector Institute, among others.

# Pilot program embeds ethics into U of T undergraduate technology courses

*By Jovana Jankovic | July 5, 2021*

Topic: Our Community
Tags: Schwartz Reisman Institute for Technology and Society; Artificial Intelligence; Computer Science; Ethics; Faculty of Arts & Science; Philosophy

*Diane Horton and Sheila McIlraith are co-leading a pilot program to embed ethics into a cross-section of undergraduate computer science courses at the university (photos by Ken Jones and Johnny Guatto)*
<div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/ethics" hreflang="en">Ethics</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/philosophy" hreflang="en">Philosophy</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>A new pilot program at the University of Toronto will embed ethics modules into existing undergraduate computer science courses in a bid to ensure future technologies designed and deployed in ways that consider their broader societal impact.</p> <p>From learning about the complex trade-off between data privacy and public benefit to making design decisions that impact marginalized communities, the pilot program&nbsp;– led by the department of computer science, in the Faculty of Arts &amp; Science,&nbsp;and the <a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology and Society</a> (SRI)&nbsp;– will teach computer science students skills to identify potential ethical risks in the technologies they are learning to build.</p> <p>The initiative aims to equip&nbsp;οgraduates, who may go on to become&nbsp;global tech leaders, to make informed&nbsp;decisions about technology and its wide-ranging effects on justice, health care, education, economies, human rights&nbsp;and beyond.&nbsp;</p> <p>“We want to teach students how to think, not what to think,” says&nbsp;<strong>Sheila McIlraith</strong>, a professor of computer science and a research lead at SRI who is co-leading the initiative, which&nbsp;includes scholars who specialize in ethics from U of T’s department of philosophy.</p> <p>“We’re not proselytizing about ‘right’ or ‘wrong,’ But we want students to identify ethical questions because, when they enter the workforce, they will be on the front lines. They’ll be the ones writing the code, developing the systems, using the data. It’s imperative that ethical considerations are part of fundamental design principles.”</p> <p>McIlraith points to the rapidly changing role technology plays in society as evidence of the urgent need for such a program.</p> <p>“It used to be that technologists would build systems for a particular purpose or industry,” she says. “But now&nbsp;technology is no longer just for individual tasks like completing tax returns or keeping track of company inventory. Technology impacts the way all of us live, work&nbsp;and interact with each other. A lot of the money and investment that fuels our economy is related to technology. And emerging tech companies are often led by young people who have just come out of computer science degrees.”</p> <p>When SRI was founded in 2019, McIlraith was appointed as one of its inaugural research leads. She quickly approached SRI Director <strong>Gillian K. Hadfield</strong> about the need for an embedded ethics initiative in computer science, citing a similar pioneering program already underway at Harvard University. 
Hadfield immediately saw the alignment with SRI’s mission to explore the dynamics between technology and the human agenda – and to solve problems at the intersection of technology and the public good.

McIlraith and Horton are joined on the team by **Benjamin Wald**, most recently a post-doctoral researcher at SRI and an alumnus of U of T’s department of philosophy; **Maryam Majedi**, a post-doctoral researcher in the department of computer science; and **Emma McClure**, a PhD candidate in the department of philosophy.

“Embedding ethical considerations into existing courses helps students see their relevance at the very moment they’re learning the computer science,” says **Diane Horton**, a professor, teaching stream, in the department, who is co-leading the pilot program with McIlraith. “The ethics modules are associated very closely with the technical content, so when students are eventually in the workplace, we hope the two will remain very connected in their minds.”

Horton, who has been teaching in the department for 25 years, has seen first-hand how eager students are to talk about ethics. She has also noted that they bring different perspectives to the conversation.

“One student had a very intense appreciation for the vulnerability of the homeless population,” says Horton, “and she brought that from her personal experience. Another student talked about the hospital where he works, and how private medical data is so carefully protected.”

“There has been so much curiosity from the students,” adds Majedi of the initiative so far. “They ask a lot of questions and offer interesting and creative ideas. Some get so excited, and they stay long after class to talk with us.”

Majedi says her own research into data privacy has highlighted a gap in curricula where ethical training for students is badly needed.

“It’s critical to teach ethics in computer science,” she says, “because these students will be responsible for many important tasks in the future.”

Both Wald and McClure say they are excited to see the enthusiasm among computer science students when it comes to addressing ethical questions.

“I think the students really want to have these critical thinking tools, because it’s clear they’ve been considering these issues already,” says McClure.

“Sometimes, a computer science student might recognize a potential ethical issue,” says Wald, “but might not know how it’s been discussed by other people, or where to find the right resources to address it. They might think, ‘How do I put the concern I have into words?’ Hopefully we can give them the tools to do that.”

The embedded ethics initiative will produce a longitudinal study to inform its future directions. The goal is for every computer science student to encounter ethics modules at several points in their U of T computer science program – and bring those insights to their future careers.
“Big tech companies like Apple often employ people in specialized ethics roles, but our program aims to equip the people who are actually building the technologies at a company like that,” says McClure. “That way, the ethical behaviour comes from within the design of technologies. It comes from the bottom instead of being imposed from the outside by an ‘ethics specialist.’”

McIlraith and Horton both credit Harvard’s Barbara Grosz and Jeff Behrends for supporting the U of T team in the early stages of the pilot program’s conception and development. Grosz is a founder of [Harvard’s Embedded EthiCS program](https://embeddedethics.seas.harvard.edu/), while Behrends is a faculty team leader.

The U of T team aims to engage other faculty, instructors and researchers as it grows – in particular, computer science faculty who have been teaching undergraduate courses in the core curriculum for years.

“Longer-term, we aspire to have ethical considerations as a cornerstone of many of our tech-oriented disciplines within the university,” says McIlraith. “One of our goals is to create a winning strategy so that this pilot can transform into something broader.”

# Algorithms and art: Researchers explore impact of AI on music and culture

*By Jovana Jankovic | June 11, 2021*

Topic: Our Community
Tags: Schwartz Reisman Institute for Technology and Society; Artificial Intelligence; Computer Science; Faculty of Arts & Science; Faculty of Law; Music

*(Photo by Krisanapong Detraphiphat via Getty Images)*
href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> <div class="field__item"><a href="/news/tags/music" hreflang="en">Music</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Global access to art, culture, and entertainment products – music, movies, books, and more – has undergone fundamental changes over the past 20 years in light of groundbreaking developments in artificial intelligence.</p> <p>For example, users of streaming services like Netflix and Spotify have data collected and analyzed by algorithms&nbsp;to determine their streaming habits – resulting in&nbsp;recommendations that cater to their tastes. But this is only one&nbsp;of the many ways in which AI tools are transforming the arts and culture industries. AI is also being used in the production of music and other art, with&nbsp;algorithms generating photos or writing&nbsp;songs on their own.</p> <p>Warner Music <a href="https://www.theguardian.com/music/2019/mar/22/algorithm-endel-signs-warner-music-first-ever-record-deal">even “signed” an algorithm to a record deal</a> in 2019.</p> <p>Yet,&nbsp;while AI is drastically reshaping cultural industries around the world, we have yet to fully understand the consequences.</p> <div class="image-with-caption left"> <p>&nbsp;</p> <div class="align-left"> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2023-04/UofT75524_Ashton_Anderson-16-2.jpeg" width="200" height="300" alt="Ashton Anderson"> </div> </div> <em>Ashton Anderson</em></div> <p>“The societal impacts these algorithmic developments are having on the production, circulation, and consumption of culture remain largely unknown,” says <strong>Ashton Anderson</strong>, an assistant professor in the department computer science in the University of Toronto’s Faculty of Arts &amp; Science and a faculty affiliate at the&nbsp;<a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology and Society</a>.</p> <p>Anderson’s research aims to bridge the divide between computer science and the social sciences. He uses computation to study online well-being – for example, studying the impact of “echo chambers” on social media. 
Such interdisciplinary work is a key component of the Schwartz Reisman Institute’s mission to re-conceptualize common notions of the ways technology, systems and society interact.

In October 2019, Anderson and his collaborators – Georgina Born, a professor of music and anthropology at Oxford University; Jeremy Morris, an associate professor of media and cultural studies at the University of Wisconsin-Madison; and Fernando Diaz, a research scientist at Google in Montreal and a Canada CIFAR AI Chair – convened a [CIFAR AI & Society](https://cifar.ca/ai/ai-society/) workshop to explore the effects of AI on the curation of culture, with a particular focus on the music industry.

“We deliberately brought together communications and computer science scholars, musicians, industry members and users to map out the major issues we could foresee now that cultural products are largely being distributed via algorithms on global platforms,” says Anderson.

The researchers’ findings and recommendations were published in [a report titled “Artificial Intelligence, Music Recommendation and the Curation of Culture.”](https://static1.squarespace.com/static/5ef0b24bc96ec4739e7275d3/t/60b68ccb5a371a1bcdf79317/1622576334766/Born-Morris-etal-AI_Music_Recommendation_Culture.pdf)

“This report is the first major document to recognize and describe the societal effects of the algorithmic revolution in cultural industries,” says Anderson. “Existing journals are virtually entirely aligned with only one of the many stakeholder groups that took part in this interdisciplinary effort so, unfortunately, publishing this report in one of these journals would be next to impossible.

“We’re very happy to have Schwartz Reisman publish this report, as our methodology and the cross-disciplinary expertise we convened are well aligned with SRI’s mission to straddle traditional academic boundaries in the pursuit of understanding how powerful new technologies shape the world around us.”

The report contains three overarching themes. First, participants agreed there will be major long-term impacts of the use of AI-driven technologies on cultural consumption and creation. For example, if algorithms decide what to distribute and recommend – and to whom and when – then arts and culture creators, and the organizations that fund them, may be incentivized to produce content that is more likely to get listener exposure or reach fans, based on how algorithms work to connect audiences with content.
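To make that curation mechanism concrete, here is a toy version of item-based collaborative filtering, one common family of techniques behind streaming recommendations. The play counts and the whole setup are invented for illustration; no platform’s production system is this simple:

```python
import numpy as np

# Rows = listeners, columns = tracks; entries = play counts (invented).
plays = np.array([
    [12,  0,  3,  0,  1],
    [10,  1,  0,  0,  0],
    [ 0,  8,  9,  0,  2],
    [ 1,  7, 11,  0,  0],
    [ 0,  0,  0,  6,  5],
], dtype=float)

# Cosine similarity between tracks (columns of the matrix).
norms = np.linalg.norm(plays, axis=0)
sim = (plays.T @ plays) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)  # ignore each track's similarity to itself

def recommend(listener, k=2):
    """Score unheard tracks by similarity to the listener's history."""
    history = plays[listener]
    scores = sim @ history
    scores[history > 0] = -np.inf  # only recommend new tracks
    return np.argsort(scores)[::-1][:k]

print("recommendations for listener 1:", recommend(1))
```

Whatever the similarity scores surface is what gets heard next – precisely the feedback loop between exposure and future listening data that the report examines.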
If algorithms overly generalize information about certain groups, subcultures&nbsp;or communities – whether by race, gender, or other identity markers – Anderson notes that “we risk reinforcing rigid and potentially harmful social boundaries.”</p> <p>A third theme emerging from the workshop is that the curation of culture always has involved, and always will involve, balancing competing objectives.</p> <p>“The extraction of personal data has been privatized and corporatized by curation platforms, but as yet without any public debate or intervention for accountability and transparency,” says Anderson.</p> <p>In other words, how should we weigh the convenience and increased accessibility that streaming platforms provide against the fact that they threaten important public ideals, such as the right to privacy and cultural sovereignty?</p> <p>The workshop’s report considers almost every step of the music industry’s processes, from the ways in which algorithms manipulate the existing variety of content itself, to the ways in which content is produced, distributed, valued, understood&nbsp;and consumed around the world – including the ways in which artists and creators are remunerated.</p> <p>Important questions explored in the report include:</p> <ul> <li>What assumptions are built into media recommendation systems?</li> <li>What happens to the crucial role of social and community relationships at the heart of the experience of music?</li> <li>How can we ensure appropriate cultural expertise is represented in algorithmic and technological design?</li> <li>AI-based classifications of music may be highly efficient, but they do not necessarily reflect a truly intelligent&nbsp;analysis of music. Can they ever have any real&nbsp;understanding of what music is?</li> <li>Will music and musical tastes become increasingly homogenized due to AI-based systems of production, promotion&nbsp;and distribution? Do we risk misrepresenting&nbsp;or underrepresenting marginalized communities&nbsp;or their agency to represent themselves?</li> <li>Is the personalization of algorithms too seductive? Do we risk no longer “thinking for ourselves”?</li> </ul> <p>“I’m delighted to see us producing this report at SRI,” says Professor&nbsp;<strong>Gillian K. Hadfield</strong>, the director of the Schwartz Reisman Institute for Technology and Society. “This project is aligned with the work that SRI Engineering Lead Ron Bodkin is leading to improve AI systems’ objectives and recommender systems, surveying research in this area and building new techniques. Recommendation is a huge part of the daily role AI plays in our lives, and ensuring it’s aligned with human values is a key part of SRI’s mission.”</p> <p>“I'm glad to see SRI contributing a thoughtful interdisciplinary perspective to the considerations of how AI is affecting media and culture,” adds Bodkin. “This report raises important topics for how various stakeholders should be able to participate and how to allow for more autonomy and diversity.</p> <p>“I believe that incorporating a wider range of values and increasing agency is a critical direction for recommendation systems and algorithmically curated media,” says Bodkin.
“The report's call for giving stakeholders&nbsp;meaningful&nbsp;controls over recommendation systems is important and&nbsp;it's an area where we're exploring how AI research can contribute.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Fri, 11 Jun 2021 17:05:38 +0000 Christopher.Sorensen 301272 at U of T's Schwartz Reisman Institute and AI Global to develop global certification mark for trustworthy AI /news/u-t-s-schwartz-reisman-institute-and-ai-global-develop-global-certification-mark-trustworthy-ai <span class="field field--name-title field--type-string field--label-hidden">U of T's Schwartz Reisman Institute and AI Global to develop global certification mark for trustworthy AI</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/GettyImages-1202271610.jpg?h=afdc3185&amp;itok=QtKmrGFt 370w, /sites/default/files/styles/news_banner_740/public/GettyImages-1202271610.jpg?h=afdc3185&amp;itok=5rgRtYlh 740w, /sites/default/files/styles/news_banner_1110/public/GettyImages-1202271610.jpg?h=afdc3185&amp;itok=IqYgeHBE 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/GettyImages-1202271610.jpg?h=afdc3185&amp;itok=QtKmrGFt" alt> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2020-12-01T17:14:13-05:00" title="Tuesday, December 1, 2020 - 17:14" class="datetime">Tue, 12/01/2020 - 17:14</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">The partnership between U of T's Schwartz Reisman Institute for Technology and Society and the non-profit organization AI Global aims to build trust in responsible, ethical and fair AI systems (photo by Andriy Onufriyenko/Getty Images)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/global-lens" hreflang="en">Global Lens</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> <div class="field__item"><a href="/news/tags/global" hreflang="en">Global</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> 
</div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>The products and services we use in our daily lives have to abide by safety and security standards, from car airbags to construction materials. But no such broad, internationally agreed-upon standards exist&nbsp;for artificial intelligence.</p> <p>And yet, AI tools and technologies are steadily being integrated into all aspects of our lives. AI’s potential benefits to humanity, such as improving health-care delivery or tackling climate change, are immense. But potential harms caused by AI tools –from algorithmic bias and labour displacement to risks associated with autonomous vehicles and weapons – risk&nbsp;leading to a lack of trust in AI technologies.</p> <p>To tackle these problems, a new partnership between&nbsp;<a href="http://ai-global.org/">AI Global, a nonprofit organization focused on advancing responsible and ethical adoption of artificial intelligence</a>, and the <a href="https://www.torontosri.ca/">Schwartz Reisman Institute for Technology and Society</a> (SRI) at the University of Toronto will create a globally recognized certification mark for the responsible and trusted use of AI systems.</p> <p>In collaboration with the World Economic Forum’s <a href="https://www.weforum.org/platforms/shaping-the-future-of-technology-governance-artificial-intelligence-and-machine-learning">Shaping the Future of Technology Governance: Artificial Intelligence and Machine Learning</a> platform, the partnership will convene industry actors, policy-makers, civil society representatives&nbsp;and academics to build a universally recognized framework that validates AI tools and technologies as responsible, trustworthy, ethical&nbsp;and fair.</p> <p>“In addition to our fundamental multidisciplinary research, SRI also aims to craft practical, implementable&nbsp;and globally appealing solutions to the challenge of building responsible and inclusive AI,” says&nbsp;<strong>Gillian Hadfield</strong>,&nbsp;the director of the Schwartz Reisman Institute for Technology and Society and&nbsp;a professor at U of T’s Faculty of Law and Rotman School of Management.</p> <p>Hadfield’s current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies. She also works on <a href="/refers%20to%20the%20ideal%20that%20an%20AI%E2%80%99s%20actions%20should%20align%20with%20what%20humans%20would%20want">“the alignment problem”</a>: a term that refers to the ideal that an AI’s actions should align with what humans would want.</p> <p>“One of the reasons why we’re excited to partner with AI Global is that they’re focused on building tangible, usable tools to support the responsible development of AI,” says Hadfield. “And we firmly believe that’s what the world currently needs. The need for clear, objective regulations has never been more urgent.”</p> <p>A wide variety of initiatives have already sought to drive AI development and deployment in the right directions: governments around the world have established advisory councils or created rules for singular AI tools in certain contexts;&nbsp;NGOs and think tanks have published sets of principles and best practices;&nbsp;and private companies such as Google have released official statements about the ways in which their AI practices pledge to be “responsible.”</p> <p>But none of these initiatives amounts to enforceable and measurable regulations. 
Furthermore, there isn’t always agreement between regions, sectors&nbsp;and stakeholders about what, exactly, is “responsible” and why.</p> <p>“We’ve heard a growing group of voices in recent years sharing insights on how AI systems should be built and managed,” says Ashley Casovan, executive director of AI Global. “But the kinds of high-level, non-binding principles we’ve seen proliferating are simply not enough given the scope, scale&nbsp;and complexity of these tools. It’s imperative that we take the next step now, pulling these concepts out of theory and into action.”</p> <p>A global certification mark like the one being built by SRI and AI Global is the next step.</p> <p>“Recognizing the importance of an independent and authoritative certification program working across sectors and across regions, this initiative aims to be the first third-party accredited certification for AI systems,” says Hadfield.</p> <p>How will it work? First, experts will examine the wealth of existing research and calls for global reform in order to define the key requirements for a global AI certification program. Next, they’ll design a framework to support the validation of the program by a respected accreditation body or bodies. They’ll also design a framework for independent auditors to assess AI systems against the requirements for global certification. Finally, the framework will be applied to various use cases across sectors and regions. (A simplified sketch of how such requirements could be checked appears below.)</p> <p>“AI should empower people and businesses, impacting customers and society fairly, while allowing companies to engender trust and scale AI with confidence,” says Kay Firth-Butterfield, head of AI and machine learning at the World Economic Forum.&nbsp;“Industry actors that receive certification would be able to show that they have implemented credible, independently-validated&nbsp;and tested processes for the responsible use of AI systems.”</p> <p>The project will unfold over a 12- to 18-month timeline, with two global workshops scheduled for May and November of 2021. A <a href="https://us02web.zoom.us/meeting/register/tZEufuGhrDwoGdF7pRMr832qWt_XeTz_v1Xg">virtual kick-off event</a> will be held on Dec. 9, 2020.</p>
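<p>To make the auditing step concrete, the sketch below imagines certification requirements encoded in machine-readable form so that an independent auditor can record whether each is met. The requirement names and the all-or-nothing scoring rule are invented for illustration; the actual SRI and AI Global framework was still being designed at the time of writing.</p> <pre><code>from dataclasses import dataclass

# Hypothetical certification requirements. The names below are invented
# for illustration; they are not the real SRI / AI Global criteria.
@dataclass
class Requirement:
    name: str
    description: str

REQUIREMENTS = [
    Requirement("bias_audit", "Documented testing for algorithmic bias"),
    Requirement("transparency", "System behaviour can be explained to affected users"),
    Requirement("data_governance", "Personal data is collected and stored lawfully"),
    Requirement("human_oversight", "High-impact decisions can be appealed to a person"),
]

def assess(evidence):
    """Auditor marks each requirement met/unmet; certification needs all of them."""
    unmet = [r.name for r in REQUIREMENTS if not evidence.get(r.name, False)]
    return len(unmet) == 0, unmet

certified, gaps = assess({
    "bias_audit": True,
    "transparency": True,
    "data_governance": True,
    "human_oversight": False,  # e.g. no appeal process in place yet
})
print(certified, gaps)  # False ['human_oversight']
</code></pre>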
</div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Tue, 01 Dec 2020 22:14:13 +0000 Christopher.Sorensen 167722 at 'Making uncertainty visible': U of T researcher says AI could help avoid improper denial of refugee claims /news/making-uncertainty-visible-u-t-researcher-says-ai-could-help-avoid-improper-denial-refugee <span class="field field--name-title field--type-string field--label-hidden">'Making uncertainty visible': U of T researcher says AI could help avoid improper denial of refugee claims</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" src="/sites/default/files/styles/news_banner_370/public/avi-goldfarb-diobox.jpg?h=afdc3185&amp;itok=D8zO0lDZ" width="740" height="494" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2020-10-20T08:56:34-04:00" title="Tuesday, October 20, 2020 - 08:56" class="datetime">Tue, 10/20/2020 - 08:56</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">Avi Goldfarb is a professor at the University of Toronto's Rotman School of Management and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society (photo courtesy of the Rotman School of Management)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/schwartz-reisman-institute-technology-and-society" hreflang="en">Schwartz Reisman Institute for Technology and Society</a></div> <div class="field__item"><a href="/news/tags/alumni" hreflang="en">Alumni</a></div> <div class="field__item"><a href="/news/tags/artificial-intelligence" hreflang="en">Artificial Intelligence</a></div> <div class="field__item"><a href="/news/tags/refugees" hreflang="en">Refugees</a></div> <div class="field__item"><a href="/news/tags/rotman-school-management" hreflang="en">Rotman School of Management</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p><strong>Avi Goldfarb</strong> is an economist and data scientist specializing in
marketing. So how is it that he came to publish a paper on reducing false denials of refugee claims through artificial intelligence?</p> <p>Goldfarb, a professor at the Rotman School of Management at the University of Toronto and a faculty affiliate at the Schwartz Reisman Institute for Technology and Society, read <em>Refugee Law's Fact-Finding Crisis: Truth, Risk, and the Wrong Mistake</em>, a 2019 book by&nbsp;<strong>Hilary Evans Cameron</strong>, a U of T alumna and&nbsp;assistant professor at the Ryerson University Faculty of Law.</p> <p>He found some remarkable overlaps with his own work, particularly the methodology he employs in his 2018 book, <em>Prediction Machines: The Simple Economics of Artificial Intelligence</em>.</p> <p>It just so happened that Evans Cameron had read Goldfarb’s book, too.</p> <p>“It turned out we effectively had the same classic decision theoretic framework,” says Goldfarb, “although hers applied to refugee law and problems with fact-finding in the Canadian refugee system, and mine applied to implementing AI in business.”</p> <p>Decision theory is a methodology often used in economics and some corners of philosophy – in particular, the branch of philosophy known as formal epistemology. Its concern is figuring out how and why an “agent” (usually a person) evaluates and makes certain choices.</p> <p>The main idea around which Evans Cameron’s and Goldfarb’s thoughts coalesced was this: Human decision-makers who approve or deny refugee claims are, as Goldfarb noted in his research presentation at the Schwartz Reisman weekly seminar on Oct. 7,&nbsp;“often unjustifiably certain in their beliefs.”</p> <p>In other words: people who make decisions about claimants seeking refugee status are more confident about the accuracy of their decisions than they should be.</p> <p>Why? Because “refugee claims are inherently uncertain,” says Goldfarb. “If you’re a decision-maker in a refugee case, you have no real way of knowing whether your decision was the right one.”</p> <p>If a refugee claim is denied and the refugee is sent back to their home country where they may face persecution, there is often no monitoring or recording of that information.</p> <p>Goldfarb was particularly struck by the opening lines of Evans Cameron’s book: “Which mistake is worse?” That is, denying a legitimate refugee claim or approving an unjustified one?</p> <p>In Goldfarb’s view, the&nbsp;answer is clear: sending a legitimate refugee back to their home country is a much greater harm than granting refugee status to someone who may not be eligible for it. This is what Goldfarb refers to as “the wrong mistake.”</p> <p>So, from Goldfarb’s perspective as an economist and data scientist with specialization in&nbsp;machine learning (ML), a type of artificial intelligence, he started to wonder: Could ML’s well-known ability to reduce uncertainty help reduce incidences of “the wrong mistake”?</p> <p>Goldfarb’s collaboration with Evans Cameron reflects the intersections between the four “conversations” that guide the Schwartz Reisman Institute’s mission and vision. Their work asks not only how information is generated, but also who it benefits, and to what extent it aligns – or fails to align – with human norms and values.</p> <p>“ML has the ability to make uncertainty visible,” says Goldfarb.
“Human refugee claim adjudicators may think they know the right answer, but if you can communicate the level of uncertainty [to them], it might reduce their overconfidence.”</p> <p><img class="migrated-asset" src="/sites/default/files/evans-cameron-schwartz-reisman.jpg" alt="Hilary Evans Cameron"></p> <p><em>Refugee law expert Hilary Evans Cameron is a U of T alumna and an assistant professor at Ryerson University’s Faculty of Law (photo courtesy of Ryerson University)</em></p> <p>Goldfarb is careful to note that shedding light on “the wrong mistake” is only part of the battle. “Using AI to reduce confidence would only work in the way described if accompanied by the changes to the law and legal reasoning that Evans Cameron recommends,” he says.</p> <p>“When uncertainty is large, that does not excuse you from being callous or [from not] making a decision at all. Uncertainty should help you make a better-informed decision by helping you recognize that all sorts of things could happen as a result.”</p> <p>So, what can AI do to help people realize the vast and varied consequences of their decisions, reducing their overconfidence and helping them make better decisions?</p> <p>“AI prediction technology already provides decision support in all sorts of applications, from health to entertainment,” says Goldfarb. But he’s careful to outline AI’s&nbsp;limitations: It&nbsp;lacks transparency and can introduce and perpetuate bias, among other things.</p> <p>Goldfarb and Evans Cameron advocate for AI to play an&nbsp;assistive role&nbsp;–&nbsp;one in which the statistical predictions involved in evaluating refugee claims could be improved.</p> <p>“Fundamentally, this is a point about data science and stats. Yes, we’re talking about AI, but really the point is that statistical prediction tools can give us the ability to recognize uncertainty, reduce human overconfidence&nbsp;and increase protection of vulnerable populations.”</p> <p>So, how would AI work in this context? Goldfarb is careful to specify that this doesn’t mean an individual decision-maker would immediately be informed whether they made a poor decision, and given the chance to reverse it. That level of precision and individual-level insight is not possible, he says. So, while&nbsp;we may not solve “the wrong mistake” overnight, he says AI could at least help us understand what shortfalls and data gaps we’re working with (a toy illustration of the underlying decision logic follows below).</p> <p>There are many challenges to implementing the researchers’&nbsp;ideas.</p>
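<p>The decision-theoretic core of the argument fits in a few lines of code. The sketch below shows how an expected-cost rule behaves when a false denial ("the wrong mistake") is costed far more heavily than a false approval, and how a model's stated probability makes uncertainty visible rather than hidden. The cost numbers are invented for illustration and are not drawn from the researchers' work.</p> <pre><code># Toy expected-cost decision rule with asymmetric error costs.
# The cost numbers are invented for illustration only.
COST_FALSE_DENIAL = 100.0   # denying a legitimate claim: "the wrong mistake"
COST_FALSE_APPROVAL = 5.0   # approving an ineligible claim

def decide(p_legitimate):
    """Pick the action with lower expected cost, given a (well-calibrated)
    predicted probability that the claim is legitimate."""
    exp_cost_deny = p_legitimate * COST_FALSE_DENIAL             # deny a real claim
    exp_cost_approve = (1 - p_legitimate) * COST_FALSE_APPROVAL  # approve a false one
    return "approve" if exp_cost_approve < exp_cost_deny else "deny"

# Break-even is p = 5 / 105, roughly 0.048: with these costs, even substantial
# doubt about a claim's legitimacy still argues for approval rather than denial.
for p in (0.02, 0.05, 0.30, 0.90):
    print(f"P(legitimate)={p:.2f} -> {decide(p)}")
</code></pre> <p>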
It would involve designing an effective user interface, changing legal infrastructure to conform with the information these new tools produce, ensuring accurate data-gathering and processing, and firing up the political mechanisms necessary for incorporating these processes into existing refugee claim assessment frameworks.</p> <p>While we may be far from implementing AI to reduce incidences of “the wrong mistake” in refugee claim decisions, Goldfarb highlights the interdisciplinary collaboration with Evans Cameron as a&nbsp;promising start to exploring what the future could bring.</p> <p>“It was really a fun process to work with someone in another field,” he says.&nbsp;“That’s something the Schwartz Reisman Institute is really working hard to facilitate between academic disciplines, and which will be crucial for solving the kinds of complex and tough problems we face in today’s world.”</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Tue, 20 Oct 2020 12:56:34 +0000 Christopher.Sorensen 166118 at U of T's Centre for Ethics explores ethical questions surrounding the COVID-19 pandemic /news/u-t-s-centre-ethics-explores-ethical-questions-surrounding-covid-19-pandemic <span class="field field--name-title field--type-string field--label-hidden">U of T's Centre for Ethics explores ethical questions surrounding the COVID-19 pandemic</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" src="/sites/default/files/styles/news_banner_370/public/UofT16956_0W7A5190-crop.jpg?h=afdc3185&amp;itok=BlD-NS8u" width="740" height="494" alt="Markus Dubber"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2020-05-25T11:58:02-04:00" title="Monday, May 25, 2020 - 11:58" class="datetime">Mon, 05/25/2020 - 11:58</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">Markus Dubber, director of U of T's Centre for Ethics, says U of T is uniquely situated to tackle the ethical dimensions of the COVID-19 crisis because it's a "global research university with unusual excellence across the board" (photo by Chris Sorensen)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field
field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/coronavirus" hreflang="en">Coronavirus</a></div> <div class="field__item"><a href="/news/tags/centre-ethics" hreflang="en">Centre for Ethics</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/faculty-law" hreflang="en">Faculty of Law</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Is it acceptable to financially profit from the COVID-19 crisis? Should researchers publish preliminary COVID-19 drug research in real time? Should governments be using criminal sanctions to enforce public health guidelines? The continuing spread of COVID-19 around the world has raised a wide variety of ethical questions in many areas of our lives.&nbsp;</p> <p>At the University of Toronto, the&nbsp;Centre for Ethics&nbsp;in the Faculty of Arts &amp; Science is exploring these and other pressing issues in its new series of remotely broadcast talks, the&nbsp;Ethics of COVID.</p> <p>“One of our regular attendees recently asked us to host online events about the ethical issues raised by COVID-19, and we thought it was a terrific idea,” says&nbsp;<strong>Markus Dubber</strong>, who is director of the centre and a professor in the Faculty of Law. “So, we’re building a virtual resource of interdisciplinary takes on the ethical dimension of the current crisis.</p> <p>“This series highlights U of T’s unique interdisciplinary strength as a global research university with unusual excellence across the board. Lots of universities are covering the science of COVID-19; there’s much less on the normative dimensions of the crisis.”</p> <p><iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen frameborder="0" height="422" src="https://www.youtube.com/embed/videoseries?list=PL3yt4Dw2i5BhK0HfKd83dERWgJRTQ1FqI" width="750"></iframe></p> <p>As a&nbsp;centre that draws researchers of all levels from across the university – from undergraduates to faculty members – the Centre for Ethics has a unique perspective on the ethical dimensions of crucial issues from a variety of academic streams, including law, medicine, public policy, philosophy and beyond.</p> <p>All of the Ethics of COVID videos that have been produced so far are available on a&nbsp;YouTube playlist, <a href="https://ethics.utoronto.ca/events-listings/">with&nbsp;future sessions</a>&nbsp;being added on an ongoing basis.&nbsp;</p> <h3><a href="https://c4ejournal.net/category/ethics-of-covid/">Watch the Ethics of COVID series at the Centre for Ethics</a></h3> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Mon, 25 May 2020 15:58:02 +0000 Christopher.Sorensen 164703 at U of T cognitive scientist livestreams daily meditation lessons during COVID-19 /news/u-t-cognitive-scientist-livestreams-daily-meditation-lessons-during-covid-19 <span class="field field--name-title field--type-string field--label-hidden">U of T cognitive scientist livestreams daily meditation lessons during COVID-19</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager"
srcset="/sites/default/files/styles/news_banner_370/public/GettyImages-1166590625.jpg?h=afdc3185&amp;itok=jwoW4U9A 370w, /sites/default/files/styles/news_banner_740/public/GettyImages-1166590625.jpg?h=afdc3185&amp;itok=ssKuvMel 740w, /sites/default/files/styles/news_banner_1110/public/GettyImages-1166590625.jpg?h=afdc3185&amp;itok=Hj_261pa 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/GettyImages-1166590625.jpg?h=afdc3185&amp;itok=jwoW4U9A" alt="Man meditates while kneeling in his living room"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2020-04-30T15:38:29-04:00" title="Thursday, April 30, 2020 - 15:38" class="datetime">Thu, 04/30/2020 - 15:38</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">(photo by visualspace via Getty Images)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/alumni" hreflang="en">Alumni</a></div> <div class="field__item"><a href="/news/tags/cognitive-science-program" hreflang="en">Cognitive Science Program</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/psychology" hreflang="en">Psychology</a></div> <div class="field__item"><a href="/news/tags/university-college" hreflang="en">University College</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>How do our minds deal with the increasing complexity of the modern world? How can we train ourselves to face life’s challenges? How do we stay connected in a world full of distractions and alienation?</p> <p><img class="migrated-asset" src="/sites/default/files/JV%20Pic%202019.jpg" alt>Such questions are at the centre of the University of Toronto’s&nbsp;<strong>John Vervaeke</strong>’s academic work – not to mention&nbsp;<a href="https://www.youtube.com/channel/UCpqDUjTsof-kTNpnyWper_Q">his successful&nbsp;YouTube channel</a> – and have never been more relevant than during&nbsp;COVID-19.</p> <p>&nbsp;</p> <p>An assistant professor, teaching stream, in the&nbsp;department of psychology&nbsp;in the Faculty of Arts &amp; Science and the&nbsp;cognitive science&nbsp;program at University College, Vervaeke (left) recently launched a new series of videos in response to the pandemic. 
Every weekday morning, he livestreams a short lesson about meditation followed by a brief silent meditation period.</p> <p>Arts &amp; Science writer <strong>Jovana Jankovic </strong>recently spoke to Vervaeke about mindfulness and meditation, particularly during times of stress and anxiety.</p> <hr> <p><strong>What are the biggest misconceptions about meditation?</strong></p> <p>One is that meditation is about achieving a kind of relaxation akin to sleepiness – that it should make your body and mind cloudy and dull, and your consciousness fade away. That’s not the kind of relaxation you want in meditation. You want a type of relaxation that enhances your sense of stability and your sensitivity. Meditation is not a vacation, it’s an education.</p> <p>The other misconception is that you’re not meditating unless your mind goes wide open and blank. That’s exactly the wrong attitude. Every time you catch yourself in distraction and come back to your meditative focus, you’re actually building the mindfulness muscle. It’s like doing reps in weight training.</p> <p><strong>Why did you decide to do this series of morning sessions during the pandemic?</strong></p> <p>I think our culture in general is going through a meaning crisis in which we lack a sense of how we are connected to ourselves, to each other, to the world; how much we matter, how much we’re in touch with reality, how much we’re overcoming self-deception, how much we’re affording wisdom.</p> <p>If the meaning crisis is a fundamental sense of disconnection, the COVID crisis certainly exacerbates that. People feel very disconnected from their life, disconnected from the world, disconnected from each other. So, while it’s good to develop a mindfulness practice in general, I think it’s especially pertinent right now.</p> <p><strong>You say that frequency of meditation is more important than length of sessions. Could you tell us more about that?</strong></p> <p>Continuity of practice is more important than quantity,&nbsp;but&nbsp;that doesn’t mean the quantity is irrelevant. If you’re trying to learn something new and you just stay inside your comfort zone, you’re not challenging yourself, which is how learning happens.</p> <p>So, when you’re sitting for meditation, if you only sit as long as it’s comfortable, you don’t get into what psychologists call the “zone of proximal development.”&nbsp;That’s where you learn new things. You have to keep sitting when it’s challenging and you have to use the principles and practices you’re taught to keep going.</p> <p>But if you say, "Well, I can’t sit for a full 15 minutes, so I won’t sit at all," that will erode your practice. If you can honestly say to yourself that you can only sit for five minutes, then sit for five minutes. It’s not always sufficient, but it’s certainly better than nothing.</p> <p><strong>What are some quick and easy tips for beginners who are just trying meditation?</strong></p> <p>People try to get into a posture that is free from unpleasant sensations or discomfort. But you’ll never get there. I’ve been meditating for 20 years and I’ve never found such a thing. That’s really important to remember.</p> <p>And centring your mind doesn’t mean just focusing your attention. It means stepping back and looking at your mind rather than looking through it. 
The metaphor I use is my glasses: I look through my glasses all day long, but if they are full of gunk, what I have to do is actually step back and look at them.</p> <p>You need to try to do the same thing with your mind: Step back and look at its patterns and processes. And don’t frame meditation as instantly getting your mind to go blank. You have to learn to sit until your mind settles.</p> <p>Finally, you need both a meditative practice and a contemplative practice. We use those terms as if they’re synonyms, but they’re not. To go back to my analogy, if meditation is like stepping back and looking at your glasses, how do you know if you’ve spotted a distortion or a defect in your glasses? You have to put them back on, right?</p> <p>If you put them back on and you see better, that’s a contemplative aspect of the practice. Can you see through what was previously illusion into reality? This relates to work I did with my former student <strong>Leo Ferraro</strong>, who was a TA and an alumnus in the cognitive science program as well as my co-author for a book chapter called “Reformulating the Mindfulness Construct.”</p> <p><strong>Cognitive science is perhaps not as well-known as psychology or neuroscience. What is it?</strong></p> <p>In many ways, cognitive science is close to philosophy and particularly ancient philosophy like that of Socrates and Plato. It’s about studying the mind and its relationship to reality in a very comprehensive manner.</p> <p>In the modern world, we have a discipline that studies the brain and that’s neuroscience. We have computer science, which studies artificial intelligence – that’s about systems that can solve problems and do information processing. Or we have psychologists, who study human behaviour. We might go to a linguist who studies how we communicate through grammar and syntax with each other. Or we might be concerned with culture, so we might go to an anthropologist for that.</p> <p>Those things are not isolated. They interact with and affect each other. And cognitive science gets all of those other disciplines to insightfully talk to each other as they study the mind in their different ways. 
It’s sort of a very powerful bridging discourse between them.&nbsp;&nbsp;</p> <p><iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen frameborder="0" height="422" src="https://www.youtube.com/embed/AdL9Yd0lB_k" width="750"></iframe></p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Thu, 30 Apr 2020 19:38:29 +0000 Christopher.Sorensen 164333 at U of T's School of the Environment launches first stand-alone graduate degree /news/u-t-s-school-environment-launches-first-stand-alone-graduate-degree <span class="field field--name-title field--type-string field--label-hidden">U of T's School of the Environment launches first stand-alone graduate degree</span> <div class="field field--name-field-featured-picture field--type-image field--label-hidden field__item"> <img loading="eager" srcset="/sites/default/files/styles/news_banner_370/public/green-roof_1.jpg?h=afdc3185&amp;itok=BT7glmP- 370w, /sites/default/files/styles/news_banner_740/public/green-roof_1.jpg?h=afdc3185&amp;itok=D1DNV3Tu 740w, /sites/default/files/styles/news_banner_1110/public/green-roof_1.jpg?h=afdc3185&amp;itok=g7J1dYz4 1110w" sizes="(min-width:1200px) 1110px, (max-width: 1199px) 80vw, (max-width: 767px) 90vw, (max-width: 575px) 95vw" width="740" height="494" src="/sites/default/files/styles/news_banner_370/public/green-roof_1.jpg?h=afdc3185&amp;itok=BT7glmP-" alt="&quot;&quot;"> </div> <span class="field field--name-uid field--type-entity-reference field--label-hidden"><span>Christopher.Sorensen</span></span> <span class="field field--name-created field--type-created field--label-hidden"><time datetime="2020-03-11T09:40:14-04:00" title="Wednesday, March 11, 2020 - 09:40" class="datetime">Wed, 03/11/2020 - 09:40</time> </span> <div class="clearfix text-formatted field field--name-field-cutline-long field--type-text-long field--label-above"> <div class="field__label">Cutline</div> <div class="field__item">Nicolas Côté, Rashad Brugmann and Nathan Postma in Trinity College's rooftop garden on the St. 
George campus (photo by Geoffrey Vendeville)</div> </div> <div class="field field--name-field-author-reporters field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/authors-reporters/jovana-jankovic" hreflang="en">Jovana Jankovic</a></div> </div> <div class="field field--name-field-topic field--type-entity-reference field--label-above"> <div class="field__label">Topic</div> <div class="field__item"><a href="/news/topics/our-community" hreflang="en">Our Community</a></div> </div> <div class="field field--name-field-story-tags field--type-entity-reference field--label-hidden field__items"> <div class="field__item"><a href="/news/tags/computer-science" hreflang="en">Computer Science</a></div> <div class="field__item"><a href="/news/tags/faculty-arts-science" hreflang="en">Faculty of Arts &amp; Science</a></div> <div class="field__item"><a href="/news/tags/graduate-students" hreflang="en">Graduate Students</a></div> <div class="field__item"><a href="/news/tags/school-environment" hreflang="en">School of the Environment</a></div> <div class="field__item"><a href="/news/tags/sustainability" hreflang="en">Sustainability</a></div> <div class="field__item"><a href="/news/tags/u-t-mississauga" hreflang="en">U of T Mississauga</a></div> <div class="field__item"><a href="/news/tags/u-t-scarborough" hreflang="en">U of T Scarborough</a></div> </div> <div class="clearfix text-formatted field field--name-body field--type-text-with-summary field--label-hidden field__item"><p>Already home to a range of robust&nbsp;undergraduate programs&nbsp;and two interdisciplinary&nbsp;graduate-level collaborative specializations, the University of Toronto’s School of the Environment&nbsp;is launching its first stand-alone graduate degree program: a Master of Environment &amp; Sustainability (MES).</p> <p>A full-time, 12-month intensive study program, the MES in the Faculty of Arts &amp; Science will give students from a variety of academic backgrounds a broad overview of interactions between humans and their environment at a time when questions about the sustainability of human activity in the world are becoming more urgent by the day.</p> <p>Finding answers to these questions is a rapidly growing priority for both the scholarly community and the general public.</p> <p>“Students want a program that is interdisciplinary from the ground up,” says&nbsp;<strong>Steve Easterbrook</strong>, the director of the School of the Environment and a professor in the department of computer science.</p> <p>“This program will allow them to start with big societal challenges around climate change, sustainability, biodiversity, and build the skills needed to tackle them, drawing on multiple disciplines as they do.”</p> <p>Crucially, the MES program will be problem-focused rather than discipline-focused, and will involve active engagement with non-academic community partners.</p> <p>“The program will emphasize strong teamwork and communication skills to bring together the right set of people to address real-world problems,” says Easterbrook.
“We won’t just draw on sources of expertise within the university, but rather we’ll work with communities throughout society, where local knowledge and skills are just as important as academic scholarship.”</p> <p>Students will be given the opportunity to work closely with partners in the private, public and NGO sectors, preparing them for a variety of careers in the research and practice of environmental protection – or for further studies at the doctoral level.</p> <p>The MES will offer students a choice of four concentrations:</p> <ul> <li>Adaptation and resilience</li> <li>Global change science</li> <li>Social sustainability</li> <li>The sustainability transition</li> </ul> <p>All concentrations will be supported by a broad and diverse network of graduate faculty from all three U of T campuses and a wide range of academic disciplines, as well as a group of core faculty at the school who are cross-appointed to other departments.</p> <p>“The program will build on our hugely successful ‘Campus as a Living Lab’ concept,” says Easterbrook. “We’ve used the infrastructure of the university itself – its buildings, facilities and services – to test new ideas such as green buildings, sustainable food services, urban gardening, waste-handling and so on. With the MES, we plan to extend this approach to work with groups across the city and beyond.”</p> <p>The MES will have a set of mandatory core courses, a choice of electives and a research thesis on a topic relevant to a student’s further work in graduate school or professional practice. All admitted students will be offered financial support from Arts &amp; Science funds, endowed scholarships, teaching assistantships within the school and a stipend from the thesis supervisor’s research grants.</p> <p>Finally, the school is in the process of renovating a suite of offices to create a research hub, “so that each cohort of MES students can come together and develop a sense of community and interact with our faculty, postdocs and students,” says Easterbrook.</p> <p>“A welcoming and supportive space for students is important, as they will learn from each other as much as they will from our faculty. We look forward to welcoming a diverse group of students who will bring many different skills and interests to the program, and who are passionate about working to address urgent environmental issues.”</p> <p>Pending final approval by the Quality Council and the Ministry of Colleges and Universities, applications for entry to the MES in the 2021-22 academic year will be accepted from late fall 2020.</p> </div> <div class="field field--name-field-news-home-page-banner field--type-boolean field--label-above"> <div class="field__label">News home page banner</div> <div class="field__item">Off</div> </div> Wed, 11 Mar 2020 13:40:14 +0000 Christopher.Sorensen 163467 at