Research shows decision-making AI could be made more accurate when judging humans

New study by researchers from U of T and MIT suggests that clearly labelling data might help reduce bias

By Jovana Jankovic and Alexander Bernier | May 23, 2023

(Banner image: an illustration of the scales of justice; photo by wildpixel/iStock)
field--label-hidden field__item"><p>A new study from researchers at the University of Toronto and the Massachusetts Institute of Technology (MIT) is challenging conventional wisdom on human-computer interaction and reducing bias in AI.</p> <p>The paper, which was <a href="https://www.science.org/doi/10.1126/sciadv.abq0701">published this month</a> in the journal <em>Science Advances</em>, demonstrates empirical evidence on the relationship between the methods used to label the data that trains machine learning (ML) models and the performance of those models when applying norms.</p> <p>MIT PhD student <strong><a href="https://aparna-b.github.io/researcher/">Aparna Balagopalan</a></strong>, a <strong><a href="https://www.youtube.com/watch?v=-WaikW7aSp0">graduate of U of T</a></strong>'s masters program in applied computing, is lead author, with co-authors <strong><a href="https://srinstitute.utoronto.ca/who-we-are/#gillian-hadfield-bio">Gillian Hadfield</a></strong>, director of U of T’s <a href="https://srinstitute.utoronto.ca/">Schwartz Reisman Institute for Technology</a> (SRI), Schwartz Reisman Chair in Technology and Society, CIFAR AI Chair, and a professor of law and strategic management in the Faculty of Law; <a href="https://www.cs.toronto.edu/~madras/">David Madras</a>, a PhD student in the <a href="http://learning.cs.toronto.edu/">Machine Learning Group</a> at the department of computer science in the Faculty of Arts &amp; Science and the Vector Institute; research assistant <a href="https://ca.linkedin.com/in/david-yang-1986b8b1">David H. Yang</a>, a graduate student in the applied computing program in the Faculty of Arts &amp; Science; <a href="https://healthyml.org/marzyeh/">Marzyeh Ghassemi</a>, a faculty affiliate at SRI and an assistant professor at MIT; and Dylan Hadfield-Menell, an assistant professor at MIT.</p> <p>Much of the scholarship in this area presumes that calibrating AI behaviour to human conventions requires value-neutral, observational data from which AI can best reason toward sound normative conclusions. But the new research suggests that labels explicitly reflecting value judgments, rather than the facts used to reach those judgments, might yield ML models that assess rule adherence and rule violation in a manner that humans would deem acceptable.</p> <p>To reach this conclusion, the authors conducted experiments to see how individuals behaved when asked to provide factual assessments as opposed to when asked to judge whether a rule had been followed.</p> <div class="align-center"> <div class="field field--name-field-media-image field--type-image field--label-hidden field__item"> <img loading="lazy" src="/sites/default/files/2023-05/thumbnail_composite-trio-inside-750-500.jpg" width="750" height="500" alt="&quot;&quot;"> </div> </div> <p><em>From left to right: MIT PhD student Aparna Balagopalan, SRI Director Gillian Hadfield and SRI Faculty Affiliate Marzyeh Ghassemi (supplied photos)</em></p> <p>For example, one group of participants was asked to label dogs that exhibited certain characteristics – namely, those that were large, not well groomed, or aggressive. 
Hadfield says the researchers were surprised by the findings.

“When you ask people a normative question, they answer it differently than when you ask them a factual question,” she says.

Human participants in the experiments were more likely to recognize (and label) a factual feature than the violation of an explicit rule predicated on that feature.

The results of the experiments showed that ML models trained on normative labels achieve higher accuracy in predicting human normative judgments. Conversely, if automated judgment systems are trained on factual labels – which is how several existing systems are being built – they are likely overpredicting rule violations.

The implications of the research are significant. Not only does it show that reasoning about norms is qualitatively different from reasoning about facts, but it also has important real-world ramifications.

“People could say, ‘I don’t want to be judged by a machine – I want to be judged by a human,’ given that we’ve got evidence to show that the machine will not judge them properly,” Hadfield says.

“Our research shows that this factor has a bigger effect [on an ML model’s performance] than things like model architecture, label noise and subsampling – factors that are often looked to for errors in prediction.”

Ensuring that the data used to train decision-making ML algorithms mirrors the results of human judgment – rather than simple factual observation – is no small feat. Proposed approaches include ensuring that the training data used to reproduce human assessments is collected in an appropriate context.

To this end, the authors recommend that the creators of trained models and datasets supplement those products with clear descriptions of the approaches used to tag the data – taking special care to establish whether the tags record facts perceived or judgments applied.

“We need to train on and evaluate normative labels. We have to pay the money for normative labels, and probably for specific applications. We should be a lot better at documenting that labelling practice. Otherwise, it’s not a fair judgment system,” Hadfield says.

“There’s a ton more research we need to be doing on this.”

The study was funded by the Schwartz Reisman Institute for Technology and Society and the Vector Institute, among others.