
Graduate students use AI to improve imaging tool used during breast cancer surgery

As part of an internship with a medical imaging firm, Bryant Bak-Yin Lim and Ali Yassine developed algorithms for a next-gen imaging system
""

Bryant Bak-Yin Lim, left, and Ali Yassine simulate reviewing a breast cancer tissue scan. As Mitacs interns at Perimeter Medical Imaging, Lim and Yassine developed new AI algorithms for breast cancer imaging (photo by Neil Ta)

Bryant Bak-Yin Lim and Ali Yassine got the chance to make a difference in the lives of patients last summer by improving how breast cancer surgery is performed.

The two researchers at the University of Toronto were participating in internships with Perimeter Medical Imaging, a company with offices in Toronto and Dallas, that were organized through Mitacs. There, they developed artificial intelligence (AI) algorithms for the next generation of an imaging device that helps surgeons visualize tissue microstructures during a lumpectomy to determine whether they have excised all the cancerous tissue.

Their algorithms prioritize suspicious images, making it easier for surgeons to parse the images and reducing time spent in the operating room.

The imaging device is about the size and shape of a small photocopier, says Lim, an MD student in the Temerty Faculty of Medicine who is completing an MEng in the Faculty of Applied Science & Engineering.

Situated within the operating area, the device employs a technology called optical coherence tomography (OCT), which is similar to ultrasound technology but uses light instead of sound to generate images, resulting in an image resolution 10 times greater than ultrasound.

OCT has been widely used in clinical settings, including ophthalmology, dermatology and interventional cardiology, but Perimeter’s device is the first to bring wide-field OCT imaging into the OR, Lim says.

“The tissue removed from the patient is put in a plastic bag and placed on a glass imaging plate on the device, using mild suction to hold it in place,” Lim says. “Light shoots up from the optical imaging system below, penetrates the tissue and reflects back into the device, which then displays results as a digital image on the monitor.”

Surgeons are looking for any suspicious features in what’s called the “margin,” striving for about a two-millimetre rim of healthy tissue along the outer edges of the excised tissue.

“Currently, to assess a margin, specimens are sent out to a pathologist. That process usually takes days,” says Yassine, who recently graduated with a master’s degree in electrical and computer engineering. “If there’s cancerous tissue left, patients sometimes have to go back for another procedure, with all the risks and resource costs that come with it.

“The type of deep learning algorithm that I trained, called a convolutional neural network, can analyze the tissue image and identify whether the material is suspicious or non-suspicious with a very high accuracy rate.”
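The article does not describe Perimeter's actual model, but the kind of binary classifier Yassine mentions can be sketched in miniature. The example below is a hypothetical, untrained toy: a single convolution filter, ReLU activation, global average pooling, and a sigmoid output that maps an image to a "suspicious" score between 0 and 1. A production convolutional neural network would stack many learned filters and be trained on labelled tissue scans.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation -- the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernel, weight, bias):
    """Conv -> ReLU -> global average pool -> sigmoid score in (0, 1)."""
    feature_map = np.maximum(conv2d(image, kernel), 0)       # ReLU
    pooled = feature_map.mean()                              # global average pooling
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))   # sigmoid

rng = np.random.default_rng(0)
scan = rng.random((32, 32))            # stand-in for one OCT tissue image
kernel = rng.standard_normal((3, 3))   # a learned filter, here just random
score = classify(scan, kernel, weight=2.0, bias=-1.0)
label = "suspicious" if score > 0.5 else "non-suspicious"
```

In a real system the kernel, weight, and bias would come from training on pathologist-labelled examples; here they are arbitrary, so the score illustrates only the shape of the computation.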

The challenge then is to display this analysis for the surgeon so they can make a timely, informed decision on whether they need to return to the operating table and remove more tissue from the patient, who is still under anesthesia.

Lim was tasked with building an efficient user interface to guide the surgeon.

“This device typically outputs hundreds of images, and it’s challenging for a surgeon in the OR to read through all of them and make a decision on the spot,” he says.

“I developed an algorithm that clustered images together based on certain parameters and then displayed only the most representative one.”

The algorithm reduced the hundreds of images to a more manageable number of thumbnails that account for all the information gathered from the tissue scan. The surgeon can also manipulate the digital images to see the tissue from different perspectives.
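Lim's clustering approach is not detailed in the article, but one common way to "display only the most representative" image per group is k-means clustering over per-image feature vectors, then picking the image closest to each cluster centre. The sketch below assumes that setup; the feature vectors here are random placeholders standing in for whatever parameters the real system extracts from each scan.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means: returns cluster centroids and a label per image."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Distance from every image to every centroid, then nearest assignment.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            members = features[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids, labels

def representatives(features, k):
    """Index of one representative image per cluster: nearest to its centroid."""
    centroids, labels = kmeans(features, k)
    reps = []
    for c in range(k):
        idx = np.flatnonzero(labels == c)
        if len(idx):
            d = np.linalg.norm(features[idx] - centroids[c], axis=1)
            reps.append(int(idx[d.argmin()]))
    return reps

rng = np.random.default_rng(1)
features = rng.random((300, 8))       # e.g. 300 scan images, 8 features each
thumbs = representatives(features, k=5)   # a handful of thumbnails, not 300
```

The surgeon would then see only the handful of representative thumbnails, with each one standing in for a whole cluster of similar views.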

There is great potential for AI-enhanced tools to make the medical professional’s work – and patient’s experience – smoother, says Ervin Sejdić, a professor in the Edward S. Rogers Sr. department of electrical and computer engineering who supervised both students.

“The Perimeter device that Bryant and Ali worked on is part of a wave of new tools that do the grunt work of sorting through and repackaging the copious amounts of data necessary for complex procedures or diagnoses,” says Sejdić.

“This helps doctors sharpen their focus on the treatment.”

For his part, Yassine didn’t expect he would be this interested in medicine before he undertook this internship. He is finishing up his master’s project – a multi-class labeller algorithm for Perimeter that identifies specific tissues in breast cancer samples – and is planning to continue his career in medical technology.

“I had my own personal health challenges a while back, and that has motivated me to work in this field,” he says. “It’s nice to help people through technology.”

Lim, who has two years left to complete his medical degree, says, “I hope to combine parts of AI and medicine and apply that to my future practice, whether through industry research or other collaborations. That’s where I want to take my career.”

“We are growing our MEng program in part because there are so many exciting possibilities out there for graduates,” says Professor Deepa Kundur, chair of the department of electrical and computer engineering.

“Lim and Yassine’s internships at Perimeter demonstrate how quickly hands-on training can translate into real-world results.”

Note: Technologies referenced in this article are currently not available for sale in the United States and have not been evaluated by the FDA.
