
For artificial intelligence (AI) to deliver on its promises in medical imaging, it must first be trained. Training requires large amounts of data, and manually annotating medical images is extremely time-consuming. On top of that, patient images come from a variety of sources (MRI, CT scans, etc.), which makes them harder for machines to interpret, and even the same type of image can vary greatly from one hospital to another.
To address these challenges, José Dolz, a professor in the Department of Software Engineering and IT at École de technologie supérieure (ÉTS), is developing weakly supervised machine learning methods that can also combine multiple image types. This assistive technology could help physicians with tasks such as diagnosing cancer, recommending treatment options, and monitoring disease progression. The goal? To enable machines to produce accurate diagnoses from multiple images of the same region, even when some information, such as detailed annotations, is missing.
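To give a concrete sense of what "weakly supervised" and "multi-modal" mean in practice, here is a minimal PyTorch sketch. It is not the published implementation; the modality names (t1, flair), the tiny network, and the loss weights are illustrative assumptions. The model sees only image-level labels (tumor present or absent), while consistency terms push its coarse activation maps to agree across modalities and to behave predictably under a simple flip of the input.

```python
# Minimal sketch (not the authors' implementation) of weakly supervised,
# multi-modal segmentation trained from image-level labels only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeaklySupervisedSegNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # 1x1 conv produces per-class activation maps (CAM-style)
        self.classifier = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        cam = self.classifier(self.features(x))   # B x C x H x W coarse maps
        logits = cam.mean(dim=(2, 3))             # image-level class scores
        return logits, cam

def training_step(model, t1, flair, labels, lam=0.1):
    """One step: classification loss on both modalities plus
    cross-modality and flip-equivariance consistency penalties."""
    logits_t1, cam_t1 = model(t1)
    logits_fl, cam_fl = model(flair)

    # Weak supervision only: image-level labels (tumor present / absent)
    cls_loss = F.cross_entropy(logits_t1, labels) + F.cross_entropy(logits_fl, labels)

    # Cross-modality consistency: both modalities should localize the same region
    consist = F.mse_loss(torch.softmax(cam_t1, 1), torch.softmax(cam_fl, 1))

    # Equivariance check: flipping the input should flip the activation map
    _, cam_t1_flip = model(torch.flip(t1, dims=[3]))
    equiv = F.mse_loss(torch.flip(cam_t1_flip, dims=[3]), cam_t1)

    return cls_loss + lam * (consist + equiv)

# Toy usage with random tensors standing in for registered MRI slices
model = WeaklySupervisedSegNet()
t1 = torch.randn(4, 1, 64, 64)
flair = torch.randn(4, 1, 64, 64)
labels = torch.randint(0, 2, (4,))
loss = training_step(model, t1, flair, labels)
loss.backward()
```

The point of the sketch is the shape of the training signal: no pixel-wise masks are ever provided, yet the consistency terms nudge the network toward localization maps that are stable across image types and simple transformations.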
The researcher and his team trained their models on publicly available images of brain tumors, notably glioblastomas, highly aggressive tumors that are the most common and deadliest form of brain cancer in adults. The models proved to be very effective, and the results also showed that the approach can be applied to other types of multi-modal images, for example for prostate cancer monitoring.
These models are not yet available in doctors’ offices, but considerable progress has been made, and work is under way to reach the point where clinical research can begin. The next step is for machines to show a little more humility and learn to admit when they don’t know something! For doctors to integrate them into their practice, they will need to be reliable regardless of the clinical situation.
Reference
Gaurav Patel and José Dolz (2022). Weakly supervised segmentation with cross-modality equivariant constraints. Medical Image Analysis, vol. 77. https://doi.org/10.1016/j.media.2022.102374