Past event

Dr Raghavendra Selvan (Copenhagen): "Representation Learning for Multi-Modal Machine Learning", School of Psychology and Neuroscience Seminar

Abstract: The recent class of machine learning (ML) algorithms driving artificial intelligence (AI) is primarily based on deep learning (DL). These methods are adept at learning compact representations of vast amounts of data such as text, images, and videos. The compact representations can then be used in downstream tasks such as text comprehension, image classification, or object tracking. Combining multiple types (modalities) of data into a common representation space could allow the discovery of interactions between data that might not be possible when studying any single modality alone. In this talk, we will look at how DL models build representations, and how this can be used to build more complex models based on multiple data modalities.
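The idea of a common representation space for multiple modalities can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the speaker's method): two modality-specific projection heads map image and text features into one shared space, and an InfoNCE-style contrastive loss scores matched pairs against mismatched ones. All names, dimensions, and the random "features" are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Project modality-specific features into the shared space and L2-normalize."""
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# Toy batch: 4 paired samples; image features (dim 8) and text features (dim 6)
img_feats = rng.normal(size=(4, 8))
txt_feats = rng.normal(size=(4, 6))

# Modality-specific projection heads into a common 5-dimensional space
W_img = rng.normal(size=(8, 5))
W_txt = rng.normal(size=(6, 5))

z_img = encode(img_feats, W_img)
z_txt = encode(txt_feats, W_txt)

# Pairwise cosine similarities between image and text embeddings;
# the diagonal holds the matched (paired) samples
sim = z_img @ z_txt.T  # shape (4, 4)

# InfoNCE-style contrastive loss: matched pairs should score higher
# than mismatched pairs within the batch
tau = 0.1  # temperature (assumed value)
logits = sim / tau
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(f"contrastive loss: {loss:.3f}")
```

Minimizing such a loss (here the projections are fixed; in practice they are trained) pulls paired image and text embeddings together in the shared space, which is one common route to the cross-modal interactions mentioned in the abstract.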

Bio: https://raghavian.github.io/

Related papers:

1. Nicklas Boserup, Raghavendra Selvan. Efficient Self-Supervision using Patch-based Contrastive Learning for Histopathology Image Segmentation (2023)
https://arxiv.org/abs/2208.10779
2. Justinas Antanavicius, Roberto Leiras Gonzalez, Raghavendra Selvan. Identifying partial mouse brain microscopy images from Allen reference atlas using a contrastively learned semantic space (2022)
https://arxiv.org/abs/2109.06662
3. Klas Rydhmer, Raghavendra Selvan. Dynamic β-VAEs for quantifying biodiversity by clustering optically recorded insect signals (2021)
https://arxiv.org/abs/2102.05526
4. Raghavendra Selvan, Erik B Dam, Nicki Skafte Detlefsen, Sofus Rischel, Kaining Sheng, Mads Nielsen, Akshay Pai. Lung Segmentation from Chest X-rays using Variational Data Imputation (2020)