Speakers

External speakers and contributors

Antoine Boutet, Inria, France

 

Antoine Boutet is an assistant professor at INSA Lyon and has been a member of the Inria Privatics research group since 2017. Before that, he worked at Mediego, a startup in personalised recommendation systems for the Web, and then as a postdoctoral researcher on private web search and location privacy. He defended his PhD at Inria in 2013, focused on decentralisation and privacy in personalised recommendation systems.
Before his doctorate, he worked on Mobile IPv6 standardisation at ETSI (European Telecommunications Standards Institute) and Inria. His research interests encompass privacy-enhancing technologies, distributed systems and machine learning.

Abstract of the presentation together with Carole Frindel:

Privacy in Machine Learning: From Centralized to Federated Approaches for Medical Data

Machine learning is increasingly used in the healthcare field. This massive deployment of ML, which exploits sensitive data and medical images, raises privacy concerns.

This course will explore centralized and federated learning approaches to address these privacy concerns.
Centralized learning involves training ML models on a central server using all available data. This centralisation of data raises privacy concerns due to the potential for data leakage or misuse. In this context, the course will survey minimization and transformation methods to mitigate these risks. Data minimization reduces the amount of sensitive information in a dataset by removing or grouping certain fields or records, while preserving the overall utility of the data. Transformation techniques change the values of certain data elements to make them less identifiable while preserving the statistical properties of the data.
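To make these two families of techniques concrete, here is a minimal sketch (not taken from the course material) of data minimization and transformation on a toy patient table; the field names, bin widths and noise scale are invented for illustration:

```python
# Toy illustration of data minimization and transformation.
import random

records = [
    {"age": 34, "zip": "69100", "diagnosis": "A"},
    {"age": 37, "zip": "69008", "diagnosis": "B"},
    {"age": 61, "zip": "69100", "diagnosis": "A"},
]

# Minimization: drop direct identifiers and generalize quasi-identifiers
# (exact age -> 10-year bin, full ZIP code -> 2-digit prefix).
def minimize(rec):
    lo = (rec["age"] // 10) * 10
    return {
        "age_bin": f"{lo}-{lo + 9}",
        "zip_prefix": rec["zip"][:2],
        "diagnosis": rec["diagnosis"],
    }

# Transformation: perturb a numeric field with bounded random noise,
# keeping its overall statistics roughly intact (a real system would
# use calibrated noise, e.g. Laplace noise for differential privacy).
def perturb_age(rec, scale=2.0):
    out = dict(rec)
    out["age"] = rec["age"] + random.uniform(-scale, scale)
    return out

print([minimize(r) for r in records])
```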
A newer ML alternative, federated learning, makes it possible to decentralize the training of a model across all participants (e.g., different bio centers, patients) without them having to reveal their data to others. In this iterative scheme, each participant periodically receives a model that they train locally on their own data before sending the model update to the server, which aggregates all updates. This local training, which takes local data specificities into account, can be leveraged to provide personalized healthcare. However, the exchange of model updates may still reveal information about the data used to train the model.
The course will introduce the advantages and limitations of federated learning.
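The iterative scheme described above can be sketched in a few lines of NumPy (a minimal FedAvg-style toy example with a linear model; the client data, number of centers and hyper-parameters are invented for illustration, not taken from the course):

```python
# Toy federated averaging: each center trains locally; only updates travel.
import numpy as np

rng = np.random.default_rng(0)

# Each "center" holds its own private data.
w_true = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(30):                        # communication rounds
    updates = []
    for X, y in clients:                   # local training on private data
        w = w_global.copy()
        for _ in range(5):                 # a few local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        updates.append(w)
    w_global = np.mean(updates, axis=0)    # server aggregates the updates

print(w_global)  # approaches w_true = [2, -1]
```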

 

Paul De Brem, science journalist, Paris

Paul de Brem is a professional anchorman specialising in space, science and technology events. Over the last 15 years, he has hosted more than 500 symposiums and debates for various clients such as CNRS, Procter & Gamble, Région Île-de-France, Inserm, EDF, Sanofi, Institut Pasteur, the French Ministry of Research, CNES, etc.

In 2018, he hosted a two-day ministerial conference in English dedicated to higher education, with 48 ministers from 4 continents.

He also leads communication courses for professionals: Media-training, Powerful PowerPoint, Writing for the Internet, etc. for clients such as CNES, Banque de France, Orange, the ENA (Ecole nationale d’administration), etc.

He has been leading courses in scientific journalism at Sorbonne Université for 8 years. Previously, as a science editor for television and printed media, he actively collaborated with LCI, France 2, France 24, le Journal du dimanche, L’Express, etc.

 

Stefan Duffner, LIRIS, Lyon, France

Stefan Duffner received a Bachelor's degree in Computer Science from the University of Applied Sciences Konstanz, Germany in 2002 and a Master's degree in Applied Computer Science from the University of Freiburg, Germany in 2004.
He performed his dissertation research at Orange Labs in Rennes, France, on face image analysis with statistical machine learning methods, and in 2008, he obtained a Ph.D. degree in Computer Science from the University of Freiburg.
He then worked for 4 years as a post-doctoral researcher at the Idiap Research Institute in Martigny, Switzerland, in the field of computer vision and object tracking using Markov Models and efficient sampling methods.
Since 2012, Stefan Duffner has been an associate professor in the IMAGINE team of the LIRIS research lab at the National Institute of Applied Sciences (INSA) of Lyon, France.
His main research interests concern machine learning and, in particular, neural networks for weakly supervised or unsupervised learning with little data, continual learning and federated learning, as well as algorithms to reduce the complexity and energy consumption of these models.


Kamel Guerda, IDRIS-CNRS, Paris 

Kamel Guerda joined the Institute for Development and Resources in Scientific Computing (IDRIS - CNRS) in 2021 as an AI engineer. In the framework of the PNRIA, he works with research teams and brings his expertise in software development and in the use of the Jean Zay supercomputer. He has been involved in a wide range of projects such as mathematical optimal transport, natural language processing, detection of risk situations for vulnerable people, and the application of neural networks in medical imaging.

 

 Thibault Pelletier, Kitware, France

Thibault Pelletier joined Kitware Europe in September 2019 as a Lead Developer in the Software Solution team.
Thibault obtained a double MSc degree, in engineering from Arts et Métiers ParisTech in France and in mechatronic system design from Lancaster University in the UK.
He then worked for 9 years at ECA Robotics in the field of image processing, machine learning and navigation algorithms for autonomous underwater vehicles.
At Kitware, Thibault has been working as an R&D engineer and Lead Developer on numerous medical imaging projects based on 3D Slicer, ITK, RTK and MONAI.

 Presentation abstract: Hands-on introduction to MONAI (Medical Open Network for Artificial Intelligence)

Project MONAI was created to accelerate the pace of research and development of deep learning for medical imaging by offering a common software foundation and a vibrant community. In this hands-on session, we will see what MONAI is and how you can leverage the library to build state-of-the-art machine learning models for medical imaging.

 

Nicolas Thome, Sorbonne University, France

Nicolas Thome is a full professor at Sorbonne University (Paris).
His research interests include machine learning and deep learning for understanding low-level signals, e.g. vision, time series, acoustics, etc. He also explores solutions for combining low-level and higher-level data for multi-modal data processing. His current application domains are essentially targeted towards healthcare, physics and autonomous vehicles.  
He is involved in several French, European and international collaborative research projects on artificial intelligence and deep learning.
 
Oral presentation abstract
 
Transformers for medical image segmentation
 
Transformers are attention models that were introduced in natural language processing (NLP) and are nowadays extensively used for low-level data, e.g. images or time series. I present the main features of transformers and position them with respect to traditional neural networks with stronger inductive biases, e.g. ConvNets or RNNs.
For medical image analysis, a key feature of transformers is their ability to model long-range interactions, which makes it possible to include full contextual information in images. I present seminal attempts at using transformers for medical image analysis, especially for semantic segmentation. A major practical bottleneck in transformers is the computation of attention, which is quadratic in the number of input tokens and poses major challenges for medical image analysis, especially when dealing with 3D inputs. I discuss recent approaches to tackle this problem.
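The quadratic cost mentioned above is visible in a minimal NumPy sketch of (single-head) scaled dot-product attention: the intermediate score matrix Q Kᵀ has shape (N, N) for N tokens. The sizes below are arbitrary toy values:

```python
# Minimal scaled dot-product attention; the (N, N) score matrix is the
# source of the quadratic cost in the number of tokens N.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                  # shape (N, N)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over keys
    return weights @ V                             # shape (N, d)

rng = np.random.default_rng(0)
N, d = 6, 4                                        # 6 tokens of dimension 4
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # (6, 4), via an intermediate (6, 6) score matrix
```

For a 3D medical volume tokenized into patches, N grows with the cube of the resolution, which is why the quadratic attention cost becomes a major bottleneck.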

Denis Trystram, INP Grenoble, Grenoble Alpes University

 

Denis Trystram is a professor in Computer Science at Grenoble INP, Grenoble Alpes University. He is an honorary member of the Institut Universitaire de France. His past research has focused on the design and analysis of algorithms for efficient resource management in distributed platforms. He contributed in particular to the optimization of energy consumption on large-scale computing systems and data centers.

In recent years, he has focused his research mainly on the ecological impacts of the digital transition. He holds a chair in the Grenoble Institute of Artificial Intelligence (MIAI), on the theme of edge computing and sober learning, where he co-leads a group that questions the practices of the AI community with respect to the environment. He is a member of the GDS EcoInfo group and often takes part in public outreach on ecological issues related to digital technology.

Local speakers and contributors

Olivier Bernard, Creatis laboratory, Lyon, France

Olivier Bernard received his Electrical Engineering degree and Ph.D. from the University of Lyon (INSA), France, in 2003 and 2006, respectively. He was a postdoctoral fellow with the Biomedical Imaging Group at the École Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2007. Currently, he is a Professor with the University of Lyon (INSA) and the CREATIS laboratory in France. He is also the head of the Myriad research team, which specializes in medical image analysis, simulation, and modeling. His current research interests focus on image analysis through deep learning techniques, with applications in cardiovascular imaging, blood flow imaging, and population representation. Prof. Bernard was also an Associate Editor of the IEEE Transactions on Image Processing.

 

Christian Desrosiers, École de Technologie Supérieure, Canada

Prof. Desrosiers obtained a Ph.D. in Applied Mathematics from Polytechnique Montréal in 2008 and was a postdoctoral researcher at the University of Minnesota with Prof. George Karypis. In 2009, he joined École de technologie supérieure (ÉTS) as a professor in the Department of Software and IT Engineering. He is co-director of the Laboratoire d'imagerie, de vision et d'intelligence artificielle (LIVIA) and a member of the REPARTI research network. He has over 100 publications in the fields of machine learning, image processing, computer vision and medical imaging, and has served on the scientific committees of several important conferences in these fields.

Presentation abstracts

Generative, auto-encoder and adversarial methods for medical imaging (together with Olivier Bernard)

Generative models are now well-established tools in medical imaging, successfully applied to dimensionality reduction, generative processes, and domain adaptation. In this talk, we will present the two most popular techniques and their variants: auto-encoders and GANs. For each case, we will provide a clear mathematical background, along with a large set of examples.
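As a small aside on the auto-encoder principle discussed above: the optimal *linear* auto-encoder with a k-dimensional bottleneck is given by PCA (truncated SVD), which deep auto-encoders generalize with nonlinear encoders and decoders. A toy NumPy sketch (synthetic data, not from the talk):

```python
# A linear auto-encoder via truncated SVD: encode to a 2-D code,
# decode back to 8-D, and measure the reconstruction error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8))  # rank-2 data in 8-D

Xc = X - X.mean(axis=0)                   # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
encode = lambda x: x @ Vt[:k].T           # 8-D sample -> 2-D code
decode = lambda z: z @ Vt[:k]             # 2-D code -> 8-D reconstruction

recon = decode(encode(Xc))
err = np.mean((recon - Xc) ** 2)
print(round(err, 6))  # ~0: a 2-D bottleneck suffices for rank-2 data
```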

 Basics of deep learning - Part I

 

In the first part of the tutorial, we start by introducing the basic element of neural networks, the neuron, and explain how it relates to linear regression and classification. We then present the logistic regression and Perceptron models for binary classification, describe their corresponding loss functions, and show how these models can be trained with a stochastic gradient descent algorithm. We end the first part of the tutorial by explaining how logistic regression and the Perceptron can be extended to multi-layer networks and multiclass classification tasks.
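The pipeline described above, a single neuron (logistic regression) trained with stochastic gradient descent, can be sketched in NumPy on a toy binary classification problem (synthetic data and hyper-parameters, independent of the tutorial materials):

```python
# Logistic regression (one neuron) trained by SGD on two Gaussian blobs.
import numpy as np

rng = np.random.default_rng(0)

# 100 samples per class, labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, size=(100, 2)),
               rng.normal(+1, 1, size=(100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(y)):   # stochastic: one sample at a time
        p = sigmoid(X[i] @ w + b)       # forward pass
        g = p - y[i]                    # gradient of the logistic loss
        w -= lr * g * X[i]              # SGD update
        b -= lr * g

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(acc)
```

The gradient `p - y` falls out of differentiating the logistic (cross-entropy) loss with respect to the pre-activation, the step derived in the tutorial.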

 

 

 

Basics of deep learning - Part II

 

The second part of the tutorial first covers the fundamental principles of training neural networks, including backpropagation and mini-batch stochastic gradient descent. We then present the main activation functions for neural networks, describing their respective advantages and drawbacks. We finish the tutorial by introducing convolutional neural networks (CNN), presenting their key properties, and illustrating their use in different image-based applications.
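The convolution operation at the heart of CNNs can be illustrated with a few lines of NumPy (a toy example with an invented image and filter, independent of the tutorial materials). Note that deep learning libraries implement cross-correlation, as below:

```python
# A "valid"-padding 2-D convolution (cross-correlation) of a
# single-channel image with a small filter, followed by a ReLU.
import numpy as np

def conv2d(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 7))
image[:, 3] = 1.0                          # a vertical line in the image

feat = conv2d(image, np.array([[1.0, 0.0, -1.0]]))  # edge-detector filter
print(feat[0])                             # [ 0. -1.  0.  1.  0.]
relu = np.maximum(feat, 0)                 # ReLU activation
print(relu[0])                             # [0. 0. 0. 1. 0.]
```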

 

 

Jose Dolz, École de Technologie Supérieure, Canada

 

 

Prof. Jose Dolz is currently an Associate Professor at École de technologie supérieure (ÉTS), Montreal. His current research focuses on deep learning, medical imaging, optimization and learning strategies with limited supervision. He has authored over 80 fully peer-reviewed papers, many of which are published in the top venues in medical imaging (MICCAI/MedIA/TMI/IPMI/NeuroImage), computer vision (CVPR) and machine learning (ICML, NeurIPS), and has organized 5 tutorials on deep learning with limited supervision (MICCAI'19, MICCAI'20, MICCAI'21, MICCAI'22 and ICPR'22). Jose regularly serves on the Program Committees of MICCAI and MIDL, and has been recognized as an Outstanding Reviewer at prestigious conferences (ECCV'20, CVPR'21, CVPR'22, NeurIPS'22).

Presentation abstracts

 

Weakly and Semi-supervised learning:

 

Deep convolutional neural networks (CNNs) currently dominate semantic segmentation problems, yielding ground-breaking results when full supervision is available, in a breadth of computer vision and medical imaging applications. A major limitation of such fully supervised models is that they require very large amounts of reliable training data, i.e., accurately and densely labeled (annotated) images built with extensive human labor and expertise. This is not feasible in many important problems and applications. In medical imaging, for instance, supervising semantic segmentation requires scarce clinical-expert knowledge and labor-intensive, pixel-level annotations of a large number of images, a difficulty further compounded by the complexity of the data, e.g., 3D, multi-modal or temporal data.

Weakly and semi-supervised learning methods, which do not require full annotations and scale up to large problems and data sets, are currently attracting substantial research interest in both the CVPR and MICCAI communities as a way to alleviate the scarcity of labeled data. The general purpose of these methods is to mitigate the lack of annotations by leveraging unlabeled data with priors, either knowledge-driven (e.g., anatomy priors) or data-driven (e.g., domain adversarial priors). For instance, semi-supervision uses both labeled and unlabeled samples, weak supervision uses uncertain (noisy) labels, and domain adaptation attempts to generalize the representations learned by CNNs across different domains (e.g., different modalities or imaging protocols).

In semantic segmentation, a large body of very recent work has focused on training deep CNNs with very limited and/or weak annotations, for instance scribbles, image-level tags, bounding boxes, points, or annotations limited to a single domain of the task (e.g., a single imaging protocol). Several of these works showed that adding specific priors in the form of unsupervised loss terms can achieve outstanding performance, close to full-supervision results, while using only a fraction of the ground-truth labels.
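The idea of combining a supervised loss with an unsupervised prior term can be sketched in NumPy. Below, a hypothetical total loss adds cross-entropy on labeled samples to entropy minimization on unlabeled samples, one common data-driven prior; the model outputs, labels and weight `lam` are invented for illustration:

```python
# Semi-supervised total loss = supervised CE + lam * unsupervised prior.
import numpy as np

def cross_entropy(probs, labels):
    # Mean negative log-probability of the true class.
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def entropy(probs):
    # Mean Shannon entropy of the predicted distributions; minimizing it
    # pushes the model toward confident predictions on unlabeled data.
    return -np.mean(np.sum(probs * np.log(probs + 1e-12), axis=1))

# Hypothetical softmax outputs of a model on labeled/unlabeled samples.
p_labeled = np.array([[0.9, 0.1], [0.2, 0.8]])
y_labeled = np.array([0, 1])
p_unlabeled = np.array([[0.6, 0.4], [0.5, 0.5]])

lam = 0.1
loss = cross_entropy(p_labeled, y_labeled) + lam * entropy(p_unlabeled)
print(round(loss, 4))
```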

 

Few-Shot learning

 

Deep learning models dominate pattern recognition, machine learning and computer vision, and have achieved human-level performance in various tasks, such as image classification and segmentation. The success and unprecedented performance of state-of-the-art learning models is often achieved via training on large-scale labeled data sets. Nevertheless, modern models still present two important limitations. First, they still encounter difficulties generalizing to novel tasks (e.g., new image classes) unseen during training, given only a few labeled instances for these new tasks. Second, trained models tend to be poorly calibrated, assigning high confidence to incorrect predictions. Few-shot learning and calibration have recently emerged as powerful strategies to address these issues, triggering substantial research efforts and large numbers of publications within the machine learning, computer vision and medical imaging communities. In this talk, we will introduce the basic concepts of these techniques, review the relevant literature and provide insights for future work on these topics.
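One popular few-shot strategy, nearest-prototype classification (in the spirit of prototypical networks), can be sketched in NumPy: class prototypes are the means of a few labeled "support" embeddings, and a query is assigned to the closest prototype. The embeddings below are synthetic stand-ins for features from a pretrained backbone:

```python
# 5-shot, 2-way nearest-prototype classification on toy embeddings.
import numpy as np

rng = np.random.default_rng(0)

# Support sets for two novel classes, in a 16-D embedding space.
support_a = rng.normal(0.0, 1.0, size=(5, 16))   # class 0
support_b = rng.normal(3.0, 1.0, size=(5, 16))   # class 1
prototypes = np.stack([support_a.mean(0), support_b.mean(0)])

query = rng.normal(3.0, 1.0, size=16)            # drawn near class 1
dists = np.linalg.norm(prototypes - query, axis=1)
pred = int(np.argmin(dists))                     # nearest prototype wins
print(pred)
```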

 

Nicolas Duchateau, CREATIS laboratory, Lyon, France

Nicolas Duchateau is an Associate Professor (Maître de Conférences) at the Université Lyon 1 and the CREATIS lab in Lyon, France. His research focuses on the statistical analysis of medical imaging data to better understand disease onset and evolution, and to a certain extent computer-aided diagnosis. On the technical side, it mainly covers post-processing through statistical atlases and machine learning techniques, as well as dedicated pre-processing and validation, including the generation of synthetic databases. On the clinical/applicative side, it covers the study of cardiac function in heart failure populations, through routine imaging data and advanced 2D/3D shape, motion and deformation descriptors.

 

Nicolas Ducros, CREATIS Laboratory, Lyon, France

Nicolas Ducros has been an Associate Professor in the Electrical Engineering Department of Lyon University and with the Biomedical Imaging Laboratory CREATIS since 2014. His research interests include signal and image processing and applied inverse problems, with particular emphasis on single-pixel imaging and spectral computed tomography. His recent work focuses on deep learning for image reconstruction and, in particular, on network architectures that can be interpreted as conventional reconstruction methods. He is an Associate Member of the IEEE Bio Imaging and Signal Processing Technical Committee.

Presentation abstract: Deep learning for image reconstruction

In this lesson we will study the reconstruction of an image from a sequence of a few linear measurements corrupted by noise. This generic problem has many biomedical applications, such as computed tomography, positron emission tomography and optical microscopy. We will first review classical approaches that rely on the optimisation of hand-crafted objective functions. Then, we will introduce modern data-driven approaches that bridge the gap between the former approaches and deep learning. In particular, we will examine unrolled networks that rely on the computation of traditional solutions (e.g. pseudo-inverse, maximum a posteriori). Unrolled networks can be interpreted as iterative schemes optimised with respect to a particular database. We will discuss network variants and learning strategies. Finally, we will focus on an optical problem in which the setup acquires some coefficients of the Hadamard transform of the image of the scene. We will present reconstruction results from experimental datasets acquired under different noise levels.

The lesson will be accompanied by a practical session in which we will address the problem of limited view computed tomography.
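The unrolling idea described in the abstract can be sketched in NumPy: a fixed number of gradient steps on ||Ax - b||² becomes a K-layer network whose per-layer step sizes could be learned from data (here they are simply fixed). The operator, measurements and sizes below are toy values, not from the lesson:

```python
# An unrolled gradient-descent "network" for a toy linear inverse problem.
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 10
A = rng.normal(size=(m, n))                  # forward (measurement) operator
x_true = rng.normal(size=n)
b = A @ x_true + 0.01 * rng.normal(size=m)   # noisy measurements

def unrolled_net(b, K=300, steps=None):
    # Each loop iteration is one "layer"; in a trained unrolled network
    # the step sizes (and possibly filters) are optimized on a database.
    steps = steps if steps is not None else [0.02] * K
    x = np.zeros(n)
    for k in range(K):
        x = x - steps[k] * A.T @ (A @ x - b)   # gradient step on ||Ax - b||^2
    return x

x_hat = unrolled_net(b)
rel = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel)  # small relative reconstruction error
```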

 

 

Carole Frindel, CREATIS laboratory, Lyon, France

Carole Frindel is an Associate Professor at INSA Lyon and the CREATIS laboratory in Lyon, France. Her research focuses on computational medical imaging, with a particular interest in predicting the outcome of stroke. This task is complex because the lesion visible in imaging continues to evolve for up to one month. To this end, she develops new machine and deep learning approaches for the fusion, encoding and simulation of multimodal data, striving to bridge the gap between theory and applications.

Abstract of the presentation together with Antoine Boutet:

Privacy in Machine Learning: From Centralized to Federated Approaches for Medical Data

(The full abstract appears under Antoine Boutet above.)

Thomas Grenier, CREATIS laboratory, Lyon, France

 

 

Dr. Thomas Grenier is an Associate Professor at the INSA Lyon Electrical Engineering department and at the CREATIS lab in Lyon, France.

His research focuses on the longitudinal analysis of medical data to study evolution, for instance of multiple sclerosis lesions or of functional activity (muscle, hydrocephalus). Most of these studies involve a segmentation task and dedicated pre- and post-processing steps. Clustering (spatio-temporal mean shift), semi-supervised (multi-atlas with machine learning) or fully supervised (DNN) schemes are used to solve such problems while taking their specific constraints into account.

 

Pierre-Marc Jodoin, University of Sherbrooke, Canada

 

Pierre-Marc Jodoin is a full professor at the University of Sherbrooke, Canada, where he has worked since 2007. He specializes in the development of novel machine learning and deep learning techniques applied to computer vision and medical imaging. He mostly works in video analytics and in brain and cardiac image analytics. He is the co-director of the Sherbrooke AI platform and co-founder of the medical imaging company Imeka.ca, which specializes in MRI brain image analytics. Website: http://info.usherbrooke.ca/pmjodoin/

 

Hervé Lombaert, École de Technologie Supérieure, Canada

 

Hervé Lombaert is a Professor at ETS Montreal, Canada, where he holds a Canada Research Chair in Shape Analysis in Medical Imaging. His research focuses on the statistics and analysis of shapes in the context of machine learning and medical imaging. His work on graph analysis has impacted the performance of several applications in medical imaging, from the early image segmentation techniques with graph cuts, to recent surface analysis with spectral graph theory and graph convolutional networks. Hervé has authored over 60 papers, 5 patents, and has presented over 20 invited talks. He had the chance to work in multiple centers, including Microsoft Research (Cambridge, UK), Siemens Corporate Research (Princeton, NJ), Inria Sophia-Antipolis (France), McGill University (Canada), and the University of Montreal (Canada). His research has also received several awards, including the Erbsmann Prize in Medical Imaging.

More at https://profs.etsmtl.ca/hlombaert

Oral presentation abstract

Geometric deep learning - Examples on brain surfaces

How can one analyze complex shapes, such as the highly folded surface of the brain? This talk will show how spectral shape analysis can benefit general problems where data fundamentally lives on surfaces. Here, we exploit spectral coordinates derived from the Laplacian eigenfunctions of shapes. Spectral coordinates have the advantage over Euclidean coordinates of being geometry-aware and of parameterizing surfaces explicitly. This change of paradigm, from Euclidean to spectral representations, enables a classifier to be applied *directly* on surface data, via spectral coordinates. Brain matching and learning of surface data will be shown as examples. The talk will focus, first, on spectral representations of shapes, with an example on brain surface matching; second, on the basics of geometric, or spectral, deep learning; and finally, on the learning of surface data, with an example on automatic brain surface parcellation.
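The spectral coordinates mentioned above are eigenvectors of the graph Laplacian of a surface mesh. A minimal NumPy sketch (a toy path graph standing in for a brain-surface mesh): the first nontrivial eigenvector (the Fiedler vector) varies smoothly along the shape, behaving like a geometry-aware coordinate:

```python
# Laplacian eigenfunctions of a toy "mesh" graph (a 10-vertex path).
import numpy as np

N = 10
A = np.zeros((N, N))
for i in range(N - 1):                 # adjacency of consecutive vertices
    A[i, i + 1] = A[i + 1, i] = 1.0

L = np.diag(A.sum(axis=1)) - A         # combinatorial Laplacian L = D - A
eigvals, eigvecs = np.linalg.eigh(L)

# eigvals[0] ~ 0 with a constant eigenvector; eigvecs[:, 1] is the
# Fiedler vector, monotone along the path: a smooth spectral coordinate.
fiedler = eigvecs[:, 1]
mono = bool(np.all(np.diff(fiedler) > 0) or np.all(np.diff(fiedler) < 0))
print(mono)
```

On a real triangulated surface, the same construction (often with cotangent weights) yields coordinates that follow the geometry of the shape rather than its embedding in 3-D space.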

 

Odyssée Merveille, CREATIS laboratory, Lyon, France

 

Odyssée Merveille has been an associate professor at INSA Lyon and at the CREATIS laboratory since 2019.
She received a PhD in computer science from the Université Paris-Est in 2016 and was a postdoctoral researcher at the Université de Strasbourg. Her scientific interests include inverse problems and deep learning for medical imaging, in particular for the analysis of vascular networks.

Fabien Millioz, CREATIS laboratory, Lyon, France

Fabien Millioz graduated from the École Normale Supérieure de Cachan, France, and received his M.Sc. (2005) and Ph.D. (2009) degrees, both in signal processing, from the Institut National Polytechnique de Grenoble, France. Since 2011, he has been a lecturer at the University Claude Bernard Lyon 1, and he has been a member of the CREATIS lab since 2015.

His research interests are statistical signal processing, fast acquisition, compressed sensing and neural networks.

 

Bruno Montcel, CREATIS laboratory, Lyon, France

Bruno Montcel is an Associate Professor (Maître de Conférences - HDR) at the Université Lyon 1 and the CREATIS lab in Lyon, France. His research focuses on optical imaging methods and experimental set-ups for the exploration of brain physiology and pathologies, mainly intraoperative and point-of-care hyperspectral optical imaging methods for medical diagnosis and gesture assistance.

 

Michaël Sdika, CREATIS laboratory, Lyon, France

Michaël Sdika is a member of the CREATIS lab in Lyon, France. His current research focuses on the development of new deep learning-based analysis methods for medical data. His main contributions are centered on image registration, atlas-based segmentation, structure localization and machine learning for MR images of the central nervous system.

 
