CREATE-MIA Summer School 2014
May 27-30, 2014
May 27, 2014, 1:00 PM to May 30, 2014, 5:00 PM
Where: May 27-28 in McConnell 437, May 29-30 in Trottier 3120
All are welcome to attend the research talks.
The hackathon is open to CREATE-MIA trainees and invitees only.
The second annual CREATE-MIA Summer School will be held May 27-30 at McGill. The event will begin with a talk by Dr. Haz-Edine Assemlal of NeuroRx Research, one of CREATE-MIA's industrial partners, continue with research presentations by the CREATE-MIA trainees, and conclude with a collaborative hackathon.
All are welcome to attend Dr. Assemlal's talk and the CREATE-MIA research presentations.
Details will be added to this page as they become available.
"Image processing in clinical trials"
Dr. Haz-Edine Assemlal, NeuroRx Research
Abstract: Medical imaging has become a major tool in clinical trials since it enables rapid diagnosis with visualization and quantitative assessment. We will briefly talk about some of the processing techniques used in neuroimaging to perform adequate quality control and advanced analysis.
CREATE-MIA Research Presentations
Mahsa Dadar, McGill, "Detecting White Matter Hyperintensities Using Markov Random Fields"
Jawad Dahmani, ÉTS, "Non Local Spatial and Angular Matching: a new denoising technique for diffusion MRI"
Chris Donnelly, McGill, "Learning feature-based matching for image registration"
Halleh Ghaderi, McGill, TBA
Houssem-Eddine Gueziri, ÉTS, "Visualizing positional uncertainty in freehand 3D ultrasound"
Shahab Kadkhodaeian Bakhtiari, McGill, "Natural versus Unnatural Stimulation of the Early Visual System"
Sepideh Movaghati, McGill, "Spectrotemporal analysis of the visual cortex response to natural images and their phase-shuffled versions"
Emmanuel Piuze-Phaneuf, McGill, "Restoration of Fragmentary Cardiac Diffusion Volumes"
Samuel St-Jean, Université de Sherbrooke, "Non Local Spatial and Angular Matching: a new denoising technique for diffusion MRI"
Etienne St-Onge, Université de Sherbrooke, "Introduction to Brain Tractography"
Babak Samari, McGill, "Dynamic 3D Facial Expression Capture and Analysis"
Zeshan Yao, McGill, "Monte Carlo simulation of light propagation in heterogeneous media and filling of missing data in a 2D image"
Maor Zaltzhendler, McGill, TBA
Azar Zandifar, McGill, "A unified assessment of fully automated hippocampus segmentation methods"
The goal of this hackathon is to develop collaborative, leadership, project management, and coding skills in a relaxed, stimulating environment by tackling a challenging multi-faceted image analysis problem. Data sensing, processing and visualization of medical images are key aspects of the project.
Sub-groups will focus on specific aspects of these modules.
We will attempt to develop an educational augmented reality solution for the visualization and navigation of internal anatomy. The ultimate goal is to have someone stand in front of the sensor camera and to project medical images (MRI, CT, or others) onto the RGB image at the location where that person is pointing, while automatically labelling the relevant biological structures. For instance, a person pointing a finger at his/her forehead would see an MRI image overlaid onto his/her face, showing the parts of the brain located around that area.
Other interactive methods could eventually be used to control the depth and radius of the display, or the tissue type to display.
The trainees will organize themselves into different teams based on skills and areas of specialization. The results of the hackathon will be presented to a panel consisting of CREATE-MIA program faculty as well as several industrial partners.
This team will be in charge of the 3D sensing (RGB + depth) with a PrimeSense short range camera. They will acquire the data, process it (e.g. computing surface normals), store it, and then visualize it.
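As a rough illustration of this team's pipeline, the sketch below back-projects an organized depth map to a 3D point grid and estimates per-pixel surface normals from the cross product of local tangent vectors. The intrinsic parameters (fx, fy, cx, cy) and the numpy-based layout are illustrative assumptions, not the PrimeSense SDK's actual API:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an organized depth map (H x W, in metres) to a
    grid of 3D camera-frame points, assuming pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack([x, y, depth])  # shape (H, W, 3)

def estimate_normals(points):
    """Estimate per-pixel surface normals as the cross product of
    horizontal and vertical tangent vectors (central differences)."""
    dx = np.gradient(points, axis=1)  # tangent along image rows
    dy = np.gradient(points, axis=0)  # tangent along image columns
    n = np.cross(dx, dy)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm == 0, 1, norm)  # unit normals
```

For a flat wall at constant depth, the estimated normals point straight along the optical axis, which is a quick sanity check before feeding real sensor data through.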
This team will be in charge of the handling and visualization of medical images. They will work on loading the data and providing an interface for other teams to use if they need to access the medical volumes. They will also be in charge of the visualization of the medical volumes, using volume rendering techniques (raycasting, volume slicing, etc.).
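The volume rendering techniques mentioned above can be sketched in their simplest forms: volume slicing extracts a single plane from the stack, and maximum-intensity projection is one of the simplest ray-casting variants (each parallel ray keeps its brightest voxel). The (Z, Y, X) axis convention here is an assumption for illustration:

```python
import numpy as np

def axial_slice(volume, z):
    """Return the axial slice at index z of a (Z, Y, X) volume."""
    return volume[z]

def mip_render(volume, axis=0):
    """Maximum-intensity projection along one axis: the brightest
    voxel encountered by each parallel ray becomes the pixel value."""
    return volume.max(axis=axis)
```

A real interface for the other teams would wrap these behind functions that also expose voxel spacing and orientation, since medical volumes are rarely isotropic.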
This team will be in charge of segmenting and detecting different body parts from the point cloud data produced by the sensor team. They will also be in charge of determining where a person is pointing.
This team will be composed of hackers migrating from other teams once they are done developing their respective features. They will be in charge of using a pinhole camera model, with parameters estimated for the PrimeSense camera, to project the 3D medical data onto the RGB image. The result will be an augmented reality display of medical volumes overlaid on the PrimeSense RGB images.
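The pinhole projection step can be sketched as follows. The intrinsic values passed in are placeholders; in practice they would come from calibrating the PrimeSense camera:

```python
import numpy as np

def project_points(pts_3d, fx, fy, cx, cy):
    """Project 3D camera-frame points (N x 3, with z > 0) to pixel
    coordinates under a pinhole model: u = fx*x/z + cx, v = fy*y/z + cy."""
    x, y, z = pts_3d[:, 0], pts_3d[:, 1], pts_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)  # shape (N, 2)
```

A point on the optical axis projects to the principal point (cx, cy), which makes the overlay alignment easy to verify before compositing the medical volume onto the RGB frame.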
This team will be in charge of the middle ground for communication between the different teams. They will construct an interface for passing data and for accessing visualization features.