CSE 523-524 Master's Projects

Prof. Arie Kaufman

Email: ari@cs.stonybrook.edu

Office: NCS 152

CT to CT Translation with GANs

Coordinator: Konstantin Dmitriev (kdmitriev@cs.stonybrook.edu)

       Pancreatic cancer is among the most lethal cancer types, with an extremely high mortality rate. Contrast-enhanced computed tomography (CT) is one of the most common imaging procedures used for pancreatic cancer screening. The standard pancreas-specific imaging protocol involves acquiring CT images 20 seconds after contrast injection (arterial phase) and 60 seconds after contrast injection (venous phase); together, the two phases aid in differentiating benign from malignant pancreatic lesions and in assessing the surrounding organs and vessels. Computer-aided diagnosis (CAD) systems, which analyze the CT images and can potentially improve diagnostic accuracy, often require training datasets of manual annotations, such as outlines of the pancreas and pancreatic lesions. However, creating these annotations is a time-consuming and labor-intensive task, and in our dataset only one phase is annotated. Given the potential benefits of analyzing both the arterial and venous phases, it is critical to acquire a training dataset covering both. This project focuses on estimating the CT images of one phase from the corresponding CT images of the other phase using generative adversarial networks (GANs). Students are expected to have some background in computer vision and machine learning.
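
The sketch below illustrates one possible formulation of this translation task, assuming paired, co-registered arterial/venous slices and a pix2pix-style conditional GAN; the toy networks, loss weights, and tensor shapes are placeholders rather than the project's prescribed architecture.

    # Minimal pix2pix-style training step for arterial -> venous translation
    # (illustrative sketch; paired 2D slices of shape (B, 1, H, W) assumed).
    import torch
    import torch.nn as nn

    G = nn.Sequential(                      # toy generator: arterial -> venous
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1))
    D = nn.Sequential(                      # toy discriminator on (input, output) pairs
        nn.Conv2d(2, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 3, padding=1))
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    def train_step(arterial, venous):
        fake = G(arterial)
        # Discriminator: real (arterial, venous) pairs vs. generated pairs.
        d_real = D(torch.cat([arterial, venous], dim=1))
        d_fake = D(torch.cat([arterial, fake.detach()], dim=1))
        loss_d = bce(d_real, torch.ones_like(d_real)) + \
                 bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator: fool D while staying close to the true venous image (L1).
        d_fake = D(torch.cat([arterial, fake], dim=1))
        loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, venous)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

If the two phases turn out to be imperfectly registered, an unpaired formulation (e.g., CycleGAN-style cycle-consistency losses) would be a natural alternative.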


Augmented Colonoscopy

Coordinator: Shawn Mathew (shawmathew@cs.stonybrook.edu)

        Virtual colonoscopy is a noninvasive procedure for screening colorectal cancer. Optical colonoscopy is invasive, but it allows the doctor to take biopsies and remove polyps. This project focuses on bringing some of the technologies found in virtual colonoscopy into optical colonoscopy. Optical colonoscopy videos tend to contain many corrupted frames, including frames showing instruments, fluid motion, and motion blur. One task here is to detect frames that contain colonoscopy instruments or are corrupted by fluid motion. In addition, we would like to remove the instruments from these corrupted frames. We intend to use deep learning methods for both tasks (a minimal frame-classification sketch follows the requirements below).

Requirements:
Python/PyTorch proficiency and a background in computer vision and deep learning frameworks; Blender experience would be helpful.
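
As a starting point, the detection task can be framed as per-frame classification. The sketch below is a minimal illustration assuming a labeled set of frames (clean, instrument, fluid/blur) and the torchvision >= 0.13 weights API; the class labels, data loader, and backbone choice are placeholders rather than the project's prescribed method.

    # Illustrative frame-level classifier for corrupted/instrument frames.
    # Assumption: a DataLoader yields (frames, labels) with labels
    # 0 = clean, 1 = instrument visible, 2 = fluid motion / blur.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    preprocess = transforms.Compose([       # applied inside the (placeholder) Dataset
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225])])

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 3)   # three frame classes
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_epoch(loader):
        model.train()
        for frames, labels in loader:               # frames: (B, 3, 224, 224)
            loss = criterion(model(frames), labels)
            optimizer.zero_grad(); loss.backward(); optimizer.step()

Instrument removal would then build on top of such a detector, for example by masking the detected instrument region and inpainting it with a generative model.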


High Performance Stereoscopic Volume Rendering for Immersive Virtual Colonoscopy (iVC)

Coordinator: Shreeraj Jadhav (sdjadhav@cs.stonybrook.edu)

        Virtual colonoscopy (VC) is a non-invasive screening technique for colorectal cancer. A patient undergoes a CT scan, and the colon is digitally cleansed and reconstructed from the CT images. A radiologist can then virtually navigate through this reconstructed colon looking for polyps (small bumps on the colon surface), the precursors of cancer. For improved visual appearance, volume rendering of the CT data is preferred over rendering a triangular mesh, yielding a smoother and more accurate view of the colon surface. Currently, VC systems are displayed on conventional desktop screens. Our current work is to advance VC into immersive environments, developing an immersive VC (iVC) that gives the user a greater field of view and field of regard, which should lead to increased accuracy and decreased interrogation time. To accomplish this, high-resolution imagery must be generated at high frame rates, in stereo, to provide smooth motion when flying through the colon. To this end, we seek to increase the speed of the underlying volume rendering implementation. For higher efficiency, techniques such as empty-space skipping (a brief illustrative sketch is given after the list below) and multi-GPU rendering can be utilized, along with advanced techniques for improving image quality such as global illumination using ray tracing.

The major points of this project are:

  • Test the existing volume rendering implementations (e.g., OSPRay, VTK) for extension to stereoscopic 3D rendering.
  • Improve the performance of the volume rendering for iVC by accelerating the ray casting, providing multiple light sources, and performing other enhancements.
  • Interested students should have prior experience with C++ and DirectX or OpenGL, and a good grasp of computer graphics concepts. A strong interest in developing fully functional, flexible code and the ability to read and implement newer volume rendering techniques are desirable.
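
To make the empty-space skipping idea concrete, here is a small CPU sketch in Python/NumPy (chosen purely for brevity; the actual iVC renderer would be written in C++/GLSL or CUDA). It marches a single ray but jumps over bricks whose precomputed maximum value falls below the opacity threshold; the brick size, step sizes, and threshold are arbitrary illustrative choices.

    # Illustrative empty-space skipping: a coarse grid stores the max value of
    # each b^3 brick; rays take brick-sized steps through "empty" bricks and
    # fine steps elsewhere. Not a production renderer.
    import numpy as np

    def build_max_bricks(vol, b=8):
        shape = [-(-s // b) for s in vol.shape]          # ceil(size / b)
        bricks = np.zeros(shape, dtype=vol.dtype)
        for z in range(shape[0]):
            for y in range(shape[1]):
                for x in range(shape[2]):
                    bricks[z, y, x] = vol[z*b:(z+1)*b, y*b:(y+1)*b, x*b:(x+1)*b].max()
        return bricks

    def march(vol, bricks, origin, direction, b=8, step=0.5, thresh=0.1):
        p = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        while np.all(p >= 0) and np.all(p < np.array(vol.shape) - 1):
            bz, by, bx = (p // b).astype(int)
            if bricks[bz, by, bx] < thresh:
                p += d * b                    # whole brick is empty: big step
                continue
            v = vol[tuple(p.astype(int))]
            if v >= thresh:
                return v                      # first sample above the threshold
            p += d * step                     # fine stepping in occupied bricks
        return 0.0

On the GPU, the same idea is typically implemented by sampling a low-resolution min/max volume inside the ray-casting fragment shader or CUDA kernel.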

Free-form Volume Deformation for Illustrative Visualization of Pancreas

Coordinator: Shreeraj Jadhav (sdjadhav@cs.stonybrook.edu)

        Visual diagnosis of pancreatic cancer involves 3D visualization of the pancreas extracted from CT scans. The important features to visualize include cysts, the primary duct, and the pancreas surface. The geometry of the pancreas is not fixed, due to the flexible nature of the organ as well as deformations caused by cystic growth. This project investigates state-of-the-art volume deformation techniques to simplify visualization of the pancreas and to generate high-quality illustrative viewpoints that show all important features. For example, straightening the centerline of the pancreas simplifies its geometry while allowing an automatically calculated viewpoint that shows all the important features, such as the cyst and the primary duct that passes through the interior of the pancreas. Volume deformation has been used for illustrative visualization before; however, deforming a volume entails trade-offs between the preservation of geometric properties such as angles, distances, and volume. The main goal of the project is to find an existing volume deformation technique, or develop a new one, that produces desirable results.
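
As a concrete starting point, centerline straightening can be prototyped by resampling cross-sections perpendicular to the centerline into a new volume whose z-axis follows the (formerly curved) centerline. The sketch below is an illustrative NumPy/SciPy version; the centerline input, patch size, and interpolation settings are assumptions, and the naive frame construction does not control twist (a rotation-minimizing frame would be used in practice).

    # Illustrative centerline straightening via perpendicular cross-section
    # resampling. vol: 3D array; centerline: (N, 3) float array of (z, y, x)
    # voxel coordinates along the pancreas.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def straighten(vol, centerline, half_width=32):
        out = np.zeros((len(centerline), 2 * half_width, 2 * half_width), vol.dtype)
        for i, c in enumerate(centerline):
            # Tangent along the centerline (one-sided differences at the ends).
            t = centerline[min(i + 1, len(centerline) - 1)] - centerline[max(i - 1, 0)]
            t = t / np.linalg.norm(t)
            # Two vectors spanning the plane perpendicular to the tangent.
            u = np.cross(t, [1.0, 0.0, 0.0])
            if np.linalg.norm(u) < 1e-6:      # tangent nearly parallel to that axis
                u = np.cross(t, [0.0, 1.0, 0.0])
            u /= np.linalg.norm(u)
            v = np.cross(t, u)
            # Sample the cross-section on a square grid centered on the point.
            a, b = np.meshgrid(np.arange(-half_width, half_width),
                               np.arange(-half_width, half_width), indexing='ij')
            pts = c[:, None, None] + u[:, None, None] * a + v[:, None, None] * b
            out[i] = map_coordinates(vol, pts, order=1, mode='nearest')
        return out

Stacking the resampled slices yields a straightened volume in which a single viewpoint can show the duct and cysts along the whole organ.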


Developing Applications for the 1.5 Gigapixel Reality Deck

Coordinator: Saeed Boorboor (sboorboor@cs.stonybrook.edu)

        The Reality Deck is the world's first immersive gigapixel display. Comprising 416 LCD panels, it offers a combined resolution of more than 1.5 billion pixels. The displays are driven by a cluster of 18 render nodes with dual hex-core CPUs and 4 GPUs. Developing applications for distributed immersive systems is a complex and involved process: at the very least, all application input needs to be captured centrally so that the application state remains consistent across nodes. However, Unity3D and other VR libraries simplify this process and provide an abstract C# interface for implementing applications. Students who pursue this project will develop distributed visualization applications that will run on the Reality Deck. The application domain is wide open.
Some suggestions include:

  • Integrate and test the support for sonification and body tracking with Unity3D;
  • Improve the current volume renderer to support large volumes;
  • Distributed information visualization (parallel coordinates, scatterplots, and more advanced methods).
        Students interested in this project should have a very solid C#/C++ background, as well as familiarity with developing 3D applications using OpenGL and the Unity3D libraries. Knowledge of GPU compute languages (OpenCL/CUDA) is a plus and expands the application domain.

Rendering of Brain Microscopy Datasets on VR Headsets

Coordinators: Saeed Boorboor (sboorboor@cs.stonybrook.edu)
Ping Hu (pihu@cs.stonybrook.edu)

        The use of consumer-grade virtual reality technology, such as head-mounted displays (HMDs), to interact with scientific data directly in 3D can help domain experts study complex data faster and with less physical and mental strain. Advances in imaging technology have been instrumental in furthering scientific research. However, rendering scientific data in an HMD faces the challenge that imaging technology is rapidly outpacing the supporting tools in terms of raw data size. State-of-the-art imaging technologies regularly produce terabytes' worth of images, and few existing tools are capable of handling data at this scale. These challenges seriously impede interactive rendering performance in virtual reality (VR). Students interested in this project can focus on one or both of the following milestones:

  • Develop efficient volume rendering techniques for interactive visualization in VR. Specifically, efficient data structures and accelerated rendering methods are the major goals of this milestone (a small out-of-core bricking sketch is given after the requirements below).
  • A specific application of scientific visualization in an HMD VR environment is the rendering of brain microscopy data. Understanding the neural connections that underlie brain function is central to neurobiology research. Students will implement an efficient GPU-based volume rendering technique that exploits the unique characteristics of brain microscopy data.

Students interested in these projects should have a strong C++ background and familiarity with at least one shading language (HLSL or GLSL). Knowledge of CUDA is a plus.
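
To illustrate the data-structure milestone, the sketch below shows one common pattern for volumes that are far larger than memory: split the data into fixed-size bricks, load bricks from disk on demand, and evict the least recently used ones. The brick size, cache capacity, and loader function are placeholders for illustration; the real renderer would maintain a similar cache in GPU memory.

    # Illustrative out-of-core brick cache with LRU eviction.
    from collections import OrderedDict

    BRICK = 64                                  # 64^3-voxel bricks (assumption)

    class BrickCache:
        def __init__(self, loader, max_bricks=256):
            self.loader = loader                # loader(bz, by, bx) -> BRICK^3 ndarray
            self.max_bricks = max_bricks
            self.cache = OrderedDict()          # (bz, by, bx) -> ndarray

        def get(self, key):
            if key in self.cache:
                self.cache.move_to_end(key)     # mark as most recently used
                return self.cache[key]
            brick = self.loader(*key)           # cache miss: read from disk
            self.cache[key] = brick
            if len(self.cache) > self.max_bricks:
                self.cache.popitem(last=False)  # evict least recently used brick
            return brick

        def sample(self, z, y, x):
            # Nearest-neighbour lookup of a single voxel through the cache.
            key = (z // BRICK, y // BRICK, x // BRICK)
            return self.get(key)[z % BRICK, y % BRICK, x % BRICK]

A multi-resolution variant (one such cache per downsampled level) additionally lets distant regions be rendered from coarser bricks.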


Diagnosis and Classification of Cancer from Microscopic Biopsy Images

Coordinator: Parmida Ghahremani (pghahremani@cs.stonybrook.edu)

This project seeks to develop a deep learning-based system for the automatic diagnosis and classification of cancer from microscopic biopsy images. Detection and classification of cancer from microscopic biopsy images is a challenging task because an image usually contains many clustered and overlapping objects. Therefore, the algorithm involves several steps, including enhancement of the microscopic images, normalization, nuclei segmentation, feature extraction, cancer detection, and finally classification. (A minimal sketch of the segmentation and feature-extraction steps is given at the end of this project description.)

R/D Impacts: This project requires implementing image processing techniques and developing a new deep learning algorithm for detecting and classifying cancer in pathology slides.

Educational Impacts: This project will provide graduate students opportunities to obtain research experience in 1) developing novel deep learning algorithms, and 2) developing image processing techniques.

Outcomes and Deliverables:

  • Preprocessing data including enhancing and normalizing microscopic biopsy images.
  • Extracting a set of features from the regions of interest to detect and grade potential cancers.
  • Gathering and providing a dataset of microscopic biopsy images for training and testing the network.
  • Development of a convolutional neural network to detect and classify cancer in microscopic biopsy images.
  • Evaluation of the proposed technique.

Milestones or project periods:

  • Feb 2019: Gathering a dataset of microscopic biopsy images.
  • March 2019: A preprocessed dataset of microscopic biopsy images.
  • May 2019: An extracted set of features.
  • July 2019: A convolutional neural network for detection/classification of cancer.
  • Aug 2019: Trained model + final results + evaluation of the proposed method.
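
To make the classical pipeline steps concrete, the sketch below shows a minimal version of the normalization, nuclei segmentation, and feature extraction stages using scikit-image; the stain-separation choice, Otsu threshold, and minimum object size are illustrative assumptions, and the deep learning components of this project would replace or augment these stages.

    # Illustrative nuclei segmentation and simple feature extraction from an
    # H&E-stained RGB image of shape (H, W, 3). Not the project's prescribed method.
    import numpy as np
    from skimage import color, filters, measure, morphology

    def nuclei_features(rgb):
        # 1) Stain separation: keep the hematoxylin channel (stains nuclei).
        h = color.rgb2hed(rgb)[..., 0]
        h = (h - h.min()) / np.ptp(h)                       # normalize to [0, 1]
        # 2) Segmentation: Otsu threshold, then drop tiny spurious objects.
        mask = h > filters.threshold_otsu(h)
        mask = morphology.remove_small_objects(mask, min_size=30)
        # 3) Per-nucleus shape features.
        props = measure.regionprops(measure.label(mask))
        if not props:
            return {"nucleus_count": 0, "mean_area": 0.0, "mean_eccentricity": 0.0}
        return {
            "nucleus_count": len(props),
            "mean_area": float(np.mean([p.area for p in props])),
            "mean_eccentricity": float(np.mean([p.eccentricity for p in props])),
        }

The detection/classification stage would then feed such features, or the raw image patches themselves, into a convolutional neural network.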


Classifying Otoscope Images

Coordinator: Shawn Mathew (shawmathew@cs.stonybrook.edu)

        This project seeks to develop an automated machine learning algorithm to analyze and classify otoscope images of the eardrum. The otoscope images will be extracted from the otoscope video (a minimal illustrative sketch of this extraction and classification pipeline is given at the end of this project description). There are many types of abnormalities of the ear and the eardrum; this project will focus on classifying cases as Otitis Media or non-Otitis Media. The algorithm findings will be provided for insertion into the health records. Otitis Media is an infection of the middle ear, i.e., the air-filled space behind the eardrum. It is characterized by redness (erythema) of the eardrum, and by bulging or a lack of movement of the eardrum in response to a puff of air (which can be assessed with a pneumatic otoscope with a rubber bulb; however, this will be part of a subsequent-year expansion). Symptoms of Otitis Media may include ear pain, fever, and irritability. In subsequent years the algorithm will be expanded to include other types of ear diseases. This project requires annotated training and testing otoscope video data, which will be provided by Medpod.

R/D Impacts: This project requires developing a new machine learning algorithm for classifying otoscope images.

Educational Impacts: This project will provide graduate students opportunities to obtain research experience in 1) developing novel machine learning algorithms, and 2) working with healthcare device developers.

Outcomes and Deliverables:

  • Development of an algorithm to extract an image of the eardrum from the otoscope video.
  • Development of a set of quantitative features to differentiate otoscope images of Otitis Media.
  • Development of an algorithm to classify otoscope images of the patient's eardrum with Otitis Media during ear examination.

Milestones or project periods:

  • Dec 2018: A software algorithm to extract an image of the eardrum from the otoscope video.
  • May 2019: A set of quantitative features to differentiate otoscope images of Otitis Media. A prototype of the algorithm to classify otoscope images.
  • Aug 2019: A software algorithm to classify otoscope images of the patient's eardrum with Otitis Media during ear examination.
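
To make the pipeline above concrete, here is a minimal illustrative sketch of the first two steps: extracting eardrum frames from the otoscope video with OpenCV and scoring each frame with a small binary classifier (Otitis Media vs. non-Otitis Media). The frame sampling rate and the choice of a torchvision ResNet are assumptions; in the actual project the model would be trained on the annotated Medpod videos.

    # Illustrative frame extraction + per-frame classification sketch.
    import cv2
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    def extract_frames(video_path, every_n=30):
        # Return every n-th frame of the video as an RGB numpy array.
        cap, frames, i = cv2.VideoCapture(video_path), [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n == 0:
                frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            i += 1
        cap.release()
        return frames

    preprocess = transforms.Compose([
        transforms.ToPILImage(), transforms.Resize((224, 224)),
        transforms.ToTensor()])
    model = models.resnet18(weights=None)          # weights come from training on labeled eardrum images
    model.fc = nn.Linear(model.fc.in_features, 2)  # Otitis Media vs. non-Otitis Media

    def classify(frames):
        model.eval()
        batch = torch.stack([preprocess(f) for f in frames])
        with torch.no_grad():
            probs = torch.softmax(model(batch), dim=1)[:, 1]
        return probs                               # per-frame probability of Otitis Media

A simple aggregation over the per-frame probabilities (e.g., the median over the sampled frames) could then produce a single per-examination finding for the health record.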


Ground Truth Collection of Data

Coordinator: Yicheng Lin (yiclin@cs.stonybrook.edu)

        This project seeks to develop mechanisms and protocols for collecting ground truth data for training the machine learning networks and computer vision algorithms. The project will also determine the desired number of ground truth datasets and the number of testing datasets for each project.
The projects include:
  • Project 1, patient facial recognition: annotated patients' faces will be provided to train and test the algorithm. A database of patients' faces will be developed by Medpod.
  • Project 2, classifying otoscope images: annotated patient otoscope videos will be provided to train and test the algorithm.
  • Project 4, tracking hand sanitization: annotated Purell hand sanitization videos will be provided to train and test the algorithm.
  • Project 5, tracking the doctor's actions: annotated videos of the doctor's actions and of assets related to the ear examination will be provided to train and test the algorithm.
  • Project 6, pain detection: annotated videos of the patient's facial expressions when the doctor touches the earlobe will be provided to train and test the algorithm.
  • Project 7, NLP for doctor's/patient's statements: annotated audio of patient and doctor statements will be provided to train and test the algorithm.
All the datasets will be provided by Medpod.

R/D Impacts: This project requires developing new mechanisms and protocols for collecting data for machine learning and computer vision algorithms.

Educational Impacts: This project will provide graduate students opportunities to obtain research experience in 1) developing novel ground truth data collection, and 2) working with healthcare device developers.

Outcomes and Deliverables:

  • Development of mechanisms, quantities, and protocols for collecting ground truth data for Projects 1, 2, 4, 5, 6, and 7.
  • Collecting ground truth data for Projects 1, 2, 4, 5, 6, and 7 from Medpod.

Milestones or project periods:

  • Dec 2018: Mechanisms, quantities, and protocols for collecting ground truth data for Projects 1, 2, 4, 5, 6, and 7.
  • May 2019: Collecting ground truth data for Projects 1, 2, 4, 5, 6, and 7 from Medpod.


Integrating Medpod Sub-projects into a Demo

Coordinator: Parmida Ghahremani (pghahremani@cs.stonybrook.edu)

This project seeks to incorporate all the results and capabilities developed in projects 1-7 into the Medpod system and to create a demo of all the year 1 capabilities. Specifically, it will incorporate the face recognition capabilities for confirming the patient's identity for insurance verification and for retrieving the patient's medical records from the EMR. It will further incorporate the classification of the otoscope images along with the otoscope video, the detection and verification of hand sanitization, the tracking of the doctor's actions during an ear examination procedure, the detection of pain during a physical ear examination, and the extraction and parsing of the doctor's statements and verbal diagnosis, which will be incorporated into the final report. The student will work closely with the Medpod staff and with the faculty and student project teams.

R/D Impacts: This project requires developing a new demo of all year 1 capabilities.

Educational Impacts: This project will provide students opportunities to obtain research and development experience in 1) incorporating all results and capabilities developed in projects 1-7 into the Medpod system, and 2) creating a demo of all the year 1 capabilities.

Outcomes and Deliverables:

  • All results and capabilities developed in projects 1-7 incorporated into the Medpod system.
  • Development of a demo of all year 1 capabilities.

Milestones or Project Periods:

  • May 2019: Incorporation of results and capabilities developed in projects 1-7 thus far into the Medpod system.
  • Aug 2019: A demo of all the year 1 capabilities.


 
 
Arie Kaufman, 2019