All projects will be conducted in the well-equipped Visualization Lab. You will work closely with a Ph.D. student or a postdoctoral associate. Future RA support is available to strong students on all of the projects.
Coordinator: Konstantin Dmitriev (kdmitriev@cs.stonybrook.edu)
Pancreatic cancer is among the most lethal cancer types, with an extremely high mortality rate. Contrast-enhanced computed tomography (CT) is one of the most common imaging procedures used for pancreatic cancer screening. The standard pancreas-specific imaging protocol acquires CT images 20 seconds after contrast injection (arterial phase) and 60 seconds after contrast injection (venous phase); together, these phases help to better differentiate benign from malignant pancreatic lesions and to assess the surrounding organs and vessels. Computer-aided diagnosis (CAD) systems, which analyze the CT images and can potentially improve diagnostic accuracy, often require training datasets of manual annotations, such as outlines of the pancreas and pancreatic lesions. However, creating these annotations is a time-consuming and labor-intensive task, and in our project only one phase is annotated. Given the potential benefits of analyzing both the arterial and venous phases, it is critical to acquire a training dataset covering both. This project focuses on estimating the CT images of one phase from the corresponding CT images of the other phase using generative adversarial networks (GANs). Students are expected to have some background in computer vision and machine learning.
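As a toy illustration of the kind of objective a paired phase-to-phase translation GAN might optimize, the sketch below computes a pix2pix-style generator loss (an adversarial term plus a λ-weighted L1 reconstruction term). The function names, toy inputs, and λ value are illustrative assumptions, not part of the project specification:

```python
import math

def discriminator_loss(d_real, d_fake):
    # Standard GAN loss: push D(real) toward 1 and D(fake) toward 0.
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake, fake_img, real_img, lam=100.0):
    # Adversarial term plus an L1 reconstruction term weighted by lam,
    # as in paired image-to-image translation (pix2pix-style).
    l1 = sum(abs(f - r) for f, r in zip(fake_img, real_img)) / len(fake_img)
    return -math.log(d_fake) + lam * l1

# Toy example: a perfect generator output and a fully fooled discriminator
# yield zero generator loss.
loss = generator_loss(d_fake=1.0, fake_img=[0.5, 0.5], real_img=[0.5, 0.5])
```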
Coordinator: Shawn Mathew (shawmathew@cs.stonybrook.edu)
Virtual colonoscopy is a noninvasive procedure for screening colorectal cancer. Optical colonoscopy is invasive, but allows the doctor to take biopsies and remove polyps. This project focuses on bringing some of the technologies found in virtual colonoscopy into optical colonoscopy. Optical colonoscopy tends to produce many corrupted frames, including ones with instruments, fluid motion, and motion blur. One of the tasks here is to detect frames that contain colonoscopy instruments or are corrupted by fluid motion. In addition, we would like to remove the instrument from these corrupted frames. We plan to use deep learning methods for this task.
Requirements:
Python/PyTorch proficiency and a background in computer vision and deep learning frameworks; Blender experience would be helpful.
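One simple, classical baseline for flagging blurred frames (before bringing in any deep learning) is the variance of the Laplacian response. Below is a minimal pure-Python sketch; the threshold value is an illustrative assumption that would need tuning on annotated frames:

```python
def laplacian_variance(img):
    # img: 2D list of grayscale intensities.
    # The variance of the Laplacian response is a common sharpness score;
    # low values suggest motion blur or a fluid-obscured frame.
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def is_corrupted(img, threshold=10.0):
    # threshold is illustrative; it would be tuned on annotated frames.
    return laplacian_variance(img) < threshold
```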
Coordinator: Shreeraj Jadhav (sdjadhav@cs.stonybrook.edu)
Virtual colonoscopy (VC) is a non-invasive screening technique for colorectal cancer.
A patient undergoes a CT scan, and the colon is digitally cleansed and reconstructed from the CT images.
A radiologist can then virtually navigate through this reconstructed colon looking for polyps
(small bumps on the colon surface), the precursors of cancer. For improved visual appearance,
volume rendering through the CT data is preferred over rendering a triangular mesh,
yielding a smoother and more accurate view of the colon surface.
Currently, VC systems are displayed on a conventional desktop screen.
Our current work is to advance VC into immersive environments, developing an immersive VC (iVC),
allowing the user to experience a greater field of view and field of regard,
which should lead to increased accuracy and decreased interrogation time.
To accomplish this, high resolution imagery must be generated at high frame rates,
in stereo vision, to provide smooth motion when flying through the colon.
To this end, we seek to enhance the speed of the underlying volume rendering implementation.
For higher efficiency, techniques such as empty space skipping and multi-GPU rendering can
be utilized along with advanced techniques for improving image quality such as global
illumination using ray tracing.
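As a simplified illustration of the empty space skipping idea mentioned above, the sketch below marches a ray through a 1D "volume" and skips whole bricks whose precomputed maximum density falls below the opacity threshold. A real VC renderer operates on 3D bricks on the GPU; this is only a toy model of the bookkeeping:

```python
def brick_max(volume, brick_size):
    # Precompute the maximum density per brick (1D volume for brevity).
    return [max(volume[i:i + brick_size])
            for i in range(0, len(volume), brick_size)]

def ray_march(volume, brick_size, threshold):
    # March a ray through the volume, skipping bricks whose precomputed
    # maximum is below the opacity threshold (empty space skipping).
    maxima = brick_max(volume, brick_size)
    samples_taken = 0
    hits = []
    for b, m in enumerate(maxima):
        if m < threshold:
            continue  # skip the whole brick without sampling it
        for i in range(b * brick_size, min((b + 1) * brick_size, len(volume))):
            samples_taken += 1
            if volume[i] >= threshold:
                hits.append(i)
    return hits, samples_taken
```

With two empty bricks out of three, the ray takes only a third of the samples a naive march would.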
The major points of this project are:
Coordinator: Shreeraj Jadhav (sdjadhav@cs.stonybrook.edu)
Visual diagnosis of pancreatic cancer involves 3D visualization of the pancreas extracted from CT scans. Important features to visualize include cysts, the primary duct, and the surface of the pancreas.
The geometry of the pancreas is not fixed, owing to the flexible nature of the organ as well as deformations caused by cystic growth.
The project seeks to investigate state-of-the-art volume deformation techniques to simplify
the process of visualization of the pancreas and generate high quality illustrative viewpoints
that show all important features.
For example, the centerline of the pancreas can be straightened to simplify its geometry, while an automatically calculated viewpoint shows all the important features, such as the cyst and the primary duct that passes through the interior of the pancreas. Volume deformation has been used for illustrative visualization before.
However, deforming a volume entails trade-offs between preserving geometric properties such as angles, distances, and volume. The main goal of the project is to find an existing volume deformation technique, or develop a new one, that produces desirable results.
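The centerline-straightening idea can be illustrated with a minimal arc-length parameterization: each centerline point is mapped onto a straight axis at its cumulative distance along the curve, so distances along the centerline are preserved. This is a toy sketch of the parameterization step, not a full volume deformation:

```python
import math

def straighten(centerline):
    # Map a curved centerline (list of (x, y) points) onto a straight axis:
    # point i goes to (s_i, 0), where s_i is the cumulative arc length.
    s = [0.0]
    for (x0, y0), (x1, y1) in zip(centerline, centerline[1:]):
        s.append(s[-1] + math.hypot(x1 - x0, y1 - y0))
    return [(si, 0.0) for si in s]

# An L-shaped centerline of total length 7 maps to a straight segment
# of the same length.
straightened = straighten([(0, 0), (3, 0), (3, 4)])
```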
Coordinator: Saeed Boorboor (sboorboor@cs.stonybrook.edu)
The Reality Deck is the world's first immersive gigapixel display.
Composed of 416 LCD panels, it offers a combined resolution of more than 1.5 billion pixels.
These displays are driven by a cluster of 18 render nodes with dual hex-core CPUs and 4 GPUs.
Developing applications for distributed immersive systems is a complex and involved process.
At the very least, any application input needs to be captured centrally so that the application
state can be consistent between nodes.
However, Unity3D and other VR libraries simplify this process by providing an abstract
interface in C# for implementing applications.
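A minimal sketch of the centralized-input idea: the head node serializes each input event once and sends identical bytes to every render node, and each node applies them deterministically, so all copies of the application state stay consistent. The event schema and class names are illustrative assumptions; a real system would use Unity3D networking rather than a loop standing in for the network:

```python
import json

class RenderNode:
    # Each node holds a camera state and applies the same event stream
    # deterministically, so all nodes stay consistent.
    def __init__(self):
        self.state = {"yaw": 0.0, "pos": 0.0}

    def apply(self, event):
        if event["type"] == "turn":
            self.state["yaw"] += event["amount"]
        elif event["type"] == "move":
            self.state["pos"] += event["amount"]

def broadcast(event, nodes):
    # The head node serializes each input event once and sends the same
    # bytes to every render node (here, a loop stands in for the network).
    payload = json.dumps(event)
    for node in nodes:
        node.apply(json.loads(payload))
```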
Students who pursue this project will develop distributed visualization applications that will
run on the Reality Deck. The application domain is wide open.
Some suggestions include:
Coordinators: Saeed Boorboor (sboorboor@cs.stonybrook.edu)
Ping Hu (pihu@cs.stonybrook.edu)
The use of consumer-grade virtual reality technology, such as head-mounted displays (HMDs), to interact with scientific data directly in 3D can help domain experts study complex data faster and with less physical and mental strain. Advances in imaging technology have been instrumental in furthering scientific research. However, rendering scientific data in an HMD faces the challenge that imaging technology is rapidly outpacing the supporting tools in terms of raw data size. State-of-the-art imaging technologies regularly produce terabytes worth of images, and few existing tools are capable of handling data at this scale. These challenges seriously impede interactive rendering performance in virtual reality (VR). Students interested in this project can focus on one or both milestones:
Students interested in these projects should have a strong C++ background and be familiar with at least one rendering language (HLSL or GLSL). Being knowledgeable in CUDA is a plus.
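One common strategy for data that exceeds memory is to keep only a working set of volume bricks resident, loading bricks on demand and evicting the least recently used. The sketch below shows the caching logic only; the brick IDs and loader callback are illustrative assumptions:

```python
from collections import OrderedDict

class BrickCache:
    # Keeps only the most recently used bricks resident, so a volume much
    # larger than GPU/host memory can be rendered brick by brick.
    def __init__(self, capacity, load_brick):
        self.capacity = capacity
        self.load_brick = load_brick  # callback that reads a brick from disk
        self.resident = OrderedDict()
        self.loads = 0

    def get(self, brick_id):
        if brick_id in self.resident:
            self.resident.move_to_end(brick_id)  # mark as recently used
            return self.resident[brick_id]
        if len(self.resident) >= self.capacity:
            self.resident.popitem(last=False)  # evict least recently used
        self.loads += 1
        self.resident[brick_id] = self.load_brick(brick_id)
        return self.resident[brick_id]
```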
Coordinator: Parmida Ghahremani (pghahremani@cs.stonybrook.edu)
This project seeks to develop a deep learning-based system for automatic diagnosis and classification of cancer using microscopic biopsy images. The detection and classification of cancer from microscopic biopsy images is a challenging task because an image usually contains many clustered and overlapping objects. Therefore, the algorithm should involve several steps, including enhancement of the microscopic images, normalization, nuclei segmentation, feature extraction, cancer detection, and finally classification.
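As a small example of one step in such a pipeline, the sketch below implements Otsu's method for picking a global intensity threshold, a standard precursor to nuclei segmentation. This is a toy, pure-Python version; production code would use an image processing library:

```python
def otsu_threshold(pixels, levels=256):
    # Pick the intensity threshold that maximizes between-class variance.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = sum(hist[:t + 1])          # weight of the background class
        w1 = total - w0                  # weight of the foreground class
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
        mu1 = sum(i * hist[i] for i in range(t + 1, levels)) / w1
        var = (w0 / total) * (w1 / total) * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t  # pixels > best_t are treated as foreground (nuclei)
```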
R/D Impacts: This project requires implementing image processing techniques and
developing a new deep learning algorithm for detecting and classifying cancer in
pathology slides.
Educational Impacts: This project will provide graduate students opportunities to
obtain research experience in 1) developing novel deep learning algorithms, and 2)
developing image processing techniques.
Outcomes and Deliverables:
Milestones or project periods:
Coordinator: Shawn Mathew (shawmathew@cs.stonybrook.edu)
This project seeks to develop an automated machine learning classification algorithm to analyze and classify otoscope images of the eardrum. The otoscope images will be extracted from otoscope video. There are many types of abnormalities of the ear and the eardrum; this project will focus on distinguishing cases of Otitis Media from cases of non-Otitis Media. The algorithm's findings will be provided for insertion into the health records. Otitis Media is an infection of the middle ear, i.e., the air-filled space behind the eardrum. It is characterized by redness (erythema) of the eardrum, and by bulging or a lack of movement of the eardrum in response to a puff of air (which can be assessed with a pneumatic otoscope with a rubber bulb; however, this will be part of a subsequent year's expansion). Symptoms of Otitis Media may include ear pain, fever, and irritability. In subsequent years the algorithm will be expanded to include other types of ear diseases. This project requires annotated training and testing otoscope video data, which will be provided by Medpod.
R/D Impacts: This project requires developing a new machine learning algorithm for classifying otoscope images.
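As a toy illustration of one visual cue named above, the sketch below scores eardrum redness (erythema) as the mean excess of the red channel over green and blue. The score and threshold are illustrative stand-ins for features a trained classifier would learn:

```python
def erythema_score(rgb_pixels):
    # Mean excess of the red channel over green/blue: a crude proxy for
    # eardrum redness. Real features would come from a trained classifier.
    score = sum(r - (g + b) / 2.0 for r, g, b in rgb_pixels)
    return score / len(rgb_pixels)

def flag_otitis_media(rgb_pixels, threshold=30.0):
    # threshold is illustrative; it would be learned from annotated frames.
    return erythema_score(rgb_pixels) > threshold
```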
Educational Impacts: This project will provide graduate students opportunities to obtain research experience in 1) developing novel machine learning algorithms, and 2) working with healthcare device developers.
Outcomes and Deliverables:
Milestones or project periods:
Coordinator: Yicheng Lin (yiclin@cs.stonybrook.edu)
This project seeks to develop mechanisms and protocols for collecting
ground truth data for training the machine learning networks and computer vision
algorithms. The project will also determine the desired number of ground truth datasets
and the number of testing datasets for each project.
The projects include:
Project 1 Patient's facial recognition in which annotated patients' faces will be provided to
train and test the algorithm. A database of patients' faces will be developed by Medpod.
Project 2 Classifying otoscope images in which annotated patients' otoscope videos will be
provided to train and test the algorithm.
Project 4 Tracking hand sanitization in which annotated Purell hand sanitization videos will
be provided to train and test the algorithm.
Project 5 Tracking doctor's actions in which annotated videos of doctor's actions and
assets related to ear examination will be provided to train and test the algorithm.
Project 6 Pain detection in which annotated videos of the patient's facial expressions while
the doctor touches the earlobe will be provided to train and test the algorithm.
Project 7 NLP for doctor's/patient's statements in which annotated audio of patient's
statements and doctor's statements will be provided to train and test the algorithm.
All the datasets will be provided by Medpod.
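Determining dataset sizes could start from a simple deterministic split. The sketch below computes train/validation/test counts that always sum to the total; the 70/10/20 ratios are illustrative defaults, not a Medpod requirement:

```python
def split_counts(total, train=0.7, val=0.1):
    # Deterministic train/validation/test counts that always sum to total;
    # the test set absorbs any rounding remainder.
    n_train = int(total * train)
    n_val = int(total * val)
    n_test = total - n_train - n_val
    return n_train, n_val, n_test
```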
R/D Impacts: This project requires developing new mechanisms and protocols for
collecting data for machine learning and computer vision algorithms.
Educational Impacts: This project will provide graduate students opportunities to obtain
research experience in 1) developing novel ground truth data collection, and 2) working
with healthcare device developers.
Outcomes and Deliverables:
Milestones or project periods:
Coordinator: Parmida Ghahremani (pghahremani@cs.stonybrook.edu)
This project seeks to incorporate all the results and capabilities developed in projects 1-7 into the Medpod system and create a demo of all the year 1 capabilities. Specifically, it will incorporate the face recognition capabilities, confirming the patient's identity for insurance verification and retrieving the patient's medical records from the EMR. It will further incorporate the classification of the otoscope images along with the otoscope video, the detection and verification of hand sanitization, the tracking of the doctor's actions during an ear examination procedure, the detection of pain in a physical ear examination, and the extraction and parsing of the doctor's statements and verbal diagnosis, which will be incorporated into the final report. The student will work closely with the Medpod staff and with the faculty and student project teams.
R/D Impacts: This project requires developing a new demo of all year 1 capabilities.
Educational Impacts: This project will provide students opportunities to obtain research and development experience in 1) incorporating all results and capabilities developed in projects 1-7 into the Medpod system, and 2) creating a demo of all the year 1 capabilities.
Outcomes and Deliverables:
Milestones or Project Periods: