Static X-ray projections can damage healthy tissue at the edge of a tumor along with the cancerous cells, because the lungs contract and expand during breathing. This can be addressed with a dynamically adjustable X-ray projection that uses depth information captured during respiration. The aim of this project was to build a respiratory monitoring system that detects the depth of the chest during breathing. An RGB-D camera (Microsoft Kinect for Windows v2) was used to record the respiratory motion of 18 volunteers. The problem is formulated as a three-class classification problem with the labels below-average breathing (0), average breathing (1), and above-average breathing (2). A VGGNet initialized with ImageNet weights is used as the classifier.
- The recorded data was in `.xef` format, which was converted to `.mat` and `.jpg` files using the `Xef2Mat-Jpg` converter.
- The converted `.mat` and `.jpg` files were saved in folders named in this format: `<sub> + <serial number>`.
- The `.mat` files contain matrices with depth values in millimeters, which are converted to depth frames using `value % 256`.
- These depth frames are converted into 224x224 images covering the chest area.
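The depth-frame conversion described above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the patch coordinates (`top`, `left`, `size`) are hypothetical placeholders for the chest region, and a simple nearest-neighbour resize stands in for whatever interpolation `mapping.py` uses.

```python
import numpy as np

def depth_to_frame(depth_mm: np.ndarray) -> np.ndarray:
    """Map millimetre depth values to an 8-bit frame via value % 256."""
    return (depth_mm % 256).astype(np.uint8)

def crop_resize(frame: np.ndarray, top: int, left: int, size: int, out: int = 224) -> np.ndarray:
    """Crop a square chest region and nearest-neighbour resize it to out x out."""
    patch = frame[top:top + size, left:left + size]
    idx = np.arange(out) * size // out  # nearest-neighbour source indices
    return patch[np.ix_(idx, idx)]

# Dummy 512x424 Kinect v2 depth matrix (values in mm); real data comes from the .mat files
depth = np.random.randint(500, 4000, size=(424, 512))
frame = depth_to_frame(depth)
chest = crop_resize(frame, top=100, left=150, size=200)
print(chest.shape)  # (224, 224)
```

The resulting 224x224 `uint8` images match the input size expected by VGGNet.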
- `show_colour_frames.py`: Displays all the colour frames for each of the 18 patients/volunteers.
- `show_depth_frames.py`: Displays all the depth frames for each of the 18 patients/volunteers.
- `mapping.py`: Maps the 512x424 depth frames into 224x224 inputs for VGGNet; used by `data.py`.
- `data.py`: Used for training and testing. The model used is VGGNet.
- `labelling.py`: Creates the frame labels. It uses a 7x7 matrix cropped from the chest area and checks the changes in depth to assign the labels. Alternatively, you can assign labels on your own. The labels are stored in `labels.csv`.
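The labelling step above could look roughly like the sketch below. The exact rule in `labelling.py` is not shown in this README, so this is an assumed scheme: the mean depth of the 7x7 chest patch is tracked per frame, and each frame-to-frame change is compared against the average change with a tolerance `tol` (an invented threshold) to assign the three labels.

```python
import numpy as np

def label_frames(depth_frames: np.ndarray, row: int, col: int, tol: float = 0.5) -> np.ndarray:
    """Assign 0/1/2 breathing labels from a 7x7 chest patch per frame.

    depth_frames: (N, H, W) depth matrices in millimetres.
    row, col: top-left corner of the 7x7 chest patch (hypothetical position).
    tol: tolerance (mm) around the average change -- an assumed threshold.
    """
    # Mean depth of the 7x7 chest patch in each frame
    means = depth_frames[:, row:row + 7, col:col + 7].mean(axis=(1, 2))
    # Absolute frame-to-frame change in chest depth
    deltas = np.abs(np.diff(means))
    avg = deltas.mean()
    labels = np.where(deltas < avg - tol, 0,           # below-average breathing
                      np.where(deltas > avg + tol, 2,  # above-average breathing
                               1))                     # average breathing
    return labels

# Dummy clip of 30 depth frames in place of the real recordings
frames = np.random.randint(500, 4000, size=(30, 424, 512)).astype(float)
print(label_frames(frames, row=200, col=250))
```

One label is produced per frame transition (N - 1 labels for N frames); these would then be written to `labels.csv`.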
NOTE: This repository is meant to demonstrate the workflow; adjust the paths accordingly when using it.
PS: The data has not been shared due to privacy constraints.