BrainSegmentation

by Rishub Kumar and Patrick Deshpande

To read our introduction to brain lesion segmentation, please read this.

To read our report summarizing our results, click here.

In short, brain lesions are regions of abnormal tissue in or on the brain. Using the pix2pix TensorFlow library, we have built a program that highlights brain lesions in a PNG image of a brain scan. The program automates the tedious task of manually highlighting brain lesions, and aims to perform as well as a human operator.

The left side of this picture is an example of the output we want, given the input image on the right side.
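pix2pix-style training data is typically stored as a single paired image, with the target and the input placed side by side, matching the left/right layout described above. Below is a minimal NumPy sketch of building such a pair; `make_pair` is an illustrative name, not a function from this project.

```python
import numpy as np

def make_pair(label_img: np.ndarray, scan_img: np.ndarray) -> np.ndarray:
    """Place the highlighted (label) image beside the raw scan,
    producing one combined training image. Both inputs must share
    the same height and channel layout."""
    if label_img.shape[0] != scan_img.shape[0]:
        raise ValueError("images must have the same height")
    return np.concatenate([label_img, scan_img], axis=1)

# Tiny demo with dummy 4x4 grayscale "images"
label = np.zeros((4, 4), dtype=np.uint8)
scan = np.full((4, 4), 255, dtype=np.uint8)
pair = make_pair(label, scan)
print(pair.shape)  # (4, 8)
```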

Instructions for running our project

1) Download and unzip project folder from this link.

2) ONLY NECESSARY IF YOUR TEST FOLDER HAS DICOM IMAGES: If your test folder contains DICOM images, run the command below to create a new folder with PNG versions of those images (copy the test folder into this directory first):

python mritopng.py -f <folder_with_dicom> <output_png_dir>
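DICOM pixel data is usually stored at 12 to 16 bits per pixel, while a standard grayscale PNG holds 8. The converter above handles the details; the sketch below only illustrates the core rescaling step, using NumPy (reading an actual DICOM file, e.g. with pydicom, is not shown).

```python
import numpy as np

def to_uint8(pixels: np.ndarray) -> np.ndarray:
    """Linearly rescale a 12/16-bit MRI pixel array into the 0-255
    range that an 8-bit PNG can store."""
    pixels = pixels.astype(np.float64)
    lo, hi = pixels.min(), pixels.max()
    if hi == lo:  # flat image: avoid division by zero
        return np.zeros(pixels.shape, dtype=np.uint8)
    return np.round((pixels - lo) / (hi - lo) * 255).astype(np.uint8)

raw = np.array([[0, 2048], [1024, 4095]])  # fake 12-bit intensities
print(to_uint8(raw))
```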

3) We now have a directory of PNG images in which we want to highlight the lesions. To do so, run the command:

./run.sh <input_png_dir>

If you performed step 2, input_png_dir is the output_png_dir you created there. Otherwise, input_png_dir is the folder of PNG images you want to highlight.

Note that you may need to make run.sh executable first (chmod +x run.sh).

This command will open a web browser that shows the original image and our output image side by side.

If no browser is available, navigate to the c_test/images folder, and open the images in whatever image viewer you desire.
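A side-by-side page like the one run.sh opens can be generated with a short script. The sketch below is illustrative only: the actual page our script produces may differ, and the `-inputs`/`-outputs` filename suffixes are an assumption based on pix2pix-tensorflow's output naming, so adjust them to match the files in c_test/images.

```python
import html
from pathlib import Path

def build_gallery(image_dir: str, out_file: str = "gallery.html") -> str:
    """Write a simple HTML page showing each *-inputs.png next to its
    matching *-outputs.png, and return the page's HTML."""
    rows = []
    for inp in sorted(Path(image_dir).glob("*-inputs.png")):
        outp = inp.with_name(inp.name.replace("-inputs", "-outputs"))
        rows.append(
            '<div><img src="{0}" alt="input"> <img src="{1}" alt="output"></div>'
            .format(html.escape(inp.name), html.escape(outp.name))
        )
    page = "<html><body>{}</body></html>".format("\n".join(rows))
    Path(out_file).write_text(page)
    return page

# webbrowser.open(out_file) would then display the page in a browser.
```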

Using Example Test Folder

./run.sh test_pictures

This will run the test using some pictures from our training data that we have provided.

Click here to see what the output of that test looks like

Sources

We used the pix2pix TensorFlow library to implement this functionality. You can find out more about this library here, and view the source here.

The library uses conditional adversarial networks as a general-purpose solution to image-to-image translation problems. You can find out more about it here.
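In the pix2pix paper (Isola et al.), the generator is trained with an adversarial term (fool the discriminator) plus an L1 term (stay close to the ground-truth image), weighted by λ = 100. The NumPy sketch below illustrates that combined objective for one batch; it is a conceptual restatement, not the library's actual implementation.

```python
import numpy as np

def generator_loss(d_fake: np.ndarray, fake: np.ndarray,
                   target: np.ndarray, lam: float = 100.0) -> float:
    """pix2pix generator objective: adversarial loss plus lam * L1.
    d_fake holds the discriminator's scores (in (0, 1)) for the
    generated images; lam=100 follows the paper."""
    eps = 1e-12                                   # numerical safety
    adv = -np.mean(np.log(d_fake + eps))          # BCE vs. "real" label 1
    l1 = np.mean(np.abs(target - fake))           # pixel-wise L1 distance
    return float(adv + lam * l1)

# If the discriminator is fully fooled and the output matches the
# target exactly, both terms vanish:
d_fake = np.ones((2, 1))
img = np.zeros((2, 8, 8))
print(generator_loss(d_fake, img, img))
```

The large L1 weight is what keeps the outputs aligned with the input scan instead of merely looking plausible, which matters for segmentation-style tasks like ours.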

We also used a DICOM-to-PNG converter to give our model images in a format it can handle. This library can be found here.