rhinojosa/visimportance

Code to supplement the paper "Learning Visual Importance for Graphic Designs and Data Visualizations" [UIST'17]

Code to train and test models to predict importance (saliency) on graphic designs and data visualizations. We also provide links to our models and our train/test data.

For our paper, supplemental material, video, and interactive demo, please visit: http://visimportance.csail.mit.edu/

If you use this code, please consider citing: Zoya Bylinskii, Nam Wook Kim, Peter O'Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, and Aaron Hertzmann. "Learning Visual Importance for Graphic Designs and Data Visualizations" (UIST'17)

@inproceedings{predimportance,
    author    = {Zoya Bylinskii and Nam Wook Kim and Peter O'Donovan and Sami Alsheikh and Spandan Madan
                 and Hanspeter Pfister and Fredo Durand and Bryan Russell and Aaron Hertzmann},
    title     = {Learning Visual Importance for Graphic Designs and Data Visualizations},
    booktitle = {Proceedings of the 30th Annual ACM Symposium on User Interface Software \& Technology},
    year      = {2017}
}

This code is written in Python 2.7 using the Caffe library and is based on the fully convolutional network (FCN) code for semantic segmentation (https://github.com/shelhamer/fcn.berkeleyvision.org).

Using the models for prediction:

About our models:

  • We initialized our models using the pre-trained VOC-FCN32s and fine-tuned the final importance prediction layers and additional skip connections (if applicable).
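
To run one of the downloaded models on a new image, a standard Caffe forward pass like the sketch below is the usual pattern. This is only a minimal sketch: the prototxt/caffemodel file names, the mean values, and the assumption that the input blob is named 'data' come from the common FCN setup rather than from this repo, so check the provided prototxt files for the actual names.

import numpy as np
from PIL import Image
import caffe

caffe.set_device(0)   # GPU id
caffe.set_mode_gpu()

# Placeholder paths: point these at the downloaded test/deploy prototxt and caffemodel.
net = caffe.Net('gdi_fcn32_deploy.prototxt', 'gdi_fcn32.caffemodel', caffe.TEST)

# FCN-style preprocessing: RGB -> BGR, subtract the dataset mean, channels first.
im = np.array(Image.open('design.png').convert('RGB'), dtype=np.float32)
im = im[:, :, ::-1]
im -= np.array((104.00699, 116.66877, 122.67892))
im = im.transpose((2, 0, 1))

# Reshape the input blob to this image's size and run a forward pass.
net.blobs['data'].reshape(1, *im.shape)
net.blobs['data'].data[...] = im
net.forward()

# The name of the prediction blob depends on the prototxt; inspect net.blobs.keys() to find it.
importance = net.blobs[net.blobs.keys()[-1]].data[0, 0]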

Setting up training:

  1. Choose whether to train an importance model for graphic designs or data visualizations. The models have slightly different architectures, and the training data is different.

  2. Download the corresponding data. We provide links to all the image files and ground-truth importance maps. If you download the data directly into this repo's data directory after cloning, the file paths in the prototxt files will already point to the right places.

  3. Download the pre-trained VOC-FCN32s model, and download surgery.py from https://github.com/shelhamer/fcn.berkeleyvision.org.

  4. Check for correct paths to model and data files. Look for the #CHANGETHIS comment throughout the files.

  5. Start training: python solve.py N, where N is the desired GPU ID. A sketch of what such a training script typically does follows this list.

  6. We provide some starter code for plotting the training curves (loss over iterations); a minimal log-parsing alternative is also sketched after this list.
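
For reference, a solve.py-style training script in this setup typically looks like the sketch below. The file names (solver.prototxt, the VOC-FCN32s weights) and the heuristic for picking the upsampling layers are assumptions based on the FCN codebase, not a copy of the script shipped in this repo.

import sys
import caffe
import surgery  # from https://github.com/shelhamer/fcn.berkeleyvision.org

# GPU id comes from the command line, e.g. `python solve.py 0`.
caffe.set_device(int(sys.argv[1]))
caffe.set_mode_gpu()

solver = caffe.SGDSolver('solver.prototxt')              # placeholder path
solver.net.copy_from('fcn32s-heavy-pascal.caffemodel')   # pre-trained VOC-FCN32s weights

# Initialize the deconvolution (upsampling) layers with bilinear kernels, FCN-style.
interp_layers = [k for k in solver.net.params.keys() if 'up' in k]
surgery.interp(solver.net, interp_layers)

# The iteration count is illustrative; the provided solver.prototxt controls the real schedule.
solver.step(80000)
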
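If you prefer to roll your own plot instead of using the provided starter code, the loss values can be scraped straight out of the Caffe training log; the log path and regular expression below are illustrative.

import re
import matplotlib.pyplot as plt

iters, losses = [], []
with open('train.log') as f:   # placeholder: wherever you redirected Caffe's log output
    for line in f:
        m = re.search(r'Iteration (\d+).*loss = ([\d.eE+-]+)', line)
        if m:
            iters.append(int(m.group(1)))
            losses.append(float(m.group(2)))

plt.plot(iters, losses)
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.savefig('loss_curve.png')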

About our data loaders:

  • We wrote custom data loaders for both models in imp_layers.py and imp_layers_massvis.py, which are invoked by the Python data layers (see the top of the train.prototxt and val.prototxt files). A skeleton of such a layer is sketched after this list.
  • We also provide an example of how to load data from a pre-constructed LMDB database, without relying on these custom data loaders (see gdi/fcn16_lmdb). In this case, all the data processing happens during database construction (see create_lmdb_data.py); a minimal sketch of this pattern also follows below.
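
For orientation, a Caffe Python data layer of this kind generally follows the skeleton below. The class name, param_str keys, and directory layout are hypothetical; the real preprocessing lives in imp_layers.py and imp_layers_massvis.py.

import random
import numpy as np
from PIL import Image
import caffe

class ImportanceDataLayer(caffe.Layer):
    # Hypothetical skeleton of a data layer that serves (image, importance map) pairs.

    def setup(self, bottom, top):
        # param_str is set in the prototxt, e.g. param_str: "{'data_dir': 'data/gdi', 'split': 'train'}"
        params = eval(self.param_str)
        self.data_dir = params['data_dir']
        self.split = params['split']
        self.indices = open('{}/{}.txt'.format(self.data_dir, self.split)).read().splitlines()
        self.idx = 0

    def reshape(self, bottom, top):
        # Load one (image, importance map) pair per iteration and size the tops to match.
        self.data = self.load_image(self.indices[self.idx])
        self.label = self.load_map(self.indices[self.idx])
        top[0].reshape(1, *self.data.shape)
        top[1].reshape(1, *self.label.shape)

    def forward(self, bottom, top):
        top[0].data[...] = self.data
        top[1].data[...] = self.label
        # Random order for training, sequential for validation.
        if self.split == 'train':
            self.idx = random.randint(0, len(self.indices) - 1)
        else:
            self.idx = (self.idx + 1) % len(self.indices)

    def backward(self, top, propagate_down, bottom):
        pass  # a data layer has nothing to backpropagate

    def load_image(self, name):
        im = np.array(Image.open('{}/imgs/{}.png'.format(self.data_dir, name)), dtype=np.float32)
        im = im[:, :, ::-1]                                   # RGB -> BGR
        im -= np.array((104.00699, 116.66877, 122.67892))     # mean subtraction, FCN-style
        return im.transpose((2, 0, 1))                        # HWC -> CHW

    def load_map(self, name):
        imp = np.array(Image.open('{}/maps/{}.png'.format(self.data_dir, name)), dtype=np.float32) / 255.0
        return imp[np.newaxis, ...]
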
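And a minimal sketch of the LMDB pattern: all preprocessing happens once while the database is written, so training can read from the database directly. The paths, key format, and map_size are placeholders, not the exact contents of create_lmdb_data.py.

import lmdb
import numpy as np
from PIL import Image
import caffe

image_names = ['design_0001', 'design_0002']   # placeholder list of training images
env = lmdb.open('gdi_train_images_lmdb', map_size=int(1e12))
with env.begin(write=True) as txn:
    for i, name in enumerate(image_names):
        im = np.array(Image.open('data/gdi/imgs/%s.png' % name).convert('RGB'), dtype=np.uint8)
        im = im[:, :, ::-1].transpose((2, 0, 1))   # RGB -> BGR, HWC -> CHW
        datum = caffe.io.array_to_datum(im)        # wrap the array in a Caffe Datum
        txn.put('{:08d}_{}'.format(i, name), datum.SerializeToString())
env.close()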

Download problems? If, for some reason, any of the data/model download links are not working, please check for them here.
