Starter kit: CMPlaces
Includes README and dataset splits
Data:
- Sketches [278 MB JPEG]: 14,830 training and 2,050 validation sketches collected through AMT. Colors indicate different objects.
- Descriptions [4 MB TXT]: 9,752 training and 2,050 validation descriptions collected through AMT.
- Spatial text [4.4 GB JPEG]: 456,300 training and 2,050 validation synthetic spatial-text images created from SUN database scenes.
- Clip art [547 MB JPEG]: 11,372 training and 1,954 validation clip art images downloaded from search engines.
- Natural images [Places dataset]: ~1.5 million training and 20,500 validation natural images. Please download them from the Places dataset.
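After downloading, it can be useful to check that each modality contains the split sizes listed above. A minimal sketch (the directory layout with per-modality `train`/`val` folders is an assumption, not part of the starter kit):

```python
import os

# Expected (training, validation) counts from the README above.
EXPECTED = {
    "sketches": (14830, 2050),
    "descriptions": (9752, 2050),
    "spatial_text": (456300, 2050),
    "clipart": (11372, 1954),
}

def count_files(root):
    """Recursively count regular files under root."""
    return sum(len(files) for _, _, files in os.walk(root))

def check_splits(base_dir):
    """Compare on-disk file counts against the expected split sizes.

    Assumes a hypothetical layout base_dir/<modality>/<train|val>/...
    """
    for modality, (n_train, n_val) in EXPECTED.items():
        for split, expected in (("train", n_train), ("val", n_val)):
            path = os.path.join(base_dir, modality, split)
            if not os.path.isdir(path):
                print(f"{path}: missing")
                continue
            found = count_files(path)
            status = "OK" if found == expected else f"MISMATCH (expected {expected})"
            print(f"{path}: {found} files, {status}")
```

Adjust the folder names to match however you unpacked the archives.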
If you use the Cross-Modal Places dataset, please cite the following papers:
- Learning Aligned Cross-Modal Representations from Weakly Aligned Data. Ll. Castrejón*, Y. Aytar*, C. Vondrick, H. Pirsiavash, and A. Torralba. To appear at CVPR 2016.
- Learning Deep Features for Scene Recognition using Places Database. B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Advances in Neural Information Processing Systems 27 (NIPS), 2014.
Terms of use: by downloading the image data from the above URLs, you agree to the following terms:
- You will use the data only for non-commercial research and educational purposes.
- You will NOT distribute the above URL(s).
- Massachusetts Institute of Technology makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
- You accept full responsibility for your use of the data and shall defend and indemnify Massachusetts Institute of Technology, including its employees, officers and agents, against any and all claims arising from your use of the data, including but not limited to your use of any copies of copyrighted images that you may create from the data.