
Top 10 Datasets for Computer Vision
SyntricAI


In the realm of computer vision, where machines are becoming increasingly adept at understanding and interpreting visual information, the importance of high-quality datasets cannot be overstated. These collections of labeled images have become the cornerstone of training computer vision algorithms, enabling them to perceive the world with high accuracy. Whether it's object recognition, image classification, segmentation, or scene understanding, the availability of diverse datasets is crucial for driving advancements.

Recognising the significance of these datasets, we have curated a list of the 10 best datasets for computer vision.

Microsoft COCO:

 

COCO (Common Objects in Context) is a large-scale object detection, segmentation, and captioning dataset developed by Microsoft. Widely regarded as a benchmark in the community, it encompasses a massive collection of labeled images designed to facilitate research in object recognition, segmentation, and captioning, pushing the boundaries of visual understanding and fostering innovation in computer vision algorithms. Its main features are the following:

  • Use cases: object segmentation, recognition in context
  • Dataset: 330K images (>200K labeled) with 5 captions per image. The dataset contains 1.5 million object instances, and 250K people with key points
  • Classes: 80 object categories, 91 “stuff” categories
  • License: Commercial as well as research purposes
  • Learn more and download the dataset:  https://cocodataset.org/#home

Example images from the COCO dataset.
Source: https://cocodataset.org
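
As a quick illustration of how the annotations are typically consumed, the sketch below uses the pycocotools API to pull an image containing people from the 2017 validation annotations; the annotation file path is a placeholder for wherever you unpack the download.

```python
# Minimal sketch, assuming pycocotools is installed and the 2017 instance
# annotations have been downloaded from cocodataset.org (path is a placeholder).
from pycocotools.coco import COCO

coco = COCO("annotations/instances_val2017.json")

person_cat = coco.getCatIds(catNms=["person"])   # id of the 'person' category
img_ids = coco.getImgIds(catIds=person_cat)      # all images containing people
img_info = coco.loadImgs(img_ids[0])[0]

ann_ids = coco.getAnnIds(imgIds=img_info["id"], catIds=person_cat)
anns = coco.loadAnns(ann_ids)                    # bboxes, segmentation polygons, keypoints
print(img_info["file_name"], len(anns), "person instances")
```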

CIFAR-10 & CIFAR-100:

The CIFAR-10 dataset, named after the Canadian Institute For Advanced Research, comprises a diverse collection of images specifically curated for training machine learning and computer vision algorithms. Renowned as one of the most extensively used datasets in machine learning research, it serves as a benchmark for numerous experiments and evaluations. The dataset is organised into five training batches and one test batch, with each batch containing 10,000 images. The test batch includes 1,000 randomly selected images from each class, while the training batches contain the remaining images, giving a balanced distribution of 5,000 images per class.

  • Use case: image classification
  • Dataset: 60k color images (32x32 pixels) split into 50k training and 10k testing subsets. The dataset contains 6000 images per class
  • Classes: 10
  • License: Commercial purposes must be discussed with the provider of this dataset.
  • Learn more and download the dataset: https://www.cs.toronto.edu/~kriz/cifar.html

 

CIFAR-100, similar to CIFAR-10, is a dataset that consists of 100 classes, with each class comprising 600 images. The dataset is split into 500 training images and 100 testing images per class. What sets CIFAR-100 apart is that its 100 classes are organised into 20 broader superclasses. 

Each image within the dataset is assigned both a "fine" label, representing its specific class, and a "coarse" label, representing the superclass to which it belongs. This hierarchical labeling system allows for more detailed and higher-level categorisation within the CIFAR-100 dataset.

  • Use case: image classification
  • Dataset: 60k color images (32x32 pixels) split into 50k training and 10k testing subsets. The dataset contains 600 images per class
  • Classes: 100 classes, organised into 20 broader superclasses
  • License: Commercial purposes must be discussed with the provider of this dataset.
  • Learn more and download the dataset: https://www.cs.toronto.edu/~kriz/cifar.html

Example images from the CIFAR-100 dataset with classes.
Source: https://www.cs.toronto.edu/~kriz/cifar.html
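
Both datasets ship in a pickled batch format, but most frameworks wrap them directly; as a rough sketch, torchvision (assumed installed) downloads and exposes them in a couple of lines:

```python
# Minimal sketch of loading CIFAR-10 / CIFAR-100 via torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()  # 32x32 RGB images -> float tensors in [0, 1]

cifar10_train = datasets.CIFAR10(root="./data", train=True, download=True, transform=to_tensor)
cifar100_train = datasets.CIFAR100(root="./data", train=True, download=True, transform=to_tensor)

image, fine_label = cifar100_train[0]
print(image.shape)                          # torch.Size([3, 32, 32])
print(cifar100_train.classes[fine_label])   # the "fine" class name for this image
```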

ImageNet

ImageNet is a monumental dataset that has transformed the field of computer vision. With over 14 million labeled images spanning thousands of object categories, ImageNet has significantly contributed to the development and evaluation of deep learning models. Its vast scale and diverse range of objects make it a crucial resource for advancing image classification, object detection, and image understanding tasks. The ImageNet Large Scale Visual Recognition Challenge (ILSVRC) has become the most popular subset, containing 1.3M training, 50,000 validation and 100,000 test images.

  • Use cases: Image classification, object detection
  • Dataset: more than 14 million images
  • Classes: thousands of classes, following WordNet hierarchy, with each synset representing a category
  • License: For non-commercial uses only
  • Learn more and download the dataset: http://image-net.org/index

The ImageNet dataset follows the WordNet hierarchy.
Source: https://devopedia.org/imagenet
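
ImageNet itself must be downloaded manually after registration; once the ILSVRC 2012 archives sit in a local folder, torchvision can read them, as in the hedged sketch below (the root path is a placeholder).

```python
# Sketch only: torchvision does not download ImageNet for you -- the ILSVRC 2012
# archives must already be present under `root`.
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

val_set = datasets.ImageNet(root="/path/to/imagenet", split="val", transform=preprocess)
image, label = val_set[0]
print(image.shape, val_set.classes[label])  # class names follow the WordNet synsets
```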

YouTube-8M

The YouTube-8M dataset stands as a groundbreaking resource for video understanding and analysis in the field of computer vision. Comprising millions of YouTube videos, each labeled with multiple tags, YouTube-8M offers an extensive collection that enables researchers to tackle tasks such as video classification, video summarization, and content recommendation. This dataset's immense scale and rich diversity make it an invaluable asset for developing and training deep learning models to comprehend and extract meaningful insights from video content on a large scale, opening up new frontiers in visual understanding and multimedia research.

  • Use cases: Image classification in videos, action recognition, video classification
  • Dataset: 6.1M video IDs and 350,000 hours of video, with 2.6 billion audio/visual features
  • Classes: 3,862 classes
  • License: Commercial use, Modification, Distribution, Patent use, Private use
  • Learn more and download the dataset:  https://research.google.com/youtube8m/

The distribution of videos in the top-level verticals illustrates the scope and diversity of the dataset and reflects the natural distribution of popular YouTube videos.
Source: https://ai.googleblog.com/2016/09/announcing-youtube-8m-large-and-diverse.html?m=1
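
The dataset is distributed as TFRecord files of precomputed features rather than raw video. The sketch below parses the video-level records with TensorFlow; the feature names and sizes follow the dataset documentation but should be treated as assumptions and checked against the files you download.

```python
# Hedged sketch: parsing YouTube-8M video-level TFRecords with TensorFlow.
# Feature names/sizes ("id", "labels", "mean_rgb"[1024], "mean_audio"[128]) are
# taken from the dataset docs; the filename below is a placeholder.
import tensorflow as tf

feature_spec = {
    "id": tf.io.FixedLenFeature([], tf.string),
    "labels": tf.io.VarLenFeature(tf.int64),               # multi-label targets
    "mean_rgb": tf.io.FixedLenFeature([1024], tf.float32),
    "mean_audio": tf.io.FixedLenFeature([128], tf.float32),
}

def parse_example(serialized):
    example = tf.io.parse_single_example(serialized, feature_spec)
    example["labels"] = tf.sparse.to_dense(example["labels"])
    return example

dataset = tf.data.TFRecordDataset("train00.tfrecord").map(parse_example)
for video in dataset.take(1):
    print(video["id"].numpy(), video["labels"].numpy())
```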

IMDB-WIKI

The IMDB-WIKI dataset holds significant value for researchers and developers interested in the domain of face recognition and age estimation. This dataset combines images from IMDb and Wikipedia, resulting in a vast collection of labeled (age and gender) facial images spanning a wide range of ages and demographics.

Example images from the IMDB-WIKI dataset, labeled with age and gender.
Source: https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/
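
The metadata (age, gender, file paths) ships as a MATLAB .mat file alongside the cropped images. A rough sketch with SciPy is shown below; the field names follow the project page's description and should be verified against the file you actually download.

```python
# Hedged sketch: reading the IMDB-WIKI metadata (.mat) with SciPy.
# Field names ("wiki", "dob", "photo_taken", "full_path", "gender") are assumptions
# based on the dataset description; paths are placeholders.
from datetime import date, timedelta
from scipy.io import loadmat

meta = loadmat("wiki_crop/wiki.mat")["wiki"][0, 0]
dob = meta["dob"][0]                  # birth dates as MATLAB serial date numbers
photo_taken = meta["photo_taken"][0]  # year each photo was taken
paths = meta["full_path"][0]
genders = meta["gender"][0]           # 1 = male, 0 = female, NaN = unknown

# Approximate age of the first subject: photo year minus birth year
# (MATLAB date numbers are offset from Python ordinals by 366 days).
birth_year = (date.fromordinal(int(dob[0])) - timedelta(days=366)).year
print(paths[0][0], int(photo_taken[0]) - birth_year, genders[0])
```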

Cityscapes

The Cityscapes dataset is a valuable resource for assessing the performance of vision algorithms in urban scene understanding. It supports research utilising large volumes of annotated and weakly annotated data for training deep neural networks. The dataset consists of diverse stereo video sequences recorded in street scenes from 50 cities, with pixel-level annotations for 5,000 frames and additional weak annotations for 20,000 frames. Additionally, Cityscapes 3D extends the dataset by providing 3D bounding box annotations for vehicle detection and serves as a benchmark for the 3D detection task.

  • Use cases: urban scene understanding
  • Dataset: 50 cities, with 25k images in total
  • Classes: 30 classes
  • License: For non-commercial use only
  • Learn more and download the dataset: https://www.cityscapes-dataset.com/

Example frame from the Cityscapes dataset with its detailed semantic segmentation.
Source: https://www.cityscapes-dataset.com/
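
After registering and downloading the leftImg8bit and gtFine packages, the data can be consumed through torchvision's Cityscapes wrapper; a minimal sketch (with a placeholder root path) looks like this:

```python
# Minimal sketch using torchvision's Cityscapes wrapper; the data must be downloaded
# manually from cityscapes-dataset.com and unpacked under `root` in the official
# leftImg8bit/ and gtFine/ layout.
from torchvision import datasets

train_set = datasets.Cityscapes(
    root="/path/to/cityscapes",
    split="train",
    mode="fine",             # the 5,000 pixel-accurate annotations
    target_type="semantic",  # per-pixel class IDs
)

image, seg_mask = train_set[0]
print(image.size, seg_mask.size)  # PIL images: the frame and its label map
```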

Fashion MNIST

Fashion-MNIST, created by Zalando, is a dataset consisting of 70,000 grayscale images of fashion articles, with 60,000 examples for training and 10,000 for testing. Each image is a 28x28 pixel grayscale image, labeled with one of 10 classes. Fashion-MNIST is designed as a drop-in replacement for the original MNIST dataset: it keeps the same image size and training/testing split, so researchers can benchmark their machine learning algorithms unchanged, while offering a more diverse alternative for evaluating algorithm performance in the AI/ML/Data Science community.

Example images from the Fashion-MNIST dataset with their labels.
Source: https://www.tensorflow.org/datasets/catalog/fashion_mnist
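
The drop-in-replacement idea is easy to see in code: Fashion-MNIST loads with exactly the same shapes and splits as MNIST, so an existing MNIST pipeline only needs the dataset call swapped. A small sketch with Keras (assumed installed via TensorFlow):

```python
# Minimal sketch: Fashion-MNIST via Keras, same shapes and splits as MNIST.
from tensorflow import keras

(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
print(x_train.shape, x_test.shape)  # (60000, 28, 28) (10000, 28, 28)

class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print(class_names[y_train[0]])      # human-readable label of the first training image
```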

 

MNIST

The MNIST database of handwritten digits has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.

It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal effort on preprocessing and formatting.

  • Use cases: Handwriting recognition, Image classification
  • Dataset: 70k greyscale images (28x28 pixels) split into 60k training and 10k testing subsets
  • Classes: 10 classes (digits from 0 to 9)
  • License: Commercial and non-commercial uses
  • Learn more and download the dataset: https://www.kaggle.com/datasets/hojjatk/mnist-dataset

Example frames from the MNIST dataset that contains hand-written digits for various classification use cases.
Source: https://www.tensorflow.org/datasets/catalog/mnist
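
Because MNIST is bundled with virtually every framework, getting batches ready for training takes only a few lines; a minimal torchvision sketch:

```python
# Minimal sketch: MNIST via torchvision, wrapped in a DataLoader for mini-batches.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

train_set = datasets.MNIST(root="./data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(loader))
print(images.shape, labels[:8])  # torch.Size([64, 1, 28, 28]) plus the first digit labels
```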

MPII Human Pose

The MPII Human Pose dataset is a leading benchmark for evaluating articulated human pose estimation. It consists of approximately 25,000 images containing over 40,000 people with annotated body joints. The images cover 410 different everyday human activities, each labeled accordingly. The dataset includes images extracted from YouTube videos, accompanied by preceding and following unannotated frames. The test set features additional annotations such as body part occlusions and 3D torso and head orientations. To maintain evaluation integrity, test annotations are withheld to prevent overfitting and tuning on the test set. The creators are also developing an automatic evaluation server and performance analysis tools based on the rich test set annotations.

  • Use cases: Human pose estimation, accident detection
  • Dataset: 25k images extracted from online videos, containing around 40k annotated people
  • Classes: 410 classes (types of human activity)
  • License: Freely usable, even for commercial purposes
  • Learn more and download the dataset: http://human-pose.mpi-inf.mpg.de/#

Example frames with their labels taken from the MPII Human Pose dataset.
Source: https://paperswithcode.com/dataset/mpii
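
The MPII annotations are published as a single MATLAB struct. The hedged sketch below loads it with SciPy; the field names (RELEASE, annolist, img_train, act) follow the dataset documentation and should be double-checked against the downloaded file.

```python
# Hedged sketch: loading the MPII Human Pose annotations (.mat) with SciPy.
# Field names are assumptions taken from the dataset documentation.
from scipy.io import loadmat

release = loadmat("mpii_human_pose_v1_u12_1.mat",
                  struct_as_record=False, squeeze_me=True)["RELEASE"]

annolist = release.annolist   # per-image annotations (file name, annotated people)
is_train = release.img_train  # 1 = training image, 0 = withheld test image
activities = release.act      # activity label per image

first = annolist[0]
print(first.image.name, "train" if is_train[0] else "test")
```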

CelebA

CelebA is a vast dataset comprising over 200,000 celebrity images, each annotated with 40 attributes. The images cover large variations in pose and background. With its rich annotations, CelebA serves as a valuable resource for various computer vision tasks such as face attribute recognition, face recognition, face detection, landmark localisation, and face editing & synthesis.

  • Use cases: face attribute recognition, face recognition, face detection, landmark localisation, face editing & synthesis
  • Dataset: more than 200k images with 40 attribute annotations (10,177 IDs, on average 20 pictures per ID, with 5 landmark locations each)
  • Classes: 10,177 classes (identities)
  • License: For non-commercial research purposes only
  • Learn more and download the dataset: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html 

Examples from the CelebA dataset with their labels.
Source: http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
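
torchvision also wraps CelebA, exposing the 40 binary attributes as the target; a minimal sketch is below (the built-in download pulls from the authors' Google Drive mirror and can hit quota limits, in which case a manual download into the root folder works).

```python
# Minimal sketch: CelebA via torchvision with the 40 attribute labels as targets.
from torchvision import datasets, transforms

celeba = datasets.CelebA(root="./data", split="train", target_type="attr",
                         download=True, transform=transforms.ToTensor())

image, attrs = celeba[0]
print(image.shape, attrs.shape)  # torch.Size([3, 218, 178]) torch.Size([40])
print(celeba.attr_names[:5])     # e.g. '5_o_Clock_Shadow', 'Arched_Eyebrows', ...
```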
