
The CLEAR Benchmark: Continual LEArning on Real-World Imagery

By Carnegie Mellon University and the CMU Argo AI Center

CLEAR is a novel continual/lifelong learning benchmark that captures real-world distribution shifts in an Internet image collection (YFCC100M) spanning 2004 to 2014.

For a long time, researchers in the continual learning (CL) community have worked with artificial CL benchmarks such as "Permuted-MNIST" and "Split-CIFAR", which do not align with practical applications. In reality, distribution shifts are smooth, e.g., the natural temporal evolution of visual concepts.

Below are examples of classes in CLEAR-100 that changed over the past decade:

Back in 2004, we had bulky desktops, old-fashioned analog watches, and 2D pixel-art games. Visual concepts then gradually evolved from 2004 to 2014, e.g., toward sleeker MacBook Pros, digital watches, and 3D games with realistic graphics.

About CLEAR Benchmark

The CLEAR Benchmark and the CLEAR-10 dataset were first introduced in our NeurIPS 2021 paper.

{% embed url="https://arxiv.org/abs/2201.06289" %} NeurIPS'21 Datasets and Benchmarks Track {% endembed %}

In the spirit of the famous CIFAR-10/CIFAR-100 benchmarks for static image classification, we also collected a more challenging CLEAR-100 with a diverse set of 100 classes.

{% hint style="info" %} We hope our CLEAR-10/-100 benchmarks can be the new "CIFAR": a touchstone for the continual/lifelong learning community. {% endhint %}

We are also extending CLEAR to an ImageNet-scale benchmark. If you have feedback or insights, feel free to reach out to us!

{% content-ref url="introduction/about-us.md" %} about-us.md {% endcontent-ref %}

1st CLEAR Challenge at CVPR 2022

In June 2022, the 1st CLEAR Challenge was hosted at the CVPR 2022 Open World Vision Workshop, with a total of 15 teams from 21 different countries and regions participating. You may find a quick summary of the workshop on the page below:

{% content-ref url="introduction/1st-clear-challenge-cvpr22.md" %} 1st-clear-challenge-cvpr22.md {% endcontent-ref %}

Given the top teams' promising performance on the CLEAR-10/-100 benchmarks using methods that improve generalization, such as sharpness-aware minimization, supervised contrastive loss, strong data augmentation, and experience replay (a minimal replay sketch follows the list below), we believe CLEAR still offers a wealth of problems for the community to explore, such as:

  • Improving Forward Transfer and Next-Domain Accuracy
  • Unsupervised/Online Domain Generalization
  • Self-Supervised/Semi-Supervised Continual Learning
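
To make the replay idea concrete, here is a minimal sketch of experience replay in PyTorch. The buffer capacity, reservoir sampling policy, and training loop are illustrative assumptions, not any team's actual implementation:

```python
import random
import torch
from torch import nn

class ReplayBuffer:
    """Reservoir-style buffer holding (image_tensor, label) pairs.
    Capacity and sampling policy are illustrative choices."""
    def __init__(self, capacity=2000):
        self.capacity = capacity
        self.data = []
        self.num_seen = 0

    def add(self, x, y):
        # Reservoir sampling keeps a uniform sample over the stream so far.
        self.num_seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            idx = random.randrange(self.num_seen)
            if idx < self.capacity:
                self.data[idx] = (x, y)

    def sample(self, batch_size):
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.tensor(ys)

def train_on_bucket(model, optimizer, loader, buffer, replay_batch=32):
    """Train on one time bucket while replaying samples from earlier buckets."""
    criterion = nn.CrossEntropyLoss()
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        # Mix in a replayed batch from earlier buckets, if any are stored.
        if buffer.data:
            rx, ry = buffer.sample(replay_batch)
            loss = loss + criterion(model(rx), ry)
        loss.backward()
        optimizer.step()
        for xi, yi in zip(x, y):
            buffer.add(xi.detach(), int(yi))
```

Other techniques the top teams used, such as supervised contrastive loss or sharpness-aware minimization, would slot into the same loop by swapping out the loss function or the optimizer step.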

Note that this wiki page is under construction (as of July 8, 2022)!

In the following pages, we explain the motivation behind the CLEAR benchmark, how it is curated via a visio-linguistic approach, and its evaluation protocols, and we walk through the 1st CLEAR Challenge at CVPR'22.
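
As a preview of the evaluation protocol, CLEAR-style benchmarks are commonly scored from an accuracy matrix R, where R[i, j] is the test accuracy on time bucket j of a model trained through bucket i. The sketch below computes four summary metrics under that convention; consult the evaluation-protocol page for the exact definitions used in the paper:

```python
import numpy as np

def clear_metrics(R):
    """Summary metrics from an N x N accuracy matrix R, where R[i, j] is
    the accuracy on bucket j of the model trained up to bucket i.
    A sketch of the common convention, not the official scoring code."""
    N = R.shape[0]
    in_domain = np.mean([R[i, i] for i in range(N)])                         # diagonal
    next_domain = np.mean([R[i, i + 1] for i in range(N - 1)])               # superdiagonal
    backward = np.mean([R[i, j] for i in range(N) for j in range(i)])        # lower triangle
    forward = np.mean([R[i, j] for i in range(N) for j in range(i + 1, N)])  # upper triangle
    return {
        "in_domain": in_domain,
        "next_domain": next_domain,
        "backward_transfer": backward,
        "forward_transfer": forward,
    }
```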

You can also jump straight to the download links for the CLEAR datasets:

{% content-ref url="documentation/download-clear-10-clear-100.md" %} download-clear-10-clear-100.md {% endcontent-ref %}
