
CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning


In this work, we propose a bidirectional cross-modal ZSL approach, Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning (CREST). CREST first extracts representations for attribute and visual localization and employs Evidential Deep Learning (EDL) to measure the underlying epistemic uncertainty. It then incorporates dual learning pathways, focusing on both visual-category and attribute-category alignments, to ensure robust correlation between latent and observable spaces. Moreover, we introduce an uncertainty-informed cross-modal fusion technique to refine visual-attribute inference. Extensive experiments demonstrate our model's effectiveness and unique explainability across multiple datasets.
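To make the EDL component concrete, here is a minimal PyTorch sketch of how Dirichlet evidence and epistemic uncertainty are commonly computed and then used for uncertainty-weighted fusion. The helper name, tensor shapes, and fusion rule are illustrative assumptions, not the exact CREST implementation:

```python
import torch
import torch.nn.functional as F

def evidential_uncertainty(logits: torch.Tensor):
    """Map raw logits to Dirichlet evidence; return (belief, uncertainty).

    Standard EDL formulation: evidence e = softplus(logits), Dirichlet
    parameters alpha = e + 1, epistemic uncertainty u = K / sum(alpha),
    where K is the number of classes.
    """
    evidence = F.softplus(logits)               # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # total Dirichlet strength S
    belief = evidence / strength                # per-class belief masses
    uncertainty = logits.size(-1) / strength    # epistemic uncertainty u = K / S
    return belief, uncertainty

# Illustrative fusion of visual and attribute branches, weighted by confidence (1 - u).
visual_logits = torch.randn(8, 200)   # e.g. CUB has 200 classes
attr_logits = torch.randn(8, 200)
b_v, u_v = evidential_uncertainty(visual_logits)
b_a, u_a = evidential_uncertainty(attr_logits)
fused = (1 - u_v) * b_v + (1 - u_a) * b_a
```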

🔥 News

  • 2024-04 Our paper is released on arXiv.
  • 2024-04 The pre-processing code is now available!

🛠️Dependencies

$ pip install -r requirements.txt

❕Details

  • Python==3.9.18
  • numpy==1.26.1
  • scikit_learn==1.2.2
  • torch==2.0.1
  • torchvision==0.15.2
  • tqdm==4.65.0
  • transformers==4.31.0

🗂️ Step 1: Data Preparation

Before your model can start flexing its muscles, you need to gather the superhero team of datasets: CUB, SUN, and AWA2. Just like assembling a team of avengers, make sure you've got the right versions:

  • CUB - Caltech-UCSD Birds-200-2011
  • SUN - SUN Attribute Database: Discovering, Annotating, and Recognizing Scene Attributes
  • AWA2 - A free dataset for Animals Attribute Based Classification and Zero-Shot Learning

Oh, and don't forget the rookie of the year, xlsa17. You'll find it hanging out here.

Once you've got them all, decompress them in a folder that looks like this:

.
├── data
│   ├── CUB/CUB_200_2011/...
│   ├── SUN/images/...
│   ├── AWA2/Animals_with_Attributes2/...
│   └── xlsa17/data/...
└── ···
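If you want a quick sanity check that everything is laid out as expected before moving on, a small snippet like the one below will do. The paths follow the tree above; the check itself is just a convenience, not part of the released code:

```python
from pathlib import Path

# Confirm the expected dataset folders exist before running preprocessing.
DATA_ROOT = Path("data")
EXPECTED = [
    "CUB/CUB_200_2011",
    "SUN/images",
    "AWA2/Animals_with_Attributes2",
    "xlsa17/data",
]

missing = [p for p in EXPECTED if not (DATA_ROOT / p).is_dir()]
if missing:
    raise FileNotFoundError(f"Missing dataset folders: {missing}")
print("All dataset folders are in place.")
```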

🎆 Step 2: Cooking the Features

Now, let's turn the heat up and cook those raw features until they're golden! Open your terminal and let the magic begin:

$ python preprocessing.py --dataset CUB --compression --device cuda:0
$ python preprocessing.py --dataset SUN --compression --device cuda:0
$ python preprocessing.py --dataset AWA2 --compression --device cuda:0
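The exact behavior of preprocessing.py is defined by the released script, but as a rough mental model, feature-extraction pipelines of this kind typically push images through a frozen backbone and store the pooled features in a compressed file. The sketch below is an illustrative assumption (backbone choice, paths, HDF5 keys, and the use of h5py are hypothetical), not the actual CREST preprocessing:

```python
import h5py
import torch
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Frozen ImageNet backbone used purely as a feature extractor (illustrative choice).
backbone = models.resnet101(weights=models.ResNet101_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval().to(device)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical path; point this at the images of the dataset being processed.
dataset = ImageFolder("data/CUB/CUB_200_2011/images", transform=preprocess)
loader = DataLoader(dataset, batch_size=64, num_workers=4)

features, labels = [], []
with torch.no_grad():
    for imgs, targets in loader:
        features.append(backbone(imgs.to(device)).cpu())
        labels.append(targets)

# "--compression" is assumed here to mean gzip-compressed HDF5 storage.
with h5py.File("data/CUB/features.hdf5", "w") as f:
    f.create_dataset("features", data=torch.cat(features).numpy(), compression="gzip")
    f.create_dataset("labels", data=torch.cat(labels).numpy(), compression="gzip")
```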

🏃 Train and Evaluation

TeleAI takes data confidentiality seriously. Our source code is undergoing a thorough review process and will be shared with the community once approved. Your understanding is appreciated. Stay tuned!

🤝 Citation

@inproceedings{huang2024crest,
  title={{CREST}: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning},
  author={Haojian Huang and Xiaozhennn Qiao and Zhuo Chen and Haodong Chen and Binyu Li and Zhe Sun and Mulin Chen and Xuelong Li},
  booktitle={ACM Multimedia 2024},
  year={2024},
  url={https://openreview.net/forum?id=RAUOcGo3Qt}
}
