Code for the paper "Context-aware Interactive Attention for Multi-modal Sentiment and Emotion Analysis" (EMNLP 2019).
For the evaluation of our proposed CIA approach, we employ five multi-modal benchmark datasets: YouTube, MOUD, ICT-MMMO, CMU-MOSI, and CMU-MOSEI.
- You can access these datasets from here, or you can download them from here.
- Download the dataset from the given link, set the path in the code accordingly, and create two folders: (i) `results` and (ii) `weights`.
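The folder setup above can be done from the repository root, for example (folder names as required by the scripts):

```shell
# Create the two output folders the training scripts expect
mkdir -p results weights
```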
- For the trimodal model, run: `python trimodal_YouTube.py`
Dependencies
========================
- python: 2.7
- keras: 2.2.2
- tensorflow: 1.9.0
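Assuming a Python 2.7 environment, the pinned versions above can be installed with pip (a sketch; package names are the standard PyPI ones):

```shell
# Install the exact library versions the code was tested with
pip install keras==2.2.2 tensorflow==1.9.0
```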