This project predicts the emotion expressed by emoji images. To achieve this, transfer learning is used to train a MobileNetV2-based model.
- Python 3.8 or higher
- TensorFlow 2.8
- NumPy
- Matplotlib
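These can be installed with pip, for example:

```
pip install "tensorflow==2.8.*" numpy matplotlib
```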
The dataset contains 168 images, classified as smiling, sad, or angry. The `data` directory contains three subdirectories, one per class label, and each subdirectory holds the images for that class.
- The dependencies are imported.
- The data generators are set up for the training and validation data.
- The `ImageDataGenerator` is configured with data augmentation techniques such as rotation, width and height shifts, and horizontal flips, and it splits the data into training and validation subsets.
- A model is constructed by combining the MobileNetV2 base model with additional layers for customization.
- The model is trained to predict the emotion label (a minimal code sketch follows this list).
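The pipeline above can be summarized with the sketch below. It assumes the `data/` directory layout described earlier, a 224×224 input size, and illustrative hyperparameters (validation split, augmentation ranges, number of epochs); the exact values used are in the notebook.

```python
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # MobileNetV2's default input size (assumption)

# Data generator with augmentation and a train/validation split
datagen = ImageDataGenerator(
    preprocessing_function=tf.keras.applications.mobilenet_v2.preprocess_input,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True,
    validation_split=0.2,
)
train_gen = datagen.flow_from_directory(
    "data", target_size=IMG_SIZE, batch_size=32, subset="training")
val_gen = datagen.flow_from_directory(
    "data", target_size=IMG_SIZE, batch_size=32, subset="validation")

# MobileNetV2 base (ImageNet weights) with its classifier head removed
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the base for transfer learning

# Custom classification head on top of the frozen base
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(train_gen.num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=10)
```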
To train a custom model, place your image folders inside the `data` directory, naming each folder after its class label and filling it with the corresponding images. Then run the provided Jupyter notebook, `emoji_emotion_recog_jupyter.ipynb`; the model will be trained and saved into the `model` folder.
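For example, using the three class labels mentioned above (file names are illustrative), the expected layout is:

```
data/
├── smiling/
│   ├── img_001.png
│   └── ...
├── sad/
│   └── ...
└── angry/
    └── ...
```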
- Clone the repository in your terminal:
  `git clone https://github.com/akshitagupta15june/Face-X.git`
- Switch to the project directory:
  `cd Emoji-based-Emotion-prediction`
- Open and run the notebook:
  `emoji_emotion_recog_jupyter.ipynb`
If you wish to contribute, refer to CONTRIBUTING.md:
https://github.com/akshitagupta15june/Face-X/blob/master/CONTRIBUTING.md