This project was done as part of the COMP9444 Neural Networks and Deep Learning course.

anantkm/JapaneseCharacterRecognition


COMP9444 Neural Networks and Deep Learning

Project 1

Japanese Character Recognition

In this part of the assignment you will implement networks to recognize handwritten Hiragana symbols. The dataset to be used is Kuzushiji-MNIST, or KMNIST for short. The paper describing the dataset is available here. It is worth reading, but in short: significant changes occurred to the language when Japan reformed its education system in 1868, and the majority of Japanese today cannot read texts published over 150 years ago. This paper presents a dataset of handwritten, labeled examples of this old-style script (Kuzushiji). Along with this dataset, however, the authors also provide a much simpler one, containing 10 Hiragana characters with 7000 samples per class. This is the dataset we will be using.
  1. Implement a model NetLin which computes a linear function of the pixels in the image, followed by log softmax. Run the code by typing:
    python3 kuzu_main.py --net lin
    
    Copy the final accuracy and confusion matrix into your report. Note that the columns of the confusion matrix indicate the target character, while the rows indicate the one chosen by the network. (0="o", 1="ki", 2="su", 3="tsu", 4="na", 5="ha", 6="ma", 7="ya", 8="re", 9="wo"). More examples of each character can be found here.
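    The assignment leaves the implementation details open; a minimal PyTorch sketch of such a linear model might look like the following (assuming 28×28 single-channel KMNIST inputs, as in MNIST — the class name `NetLin` comes from the assignment, everything else is illustrative):

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NetLin(nn.Module):
        """Linear map from the 28x28 image pixels to 10 classes, then log softmax."""
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(28 * 28, 10)

        def forward(self, x):
            x = x.view(x.size(0), -1)              # flatten (batch, 1, 28, 28) -> (batch, 784)
            return F.log_softmax(self.linear(x), dim=1)
    ```

    Because the output is a log softmax, it pairs with a negative log-likelihood loss such as `F.nll_loss` during training.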

  2. Implement a fully connected 2-layer network NetFull, using tanh at the hidden nodes and log softmax at the output node. Run the code by typing:
    python3 kuzu_main.py --net full
    
    Try different values (multiples of 10) for the number of hidden nodes and try to determine a value that achieves high accuracy on the test set. Copy the final accuracy and confusion matrix into your report.
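    A sketch of such a two-layer network is below; the hidden size of 160 is only an arbitrary starting point for the multiples-of-10 search the task describes, not a recommended value:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NetFull(nn.Module):
        """Fully connected 2-layer net: tanh hidden layer, log softmax output."""
        def __init__(self, hidden=160):            # hidden size is a tunable guess
            super().__init__()
            self.fc1 = nn.Linear(28 * 28, hidden)
            self.fc2 = nn.Linear(hidden, 10)

        def forward(self, x):
            x = x.view(x.size(0), -1)              # flatten the image
            h = torch.tanh(self.fc1(x))            # tanh at the hidden nodes
            return F.log_softmax(self.fc2(h), dim=1)
    ```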

  3. Implement a convolutional network called NetConv, with two convolutional layers plus one fully connected layer, all using the relu activation function, followed by the output layer. You are free to choose for yourself the number and size of the filters, the metaparameter values, and whether to use max pooling or a fully convolutional architecture. Run the code by typing:
    python3 kuzu_main.py --net conv
    
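    One possible shape for NetConv is sketched below. The filter counts, kernel sizes, and use of max pooling are illustrative choices only, since the task explicitly leaves them open:

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class NetConv(nn.Module):
        """Two conv layers + one fully connected layer, relu throughout,
        with max pooling after each conv. All sizes here are example choices."""
        def __init__(self):
            super().__init__()
            self.conv1 = nn.Conv2d(1, 16, kernel_size=5, padding=2)
            self.conv2 = nn.Conv2d(16, 32, kernel_size=5, padding=2)
            self.fc1 = nn.Linear(32 * 7 * 7, 128)
            self.out = nn.Linear(128, 10)

        def forward(self, x):
            x = F.max_pool2d(F.relu(self.conv1(x)), 2)   # 28x28 -> 14x14
            x = F.max_pool2d(F.relu(self.conv2(x)), 2)   # 14x14 -> 7x7
            x = x.view(x.size(0), -1)                    # flatten to (batch, 32*7*7)
            x = F.relu(self.fc1(x))
            return F.log_softmax(self.out(x), dim=1)
    ```

    With `padding=2`, each 5×5 convolution preserves the 28×28 spatial size, so only the two pooling steps shrink the feature maps, which makes the flattened size easy to compute.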

