ABSTRACT

In the eight years since the Omniglot dataset was first released, very few papers have addressed the original Omniglot challenge, which is to carry out within-alphabet one-shot classification rather than selecting test samples across alphabets. Most researchers have made the task easier by introducing new splits of the dataset and by relying on substantial sample and class augmentation. Among the deep learning models that have tackled the Omniglot challenge as originally posed, the Recursive Cortical Network has the highest reported accuracy, at 92.75\%. In this paper, we introduce a new similarity function to aid the training of a matching network, which achieves 95.75\% classification accuracy on the Omniglot challenge without requiring any data augmentation.