ABSTRACT

Air writing is a form of human-computer interaction in which a user writes on the screen simply by moving a pen through the air. In this paper, the proposed system recognizes a green-tipped object and traces its movement, which is then converted to text. It enables the user to write in four different colours and also allows erasing the whole canvas. The system is built using the computer vision techniques of OpenCV in Python. A Convolutional Recurrent Neural Network (CRNN) model, trained with a connectionist temporal classification (CTC) loss, is developed to recognize the air-written words and print them on the screen as text. This recognition model can help students and adults with dysgraphia or dyslexia. In this system, we capture the movement of the green-tipped pen as a video using OpenCV, detect the tip in each frame, track its movement, recognize the characters being drawn, and display the result as text on the screen.