Recognition Of Accessibility Features Using CNN

Jignesh Jayesh Tailor

Abstract

America has a deaf population estimated at one million people, and sign language is the primary method of communication within the deaf community. American Sign Language encompasses both static and dynamic signs. This paper describes a method to capture the static signs (the alphabet) and translate them into text. Image processing techniques are applied to the captured images; the resulting features are then extracted using three different techniques. A convolutional neural network is used to train on the dataset. Finally, the interpreted English-language text for each sign is displayed.
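The pipeline the abstract describes (capture, preprocess, extract features, classify with a CNN, emit a letter) can be sketched as a toy forward pass. This is a minimal illustration, not the paper's implementation: the filter and layer weights below are random stand-ins for trained parameters, and the 8x8 input stands in for a preprocessed hand image.

```python
import numpy as np

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def conv2d(img, kernel):
    """Valid 2-D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def classify_sign(img, kernel, weights, bias):
    """Toy CNN forward pass: convolve, ReLU, flatten, linear layer, argmax."""
    feat = np.maximum(conv2d(img, kernel), 0.0)   # ReLU feature map
    logits = feat.reshape(-1) @ weights + bias    # fully connected output layer
    return ALPHABET[int(np.argmax(logits))]       # one of the 26 static letters

rng = np.random.default_rng(0)
img = rng.random((8, 8))                 # stand-in for a preprocessed sign image
kernel = rng.standard_normal((3, 3))     # one 3x3 filter (untrained, hypothetical)
weights = rng.standard_normal((36, 26))  # 6x6 feature map flattened -> 26 letters
bias = np.zeros(26)
print(classify_sign(img, kernel, weights, bias))
```

In a real system the kernel and weights would be learned from a labeled dataset of sign images (e.g. via backpropagation in a deep-learning framework), and there would be several convolution and pooling layers rather than one.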
