Recognition Of Accessibility Features Using CNN

Jignesh Jayesh Tailor


America has a deaf population of an estimated one million people. The primary method of communication within the deaf community is sign language. American Sign Language (ASL) comprises both static and dynamic signs. This paper describes a method to capture static signs (the alphabet) and translate those signs into text. Image processing techniques are applied to the captured images; once preprocessing is complete, the features are classified by three different techniques. A convolutional neural network is used for training on the dataset. Finally, the interpreted text output for the sign is displayed in English.
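The pipeline the abstract outlines (preprocessed sign image → convolutional feature extraction → classification into one of the 26 alphabet letters) can be sketched as a minimal forward pass. The code below is an illustrative NumPy sketch, not the authors' implementation: the single 3×3 kernel, 28×28 input size, random weights, and the `classify` helper are all assumptions for demonstration; a real system would use a trained multi-layer CNN.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling, cropping any remainder."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(img, kernel, weights, bias):
    """Conv -> ReLU -> max-pool -> flatten -> dense softmax over 26 letters."""
    feat = np.maximum(conv2d(img, kernel), 0.0)   # convolution + ReLU
    pooled = max_pool(feat)                       # downsample feature map
    logits = pooled.ravel() @ weights + bias      # fully connected layer
    return softmax(logits)                        # class probabilities A..Z

rng = np.random.default_rng(0)
img = rng.random((28, 28))                 # stand-in for a preprocessed sign image
kernel = rng.standard_normal((3, 3))       # one untrained convolutional filter
pooled_len = ((28 - 3 + 1) // 2) ** 2      # 13 * 13 = 169 pooled features
weights = rng.standard_normal((pooled_len, 26)) * 0.01
bias = np.zeros(26)

probs = classify(img, kernel, weights, bias)
letter = chr(ord('A') + int(np.argmax(probs)))  # translated text output
```

In a trained network, `kernel`, `weights`, and `bias` would be learned by backpropagation over a labelled dataset of static ASL alphabet images, and several convolutional layers would typically be stacked before the dense classifier.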


Copyright (c) 2020 International Journal of Advanced Research in Computer Science