Development of A Computer Aided Real-Time Interpretation System for Indigenous Sign Language in Nigeria Using Convolutional Neural Network


  •   Ayodele Olawale Olabanji

  •   Akinlolu Adediran Ponnle


Sign language is the primary means of communication for deaf and hearing-impaired individuals. Indigenous sign language in Nigeria is an area of growing interest, with the major challenge being communication between signers and non-signers. Recent advancements in computer vision and deep learning neural networks (DLNN) have led to the exploration of technological solutions to these challenges. One area in which DLNN has had extensive impact is the interpretation of hand signs. This study presents an interpretation system for the indigenous sign language in Nigeria. The methodology comprises three key phases: dataset creation, computer vision techniques, and deep learning model development. A multi-class Convolutional Neural Network (CNN) is designed to train on and interpret the indigenous signs in Nigeria. The model is evaluated on a custom-built dataset of selected indigenous words comprising 15,000 image samples. The experimental results show excellent performance from the interpretation system, with accuracy attaining 95.67%.
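The computer vision phase of such a pipeline typically begins by segmenting the hand region from the background before the CNN sees the image; Otsu's global thresholding (cited in the reference list below) is one standard approach. The following is a minimal pure-Python sketch of that method, with the function name and input format (a flat list of 8-bit gray levels) chosen for illustration rather than taken from the paper:

```python
def otsu_threshold(pixels, levels=256):
    """Return the gray level maximizing between-class variance (Otsu, 1979)."""
    # Build an intensity histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))

    w_b = 0            # background (below-threshold) pixel count
    sum_b = 0          # background intensity sum
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b          # foreground pixel count
        if w_f == 0:
            break
        sum_b += t * hist[t]
        mean_b = sum_b / w_b
        mean_f = (sum_all - sum_b) / w_f
        # Between-class variance (up to a constant factor of 1/total**2).
        var_between = w_b * w_f * (mean_b - mean_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a roughly bimodal image this picks a threshold between the two intensity clusters; pixels above it can then be treated as the hand mask that is cropped and fed to the classifier.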

Keywords: Sign Language, Convolutional Neural Network


J.G. Kyle, J. Kyle, B. Woll, G. Pullen, and F. Maddix. Sign Language: The Study of Deaf People and Their Language. Cambridge University Press, 1988.

M. Billinghurst. "Put that where? Voice and gesture at the graphics interface." ACM SIGGRAPH Computer Graphics, vol. 32, no. 4, pp. 60-63, 1998.

L. Pigou, S. Dieleman, P.J. Kindermans, and B. Schrauwen, “Sign language recognition using convolutional neural networks.” In European Conference on Computer Vision, vol. 8925, pp. 572-578, Zurich, Switzerland, 2014.

E. Gani, and A. Kika. “Albanian Sign Language (AlbSL) Number Recognition from Both Hand's Gestures Acquired by Kinect Sensors,” International Journal of Advanced Computer Science and Applications (IJACSA), vol. 7, no. 7, pp. 216-220, August 2016.

A. Thalange, and S. Dixit. “Sign Language Alphabets Recognition Using Wavelet Transform.” In International Conference on Intelligent Computing, Electronics Systems, and Information Technology (ICESIT-15), pp. 25-26, Kuala Lumpur, 2015.

K. Assaleh, and M. Al-Rousan, “Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers.” EURASIP Journal on Applied Signal Processing, vol. 13, pp. 2136-2145, August 2005.

K. Assaleh, T. Shanableh, M. Fanaswala, H. Bajaj, and F. Amin. “Vision-based system for Continuous Arabic Sign Language Recognition in user dependent mode,” In 2008 5th International Symposium on Mechatronics and its Applications (ISMA08), pp. 1-5, Amman, 2008.

C. Wang, W. Gao, and Z. Xuan, “A real-time large vocabulary continuous recognition system for Chinese Sign Language.” In Pacific-Rim Conference on Multimedia. pp. 150-157, Springer, Berlin, Heidelberg, 2001.

J.S. Kim, W. Jang, and Z. Bien, “A dynamic gesture recognition system for the Korean Sign Language (KSL).” IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), vol. 26, no. 2, pp. 354-359, April 1996.

F. Solís, D. Martínez, and O. Espinoza, “Automatic Mexican Sign Language Recognition Using Normalized Moments and Artificial Neural Networks.” Engineering. vol. 8, no. 10, pp. 733-740, 2016.

P.S. Rajam, and G. Balakrishnan, “Recognition of Tamil sign language alphabet using image processing to aid deaf-dumb people.” Procedia Engineering, vol. 30, pp. 861-868, January 2012.

A.Y. Dawod, J. Abdullah, and M.J. Alam, “Adaptive skin color model for hand segmentation.” In 2010 International Conference on Computer Applications and Industrial Electronics, pp. 486-489, Kuala Lumpur, 2010.

S. Mo, S. Cheng, and X. Xing, “Hand gesture segmentation based on improved kalman filter and TSL skin color model.” In 2011 International Conference on Multimedia Technology, pp. 3543-3546, Hangzhou, 2011.

R.Y. Wang, and J. Popović, “Real-time hand-tracking with a color glove.” ACM Transactions on Graphics (TOG), vol. 28, no. 63, pp. 1-8, July 2009.

H. Lu, K.N. Plataniotis, and A.N. Venetsanopoulos, “A full-body layered deformable model for automatic model-based gait recognition.” EURASIP Journal on Advances in Signal Processing, vol. 2008, pp. 1-13, 2008.

T.B. Moeslund, A. Hilton, and V. Kruger, “A survey of advances in vision-based human motion capture and analysis.” Computer Vision and Image Understanding, vol. 104, no. 2-3, pp. 90-126, 2006.

B. Büyüksaraç, M.M. Bulut, and G.B. Akar, “Sign language recognition by image analysis.” In 2016 24th Signal Processing and Communication Application Conference (SIU), pp. 417-420, Zonguldak, 2016.

C.M. Jin, Z. Omar, and M.H. Jaward, “A mobile application of American sign language translation via image processing algorithms.” In 2016 IEEE Region 10 Symposium (TENSYMP), pp. 104-109, Bali, 2016.

M. A. Uddin, and S.A. Chowdhury, “Hand sign language recognition for bangla alphabet using support vector machine.” In 2016 International Conference on Innovations in Science, Engineering and Technology (ICISET), pp. 1-4. Dhaka, 2016.

S.C. Agrawal, A.S. Jalal, and C. Bhatnagar, “Recognition of Indian Sign Language using feature fusion.” In 2012 4th International Conference on Intelligent Human Computer Interaction (IHCI), pp. 1-5, Kharagpur, 2012.

P. S. Neethu, R. Suguna, and D. Sathish, “An efficient method for human hand gesture detection and recognition using deep learning convolutional neural networks.” Soft Computing, vol. 24, pp. 1-10, 2020.

T. Ozcan, and A. Basturk, “Transfer learning-based convolutional neural networks with heuristic optimization for hand gesture recognition.” Neural Computing and Applications, Vol. 31, No. 12, pp. 8955-8970, 2019.

N.Ç. Kılıboz, and U. Güdükbay, "A hand gesture recognition technique for human–computer interaction," Journal of Visual Communication and Image Representation, vol. 28, pp. 97–104, April 2015.

K.U. Ebi. Talking Hands: An Introduction to Sign Language in Special Education. Adex Sea Concept, Nigeria. ISBN: 978-978-52439-7-0, 2019.

N. Otsu, “A threshold selection method from gray-level histograms.” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.

J. Wang, and L. Perez, “The effectiveness of data augmentation in image classification using deep learning.” Convolutional Neural Networks Vis. Recognit, vol. 11, December 2017.

S. Ioffe, and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift.” In Proceedings of the 32nd International conference on machine learning, PMLR, vol. 37, pp. 448-456, 2015.

K. He, X. Zhang, S. Ren, and J. Sun, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification.” In Proceedings of the IEEE international conference on computer vision, pp. 1026-1034, 2015.

J. Bjorck, C. Gomes, B. Selman, and K.Q. Weinberger, “Understanding batch normalization.” arXiv preprint arXiv:1806.02375, June 2018.




How to Cite
Olabanji, A.O. and Ponnle, A.A. 2021. Development of A Computer Aided Real-Time Interpretation System for Indigenous Sign Language in Nigeria Using Convolutional Neural Network. European Journal of Electrical Engineering and Computer Science. 5, 3 (Jun. 2021), 68–74. DOI: