Portable Camera-Based Product Label Reading For Blind People
International Journal of Engineering Trends and Technology (IJETT)
© 2014 by IJETT Journal
Year of Publication: 2014
Authors: Rajkumar N, Anand M.G, Barathiraja N
Rajkumar N, Anand M.G, Barathiraja N. "Portable Camera-Based Product Label Reading For Blind People", International Journal of Engineering Trends and Technology (IJETT), V10(11), 521-524, April 2014. ISSN: 2231-5381. www.ijettjournal.org. Published by Seventh Sense Research Group.
We propose a camera-based assistive text reading framework to help blind persons read text labels and product packaging on hand-held objects in their daily life. To isolate the object from cluttered backgrounds or other surrounding objects in the camera view, we first propose an efficient and effective motion-based method that defines a region of interest (ROI) in the video by asking the user to shake the object. This scheme extracts the moving object region with a mixture-of-Gaussians-based background subtraction technique. Within the extracted ROI, text localization and recognition are then performed to acquire the text information. To automatically localize text regions within the object ROI, we present a novel text localization algorithm that learns gradient features of stroke orientations and distributions of edge pixels in an AdaBoost model. Text characters in the localized text regions are then binarized and recognized by off-the-shelf optical character recognition (OCR) software. The recognized text is converted into audio output for the blind user. The performance of the proposed text localization algorithm is quantitatively evaluated on the ICDAR-2003 and ICDAR-2011 Robust Reading datasets, and experimental results show that it achieves state-of-the-art performance. A proof-of-concept prototype is also evaluated on a dataset collected from ten blind persons to demonstrate the effectiveness of the system. We further discuss user interface issues and the robustness of the algorithm in extracting and reading text from different objects with complex backgrounds.
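The motion-based ROI step above relies on background subtraction: while the user shakes the object, pixels that deviate from a per-pixel statistical model of the scene are marked as foreground. The paper uses a mixture of Gaussians per pixel; the sketch below is a deliberately simplified single-Gaussian-per-pixel variant (running mean and variance, threshold at `k` standard deviations) written only to illustrate the idea. All names, parameter values, and the toy 8x8 frames are this sketch's own assumptions, not the authors' implementation.

```python
import numpy as np

def update_background(frame, mean, var, alpha=0.05, k=2.5):
    """One step of a simplified per-pixel Gaussian background model.

    The paper uses a mixture of Gaussians per pixel; this sketch keeps
    a single Gaussian (running mean/variance) for clarity.
    Returns (foreground_mask, new_mean, new_var).
    """
    diff = frame - mean
    fg = np.abs(diff) > k * np.sqrt(var)   # foreground: far from the model
    bg = ~fg
    # update the model only where the pixel matched the background
    mean = np.where(bg, mean + alpha * diff, mean)
    var = np.where(bg, (1 - alpha) * var + alpha * diff ** 2, var)
    var = np.maximum(var, 1e-4)            # avoid degenerate variance
    return fg, mean, var

# Toy demo: a static 8x8 background, then an "object" appears.
rng = np.random.default_rng(0)
bg_level = 100.0
mean = np.full((8, 8), bg_level)
var = np.full((8, 8), 4.0)

# Feed a few background-only frames so the model settles.
for _ in range(10):
    frame = bg_level + rng.normal(0.0, 1.0, (8, 8))
    fg, mean, var = update_background(frame, mean, var)

# A bright object now covers the top-left 3x3 block.
frame = bg_level + rng.normal(0.0, 1.0, (8, 8))
frame[:3, :3] += 60.0
fg, mean, var = update_background(frame, mean, var)
print(fg[:3, :3])
```

In a real pipeline one would use a full mixture model (e.g. several weighted Gaussians per pixel, as in the Stauffer-Grimson formulation) and take the largest connected foreground component as the object ROI before running text localization on it.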
Keywords: Assistive devices, blindness, distribution of edge pixels, hand-held objects, optical character recognition (OCR), stroke orientation, text reading, text region localization.