A Survey on Feature Selection Using FAST Approach to Reduce High Dimensional Data
International Journal of Engineering Trends and Technology (IJETT)
© 2014 by IJETT Journal
Year of Publication: 2014
Authors: R. Munieswari, S. Saranya
R. Munieswari and S. Saranya, "A Survey on Feature Selection Using FAST Approach to Reduce High Dimensional Data", International Journal of Engineering Trends and Technology (IJETT), V8(5), 229-231, February 2014. ISSN: 2231-5381. www.ijettjournal.org. Published by Seventh Sense Research Group.
Feature selection is the process of identifying the most useful subset of features. This survey summarizes the principal feature selection methods and algorithms. A typical feature selection method consists of four basic steps, and existing algorithms can be classified in terms of their generation procedures and evaluation functions. The most representative methods from each category are examined, and the strengths and weaknesses of the different algorithms are explained. Feature selection has proved to be an effective technique for dimensionality reduction: it removes irrelevant data, increases learning accuracy, and improves comprehensibility. However, the growing dimensionality of data poses a severe challenge to many existing feature selection methods with respect to both efficiency and effectiveness: efficiency concerns the time required to find a subset of features, while effectiveness concerns the quality of the subset found. Most existing feature selection algorithms remove only irrelevant features, whereas the FAST algorithm removes both irrelevant and redundant features. This survey therefore focuses on comparing the various techniques and algorithms used in the feature selection process.
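To make the distinction between irrelevant and redundant features concrete, the following is a simplified, hypothetical sketch of a filter-style selector. It uses symmetric uncertainty, SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)), as the correlation measure (the same measure the FAST paper uses), but it replaces FAST's minimum-spanning-tree clustering with a simpler greedy redundancy check, so it is an FCBF-style approximation rather than the FAST algorithm itself; the function names and the threshold value are illustrative assumptions, not part of the surveyed work.

```python
# Hedged sketch: filter-based feature selection that removes both
# irrelevant features (low correlation with the class) and redundant
# features (high correlation with an already-kept feature).
# NOTE: this greedy redundancy step approximates FCBF, not FAST's
# MST-based clustering; names and threshold are illustrative.
from collections import Counter
from math import log2

def entropy(xs):
    """Shannon entropy of a discrete sequence, in bits."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def symmetric_uncertainty(xs, ys):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), in [0, 1]."""
    hx, hy = entropy(xs), entropy(ys)
    if hx + hy == 0:
        return 0.0
    hxy = entropy(list(zip(xs, ys)))   # joint entropy H(X, Y)
    mi = hx + hy - hxy                 # mutual information I(X; Y)
    return 2.0 * mi / (hx + hy)

def select_features(features, target, threshold=0.1):
    """features: dict name -> list of discrete values; target: class labels."""
    # Step 1: drop irrelevant features (SU with the class below threshold).
    relevant = {}
    for name, values in features.items():
        su = symmetric_uncertainty(values, target)
        if su > threshold:
            relevant[name] = su
    # Step 2: drop redundant features, greedily keeping the most relevant:
    # a feature is redundant if it correlates with a kept feature at least
    # as strongly as it correlates with the class.
    kept = []
    for name in sorted(relevant, key=relevant.get, reverse=True):
        if all(symmetric_uncertainty(features[name], features[g]) < relevant[name]
               for g in kept):
            kept.append(name)
    return kept
```

On a toy dataset where `f1` predicts the class perfectly, `f2` duplicates `f1`, and `f3` is independent noise, `select_features` keeps only `f1`: `f3` is discarded as irrelevant in step 1 and `f2` as redundant in step 2.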
Qinbao Song, Jingjie Ni, and Guangtao Wang, "A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data," IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 1, January 2013.
Almuallim H. and Dietterich T.G., "Algorithms for Identifying Relevant Features," Proc. Ninth Canadian Conf. Artificial Intelligence, pp. 38-45, 1992.
Arauzo-Azofra A., Benitez J.M., and Castro J.L., "A Feature Set Measure Based on Relief," Proc. Fifth Int'l Conf. Recent Advances in Soft Computing, pp. 104-109, 2004.
Biesiada J. and Duch W., "Feature Selection for High-Dimensional Data: A Pearson Redundancy Based Filter," Advances in Soft Computing, vol. 45, pp. 242-249, 2008.
Das S., "Filters, Wrappers and a Boosting-Based Hybrid for Feature Selection," Proc. 18th Int'l Conf. Machine Learning, pp. 74-81, 2001.
Dash M. and Liu H., "Feature Selection for Classification," Intelligent Data Analysis, vol. 1, no. 3, pp. 131-156, 1997.
Kohavi R. and John G.H., "Wrappers for Feature Subset Selection," Artificial Intelligence, vol. 97, nos. 1/2, pp. 273-324, 1997.
Souza J., "Feature Selection with a General Hybrid Algorithm," PhD dissertation, Univ. of Ottawa, 2004.
Keywords: feature selection, classification, filter method, hybrid method, redundant features, irrelevant features.