
International Journal of Computer Engineering in Research Trends (IJCERT)

Scholarly, Peer-Reviewed, Open Access and Multidisciplinary





ISSN (Online): 2349-7084


High Dimensional Data Clustering Based On Feature Selection Algorithm

K. SWATHI, B. RANJITH

Affiliations
M.Tech Research Scholar, Priyadarshini Institute of Technology and Science for Women
HOD-CSE, Priyadarshini Institute of Technology and Science for Women


Abstract
Feature selection is the process of identifying a subset of the most useful features that produces results comparable to those of the original, complete feature set. A feature selection algorithm may be evaluated from both the efficiency and the effectiveness points of view: efficiency concerns the time required to find a subset of features, while effectiveness concerns the quality of that subset. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and experimentally evaluated. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic clustering methods. In the second step, the most representative feature, the one most strongly related to the target classes, is selected from each cluster to form the final subset. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. A minimum spanning tree (MST) built with Prim's algorithm grows one tree at a time; to keep FAST efficient, the MST is instead constructed with Kruskal's algorithm before being partitioned into clusters.
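
The two-step pipeline described above, cluster features over a minimum spanning tree and then keep one representative per cluster, can be sketched in a few lines of Python. The sketch below is illustrative only: it uses absolute Pearson correlation as a stand-in for the symmetric-uncertainty measure typically used in FAST-style filters, and the function names (kruskal_mst, fast_like_selection) and the cut threshold are hypothetical choices, not the authors' implementation.

import numpy as np

def kruskal_mst(n, edges):
    # Kruskal's algorithm with a union-find structure.
    # edges: iterable of (weight, i, j); returns the MST edge list.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = []
    for w, i, j in sorted(edges):          # scan edges by increasing weight
        ri, rj = find(i), find(j)
        if ri != rj:                       # keep the edge only if it joins two trees
            parent[ri] = rj
            mst.append((w, i, j))
    return mst

def fast_like_selection(X, y, cut=0.5):
    # Sketch of a FAST-style filter (parameters are assumptions):
    # 1. build a complete graph on features weighted by dissimilarity,
    # 2. compute its MST with Kruskal's algorithm,
    # 3. drop MST edges heavier than `cut` to form feature clusters,
    # 4. keep the feature most correlated with the target from each cluster.
    n = X.shape[1]
    sim = np.abs(np.nan_to_num(np.corrcoef(X, rowvar=False)))
    edges = [(1.0 - sim[i, j], i, j) for i in range(n) for j in range(i + 1, n)]
    kept = [(i, j) for w, i, j in kruskal_mst(n, edges) if w <= cut]

    # Union-find again over the surviving MST edges to label the clusters.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j in kept:
        parent[find(i)] = find(j)

    clusters = {}
    for f in range(n):
        clusters.setdefault(find(f), []).append(f)

    # Relevance of each feature to the target (again |corr| as a stand-in).
    rel = [abs(np.nan_to_num(np.corrcoef(X[:, f], y))[0, 1]) for f in range(n)]
    return sorted(max(members, key=lambda f: rel[f])
                  for members in clusters.values())

# Toy usage: 200 samples, 8 independent features, target driven by feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X[:, 0] + 0.1 * rng.normal(size=200)
print(fast_like_selection(X, y))

Because Kruskal's algorithm sorts all edges once and merges trees globally, it fits this use better than Prim's algorithm, which grows a single tree from one seed vertex; after the heavy edges are cut, each remaining tree is a feature cluster.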


Citation
K.SWATHI,B.RANJITH."High Dimensional Data Clustering Based On Feature Selection Algorithm". International Journal of Computer Engineering In Research Trends (IJCERT) ,ISSN:2349-7084 ,Vol.1, Issue 06,pp.379-383, DECEMBER - 2014, URL :https://ijcert.org/ems/ijcert_papers/V1I65.pdf,


Keywords: Feature subset selection, filter method, feature clustering, graph-based clustering, Kruskal's algorithm.



DOI Link: NOT ASSIGNED

Download: V1I65.pdf

