Classified Average Precision for Retrieving Information Using Feedback Sessions

Bhuvaneswari S., Shobana P., Vaishnavi V., Ramya P.

Published in International Journal of Advanced Research in Computer Science Engineering and Information Technology

ISSN: 2321-3337 | Impact Factor: 1.521 | Volume: 3 | Issue: 2 | Date: 10 March 2014 | Pages: 375-383


Abstract

Information surfing is one of the vital activities in today's world: users issue queries to the internet to clarify information they are uncertain about. Search engines, however, often fail to return the information the user actually requires and do not fulfill the request completely. Hence it is necessary to infer and mine user-specific interests in a topic. Since users collect the required information through a search engine, the user's search goals must be analyzed in order to provide the best results. In the proposed approach, feedback sessions are clustered to discover the distinct user search goals for a query, with pseudo-documents built to represent the sessions, and the inferred search goals are evaluated using the Classified Average Precision (CAP) algorithm.
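The abstract names two concrete steps: representing each feedback session as a pseudo-document and scoring a candidate grouping of results with Classified Average Precision. The sketch below is illustrative only; the session format (clicked and skipped result titles), the term weighting, and the CAP form VAP × (1 − Risk) are assumptions made for this example, not necessarily the paper's exact formulation.

```python
# Illustrative sketch only: the session format, term weighting, and the
# CAP form VAP * (1 - Risk) are assumptions, not the paper's exact method.
from collections import Counter
from itertools import combinations
import math


def pseudo_document(clicked_titles, skipped_titles):
    """Turn one feedback session into a term-weight vector (a 'pseudo-document'):
    clicked result titles add weight, skipped ones subtract a smaller weight."""
    vec = Counter()
    for title in clicked_titles:
        for term in title.lower().split():
            vec[term] += 1.0
    for title in skipped_titles:
        for term in title.lower().split():
            vec[term] -= 0.5
    return vec


def cosine(a, b):
    """Similarity between two pseudo-documents; this is what a clustering step
    (omitted here for brevity) would use to group feedback sessions."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def average_precision(ranked_urls, clicked):
    """Standard AP over a ranked list, treating clicked URLs as relevant."""
    hits, score = 0, 0.0
    for rank, url in enumerate(ranked_urls, start=1):
        if url in clicked:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0


def classified_average_precision(clusters, clicked):
    """CAP-style score: AP of the cluster that received the most clicks (VAP),
    discounted by the risk that clicked URLs were split across clusters."""
    voted = max(clusters, key=lambda c: sum(url in clicked for url in c))
    vap = average_precision(voted, clicked)
    pairs = list(combinations(sorted(clicked), 2))
    split = sum(1 for u, v in pairs
                if not any(u in c and v in c for c in clusters))
    risk = split / len(pairs) if pairs else 0.0
    return vap * (1.0 - risk)
```

As a small worked example under these assumptions: with clusters [["u1", "u2"], ["u3"]] and clicked = {"u1", "u3"}, the voted class is the first cluster, its AP against the clicks is 1.0, but the only clicked pair is split across clusters, so the risk is 1.0 and CAP is 0; a grouping that keeps the clicked URLs together would score higher.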

Keywords

user search goals, feedback sessions, pseudo-documents, classified average precision
