Information retrieval on content based searching using Hidden Markov Model

 

Khyati Sethi


Kanika Sharma

Jayant Gulati

Parul Yadav

Student, Department of Information Technology, Bharati Vidyapeeth College of Engineering, New Delhi, India

Student, Department of Information Technology, Bharati Vidyapeeth College of Engineering, New Delhi, India

Student, Department of Information Technology, Bharati Vidyapeeth College of Engineering, New Delhi, India

Assistant professor, Department of Information Technology, Bharati Vidyapeeth College of Engineering, New Delhi, India

Email Id: [email protected]

Email Id: [email protected]

Email Id: [email protected]

Email Id: [email protected]

Abstract: Data analysis is used to model data with the aim of discovering useful information. Information is retrieved, and a Hidden Markov Model is incorporated to identify the relevance of a document. Relevance, evaluation, and information needs are the key issues associated with the analysis of data and the retrieval of information. Relevance is the relational value, within a dataset, of an input supplied by the user in the form of a query. This relational value of a document is normally based on a document ranking algorithm. Such algorithms explicitly define how applicable a document is to a user's query by defining and using functions that relate interconnections between the query provided and the documents indexed. The data within the system needs to be accessible effortlessly and in a manner that is convenient for, and appreciated by, the user. Retrieving a large amount of information might be inconvenient in certain systems; simultaneously, in other systems, not returning all relevant information may be unacceptable. After ascertaining the relevance of the retrieved data using the Hidden Markov Model, we employ measures such as precision and recall to estimate and analyse the model.

Keywords: Hidden Markov Model, information retrieval, relevance, precision, recall

I. INTRODUCTION

Data analysis is the process of inspecting, transforming, and modelling data with the goal of discovering useful information and drawing conclusions; it supports decision making. Information is first retrieved using various IR methods, and the data is then analysed.

An IR system usually evaluates relevance by comparing some representation of each document against the query. There are various models for representing documents and queries, and each model has its pros and cons.

Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names in different business, science, and social science domains. Data analysis is closely related to the visualization and dissemination of data, and is sometimes referred to as data modelling.

Information retrieval refers to the task of obtaining, from a collection of information resources, those resources relevant to an information need. Searches can be based on metadata or on full-text (or other content-based) indexing.

Hidden Markov models have been successfully designed and implemented over the last two decades for a wide variety of speech- and language-related recognition problems, including speech recognition, named entity finding, optical character recognition, and topic identification [1]. In the present work, we describe an application of this technology to the problem of ad hoc IR [2]. In all HMM applications, the observed data is modelled as the output produced by passing an unknown key through a noisy channel. In the ad hoc retrieval problem, the observed data is the query, and the unknown key is a desired relevant document. Thus, for each document we can compute the probability that it was the relevant document in the user's mind, given the query, and we then rank the documents by this measure.

II. EXISTING MODEL

Data mining is a distinct data analysis technique that focuses not on purely descriptive purposes but on modelling and knowledge discovery for predictive purposes. Data analysis that relies heavily on aggregation and aims at business information falls under business intelligence. Customer data and IT tools form the substructure on which a successful CRM strategy is built. The rapid expansion of the web and related technologies has also substantially extended the number of marketing opportunities and altered the way relationships between companies and their customers are balanced and managed [3].

Predictive analytics aims at applying statistical models for estimation or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources.

Retrieving information from the web involves handling both the abstractness and the volume of data on the internet. Factors such as word ambiguity and the large number of typographical errors make the task harder still; on average, an estimated one word in every two hundred on a web site contains a textual error. A variety of key pitfalls surround IR: relevance, evaluation, and information needs.

However, this is not the complete set of issues in IR. Common information retrieval problems also include scalability and the frequency of paging updates. Relevance is the relational value, within a dataset, of an input supplied by the user as a query; it is generally based on a document ranking algorithm.

The larger complications of web information retrieval, namely relevance and evaluation, remain significant subjects that require attention, among others.

In the Boolean model, documents and queries are collections of terms, and every term within a document is indexed; 1 and 0 denote the presence and absence of a term in a text source respectively [4], [5]. An inverted index of every term must be maintained in order to match documents against queries. However, the Boolean model has some major limitations. Its binary decision criterion admits no notion of a grading scale, and it tends to overload the user with documents. Some researchers have worked to overcome these weaknesses by building improvements on the existing model, and others have approached the problem with a different search strategy known as the Vector Space Model [5].
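To make the Boolean model concrete, here is a minimal sketch (the toy corpus and helper names are our own illustration, not the paper's implementation) of a binary inverted index answering an AND query:

# Minimal sketch of Boolean retrieval over a toy corpus (illustrative only).
docs = {
    1: "hidden markov models for retrieval",
    2: "boolean retrieval with inverted index",
    3: "markov chains and random processes",
}

# Build an inverted index: term -> set of ids of documents containing it.
inverted = {}
for doc_id, text in docs.items():
    for term in text.split():
        inverted.setdefault(term, set()).add(doc_id)

def boolean_and(*terms):
    """Return ids of documents containing every query term (1 = present)."""
    sets = [inverted.get(t, set()) for t in terms]
    return set.intersection(*sets) if sets else set()

print(boolean_and("markov", "retrieval"))  # -> {1}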

The Vector Space Model represents documents and queries internally as vectors. In this model, every query and document is represented as a vector in a |V|-dimensional space, where V, the vocabulary, is the set of all distinct terms in the document collection [5].
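As a small illustration of this model (the toy documents and query are ours, and raw term frequencies stand in for more refined weights), each text becomes a |V|-dimensional vector, and documents are ranked by cosine similarity to the query:

import math
from collections import Counter

# Toy vector space model: each text becomes a |V|-dimensional
# term-frequency vector over the vocabulary V of the collection.
docs = ["hidden markov models", "vector space retrieval model", "markov model ranking"]
query = "markov model"

vocab = sorted({t for d in docs for t in d.split()})

def vectorize(text):
    counts = Counter(text.split())
    return [counts[t] for t in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

q = vectorize(query)
for doc in docs:
    print(f"{cosine(q, vectorize(doc)):.3f}  {doc}")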

Markov processes were first proposed by the Russian mathematician Andrei Markov. In probability theory, a Markov model is a stochastic model used to describe systems that change randomly. It assumes that future states depend only on the present state, not on the sequence of events that preceded it [1], [2], [6].
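For illustration (the states and probabilities below are our own toy example), a Markov chain samples its next state from a distribution conditioned only on the current state:

import random

# Toy two-state Markov chain: the next state depends only on the current one.
transitions = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def step(state):
    """Sample the next state given only the current state (Markov property)."""
    outcomes, probs = zip(*transitions[state].items())
    return random.choices(outcomes, weights=probs)[0]

state = "sunny"
chain = [state]
for _ in range(5):
    state = step(state)
    chain.append(state)
print(" -> ".join(chain))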

There exist four kinds of Markov models, used in different situations depending on the degree to which each sequential state is observable.

III. MODEL TO BE IMPLEMENTED

A hidden Markov model (HMM) is a statistical Markov model in which the system being modelled is assumed to be a Markov process with hidden, i.e. unobserved, states. An HMM can be viewed as the simplest dynamic Bayesian network [7].

To measure the effectiveness of ad hoc information retrieval in the standard way, we require a test collection consisting of three things:

• A collection of documents

• A test suite of information needs, expressed as a set of queries

• A set of relevance judgments, standardly a binary assessment of each query-document pair as either relevant or irrelevant

In the past, researchers have used the following parameters to evaluate the performance of IR systems:

1. Precision: the fraction of retrieved documents that are relevant. In practice, it measures the accuracy of the result.

Precision = |Ra| / |A|

2. Recall: the fraction of all relevant documents that are retrieved. In practice, it measures the coverage of the result.

Recall = |Ra| / |R|

 

Where,

Ra: set of relevant documents retrieved

A: set of all documents retrieved

R: set of all relevant documents

 

In pattern recognition and in IR with binary classification, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved. Both precision and recall are therefore built on an understanding and measure of relevance.
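Both measures follow directly from the judgment sets defined above; a small sketch with made-up sets of retrieved and relevant document ids:

# Sketch of precision/recall evaluation on made-up judgment sets.
retrieved = {1, 2, 3, 5, 8}          # A: documents returned by the system
relevant = {2, 3, 4, 8, 9, 10}       # R: all documents judged relevant

ra = retrieved & relevant            # Ra: relevant documents retrieved

precision = len(ra) / len(retrieved) # |Ra| / |A|
recall = len(ra) / len(relevant)     # |Ra| / |R|
print(f"precision={precision:.2f} recall={recall:.2f}")  # 0.60 and 0.50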

A. Language used: Python

Python provides constructs that enable clear programs at both small and large scale. Its features include a dynamic type system and automatic memory management, and it has a large, comprehensive standard library.

Python's large standard library provides tools suited to numerous tasks. It includes modules for creating GUIs, connecting to relational databases, generating pseudorandom numbers, performing decimal arithmetic with arbitrary precision, and manipulating regular expressions, and it supports unit testing.

B. Dataset used

The OHSUMED test collection is a set of 348,566 references from MEDLINE, the on-line database of medical information available on the World Wide Web. The available fields in the database are title, abstract, MeSH indexing terms, author, and source.
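As a rough sketch of reading such records (the tag-to-field mapping below follows the commonly distributed OHSUMED format and should be verified against the actual collection files):

# Rough sketch of parsing one OHSUMED record. The tag meanings are assumed
# from the commonly distributed format (.T title, .W abstract, .M MeSH terms,
# .A authors, .S source); verify against the actual files.
TAGS = {".T": "title", ".W": "abstract", ".M": "mesh", ".A": "authors", ".S": "source"}

def parse_record(lines):
    record, field = {}, None
    for line in lines:
        line = line.rstrip("\n")
        if line in TAGS:            # a tag line announces the next field
            field = TAGS[line]
        elif field is not None:     # the following line carries its value
            record[field] = line
            field = None
    return record

sample = [".T", "Some article title", ".W", "Abstract text here.", ".M", "Anemia; Iron."]
print(parse_record(sample))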

The existing OHSUMED topics define real information needs. However, the relevance judgments do not have the same coverage as those produced by the TREC pooling process. The information needs are not expressed directly by MeSH, but MeSH terms govern the indexing. The topic statements are provided in the standard TREC format and include only the <title> and <desc> fields.

The relevance-judgment files, described below, simulate human judgment and contain a 0 or 1 for every MeSH term expressed in the filtering of any given topic.

(1) OHSUMED relevance judgments (files: qrels.ohsu.*)

Each query was replicated by four searchers: two physicians experienced in searching and two medical librarians. A completely different group of physicians judged the results for relevance on a three-point scale: definitely, possibly, or not relevant. All documents judged either definitely or possibly relevant are considered relevant.

(2) MeSH relevance judgments (files: qrels.mesh.*)

A document is considered relevant to a MeSH topic if the topic's concept is included in its list of MeSH term fields.

C. Whoosh: Python Library

Whoosh was created by Matt Chaput.

• Whoosh is fast but uses only pure Python, so it runs anywhere Python runs, without requiring a compiler.

• Whoosh uses the Okapi BM25F ranking function by default, but this can easily be changed.

• Whoosh creates fairly small indexes compared to many other search libraries.

• All indexed text in Whoosh must be Unicode.

Whoosh lets you index free-form or structured text and then quickly find matching documents based on simple or complex search criteria.

Whoosh provides some predefined field types:

whoosh.fields.TEXT: indexes the text and stores term positions, which allows phrase searching.

whoosh.fields.ID: indexes the entire value of the field as a single unit rather than breaking it into separate terms.

whoosh.fields.STORED: neither indexed nor searchable; useful for displaying information to the user in the search results.

whoosh.fields.KEYWORD: indexed and searchable; designed for comma- or space-separated words.

whoosh.fields.NUMERIC: stores int, long, or floating-point numbers in a compact, sortable format.

whoosh.fields.BOOLEAN: indexes boolean values and lets users search for results like true, false, 1, 0, t, f, yes, no.

whoosh.fields.DATETIME: stores datetime objects in a compact, highly sortable format.

A Format object defines what kind of information a field records about each term and how that information is stored on disk. For example, the Existence format stores only the fact that a term appears in a document, whereas the Positions format also records each position at which the term occurs.

To index a field, the indexing code passes the Unicode string to the field's Format object. The Format object calls an analyzer, which breaks the string into tokens, and then encodes information about each token.

The inverted index maps terms to the documents in which they appear. Sometimes it is also useful to store a term vector (a forward index) that maps each document to the terms that occur in it.
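To make these field types concrete, a small illustrative schema for OHSUMED-like records might look as follows (the field names are our own, not taken from the paper):

from whoosh.fields import Schema, TEXT, ID, KEYWORD, STORED

# Illustrative schema: title and abstract are searchable text, the document
# id is indexed as one unit, MeSH terms are comma-separated keywords, and
# the source is stored only for display in results.
schema = Schema(
    docid=ID(stored=True, unique=True),
    title=TEXT(stored=True),
    abstract=TEXT,
    mesh=KEYWORD(commas=True, scorable=True),
    source=STORED,
)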
D. Creating an Index Object

To open an existing index in a directory, use index.open_dir:

import whoosh.index as index
ix = index.open_dir("indexdir")

To create an index in a directory, use index.create_in:

import os, os.path
from whoosh import index

if not os.path.exists("indexdir"):
    os.mkdir("indexdir")
ix = index.create_in("indexdir", schema)

The schema with which the index is created is stored with the index itself. Multiple indexes can be kept in the same directory by using the indexname keyword.

Using the convenience functions:

ix = index.create_in("indexdir", schema=schema, indexname="usages")
ix = index.open_dir("indexdir", indexname="usages")

Using a Storage object (an existing whoosh.filedb.filestore storage instance):

ix = storage.create_index(schema, indexname="usages")
ix = storage.open_index(indexname="usages")

The relevance of documents computed with the Hidden Markov Model is compared against the tf.idf approach. Tf.idf is a numerical statistic used in vector models; it reflects how important a word is to a document in a corpus and is often used as a weighting factor in IR and data mining. The tf.idf value grows in proportion to the frequency with which a word appears in the document, but it is offset by the frequency of the word in the corpus, which compensates for the fact that some words appear more frequently in general.

For the implementation, the first step is to design the schema, after which indexing is performed [5]. Then tf.idf values are calculated using the Whoosh library in Python. For the HMM computation, the observed data is taken to be the query Q, and the unknown key is the desired relevant document D. The noisy channel is the mind of the user, who has some precise or rough notion of the documents he requires and transforms that notion into the query text Q. Hence, for each document D we compute the probability that it was the relevant document in the user's mind, given that Q was the query produced, i.e. P(D is R | Q), and we rank the documents by this measure [6]. This can be expressed with graphs: graphical structures that represent information about an uncertain domain, in which each node represents a random variable and the edges denote the probabilistic dependencies between the random variables [8].

The term "hidden" signifies that an observer can view only the output states but cannot see the underlying sequence of states and transitions by which the output is generated [9].

P(q | D) is the output distribution of a document D, set to the sample distribution of the words appearing in that document. For any document Dk, we can explicitly set

P(q | Dk) = (number of times q appears in Dk) / (length of Dk),

which is the distribution that has the maximum probability of producing Dk itself. The output distribution of the "General English" state, which models query words drawn from general usage rather than from the document, is estimated by

P(q | GE) = (number of times q appears in the corpus) / (total length of the corpus),

where the sums are taken over all documents in the corpus. Using the parameters estimated above, with transition probability a0 into the "General English" state and a1 into the document state, the formula for P(Q | Dk is R) is

P(Q | Dk is R) = product over all q in Q of ( a0 · P(q | GE) + a1 · P(q | Dk) ).
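A minimal sketch of this two-state scoring over a toy corpus (the transition weights a0 and a1 below are illustrative values, not tuned ones):

# Sketch of the two-state HMM scoring described above: each query word is
# generated either by the "General English" state or by the document state.
docs = {
    "d1": "hidden markov models for information retrieval".split(),
    "d2": "vector space retrieval with tf idf weighting".split(),
}
corpus = [w for words in docs.values() for w in words]
A0, A1 = 0.7, 0.3   # illustrative transition probabilities, a0 + a1 = 1

def p_general_english(q):
    return corpus.count(q) / len(corpus)      # P(q | GE)

def p_doc(q, words):
    return words.count(q) / len(words)        # P(q | Dk)

def score(query, words):
    # P(Q | Dk is R) = product over q in Q of (a0*P(q|GE) + a1*P(q|Dk))
    p = 1.0
    for q in query.split():
        p *= A0 * p_general_english(q) + A1 * p_doc(q, words)
    return p

for name, words in sorted(docs.items(), key=lambda kv: -score("markov retrieval", kv[1])):
    print(name, score("markov retrieval", words))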
IV. ADVANTAGES

1. Hidden Markov models (HMMs) provide a formal foundation for building probabilistic models of linear sequence-labelling problems. They supply a conceptual toolkit from which complex models can be built just by drawing an intuitive picture, and they are at the heart of a diverse set of programs, including gene finding, multiple sequence alignment, profile searches, and regulatory-site identification.

2. An HMM is a fully probabilistic model: the overall 'scores' generated for sequences and the calculated parameters are all probabilities [6], [9]. Bayesian probability theory can therefore be used to manipulate these numbers in more powerful ways, including optimizing parameters and interpreting the significance of scores [5].

3. HMMs are useful for modelling processes that contain different stages occurring in definite orders [9]. If, for example, you want to model the behaviour of a technical system that first boots, then operates, then enters sleep mode, and later alternates between sleep and operation, you might use three states (boot, operate, sleep) and this process model to infer what is going on in the system at any given time. The same applies to a human biological system, where the observations might be a person's sequence of symptoms. The Human Genome Project likewise relies on HMMs for DNA sequencing and RNA structure analysis [10].

V. CHALLENGES

Complications such as scalability and the frequency of paging updates are familiar IR issues. Ranking algorithms are implemented using methods that elucidate the relations between the given query and the accumulated documents. Another issue is that all feedback produced by an information retrieval system has to be evaluated, and the way the system behaves may or may not meet the user's expectations: not all documents returned may be relevant to a given query.

The way a user interacts with the IR system is captured by the notion of information needs. In some systems, retrieving a large amount of information might be disruptive; in others, failing to return the complete set of relevant data may be unacceptable.

In practice, handling voluminous information from the internet can be extremely difficult because of the very large number of documents a server manages. A simple retrieval query can return thousands of documents, many of them only loosely related to the original retrieval criteria. To deal with this, an IR system needs efficient query management and the ability to weight documents so that those closest in relevance to the user's query are given priority.

VI. CONCLUSION

In simple terms, high precision means that an algorithm returns considerably more relevant results than irrelevant ones, while high recall means that an algorithm returns most of the relevant results.

To compare the HMM with the traditionally used model, indexing and searching were performed, searching was applied to multi-word queries, and successful results were generated.
Following this, tf.idf values were computed and their precision compared with the ranked HMM values.

In the analysis comparing the tf.idf model with the HMM, we find that the precision of the HMM is greater than that of the values generated by tf.idf. Thus, the HMM is capable of retrieving more relevant data than tf.idf does.

REFERENCES

[1] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," in Readings in Speech Recognition, Morgan Kaufmann Publishers Inc., San Francisco, ISBN 1-55860-124-4.

[2] P. Blunsom, "Hidden Markov Models," August 19, 2004.

[3] E. W. T. Ngai, Li Xiu, and D. C. K. Chau, "Application of data mining techniques in customer relationship management," Expert Systems with Applications: An International Journal, vol. 36, no. 2, March 2009.

[4] C. C. Aggarwal and C. Zhai, "A survey of text classification algorithms," in Mining Text Data.

[5] Y. Gupta, A. Saini, and A. K. Saxena, "A review on important aspects of information retrieval," World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering, vol. 7, no. 12, 2013.

[6] M. Stamp, "A revealing introduction to hidden Markov models," Department of Computer Science, San Jose State University, September 28, 2012.

[7] Z. Ghahramani, "An introduction to hidden Markov models and Bayesian networks," in Hidden Markov Models, World Scientific Publishing Co., Inc., River Edge, NJ, USA, 2002, ISBN 981-02-4564-5.

[8] I. Ben-Gal, "Bayesian networks," in Encyclopedia of Statistics in Quality and Reliability, John Wiley & Sons, Ltd., 2007.

[9] E. Fosler-Lussier, "Markov models and hidden Markov models: a brief tutorial," International Computer Science Institute, TR-98-041, December 1998.

[10] R. Durbin, S. R. Eddy, A. Krogh, and G. J. Mitchison, Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids, Cambridge University Press, Cambridge, UK, 1998.