Adapted from Lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)
L10: Evaluation
Prasad
This lecture
- Results summaries: making our good results usable to a user
- Evaluating a search engine: benchmarks, precision and recall
Result Summaries
Having ranked the documents matching a query, we wish to present a results list: most commonly, a list of the document titles plus a short summary, a.k.a. "10 blue links".
Summaries
The title is typically automatically extracted from document metadata. What about the summaries?
This description is crucial: users identify good/relevant hits based on it.
A static summary of a document is always the same, regardless of the query that hit the doc. A dynamic summary is a query-dependent attempt to explain why the document was retrieved for the query at hand.
Static summaries
- Simplest heuristic: the first 50 or so words of the document (the exact count can be varied).
- Simple NLP heuristics score each sentence; the summary is made up of the top-scoring sentences.
Dynamic summaries
- Present one or more windows within the document that contain several of the query terms.
- If the query is found as a phrase, show all or some occurrences of the phrase in the doc; if not, show document windows that contain multiple query terms.
- The summary itself gives the entire content of the window: all terms, not only the query terms.
- E.g., the positional index says the query occurs as a phrase at position 4378, so we go to this position in the cached document and stream out the content. (Note: the cached copy can be outdated.)
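As a rough illustration of the window-selection step, here is a minimal sketch (not the algorithm of any particular engine; the helper name and window size are my own choices): score each fixed-size window of the cached document by how many distinct query terms it contains and return the best one.

```python
def best_snippet(doc_text, query_terms, window=40):
    """Return the window of `window` consecutive tokens that contains
    the most distinct query terms -- a crude dynamic summary."""
    tokens = doc_text.split()
    terms = {t.lower() for t in query_terms}
    best_start, best_score = 0, -1
    for start in range(max(1, len(tokens) - window + 1)):
        hits = {tok.lower() for tok in tokens[start:start + window]} & terms
        if len(hits) > best_score:
            best_start, best_score = start, len(hits)
    return " ".join(tokens[best_start:best_start + window])

print(best_snippet("the red wine study found heart attack risk was lower than with white wine",
                   ["red", "wine", "heart", "attack"], window=8))
```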
Dynamic summaries
- The real estate for the summary is normally small and fixed.
- Want snippets to be long enough to be useful.
- Want linguistically well-formed snippets.
- Want snippets maximally informative about the doc.
- But users really like snippets, even if they complicate IR system design.
An active area of HCI research. An alternative: http://www.searchme.com/ copies the idea of Apple's Cover Flow for search results.
Measures for a search engine
- Number of documents indexed per hour (for a given average document size)
- Latency as a function of index size
- Ability to express complex information needs
- Speed on complex queries
- Uncluttered UI
- Is it free?
All of the preceding criteria are measurable: we can quantify speed and size, and we can make expressiveness precise. The key measure, however, is user happiness.
- What is user happiness? Speed of response and size of index are factors.
- But blindingly fast, useless answers won't make a user happy.
IR Performance Evaluation
How relevant is the answer set? (required to establish functional correctness, e.g., through benchmarks)
Who is the user we are trying to make happy?
- Web engine: the user finds what they want and returns to the engine.
- eCommerce site: the user finds what they want and makes a purchase.
- Is it the end user, or the eCommerce site, whose happiness we measure? Do we measure time to purchase, or the fraction of searchers who become buyers?
- How much time do my users save when looking for information?
- Many other criteria having to do with breadth of access, secure access, etc.
Evaluating an IR system
- Note: the information need is translated into a query.
- Relevance is assessed relative to the information need, not the query.
- E.g., information need: "I'm looking for information on whether drinking red wine is more effective at reducing heart attack risk than white wine." Query: "wine red white heart attack effective".
- You evaluate whether the doc addresses the information need, not whether it contains these words.
Relevance, from a human standpoint, is:
- Subjective: depends upon a specific user's judgment.
- Situational: relates to the user's current needs.
- Cognitive: depends on human perception and behavior.
- Dynamic: changes over time.
Standard relevance benchmarks
- TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years.
- Reuters and other benchmark doc collections are also used.
- "Retrieval tasks" are specified, sometimes as queries.
- Human experts mark, for each query and for each doc, Relevant or Nonrelevant (or at least for the subset of docs that some system returned for that query).
Precision: fraction of retrieved docs that are relevant = P(relevant | retrieved)
Recall: fraction of relevant docs that are retrieved = P(retrieved | relevant)
                 Relevant    Nonrelevant
Retrieved        tp          fp
Not Retrieved    fn          tn

Precision P = tp / (tp + fp)
Recall    R = tp / (tp + fn)
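A minimal sketch of these definitions in code, assuming the tp/fp/fn counts of the contingency table above are already known (the function name and zero-denominator handling are my own choices):

```python
def precision_recall(tp, fp, fn):
    """Precision = tp/(tp+fp), Recall = tp/(tp+fn).
    Returns 0.0 when a denominator is zero (see the fallout discussion later)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# e.g. 4 relevant among 6 retrieved, with 2 relevant docs missed:
print(precision_recall(tp=4, fp=2, fn=2))   # (0.666..., 0.666...)
```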
Precision
- The ability to retrieve top-ranked documents that are mostly relevant.

Recall
- The ability of the search to find all of the relevant items in the corpus.
Why not just use accuracy? Given a query, an engine classifies each doc as Relevant or Nonrelevant; the accuracy of the engine is the fraction of these classifications that are correct. But almost all docs are nonrelevant to any given query, so an engine that labels every doc Nonrelevant (retrieving nothing) achieves near-perfect accuracy. People doing information retrieval want to find something, and have a certain tolerance for junk.
Precision/Recall
- You can get high recall (but low precision) by retrieving all docs for all queries!
- In a good system, precision decreases as either the number of docs retrieved or recall increases.
Trade-offs
[Figure: precision (0 to 1) vs. recall trade-off. A high-precision system "returns relevant documents but misses many useful ones too"; "the ideal" sits at precision = 1, recall = 1.]
Difficulties in using precision/recall
- Should average over a large document collection / query ensemble.
- Need human relevance assessments, and people aren't reliable assessors.
- Nuanced assessments?
- Results may not translate from one domain to another.
A combined measure: F
The balanced F measure (harmonic mean of precision and recall):

F = 2PR / (P + R) = 2 / (1/P + 1/R)
The general (parameterized) form, the E measure:

E = (1 + β²) P R / (β² P + R) = (1 + β²) / (β²/R + 1/P)

- β = 1: equally weight precision and recall (E = F).
- β > 1: weight recall more.
- β < 1: weight precision more.
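A small sketch of this weighted measure (the function name and the zero-division guard are my own choices; β = 1 reduces to the balanced F above):

```python
def f_measure(p, r, beta=1.0):
    """Weighted harmonic mean of precision and recall.
    beta = 1 gives the balanced F1; beta > 1 weights recall more."""
    if p == 0 and r == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * p * r / (b2 * p + r)

print(f_measure(0.5, 0.5))            # 0.5
print(f_measure(0.2, 0.8, beta=2.0))  # recall-heavy weighting -> 0.5
```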
Breakeven Point
[Chart: "Recall vs Precision and F1", i.e., precision and F1 plotted against recall.]
The breakeven point is the point where precision equals recall. It is an alternative single measure of IR effectiveness. How do you compute it?
The system can return any number of results. By taking various numbers of the top returned documents (levels of recall), the evaluator can produce a precision-recall curve.
Example: after the top 6 documents, R = 4/6 = 0.667 and P = 4/6 = 0.667; after the top 13, R = 5/6 = 0.833 and P = 5/13 = 0.38. One relevant document is never retrieved, so the system never reaches 100% recall.
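A sketch of where these numbers come from: walk down a ranked list and record recall and precision after each retrieved document. The 0/1 judgments below are a hypothetical ranking with 6 relevant docs in the collection and relevant hits at ranks 1, 2, 4, 6, and 13, matching the example above.

```python
def precision_recall_points(relevance, total_relevant):
    """relevance: list of 0/1 judgments for the ranked results.
    Returns (recall, precision) after each retrieved document."""
    points, rel_seen = [], 0
    for k, rel in enumerate(relevance, start=1):
        rel_seen += rel
        points.append((rel_seen / total_relevant, rel_seen / k))
    return points

# hypothetical ranking: 6 relevant docs exist; hits at ranks 1, 2, 4, 6, 13
ranking = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1]
for r, p in precision_recall_points(ranking, total_relevant=6):
    print(f"R={r:.3f}  P={p:.3f}")
```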
A precision-recall curve
[Figure: a precision-recall curve, precision (0.0-1.0) on the y-axis against recall (0.0-1.0) on the x-axis.]
Standard recall levels: rj ∈ {0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0}, i.e., r0 = 0.0, r1 = 0.1, ..., r10 = 1.0.
The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level at or above the j-th level: P(rj) = max { P(r) : r ≥ rj }.
- Typically average performance over a large set of queries.
- Compute the average precision at each standard recall level across all queries.
- Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.
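A sketch of 11-point interpolated average precision for one query, using (recall, precision) pairs such as those produced by the earlier precision_recall_points sketch (the helper name is my own):

```python
def eleven_point_interpolated_ap(pr_points):
    """pr_points: (recall, precision) pairs for one query.
    Interpolated precision at recall level r is the max precision at any
    recall >= r; average the 11 levels r = 0.0, 0.1, ..., 1.0
    (the value at recall 0 is always interpolated)."""
    levels = [i / 10 for i in range(11)]
    interp = []
    for level in levels:
        candidates = [p for r, p in pr_points if r >= level]
        interp.append(max(candidates) if candidates else 0.0)
    return sum(interp) / len(levels)

# precision/recall points for a toy query:
points = [(0.2, 1.0), (0.4, 0.67), (0.6, 0.5), (0.8, 0.44), (1.0, 0.38)]
print(eleven_point_interpolated_ap(points))
```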
Graphs are good, but people want summary measures!

Precision at a fixed retrieval level
- Precision-at-k: precision of the top k results.
- Perhaps appropriate for most web search: all people want are good matches on the first one or two results pages.
- But: it averages badly and has an arbitrary parameter k.

11-point interpolated average precision
- The standard measure in the early TREC competitions: take the precision at 11 levels of recall varying from 0 to 1 by tenths, using interpolation (the value for recall 0 is always interpolated!), and average them.
- Evaluates performance at all recall levels.
11-point precisions
[Figure: 11-point interpolated precision plotted against recall level.]
- True positive rate = tp/(tp + fn) = recall = sensitivity.
- False positive rate = fp/(fp + tn). Related to precision: fpr = 0 exactly when p = 1 (fp = 0 implies precision = tp/tp = 1).
- Why is the blue line worthless?
Mean Average Precision (MAP)
- MAP for a query: the average of the precision values at each of the (top k) relevant documents retrieved.
- This approach weights early appearance of a relevant document over later appearance.
- MAP for a query collection is the arithmetic average of the average precision for each query (macro-averaging: each query counts equally).
Average Precision
- Summarize rankings from multiple queries by averaging average precision.
- The most commonly used measure in research papers.
- Assumes the user is interested in finding many relevant documents for each query.
- Requires many relevance judgments in the text collection.
MAP
- Given that n docs are retrieved, compute the precision at each rank where a (new) relevant document is retrieved: p(1), ..., p(k), if we have k relevant docs.
- E.g., if the first relevant doc is at rank 2, then p(1) = 1/2.
- If a relevant document never gets retrieved, we take the precision corresponding to that relevant doc to be zero.
- Compute the average over all the relevant documents: average precision = (p(1) + ... + p(k)) / k.
(contd)
- This gives us (non-interpolated) average precision, which captures both precision and recall and is sensitive to the rank of each relevant document.

Mean Average Precision (MAP)
- MAP = arithmetic mean of average precision over a set of topics.
- gMAP = geometric mean of average precision over a set of topics (more affected by difficult topics).
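A sketch of (non-interpolated) average precision and MAP as just described (the function names are my own, not a standard library API):

```python
def average_precision(relevance, total_relevant):
    """Non-interpolated AP: mean of the precision values at the ranks where
    relevant documents appear; unretrieved relevant docs contribute zero."""
    precisions, rel_seen = [], 0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            rel_seen += 1
            precisions.append(rel_seen / k)
    return sum(precisions) / total_relevant

def mean_average_precision(runs):
    """runs: list of (relevance_list, total_relevant) pairs, one per query."""
    return sum(average_precision(r, n) for r, n in runs) / len(runs)

# first relevant doc at rank 2, second at rank 5, one relevant doc never found:
print(average_precision([0, 1, 0, 0, 1], total_relevant=3))  # (1/2 + 2/5)/3 = 0.3
```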
Discounted Cumulative Gain (DCG)
- A popular measure for evaluating web search and related tasks.
- Two assumptions: highly relevant documents are more useful than marginally relevant documents, and the lower the ranked position of a relevant document, the less useful it is for the user, since it is less likely to be examined.
- Uses graded relevance as a measure of usefulness, or gain, from examining a document.
- Gain is accumulated starting at the top of the ranking and may be reduced, or discounted, at lower ranks.
- The typical discount is 1/log(rank): with base 2, the discount at rank 4 is 1/2, and at rank 8 it is 1/3.
What if relevance judgments are on a scale of [0, r], with r > 2?

Cumulative Gain (CG) at rank n:
- Let the ratings of the n documents be r1, r2, ..., rn (in ranked order).
- CG = r1 + r2 + ... + rn

Discounted Cumulative Gain (DCG) at rank n:
- DCG = r1 + r2/log2(2) + r3/log2(3) + ... + rn/log2(n)

Alternative formulation (a common exponential-gain variant):
- DCG = sum over i of (2^ri - 1) / log2(1 + i)
- Used by some web search companies; the emphasis is on retrieving highly relevant documents.
DCG Example
- 10 ranked documents judged on a 0-3 relevance scale: 3, 2, 3, 0, 0, 1, 2, 2, 3, 0
- Discounted gain: 3, 2/1, 3/1.59, 0, 0, 1/2.59, 2/2.81, 2/3, 3/3.17, 0 = 3, 2, 1.89, 0, 0, 0.39, 0.71, 0.67, 0.95, 0
DCG:
3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61
Normalized Discounted Cumulative Gain (NDCG) at rank n
- Normalize the DCG at rank n by the DCG value at rank n of the ideal ranking.
- The ideal ranking first returns the documents with the highest relevance level, then the next highest relevance level, and so on.
- NDCG is now quite popular in evaluating web search.
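A sketch of DCG and NDCG using the 1/log2(rank) discount defined above, with the rank-1 document left undiscounted as in the examples in this lecture (function names are my own):

```python
from math import log2

def dcg(gains):
    """DCG with the 1/log2(rank) discount; rank 1 is undiscounted."""
    return sum(g if i == 1 else g / log2(i) for i, g in enumerate(gains, start=1))

def ndcg(gains):
    """Normalize by the DCG of the ideal (sorted) ordering of the same gains."""
    ideal = dcg(sorted(gains, reverse=True))
    return dcg(gains) / ideal if ideal else 0.0

print(dcg([3, 2, 3, 0, 0, 1, 2, 2, 3, 0]))  # ~9.61, as in the DCG example above
print(ndcg([2, 1, 2, 0]))                   # ~0.92, ranking function 2 below
```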
NDCG - Example
4 documents: d1, d2, d3, d4
        Ground Truth         Ranking Function 1    Ranking Function 2
 i      Doc order   ri       Doc order   ri        Doc order   ri
 1      d4          2        d3          2         d3          2
 2      d3          2        d4          2         d2          1
 3      d2          1        d2          1         d4          2
 4      d1          0        d1          0         d1          0

DCG_GT  = 2 + 2/log2(2) + 1/log2(3) + 0/log2(4) = 4.6309
DCG_RF1 = 2 + 2/log2(2) + 1/log2(3) + 0/log2(4) = 4.6309
DCG_RF2 = 2 + 1/log2(2) + 2/log2(3) + 0/log2(4) = 4.2619

NDCG_GT = 1.00    NDCG_RF1 = 1.00    NDCG_RF2 = 4.2619/4.6309 = 0.9203
NDCG - Example
- Graded ranking/ordering: 4, 2, 0, 1
- DCG = 4 + 2/log2(2) + 0/log2(3) + 1/log2(4) = 6.5
- IDCG = 4 + 2/log2(2) + 1/log2(3) + 0/log2(4) = 6.63
- NDCG = DCG/IDCG = 6.5/6.63 = 0.98
R-Precision
Precision at the R-th position in the ranking of results for a query that has R relevant documents.
 n    doc #   relevant
 1    588     x
 2    589     x
 3    576
 4    590     x
 5    986
 6    592     x
 7    984
 8    988
 9    578
 10   985
 11   103
 12   591
 13   772     x
 14   990
R = # of relevant docs = 6
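A minimal sketch of R-precision for the ranking above (the 0/1 list encodes the "x" marks in the table; the function name is my own):

```python
def r_precision(relevance, num_relevant):
    """Precision at rank R, where R is the number of relevant docs for the query."""
    return sum(relevance[:num_relevant]) / num_relevant

# relevant docs appear at ranks 1, 2, 4, 6, 13; R = 6 relevant docs in total
ranking = [1, 1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0]
print(r_precision(ranking, 6))  # 4/6 = 0.667
```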
Variance
For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7). Indeed, the variance in performance of the same system across queries is usually much greater than the variance of different systems on the same query.
That is, there are easy information needs and hard ones!
Test Collections
Still need:
- Test queries
  - Must be germane to the docs available
  - Best designed by domain experts
  - Random query terms are generally not a good idea
- Relevance assessments
  - Human judges, time-consuming
  - Are human panels perfect?
- In some very specific settings, proxies can be used in place of human judgments.
- But once we have test collections, we can reuse them (so long as we don't overtrain too badly).
- Example below: approximate vector space retrieval.
- Let G(q) be the ground truth of the actual k closest docs for query q.
- Let A(q) be the k docs returned by approximate algorithm A on query q.
- For precision and recall we would measure A(q) ∩ G(q).
Alternative proposal
- Focus instead on how good A(q) is, compared to G(q).
- Goodness can be measured here in cosine proximity to q: sum up q·d over d ∈ A(q), and compare this to the sum of q·d over d ∈ G(q).
- Thus A may be 90% as good as the ground truth G, without finding 90% of the docs in G.
- For scored retrieval, this may be acceptable: most web engines don't always return the same answers for a given query anyway.
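A sketch of this "goodness" comparison, assuming the cosine scores q·d have already been computed into a dictionary (the document IDs and scores below are made up for illustration):

```python
def goodness_ratio(scores, approx_ids, truth_ids):
    """scores: dict doc_id -> cosine similarity to the query q.
    Compare the total similarity mass of A(q) against that of G(q)."""
    approx_sum = sum(scores[d] for d in approx_ids)
    truth_sum = sum(scores[d] for d in truth_ids)
    return approx_sum / truth_sum if truth_sum else 0.0

scores = {"d1": 0.9, "d2": 0.8, "d3": 0.7, "d4": 0.65}
print(goodness_ratio(scores,
                     approx_ids=["d1", "d2", "d4"],
                     truth_ids=["d1", "d2", "d3"]))  # ~0.98 despite only 2/3 overlap
```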
Kappa measure
- An agreement measure among judges.
- Designed for categorical judgments.
- Corrects for chance agreement.

Kappa = [ P(A) - P(E) ] / [ 1 - P(E) ]
- P(A): proportion of the time the judges agree.
- P(E): what agreement would be by chance.
- Kappa = 0 for chance agreement, 1 for total agreement.
Kappa Example
- Two judges each assess 400 documents:

                         Judge 2: Relevant   Judge 2: Nonrelevant
  Judge 1: Relevant            300                   20
  Judge 1: Nonrelevant          10                   70

- What are P(A) and P(E)?
- P(A) = 370/400 = 0.925
- P(nonrelevant) = (10 + 20 + 70 + 70)/800 = 0.2125
- P(relevant) = (10 + 20 + 300 + 300)/800 = 0.7875
- P(E) = 0.2125² + 0.7875² = 0.665
- Kappa = (0.925 - 0.665)/(1 - 0.665) = 0.776
- Kappa > 0.8: good agreement.
- 0.67 < Kappa < 0.8: tentative conclusions (Carletta '96).
- Depends on the purpose of the study.
Adapted from slides attributed to Prof. Dik Lee (Univ. of Science and Technology, Hong Kong)
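A sketch that reproduces the kappa computation above from the pooled judgment counts; the helper and its argument names are my own choices (judge 1 marked 320 docs relevant and judge 2 marked 310, i.e., the marginals of the table above).

```python
def kappa(agree, total, judge1_rel, judge2_rel):
    """Two-judge kappa for binary relevance judgments.
    P(E) is estimated from the pooled marginals, as in the example above."""
    p_a = agree / total
    p_rel = (judge1_rel + judge2_rel) / (2 * total)
    p_e = p_rel ** 2 + (1 - p_rel) ** 2
    return (p_a - p_e) / (1 - p_e)

# 400 docs, judges agree on 370; judge 1 says relevant 320 times, judge 2 says 310:
print(kappa(agree=370, total=400, judge1_rel=320, judge2_rel=310))  # ~0.776
```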
Fallout Rate
- The number of irrelevant documents in the collection is not taken into account by precision and recall.
- Recall is undefined when there is no relevant document in the collection.
- Precision is undefined when no document is retrieved.

Fallout = (no. of nonrelevant items retrieved) / (total no. of nonrelevant items in the collection)
- Novelty Ratio: the proportion of items retrieved and judged relevant by the user of which they were previously unaware.
- Coverage Ratio: the proportion of relevant items retrieved out of the total relevant documents known to the user prior to the search.
  - Relevant when the user wants to locate documents which they have seen before (e.g., the budget report for Year 2000).
- User effort: the work required from the user in formulating queries, conducting the search, and screening the output.
- Response time: the time interval between receipt of a user query and the presentation of system responses.
- Form of presentation: the influence of the search output format on the user's ability to utilize the retrieved materials.
- Collection coverage: the extent to which any/all relevant items are included in the document corpus.
SKIP DETAILS
Previous experiments were based on the SMART collection, which is fairly small (ftp://ftp.cs.cornell.edu/pub/smart):

Collection   Number of documents   Number of queries   Raw size (MB)
CACM         3,204                 64                  1.5
CISI         1,460                 112                 1.3
CRAN         1,400                 225                 1.6
MED          1,033                 30                  1.1
TIME         425                   83                  1.5
- A document can be redundant even if it is highly relevant: duplicates, or the same information from different sources.
- Marginal relevance is a better measure of utility for the user.
- Using facts/entities as evaluation units more directly measures true relevance, but it is harder to create the evaluation set.
Evaluation at large search engines
- Search engines have test collections of queries and hand-ranked results.
- Recall is difficult to measure on the web, so search engines often use precision at top k, e.g., k = 10...
- ...or measures that reward you more for getting rank 1 right than for getting rank 10 right: NDCG (Normalized Discounted Cumulative Gain).
- Clickthrough on the first result: not very reliable from a single clickthrough, but pretty reliable in the aggregate.
- Studies of user behavior in the lab.
- A/B testing.
A/B testing
- Purpose: test a single innovation.
- Prerequisite: you have a large search engine up and running.
- Have most users use the old system; divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation.
- Evaluate with an automatic measure like clickthrough on the first result.
- Now we can directly see if the innovation improves user happiness.
- Probably the evaluation methodology that large search engines trust most.
- In principle less powerful than doing a multivariate regression analysis, but easier to understand.
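A sketch of how the clickthrough comparison between the two buckets might be made, using a simple two-proportion z-test; the traffic numbers below are invented, and real deployments track many more signals.

```python
from math import sqrt, erfc

def ab_ctr_test(clicks_a, trials_a, clicks_b, trials_b):
    """Compare clickthrough rates of control (A) and treatment (B) with a
    two-proportion z-test; returns (ctr_a, ctr_b, z, two-sided p-value)."""
    ctr_a, ctr_b = clicks_a / trials_a, clicks_b / trials_b
    pooled = (clicks_a + clicks_b) / (trials_a + trials_b)
    se = sqrt(pooled * (1 - pooled) * (1 / trials_a + 1 / trials_b))
    z = (ctr_b - ctr_a) / se
    p = erfc(abs(z) / sqrt(2))          # two-sided p-value under the normal approximation
    return ctr_a, ctr_b, z, p

# e.g. 1% of traffic diverted to the new ranker (illustrative numbers only):
print(ab_ctr_test(clicks_a=40_000, trials_a=990_000,
                  clicks_b=460, trials_b=10_000))
```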
TREC Benchmarks
- Encourage participation from industry and academia.
- Develop new evaluation techniques, particularly for new applications: retrieval, routing/filtering, non-English collections, web-based collections, question answering.
TREC Advantages
- Large scale (compared to a few MB in the SMART collection).
- Relevance judgments provided.
- Under continuous development with support from the U.S. Government.
- Wide participation:
  - TREC 1: 28 papers, 360 pages
  - TREC 4: 37 papers, 560 pages
  - TREC 7: 61 papers, 600 pages
  - TREC 8: 74 papers
TREC Tasks
- Ad hoc: new questions are asked against a static set of data.
- Routing: the same questions are asked, but new information is being searched (news clipping, library profiling).
- New tasks added after TREC 5: interactive, multilingual, natural language, multiple-database merging, filtering, very large corpus (20 GB, 7.5 million documents), question answering.
TREC
- 50 detailed information needs a year.
- Human evaluation of pooled results returned.
- More recently, other related tracks: Web track, HARD.
GOV2
- Another TREC/NIST collection.
- 25 million web pages.
- Largest collection that is easily available.
- But still 3 orders of magnitude smaller than what Google/Yahoo/MSN index.

NTCIR
- East Asian language and cross-language information retrieval.

CLEF (Cross-Language Evaluation Forum)
- This evaluation series has concentrated on European languages and cross-language information retrieval.
Both long and short documents (from a few hundred to over one thousand unique terms per document). Test documents consist of:

WSJ    Wall Street Journal articles (1986-1992)          550 MB
AP     Associated Press Newswire (1989)                  514 MB
ZIFF   Computer Select disks (Ziff-Davis Publishing)     493 MB
FR     Federal Register                                  469 MB
DOE    Abstracts from Department of Energy reports       190 MB
- Volume 1 (Mar 1994): Wall Street Journal (1987, 1988, 1989), Federal Register (1989), Associated Press (1989), Department of Energy abstracts, and information from the Computer Select disks (1989, 1990).
- Volume 2 (Mar 1994): Wall Street Journal (1990, 1991, 1992), Federal Register (1988), Associated Press (1988), and information from the Computer Select disks (1989, 1990).
- Volume 3 (Mar 1994): San Jose Mercury News (1991), Associated Press (1990), U.S. Patents (1983-1991), and information from the Computer Select disks (1991, 1992).
- Volume 4 (May 1996): Financial Times Limited (1991, 1992, 1993, 1994), the Congressional Record of the 103rd Congress (1993), and the Federal Register (1994).
- Volume 5 (Apr 1997): Foreign Broadcast Information Service (1996) and the Los Angeles Times (1989, 1990).
TREC Disk 5
TREC Properties
- Both documents and queries contain many different kinds of information (fields).
- Generation of the formal queries (Boolean, vector space, etc.) is the responsibility of the system: a system may be very good at querying and ranking, but if it generates poor queries from the topic, its final P/R will be poor.
Evaluation
- Summary table statistics: number of topics, number of documents retrieved, number of relevant documents.
- Recall-precision average: average precision at 11 recall levels (0 to 1 at 0.1 increments).
- Document level average: average precision when 5, 10, ..., 100, 1000 documents are retrieved.
- Average precision histogram: the difference between the R-precision for each topic and the average R-precision of all systems for that topic.
CF (Cystic Fibrosis) Collection
- 1,239 abstracts of medical journal articles on cystic fibrosis (CF).
- 100 information requests (queries) in the form of complete English questions.
- Relevant documents determined and rated by 4 separate medical experts on a 0-2 scale: 0 = not relevant, 1 = marginally relevant, 2 = highly relevant.
CF Document Fields
- MEDLINE access number
- Author
- Title
- Source
- Major subjects
- Minor subjects
- Abstract (or extract)
- References to other documents
- Citations to this document