IR5_songruihua.pptx

Evaluation in Information Retrieval
Speaker: Ruihua Song, Web Data Management Group, MSR Asia

Call for Papers of EVIA 2010
- Test collection formation, evaluation metrics, and evaluation environments
- Statistical issues in retrieval evaluation
- User studies and the evaluation of human-computer interaction in information retrieval (HCIR)
- Evaluation methods for multilingual, multimedia, or mobile information access
- Novel information access tasks and their evaluation
- Evaluation and assessment using implicit user feedback, crowdsourcing, living labs, or inferential methods
- Evaluation issues in industrial and enterprise retrieval systems

Outline
- Basics of IR evaluation
- Introduction to TREC (Text REtrieval Conference)
- One selected paper: "Select-the-Best-Ones: A new way to judge relative relevance"

Motivating Examples
Which set is better?
- S1 = {r, r, r, n, n} vs. S2 = {r, r, n, n, n}
- S3 = {r} vs. S4 = {r, r, n}
Which ranked list is better?
- L1 = r, r, r, n, n vs. L2 = n, n, r, r, r
- L3 = r, n, r, n, h vs. L4 = h, n, n, r, r
(r: relevant; n: non-relevant; h: highly relevant)

Precision and Recall
- Precision is the fraction of the retrieved documents that are relevant.
- Recall is the fraction of all relevant documents that have been retrieved.
- With R the relevant set, A the answer (retrieved) set, and Ra = R ∩ A:
  Precision = |Ra| / |A|    Recall = |Ra| / |R|

Precision and Recall (cont.)
Assume there are 10 relevant documents in the judgments.
- Example 1: S1 = {r, r, r, n, n} vs. S2 = {r, r, n, n, n}
  P1 = 3/5 = 0.6, R1 = 3/10 = 0.3; P2 = 2/5 = 0.4, R2 = 2/10 = 0.2.
  S1 beats S2 on both metrics.
- Example 2: S3 = {r} vs. S4 = {r, r, n}
  P3 = 1/1 = 1, R3 = 1/10 = 0.1; P4 = 2/3 = 0.667, R4 = 2/10 = 0.2.
  Precision prefers S3 but recall prefers S4; the F1-measure (the harmonic mean of precision and recall) combines the two into a single score.
- Example 3: L1 = r, r, r, n, n vs. L2 = n, n, r, r, r
  Both lists retrieve the same set of documents, so precision and recall cannot tell them apart; a rank-sensitive metric is needed.

Mean Average Precision
- Defined as the mean of Average Precision (AP) over a set of queries.
- Example 3: L1 = r, r, r, n, n vs. L2 = n, n, r, r, r
  AP1 = (1/1 + 2/2 + 3/3) / 10 = 0.3
  AP2 = (1/3 + 2/4 + 3/5) / 10 ≈ 0.143
  AP prefers L1, which places the relevant documents first.

Other Metrics Based on Binary Judgments
- P@10 (Precision at 10) is the number of relevant documents among the top 10 documents of the ranked list, divided by 10.
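To make the set-based definitions concrete, here is a minimal Python sketch (mine, not code from the deck; the function names and the label-list encoding are assumptions) that reproduces the numbers in Example 2:

```python
# Binary judgments encoded as in the slides: 'r' = relevant, 'n' = non-relevant.

def precision(retrieved):
    """Fraction of retrieved documents that are relevant: |Ra| / |A|."""
    return retrieved.count('r') / len(retrieved)

def recall(retrieved, total_relevant):
    """Fraction of all relevant documents that were retrieved: |Ra| / |R|."""
    return retrieved.count('r') / total_relevant

def f1(p, r):
    """Harmonic mean of precision and recall; defined as 0 when both are 0."""
    return 2 * p * r / (p + r) if p + r > 0 else 0.0

# Example 2 from the slides, with 10 relevant documents in the judgments:
S3, S4 = ['r'], ['r', 'r', 'n']
print(precision(S3), recall(S3, 10))      # 1.0 0.1
print(precision(S4), recall(S4, 10))      # 0.666... 0.2
print(f1(precision(S3), recall(S3, 10)))  # ≈ 0.182
print(f1(precision(S4), recall(S4, 10)))  # ≈ 0.308
```

Under F1, S4 scores higher than S3 (≈0.31 vs. ≈0.18), which is how the F1-measure resolves the precision-versus-recall conflict in Example 2.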
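Average Precision can be sketched the same way (again my own helper names, not the deck's). The key detail, visible in Example 3, is that the sum of precision-at-rank values is divided by the total number of relevant documents in the judgments (10 here), not by the length of the list:

```python
def average_precision(ranked, total_relevant):
    """Sum of precision@k at each rank k holding a relevant document,
    normalized by the total number of relevant documents judged."""
    hits, total = 0, 0.0
    for k, label in enumerate(ranked, start=1):
        if label == 'r':
            hits += 1
            total += hits / k  # precision at this rank
    return total / total_relevant

def mean_average_precision(runs):
    """MAP: mean of AP over (ranked list, total_relevant) pairs, one per query."""
    return sum(average_precision(r, n) for r, n in runs) / len(runs)

# Example 3 from the slides:
L1 = ['r', 'r', 'r', 'n', 'n']
L2 = ['n', 'n', 'r', 'r', 'r']
print(average_precision(L1, 10))             # (1/1 + 2/2 + 3/3) / 10 = 0.3
print(average_precision(L2, 10))             # (1/3 + 2/4 + 3/5) / 10 ≈ 0.143
print(mean_average_precision([(L1, 10), (L2, 10)]))  # ≈ 0.222
```

Unlike set-based precision and recall, AP separates L1 from L2 because the rank of every relevant document enters the sum.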
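P@10 is simply the k = 10 case of a generic cutoff metric. A hypothetical precision_at_k helper, assuming the same label-list encoding as above:

```python
def precision_at_k(ranked, k=10):
    """P@k: fraction of the top-k ranked documents that are relevant.
    Lists shorter than k are effectively padded with non-relevant documents,
    since the count is still divided by k."""
    return ranked[:k].count('r') / k

print(precision_at_k(['r', 'r', 'r', 'n', 'n'], k=5))  # 0.6
```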
