Computing trust from revision history

From Brede Wiki
Conference paper
Authors: Honglei Zeng, Maher A. Alhossaini, Li Ding, Richard Fikes, Deborah L. McGuinness
Citation: Proceedings of the 2006 International Conference on Privacy, Security and Trust: Bridge the Gap Between PST Technologies and Business Services, ACM International Conference Proceeding Series, vol. 380, article 8, 2006
Publisher: Association for Computing Machinery, New York, NY, USA
Meeting: 2006 International Conference on Privacy, Security and Trust: Bridge the Gap Between PST Technologies and Business Services
DOI: 10.1145/1501434.1501445
Link(s): http://www-ksl.stanford.edu/pub/KSL_Reports/KSL_06_06.pdf

Computing trust from revision history describes a method for assessing the trustworthiness of a Wikipedia article. The authors use the article's revision history and a dynamic Bayesian network.

They consider three different kinds of trust:

  • Article trust
  • Fragment trust
  • Author trust

The main focus of the paper is article trust. The trust value is continuous, ranging between 0 and 1.


Method

Their dynamic Bayesian model is a Markov chain in which the posterior distribution of the trust of an article version, t_{v_{i+1}}, depends on the following quantities (see the sketch after the list):

  • the trust of the previous article version, t_{v_i}
  • the new insertion, i_i
  • the new deletion, d_i
  • the trust of the new author, t_{A_{i+1}}
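
As an illustration of this dependency structure, the following Python sketch performs one Markov step under the simplifying assumption that retained words keep the previous version's trust and inserted words carry the new author's trust. This is a point-estimate caricature of the dependencies, not the paper's actual conditional distributions, which are modelled with Beta distributions (see below).

 def next_version_trust(prev_trust, prev_length, inserted, deleted, author_trust):
     """One illustrative Markov step: trust of version i+1 from version i.

     prev_trust   -- trust of the previous version, t_{v_i}, in [0, 1]
     prev_length  -- word count of the previous version
     inserted     -- size of the new insertion, i_i (words)
     deleted      -- size of the new deletion, d_i (words)
     author_trust -- trust of the new author, t_{A_{i+1}}, in [0, 1]

     Simplifying assumption (not the paper's model): retained words keep the
     previous version's trust, inserted words carry the new author's trust.
     """
     retained = max(prev_length - deleted, 0)
     new_length = retained + inserted
     if new_length == 0:
         return author_trust
     return (retained * prev_trust + inserted * author_trust) / new_length

 # Example: a 400-word version with trust 0.7 is edited by an author with
 # trust 0.9 who deletes 50 words and inserts 100 words.
 print(next_version_trust(0.7, 400, inserted=100, deleted=50, author_trust=0.9))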

Beta distributions are used to model the probabilities, and the BUGS software is used for the computations. The authors apply different priors depending on user level: administrators, registered users, anonymous authors, and blocked authors. Trust from insertions and deletions is based on the size of the edit.
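
A minimal sketch of how such level-dependent priors could be encoded, assuming hypothetical Beta parameters for each user level (the concrete values below are illustrative assumptions, not reported in the paper):

 import numpy as np

 # Hypothetical Beta(alpha, beta) priors per user level; the parameter
 # values are illustrative assumptions, not taken from the paper.
 AUTHOR_TRUST_PRIORS = {
     "administrator": (8.0, 2.0),  # concentrated towards high trust
     "registered":    (4.0, 2.0),
     "anonymous":     (2.0, 2.0),  # weakly informative
     "blocked":       (1.0, 4.0),  # concentrated towards low trust
 }

 def sample_author_trust(user_level, size=1000, rng=None):
     """Draw samples of author trust in [0, 1] from the level's Beta prior."""
     rng = rng or np.random.default_rng()
     alpha, beta = AUTHOR_TRUST_PRIORS[user_level]
     return rng.beta(alpha, beta, size=size)

 # Example: mean prior trust of an anonymous author versus an administrator.
 print(sample_author_trust("anonymous").mean(), sample_author_trust("administrator").mean())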

The longest common subsequence algorithm was used to compute the diff between consecutive article versions.
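
As an illustration, a word-level longest common subsequence can be computed with the standard dynamic program, and the insertion and deletion sizes then follow from the LCS length. The whitespace tokenisation below is an assumption; the paper's exact tokenisation is not described here.

 def lcs_length(a, b):
     """Length of the longest common subsequence of two token lists,
     using the classic O(len(a) * len(b)) dynamic program."""
     dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
     for i, x in enumerate(a, 1):
         for j, y in enumerate(b, 1):
             dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
     return dp[len(a)][len(b)]

 def edit_sizes(old_text, new_text):
     """Return (deleted_words, inserted_words) between two revisions,
     assuming simple whitespace tokenisation."""
     old_words, new_words = old_text.split(), new_text.split()
     common = lcs_length(old_words, new_words)
     return len(old_words) - common, len(new_words) - common

 # Example: one word deleted, two words inserted.
 print(edit_sizes("the quick brown fox", "the quick red fox jumps"))  # (1, 2)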

They also trained a classifier to distinguish between featured and non-featured articles.
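
A minimal sketch of such a classifier, assuming (hypothetically) that the mean article trust and the number of revisions are used as features with a logistic regression; the paper's actual classifier and feature set may differ, and the training values below are made up for illustration.

 import numpy as np
 from sklearn.linear_model import LogisticRegression
 from sklearn.pipeline import make_pipeline
 from sklearn.preprocessing import StandardScaler

 # Hypothetical training data: one row per article with assumed features
 # (mean article trust, number of revisions); labels are 1 for featured
 # and 0 for non-featured.
 X = np.array([[0.85, 320], [0.90, 410], [0.55, 40], [0.40, 25],
               [0.78, 150], [0.35, 60], [0.88, 500], [0.50, 30]])
 y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

 clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
 print(clf.predict([[0.80, 200]]))  # predicted class for a new article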

Data

Data from the geography category of the English Wikipedia, from January 2006, were used:

  • 50 featured articles
  • 50 "clean-up" articles
  • 768 normal articles

This amounts to a total of 40,450 revisions.

For testing the classifier, a further 200 new articles were also used.

Results

Their classifier could predict featured articles with 82% accuracy and clean-up articles with 84% accuracy.

Critique

  1. The authors consider deletions and insertions (section 3.1), but what about moves?

Related papers

  1. Investigations into trust for collaborative information repositories: a Wikipedia case study. Considers citation-based trust.
  2. Mining revision history to assess trustworthiness of article fragments
  3. Size matters: word count as a measure of quality on Wikipedia