Stanford Open Source Lab/Luca de Alfaro

Luca de Alfaro is an Associate Professor of Computer Engineering at the University of California, Santa Cruz. He is known for designing the reliability-rating software that is scheduled to be integrated into planned and announced Wikipedia contributor-rating processes.[1][2][3][4]

Education

Ph.D. from Stanford, 1998. Dissertation: Formal Verification of Probabilistic Systems.[5] Advisor: Zohar Manna.

Teaching career

He started as an Associate Professor at UC Santa Cruz in 2002.[6] He is an adviser to the Academic Senate.[7][8]

Color coding

de Alfaro's initial implementation,[9] which color-codes chunks of text according to their estimated reliability, has made headlines.[10][11][12]
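
The coloring idea can be illustrated with a small sketch. The scheme below shades text from white (trusted) toward orange (untrusted); the score values, function names, and exact colors are assumptions made for illustration and are not taken from de Alfaro's implementation.

```python
# Illustrative sketch only: map a per-chunk trust score in [0, 1] to an HTML
# background shade. The orange-tint scheme and the example scores below are
# assumptions, not de Alfaro's actual code.

def shade(trust: float) -> str:
    """Return a CSS color: white for fully trusted text,
    deeper orange as trust decreases."""
    trust = max(0.0, min(1.0, trust))
    r = 255
    g = int(255 - 100 * (1 - trust))
    b = int(255 - 155 * (1 - trust))
    return f"rgb({r},{g},{b})"

def render(chunks):
    """chunks: list of (text, trust) pairs; returns one HTML string."""
    return "".join(
        f'<span style="background-color:{shade(t)}">{text}</span>'
        for text, t in chunks
    )

print(render([("Stable sentence. ", 0.95), ("Recently added claim.", 0.2)]))
```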

de Alfaro presented the results of some of his Wikipedia Quality research at Wikimania 2007.[13] It was reported that "The co-founder of Wikipedia said it was one of the most exciting ideas he'd heard at Wikimania 2007 in Taipei."[14]

de Alfaro's work will first be tested on the German Wikipedia, along with the new concept of "trusted" users, and then, if the scheme is successful, extended to other editions, including the English Wikipedia.[15] Jimmy Wales announced that he plans to test de Alfaro's rating software on some of the smaller Wikia sites.[16]

In MediaWiki version 1.5 (circa 2005), a special page extension entitled Permissions included features named Trust and ReleaseArticleVersion, designed to be controlled by users with a "publisher" authorization level. The documentation for this feature sometimes uses the example of a school where teachers control the publication of student-written pages.

The algorithm and its implications

de Alfaro's algorithm first evaluates Wikipedia authors and rates them based on whether the new text they add to a page is retained in the current version of the article. It then goes back and applies each author's rating to all of the chunks or "lines" of text (sentences?) that the author has entered and that are still present in current article versions, and finally provides a score for the article based on the aggregate scores of those sentences.
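
A minimal sketch of that scoring idea may help. It assumes a simplified model in which an author's reputation is the fraction of their added lines that survive into the current revision, and an article's score is the average reputation of the authors of its surviving lines; the data structures, weighting, and example values are hypothetical and are not de Alfaro's actual algorithm.

```python
# A simplified model of the retention-based scoring described above.
# NOTE: the data structures, the "fraction of surviving lines" reputation
# measure, and the plain averaging are assumptions for illustration only.

def author_reputation(contributions, current_lines):
    """contributions: {author: [lines that author added]};
    current_lines: iterable of lines present in the current revision.
    Reputation = fraction of an author's added lines still present."""
    current = set(current_lines)
    reputation = {}
    for author, lines in contributions.items():
        kept = sum(1 for line in lines if line in current)
        reputation[author] = kept / len(lines) if lines else 0.0
    return reputation

def article_score(current_lines, line_author, reputation):
    """Score an article as the average reputation of the authors
    of the lines that survive in the current revision."""
    scores = [reputation.get(line_author[line], 0.0) for line in current_lines]
    return sum(scores) / len(scores) if scores else 0.0

# Tiny worked example with hypothetical authors and lines.
contribs = {"alice": ["a", "b", "c"], "bob": ["d", "e"]}
current = ["a", "b", "e"]                    # lines in the current revision
authors = {"a": "alice", "b": "alice", "e": "bob"}
rep = author_reputation(contribs, current)   # alice = 2/3, bob = 1/2
print(rep, article_score(current, authors, rep))  # article = (2/3 + 2/3 + 1/2) / 3
```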

The stated intention of de Alfaro's software design assumes and implies that sentence retention correlates strongly with the reliability of the content and the trustworthiness of the editor. An analysis of the encyclopedia-building process as practiced at Wikipedia suggests that sentence retention levels could have other underlying causes. That encyclopedia-building process includes:

  • Recognizing an acceptable (non-controversial?) article subject to initiate
  • Thorough research in the form of fact-gathering and source evaluation
  • Composing English sentences that are clear and easy to read and are free of grammatical and typographical errors
  • Removal of one's own inevitable non-zero bias, perhaps with the help of supportive and constructive collaborators
  • Keeping the text focused objectively on the subject and avoiding extensive commentary and editorialization
  • Organizing that information into coherent paragraphs and sections
  • Interacting with other editors on Talk pages in a manner that will not provoke those other editors to specifically study one's contributions and apply a severe deletionist approach to one's mainspace article work
  • Avoiding the possibility that other editors will revert and then re-apply one's work, thus plagiarizing the original author.
  • Avoiding a courtesy blanking by some other editor.
  • Retention of access to one's Wikipedia account

Problems at any one of these steps might lead to one's sentences not being retained, and the algorithm would report the result as low trustworthiness. Other terms, such as the popularity of the editor or even the entertainment value of the article, might better describe what drives other editors, in aggregate or individually (depending on how many editors are significantly involved in modifying any one article), to approve of (or remain neutral toward) and retain certain sentences. One could also extend the analysis to entire paragraphs or to specific unpopular facts, such as those in biographies of living people that might cause some readers to experience emotional distress and consider litigation as a remedy. Perhaps whatever is perceived as leading to a higher Alexa Internet, Hitwise, or other traffic ranking for the wikipedia.org domain name, or as otherwise enhancing the reputation and social status of the Wikipedia project (or possibly related fundraising), is what sometimes drives sentence retention or the lack of it.

There are many significant editor "human factors", such as political agendas, that are likely too complex to analyze and compensate for; the algorithm implicitly assumes these are corrected by editor recognition and intervention, which in turn depends on the maturity, wisdom, and insight of other editors. If the algorithm becomes well-defined and entrenched, the risk exists that those familiar with its details might game the system just to increase their rating, perhaps simply by restricting themselves to conservative, cautious, and non-controversial statements.

With respect to overall article organization and rewrites, traditional tools such as diff (which is optimized for line-oriented differences in program source code) and MediaWiki's current differencing software cannot recover from article reorganization or rewrites. It has not yet been determined whether de Alfaro's implementation suffers from these same limitations. In particular, MediaWiki's current differencing software compares at the character/byte level and makes no attempt to analyze by sentences. If de Alfaro's software has similar limitations, an easy workaround might be to digest the article down to its component sentences and match on each sentence occurring anywhere in the article, as sketched below.
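
A rough sketch of that workaround follows, under the assumption that revisions are compared as unordered sets of sentences rather than positional lines. The naive sentence splitter is an illustrative assumption; a real system would need a proper sentence tokenizer and some normalization.

```python
# Sketch of sentence-level retention matching: split each revision into
# sentences and test whether a sentence from the old revision survives
# anywhere in the new one, so reordering does not count as deletion.
# The regex-based splitter below is a deliberate simplification.

import re

def sentences(text):
    """Very naive sentence splitter: break on ., !, ? followed by whitespace."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def retained(old_text, new_text):
    """Map each sentence of old_text to True if it appears anywhere in new_text."""
    new = set(sentences(new_text))
    return {s: (s in new) for s in sentences(old_text)}

old = "Cats are mammals. They purr. They chase mice."
new = "They chase mice. Cats are mammals. They also sleep a lot."
print(retained(old, new))
# {'Cats are mammals.': True, 'They purr.': False, 'They chase mice.': True}
```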

Selected publications and presentations

Note: some include de Alfaro as editor or co-author

  • Continuous Verification by Discrete Reasoning (1994) with Prof. Manna
  • Verification in Continuous Time by Discrete Reasoning (1995) See online copy of conference paper.
  • Visual Verification of Reactive Systems (1997) See online copy of conference paper.
  • Computing Minimum and Maximum Reachability Times in Probabilistic Systems. See online copy of conference paper.
  • The Control of Synchronous Systems (2000) See CiteSeer entry. See online copy of conference paper.
  • Symbolic Model Checking of Probabilistic Processes Using MTBDDs and Kronecker Representation. See online copy of conference paper.
  • International Conference on Concurrency Theory (CONCUR) 2001 - Concurrency Theory, 12th International Conference, Aalborg, Denmark, August 20-25, 2001, Proceedings. ISBN 3-540-42497-0
  • Process Algebra and Probabilistic Methods: Performance Modeling and Verification. Proceedings of the Joint International Workshop on Process Algebra and Performance Modelling and Probabilistic Methods in Verification, PAPM-PROBMIV 2001, held in Aachen, Germany, September 12-14, 2001. Scanned online version. ISBN 3-540-42556-X (ISSN 0302-9743)
  • Interface Theories for Component-Based Design (2001) ISBN 3-540-42673-6
  • Timed Interfaces (2002) ISBN 3-540-44307-X
  • Interface Compatibility Checking for Software Modules (2002) ISBN 3-540-43997-8
  • The Complexity of Quantitative Concurrent Parity Games (2004) See CiteSeer entry.
  • CONCUR 2005 - Concurrency Theory: 16th International Conference, Concur 2005, San Francisco, CA, USA, August 23-26, 2005, Proceedings (2005) ISBN 3-540-28309-9
  • The Complexity of Stochastic Rabin and Streett Games (2005) See online copy of conference paper.
  • Average Reward Timed Games (2005) See online copy of conference paper.
  • Compositional Quantitative Reasoning. Proceedings of the 3rd international conference on the Quantitative Evaluation of Systems (QEST) (2006) ISBN 0-7695-2665-9
  • Games, Probability, and Mu-Calculus
  • Several 2006 papers listed as Center for Hybrid and Embedded Software Systems (CHESS).
  • Selected works
  • Index of Papers and Short Presentations
  • UCSC Science & Engineering Library: 2007 honored faculty
  • Publication list at ScientificCommons
  • Frequent co-author with the widely cited author Thomas A. Henzinger. See also Henzinger recent publications.
  • Decomposing, transforming and composing diagrams: The joys of modular verification (Report / Stanford University. Dept. of Computer Science) (1998) ASIN B0006R7ZN2
  • Concurrent reachability games (Memorandum) (1998) ASIN B0006QXYMY

Notes

  1. Wikipedia faces the facts over inaccuracy September 20, 2007
  2. Wikipedia 2.0 - now with added trust 20 September 2007
  3. Wiki finally getting its facts right (over inaccuracy) September 21, 2007
  4. Luca de Alfaro in the News September 24, 2007
  5. Formal Verification of Probabilistic Systems 1998
  6. New Faculty March 4, 2002
  7. contact information
  8. Universitywide Assembly and Committee Memberships
  9. WikiLab with Wikipedia trust coloring demo
  10. New program color-codes text in Wikipedia entries to indicate trustworthiness August 2, 2007
  11. We want to make Wikipedia more useful
  12. UCSC in the News August 27, 2007
  13. Quality studies at Wikimania2007
  14. Wikipedia co-founder to test quality control idea August 8, 2007, Dan Nystedt, PC World, The Washington Post
  15. Wikipedia Discredits Reports It's Abandoning Open Editing September 21, 2007
  16. Wikipedia co-founder to test quality control idea Aug. 8, 2007