
User:Dzonatas/Ethics and MediaWiki/Vandalism


Edit wars vs. vandalism


The distinction between edit wars and vandalism is essential, and vandalism is a fundamental concern of ethics beyond automation and online media. The best way to understand vandalism in the digital world, and to differentiate it from edit wars when the two are entangled, is to look at how its definition arose on the Internet. In 1989, the Internet Activities Board adopted a policy, specified in RFC 1087, which characterizes as unethical and unacceptable any activity which purposely:

  • (a) seeks to gain unauthorized access to the resources of the Internet,
  • (b) disrupts the intended use of the Internet,
  • (c) wastes resources (people, capacity, computer) through such actions,
  • (d) destroys the integrity of computer-based information, and/or
  • (e) compromises the privacy of users.

There is a big difference between somebody who borrows a book and scribbles over all of its pages with a crayon and the next person who borrows the same book and complains that there was an edit war in the book. Of course, a traditional, hard-bound, physical book doesn't work like MediaWiki, where anybody can edit the page content of the book so easily.

Let's analyze the above. MediaWiki has a feature that allows anonymous users to edit content; when that feature is enabled, it undercuts attempts to accuse someone of vandalism on the grounds of an "unauthorized" edit, see (a) above. The IAB created RFC 1087 before 1995, the so-called Internet boom year, so the outlook of those who started using the Internet after 1995 differs from the outlook of those who started before. The RFC treats research as a significant part of the intended use of the Internet, as in (b), and everything else that makes up the Internet is either experimental or a consequence of the nature of the Internet itself. If someone is on Wikipedia to research, that follows the traditional intent of the Internet; to find that an edit war on Wikipedia had single-handedly disrupted the Internet would be quite remarkable. Consider (c), however, and it becomes clearer how information handled by MediaWiki can be vandalized. Wikipedia's editors are largely volunteers, so it is hard to treat employees as a resource that can be wasted; there are, on the other hand, network capacity and computers to worry about where vandalism might occur. Edit wars, and even heavily edited articles without an actual war, surely can consume Internet capacity and the storage available to the computers. As for (d), there does not even have to be an edit war for content to be vandalized; it is comparable to the crayon marks over a book. What is often overlooked is how the rest of the Internet is affected when people spread Wikipedia's information across it. Last, (e): privacy is a major concern for many across the Internet, and to accuse somebody of vandalism and then turn around and disclose personal information about the accused is itself vandalism.

One key element of vandalism is its antisocial nature, and that is one area that must be weighed in any case of suspected vandalism. For example, editor A goes to an article and removes one of the sources. The removed source may or may not have been referenced within the article. Editor B reverts the edit and claims editor A vandalized the article's integrity. Editor B further blocks editor A indefinitely and states, in the log against editor A, that the editor is a vandal, has wasted a lot of people's time cleaning up after editor A, and is therefore a disruptive user. The antisocial manner in which the block was carried out is more akin to vandalism than what editor A did (and A's intent is actually unknown). Despite the many ways MediaWiki could change to help prevent something like this from happening, there surely are, for example, more courteous behaviors that could have been exhibited. Editor A may have provoked an edit war (with or without such intention), but it is incomparable to the antisocial kind of vandalism that editor B started.

Vandalism is clearly distinct from edit wars, whether or not the two overlap.

Ethics & automation considerations


There probably is no cookie-cutter approach to overcoming vandalism through further automation. Each case, however, does reveal what kind of moral buffers are needed. If vandalism is provoked by a lack of moral buffers, then the way to help prevent that kind of provoked vandalism is to put a proper layer of moral buffers between the people and the interface.

Beyond that kind of layer, other systems, like the comments on Digg and Slashdot, use a bit of moderation. Imagine two icons next to each entry in the revision history, one a thumbs-up and the other a thumbs-down (Digg uses this approach). An editor could rate each revision by clicking either the thumbs-up or the thumbs-down icon. Moderation occurs automatically when a threshold is crossed; for example, too many thumbs-down clicks would automatically flag that revision as not appropriate for public view. If it is the most recent revision, it simply is not shown. When someone clicks "edit" from the main article, the thumbed-down revision is skipped and the previous thumbed-up revision is used instead as the content on which the editor makes his or her changes. This kind of system would help eliminate the scenario described above, where somebody deletes a source, since that revision can simply be thumbed down. There would be no need to revert, which saves the extra space a restored revision would take; a thumbs-down does the same job. There would be no need for an admin (who would normally have reverted and blocked) to come along and leave a message on the other editor's talk page, as MediaWiki could take care of it without admin intervention. A thumbs-up/thumbs-down feature also creates a sense of a moral buffer, beyond just easier moderation of edits; a rough sketch of the idea follows.
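
Here is a minimal sketch of that thresholding idea, written in Python. The Revision and Article classes, the threshold of two net thumbs-down votes, and the flagging rule are illustrative assumptions for this essay, not existing MediaWiki behavior.

# A minimal sketch of threshold-based revision moderation.
# Class names, the threshold value, and the storage model are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Revision:
    rev_id: int
    text: str
    thumbs_up: int = 0
    thumbs_down: int = 0

    def is_flagged(self, threshold: int = 2) -> bool:
        # A revision is hidden once its net thumbs-down count reaches the threshold.
        return (self.thumbs_down - self.thumbs_up) >= threshold

@dataclass
class Article:
    revisions: List[Revision] = field(default_factory=list)

    def latest_visible(self) -> Optional[Revision]:
        # Walk the history backwards and skip flagged revisions, so readers
        # and new editors start from the last accepted text.
        for rev in reversed(self.revisions):
            if not rev.is_flagged():
                return rev
        return None

# Usage: a thumbed-down revision is skipped without storing a new revert.
article = Article([Revision(1, "Sourced text."), Revision(2, "Source removed.")])
article.revisions[1].thumbs_down += 2   # two editors thumb down revision 2
print(article.latest_visible().rev_id)  # -> 1

Note that "reverting" here costs nothing extra: the flagged revision is simply skipped when the article is read or edited, which matches the space-saving point made above.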

If we look at Slashdot's system of moderation, there is also an extra attribute recording why a person thumbed up a comment. This attribute rates each comment as "Interesting," "Funny," "Insightful," "Informative," "Off-topic," "Redundant," and so on (see the Slashdot Moderation FAQ). That wasn't enough in the earlier days of Slashdot: many people got labeled as "trolls" because the system allowed rogue moderators to thumb down anybody, which made commenters look bad at random. Slashdot then developed a way to moderate the moderators. Users are able to review the choices made by a moderator and thumb up or thumb down each individual choice. This moderation happens mostly with anonymity for both commenters and moderators. With this kind of system, imagine Wikipedia admins being able to moderate the revision history with such extra attributes (e.g. "Informative"), and any registered editor being able to review those moderations with their own thumbs-up or thumbs-down; a sketch of that meta-moderation step follows.
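
As a hedged illustration of that meta-moderation idea, here is a small Python sketch. The Moderation class, the reason labels, and the rule that discounts a moderation whose reviewers rate it down are assumptions made for this example; they are not how Slashdot or MediaWiki actually implement it.

# A rough sketch of meta-moderation applied to revision history:
# each moderation carries a reason, and other users rate the moderation itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Moderation:
    rev_id: int
    moderator: str
    reason: str              # e.g. "Informative" or "Off-topic"
    vote: int                # +1 for thumbs-up, -1 for thumbs-down
    meta_votes: List[int] = field(default_factory=list)

    def meta_score(self) -> int:
        # Net rating that reviewers gave this moderation decision.
        return sum(self.meta_votes)

def effective_weight(mod: Moderation) -> int:
    # Discount a moderation that reviewers have, on balance, rated down,
    # so no single moderator can run a "reign of terror".
    return mod.vote if mod.meta_score() >= 0 else 0

# Usage: an admin thumbs down revision 2 as "Off-topic", reviewers disagree.
mod = Moderation(rev_id=2, moderator="adminX", reason="Off-topic", vote=-1)
mod.meta_votes.extend([-1, -1, +1])   # reviewers mostly reject the call
print(effective_weight(mod))          # -> 0, the thumbs-down is discounted

The design choice here is that a questionable moderation is neutralized rather than punished, which keeps the review loop itself from becoming another venue for antisocial behavior.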

Consider that Slashdot drives enough traffic to crash the ordinary websites it links to (known as the "slashdotted" effect), and you can sense why they expanded their moderation the way they did. When a Wikipedia article gets mentioned in a Slashdot story, you'll usually find the referenced article immediately page-protected with a high-traffic tag for a while. Slashdot lists the goals of its moderation system in its Moderation FAQ.

Suggested essay

  • Find other websites that have large numbers of users and a moderation system, and write about how they handle vandalism.
  • What can we learn from these other systems in the way they deal with vandalism?

Resources

  • RFC 1087, "Ethics and the Internet," Internet Activities Board, 1989.
  • Slashdot Moderation FAQ.