Michael Harper for redOrbit.com — Your Universe Online
Were it possible to define the whole mass of the Internet in one quip, it might go as follows: "The Internet: Full of people who want to be right… and cats."
That's the trouble with the luxury of having such large amounts of information so freely available to us. Society can quickly become a mass of people who each know a little about everything rather than a great deal about any one thing. Enter crowdsourcing, the means by which Internet-dwellers are asked to pitch in their collective wisdom for the good of the group. Those who are prone to think people are inherently good and want the best for the group might be more inclined to think crowdsource-driven sites, such as Wikipedia, are a great way to broadcast the world's knowledge. On the other side of the coin, there's much to be said for seeking absolute accuracy over "just good enough."
Dr. Victor Naroditskiy and Professor Nick Jennings from the University of Southampton are working together to improve crowdsourcing and strike a balance between these two paradigms.
Along with help from Masdar Institute's Professor Iyad Rahwan and Dr. Manuel Cebrian, Research Scientist at the University of California, San Diego, the team has developed methods to both gather the best information and verify it.
Dr. Naroditskiy is the lead author of a paper the team has published in the journal PLoS ONE describing these new crowdsourcing methods. He explained that while sites like Wikipedia have mechanisms in place to ensure the validity of the information posted, the human element can still throw the proverbial wrench in the gears. After all, to err is human; hence Wikipedia's hierarchy of trusted editors and contributors who check the citations and sources of any incoming information.
Crowdsourcing can also be useful when data needs to be collected quickly, as in the case of the Haiti earthquake, when volunteers mapped, in real time, trouble areas where help was needed.
In these time-critical situations, there's no time for a Wikipedia-style hierarchy of editors.
Dr. Naroditskiy and his team have now developed an incentive-based system to verify the best information quickly.
Co-author Iyad Rahwan explains their system in a statement, saying: "We showed how to combine incentives to recruit participants to verify information. When a participant submits a report, the participant's recruiter becomes responsible for verifying its correctness."
“Compensations to the recruiter and to the reporting participant for submitting the correct report, as well as penalties for incorrect reports, ensure that the recruiter will perform verification.”
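The article doesn't spell out the paper's exact payment rules, but the intuition can be sketched with a toy model: if a recruiter is paid when a report they vouched for turns out correct and fined when it turns out wrong, checking the report before passing it along becomes the profitable choice. The short Python sketch below uses made-up numbers (REWARD_RECRUITER, PENALTY and VERIFICATION_COST are illustrative assumptions, not figures from the PLoS ONE paper) to show how the recruiter's expected payoff tilts toward verification when enough incoming reports are unreliable.

```python
# Illustrative toy model of the recruiter's incentive to verify reports.
# NOTE: the payment amounts below are assumptions for illustration only;
# they are not the actual parameters of the published mechanism.

REWARD_RECRUITER = 5.0   # assumed payment to the recruiter for a correct report
PENALTY = 8.0            # assumed fine for vouching for an incorrect report
VERIFICATION_COST = 1.0  # assumed effort cost of actually checking a report


def recruiter_payoff(report_is_correct: bool, verifies: bool) -> float:
    """Recruiter's payoff for a single submitted report."""
    if verifies:
        # Verification filters out bad reports: only correct ones get forwarded,
        # so the penalty is never incurred, but the checking effort is always paid.
        return (REWARD_RECRUITER if report_is_correct else 0.0) - VERIFICATION_COST
    # Without verification, the recruiter forwards everything and risks the fine.
    return REWARD_RECRUITER if report_is_correct else -PENALTY


def expected_payoff(p_correct: float, verifies: bool) -> float:
    """Expected payoff when a fraction p_correct of incoming reports is correct."""
    return (p_correct * recruiter_payoff(True, verifies)
            + (1 - p_correct) * recruiter_payoff(False, verifies))


if __name__ == "__main__":
    for p in (0.5, 0.8, 0.95):
        print(f"p_correct={p:.2f}  verify={expected_payoff(p, True):6.2f}  "
              f"skip={expected_payoff(p, False):6.2f}")
```

With these assumed numbers, verifying beats skipping when half or even 80 percent of reports are correct, but not when 95 percent are; the point of the sketch is simply that the penalty must be large enough, relative to how reliable reports already are, for the verification incentive to bite.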
While the notion of incentives for the best information sounds like an easy fix, one blogger suggests this kind of system essentially lowers the bar all around, leaving the true experts outnumbered by contributors who are merely "expert enough."
“In other words,” writes Michael Martinez, “because no one can be expert enough in all topics to ensure that only correct information is included in a crowdsourced resource the more inexpert contributors a project relies upon the more misinformation will be accepted as reliable.”
“It’s simply a matter of numbers. The expert nodes in the population will always be outnumbered by the inexpert nodes; the expert nodes will always have fewer connections than the inexpert nodes; the expert message will always be propagated more slowly than the inexpert message; and the inexpert message will always win out over the expert message.”