Tuesday, 23 November 2010

Web Research: Who can we trust?

As discussed in a previous blog post, “Web Research and Focus Groups” (17th Nov), research takes many forms - but noticeably over the past few years, web research has risen sharply. This can be attributed to the relative ease and speed of using the internet as a research tool. This is not limited to using the internet for primary research; it also means using the internet to find sources of information for secondary research.

Primary Research

Businesses have been extracting information from users over the internet for many years (Amazon being a prime example), but more relevant to research specifically are sites such as Toluna, MyVoice and YouGov.

The given examples comprise two “paid opinion surveys” and the UK Government “opinion portal”, which is also paid. The first two are likely to have participants swayed by the prospect of being paid, whereas the latter carries the same incentives and risks but may be more likely to draw out the real issues people face.

The issue with this is that the information gathered might be a half-truth from the majority of participants, so whilst it can still prove useful to companies, it is not safe to base academic research on information from these sites.

Secondary Research

Finding reliable information on the internet is extremely hard to do through standard search engines such as Google or Bing. Instead, alternatives such as Google Scholar and CiteULike are available that search academic sites. For standard research, Wikipedia is best known for trying to correct the vague and incorrect information on the internet, requiring sources and proper documentation.

Stanford University has begun the Web Credibility Project, which poses the following questions:
- What causes people to believe (or not believe) what they find on the Web?
- What strategies do users employ in evaluating the credibility of online sources?
- What contextual and design factors influence these assessments and strategies?
- How and why are credibility evaluation processes on the Web different from those made in face-to-face human interaction, or in other offline contexts?

There is further reading available on these questions and on theories for how to solve the problems. However, in my eyes there will be no way to reliably identify valid research on the internet, as it is simply too vast. Much like I discussed in my first post, it is again up to the researcher's judgement to put the information into context; is this for good or bad?

Resources

Stanford Web Credibility Project - Stanford University - http://credibility.stanford.edu/research.html

Metzger, M. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology.
http://onlinelibrary.wiley.com/doi/10.1002/asi.20672/full

Flanagin, A. J., & Metzger, M. J. (2007). The role of site features, user attributes, and information verification behaviors on the perceived credibility of Web-based information. New Media & Society.
http://nms.sagepub.com/content/9/2/319.full.pdf+html

3 comments:

  1. Sean, you have raised some good points and captured a lot of information which supports the work already done by the group.

  2. Toluna, MyVoice and YouGov are very good examples where users and participants can make money from answering surveys. Although this is a very easy and accessible way of getting a large range of answers from a wider audience, this can sometimes bring bias into the study.
    Participants will answer questions depending on what they believe the company is expecting rather than what their opinion is. Or they will simply answer it as quickly as possible without thinking about the questions in depth.

  3. Breaking the points into two broad categories (primary and secondary research) makes this post more explanatory, as it exposes web research credibility. The post does make sense.
