It is then also important to consider the use of open questions, in which the respondent is asked about an attitude and given a number of lines to word a response in their own style. It is much easier to tell from these responses whether a question has been misinterpreted, and they also provide extra information about beliefs and attitudes that a researcher may have overlooked and thus not included a direct question for. Two common problems with this question style are that the responses are not suitable for statistical analysis, and that they are often not answered very comprehensively or honestly, owing to potential embarrassment and social desirability bias (Schuman & Presser, 1996).
The first criticism is even more important when considering translation to the Internet, as open questions do not allow for the previously mentioned automated data collection (which is valuable because it is efficient, has much less potential for data input errors, and is easier for the researcher) (Michalak, 1998). It may therefore be useful to use open questions in conjunction with closed questions, as a 'catch-all' method that can be checked manually afterwards for any information that could not be gained from the closed questions (and thus from the statistical data). The second criticism is much less of a problem on the Internet.
Studies have shown that the extra anonymity the Internet affords leads to a much lower level of social desirability and much less biased responses (Kiesler & Sproull, 1986). It has also been observed that open questions on the web often receive much more comprehensive responses than their traditional paper equivalents (Krantz & Dalal, 2000). Open questions are particularly important during the vital piloting stage, where they allow the experimenter to see which areas they have overlooked and so need to include or revise. No matter what type of question is employed, the experimenter must be careful not to ask 'leading questions' that are worded in such a way as to bias responses by making one attitude or belief look more desirable than another.
When thinking about the creation of a survey, it is also very important (but often overlooked) to consider the order of the questions. It is the responsibility of the researcher to keep the respondent at ease, and to ensure that they do not feel the line of questioning is too personal or blunt. To this end a survey (particularly one investigating attitudes and beliefs) should start with some very light, general questions and then gently move into the more personal and serious areas. It is equally important to lead out of the questionnaire in the same style, finishing with more light-hearted questions to leave the respondent at ease.
From another perspective, the order in which questions are laid out can be used to 'cognitively guide' the respondent into a certain mindset that helps them to give accurate responses (Schuman & Presser, 1996). The questions can also often be found to lead on from one another. It has been suggested that order is potentially jeopardised when the survey is presented on the Internet, as participants are more leisurely in their involvement and may skip between questions as they want (Epstein et al., 2001). This need not be the case, however, as long as the firm creating the survey takes it into account.
The program running the survey can be written in such a way that the questions are presented only in small sets, and all fields (response blocks) must be filled in before the respondent may move on to the next set. In this way there is actually more potential control over order in web-based surveys than in paper ones. While on the subject of respondent controls, two other problems are environment and presentation format on the web. Both of these factors have been shown to significantly affect responses in questionnaires, and yet both are very difficult to control when Internet users are taking part in the survey. Environment can be easily regulated in traditional surveys, but for Internet users it can range from home to school or work, and can vary in background noise, time taken, interruptions and distractions, and even time of day.
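The gating scheme described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the set names and helper function are invented for this example, not taken from any cited study): questions are grouped into ordered sets, and the respondent cannot advance until every field in the current set has been answered.

```python
# Hypothetical sketch of order control in a web survey: questions are
# served in fixed sets, and progression is blocked until every field
# in the current set is filled in. All names here are illustrative.

QUESTION_SETS = [
    ["age", "occupation"],           # light demographic opener
    ["attitude_q1", "attitude_q2"],  # core attitude items
    ["open_comments"],               # light-hearted / catch-all closer
]

def next_set(current_index, responses):
    """Return the next question set, None if the current set is
    incomplete (the respondent may not skip ahead), or [] when
    the survey is finished."""
    required = QUESTION_SETS[current_index]
    missing = [q for q in required if not responses.get(q, "").strip()]
    if missing:
        return None  # block progression until all fields are answered
    if current_index + 1 < len(QUESTION_SETS):
        return QUESTION_SETS[current_index + 1]
    return []  # survey complete
```

A respondent who has answered only "age" in the first set would be held on that page, whereas a complete set releases the next one in the fixed order.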
The best that can be done here is to present a brief preliminary statement before the survey is administered, requesting that a few basic rules regarding some of these factors be adhered to. These should not be made too stringent, however, as otherwise potential participants may be 'scared off'. The issue of presentation format is currently much more of a concern, as it can be greatly affected by the type of computer being used, the service provider used to access the Internet, and the browser software employed to view the page.
In both the presentation and environment cases, it may be important to ask some questions (again, probably as part of the demographic form) about all of these factors so that any potentially confounding variables can be considered. Once more this would be particularly useful as part of the piloting study, as specific tests can then be run on the data to see which, if any, of these factors significantly affect responses and so should be controlled for.
It is of prime importance to consider the security of the information you are given when conducting a survey. The ethical guidelines of the various national psychological associations all state that it is the responsibility of any psychologist to respect the confidentiality of their subjects. Moreover, a basic prerequisite of surveys such as the ones discussed here is a statement that the information being collected will be used only in the form of anonymous data, and only for the purposes of this research, unless permission is specifically sought for the reproduction of any quotes.
Unfortunately, the very nature of the Internet means that data protection in this public domain can be difficult (Liaw, 2002). What may be necessary is some sort of internal network for storage of the information, with a 'gateway' computer providing encryption and controlled access. Another option is some form of site registration with passwords. This would be an inconvenience for many users, but it could be incorporated with the demographic questions to save time and effort.
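One small but concrete piece of the registration option can be sketched as follows. This is an assumption-laden illustration, not a full security design: it shows only how a site might store respondents' passwords as salted hashes (using Python's standard `hashlib` and `hmac` modules) so that the raw passwords never sit in the survey database.

```python
# Hypothetical sketch: store only a salted hash of each registered
# respondent's password, never the plain text. This is one small part
# of protecting survey data, not a complete security scheme.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted PBKDF2 hash of the password."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Re-derive the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

Access control, encrypted transport, and the 'gateway' storage arrangement mentioned above would all sit on top of a basic measure like this.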
There are, then, a number of suggestions that can be made to a firm wishing to run attitude and belief surveys on their website. They should identify their targets and collect their sample accordingly. They should carefully consider which types of questions to employ, and how to extrapolate the results. They should be mindful of order effects and construct the survey accordingly. They should ensure that they are able to follow the ethical guidelines established in psychology, with particular emphasis on data protection. It is also advisable that they collect as much demographic data as possible on the respondents, so that they understand who they are working with and what effects this may have.
They should seek to control as many independent variables as possible. The benefits of running a survey on the Internet are obvious: it allows access to a great number of people quickly and efficiently, it is far cheaper and easier to organise than a traditional paper-and-pencil survey, and it allows the automation of much of the time-consuming data transfer and analysis. Despite all this, the last piece of advice for the firm could be that there may be great benefit in running their Internet survey in conjunction with a traditional one.
There are still a number of questions yet to be fully answered regarding the effective use of web-based surveys, and still a number of potential confounds that should be explored. The validity of web-based studies has yet to be adequately assessed (Buchanan & Smith, 1999). By comparing the web survey results to those of the same questionnaire run in the traditional manner (perhaps on a much smaller sample, purely for comparative purposes), the reliability of the findings could be more accurately assessed.
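The comparison suggested above could begin with something as simple as correlating item-level means across the two administration modes. The sketch below is purely illustrative (the data values are invented, and a real validation study would use more formal psychometric methods), but it shows the basic idea of a convergence check between the web sample and a smaller paper sample.

```python
# Hypothetical convergence check: correlate mean item scores from the
# web-administered survey with the same items administered on paper.
# A high positive correlation would be one rough indicator that the
# two modes are measuring attitudes similarly. Data are illustrative.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

web_means   = [3.1, 4.2, 2.8, 3.9, 4.5]  # invented item means (web sample)
paper_means = [3.0, 4.4, 2.7, 4.0, 4.3]  # invented item means (paper sample)
r = pearson(web_means, paper_means)
```

On its own a correlation of means says nothing about individual-level agreement, so in practice this would only be a first screen before fuller reliability and validity analyses.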
Bailey, R.D., Foote, W.E., & Throckmorton, B. (2000). Human Sexual Behaviour: A Comparison of College and Internet Surveys. In M.H. Birnbaum (Ed.), Psychological Experiments on the Internet. London: Academic Press.
Best, S.J., Krueger, B., Hubbard, C., & Smith, A. (2001). An Assessment of the Generalisability of Internet Surveys. Social Science Computer Review, 19(2), 131-145.
Birnbaum, M.H. (Ed.) (2000). Psychological Experiments on the Internet. London: Academic Press.
Buchanan, T., & Smith, J.L. (1999). Using the Internet for Psychological Research: Personality Testing on the Web. British Journal of Psychology, 90, 125-144.