Surveys are frequently used in both daily life and social and psychological research to collect information from a sample of individuals through their responses to questions. The primary purpose of a survey is to obtain information describing the characteristics of a large sample of individuals of interest relatively quickly (Ponto, 2015). Surveys range from asking consumers for feedback on a grocery shopping trip to rigorous, large-scale censuses. But is a survey really that easy to compose and use?
How to write survey questions
Well, it may seem easy to write survey questions, as long as the researcher is clear about the overall goal of the survey and knows what types of questions are of interest. But let’s consider the following points.
For one thing, researchers need to remain tactful and unbiased in tone. Compare the following questions (Ornstein, 2013):
- Are you in favor of or opposed to a big increase in government spending to increase the opportunities for poor people so they can have a better chance of getting ahead in life?
- Are you in favor of or opposed to a big increase in government spending to increase the opportunities for poor people even if it means higher taxes?
Despite asking about exactly the same policy, the two questions received 87% and 52% support, respectively. That is because the second question emphasizes the potential cost. A balanced formulation would combine the final phrases of both questions. The reality is that there are often a variety of arguments on each side of an issue, so prior empirical research should be done to include as many of those arguments as possible and remain unbiased.
Second, the form of the questions also matters. Different question forms (e.g. rating scales, ranking, open-ended, and closed-ended) can have significant impacts on the responses. For example, when open and closed questions on the same topic were compared, non-response was about twice as high for the open questions (Ornstein, 2013). More importantly, the response distributions differed. For less emotional responses such as “inflation”, both question types received similar proportions of respondents. For affective responses, however, the closed format made selection much more likely: 35% of respondents chose “crime and violence” as the most important national problem in the closed question, compared to 16% in the open question. Thus, it is crucial for the researcher to understand the strengths and weaknesses of each question type in order to collect the most useful data.
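Since the open/closed comparison above rests on a difference between two proportions, one quick check a researcher might run is a two-proportion z-test. The sketch below is illustrative only: the sample sizes (500 per group) are assumptions, not figures reported by Ornstein (2013).

```python
# Two-proportion z-test: does 35% vs. 16% exceed what sampling noise
# would explain? The sample sizes n1 and n2 are hypothetical.
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Return the z statistic for the difference between two proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)               # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))   # standard error
    return (p1 - p2) / se

# 35% chose "crime and violence" in the closed form, 16% in the open form
z = two_proportion_z(0.35, 500, 0.16, 500)  # n = 500 per group is assumed
print(round(z, 2))
```

With these assumed sample sizes the z statistic is far above the conventional 1.96 cutoff, which is consistent with the claim that the two formats produce genuinely different response distributions rather than a chance fluctuation.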
A commonly used form: Likert scale
The Likert scale is an ordinal psychometric measurement. For each statement, a respondent indicates a degree of agreement or disagreement about attitudes, beliefs, or opinions. It originated with Rensis Likert and has a long history of use in kinesiology research (Bishop & Herron, 2015). The Likert scale can be seen as one of the most universal methods of survey collection because it is easy to understand, and its responses are quantifiable and amenable to mathematical analysis. However, it should be used with caution.
Bishop and Herron (2015) raised the problem of whether the intervals between the item choices, from “Strongly Disagree” to “Disagree”, “Neutral”, “Agree”, and “Strongly Agree”, are consistent. That is, in the mind of the respondent, is there an equal distance between each of these choices? We may be very sensitive to the change from “Disagree” to “Neutral”, but not from “Strongly Disagree” to “Disagree”. Another issue is the anchor effect, where the choices at the two ends of a Likert item tend to be selected less often than the central ones. Frequently, people avoid the “extremes” so as not to be seen as “extremists”.
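The interval-consistency problem has a practical consequence: summarizing Likert responses with a mean implicitly treats the scale as interval data. A minimal sketch, using the conventional 1–5 coding and a made-up set of responses, shows how the mean can suggest a “neutral” attitude that no respondent actually holds:

```python
# Why treating Likert responses as interval data can mislead.
# Coding: 1 = Strongly Disagree ... 5 = Strongly Agree.
# The responses below are invented for illustration: a polarized item.
from statistics import mean, median, mode

responses = [1, 1, 1, 2, 4, 5, 5, 5, 5, 5]

avg = mean(responses)    # interval-style summary
med = median(responses)  # ordinal-appropriate summary
mod = mode(responses)    # most common response

print(avg, med, mod)  # → 3.4 4.5 5
```

The mean of 3.4 sits near “Neutral”, yet almost no one chose the middle of the scale; the median and mode make the polarization visible, which is one reason ordinal summaries are often recommended for single Likert items.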
How to structure survey questions
Another often-overlooked source of problems when constructing a survey is poorly designed question structure (see Avoiding Survey Bias): the order in which questions are revealed to the respondent, the number of questions per page, the overall survey length, and so on. Each of these can result in survey bias and dropouts. To avoid introducing such bias, the researcher should think carefully about how to structure the survey. For example, the most threatening or personal-seeming questions should be placed at the end of the survey.
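One common tactic that combines two of the points above is to randomize the order of neutral questions per respondent (to dilute order effects) while always keeping the sensitive items last. The sketch below illustrates the idea; the question texts and the `build_questionnaire` helper are invented for this example:

```python
# Per-respondent question ordering: shuffle neutral items, keep
# sensitive items fixed at the end. All question texts are invented.
import random

neutral = [
    "How often do you shop for groceries?",
    "Which store do you visit most often?",
    "How do you usually travel to the store?",
]
sensitive = [
    "What is your approximate monthly income?",
]

def build_questionnaire(neutral, sensitive, seed=None):
    """Return one respondent's question order: shuffled neutral
    items followed by the sensitive items in fixed order."""
    rng = random.Random(seed)  # seed allows reproducible orderings
    return rng.sample(neutral, k=len(neutral)) + sensitive

q = build_questionnaire(neutral, sensitive, seed=42)
assert q[-1] == sensitive[0]  # the sensitive question is always last
```

Seeding per respondent (e.g. from a respondent ID) also makes each individual's ordering reproducible, which helps when later analyzing whether question position affected the answers.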
When it is in the hands of respondents
There is a concept called respondent bias, which leads to error in a survey study because of participants’ inability or unwillingness to provide accurate or honest answers (see Tips for Avoiding Respondent Bias).
It is quite common for respondents to be unable to answer questions 100% accurately; most often, this results from unfamiliarity, respondent fatigue, faulty recall, question format, or question context. Respondents may also simply have a natural desire to provide socially acceptable answers, whether to avoid embarrassment or to please the institution conducting the study. A case in point is a survey that investigates a person’s relationship with the law and is carried out by a government body or community authority. Other instances are surveys that include sensitive items such as drug use, income, or sexual behavior. To prevent this type of respondent bias, the researcher should understand what types of questions make respondents believe that their answers could have a positive or negative impact on their image in the eyes of society.
All in all, the intention of this blog post is not to provide a thorough and detailed guide on “How to Do a Survey” or “Principles of Survey Design”. Rather, it revisits several steps in constructing and conducting a survey, in an attempt to show how complex a survey can be, and hopefully to raise researchers’ awareness before they start one.
References
Bishop, P. A., & Herron, R. L. (2015). Use and Misuse of the Likert Item Responses and Other Ordinal Measures. International Journal of Exercise Science, 8(3), 297-302.
Ponto, J. (2015). Understanding and Evaluating Survey Research. Journal of the Advanced Practitioner in Oncology, 6(2), 168-71.
Ornstein, M. (2013). A Companion to Survey Research. London: SAGE Publications.
Tips for Avoiding Respondent Bias. Retrieved Nov. 14, 2018, from http://fluidsurveys.com/university/tips-for-avoiding-respondent-bias/
Avoiding Survey Bias. Retrieved Nov. 14, 2018, from http://fluidsurveys.com/university/avoiding-survey-bias/