Chapter 24

Survey-based studies

Sanjun Sun

The survey is probably the most common empirical research method in the social sciences and the humanities. It is a method designed to gather data about a human population (typically through a sample drawn from it) by means of a sequence of focused questions. One distinguishing characteristic of the survey, according to Marsh (1982) and De Vaus (2011), concerns the form of its data: a structured set of data that forms a rectangle (or variable-by-case grid), in which rows usually represent cases (e.g., respondents, countries), columns represent variables (i.e., questions), and the cells contain information about a case’s attributes (e.g., respondents’ answers). Experiments and tests also use data in this form. The experimental method differs from the survey method in that, with the former, “the variation between the attributes of people is created by intervention from an experimenter wanting to see if the intervention creates a difference” (De Vaus 2014: 5).
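
The variable-by-case grid described above can be pictured as a small table of responses. The sketch below, in Python using pandas, is purely illustrative: the column names and values are invented rather than drawn from any actual survey.

```python
import pandas as pd

# A minimal variable-by-case grid: rows are cases (here, respondents),
# columns are variables (questions), and each cell holds a case's
# attribute. All names and values are invented for illustration.
survey_data = pd.DataFrame(
    {
        "age": [24, 31, 47],
        "mother_tongue": ["Spanish", "English", "Chinese"],
        "years_translating": [2, 8, 20],
        "attitude_to_mt": [4, 2, 5],  # an attitudinal item on a 1-5 scale
    },
    index=["respondent_1", "respondent_2", "respondent_3"],
)

print(survey_data)
```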

There are two data collection mechanisms used in surveys: standardized interviews and self-administered questionnaires. Standardized interviews are often conducted in person or over the phone, while self-administered questionnaires are used in group settings (e.g., in a classroom) or in individual settings (e.g., a postal survey) (De Leeuw 2008). Each of these forms has a computer-assisted equivalent, such as Internet surveys, which are now more common than postal ones.

Surveys are often used to collect different types of data by asking questions, including (1) factual questions, regarding the demographic characteristics (e.g., age, gender, mother tongue, level of education) of the respondents, to help interpret the findings of the survey; (2) behavioral questions, regarding such things as personal history and language learning strategies; and (3) attitudinal questions, concerning the respondents’ attitudes, opinions, beliefs, interests, and values (Dörnyei and Taguchi 2010: 8-9).

Based on the format of the responses, there are two broad types of questions in a survey: open-ended and closed-ended. Open-ended questions do not provide specific answer alternatives and ask the respondent to provide his or her own answers. They can elicit rich information, but they are not easy to code or quantify. As a result, survey researchers often use open-ended questions in pilot studies and pretests to help design closed-ended questions, and use them sparingly in formal questionnaires (e.g., Holyk 2008). Closed-ended questions contain a predetermined set of answer alternatives for the respondent to select, and can be grouped into two classes: structured answer (dichotomous and multiple choice) and scales (McNabb 2010: 118). Scales are collections of items that measure the level of an underlying variable (DeVellis 2012: 15), which is placed along a quantitative continuum (e.g., from very favorable to very unfavorable in attitude or opinion). Scales are usually used in the measurement of attitudes.

There are many types of measurement scales, which fall into two broad categories: comparative and non-comparative. Comparative scales allow the respondent to compare two or more items, while a non-comparative scale allows the respondent to evaluate only a single item. The former category includes paired comparison, rank order, and others; the latter includes, among others, the Likert scale (Likert 1932), which is the most widely used rating scale (see Reddy and Acharyulu 2008: 101). The Likert scale consists of multiple items that are typically summed or averaged to yield an overall score (Brill 2008), and usually includes five to seven response categories, e.g., Strongly Agree, Agree, Neither Agree Nor Disagree, Disagree, and Strongly Disagree. An important consideration for the response categories is whether or not to include a middle position (e.g., “Neither Agree Nor Disagree”) or a “No opinion” option. There have been studies supporting both possibilities (e.g., Maitland 2008).
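
A minimal sketch of this scoring procedure is given below, assuming five hypothetical items rated from 1 (Strongly Disagree) to 5 (Strongly Agree); the reverse-coding of a negatively worded item is a common additional step rather than something prescribed above, and all responses are invented.

```python
import pandas as pd

# Hypothetical responses to a five-item Likert scale (rows = respondents).
items = pd.DataFrame(
    {
        "item1": [5, 4, 2],
        "item2": [4, 4, 1],
        "item3": [5, 3, 2],
        "item4": [4, 5, 1],
        "item5": [2, 1, 5],  # assumed to be a negatively worded item
    }
)

# Reverse-code the negatively worded item so that higher values always
# indicate a more favorable attitude (5 -> 1, 4 -> 2, ...).
items["item5"] = 6 - items["item5"]

# The overall scale score is obtained by summing or averaging the items.
overall_sum = items.sum(axis=1)
overall_mean = items.mean(axis=1)
print(overall_sum)
print(overall_mean)
```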

The steps in conducting survey research include: (1) determining the survey purpose and objectives; (2) defining and operationalizing key concepts; (3) developing specific research questions (and hypotheses); (4) determining the sampling procedure; (5) creating and pretesting the instrument; and (6) collecting, reducing, and analyzing data (e.g., Newman and McNeil 1998; Bartlett 2005). Questions/items need to be reliable and valid (Fowler 2014). Reliability is the extent to which an instrument yields consistent results upon testing and retesting. There are four common types of reliability estimates: test-retest, parallel forms, internal consistency, and inter-rater or inter-observer. To increase the reliability of a scale, one can increase the number of items in the scale and eliminate items that have a lower-than-average correlation with the other items, or low inter-item consistency. These facets, plus item difficulty, are tested through item analysis (Angelelli 2004b). Validity refers to the extent to which an instrument measures what it has been designed to measure. Major measures of validity are face, content, construct, and criterion-related validity (see Brown 2001). Determination of face and content validity evidence is often made by expert judgment; a typical method involves several judges rating each item in terms of its relevance to the content (Angelelli 2004b: 47-63; Kaplan and Saccuzzo 2013: 137). The best way to think about the likely construct validity of a measure is “to see the full wording, formatting, and the location within the questionnaire of the question or questions that were used to gather data on the construct” (Lavrakas 2008: 135).
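
The internal-consistency and item-analysis checks mentioned in the paragraph above can be sketched in code. The paragraph does not name a specific coefficient; Cronbach’s alpha, used below, is one widely used internal-consistency estimate, and the corrected item-total correlation is one way of spotting items with a lower-than-average correlation with the rest of the scale. The function names and data are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items;
    items with low values are candidates for elimination."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1))
         for col in items.columns}
    )

# Hypothetical responses (rows = respondents, columns = scale items, 1-5).
responses = pd.DataFrame(
    {
        "item1": [4, 5, 3, 4, 2],
        "item2": [4, 4, 3, 5, 2],
        "item3": [3, 5, 2, 4, 1],
        "item4": [2, 1, 4, 2, 5],  # deliberately inconsistent item
    }
)

print(cronbach_alpha(responses))
print(corrected_item_total_correlations(responses))
```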

Surveys are mainly used for descriptive or explanatory purposes (e.g., De Vaus 2006). Marsh (1982: 6) claims that “[s]urveys and experiments are the only two methods known to me to test a hypothesis about how the world works.” This, however, is debatable. Dumont (2008: 25) argues that “the survey method does not provide empirical evidence that proves the existence of a causal relationship (only the experimental design can do this)” although it “can provide empirical evidence that a causal relationship between two (or more) variables does not exist.” In translation and interpreting studies (TIS), many kinds of research questions exist, including descriptive and explanatory ones. This makes survey research one of the most frequently used methods in the field (e.g., Toury 2012: 263).