Individualised Rating‑Scale Procedure: A Means of Reducing Response Style Contamination in Survey Data?

Authors

  • Elisa Chami-Castaldi
  • Nina Reynolds
  • James Wallace

Keywords:

scale length, response styles, response bias, survey research, cross-cultural surveys, individualised rating-scale procedure

Abstract

Response style bias has been shown to seriously contaminate the substantive results drawn from survey data, particularly data collected from cross-cultural samples. As a consequence, researchers have called for the identification of response formats that suffer least from response style bias. Previous studies show that respondents' personal characteristics, such as age, education level and culture, are connected with response style manifestation. Differences in the way respondents interpret and utilise researcher-defined fixed rating-scales (e.g. Likert formats) pose a problem for survey researchers. Techniques currently used to remove response bias from survey data are inadequate, as they cannot accurately determine the level of contamination present and frequently blur true-score variance. Inappropriate rating-scales can affect the level of response style bias manifested, insofar as they may not represent respondents' cognitions. Rating-scales that are too long present respondents with response categories that are not 'meaningful' to them, whereas rating-scales that are too short force respondents to compress their cognitive rating-scales into the number of response categories provided; such compression can cause extreme response style (ERS) contamination. Researchers are therefore unable to guard against two respondents who share the same cognitive position on a continuum reporting that position using different numbers on the rating-scale provided. This is especially problematic where a standard fixed rating-scale is used in cross-cultural surveys. This paper details the development of the Individualised Rating-Scale Procedure (IRSP), a means of eliciting a respondent's 'ideal' rating-scale length for use as the measurement instrument in a survey, and as such 'designing out' response bias. Whilst the fundamental ideas behind self-anchoring rating-scales have been posited in the literature, the IRSP was developed through a series of qualitative interviews with participants. Finally, we discuss how the IRSP's reliability and validity can be quantitatively assessed and compared against typical fixed researcher-defined rating-scales, such as the Likert format.
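
To make the compression mechanism concrete, the following is a minimal simulation sketch in Python (it is not taken from the paper; the quantise and ers_index helpers, and the assumption of a uniform latent continuum, are purely illustrative). It shows two respondents who share the same latent position reporting different numbers on a 5-point scale, and a crude endpoint-proportion index that rises as the provided scale shortens.

    import random

    def quantise(latent, k):
        """Snap a latent position in [0, 1] onto a k-point scale (1..k)."""
        return round(latent * (k - 1)) + 1

    # Two hypothetical respondents share the same latent position, but
    # hold cognitive rating-scales of different lengths (11 vs 3 points).
    latent = 0.80
    cognitive_a = (quantise(latent, 11) - 1) / 10  # fine-grained cognition
    cognitive_b = (quantise(latent, 3) - 1) / 2    # coarse cognition

    # Both must answer on the same researcher-defined 5-point scale.
    print(quantise(cognitive_a, 5))  # -> 4
    print(quantise(cognitive_b, 5))  # -> 5: same position, different report

    # Crude ERS index: the proportion of endpoint responses across many
    # simulated items, for respondents spread uniformly over the continuum.
    def ers_index(k, n=100_000, seed=42):
        rng = random.Random(seed)
        hits = sum(quantise(rng.random(), k) in (1, k) for _ in range(n))
        return hits / n

    print(f"3-point scale: {ers_index(3):.2f}")  # ~0.50 endpoint responses
    print(f"9-point scale: {ers_index(9):.2f}")  # ~0.12 endpoint responses

Under these assumptions, shortening the scale mechanically inflates endpoint use, which is the ERS pattern described above for rating-scales that are too short.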

Published

1 Sep 2008

Section

Articles