ABSTRACT

You are reading a journal article: an empirical study of emotional intelligence in the workplace. It’s well written, with a survey and critique of the relevant literature, a well-designed questionnaire supplemented by interviews with managers, key themes and categories drawn from the data, and findings that support the hypotheses. The paper develops the concept of emotional intelligence and assesses the value of training and development in helping managers to become more emotionally intelligent. Despite this, there’s something niggling away at the back of your mind. You can’t find fault with the logic behind the study, and it draws on an impressive sample of respondents across 10 organizations. The analysis and conclusions follow the data... but there’s just something...? Then you get it: it feels too neat! Everything fits together, the data proves the hypotheses nicely and it’s a done deal... But life isn’t that simple, is it? What were the suppositions behind the researcher’s choice of questions? How did the survey and interview questions frame the responses? Were there other aspects of emotional intelligence the managers wanted to talk about, but didn’t, because they didn’t have the opportunity? What issues influenced the researcher’s interpretation of the data? Do managers themselves think about emotional intelligence as a four-by-four matrix? Just how does the theory relate to practice, and why do we as academics think we know our respondents’ thoughts and intentions, and understand their actions, even though they themselves may not?