the future of hci research

My PhD advisor, Erik Stolterman, recently penned a blog post called ‘hci research and the problems with the scientific method.’ It’s a good post and you should read it. It was motivated by a New Yorker article from a few years ago about the ‘decline effect’ in science, the phenomenon wherein initially positive experimental results diminish over time. I’m simplifying here, but you can read more in the New Yorker piece itself or the Wikipedia page on the decline effect.

In any event, this article led Erik to speculate about the implications of this ‘problem’ with the scientific method for hci research, a field grappling with a dilemma that he describes as “[stemming] from a shared view that HCI is not really a scientific enterprise while at the same time scientific research is still valued and rewarded.” He then speculates about how hci will develop in relation to science and expresses optimism about where the field will go, and here is where my thinking picks up the thread.

His last paragraph makes me think about the possibility of greater diffusion of the field and what that would mean for something like ‘quality control,’ in other words, “the criteria to assess the quality of the work and the teams that carry it out,” in hci research. I’m borrowing this definition of quality control from Gibbons et al.’s book ‘the new production of knowledge’.

Anyway, when he writes that hci research could move in the direction of science ‘or’ the humanities ‘or’ design, I wonder whether that means one of those paths will become dominant. For instance, if hci moves towards a more scientific tradition, the humanistic and design traditions could become ‘lesser research traditions’ within hci research. And if one tradition becomes dominant, then quality control seems like it might be less of an issue, since science and the humanities, at least, have rich histories and rigorous ways of evaluating research contributions. This might be less true of design research.

But of course there doesn’t have to be a dominant path, and maybe a dominant path is impossible in hci research. Say hci continues to diffuse towards each of the traditions, and maybe even, as he mentions, towards some new ones. If there ceases to be a dominant approach, then quality control probably becomes more context-dependent, temporary, and fluid, descriptors I’m borrowing (again) from Gibbons et al.

With respect to the dilemma he describes, wherein the field denies that hci is a scientific enterprise while simultaneously valuing and rewarding scientific research, I think the possible “diffuse” future encourages a resolution in which we value and reward scientific research, humanistic research, and design research alike, and perhaps spend more time thinking about the relationships between these different traditions.

hci theory

I’m re-reading parts of Yvonne Rogers’s good book, HCI Theory: Classical, Modern, and Contemporary, for a summer research project, and I’m filled with validation and interest/intrigue at some of the claims she makes. The validation stems from the solid grounds the book provides for paying more attention to how hci researchers (design-oriented and otherwise) use theory in their publications, and the interest/intrigue stems from one of the reasons she gives for the gap between theory and practice: some theory requires too much work to apply in practice.

In the very last chapter of the book, when she writes about why some theory is “more successful” than other theory at bridging the gap with practice, she provides a nice, succinct list of reasons why the less successful theory falls short. In short, when theory fails to bridge the theory-practice gap, it is because:

  • there is too much work required to understand and apply the theory,
  • the theory is non-intuitive to use, or
  • the theory is adapted as a generalizable method.

With regard to this last reason, when a theory is adapted as a generalizable method, this fails because:

  • theories do not “do” design,
  • theories are not easily related to current practice,
  • a complete theory/design cycle has not yet matured, and, again,
  • a lot of work is required even to understand and apply a generalizable method, and finally
  • there is a lack of consensus about what contribution theories adapted as generalizable methods should make to interaction design.

The framing question of our research project is (as it was for a similar project carried out in design research): how is theory used in written texts? Put this way, we frame theory as an object (maybe a designed object) to be used by users (researchers). And Rogers’s list, then, can be understood as a list of the things that make theory unusable, a compass pointing towards “usability guidelines” for theory designed to bridge the theory-practice gap.

But I’m curious about how these guidelines were generated from her survey of theory use in the field. The book is quite broad in its coverage of theory use. Does that breadth result in a focus on what we could call “revolutionary” theories (to capture their impact on the field) while other kinds of theories are omitted? I’m playing Kuhn to her Popper, here. Her discussion of the role(s) theory plays in hci research, in an earlier chapter, is also quite broad. It encompasses a lot, but in its breadth does it lose the details of “everyday, normal” theory use in hci research? These are interesting and important questions, perhaps especially in light of the picture she paints in her opening chapters of a field in danger of “weakening its theoretical adequacy.”