IASDR 2017


Some additional good news to report. I submitted an abstract for a short paper to the upcoming IASDR conference in Cincinnati about some early-stage research that I’m working on with Erik Stolterman, and the abstract has been accepted! So now we’re writing the short paper and creating a poster to present at the conference.

Here is the abstract we submitted:

Scholars in a variety of academic disciplines have studied the peer review process. There are examinations of the biases that pervade peer review (Lee, Sugimoto, Zhang, & Cronin, 2013). Other studies propose tools or methods that might be useful for improving or standardizing the peer review process (Hames, 2008; Onitilo, Engel, Salzman-Scott, Stankowski, & Doi, 2013). Still others examine the kinds of criteria that ought to be relied upon in peer review processes, and in some cases these criteria are widely known and agreed upon. In the natural sciences, for example, we might say that there is a relatively stable set of criteria that can be used to assess the rigor, relevance, and validity of a scientific knowledge contribution. In this paper, our aim is to examine the process of peer review as it pertains to research through design. We aspire to build an understanding of the criteria scholars use when a design or prototype is the main contribution. How do reviewers evaluate designs as knowledge contributions? Is there any uniformity or stability to the review criteria? Are criteria from other fields (e.g. scientific criteria) used to evaluate designs? Toward this end, we report the outcome of a survey conducted with a group of meta-reviewers (n=15) from the design subcommittee for the 2017 Computer-Human Interaction (CHI) Conference, which is the flagship conference in our field of expertise. The design subcommittee reviews papers that “make a significant designerly contribution to HCI [including but not limited to] novel designs of interactive products, services, or systems that advance the state of the art.” Our findings suggest that there is little agreement on a common set of criteria for evaluating research through design.

I look forward to sharing more as this important project moves forward!

About The Theory-Practice Gap


I’ve been spending some time looking through the CHI best paper award winners from the past five years — all the while continuing to think about the theory-practice gap. And now I have a question. How is it that we distinguish between theorists and practitioners? Who is creating the knowledge that seems to lack practical utility or accessibility?

Just looking at the best papers, one might be struck by the volume of publications using theory, models, frameworks, etc. to do design work. And judging from the author credentials, there is quite a lot of industry collaboration, which makes me think that practitioners (if an academic/industry credential can be casually used to make this distinction) are not only using theory but in some cases actively contributing to it.

The theory-practice gap is a simple, useful metaphor in the sense that it has guided researchers to ask interesting questions and pursue intriguing and insightful projects (think of things like intermediate-level knowledge objects). But the metaphor has been in use for quite a long time, in HCI and in other disciplines, and I'm curious to know whether it has outlived its relevance in spite of its apparent utility.