This year was a good one for CHI rebuttal writing. I say that not knowing whether our rebuttal swayed any of the reviewers one way or another. But we took a different approach to this year’s CHI reviews than we have in years past: we made changes to our paper as we wrote the rebuttal. Changing the paper became a way to think through the viability of each critique, and the rebuttal became (primarily) a record of changes already made to the submission. It may not be an approach for everyone, but I recommend trying it to see whether and how it works for you. And I’d be curious to hear from others who take this approach when writing rebuttals (with short turnaround times) about how it has worked!
Some additional good news to report: I submitted an abstract for a short paper to the upcoming IASDR conference in Cincinnati about some early-stage research I’m working on with Erik Stolterman, and the abstract has been accepted! So now we’re writing the short paper and creating a poster to present at the conference.
Here is the abstract we submitted:
Scholars in a variety of academic disciplines have studied the peer review process. There are examinations of the biases that pervade peer review (Lee, Sugimoto, Zhang, & Cronin, 2013). Other studies propose tools or methods that might be useful for improving or standardizing the peer review process (Hames, 2008; Onitilo, Engel, Salzman-Scott, Stankowski, & Doi, 2013). Still others examine the kinds of criteria that ought to be relied upon in peer review, and in some cases these criteria are widely known and agreed upon. In the natural sciences, for example, we might say that there is a relatively stable set of criteria that can be used to assess the rigor, relevance, and validity of a scientific knowledge contribution. In this paper, our aim is to examine the process of peer review as it pertains to research through design. We aspire to build an understanding of the criteria scholars use when a design or prototype is the main contribution. How do reviewers evaluate designs as knowledge contributions? Is there any uniformity or stability to the review criteria? Are criteria from other fields (e.g., scientific criteria) used to evaluate designs? Toward this end, we report the outcome of a survey conducted with a group of meta-reviewers (n=15) from the design subcommittee for the 2017 Computer-Human Interaction (CHI) Conference, the flagship conference in our field of expertise. The design subcommittee reviews papers that “make a significant designerly contribution to HCI [including but not limited to] novel designs of interactive products, services, or systems that advance the state of the art.” Our findings suggest that there is little agreement on a common set of criteria for evaluating research through design.
I look forward to sharing more as this important project moves forward!