I’ve been collecting readings for a new project focused on HCI as it pertains to organizing and activism. So far, a quick skim has yielded some interesting insights and questions. Here is a link to a preliminary bibliography of relevant readings [added 12/26/2017 @817PM EST – some entries are incomplete]. I will update the file on a semi-regular basis in case there is any interest in resource sharing. I’ve been working my way through these texts with an eye towards the following:
- activists’ unique needs/constraints (there are many)
- nature of the collaborative relationship (I like the idea that contestational design requires that designers/researchers “take sides on contentious social issues” as a necessary part of their work)
- key artifacts like TXTmob, Dialup Radio, resist.org, Protest.net, Indymedia, and many others (case studies coming if they don’t already exist)
- key methodologies (content analysis of social media seems common / embedding in volatile places and critical making workshops are less common)
- key theoretical influences, such as Mouffe, Laclau & Mouffe, and Habermas, and
- generally interesting bits of text (always part of my approach… identify things that are interesting for reasons that aren’t immediately obvious)
I’m eager to continue this project and interested in potential collaborations with others working in this area.
Looking forward to CHI 2018 in Montreal, QC, Canada! This year I am humbled to be first author on one full paper and a contributing author on a second full paper. Two full papers! Huzzah! The first-author paper examines the concept of the theory-practice gap as a generative metaphor. Here is the abstract:
The theory-practice gap is a well-known concept in HCI research. It provides a way of describing a space that allegedly exists between the theory and practice of the field, and it has inspired many researchers to propose ways to “bridge the gap.” In this paper, we propose a novel interpretation of the gap as a generative metaphor that frames problems and guides researchers towards possible solutions. We examine how the metaphor has emerged in HCI discourse, and what its limitations might be. We raise concerns about treating the gap as given or obvious, which could reflect researchers’ tendencies to adopt a problem solving perspective. We discuss the value of considering problem setting in relation to the theory-practice gap, and we explore Derrida’s strategy of “reversal” as a possible way to develop new metaphors to capture the relationship between theory and practice. (https://doi.org/10.1145/3173574.3174194)
I’m excited to talk about this ongoing project and to discuss its potential with other members of the HCI research community. Onward!
I’ve been wondering for a few months about how to orient my HCI work towards organizing for causes, and today I am pleased to say I started making some headway. I made a few connections with folks at the AAUP, and I’ve got messages out to folks at Action Network. At this stage, I’m curious to know more about organizers, activists, and community members engaged with different causes and, in particular, the role technology plays in supporting their activities. Seems as good a time as any to start doing this sort of work.
Theory in HCI research appears to be of interest to a number of researchers working in the field. Theory use, which refers to the different roles or functions theory may play in scholarly research or publishing, is one way of exploring the topic. But, in my view, neither topic has been framed as an HCI problem.
Each has been framed as a problem of maturity (or, more accurately, one of immaturity) and, perhaps more recently, as a problem of identity. But these framings transcend the field of HCI research. They are (and have been) relevant to many other academic disciplines.
To the extent that HCI is grappling with its maturity (or immaturity) and/or its identity as an intellectual community, theory and theory use are relevant topics of study. But they have not been formulated or engaged with in terms of human-computer interaction. Such a formulation would be a necessary and productive step forward in the discourse.
Most of my blog entries are announcements of publications or generic thought pieces about topics of interest. One thing I’d like to do differently going forward is to keep track of current research projects here at Penn State University’s Center for Human-Computer Interaction (C4HCI) as a way of providing some insight and value to folks outside of my network, especially members of community organizations, local government, and industry.
Community Data & Water Quality. A current project underway here at C4HCI has to do with water quality data as a kind of community data around which different stakeholders could organize and act. There are already several groups in State College collecting and analyzing water quality data, and our working assumption is that this data could be made accessible and interpretable to a wider group of residents.
In my view, and in an ideal case, the outcome of such a project would be a more data-literate citizenry, confident and capable of engaging with local government around water quality (and other) policy and decision-making.
Tonight we held the first meeting of what we hope will be a series of conversations and workshops with folks who work for and with different water quality collection groups in Centre County, including: ClearWater Conservancy, PaSEC, WRMP, and Trout Unlimited. The goal of the meeting was to share our vision for a possible project built around water quality data and to engage in a meaningful conversation about what a collaboration between our groups could look like. The meeting was great. Our group learned a lot about the kinds of data these different organizations collect and about some of the barriers to sharing/using the data that we had not yet considered. We even identified a possible opportunity for supporting (one of) their efforts to build out an online resource for community members to learn more about water quality issues.
Looking forward to more!
I’m really excited about an introductory HCI course I’m developing (with some amazing collaborators) for the spring semester. For the last week or so, I’ve been working with several practicing designers to establish a set of core skills interns and/or entry-level designers ought to know in order to succeed in the workplace. Their comments and insights have been interesting to read, inspiring to think about, and generative of a much stronger course design than if I had worked independently. I’m appreciative of their help, and I look forward to sharing this collaborative approach with the students in my section. Onward!
This year was a good one for CHI rebuttal writing. I say that not knowing whether our rebuttal swayed any of the reviewers one way or another. But we took a different approach to this year’s CHI reviews than we have in years past. This year, we made changes to our paper as we wrote the rebuttal. Changing the paper became a way to think through the viability and possibility of each critique, and the rebuttal became (primarily) a record of changes already made to the submission. It may not be an approach for everyone, but I totally recommend trying it to see whether and how it works. And I’d be curious to hear from others who take this approach when writing rebuttals (with short turnaround times) about how it has worked!
Some additional good news to report. I submitted an abstract for a short paper to the upcoming IASDR conference in Cincinnati about some early-stage research that I’m working on with Erik Stolterman, and the abstract has been accepted! So now we’re writing the short paper and creating a poster to present at the conference.
Here is the abstract we submitted:
Scholars in a variety of academic disciplines have studied the peer review process. There are examinations of the biases that pervade peer review (Lee, Sugimoto, Zhang, & Cronin, 2013). Other studies propose tools or methods that might be useful for improving or standardizing the peer review process (Hames, 2008; Onitilo, Engel, Salzman-Scott, Stankowski, & Doi, 2013). Still others examine the kinds of criteria that ought to be relied upon in peer review processes, and in some cases these criteria are widely known and agreed upon. In the natural sciences, for example, we might say that there is a relatively stable set of criteria that can be used to assess the rigor, relevance, and validity of a scientific knowledge contribution. In this paper, our aim is to examine the process of peer review as it pertains to research through design. We aspire to build an understanding of the criteria scholars use when a design or prototype is the main contribution. How do reviewers evaluate designs as knowledge contributions? Is there any uniformity or stability to the review criteria? Are criteria from other fields (e.g. scientific criteria) used to evaluate designs? Toward this end, we report the outcome of a survey conducted with a group of meta-reviewers (n=15) from the design subcommittee for the 2017 Computer-Human Interaction (CHI) Conference, which is the flagship conference in our field of expertise. The design subcommittee reviews papers that “make a significant designerly contribution to HCI [including but not limited to] novel designs of interactive products, services, or systems that advance the state of the art.” Our findings suggest that there is little agreement on a common set of criteria for evaluating research through design.
I look forward to sharing more as this important project moves forward!
After a few years of submitting papers to HCI venues and learning how to cope with rejection after rejection after rejection*, I finally managed to get one accepted at ACM Designing Interactive Systems (DIS) 2017.
It’s a full paper, and it’s the outcome of a collaboration with Erik Stolterman. Here’s the abstract:
What are big questions? Why do scholars propose them? How are they generated? Could they be valuable and useful in HCI research? In this paper we conduct a thorough review of “big questions” literature, which draws on scholarship from a variety of fields and disciplines. Our intended contribution is twofold. First, we provide a substantive review of big questions scholarship, which to our knowledge has never been done before. Second, we leverage this summary as a means of examining the value and utility of big questions in HCI as a research discipline. Whether HCI decides that generating and having big questions would be a desirable path forward, we believe that examining the potential for big questions is a useful way of becoming more reflective about HCI research.
I’ll add a link to the draft soon, so if you find the abstract intriguing please do check back to download the paper. Can’t wait to visit Edinburgh!
*If you’re looking for an entertaining text on rejection-proofing yourself, I highly recommend Rejection Proof.
Note: This is an old post that I guess I never published. Hence the 2016 Labor Day reference.
Over Labor Day weekend (2016) I had some trouble with Alexa. But that’s all I know. I don’t know anything about the cause or anything about possible solutions. Here’s what happened.
On Sunday morning I asked Alexa to tell me the weather. The blue ‘listening’ light appeared and bounced around for a few moments longer than usual and then… nothing. No ‘flickering’ lights to indicate that she was processing my request and no telling of the weather. What the heck?
And then an ominous red ring of light pulsed a few times and Alexa spoke. Something about how the Echo had lost its connection, followed by silence, followed by “I’m having trouble understanding right now, please try again later,” or something along those lines.
No matter what I requested (or when I requested it), this same sequence of events played out again and again throughout Sunday and Monday. And I have no idea why! I opened the Alexa app on my phone to see if there might be anything helpful there. Nope. Nothing. The app gave me every indication that the Echo should be working. While it was frustrating enough that things were going wrong, it was even more frustrating that the most straightforward way I had of finding out what those things might be (the app) contradicted the fact that there was even a problem.
I use the Echo mostly for banal stuff like getting the news, weather, playing music, and adding items to digital shopping lists. I do have it paired with a smart thermostat, though. What if the Echo were an integral part of how I manage my day-to-day life, and what if I had it paired with other smart devices (lights, a fridge, a car)? It would be like multiple colleagues being out of the office without giving any reason, requiring you to rearrange your schedule and take on a bunch of tasks you don’t normally do. Not cool.
I don’t know what the takeaway is here: feedback is important; it’s better to know than not know; the Amazon Echo gives poor feedback; nodal point amenities (I’m making this stuff up as I go along…) can make day-to-day life just a little bit better, but when they fail they can induce anxiety and stress. Somehow I think this relates to the concept of faceless interaction. In the middle of the day on Sunday, staring at that broken cylindrical speaker in my kitchen, I wished, oh how I wished, for a screen.