Rudders

design, design research, HCI, HCI/d, Human-Computer Interaction, Informatics, research, science, scientific method, Uncategorized

Research has the potential to move in many different directions. There are constraints, sure. But regardless of where one starts, multiple paths reveal themselves at each step. Choosing a path is crucial for making progress. Moreover, revisiting and refining the intentions motivating one’s travels down a particular path is important. There is always value in asking why we’re doing the work we’re doing. Asking and answering this question is like steering the rudder on a boat.


Course Design

design, design theory, education, experiential learning, HCI, HCI/d, Human-Computer Interaction, Informatics, learning, learning objectives, Uncategorized

I’m really excited about an introductory HCI course I’m developing (with some amazing collaborators) for the spring semester. For the last week or so, I’ve been working with several practicing designers to establish a set of core skills interns and/or entry-level designers ought to know in order to succeed in the workplace. Their comments and insights have been interesting to read, inspiring to think about, and generative of a much stronger course design than if I had worked independently. I’m appreciative of their help, and I look forward to sharing this collaborative approach with the students in my section. Onward!


New Journal Article

design, design research, Informatics, knowledge production, knowledge tools, philosophy of science, research

Terrific (not-so-new) news! She Ji: The Journal of Design, Economics, and Innovation published an article I wrote with Erik Stolterman about whether knowledge claims could be a useful way to distinguish research communities from one another. Here is the abstract:

While much has been written about designerly knowledge and designerly ways of knowing in the professions, less has been written about the production and presentation of knowledge in the design discipline. In the present paper, we examine the possibility that knowledge claims might be an effective way to distinguish the design discipline from other disciplines. We compare the kinds of knowledge claims made in journal publications from the natural sciences, social sciences, and design. And we find that natural and social science publications tend to make singular knowledge claims of similar kinds whereas design publications often contain multiple knowledge claims of different kinds. We raise possible explanations for this pattern and its implications for design research.

… and a link to the article, which is freely available for download. I’d welcome any comments or feedback. This is part of a broader project investigating the transfer and interplay of knowledge in research communities.

Adolescence as a Metaphor for HCI

design, design research, design theory, HCI, hci research, HCI/d, Human-Computer Interaction, Informatics, theory, theory building, theory development, theory-practice gap

Early in the book HCI Theory, Yvonne Rogers takes a few pages to establish that research in the field is rapidly expanding and diversifying, and that it’s difficult to pin down just what kind of field HCI is and what kind of research academics who identify as “HCI researchers” do. Somewhere in those first few pages, she characterizes the field as being in its adolescence, and other bits of language support this metaphor (e.g., she describes its “growing pains”). It’s not part of her aim to examine the metaphor of adolescence in any depth, but some of the key ideas in the book make exploring the metaphor seem like a good use of time.

Consider the concerns she expresses over the weakening theoretical adequacy of the field. For now, let’s assume this means the degree to which HCI has developed theories that explain or describe its core objects of study. Let’s also assume HCI knows and agrees upon what its core objects of study are. Is it reasonable to expect a field born in the eighties to be theoretically adequate? No. But this strikes me as a totally reasonable adolescent expectation!

I don’t think HCI researchers know what their core objects of study are (or should be), but, riffing on the adolescence metaphor, why should they? Is it because we indulge an almost mythical narrative about how life is supposed to unfold? Should we expect to have our core interests “defined” or “figured out” in our adolescence? I don’t think so, but I know that’s a dominant mental model… in Western culture at least.

In adolescence we experience what HCI has been experiencing — a proliferation (in both volume and speed) of information. There are tons of different things to study and different ways of studying them. One result is the anxious self-reflection that our research doesn’t seem to fit, or that everyone else seems to have their role and contribution figured out “except me.” And it can be (and obviously is, for some) overwhelming.

I had a good chat with some colleagues recently about trying to pin down a reading list of canonical HCI texts. But the truth is that there probably isn’t a canon, nor can there be one. Yet a canon is exactly what an adolescent craves, because a canon provides identity and, through identity, stability. In other words, a canon provides reassurance that when the time comes, we’ll be able to point to it and say, “This is the foundation of our field.” We know who we are and where we come from and maybe even where we’re going. This arc is reflected in how Rogers organizes her book. Just read the abstract and table of contents. She wants to provide this!

And this, again, is what most need when they’re young (myself included). The world can seem a complex and scary place without a few useful frames to make sense of it all. And when it all comes at you so fast and in such high volume, maybe it’s quite a reasonable reaction to retreat and reflect. To try and find the core. The foundation. But things only seem dim if we focus on the parts of the metaphor that Rogers brings into focus.

Incidentally, the same thing happens with the theory-practice gap metaphor. We focus on what’s not there, and as a consequence we never look elsewhere to see what’s going on.

For the adolescence metaphor (and its apparently generalizable “identity crisis”), we don’t stop to think, “Huh, well, what comes after adolescence?” Potentially a lot of really excellent deep insights and cool theoretical work! In fact, lots of cool work like this happens during adolescence, too. That much is also clear from Rogers’ text, even if it paints an unsettling picture to begin with. So, sure, the short term might — and I’m really emphasizing the might here — seem like a confusing mix of questions, approaches, and contributions coming so quickly that we feel validated in our concern that the field is spinning out of control. But that’s what adolescence is for most folks.

There is a ton of interesting theory work going on in the field! We’re developing theories originating in other fields and we’re developing our own! Check out the theory project page for some good citations. I can understand why someone might choose to frame the field in terms of weakening theoretical adequacy, even though I disagree with that framing. Its negative charge is too strong. It strikes me as a “let’s be reactive and protect against this outcome” stance instead of a “let’s cultivate the good theory work that’s already happening” stance. Yvonne Rogers’s framing can be read as a warning, and so I think it skews toward the former. However, the latter is, in my view, morally superior.

Adolescence brings with it enough anxiety. We don’t need to be fearful of possible future outcomes. That only subtly undermines our ability to do good work now.

About The Theory-Practice Gap

design, design research, design theory, HCI, hci research, Human-Computer Interaction, Informatics, Interaction Design, knowledge production, knowledge tools, theory, theory building, theory-practice gap, Uncategorized

I’ve been spending some time looking through the CHI best paper award winners from the past five years — all the while continuing to think about the theory-practice gap. And now I have a question. How is it that we distinguish between theorists and practitioners? Who is creating the knowledge that seems to lack practical utility or accessibility?

Just looking at the best papers, one might be struck by the volume of publications using theory, models, frameworks, etc. to do design work. And judging from the author credentials, there is quite a lot of industry collaboration, which makes me think that practitioners (if an academic/industry credential could be casually used to make this distinction) are not only using theory but they are in some cases actively contributing to it.

The theory-practice gap is a simple, useful metaphor in the sense that it has guided researchers to ask interesting questions and pursue intriguing, insightful projects — think of things like intermediate-level knowledge objects — but the metaphor has been in use for quite a long time (in HCI and in other disciplines), and I’m curious to know whether it has outlived its relevance in spite of its apparent utility.

HCI and Slow Theory

design thinking, HCI, HCI/d, Human-Computer Interaction, Informatics, philosophy, theory, User experience, UX

I co-authored an article that was published in ACM Interactions in January of this year. The article presented a conceptual framework that could serve as the bedrock for subsequent, substantive discussions in the HCI community. The title of the article is, “Slow Change Interaction Design: A Theoretical Sketch.”

It was called a sketch in order to draw attention to the nascence of the whole thing. We read more popular literature than academic papers and so we did not connect (nor attempt to situate) our ideas within growing contemporary scholarly discourses on slow design, slow technology, or the slow movement.

There is good reason for this. First, in our discussions with the editors, we learned that Interactions aimed to position itself not as a venue for academic papers but as more of a popular periodical. Second, we wrote in the context of and in response to popular literature in an attempt to react to the type of content a design practitioner or even a user might come across in their attempts to design for or accomplish some kind of attitudinal or behavioral change. We read books like Switch, The Power of Habit, The Slow Fix, Outliers, and a few others. It was great to write and a pleasure to read and re-read.

Of course, it’s a sketch. And so now I find myself gravitating towards questions about what it lacks, where its weak points are, and what is it that distinguishes the notion of slow change from other frameworks about (1) attitudinal and behavior change and (2) slowness, e.g. (the aforementioned) slow technology, slow design, and slow movement. There are wonderful things being researched and discussed in these domains. A cursory, non-curated search of the ACM digital library for “slow technology” yields 98 citations, a search for “slow design” yields 23 citations, and one for “slow movement” yields 111 citations.

Because of the volume and substance of these growing bodies of work, it should be apparent that demarcation is of the utmost importance.

As we move forward, we have to know and be able to articulate what makes slow change different from these other theories, why this difference matters, and how we might collide these theories in order to learn something new about interaction design.

Quantification and Goal Setting

design thinking, HCI, HCI/d, Human-Computer Interaction, Informatics, learning, User experience, UX, writing

Preface: I wrote this in an email exchange in early January, and the idea is still bouncing around my noggin. I’d love to get a dialogue going with anyone interested in any aspect of this content…

Before reading further, you can watch this video: TEDx talk on Keeping Your Goals. If you don’t watch it, that’s fine too. Just know this: in the video, Derek Sivers argues that when you’re setting goals for yourself you shouldn’t share those goals with anyone else because, if you do, your brain will trick itself into thinking you’ve already accomplished a lot more towards achieving the goal than you have (and thus you’ll do less work than you would if you’d kept it to yourself…). I frame my response to the video in terms of body data, lifelogging, and/or the quantified self.

I think the pith of his argument is especially relevant when body data has some social component. And now I’m wondering how many fitness tracking devices don’t have some kind of social component. Are there any?! Anyway, with my now-defunct Fitbit, I was “connected” with my father-in-law, sister-in-law, and my wife. So they all knew I was using the Fitbit, and I can see how (even though it hadn’t occurred to me before) I might have tricked myself into feeling more accomplished simply because I was getting a social pat on the back from the people who knew I was attempting to take more steps, drink more water, etc.

As with anything, I’m not sure this works all of the time. I can think of at least one reason why telling someone might be a motivating factor: shame. If I tell someone I’m trying to lose ten pounds and they check in with me (informally, not because I ask them to) when we’re chatting, I’m going to feel shame if I’ve made no progress. Maybe the potential for shame will increase the odds that I’ll actually work at it…

There’s some kind of social contract forged whenever someone acknowledges their goals to others. Actually, there are probably different kinds of contracts: one that is pure affirmation, one that is accountability, and maybe others. I do think people should receive some kind of affirmation for their goals (even just for saying them out loud), but then it’s incumbent on the listener to hold the speaker accountable. When Sivers introduces the concept of telling someone about your goals, he uses the phrases “congratulatory” and “high-image” in reference to the listener, as though this is the social contract: the response goal-setters get when they tell others. And perhaps he’s right. I’m usually supportive when people express their goals to me, anyway. But maybe that contract is wrong. Maybe I should be supportive while healthily realistic. Maybe rather than acknowledging the goal, we acknowledge the work that needs to go into the goal with a response like, “I’m going to check in with you every so often to see how you’re doing. You’ve got your work cut out for you.”

This reminds me of a conversation my wife and I were having in which I expressed frustration over people looking for quick-fix shortcuts to problems; not wanting to put in the work to achieve their goals. Sivers’s talk really resonates with me in that regard. So perhaps even just buying a Fitbit, a Fuelband, or a Jawbone is enough to make someone feel like they’ve accomplished their goals. They get the affirmation from the salesperson (assuming they bought it in a brick-and-mortar store) and from the company, which presumably lauds the purchase and sends many emails touting the results users have yet to achieve and the community of athletes of which they’re now a part. Just by buying the device, you’re basically telling people your goal. Funny. You buy a device to get healthy, and the effect of just buying it potentially undermines the process…

I would be curious to know more details about the Gollwitzer study [mentioned in the TED talk]. What were the demographics of the people involved? How were they selected and subsequently divided into the “speak-goal-aloud” and “remain-silent” groups? What kinds of goals did they set for themselves such that 45 minutes of silent work was somehow directly related to their achievement? There’s lots to explore here…

McCulloch and Pitts’ Logical Calculus

cybernetics, HCI, Human-Computer Interaction, Informatics, logic, philosophy, UX

Prompt: Several days ago, a professor assigned us a reading: “A Logical Calculus of the Ideas Immanent in Nervous Activity.” This is not the kind of reading I’m used to doing. It’s science. It’s (shudder) math. Nonetheless, I dove in with an agenda. We were asked (as a preface to reading) to “discuss the implications of the paper and its role in the life of a budding PhD.”

What follows may very well befuddle you. You may be nonplussed. Not because it’s esoteric, but because it’s kind of muddled.

Nonetheless, I think I’ve got a few core ideas worth ruminating on. Strap yourself in….

…How is this paper beneficial to a budding Informatics PhD? In one sense, it is beneficial in that it provides a key historical moment in the development of the discipline. Why is it beneficial to understand the history of a discipline? For the same reasons that conferees at the Macy conference called attention to the cultural situated-ness of their theories. These reasons are both practical and philosophical.

Practically speaking, a paper like McCulloch and Pitts’s logical calculus opens up avenues for neuroscientists, psychologists, psychiatrists, etc. to leverage the MCP model in modifying their treatment practice (e.g., no longer is the patient’s history required in treating an illness). In addition, it opens up new avenues of research for disciplines like mathematics… opportunities for interdisciplinary collaboration. How many mathematicians were doing neuroscience through a computational lens prior to McCulloch and Pitts? The seeming simplicity of the MCP neuron makes it easy to process, too: inhibitory and excitatory synapses trigger action (or stasis) in the neuron (or system of neurons) they stimulate. Is there a way to figure out how to trigger or suppress particular impulses? Can we develop a treatment to modify behavior without the patient having to “self-modify”?
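To make the threshold idea concrete, here is a minimal sketch of an MCP-style neuron in Python. It assumes the classic formulation: binary inputs, a fixed integer threshold, and absolute inhibition (any active inhibitory input silences the neuron). The function name and interface are illustrative, not drawn from the paper itself.

```python
def mcp_neuron(excitatory, inhibitory, threshold):
    """A McCulloch-Pitts neuron sketch: returns 1 if it fires, else 0.

    excitatory and inhibitory are lists of binary (0/1) inputs.
    """
    # Absolute inhibition: any active inhibitory input silences the neuron.
    if any(inhibitory):
        return 0
    # Fire only if summed excitation meets or exceeds the threshold.
    return 1 if sum(excitatory) >= threshold else 0


# A two-input AND gate: both excitatory inputs must be active (threshold 2).
print(mcp_neuron([1, 1], [], 2))   # fires: 1
print(mcp_neuron([1, 0], [], 2))   # does not fire: 0
print(mcp_neuron([1, 1], [1], 2))  # inhibited: 0
```

By picking thresholds, such units compose into logic gates (OR with threshold 1, AND with threshold equal to the number of inputs), which is the sense in which the paper treats nets of neurons as computing logical propositions.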

Getting back to the pith of the Macy conferees, they pointed out the importance of understanding cultures on their own terms. To some extent, if it is possible to understand a culture on its own terms, then one has to know the history of that culture. Where did its predispositions, assumptions, and practices come from? What does this past imply about the present? What does the past suggest for future directions? What can we infer about a culture based on its past? We might modify these same questions to address an academic domain, such as informatics. Where did its predispositions, assumptions, and practices come from? What does this past imply about the present? What does the past suggest for future directions? The past — as filtered through the McCulloch and Pitts paper — implies that there has been a notable value shift, at least in terms of HCI.

My guess is that this paper would be met in that community with the same warmth as much first-wave HCI research is… The “brain-as-computer” metaphor dominates first-wave HCI. It could be argued, strongly I think, that first-wave HCI was perhaps the most explicit “human engineering” in the field to date. But the metaphor was soon met with disdain. If the brain is the same thing as a computer, then what does it mean to be human? If scientists adopt this perspective, then how might that color their research? Is it better or worse? Or just different? Is it important to differentiate the brain from a computer? What are the limitations of the analogy? Are all things brain-related simply information processing to be understood — at its most basic — as the meeting or exceeding of thresholds of activity in a net? How do we explain differences in perception? Certainly there are commonalities between us. But so too are there differences. How does the MCP theory account for these differences?

Their theory — which I think they would admit is reductionist — overlooks so much of what might be called humanness. All this is to say, there is a practical aspect to the philosophical side of the role this paper plays: it forces us to turn the lens on the discipline, to ask questions of it, and to act if the answers we come up with are ones we don’t like. The neat thing about having read the Heims paper in concert with the MCP paper is that Heims’s book, The Cybernetics Group, is a beacon of hope for enacting change within a discipline outside the auspices of official publications.