Scientific Management | Quantified Self

In Frederick Taylor and the quantified self (a blog post), Nicholas Carr essentially blames the Quantified Self for the revitalization of Frederick Taylor’s scientific management in the workplace. I am suspicious of claims that the Quantified Self movement has driven the private sector’s interest in tracking its workers’ day-to-day activities. I think Carr is mistaking the cause and implying that, at some point between the rise of Taylorism and the present day, tracking employees’ behaviors with the goal of maximizing productivity lost its luster. I’m skeptical, but I need to read more. I would argue that the Quantified Self and the rise in business interest in self-tracking technologies coincide with the rise of technologies capable of doing such tracking. In fact, it seems more likely that the self-tracking device industry would be fueled by private sector incentive than by the pet projects of a few people interested in self-actualization, as Carr puts it. Then there is the not-so-subtle parallel drawn between Taylorism and the Quantified Self movement (relative to business process tracking).

Carr uses a provocative Frederick Taylor quote to get at the harmful underlying principles of Taylorism and, by implication, the Quantified Self. “In the past the man has been first… in the future the system must be first.” The goals of Taylorism might best be described in terms of optimization, efficiency, and productivity. A productive, efficient, optimal worker is a prosperous worker. Carr fails to consider what these words might mean and whether or not there is some truth buried in them. When do you feel most content? When you’re productive and efficient? When you’re working at peak ability and capacity? Or when you’re unproductive and inefficient? When you’re working at your lowest capacity? I know very few people who enjoy a life of unproductivity and inefficiency.

Another dystopian warning in Carr’s writing appears in the form of machines taking over the jobs of (shudder) even the intellectual elite. A robot lawyer or doctor? It’s not so far-fetched. But what lies beneath this argument? Fear. This is an appeal to fear. Carr’s readers should be afraid because the advancement of tracking technologies marks the beginning of the end for all. Already we see many low-level jobs performed by robots. It’s only a matter of time before white-collar jobs suffer the same fate. This whole argument, though, is predicated on a myopic vision of the future. Who knows what careers will exist in the future? We have no idea what the future holds, try as we might to predict it. Crippling fear can cause just as many problems as it might solve. Fear stymies innovation. Fear keeps the world as it is, as though this is as good as it gets. Fear in this case is an argument against design. We must take the consequences of design with the benefits, and we must admit that the consequences are a necessary part of growth. We learn from all consequences, positive and negative. Take climate change, for instance. Many people rightly fear it, and it is (seemingly) coming to pass as I write this text. But even as we continue to design and build products that contribute to the worsening state, so too are we designing and developing ways to combat these deleterious contributions and correct our wrongdoings. The same is true of business process tracking.

What Carr’s post boils down to is a slippery slope argument supported by generalizations from small samples and some worrisome speculations tossed into the mix. And maybe that’s OK for a blog post. His last paragraph includes the text, “We originally thought that the internet would tilt the balance further away from control and toward liberation. That now seems to be a misjudgment.” I disagree. Anything we design, anything that has the raw power and potential of a thing like the internet, can be used for good and evil. This is the nature of such things. We have to build things like this because of the opportunities they create, and we have to be willing to pay prices for these opportunities because nothing is free. Nothing. You think the internet is something? Wait until we realize AI. It’ll happen. And when it does we will know what it means to live through a paradigm shift. The way we think and act will change forever. How many diseases do you think we’ll eradicate? How many mysteries will we solve? Just how fast will we be able to innovate? Will we unlock the mysteries of space? The mind? Who knows?! That’s why we have to do it! The questions we might be able to answer are staggering in number and weight. But there’s another side of the coin. Of what use will remedial human brains be to AI? How quickly will AI be able to replicate itself? Toward what end? If AI doesn’t need us, then what will become of us? Will it obliterate us? Enslave us? Leave us to die on a dying planet? Who knows?! Do the costs outweigh the benefits, or do the benefits outweigh the costs? I’m inclined toward the latter.

I was an ardent opponent of quantifying my life until I started doing it and, subsequently, started thinking more deeply about just what it means to quantify. There is no clear line between quantification and qualification. I’m going to go against Einstein (assuming he actually said it) and say that everything that counts can be counted. What matters is how we count it and how we make sense of the counting. Humans are nothing if not excellent at making meaning. And meaning doesn’t get lost when we get down into the minutiae of life. Ever listen to a piano recital when one single sour note echoes through the room? One note among thousands makes all the difference. One added color in a painting can change the whole painting. Life, as design, is a figurally complex thing. And the Quantified Self is one way to make meaning and sense of the complexity.