Over the last week, I have noticed a flurry of blog posts and articles referring to “design thinking.” This may just be a clustering illusion — the idea is not new, nor can I see any particular reason why it would surface now more than before. What I read does puzzle me, though.
Let’s start with what is meant by design thinking.
A quote from the latter first:
When the word “critical” is attached to the word “thinking,” the result, “critical thinking,” is a term that has a clear, well-defined, and well-understood meaning — certainly in the academic community, if not generally. As a counterexample, the same cannot, for instance, be said about the term “art thinking.” This is not a term that can be used in any precise or meaningful way. Why? Because it could mean painting or sculpture; it could mean figurative or abstract; it could mean classical or modern or contemporary. Because it embodies so many contradictory notions, it is imprecise to the point of being meaningless — and therefore, completely understandably, it is not much used, if at all.
“Design thinking” is as problematic a term as “art thinking.” Design thinking could refer to architecture, fashion, graphic design, interior design, or product design; it could mean classical or modern or contemporary. It’s imprecise at best and meaningless at worst. More muddled thinking.
But then the more recent article takes a different view:
One popular definition is that design thinking means thinking as a designer would, which is about as circular as a definition can be. More concretely, Tim Brown of IDEO has written that design thinking is “a discipline that uses the designer’s sensibility and methods to match people’s needs with what is technologically feasible and what a viable business strategy can convert into customer value and market opportunity.” [Tim Brown, "Design Thinking" Harvard Business Review, June 2008, p. 86.] A person or organization instilled with that discipline is constantly seeking a fruitful balance between reliability and validity, between art and science, between intuition and analytics, and between exploration and exploitation. The design-thinking organization applies the designer’s most crucial tool to the problems of business. That tool is abductive reasoning.
When design is stripped from forming, shaping and styling, there is a process of critical thinking and creative solving at the very core of the profession. By consciously understanding and documenting this process, a new field within the design domain emerges that deals with the creativity DNA of the design mind. When properly understood and harvested, one can transfer the creative DNA from design into virtually any discipline regardless of brain direction. This process has been recognized by thought leaders as an extremely valuable tool for fostering creativity and driving innovation.
However, this is as far as it goes — there is no further analysis of what this “process of critical thinking and creative solving” might be (apart from a meaningless allusion to the left-brain/right-brain dichotomy, which is a widespread fallacy). So that takes us no further. (I confess that in my original draft, I was much ruder.)
The reference in this week’s Design Observer piece to abductive reasoning takes us a bit further. Here is what Wikipedia currently has to say about that, by comparison with better-known forms of reasoning.
- Deduction allows deriving b as a consequence of a. In other words, deduction is the process of deriving the consequences of what is assumed. Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. It is true by definition and is independent of sense experience. For example, if it is true (given) that the sum of the angles is 180° in all triangles, and if a certain triangle has angles of 90° and 30°, then it can be deduced that the third angle is 60°.
- Induction allows inferring that a entails b from multiple instantiations of a and b at the same time. Induction is the process of inferring probable antecedents as a result of observing multiple consequents. An inductive statement requires empirical evidence for it to be true. For example, the statement ‘it is snowing outside’ is invalid until one looks or goes outside to see whether it is true or not. Induction requires sense experience.
- Abduction allows inferring a as an explanation of b. Because of this, abduction allows the precondition a to be inferred from the consequence b. Deduction and abduction thus differ in the direction in which a rule like “a entails b” is used for inference. As such, abduction is formally equivalent to the logical fallacy of affirming the consequent (or post hoc ergo propter hoc), because there are multiple possible explanations for b.
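The difference in direction between deduction and abduction can be made concrete with a toy sketch. The rules here (“rain” or “sprinkler” each entail “wet grass”) are my own illustrative example, not anything the quoted definitions prescribe:

```python
# Toy illustration of deduction vs abduction over a rule "a entails b".
# Rules map a precondition to its consequence.
rules = {"rain": "wet_grass", "sprinkler": "wet_grass"}

def deduce(precondition):
    """Deduction: from a and the rule 'a entails b', conclude b.
    Given true premises, the conclusion is guaranteed."""
    return rules.get(precondition)

def abduce(consequence):
    """Abduction: from b and the rule 'a entails b', infer a as an
    explanation. More than one candidate may fit, which is why abduction
    alone amounts to affirming the consequent."""
    return [a for a, b in rules.items() if b == consequence]

print(deduce("rain"))       # wet_grass
print(abduce("wet_grass"))  # ['rain', 'sprinkler'] — two rival explanations
```

The multiple return values from `abduce` are the point: abduction suggests candidate explanations, and something else (testing, further evidence) must decide between them.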
At this stage, then, abduction doesn’t look too promising as a means of solving problems. However, it might be attractive as a tool to suggest solutions which can then be tested separately. This is the way I imagine it being used — as an exploratory technique. This reading is supported by a reference later in the article to Charles Sanders Peirce. His lecture “The First Rule of Logic” is apposite here. Peirce argued that whatever mode of reasoning is chosen, “inquiry of any type… has the vital power of self-correction and of growth.” Following from this, “it may truly be said that there is but one thing needful for learning the truth, and that is a hearty and active desire to learn what is true.” We then come to the heart of his argument.
Upon this first, and in one sense this sole, rule of reason, that in order to learn you must desire to learn and in so desiring not be satisfied with what you already incline to think, there follows one corollary which itself deserves to be inscribed upon the wall of every city of philosophy,
Do not block the way of inquiry.
Although it is better to be methodical in our investigations, and to consider the Economics of Research, yet there is no positive sin against logic in trying any theory which may come into our heads, so long as it is adopted in such a sense as to permit the investigation to go on unimpeded and undiscouraged.
This opens the way to the kind of instinctive, hunch-following process that appears to be presented now as “design thinking.” I am far from sure that such thought processes are unique to designers or, even, more prevalent in that community. Peirce’s suggested open-mindedness in seeking solutions, followed by clear-headed assessment of the merit of those solutions, is a model that many professionals follow, designers or not.
Neil Denny, in a post critiquing some lawyers’ thinking, points to Edward de Bono’s concept of Po. This idea is essentially the same as abduction — thinking of answers that are entirely distinct from the obvious ones in order to reach new and achievable solutions. As Neil puts it,
Po lifts us out of the normal patterns of thinking. It does not ask “Is this a good idea?” which invites a critical progression of “…And if not, why not.” Instead, po says “Let’s just accept that the following statement, however nonsensical, however illogical is a good idea. Now, what is good about it? What would work or how would it benefit our organisation, or our clients.”
The idea or the suggestion itself is put forward to stimulate the discussion. The idea can be discarded later once it has identified benefits or methodologies.
As Neil indicates, it is the discussion, or the process by which traditional logical tests are applied, where the work really happens. Going back, again, to an old post of mine, James Webb Young’s A Technique for Producing Ideas (chronologically only slightly closer to de Bono than to Peirce) is just another expression of the same basic process.
The process can be distilled into a small set of key points:
- Desire to learn, adapt, or create
- Always be open to possibilities (however odd they may seem)
- Choose potential solutions intuitively and imaginatively
- Test the chosen solutions rigorously
- Discard failed (and failing) solutions (including the status quo), however attractive they may appear
- Learn, adapt, or create
- Return to the beginning
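The cycle above can be sketched as a generate-and-test loop. The toy problem here — finding a number whose square is close to 2 — and the random generator are illustrative stand-ins of my own, not anything the post prescribes:

```python
import random

def generate_candidate():
    # Be open to possibilities, however odd they seem: pick freely across
    # the whole range rather than near the current favourite.
    return random.uniform(0, 2)

def passes_test(x, tolerance=1e-3):
    # Test the chosen solution rigorously against an objective standard.
    return abs(x * x - 2) < tolerance

def inquire(max_rounds=100_000):
    for _ in range(max_rounds):          # return to the beginning
        candidate = generate_candidate() # choose intuitively, imaginatively
        if passes_test(candidate):       # rigorous assessment
            return candidate             # learn, adapt, or create
        # failed candidates are discarded, however attractive
    return None

result = inquire()
```

The point the list makes survives in the sketch: generation is deliberately unconstrained, while acceptance is decided strictly by the objective test, never by attachment to a particular candidate.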
This is a hard discipline, and it has to be maintained for best results.
Interestingly, if you persist in concentrating on the things you already know and are familiar with, if you avoid opening your eyes to the widest variety of options, you are likely to be persistently unlucky. Richard Wiseman has reached this conclusion after studying luck and luckiness for some years.
[U]nlucky people miss chance opportunities because they are too focused on looking for something else. They go to parties intent on finding their perfect partner and so miss opportunities to make good friends. They look through newspapers determined to find certain types of job advertisements and as a result miss other types of jobs. Lucky people are more relaxed and open, and therefore see what is there rather than just what they are looking for.
My research revealed that lucky people generate good fortune via four basic principles. They are skilled at creating and noticing chance opportunities, make lucky decisions by listening to their intuition, create self-fulfilling prophecies via positive expectations, and adopt a resilient attitude that transforms bad luck into good.
Wiseman’s work is extremely interesting, and worth exploring in more detail. (For those in Manchester at the end of the month there is even an opportunity to hear him speak as part of the Manchester Science Festival.)
It is important, however, not to get too carried away with intuition. When dealing with abstract problems, our brains tend to think in a way that can lead inexorably to error. The clustering illusion that I referred to at the beginning, together with a host of other cognitive errors, can be a real problem when assessing probability and statistics, for example, as Ben Goldacre specialises in showing us. If design thinking just means being supremely imaginative and doggedly intuitive, it is not likely to be a formula for success. If, however, it is shorthand for creative thinking coupled with critical assessment against objective standards (whether those are rules of logic or just client imperatives), then it is undeniably good.
But let’s not allow the designers to think it is their unique preserve.
The idea of a clear division between the two halves of the brain is a fallacy, though the reasons why it persists are beyond my scope here. Although the mechanism is not fully understood, the brain almost certainly needs to involve both halves to function properly. Take this statement by Jerre Levy, in “Right Brain, Left Brain: Fact and Fiction,” Psychology Today, May 1985, for example:
The two-brain myth was founded on an erroneous premise: that since each hemisphere was specialized, each must function as an independent brain. But in fact, just the opposite is true. To the extent that regions are differentiated in the brain, they must integrate their activities. Indeed, it is precisely that integration that gives rise to behaviour and mental processes greater than and different from each region’s contribution. Thus, since the central premise of the mythmakers is wrong, so are all the inferences derived from it.
The New Scientist has also covered the issue (only available in full to subscribers, although it is possible to find versions of the article around the internet).