Against craft

[Sent first to my email subscribers – sign up to receive short, curious letters on emerging technology and design.]

Art and science are inadequate labels for both design and technology, but that doesn’t stop the odd skirmish.

UX people generally profit from scientism, disparaging aesthetics in favour of carefully dosed phrases like ‘cognitive load’. The development community has embraced comp-sci whiteboard abstraction and rejected the low barriers of the early web. Meanwhile, the UI crowd often prefers to align with art. 2017’s wave of Weird Consumer-Tech Illustration at least echoes Bruno Munari’s idea of design ‘re-establishing the long-lost contact between art and the public’, albeit in a misshapen way. These are the extremes. However, many technologists now outflank these two concepts and pledge allegiance to a third label: craft.

Photo by Philip Swinburn on Unsplash

Calling yourself a craftsperson affords status. Craft bespeaks skill and autonomy. In the face of creeping automation, a craftsperson is sovereign and irreplaceable. No mere production worker, labour to be organised – she chooses how the work should be done, which of course helps to justify her fees.

Deb Chachra’s piece Why I Am Not a Maker nails the negative connotations that surround making, craft’s central activity: its implied gendering, its conviction that the only valuable human activity is the production of capitalist goods. A shot of undiluted Californian Ideology.

But I also worry about how shallow the tech community’s interpretation of craft is; how aesthetic and performative we’ve made it. We buy handmade holsters for our Sharpies. Our conferences offer wood-turning workshops. Our dress code somehow blends hipster fetishisation of a blue-collar past with the minimalism of the urban rich: we yearn to connect with a handmade, physical world (perhaps to compensate for the ephemerality of our materials), but above all we must display our appreciation of quality, and hence our taste. Craft underpins how we dress and even behave. It’s easy to see where this leads: these identity performances become acts of gatekeeping. Those who look the part and fit the groove are given attention, hired, and respected. The rest are filtered out. Craft as class warfare.

Clinging too tightly to the craft identity also makes us arrogant. Craftspeople generally aren’t renowned multidisciplinarians: sadly, some believe their expertise separates them from less capable people. Those tawdry marketers, those frantic project managers – they don’t understand what it means to truly build. Academics? All talk. Can’t even fucking code, man. This maker-primacy, as Chachra points out, is anti-intellectual and discriminatory, undermining the important roles of education, care-giving, and other such feminised functions. To the craft ideologue, output is everything, and outcomes extraneous. To the world’s hungry, we offer the most exquisite wooden apples.

I’ve been thinking about how we reached our current ethical mess. I don’t think it’s because we ignored ethics; instead, we framed it too narrowly thanks in part to our distorted notions of craft. Until perhaps eighteen months ago, much of the tech community appeared to see ethics as a matter of competence, not impact. An ethical technologist had proper contracts and got paid on time; he commented his code diligently; he knew all the keyboard shortcuts and none of the politics. Nathaniel Borenstein’s old joke fits well:

“No ethically-trained software engineer would ever consent to write a DestroyBaghdad procedure. Basic professional ethics would instead require him to write a DestroyCity procedure, to which Baghdad could be given as a parameter.”

We’ve only recently started to tease apart ‘can’ and ‘should’, and to scrutinise our social and political responsibilities. We’re making progress, but from a standing start. To hasten this journey I think we should loosen our grip on the notion of craft, or at least look beyond its seductive aesthetics. Besides, craft has been hijacked by the mass market anyway: commoditised, sanitised, an empty marketing label. On a recent economy-class flight I was served ‘artisanal craft coffee’: granulated, wetted by a tepid kettle, served with UHT milk and a plastic stirrer.

Let’s discard craft’s harmful identities, and instead explore craft, science, and art alike as domains to learn from and play in, not ways to shore up our status. Let’s dress worse, talk more, and get weirder.

A techie’s rough guide to GDPR

[This was originally written for my upcoming book Future Ethics, but might be too boring to make the final draft. I must stress this post does not constitute legal advice; anyone who takes my word over that of a properly qualified lawyer deserves what they get. I recommend reading this post alongside the UK’s ICO guidance and/or articles from specialists such as Heather Burns.]

A major global change in data protection law is about to hit the tech industry, thanks to the EU’s General Data Protection Regulation (GDPR). GDPR affects any company, wherever it is in the world, that handles data about European citizens. It applies from 25 May 2018 and, since that date precedes Brexit, covers UK citizens too. It’s no surprise the EU has chosen to tighten the data protection belt: Europe has long opposed the tech industry’s expansionist tendencies, particularly through antitrust suits, and is perhaps the only regulatory body with the inclination and power to challenge Silicon Valley in the coming years.

Technologists seeking to comply with GDPR should get cosy with their legal teams, rather than take advice from this entirely unqualified author. However, it’s worth knowing about the GDPR’s provisions, since they address many important data ethics issues and have considerable implications for tech companies.

GDPR defines personal data as anything that can be used to directly or indirectly identify an individual, including name, photo, email, bank details, social network posts, DNA, IP addresses, cookies, and location data. Pseudonymised data may also count, if it’s only weakly de-identified and still traceable to an individual. Under GDPR, personal data can only be collected and processed for ‘specified, explicit, and legitimate purposes’. The relevant EU Working Party is clear on this limitation:

‘A purpose that is vague or general, such as for instance ‘Improving users’ experience’, ‘marketing purposes’, or ‘future research’ will – without further detail – usually not meet the criteria of being ‘specific’.’ —Article 29 Working Party, Opinion 03/2013 on purpose limitation, 2 April 2013.

So, no more harvesting data for unplanned analytics, future experimentation, or unspecified research. Teams must have specific uses for specific data.
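
The pseudonymisation caveat above deserves emphasis, since teams often assume hashing an identifier anonymises it. Here’s a minimal illustration of why it usually doesn’t – deterministic hashing can be reversed by anyone able to enumerate likely inputs, so the result probably still counts as personal data:

```python
import hashlib

def pseudonymise(email: str) -> str:
    """Replace an identifier with a deterministic hash – weak de-identification."""
    return hashlib.sha256(email.strip().lower().encode()).hexdigest()

# The same input always yields the same token, so anyone holding a list of
# candidate emails can hash them all and re-identify the 'anonymous' records.
token = pseudonymise("jane@example.com")
assert token == pseudonymise("  Jane@Example.com ")  # still traceable to Jane
```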

The regulations also raise the bar on consent. User consent is required unless you can claim another lawful basis for handling personal data. One such basis is ‘legitimate interests’, but this isn’t the catch-all saviour it may appear. To take this route you need to demonstrate your interests aren’t outweighed by others’ – it’s likely this only applies where there’s minimal privacy impact and no one could reasonably object to their data being handled in this way.

Where requested, consent must be freely given, specific, informed, and unambiguous – and indicated by a clear affirmative action. These few words form a death sentence for data dark patterns. Pre-ticked and opt-out boxes are explicitly banned: “Silence, pre-ticked boxes or inactivity should not therefore constitute consent” (Recital 32, GDPR). ‘No’ must become your data default. Requests for consent can’t be buried in Terms and Conditions – they must be separated and use clear, plain language. Requests must be granular, asking for separate consent for separate types of processing. Blanket consent is not allowed. Consent must be easy to withdraw; indeed ‘it must be as easy to withdraw consent as it is to give it’. No more retention scams that allow online signups but demand users phone a call centre to delete their accounts. Finally, parental consent is required to process children’s data – the age at which this applies is down to individual EU countries, but can’t be lower than thirteen.
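
What might this mean in the data model? Below is a minimal sketch – the field names and structure are entirely hypothetical – of consent that is granular, defaults to ‘no’, and is as easy to withdraw as to give:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A hypothetical consent record – adapt the fields to your own domain.
@dataclass
class Consent:
    user_id: str
    purpose: str                          # one record per purpose: no blanket consent
    given_at: datetime                    # evidence of a clear affirmative action
    withdrawn_at: datetime | None = None

    def withdraw(self) -> None:
        """Withdrawing must be as easy as giving: one call, no call centre."""
        self.withdrawn_at = datetime.now(timezone.utc)

def may_process(consents: list[Consent], user_id: str, purpose: str) -> bool:
    """The default is 'no': process only against an explicit, unwithdrawn record."""
    return any(c.user_id == user_id and c.purpose == purpose
               and c.withdrawn_at is None for c in consents)
```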

GDPR also defines some riskier data as sensitive: data on race, ethnic origin, politics, religion, trade union membership, genetics, biometrics used for ID, health, sex life, and sexual orientation. This ‘special category data’ always requires explicit consent. As often happens with new legislation, it’s not yet clear exactly what this means and how it differs from standard consent, but technologists should nevertheless tread carefully.

GDPR also offers eight individual data rights. Some are straightforward. EU citizens have the right to rectify false information and to restrict certain types of data processing. They have the right to be informed about data processing, usually through a clear and concise privacy policy that explains nature and purpose. They have the right to access their personal data held by companies. This must be sent electronically if requested, within a month of the request being made, and must be provided free unless the requests are repetitive and excessive. (Building a self-serve system for access requests might be a smart move in the long run.) Individuals also have the right to object to data processing. If the data is being used for direct marketing, this objection rules all: you cannot refuse to take someone off the direct marketing list.

The three remaining individual rights are more complex and ethically interesting, and deserve closer attention. First, GDPR provides a right to data portability. Not only can users request their data, but it must be provided in a structured, machine-readable format like CSV, so users can use it for other purposes. However, this isn’t an own-your-data nirvana – it only covers data the user has directly provided, and excludes data bought from third parties or new data derived through, for example, segmentation and profiling. Businesses that choose the ‘legitimate interest’ justification for data processing (see above) are also exempt. Even so, this new right threatens some comfortable walled-garden models. For example, social networks, exercise-tracking apps, and photo services will have to allow users to export their posts, rides, and photos in a common format. Smart competitors will build upload tools that recognise these formats; GDPR might therefore help to bridge the strategic moats of incumbents.
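
A portability export needn’t be elaborate. A sketch, with hypothetical field names, of the sort of file an exercise-tracking app might emit:

```python
import csv

def export_rides(rides: list[dict], path: str) -> None:
    """Write user-provided ride data to CSV: structured and machine-readable."""
    with open(path, "w", newline="") as f:
        # Hypothetical ride fields, chosen for illustration only.
        writer = csv.DictWriter(f, fieldnames=["started_at", "distance_km", "title"])
        writer.writeheader()
        writer.writerows(rides)

# Only data the user directly provided belongs here – derived segments and
# profiles bought from third parties fall outside the portability right.
export_rides([{"started_at": "2018-04-01T08:30:00Z", "distance_km": 42.5,
               "title": "Sunday long ride"}], "my_rides.csv")
```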

Users also have a right to erasure, sometimes known as the right to be forgotten. This has already become a cause célèbre for data rights, and legal cases are swirling around the topic. It’s important to note there is no absolute right to be forgotten under GDPR; the right only applies in specific circumstances, and requests can be refused on grounds of freedom of expression, legal necessity, and public interest. The ethics of forgetting are fascinating and way beyond this post; I’ll save that for the book. But in The Ethics of Invention, Sheila Jasanoff argues convincingly that this right helps to constitute what it means to be ‘a moving, changing, traceable, and opinionated data subject’. It’s a particularly important right for children, although it also has the potential to be abused by those trying to hide wrongs. And a right to be forgotten may come into direct conflict with emerging technologies: good luck handling a right to erasure request if you’ve already committed the data to an irrevocable blockchain.

The eighth individual right relates to automated decision making and profiling. This has sometimes been misrepresented as a right to explanation; i.e. that companies must explain on demand the calculations of any algorithm that takes decisions about people. This right doesn’t exist within GDPR, although it may need to exist in future. (Again, the ethical angles of explainable algorithms are complex and need to be covered separately.) GDPR’s automated decision right requires companies to tell individuals about the processing (what data is used, why it’s used, what effects it might have), to allow people to challenge automated decisions and request human intervention, and to carry out regular checks that systems are working properly. This last phrase is promising: regular auditing of decision-making systems will hopefully mean algorithmic bias will be exposed and eliminated sooner.
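
In engineering terms, these obligations suggest logging every automated decision with enough context to explain it later, plus a route to human review. A rough sketch, assuming a hypothetical decision log:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# A hypothetical decision-log entry – not a prescribed GDPR structure.
@dataclass
class AutomatedDecision:
    user_id: str
    inputs: dict                  # what data was used: needed to explain the processing
    outcome: str                  # what effect the decision had
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    needs_human_review: bool = False

    def challenge(self) -> None:
        """The individual may contest the decision and request human intervention."""
        self.needs_human_review = True

# Periodically auditing this log – comparing outcome rates across demographics,
# say – is one way to perform the 'regular checks' the regulation expects.
```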

GDPR makes a special case of fully automated decisions that have ‘legal or similarly significant effects’, giving examples such as algorithms that affect legal rights, financial credit, or employment. These can only be undertaken where contractually necessary, authorised by law, or when the user gives explicit consent. In these high-risk cases, individuals have a right to know about the logic involved in the decision-making process. It seems likely that an outline of how the algorithm works might suffice, rather than providing the data relating to this specific decision. Companies must conduct an impact assessment to examine potential risks, and take steps to prevent errors and bias.

GDPR’s other highlights include an obligation for teams to practise data protection by design and tight stipulations to notify authorities of data breaches, typically within 72 hours. And, for the first time, it’s all backed up by meaty penalties: up to 4% of global turnover or €20 million – whichever is greater – for the most severe violations.

Complying with GDPR will require tricky changes to algorithmic design, product management and design processes, user interfaces, user-facing policies, and data recording standards. You’ll have to spend time designing consent-gathering components, unless you can claim a justification that obviates consent. You may end up with less rich customer insights than you had before. Some KPIs may slump. But for companies that have direct customer relationships, it’s all manageable, and on the upside you not only reduce your compliance risk but benefit from the increased trust your customers will show in you and the online world in general.

However, a small number of companies should be very worried. GDPR will expose the tracking that is now commonplace on the web, and it’s fair to expect widespread revolt. Without a direct customer relationship, third-party ad brokers and networks must rely on publishers to gather consent; but no publisher will willingly destroy their user experience with dozens of popups for their ad partners (remember: consent must be granular!). Even if a publisher did volunteer for this self-mutilation, expect users to universally refuse permission for their data to be used for tracking. It’s highly doubtful the ‘legitimate interests’ excuse will work for ad networks either; the balance-of-interests test is unlikely to go their way. The black box will be forced open, and people will find it’s full of snakes. Dr. Johnny Ryan of PageFair puts it bluntly: GDPR will ‘[rip] the digital ecosystem apart’. Expect panicked consolidation in adtech as networks realise they can’t simply sit in the middle; they must somehow own the customer relationship to control consent. Some may even try to acquire publishers for this reason. And some will likely die. No flowers.

[Photo by Curtis MacNewton on Unsplash.]

The weirding of design: thoughts on #AIRetreat

I’m on an anticonvergence swing. The rush to systematise, to codify, the release of yet another manifesto or code of ethics – it all tires me out. These attempts feel premature at best; at worst they feel like vehicles for status and positioning, not genuine impact.

I was worried #AIRetreat would trace a similarly prescriptive path, and I arrived ready to play the role of candid saboteur. Two and a half days isn’t enough for twenty people to reach consensus on something as complex as artificial intelligence. Instead of a doomed attempt to plot an accurate map, I wanted us to tell stories of our respective hometowns, the origins of our myths. I wanted to hear which hills smoulder with the smoke of dragons.

Many of the attendees were wrestling with professional difficulties and neuroses. Existential quandaries hung in the air. I know well that events like this risk looking elitist, self-congratulatory – but believe me, there was vulnerability. Souls were bared.

AI is overwhelming in scale. At times I feel like the triangle player in the orchestra – can I really add anything worthwhile? So we talked about giving ourselves the conviction to contribute but the grace to step back. We threw around our pet ideas to weigh them, to examine their shape, to see how they bounce. 

The old designer hammer-nail combo raised its head at times: Post-Its flew onto the windows, and we couldn't help but categorise. Perhaps more interdisciplinarity would have helped. But of course designers do have something to add to the AI conversation; some human-leaning balance to a field colonised by the technical. And that conversation was fast and deeply intelligent. After some early toe-dipping into theory and labels, we leapt in – sex, ethics, consciousness, existential risks, mundane dystopias. We agreed on the folly of separating technology from its social context – augmented reality, for example, makes this fallacy blindingly clear – and argued whether AI should amplify or alter humanity.

It’s important the public has a say in AI discourse, lest we slip into technocracy. But the cultural tropes of AI don’t help. We need to go past the Terminator/HAL angle, the white plastic humanoid handshake motherfuckers. We need new art, new metaphors, new visual and narrative motifs for AI. Black Mirror does a great job in its millennial-Aesop way, but designers and artists have valuable skills that could further provoke the conversation. Can we, for instance, create compelling visions of not the black mirror but the magic window? Can we sketch out a technology that points outward, exploding the hidden components of our environments and lives, collapsing the distances of capitalism (provenance, labour, energy) and helping people make more informed choices?

In this territory, art movements and speculative prototypes surpass manifestos. I think if we’re really to contribute to AI, we need weirder design practices. We can’t think of interfaces as deterministic, nor interactions as linear. Designers will have to expand both their inputs and outputs: fiction, posters, plays, and games could play roles as large as products and blueprints. I see more value in the futures toolbox than the usability test.

I have to mention the nature. We met at Juvet, famous from Ex Machina, buried in the mountains of western Norway. The air was crisp and the changing autumn light threw time out of balance. Kairos ruled and chronos melted away. Yellow and ochre leaves tumbled into the river. We marvelled at the Milky Way and the (faint but undeniable) Borealis. We climbed a great big hill, gulping cold air and pointing at the glacier beyond. Once we descended, we drove just a little further, to Trollstigen. As we rounded the bend, we realised the sheer size of the valley ahead, and it took our breath away.

The bored designer’s reading list

At some point in their career, every digital designer gets tired of the typical didactic tech literature. So many tools, so many techniques, so much heat – yet so few ideas. Lately I’ve been fortunate to read some fascinating books that loosely orbit my design and technology interests. Most lean toward theory rather than practice. They’ve helped sharpen and reinvigorate me; perhaps they might work for you too. I’ve included a couple of suggestions from Twitter – thanks to everyone who contributed; please forgive curatorial omissions. See replies to my original tweet for more.

Disclosure: I’ve put referral links on these. I’m currently playing the role of low-income writer myself, and I’m not too proud to try to cover some of my unabating research costs.


Thinking in Systems: A Primer – Donella Meadows

Crystalline and readable overview of systems thinking, a one-way valve to new perspectives.

[Amazon US · Amazon UK · Goodreads]

Inventing the Future: Postcapitalism and a World Without Work – Nick Srnicek & Alex Williams

A bold left-accelerationist manifesto on the coming years of automation and intractable unemployment. High concepts supported with passion and rigour.

[Amazon US · Amazon UK · Goodreads]

How Designers Think – Bryan Lawson

A thorough analysis of what makes a designer a designer, ideal for elevating oneself beyond those dreary ‘But everyone is a designer now’ discussions.

[Amazon US · Amazon UK · Goodreads]

Design for the Real World: Human Ecology and Social Change – Victor Papanek

“There are professions more harmful than industrial design, but only a very few of them.”

[Amazon US · Amazon UK · Goodreads]

Four Futures: Life After Capitalism – Peter Frase

Short, provocative speculations about the intersection of technology and inequality in our near futures. It’s true: design is politics now.

[Amazon US · Amazon UK · Goodreads]

The Prince – Machiavelli

Classic study on power and ethics, more nuanced than its notoriety suggests.

[Amazon US · Amazon UK · Goodreads]

The Nature of Technology: What It Is and How It Evolves – W Brian Arthur 

Suggested by Matt Jones.

[Amazon US · Amazon UK · Goodreads]

Frame Innovation – Kees Dorst 

Suggested by Chris Jackson.

[Amazon US · Amazon UK · Goodreads]

Metaphors We Live By – George Lakoff & Mark Johnson

Suggested by Sol Kawage.

[Amazon US · Amazon UK · Goodreads]

Finite and Infinite Games – James Carse

Suggested by Austin Govella.

[Amazon US · Amazon UK · Goodreads]

Datafication and ideological blindness

[Part 1 of a series on product strategy and data ideologies.]

“Our bodies break / And the blood just spills and spills / And here we sit debating math.”
—Retribution Gospel Choir, Breaker

Design got its seat at the table, which is good because we can shut up about it now. What used to be seen as the territory of bespectacled Scandinavians is now a matter of HBR covers, consumer clamour, and 12-figure market caps. People in suits now talk about design as a way to differentiate products and unlock new markets.

The table is a metaphor for influence, of course. Designers already have plenty of tactical influence – interface, layout, structure and all that – but this is influence of a different order. It is deep and internal: influence over culture, vision, and most of all strategy, the art of deciding where to go and how to get there.

In this realm, data is king. Whether from device sensors, social media chatter, or experiment analytics, data pours off every surface of the modern world, and people are happy to sell us expensive tools to analyse it.

Data has transformed strategy across many industries. Sports fans and insiders alike have become trainspotters: the minutiae of Moneyball, of take-on percentages and suspension loads are now mundane. Evidence-based medicine has put empiricism at the heart of the profession, with randomised controlled trials guiding new treatments and in some cases reducing mortality.

But outcomes are only half the story. Much of the appeal of this datafication is ideological.

“Quantified thinking is the dominant ideology of contemporary life: not just in scientific and computational domains but in government policy, social relations and individual identity.”
—James Bridle, What’s Wrong with Big Data?

The tech industry believes itself to be neutral and objective. This is pure self-delusion. Ideology runs hot through the veins of the sector. So blown are we by the winds of the New, it takes just weeks for a prevailing zephyr to align all ships in the same direction.

Today’s dominant tech ideology is Lean Startup, a California-ised nephew of Lean Manufacturing. The family resemblance comes in the elimination of wasteful work that fails to meet customer needs. So far, so obvious. In practice Lean Startup almost exclusively manifests as accelerated empiricism.

Lean Startup’s central tenet is that we’re surrounded by unparalleled uncertainty, to the extent that accurate forecasting is impossible. Therefore, adherents claim, the only worthwhile way to build is through stepwise iteration, in a perpetual cycle of Build-Measure-Learn. The notions of intuition and prediction are negated, deprecated by data.

I’m not convinced by the presumption. Certainly the tech industry operates amid flux, but the wide-angle view of this change is more predictable than many would admit. Bill Buxton famously claimed consumer tech has a 30-year ramp-up, pointing to the mouse and the touchscreen, first prototyped in R&D circles in the mid-1960s. Even the Gartner Hype Cycle, tacky as it is, offers a plausible model of trajectory and velocity for emerging technology. With intelligent extrapolation and study, the next five years of technology is hardly a mystery. The second-order and social impacts are murkier, true, but here a spot of science fiction scrutiny and primary research surely isn’t beyond us.

But the message is out of vogue, and a posteriori empiricism is in the ascendancy. So datafication it is, and with a narrow view of data at that. In Lean Startup as now practised, data is first and foremost quantitative, usually gained from user analytics and multivariate experiments.

I’ve studied a good deal of mathematics and statistics, and know the power of quant data. But I also know its limitations, and have seen first-hand the dangers of data ideologies excluding other decision-making inputs.

Scenario 1 – experimentation trumps coherence

I’ve worked with two companies where the primary product strategy has been reducible to “Increase this KPI”. The same sorry tale has panned out in both.

At the start, things look positive. Per executive edict, employees concoct product experiments to move the needle. Pace of execution goes up, pet projects ship, and people are pleased at the rapid throughput and product change. Sometimes the measure does indeed move, and from a distance it certainly looks like innovation.

But almost all these experiments are additive, so the interface gets crammed. White space is eroded by buttons and info. Successful A/B trials ship to 100% regardless of coherence and intent. The product slowly becomes cluttered and the value proposition becomes incoherent. Secondary metrics that lie outside the scope of the experiments, such as retention or NPS, start to plateau, then slip.

Worse, the internal framing of users shifts. Employees start to see their users not as raison d’être but as subjects, as means to hit targets. People become masses, and in the vacuum of values and vision, unethical design is the natural result. Anything that moves the needle is fair game: no one is willing to argue with data.

Realising they can ship pretty much whatever they like, PMs and engineers bypass what they see as designers’ obstructive, oversensitive tendencies. Deployment authority becomes the ultimate power, design morale plummets, and designers quit. This proves to be a leading indicator of company morale, and general confidence in leadership sags shortly after. Failure to provide a strategic North Star is itself an absence of leadership, a timid disavowal of responsibility for direction. So the short-term happiness soon fades, and the breakdown of collaboration and strategic coherence proves hard to reverse. Usually you have to sack an exec or two.

Scenario 2 – safety dominates

In a data-paralysed company, conviction is discouraged. Skills are diminished to perspectives, and only hypotheses have currency: weak opinions, barely held. There’s a shift from fulfilling user need to squashing risk, and heavy conservatism sets in. The symphony orchestra of design is reduced to the barbershop quartet of conversion rate optimisation, and the product hillclimbs to the well-known local maximum. Innovation becomes purely incremental.

[Diagram: hill-climbing to a local maximum.]

Now, there’s nothing wrong with incremental innovation per se, unless it becomes the only way you innovate. In an environment of data-enforced caution, there’s no way to climb down that hill to reach higher pastures elsewhere: the first metre is downhill, so you’ll never walk the hundred. Companies thus paralysed, unable to take bold steps in new directions, become vulnerable to eventual disruption.
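
The hill-climbing metaphor comes straight from optimisation, and a toy example (an invented two-peaked payoff function) shows the trap exactly – a greedy climber halts at the first summit it finds:

```python
import math

def payoff(x: float) -> float:
    # Invented landscape with two peaks: a local maximum near x=1 (height ~1)
    # and a global one near x=4 (height ~2).
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

x, step = 0.0, 0.05
while payoff(x + step) > payoff(x):  # accept only uphill moves
    x += step

print(f"stuck at x={x:.2f}, payoff={payoff(x):.2f}")  # ~x=1.00: the lesser peak
# Reaching the higher peak at x~4 means walking downhill first – precisely
# the move a purely metric-guarded process never permits.
```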

This malaise is particularly dangerous because it’s symptomless until too late. Your outlook seems healthy for many years until one day you’re suddenly irrelevant.

For most companies, deep commitment to product/market fit will prove more valuable than a safety-first optimisation mindset. As Ben McRedmond of Intercom says, a billion-dollar business was never built off better button colours. At vast scale, a 0.1% conversion uplift could indeed mean $millions, but to a company not in that league, premature datafication could be fatal. Better to focus on truly understanding and addressing user needs rather than shaving a tiny advantage in a conversion funnel. Optimisation is the cherry, not the cake.

Scenario 3 – copycat strategies

Replacing strategy with metric optimisation is stupid enough, but it’s even more dangerous for companies that choose the same metric as competitors.

Social networks typically make engagement their primary target, and consider it a proxy for user success. It’s now clear that among the strongest drivers of social network engagement are rich media (images and video), contemporaneity, and easy feedback mechanisms. Little wonder then that all social networks are headed toward the same territory of videos, live streaming, and push-button social grooming. It’s the preordained endgame of a battle for engagement, and so every social network starts to look the same.

A strategy is useless if your stronger competitor has the same strategy. Without differentiation there’s no advantage, so metric-copycat strategies tend to lead to one of two scenarios:

  1. If scale matters (any domain with Metcalfian dynamics, e.g. multiplayer gaming, social networks, two-sided platforms like classifieds or ride sharing), the winner takes all. Any incumbent would love its competition to ride the same rails – it can then leave the risky R&D/innovation to the chasing pack, cherry-pick what works, and roll it out to a wider audience, thus protecting future market share. Why bother checking out the alternatives when we’ve copied the best bits here? Call it a fast-follow strategy if you want to wrap its ethical deficiencies in a cloak of respectability. 
  2. If network effects are negligible (e-commerce, publishing, task-based software), cost is the only real differentiator left, and it’s an ugly race to the bottom. Again the bigger player usually wins. They can discount the sharpest and absorb losses the longest, then ramp margins back up once the competition is dead.

Scenario 4 – flawed data, flawed decisions

If you’re putting data at the heart of your decision-making, you need to get it right. That means:

  • employing skilled staff who will set up experiments accurately, avoid flaws such as p-hacking, and have the numeracy and statistical capability to draw valid insight from your raw data;
  • investing in watertight analytics technology with excellent uptime and security;
  • maintaining a laser-like focus on team efficiency and deployment – there’s no point garnering insights if you can’t act;
  • gathering surprisingly large samples. Thanks to the complexities of statistical power, a minor tweak in a low-conversion process may need a sample of more than 100,000 users for a valid test (see the sketch below).
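
On that last point, a quick power calculation – with invented numbers: a 2% baseline conversion rate, hoping to detect a lift to 2.2% – shows how fast the required sample grows:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Invented numbers: baseline 2% conversion, detecting a lift to 2.2%
# (a 10% relative uplift) at the conventional alpha = 0.05 and 80% power.
effect = proportion_effectsize(0.022, 0.020)
n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05, power=0.8)
print(f"{n_per_arm:,.0f} users per arm, {2 * n_per_arm:,.0f} in total")
# → roughly 80,000 users per arm: over 160,000 users for one modest test
```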

Without these, you may be making decisions off faulty data. Worse, you won’t even know. Thanks to the legitimising effect of datafication, you’ll feel highly confident while doing the wrong thing; betting on a hand of four ♠s and a ♣ that you misread as a flush.

Data can of course be an enormously valuable strategic input, if these pitfalls are sidestepped. Senior designers and leaders can’t withdraw from the data discourse, but they are well placed to question its ideological power. Data is a valuable adviser but a tyrannical master, and in some companies datafication has such a stranglehold that other approaches are permanently in shadow.

Fortunately, these companies are easy to spot: they call themselves “data-driven”. Run from data-driven companies. In thrall to semi-science and blinded by their dogma, they’ve lost the ability to see intelligent alternative perspectives on their business, their products, and the world. Embrace instead data-informed companies. This isn’t mere grammatical pedantry – a company genuinely informed by data understands the risks of datafication and adopts sophisticated, balanced approaches to strategy that blend quant, qual, and even some of that unfashionable prediction and intuition.

In Part 2 I’ll talk about the broader strategy process and how to counterbalance an overweighting to analytics.