Date published: 31.10.2024

Categories:

  • Technology
  • Consumer

Future of Strategy 2024: How Explaining Our Role Can Protect Strategists from AI—and Ensure Our Survival

Earlier this year, a rather self-satisfied meme did the rounds. In its telling, creative agencies will be shielded from the disintermediating threat of Gen AI because clients are unable to ‘accurately describe what they want’. It triumphantly concludes ‘we’re safe, people!’

"The difficulty of objectively defining strategy heuristics may protect strategists from the threat of AI because it’s hard to decode into an algorithm"

AI Creative Meme

We’ve all experienced messy, iterative, often chaotic strategic and creative processes whose twists and turns yield results that feel a long way from what was briefed. And we can also relate to situations where clients don’t truly know what they’re looking for until they see it manifested tangibly as creative work.

But whether or not that means ‘we’re safe’ depends on whether creative strategy (and the broader creative process) can be codified and transformed from a heuristic into an algorithm, to borrow the ‘Knowledge Funnel’ from Professor Roger Martin’s ‘The Design of Business’ (https://rogerlmartin.com/lets-read/the-design-of-business).

Knowledge Funnel

Martin’s model posits that all knowledge starts as a mystery until someone with talent and time is able to define a heuristic.

Stanley Pollitt, Stephen King and their many notable ‘descendants’ have developed heuristics for (amongst other things) identifying salient insights and translating them into inspiring creative briefs, providing a reliable foundation for developing effective creative work. And because this knowledge has been specific – residing in the heads of certain individuals, and both relatively costly and relatively difficult to transfer to others – it has kept a bunch of strategists in gainful employment (‘safe’) over a number of decades.

The threat to our job security posed by Gen AI (specifically LLMs) is that its brute computing power has the potential to reverse engineer our work: churning through millions of potential approaches to a strategic problem before optimising to a perfect solution. In essence, it threatens to decode the specific knowledge in our heads, turning it into an algorithm that becomes general knowledge available to all.

This has certainly been the case with the game Go, where the self-taught AlphaGo Zero (https://www.scientificamerican.com/article/ai-versus-ai-self-taught-alphago-zero-vanquishes-its-predecessor/), unburdened by the intuitive and somewhat nebulous ways in which humans approach the game, has reached stratospheric levels of playing ability. But Go has a definitive and objective ‘win condition’ – completing a game with set rules – whereas there are numerous ways to define the quality of the strategic thinking we produce, with no definitive or ‘perfect’ solution. As Rory Sutherland put it, ‘the opposite of a good idea is a good idea’.

And it is the difficulty of objectively defining our strategy heuristics which may initially protect us. As Martin found in his book ‘The Opposable Mind’ (https://rogerlmartin.com/lets-read/the-opposable-mind), a large number of highly successful leaders run ingenious heuristics in their roles that they are simply unable to articulate. So just as clients may find it hard to express what they’re looking for from the creative process, I believe that if we’re honest, we strategists find it equally hard to explain the intuitive heuristics – the fabled ‘lateral leaps’ – that allow us to do what we do so well.

Martin concludes: ‘to have a great career in the modern economy, the only path is to have an above average heuristic for creating value in the specific domain of your job. [Doing so sets] a high bar – but will increasingly be reality in the LLM/AI economy.’

And maybe, just maybe, we are lucky enough to have an above-average heuristic in a field that lacks an objective and repeatable ‘win condition’, meaning that we will be insulated from the onslaught of AI. But learning from the formerly unassailable masters of Go, it seems sensible to also take this moment for some introspection: to do as much as we can to understand, unpack and codify the ineffable and mysterious creative process. Maybe the creative process will remain a heuristic guarded by a few. Maybe it will become an algorithm available to all. Or maybe AI will enable and amplify both: automating what can be made an algorithm and augmenting us to refine and supercharge our heuristics.

Whatever the case, the notion of ‘safety’ expressed in the aforementioned meme – the idea that we need not change the way we approach our craft – is a fallacy in a world where AI is already our reality.