You’ll know you are developing a methodology when you start doubting the efficacy of any particular method. Methodology is a far more interesting and useful concept when we resist the urge to convert it into a meta-method.
I had the same sensation with UX and Service Design as you’ve had with Systems Thinking.
As the methodology becomes normalised and commodified, it ceases to offer a way into the complexity of the situation you are addressing, and becomes a way of avoiding or domesticating that complexity.
But while I’ve long thought of my job in terms of metaphor making, I had never connected the critique of commodified methodology to a loss of analogical range. That is a very helpful concept, thank you!
It really is a practice of restraint to not make that conversion. Starting to sound like this is a rite of passage for a lot of people!
Latour, in his studies of science, talks a lot about "allies". Each scientific breakthrough is not only a scientific success but also a political one, at the micro and meso levels.
Thanks for sharing this; it opened a whole can of worms for me: I just took a quick dive into "The Pasteurization of France". I very much liked that Latour describes the work of the *Pasteurians* as a "translation" (between science and politics) – it also reminds me a lot of some of the things in "Most Work Is Translation". [1]
[1] Chennapragada, Aparna. 2025. ‘Most Work Is Translation’. Substack Newsletter. ACD, September 12. https://aparnacd.substack.com/p/most-work-is-translation.
It's important to separate the pitfalls of the individual from those of the approach.
The conclusion can't be "avoid systems thinking." It might even be the opposite: truly understand the nature of systems, precisely because doing so reveals how complex the variables actually are. That should humble you accordingly—but, done well, it also makes you highly perceptive.
Cantankerous indeed. 😉 Once you start seeing things in "systems", it's turtles all the way down. Far better to see entangled mangrove swamp roots or balls of yarn.
I think you’re describing a very real failure mode of encountering systems thinking too early—something books like Meadows’s really should warn about. It makes sense why you see it as a massive regret in your career journey. I learned this the hard way too, years ago.

I do, however, want to leave you with the reframe I’ve learned. Truly understanding systems thinking in environments like yours that reward conformity and local optimization unfortunately requires systems disillusionment to fully set in first. Once it has, you can see it for what it truly is: anti-universal, anti-lucidity, anti-symbolic-comfort, anti-model, anti-diagram… It’s a discipline, not a destination. Properly practiced, it places the analyst inside the system, not above it, managing uncertainty over time rather than exercising agency (i.e. getting comfortable with being wrong and learning how to respond instead of react, so change is less disruptive). If it feels like a minigame or a silver-bullet hunt, it’s already stopped being systems thinking. It’s about restraint, not ego.

I hope sometime in the future you revisit the idea, but not from a book. From people who successfully live it. Just know, your mistake wasn’t reading the book… it was not being able to see the important guardrails and costs of its use. I wish you luck and hope you land the job you deserve. And if not, maybe you create it yourself instead.
What a great read — such a delight.
I wonder if what’s being rejected here isn’t systems thinking itself, but abstraction that arrives before contact. Models make sense to me only after constraint has already taught their limits — otherwise they feel like tools without a hand.
You’d probably enjoy Critical Systems Thinking. Puts the different approaches to systems thinking in their historical context and provides a nice frame for when to attempt to use them.
https://onlinelibrary.wiley.com/doi/book/10.1002/9781394203604
That’s a really useful resource at first glance. Smart approach to rounding out some of the blindspots. I’m looking forward to checking it out!
My official title was knowledge expert for the transformation practice at BCG. This article captured my 20s far too well. 😂
I was once a Business Improvement Analyst at the Business Modernization Initiative 🫡😆
This is a misrepresentation of systems thinking. It absolutely does not seek to formalise causation. Quite the opposite: the whole point of systems thinking is to understand non-linearities, tipping points, mental models & thinking mistakes.

I was a management consultant 20 years before you & we all suffered from a different disease: extrapolation / interpolation. Big fancy financial models that assumed everything was a straight line & ignored diminishing returns, thresholds etc. My God, I sweated over decks with fancy charts that said how much margin we were going to squeeze from one mix change (always, everything else being equal?!) & none of those decks bore the slightest resemblance to actual performance. Because performance really is just a feedback loop that runs until it saturates & stabilises. The world that existed before you talked about drivers & KPIs & USPs & OKRs & hockey sticks, & we were all just throwing darts in the dark. The system is going to do what it does in response to feedback loops. You can’t spot the threshold without heroic assumptions & luck, and as you say much of what works is then post-rationalised / survivor-biased.

But what good systems thinking does is this. It says:

1. You’re not looking at a photo. You’re looking at a flow (a set of interacting flows).
2. This system will behave normally, until it doesn’t.
3. You can’t predict that moment, but you can look for signs of autocorrelation / oscillation / bifurcation.
4. Once your system goes non-linear, all the usual levers will fail. You are not steering the ship. The ship is in control.
5. Understand that your brain will jump to the answer, engage in apophenia, & find cause & effect where there is none.
6. Simple models help you understand the system better than complex ones do.
7. Leadership needs to be humble. Feedback & course correction is everything.
8. The strategy deck is a useful qualitative tool for understanding your direction, BUT the charts & cashflow projections are close to worthless.
9. Equilibrium is predictable & can be risk-modelled. Tipping points create uncertainty, not risk. Don’t get confused.
10. A 10% contingency can become a 300% over-run in a non-linear situation.
11. Never forget: hypothesis > test > learn > adapt. The essence of being human.
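The "feedback loop that runs until it saturates" point can be made concrete with a toy simulation. This is a hypothetical sketch (nothing from the thread; the function names and parameters are invented): logistic growth, where each step's gain is proportional to the remaining headroom, looks like a hockey stick early on and then flattens out.

```python
# Hypothetical sketch of a saturating feedback loop (logistic growth).
# Early on, growth compounds and looks exponential; near capacity, the
# remaining headroom shrinks and the same loop stabilises.

def logistic_step(x, r=0.5, capacity=1.0):
    """One step: gain is proportional to both x and the remaining headroom."""
    return x + r * x * (capacity - x)

def simulate(x0=0.01, steps=40):
    """Run the loop from a small starting value and record the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1]))
    return xs

trajectory = simulate()
# Extrapolating the early slope in a straight line is exactly the
# straight-line disease: it misses the saturation entirely.
```

The trajectory rises monotonically and settles at capacity; any straight-line projection fitted to the first few points badly overshoots the rest.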
That’s all I have. But systems thinking is great & should be taught to everyone, especially politicians.
I think your list covers it so well. Point 7 especially stood out, because at a certain point action + paying attention is the best thing. And you’re pulling on @Vaughn Tan’s heartstrings who is far better acquainted than I am with the difference between uncertainty and risk.
I agree that this is a misrepresentation, but I feel like it’s a commonly held one, particularly among brash greenhorns like me. Maybe it’s because we don’t respect its limits (like the KPIs et al. that were trusted too much)? I only get to read about the old world, which is why I feel that people with more experience, like yourself, tend to hold an accurate representation of systems thinking. And you should teach it, because when used well it does seem valuable!
Politicians probably won’t listen, but… maybe the next generation. Big fan of your Substack publication’s name btw. 🫡
Thanks - gracious response! I have subscribed to Vaughn Tan - looks interesting.
Coming back to this, I just discovered another podcast interview I did that’s directly relevant to your later comments on Graeber's Bullshit Jobs. Dreadful piece of work. So self-indulgent, so incurious about what could be driving the phenomenon.
https://economicsexplored.com/2021/07/25/ep97-bs-jobs-critique-cbdc-thoughts-from-dr-nicholas-gruen/
The advantage of this book is that it's an easy red flag in a conversation, like someone earnestly referencing The Prince by Machiavelli.
Ah, but the Prince is a remarkable piece of work, full of real insight, however much you want to take or leave the 'Machiavellianism'. My favourite idea from the Prince is that, if the prince can service the people reasonably, they'll reciprocate with some gratitude, whereas the 'grandi' are insatiable. They'll always be after more!
I found the insight about the temporality of doing harm or good very interesting: if you have to do harm, do it quickly and all at once. When doing good, do it little by little over a longer time.
People will forget the harm you caused and remember the good.
I keep wondering whether contemporary governments couldn’t take a lesson from that (e.g. in any policy reform).
Yes, it's applied Hawke Government - get the nasty stuff out of the way early in your term and campaign on the gains it gives you.
Are you familiar with Dave Snowden’s work and Cynefin? Anyway, I think a key point you touch on that resonates is the move from knowing to acting. Translating knowledge into effective action is not trivial, and it’s sometimes hidden by the hubris of knowing (mixed with Dunning-Kruger?)
Cynefin is great. There are no answers. There are just system states with different behavioural implications.
You know, I've used Cynefin a little bit. My more recent exposure to the framework has been through Jules Yim (@thecontrapuntal - not sure why this tag isn't working) who now works with Dave. I noticed Claire in another comment referred to the hubris of knowing. There's definitely a horseshoe theory of the hubris of knowing, where too much confidence and too little probably lead to the same outcome...
The most compact summary of the uncertainty paradox that I've encountered goes something like "Uncertainty should not change your decisiveness." As in, even without knowledge of the outcome, the right thing to do is often clear.
I had a fun chat with Rory Sutherland about complexity in ways that chime with some of your concerns here https://nicholasgruen.substack.com/i/161545855/complexity-cliches-and-bullshit
Appreciated how your conversation differentiated between self-proclaimed systems thinking vs. systems thinking "in spirit". Seems like Rory does support the latter, particularly as a foil for reframing things quickly. Systems thinking might be better referred to as alchemical thinking, ha.
This all panders to my prejudice that "systems thinking" is dumbed-down cybernetics ... I think the big difference is what Will Butler-Adams calls "respect for the problem". The problem isn't just something you can point to on a diagram and solve; it is your adversary, and like any other adversary you underestimate it at your peril. The thing you call a problem is most likely a solution for somebody else, otherwise it wouldn't have been allowed to persist for so long. That's very much a system-level point of view, but it's a million miles from the "systems thinking" of the sort that you get in books with too many diagrams.
I'd agree with that prejudice! It's hard to tease out utility from cybernetics. On the note of underestimation, I have a colleague who says we should treat problems like monsters and that more businesses need Dungeons & Dragons-style monster manuals. You can't often kill the big hairy ones, but you might be able to avoid being squashed.
Also, really enjoyed your podcast with Patrick McKenzie. From 2024 I think.
Yes
Nicely put!
Looking forward to reading this! I’m an aspiring wikiman
It's not a read. It's a listen I'm afraid :)
Fantastic article, I enjoyed it very much. And it came with a chess puzzle!
Always 😎
Here for a contrarian post
Thanks for sharing your perspective on this, it really resonates. Do you have any book or course recommendations that *are* useful for analysts interested in expanding their toolbox?
Is it worth trying to learn techniques like agent based modelling for understanding complex systems, for example
Thanks Hibah. Honestly, it depends a lot on your vertical. The more universal tools have to do with communication rather than analysis. Tons and tons of books and courses on that. Most of all, though, I'd pay attention to who you find to be a good communicator. Even a bit of mimicking will help you pick up some good tricks
I would guess that more technical versions of understanding complex systems are probably *better* because they fail in obvious ways. But I don't have a ton of experience with them since I'm pretty much a wordcel
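On the agent-based modelling question: it is cheap to try, and the "fails in obvious ways" property is real, because every assumption is a visible line of code. Here is a hypothetical toy (a 1-D Schelling-style model, invented here purely for illustration, not anything from the thread): agents on a line swap positions when too few of their neighbours share their type.

```python
import random

# Toy 1-D Schelling-style agent-based model (hypothetical illustration).
# Each agent is an "A" or a "B" on a line; an agent is unhappy when fewer
# than half of its immediate neighbours share its type.

def unhappy(grid, i, threshold=0.5):
    """True if agent i's same-type neighbour fraction is below threshold."""
    neighbours = [grid[j] for j in (i - 1, i + 1) if 0 <= j < len(grid)]
    same = sum(1 for n in neighbours if n == grid[i])
    return same / len(neighbours) < threshold

def step(grid, rng):
    """Relocate one unhappy agent by swapping it with a random cell."""
    movers = [i for i in range(len(grid)) if unhappy(grid, i)]
    if not movers:
        return True  # equilibrium: nobody wants to move
    i = rng.choice(movers)
    j = rng.randrange(len(grid))
    grid[i], grid[j] = grid[j], grid[i]
    return False

rng = random.Random(0)
grid = [rng.choice("AB") for _ in range(30)]
for _ in range(500):
    if step(grid, rng):
        break
```

Every modelling choice (the neighbourhood, the happiness threshold, the crude relocation rule) is explicit, so when the model behaves implausibly you can usually point at the line responsible, which is the obvious-failure advantage over a purely verbal account of the same system.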
tremendous, cheers