It used to be sabertooth tigers. Now it’s Candy Crush we have to worry about.
Engines of Risk
Ulrich Beck, in his monumental book Risk Society: Towards a New Modernity, argued that wealth creation had become coupled to risk creation.
This idea emerged roughly four decades after the death of a titan of economics with a complementary theory. Joseph Schumpeter, famous for his exploration of business cycles (and more), pioneered the idea that economic growth depends on technological innovation rather than on improvements in efficiency or access to natural resources.
Beck examined the flip side of this coin. Technologies create new hazards that we’ve never seen before and that we must learn to manage. Beck and Schumpeter traced two seemingly independent phenomena, increased wealth and increased risk, back to a shared root cause: technological progress.
And as technological progress continued to ratchet forward, Beck asked the question, “How does society adapt to risk?”
It is not a simple question. Starvation is almost incomparable to the dangers posed by increasing atmospheric carbon levels. We can all grok the former; it’s a hazard that’s shaped the deepest components of our biology. No one can claim to precisely predict the impact of the latter even if we can easily classify it as a hazard.
Beck’s big idea is that as societies wade into increasingly complex risk environments, they go through a process of introspection. When novel hazards materialize into view and there’s no existing mechanism, protocol, or institution to manage them, the obvious question to ask is, “Why didn’t we see this coming?” This leads to restructuring, like the establishment of international organizations, investments in monitoring programs, and fresh classes of experts.
Technology is an engine of progress, and risk is one of its exhaust fumes, floating up, skirting around our containment devices, and drifting into the atmosphere. Literally: CO2 emissions, addictive substances, emotionally caustic social media, air pollution, the potential for kinetic war, carcinogens, plastics, terrorism, bio leaks, and natural viruses. These things do not respect borders and do not start and end with you as an individual. Our fates are tied together in staggeringly complex, and lopsided, ways.
Some characteristics of risk are scale-independent. Risk displacement (which Beck calls risk externalization) describes both a family purchasing a large SUV for its safety rating and a nation-state stockpiling a nuclear armory. The family is safer, but others on the road are at greater risk. That nation-state’s interests are secured, but other countries, without nukes, are not.
It’s mostly your decision whether or not to buy an SUV, but how do we decide about bigger and more complex risks? On a related note, how does our innate desire for safety continue to affect decision-making, even in today’s super safe environment? And what about the compounding effect of wealth inequality and risk inequality?
These are gnarly questions and I have actual work to do (plus at the moment, I don’t think any of my readers are running a country) so instead of answering them, I’ll destructively unbundle Beck’s Risk Society into something practical.
Bureaucracies are effective organizational forms for managing public health and safety, among other things. They move slowly, but procedurally. Chesterton’s wiseass quip, “I've searched all the parks in all the cities and found no statues of committees,” is often misread as a slight against decentralized decision-making bodies, but it actually speaks to their important role: to quietly secure improvements to standards of living and reduce exposure to hazards.
Risk displacement is, nine times out of ten, a bad thing. Risk cannot be destroyed. It can only be transformed. When you knowingly increase your own safety at the expense of others, you’re engaging in a form of rent-seeking behavior. Not just extracting wealth, but turning other people into a meatshield. There is a virtuous turn to this concept in its inverted form, which is risk internalization. Exposing yourself to hazards so that others don’t have to is usually an honorable thing to do.
The obvious takeaway from Beck’s case is that with technological innovation we face more unknown unknowns, so it’s wise to maintain some slack in your systems and keep a few (but not too many) contingency plans in mind.
Differential tech development might be a strategy worth pursuing. Some people theorize that certain technologies have predictably lower chances of creating systemic risk. Things like masks, cybersecurity programs, programmable cryptography, and solar energy grids vs. things like GMO grains, bioweapon research, and artificial intelligence. In theory, the former examples create risks, but only locally. While this idea might not apply to truly exotic technology, it’s a useful rule of thumb when choosing which existing technologies to adopt first in your city, state, or business.
Equitable risk distribution isn’t talked about enough. Along with wealth inequality, it’s something that pretty much every policy analyst should be taking into consideration now. In a team setting, too, those who internalize risks in service to the group should be compensated accordingly. And vice versa; unilateral risk externalization is suspect by default. Maybe there’s a reason that Zuckerberg has taken so strongly to MMA: he’s backfilling risk into a life pretty much devoid of it. Granted, it’s a somewhat pasteurized form of risk, and he’s well insured against injury.
We will be reactive as we uncover new risks. It’s in our nature. However, we must be mindful that new risks require new approaches, not necessarily the destruction of old institutions.
Twin Logics of Decline
Two images of organizations dominate the current zeitgeist. On the one hand, organizations as machines, and on the other, organizations as organisms (or ecologies). I have a soft spot for the latter, coming from a family of foresters. So far though, my work has primarily been in machinist-land. Inputs, outputs. Cogs, wheels.
We use these twin images to chart a path forward: how to improve bottom lines, customer service, and metrics of public safety. Less often, I think, and certainly with far less detail, we use analogies of machines and organisms to describe the decline of organizations. Maybe because by the time an org is irreparably failing, it’s no longer profitable to analyze. But an ounce of prevention is worth a pound of cure, and these viewpoints are fundamentally incomplete without their own vocabularies for discussing decline.
Rust Logic
Rust logic’s central tension is preventative maintenance vs. planned obsolescence.
In this image, systems are analogous to metal machines that, if left unprotected, rust. A slow, inexorable deterioration caused by exposure to environmental forces. Do you invest in ongoing upkeep to preserve functionality, or do you let systems wear out deliberately and be replaced, thus embracing an inherent cycle of decay?
In rust logic, decline is inevitable and must be delayed through maintenance or completely avoided through a Ship-of-Theseus-style replacement of components. It’s the Six Sigma green belt army vs. the creative destruction Boydians. People variously use terms from fields like mechanical engineering, architecture, and manufacturing. In all of these fields, rust is a problem.
Rust forms because of an interaction between the nature of an organization’s machinery and its environment. Org structures break down as shifting priorities put pressure on them. Job descriptions become outdated as workloads increase. Reporting channels get clogged with bloated documents. Rust accumulates, one area at a time, and makes bottlenecks worse. You have to deal with it before it affects operations, and well before it affects structural integrity.
There’s a pretty sharp distinction between people who lean towards maintenance and those who prefer periodic overhauls. Disciplined maintainers are like the guardians from Plato’s Republic – they can easily become hidebound traditionalists and dig their heels into protecting a copy of a copy of a copy that no one who saw the original would recognize. People suited to blowing up entire departments or subassemblies can easily blow up the entire system while claiming to search for a “global maximum”. To avoid total decline from rust, you need both, just not at the same time.
The defining characteristic of rust logic is that decay is a cumulative, but not systemic, process. For a system to become completely rusted out requires gross negligence or a lack of engineering ability.
Mold Logic
Mold logic’s central tension is calculated sanitation vs. managed overgrowth.
The image of organizations as living systems subject to organic growth and decay is potent and increasingly popular. Decay in this lens is systemic, spreading from one area to another through invasive spores, which can be native or alien. The system might be able to function without them, but no one knows for sure.
Mold decays a system by stealing resources or by directly destroying desirable elements (like, say, trees). The analogy of an organization as an organism lends itself to people thinking in terms of gardening, forestry, or medicine. Synonyms for mold include virus, fungus, cancer – terms that are often used to describe other people and their ideas, like the infamous and obviously flawed concept of a “mind virus”.
The abuse of mold logic language does, however, hint at where it’s going to be useful – what we might call the “soft” parts of a system. Mold logic tends to be about ideas, stories, people, memes, and cultures rather than structures and processes. The differences in how people deal with organizational mold are just as striking as the differences in rust treatment.
Calculated sanitation attempts to get rid of mold, especially early on, before it spreads. This involves education, storytelling (more darkly, propaganda), firing people, and working to improve habits and philosophies rather than routines. Taken too far, it’s like the disfiguring mastectomies practiced in the 1700s that caused far more damage than they were worth.
Managed overgrowth is like gardening. Or, in medical terms, palliative care. You allow the mold to persist, but limit its growth rate or the area in which it grows. Decay is treated as an inevitable but natural part of the process, where molded-over species or subsystems will eventually have to be replaced by hardier varieties.
Mold logic is defined by decay that arises from infectious vectors and threatens to tip a system into an equilibrium so different that most would consider it destruction.
I’ve played with this twin-logics-of-decay idea for a while now, and it’s proven to be an effective heuristic in my own practice. Whether I’m troubleshooting my Tupperware management system™️ or the team dynamics in my Wednesday soccer league, I usually get to a productive idea faster when I start with the closest approximate logic. Of course, it’s always a blend of the two. It’s working, at least so far. The problem with this stuff is that it takes a while to assess the benefits – if you ever can. The good thing in this case is that these logics fill a gap, so you can be more confident that you’re taking a step in the right direction.