Bruno Latour was a winemaker’s son. Instead of joining the family ranks of vignerons, he chose to study the philosophy of science and technology. Unfortunately, he might have picked the wrong profession.
The key concept he theorized about – a black box – did not age particularly well. In fact, it has misguided us for at least a decade. Latour is the reason that we think about microwaves and algorithms in the same way.
“...the more science and technology succeed, the more opaque and obscure they become.”
A black box is a machine where we can see the inputs and outputs, but can’t understand how it works. Most of the time this isn’t a problem. When I put cold spaghetti into a microwave and it comes out hot, I don’t ask questions. I just eat my spaghetti. All’s well that ends well.
Even if something did go wrong (and it sometimes does – looking at you, exploding butter) asking questions wouldn’t get me anywhere. A microwave is simply too complicated for me to understand. I’m a recovering business student. Reengineering a microwave is not going to happen.
When I use a black box, my ability to affect outputs is limited to my capacity to affect inputs.
I can push buttons, change settings, or even make a little divot in the middle of my spaghetti. And this is generally enough. I don’t need to know the inner mechanisms, and I’m not offended or scared that I don’t know. They are what they are.
Microwave with an attitude
At the same time, I’ve recently found myself worried about how algorithms and social media work. Just like with microwaves, black boxes are the default starting point for thinking about this kind of technology. Your average Chad will not understand how an algorithm works beyond something like: “It sees what I like, then shows me more of it.”
It’s no coincidence that when something works well, it becomes difficult to understand. And boy, do algorithms work well. I would not have written this article if they were a failed technology. Would-be wine wunderkind Bruno Latour on why machines become opaque:
“When a machine runs efficiently, when a matter of fact is settled, one need focus only on its inputs and outputs and not on its internal complexity.”
When something works, we stop caring about how it works. Then we combine working parts into more complex machines. Constituent parts form a technological tangled bank that behaves in surprisingly organic ways. Yada yada.
Now, we have dozens of apps on our phones capable of influencing our behaviour in powerful ways. In the past decade, we’ve catapulted orders of magnitude beyond Chad-level understanding.
Algorithms don’t feel like traditional black boxes. I think headlines like “It’s Time to Open the Black Box of Social Media” are misleading. When you use a microwave or a car, it adapts to your behavior in predictable and obvious ways. Wear and tear accumulates in high-use parts. Aggressive driving shortens the lifespan of a vehicle.
Usage affects technologies like Spotify, social media, large language models, and artificially intelligent bots in different and strange ways.
These things mutate hand-in-hand with their users. In a sense, they’re like a technology combined with a miniaturized economy or market. Instagram and X, for example, quickly respond to user behavior. They show you more of what you want to see – they also reward creators who are producing that kind of content. This is economics 101, but powered by a high-speed algorithm.
Same goes for search engines like Google and large language models like ChatGPT. I was recently talking to a friend who shares an OpenAI account with his partner. Cost-wise, this makes a lot of sense. But he noticed that ChatGPT had evolved in step with both of their usage patterns, creating a kind of unusual style that both of them found uncanny. The two of them have very different jobs, so GPT-4o was “getting confused”.
Or imagine your Spotify or Apple Music app. You can search for songs the old-fashioned way, or you can manipulate the Discover Weekly playlist to surface more music you like. But that’s not done in your settings – that’s done by skipping songs you don’t like or replaying songs that you do.
You are the second settings panel.
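A toy sketch makes the point concrete. Real recommenders are vastly more complex, but the feedback loop described above (skips push a taste weight down, replays push it up) can be illustrated in a few lines. The function name, weights, and update rule here are all hypothetical:

```python
# Toy sketch of the skip/replay feedback loop. Purely illustrative --
# this is not how Spotify or any real recommender actually works.

def update_taste_profile(profile, genre, action, rate=0.25):
    """Nudge a genre weight up on replay, down on skip."""
    delta = {"replay": +rate, "skip": -rate}.get(action, 0.0)
    profile[genre] = max(0.0, profile.get(genre, 1.0) + delta)
    return profile

# The blob's model of you starts out neutral.
profile = {"jazz": 1.0, "metal": 1.0}

# You never open a settings panel; you just listen.
for action in ["skip", "skip", "skip"]:
    update_taste_profile(profile, "metal", action)
update_taste_profile(profile, "jazz", "replay")

print(profile)  # {'jazz': 1.25, 'metal': 0.25}
```

Every skip is a settings change you never consciously made: the model of you drifts purely as a side effect of use.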
Black blobs
Black box is not a fitting term. These things (social media, search engines, algorithmic feeds, large language models, and AI agents) are unique. While they work well and have become opaque as a result, they behave in a strangely organic way. We are the inputs, and we are the outputs.
I propose black blobs as a more accurate term for making sense of our relationship to this new category of tools.
Why? Black blobs are useful and worth figuring out. They’re also dangerous. You can easily imagine a black blob as a key predator in Strickler et al.’s portrayal of the Internet as a dark forest, where all creatures are hostile by default. The Internet is no place to hang out without a plan. As Sam Altman, CEO of OpenAI, recently pontificated: “algorithmic feeds are the first at-scale misaligned AIs.”
While this is a pretty loose definition of AI, it gets at the heart of the issue. Sufficiently advanced algorithmic tech is organic and squishy in a way that analog tech is not. There is a distinct, symbiotic relationship between black blobs and their users that is not well understood.
No point in trying to understand it, either, in my opinion. For the average Jill, the path to responsible and safe use of a technology is to create protocols. But blobs are protocol-resistant technologies. Since they change based on a user’s patterns, protocols must remain fluid.
Protocols are really difficult to build into these new technologies themselves, at least incrementally. That’s probably why, today, we see so many companies experimenting with model weights and training protocols that try to align AIs from the start, rather than relying on users to take responsibility.
Another reason we don’t have many protocols protecting individual users: there are bigger rewards for claiming to prevent existential risks at the planetary scale than for identifying hazards at the individual scale.
For normal users, what risks do black blobs pose?
Rhetorical machines
Devon Eriksen has some strange job titles. Cat. Sharpshooter. Part-time Dæmon Prince of Tzeentch.
In a recent podcast, he mentioned what he calls the liberating arts – a play on the term liberal arts. Liberal arts are non-STEM fields, like sociology or literature. Liberating arts are seven “timeless” areas of study that are relevant even for 21st century life.
One of these topics is Rhetoric, which is about how to persuade people. What kind of arguments work, how to spot fallacies, the different ways people are convinced, the power of narrative, etc.
Black blobs are rhetorical machines. They are unparalleled in their ability to move you through epistemic space. One day you’d like to lose weight. Fast forward a few months and you’re a PETA sympathizer. Or you’ve forgotten how much you used to dislike Trump.
“Many gods, all in machines.” - Bruno Latour
It’s not all dark, obviously. I know people whose sense of humor was rescued by TikTok. Instagram can be a great tool for finding beautiful places to explore and friends to do that with. It’s just that the Internet is a dark forest, filled with black blobs, and you should not go gently into it.
The fact that algorithmic technologies can’t ever be fully automated and packaged up makes protocols more important than ever. Latour might say that these rhetorical machines are resistant to being completely blackboxed.
Some technologies like atomic reactors or chemotherapy are so potent and complex that they’re also resistant to being blackboxed. We need protocols – engineered arguments about how to use these tools – as behavioral envelopes around dangerous technologies.
But I believe nukes and chemo can be blackboxed. It’s just really hard to do. Social media algorithms, chatbots, and AI agents might very well be different because they mutate with their users. Black blobs need protocols even more than regular technologies.
Black blobs carry one perennial hazard: narrative riptides. You can’t engineer them away. You are forever stuck at the improvisational stage of the tech-protocol cycle, forever making and remaking protocols to stay above the tides.
Normally, behavioral adaptation is temporary. In the face of a new hazard, we negotiate with our habits to make changes. A social order emerges to support this and hold people accountable. Then, we engineer those constraints into the environment. If you have a masochistic desire to read more about this, I wrote an entire essay on the theory.
Since the internal mechanics of a black blob are unstable, so is the direction of the narrative riptides it contains. The black blobs of the 2010s contained a frothy confluence of wokism, trad culture, vicious dieting, steroid abuse, technocracy, and so on… I myself felt sucked into a few of these, kind of like a hapless first-time surfer getting pushed into the bay by a current.
I’m a pretty heavy user of black blobs. I use Spotify, many social apps, and a few LLMs. As I looked around for protocols to reduce my downside, I didn’t find many. I have a hunch that they’re still extremely personal and hard to share from person to person.
These protocols will always be starting points, given the fluid nature of black blobs and their undeniable utility as rhetorical machines. But just a few protocols are certainly better than nothing.
Before I show my hand, a brief opinion on macro-level policy responses to this technology class.
Pandora’s blob?
“Protocols do not promote excellence, they prevent disaster.” - Chinese proverb
Karl Weick, another sociotechnology theorist like Latour, firmly believed that humans are a source of safety, reliability, and predictability. Contrast this with popular conceptions of humans as central weaknesses in systems like a finance company or a government. Fair enough. Manipulating humans (social engineering) is one of the leading causes of computer hacks and data breaches.
But we don’t keep track of all the times that human variability establishes safety or wards off threats. Like the time that guy noticed a tiny glitch in a running program and ended up preventing a massive cyberattack.
A big reason we crack down on behavior so much is because, yes, people do dumb things… but we also record fuckups much more diligently.
The way we assess the impact of blobs is a bit different, but follows a similar dichotomy. Some view humans as hapless victims of social media and the algorithm-armed elite. If you follow this logic, humans need to be insulated from the whitewash of hyperconnectivity and sensational imagery until they reach the age of majority, or something like that.
In the other camp, videogames and digital life are neutral elements and what we choose to do with them, how we choose to let them affect us, is less our assigned fate and more a matter of conscious decision.
An accurate description lies somewhere in between.
We are cursed to live in interesting times. Black blobs evolve in step with their users, containing a representation of one or more personalities. Protocols to prevent disaster have to change as well. Panicked lockdown of technology isn’t warranted, nor is resignation to the idea that progress has its victims.
Renaissance protocolization
The real “solution” is a mindset that provokes adaptive rule-making. It’s clear that algorithms are not going to settle into some sort of equilibrium. Chronological feeds were a starting point. No matter how much some people request that they return, they won’t. It’s bad business – when you look at user data, almost no one actually uses chronological feeds. They are a platonic ideal, not a revealed preference. Eyeballs don’t lie.
This increases the potential of these tools. Algorithmic feeds more powerfully affect user behavior. We just need to solve the downside problems. There are a lot of ways to do this, but they’re different than typical protocols. You can wrap a black box in protocols and continually refine them, because the internal mechanics don’t change quickly. Protocols can get incrementally engineered into the core technology.
With black blobs, you need to squish and poke and belly flop onto them to get the results you want. It’s less methodological and more athletic. One could even say playful – because play resists strict protocolization. To win this game requires muscularity of a specific sort: How strong can you make your protocols? How explosive or fast is this movement? How quickly can you recover?
If we continue the analogy of athletics, it’s much more straightforward to systematize exercises than actual gameplay. Especially on a playing field that is constantly in flux. Here are some exercises that I do – not all the time, or even consistently – but I generally feel good when I do them.
Scar an algorithm. You can deceive a blob into changing its representation of you. If your TikTok feed is populated with videos of cats, but you decide you’d like to be more of a dog person, you could intentionally seek out and binge watch dog videos until your explore feed changes.
Scroll in public. Some people show each other their feeds to gauge how off the beaten trail they are. This often produces some stark contrasts, depending on the narrative riptides people are caught up in.
Break tolerance. Shortform content is INTENSE. Taking a break for even a week or two can resensitize you to the medium. No point trying to be a purist in my opinion – it’s socially isolating. Plus I’m not sure if there’s a direct benefit from it. Taking a step back gives you space to intentionally reorient your interests.
Triangulate. Black blobs silo people super quickly. For every person saying A, there are people that will claim B and C are superior. Following sources with distinct patterns of opinion enables you to take the time to make a choice about what to consume, rather than drifting into an echo chamber.
Tether it to an IRL goal. Tie your blob usage to something like organizing more events with friends, getting better at running, or finding new art for your apartment. Always have a goal when you play with blobs.
Puppymaxx the algo. Only engage with chill content. Blooper reels, standup bits, nature minidocs, cute dog videos. Black blobs love juicy scary news. But only because you do.
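The “scar an algorithm” exercise can be sketched as a toy simulation. The topic scores, decay factor, and boost here are all invented for illustration; no real platform works this simply:

```python
# Toy simulation of "scarring" a feed: binge one topic until the
# ranker's picture of you flips. Illustrative only.

scores = {"cats": 10.0, "dogs": 1.0}  # the blob's current model of you

def next_video(scores):
    # Serve the currently highest-scoring topic.
    return max(scores, key=scores.get)

def binge(topic, sessions, boost=2.0, decay=0.8):
    """Each binge session boosts the bingeed topic and decays the rest."""
    for _ in range(sessions):
        for t in scores:
            if t != topic:
                scores[t] *= decay
        scores[topic] += boost

print(next_video(scores))  # "cats" -- the feed before the scar
binge("dogs", sessions=5)
print(next_video(scores))  # "dogs" -- the scar took
```

The point of the sketch is that the lever is behavioral, not configurational: you change the model’s state by feeding it engagement, not by flipping a switch.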
Not all of the above are clearly defined with proper specifications or metrics, like hard protocols. Nor are they all common knowledge, like many soft protocols. They are just reasonable starting places, each at its own stage of maturity. For example, there are lots of apps that are protocolizing tolerance breaks, like ones that help you systematically reduce screen time.
The tension between technological literacy and being a strategic individual is, for the time being, unresolvable. Not only that, but it’s a tension faced at a personal level on a daily basis.
You’re in a dark forest, surrounded by black blobs. They’re hungry for your eyeballs but possess interesting powers and opportunities. Stay dangerous. 🫡