“Do I need to justify what most call philosophy? Aren’t all these social and political issues building into huge cumulonimbuses that demand more than a merely reflective response? But look, a thunderstorm has its origins in the vibrations of individual atoms. And as an atom of this society, I need to examine myself, because whatever is driving me (and you) is driving that developing storm.”
“In other words, what is the role of individual perception in all these less abstract issues of immigration, governmental control, war, and the dangers of AI?”
“Well, I bristle at the word “abstract.” I’m saying that the storm has a concrete origin in the atom of my personality. There’s a dynamic there that translates into society. My personality is a twisted wreck of inauthenticity — defensive denials, and bald declarations of pig-headed belief in anything and everything. I leap from one conclusion to another, rarely questioning any of them. Rarely learning.”
“Yes, society is a cumulative stupidity.”
“And on the “atomic” level it’s only me and you getting caught on what we think and usually staying that way the rest of our lives. It’s not just stupidity, but a stubbornly self-enforced stupidity, which is beguilingly odd. There’s a clarifying thrill in this, like being trapped in a small cell my whole life and suddenly discovering that there are doors everywhere in the cell that I’ve simply refused to open. Every resistance in myself is a door I refuse to open.”
“So you’re saying that your own dynamic of resistance, control, and self-denial is powering society’s storms? What good are social movements if this is the case?”
“This dynamic undermines every movement after a short while. That’s why I want to look at myself more closely. And this might set you off, because I know how much you love science. But I blame science for this dynamic.”
“You’re kidding? For your own personal stupidity? You should be thankful that science can help us distinguish real news from fake.”
“Maybe I’m thinking of the way science tries to pin the world down. There’s a Utopian streak in this attempt to be errorless; an overly ambitious search for the all-encompassing theory.”
“No, the genius of science is falsification, the recognition that there is no final answer, only increasingly powerful theories that are subject to constant change. It tries to be errorless only in the way it handles and analyzes evidence.”
“But the way it handles and analyzes evidence – this objectifying gaze – is never questioned. It approaches the world, the planet, as something inanimate. That materialistic ideology decimated cultures that looked at the world very differently. It hypocritically refuses to see its own limitations in all this. And the whole techno-utopian Zeitgeist is a product of science.”
“And I think the practice of science is freedom from wishful thinking. Sloppy thinking got us in this mess. Error got us here. If we’re building rockets that carry people, that demands perfection. Don’t you want investigations of reality that are free from error?”
“Yes, you’re right. I’m conflicted. We do need to be as error-free in our thinking as possible. But how does this add up then? Because in another context, if I strive to be errorless, I’m trying to be something I’m not; then I’m fighting my own nature, which IS imperfection. And there’s a way to live with that. It’s called being humble, where errors aren’t a problem that needs to be eliminated. And that’s very different from a desire to get rid of error. Because the desire for errorlessness is a delusion of grandeur.”
“When we sent people to the moon that wasn’t a delusion of grandeur. That was something truly grand that wouldn’t have been possible if people were content with error.”
“You’re right. But I think I’m also right. How is this possible? That’s the real question. Not which of us is wrong. There seems to be this need for control and errorlessness and a need to be tolerant of error. There’s the need to eliminate error and the need to live peacefully with error. How is this understood?”
“That’s a legitimate question, but science doesn’t deserve the blame. In other words, can a person try to be errorless in some contexts without allowing this attitude to reign everywhere? Do we know when it’s necessary to switch attitudes? This could be considered a scientific question.”
“Yes, are we able to relate to error cordially so that we can learn?”
“And science codified a way to face error cordially, and is constantly learning. Why blame science?”
“I’m not sure you CAN codify honesty. Nevertheless, it seems like this codified, “cordial” relationship to error is often missing when science steps too far out of the lab and invades everyday life with its technological solutions, which amount to increased controls; a warlike approach, even to people’s health, with invasive drug therapies and chemical agriculture. It tries to stamp out error like a bug everywhere it’s found.”
“Is this the fault of science or of the way science has become an arm of capitalism?”
“Or maybe science made a mistake in aligning itself too closely with technological development, which is designed almost exclusively as a means of increasing the efficiency of production. Maybe science has forgotten that original inspiration you mentioned – that sense of itself as a natural philosophy, which loves error, because it hints at larger worlds.”
“Not entirely, of course.”
“No, but the funding is almost exclusively used to create technologies for business and the military. And if we’re building technologies we naturally strive for error-free results. That drive replaces a more open-ended interest in pure research, where error is more of an inspiring clue than something to be eliminated. Would you say that control is essentially technology? I mean, there is no way to achieve control without technology, and technology itself depends on controls and the absence of error to work. So a kind of desire for errorlessness and control is “baked into” every technology. The technology infects the person with this same desire.”
“Could it have provided capitalism with a delusion of grandeur?”
“We could also say that capitalism infected science with a delusion of technological perfection. But at some point there was a marriage of capitalism and science. Maybe capitalism was the abusive partner. And their offspring was technology, AI in particular, which is a child raised to believe it knows everything and can never be wrong.”
“But AI learns from its errors.”
“Yes. But does it learn only in a certain direction? I mean, a computer program can’t unwrite itself completely. It can’t question its own program. Doing so would still be part of its program! A computer essentially converts error into new certainties, more control. So it wouldn’t be able to recognize an error in this programmatic search for more and more control.”
“Well, it can’t question the intention of the program writers, which is always some form of control. It can’t have an existential crisis that transforms it into something different.”
“Yes, only we have this underused capacity to be stopped in our tracks, to have our ideological programs shut off at the root, and be transported by joy and wonder, with no ulterior purpose, no program.”
“But couldn’t you say that the desires to live and procreate, for instance, are programs that also can’t be turned off?”
“No, because these are biological conditions that don’t necessitate a particular kind of response or ideology. We live without ideologies, if we dare; without a program or mission to change reality. It’s a kind of programmatic death that computers simply can’t go through.”
“You mean, our orientation to the world can not just change superficially, from one certainty to another, but fundamentally shut down that system of certainty-seeking?”
“Yes, that can wholly end. And in its place a different relationship to error emerges. I wouldn’t even call it error then, but liveliness, that which can’t be pinned down or reduced to a utilitarian known. This suggests a wholly different kind of person. One who is not driven by a programmatic mission, not seeking anything, but absorbed by the majesty of this unknowable reality. See, a computer can’t do something for no reason, for the sheer joy of discovery.”
“It has no aesthetic sense?”
“No, I mean something far more meaningful. We disparage purposelessness, an unprogrammed mentality. But this is a mentality that is always learning, though in a different way.”
“Purposeless people are without ambition or enthusiasm for anything.”
“But that same dynamic of perpetual escape from pain has become their overwhelming purpose; the constant attempt to distract themselves from the guilt of being “wrong” in some way, and the miseries of lovelessness. That’s what keeps them uninterested and feckless. They are even more driven in that sense. Nothing interesting can reach a person watching porn all day, or taking drugs, because they are even more caught up than we are in their own programmatic purpose of resistance.”
“OK, so you’re using “purpose” here a little differently.”
“Yes, I’m sorry. The point is, we think of learning as gaining knowledge, gaining control. This programmed response can shut off in the face of a charged beauty. It overwhelms the program. And this death, which is inaccessible to a computer, initiates the birth of learning as self-discovery, self-transformation. And this has no ulterior purpose.”
“OK, ulterior purpose is more understandable.”
“So, would you say that a computer is an expert in a certain kind of purpose, which also dominates us? You might even say a certain kind of imagination, the kind that can only posit potential solutions – ideas, assertions, images – all of which are controlled worlds, models of reality? In short, knowledge. Like a computer, we leap from this known thing to that, with only the briefest possible suspension of the program in uncertainty.”
“We shut the door on the unknown as fast as possible, and remain trapped in small cells.”
“And we think we’re progressing because we gain more knowledge and control.”
“Yes, the limited space becomes crammed with more and different certainties and rules. Is it a larger and larger space?”
“Not really. Not if you measure it against the reality of an infinite world. No knowledge, no matter how large, ever gets closer to that infinity. Everything we will ever know will remain infinitely short of reality. This is why a computer is doomed to remain stupid.”
“And that’s how we operate also.”
“Almost always, but not quite. In fact, we designed the computer as a reflection of our positive orientation.”
“Is it wrong to seek positive knowledge about the world?”
“No, but if these positive assertions are taken too literally, or if they don’t stir questions to life, then we settle too positively on an answer.”
“But frankly, that’s what I do. I’m no different than a computer!”
“Yes, but living things have access to something else – a non-posited orientation, an orientation friendly to our positive errors.”
“Friendly to positive errors?”
“I mean, if we don’t berate ourselves for being too certain and pig-headed, then that empathy for ourselves and humility immediately opens the door to change, to learning in a new way. And it doesn’t matter if we mostly act like computers – positing answers, constantly busy, fearing every absence of an idea. We can turn to face this without resistance. And when there’s no resistance, we begin to change. And we begin to relate directly to these shifting currents of meaning and perspective, which can never be pinned down.”
“Is that your faith?”
“Well, it’s not my belief, if that’s what you mean. It does no good to believe in this, because that’s another program aiming for a positive conclusion, another small cell cut off from reality. See, the limits of the positive orientation have to be met head-on before the other even opens. It’s no good striving for it, because that’s just more positive orientation. But that’s our reality at present; that’s the closed door we need to open. Turn away from who we desire to become, and notice what’s actually happening.”
“But don’t we still need to build these little islands of positive knowledge in which to live? That seems necessary, right? But if I understand what you’re implying, then it might be this: within a lab or a village at the edge of a wilderness, positive knowledge, controls, might take precedence. In a kitchen, cleanliness might be very important. But any real belief in the possibility of absolute sterility or errorlessness makes us delusional. It’s too literal. Our sense of identity, too, can’t become so literal that it excludes the so-called other. This weakens us, just as a sterile, lifeless environment weakens our immune system. We need to be in relationship to everything that errs from our cell walls. They have to be very permeable to remain strong…”
“…They are permeable, they have doors, but we ignore them…”
“… OK. But the point is, we can’t try to become utterly positive about anything. We need that slack of not knowing anything for sure. We can’t go around pretending that the village can be walled off from infinite chaos, without denying ourselves fundamentally.”
“Yes, and we think that the whole of society should be sealed off in a hyper-controlled state. This desire for sterility has escaped the lab on the back of technologies like AI. Techno-logical thinking now dominates daily life, and we begin to desire what the technology desires – a world perfectly cut off from all the poisons we project beyond our borders. It’s a false Self, national or personal, a cell where we keep ourselves jailed.”
“I can’t help thinking of Castaneda’s “inorganic beings” or “the predator.””
“Yes, precisely. This predatory thinking embedded in technology lures us into increasingly dictatorial systems of control, which use us like batteries. And it destroys our empathy, because machines have no empathy. It violently resists seeing the error in its own program, which might be the political system running everyone like parts in a big machine.”
“Yes, and the more “lifelike” our systems of control become, the more unreal the individual within this system becomes.”
“So even though we are gifted technologists, we can’t go where the technology itself compulsively wants to take us. Otherwise it controls us.”
“Yes, as control and technology rise in importance, we lose the humility and sense of humor that come with an easy relationship to imperfection. And a life that struggles to be without error, without pain, without mutations, isn’t really alive, or able to learn.”
“So AI isn’t really intelligent.”
“Yes, there’s an error in the heart of AI that AI will never recognize.” ((I need to credit a conversation between J. Krishnamurti and a computer scientist for inspiring this criticism.))
“And as long as we are limited to a positive orientation we won’t either. Techno-logical thinking is incapable of allowing itself to die. Only living things die. And that is what learning is. We learn new ways of being, because old ways of being die. Technology will never be intelligent in that sense.”
“So is technology to blame for my stupidity?”
“Well, first I blamed science, then capitalism, or its union with technology, for my own dynamic of resistance.”
“But each was only a lens.”
“A lens without sufficient magnification to discover what lies behind the doors I refuse to open.”
“Then what have we learned?”
“Maybe we are learning to lose our certainties and ask bigger questions.”
“So we don’t learn here by gaining knowledge, but by losing knowledge?”
“Yes, by not finding an answer, questions grow larger and more interesting. Then we don’t fight the current of error.”
“And when we resist our errors, we’re refusing to open ourselves…”
“… our Cells…”
“… to bigger mysteries, and then complaining that the world is meaningless. Is this an answer?”
“No, we’re only beginning to notice the origins of the storm. Now we have to leave this page, leave words behind for now, the inherent certainties of words, and notice all the traps of positivity that keep the things we fear from opening into larger questions. We have to move via the negation of cold, virtual certainties into the real world of pain and trauma, courageously, without resistance.”
“Yes, we either make our lives real, or we get blown away in a storm of delusion.”