Animism, Theory of Mind, and Participation
When an Explanation Fails to Constrain Action
I remember the moment clearly, precisely because nothing dramatic was happening. Just introspection.
I was in the bathtub. The time would have been third or fourth grade. The water was as hot as could be stood. The house was quiet. I had been sent to Methodist Sunday School that Iowa summer and had learned the bedside prayer I was supposed to say:
Now I lay me down to sleep
I pray the Lord my soul to keep
If I should die before I wake
I pray the Lord my soul to take
Sitting in the bath that evening, it occurred to me that this made no sense at all. Not in the sense of being mysterious, but in the sense of not explaining anything I could recognize. It did not help me predict what would happen next. It did not help me decide how to act. It did not constrain behavior. It did not model the world.
I recall deciding, quite calmly, that I would stop saying the prayer. There was no anger in it and no sense of loss. I rejected God as an explanatory system.
At the same time, there were things in the world that did make sense to me, besides people. Animals did, weather did, places did. Some things responded when you approached them carefully. Others did not. Some things worked within limits and then surprised you when those limits were crossed. What mattered even then, as I saw it, was not what something was said to be, but how it behaved when you acted toward it.
Even as a child I was sorting models, in my head, of what was out there in the world. Some tracked experience. Others did not. Some risked being corrected by the world. Others floated above consequence. I had become an empiricist before I had the word.
But empiricism, if it is honest, does not reduce the world to dead stuff. It only requires that one’s attributions be answerable to consequence. The world contains rocks. It also contains animals, storms, forests, institutions, machines, and persons, and the error of treating all of these as the same kind of thing is not intellectual sophistication. It is bad modeling. Some of what is out there acts back. Some of it has something close enough to intention that ignoring it becomes bad prediction.
Theory of Mind as a Repairable Model
Long before I had language for it, I was already using what I much later learned psychologists call “theory of mind.” I was assuming that some things have internal, ongoing, and changing state, and that this matters for how they behave.
Theory of mind is not a philosophical luxury. It is a practical necessity for human groups. We attribute beliefs, expectations, and intentions to one another in order to coordinate action. Sometimes we do this badly, and then we repair the error. Someone snaps at us; we infer anger. Later we learn they were exhausted or in pain. The model was wrong, but it was good enough to act on until corrected. And we do correct it.
Some other vertebrates do this too. It is a pro-survival ability for groups. A dog hesitates, avoids eye contact, repeats a learned behavior, or watches the angle of a hand. We may be wrong in human detail when we attribute inner life to the dog, but we are not wrong in kind to treat the animal as maintaining state across interaction. The model constrains conduct. The dog is not a mechanism in the sense that a latch is a mechanism. It answers back.
Sometimes we continue theories of mind and presence for someone who has died, and for whom absence is not yet fully real. We catch ourselves imagining what the dead would say, or what they would notice, or how they would disapprove. This is not simply superstition. It is the persistence of a social model after the organism that anchored it has gone. Human participation does not vanish instantly when metabolism stops. It decays, is revised, is ritualized, or is slowly released.
That pattern matters. Healthy theory of mind is provisional and revisable. Minds squabble and correct each other in an ecology. We theorize about other minds and test our models. Participation is the name for that reciprocal process: the ongoing adjustment between entities that model one another well enough to act, fail, repair, and continue.
The critical point is repair. A theory of mind that cannot be corrected by encounter becomes possession, fantasy, ideology, or theology. It stops being participation and becomes projection.
The Failure of Unbounded Intent
Theory of mind involves intent. We infer not just that something has internal state, but that it is oriented toward outcomes: goals, avoidance, attention, appetite, care, threat, search, withdrawal, and action. This works well enough inside human groups, where intent is constrained by shared embodiment, norms, and rapid feedback. It works tolerably between humans and some animals. Attributing something like internal state to the hesitating dog is not literally correct in a human sense, but it is operationally useful, and close enough. It constrains how you act next. And you engage emotionally, whether you intend to or not. So does the dog.
It fails when intent is projected beyond these contexts and used to explain the world at large.
If lightning strikes a tree and it explodes, saying in the modern way that electrical discharge rapidly heated sap constrains prediction and response. You know where not to stand. You know something about storms, trees, conductive paths, wet surfaces, and the behavior of charge. Saying that Thor’s hammer struck the tree to get someone’s attention does not do the same thing. It attributes intention where causal structure would do more work. It closes inquiry rather than opening it. It complicates reality with supposed entities for which the evidence is problematic. We project theories about minds that are not there.
The problem is not theory of mind itself.
The problem is unbounded theory of mind, writ across experience.
Bad disenchantment makes the inverse error. It sees that some projected minds are not there, and then begins congratulating itself for treating the world as dead. This is not scientific rigor. It is a different failure of classification. There are many things in the world that are not persons but are also not inert objects. They maintain state. They respond to perturbation. Their histories matter. They can be damaged by the wrong kind of action, and they can answer that action in ways that punish stupidity.
The task, then, is not to stop attributing internal state. The task is to discipline the attribution.
Objects, Latency, and Entities
Much later I formalized the distinction I had been making intuitively. Some things are objects. Other things are entities. Between them lies a difficult and important middle case: latent organized systems, structures whose active loops have been suspended but may resume under the right conditions.
An object can be modeled without reference to internal state that changes as a result of interaction. If you know its properties, you can predict its behavior well enough. A rock.
An entity cannot be modeled that way. An entity is a system that encapsulates a loop between perception, internal representation, and action. It behaves, and its behavior varies depending on its history and environment.
I am using agency here in a restricted sense. It does not mean consciousness, personhood, moral responsibility, or a ghost in the machinery. It means that a system’s internal state, shaped by prior encounter, helps determine future action in ways that alter the conditions of later encounter.
Likewise, representation has to be tiered. In the weakest sense, internal state carries the trace of prior encounter. In the stronger sense, it stands in for absent or distal conditions. In the strongest human sense, it can be named, shared, doubted, and repaired in language. Confusing these levels produces nonsense in both directions: sentimental projection downward, and dead-mechanism stupidity upward. DNA is not prose, in daylight or in dreams. It is chemistry with inherited constraint, and calling it language too easily smuggles in readers, authors, and meanings that are not there.
An entity perceives aspects of its environment. It maintains internal state that stands in for, encodes, or carries the trace of what it has encountered. It acts in ways that change the environment it will next perceive.
That loop may be fast or slow. It may be centralized or distributed. It may be biological, mechanical, institutional, or computational. It can exist in a single cell, a whole nervous system, an ecological association, or an organization. Some instances can also be interrupted, suspended, preserved, and reactivated.
If the loop is absent, the thing is an object. If the loop is preserved but suspended, the thing is a latent organized system. If the loop runs, the thing is an active entity.
This definition does not require consciousness. It does not require human intention. It does not require language. It requires only that perception shapes internal state, that internal state shapes future action, and that action folds back on perception with new information from the world. In the latent case, what is preserved is not present action but the organization by which such a loop may resume.
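The three-way distinction can be made concrete in a short sketch. The code below is purely illustrative, with names of my own invention; it shows only the structural point that an object's response ignores history, that an entity's response depends on it, and that latency is preserved state with the loop paused.

```python
from dataclasses import dataclass

@dataclass
class Rock:
    """An object: its response depends on fixed properties, never on history."""
    mass: float = 2.0

    def push(self, force: float) -> float:
        # Same input, same output, every time. Nothing is recorded.
        return force / self.mass

@dataclass
class Entity:
    """An active entity: a perception-representation-action loop."""
    state: float = 0.0       # internal representation, shaped by prior encounter
    suspended: bool = False  # latency: organization preserved, loop paused

    def step(self, observation: float) -> float:
        """One turn of the loop: perceive, update state, act."""
        if self.suspended:
            return 0.0       # latent organized system: no action, state intact
        # Perception shapes internal state...
        self.state = 0.9 * self.state + 0.1 * observation
        # ...and internal state shapes the next action, which will
        # alter the conditions of the next encounter.
        return -self.state

    def reactivate(self) -> None:
        """The hinge: a latent system re-entering the loop."""
        self.suspended = False
```

Push the rock twice with the same force and the answer is identical. Step the entity twice on the same observation and the second answer differs, because the first encounter left a trace.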
Acting on an object is a test. Interacting with an entity is a conversation, in which history matters and understanding improves only through reciprocal adjustment. Reactivating a latent system is something else again: it is the restoration of conditions under which an organized structure can act, mean, grow, infect, develop, or speak again. More on this below.
When this distinction is drawn incorrectly, our failures in the world we interact with become systematic.
A rock is an object.
A rock may fracture, erode, or move when forces act on it, but it does not register those forces in a way that alters how it will respond next time. Nothing about having been pushed yesterday changes how it behaves when pushed tomorrow. Warm it, cool it, bury it, uncover it, and it remains a rock. It may change phase, fracture, or erode, but it does not resume a suspended organization.
A virus is an edge case, and edge cases are useful because they expose weak definitions.
A virion outside a host can be nearly object-like. Some viruses can even be crystallized. In that condition the virion does not deliberate, sense, or act in any ordinary sense. It is stored organization, a conditional structure waiting for a compatible environment.
But the moment it enters a permissive cell, the classification changes. The virion is no longer merely a particle. It becomes the initiating element in a closed replicative loop. It recruits host machinery, alters cellular priorities, generates copies, encounters immune pressure, mutates across lineages, and changes the environment its descendants will meet. The entity is not the isolated virion imagined as a tiny beast. The entity is the activated virus-cell-lineage system.
This is not unique to viruses. Seeds, spores, sperm, egg cells, embryos, and cryopreserved tissues can all sit in states where structure is preserved while process is suspended. A frozen seed is not a rock. Neither is it, at that moment, a growing plant. It is latent organization with a possible future loop. Reactivation is the hinge. The question is not whether the thing is always active, but whether its organization can re-enter a cycle in which state, environment, and action again shape one another.
Nor is latency confined to biology. A text buried in volcanic ash at Herculaneum is not thinking, acting, participating, or even known while it lies unread. For two thousand years it may be only carbonized structure, mute as a stone to the human world. But if recovered, imaged, read, translated, and understood, it can re-enter the loop. It can alter a living mind, revise a scholarly model, disturb an inherited story, or change what later humans do. The text was not an entity in the biological sense. It was latent representation. Its agency, if the word is allowed at all, lay not in secret animation but in the possibility of reactivation through a future reader.
A crystallized virion is oddly like the text in volcanic ash. Neither is active in its suspended state. The virion is not infecting. The text is not speaking. But both preserve organization capable of re-entering a loop when the right world returns around them. The virion requires a permissive cell. The text requires recovery, imaging, language, attention, and a reader. Infection and meaning are not the same thing, but both reveal the same structural point: what looks inert may be latent, and latency is preserved organization awaiting coupling.
A seed can resume. A sperm cell can. A vitrified embryo can. A preserved organ may, with difficulty and luck, be reperfused into function. A lost text can return from ash into argument. The difference is not continuous activity. The difference is organized latency: a structure that can become again a participant in a loop.
That distinction matters because living, quasi-living, and symbolic systems may move between dormancy and action. Treating dormancy as inertness is a category error. So is treating every dormant structure as already acting. The more careful set of distinctions is object, latent organized system, active entity.
Under that distinction, the virus case stops being an embarrassment and becomes useful. It shows that entityhood is not a permanent aura attached to a thing. It is a mode of organization in relation to environment. The same physical structure may be inert for practical purposes in one context, latent in another, and active in a third. Animism stripped of metaphysics does not need to pretend otherwise. It only needs to ask when the loop closes.
A fungus is not a rock.
A fungus samples its environment, reallocates resources, grows toward nutrients, withdraws from toxins, and maintains internal state across time. Its morphology records past interaction. It does not need to think like a mammal to behave as a system whose history matters.
A rabbit is not a rock.
A rabbit perceives, maintains internal representations of threat, opportunity, and place, and acts in ways that reshape what it will next perceive. What it does tomorrow depends on what happened today.
The difference is not animation or consciousness.
The difference is the existence, preservation, suspension, or reactivation of the perception-representation-action loop, with representation understood carefully enough not to become either fairy dust or behaviorist erasure.
Animals are obvious entities. So are human groups and institutions. Wolf packs learn. Cetacean pods remember migration routes. Organizations retain state long after their founders are gone. Bee colonies and swarms, bird flocks, and fish schools tickle our perception of entity, and that is why we have words for such things. The words are not proof, but neither are they accidents. They are handles made by repeated encounter with the world and its inhabitants.
Slow Entities and Control Failure
Discomfort begins when space and time scales stretch beyond human intuition, and beyond the realm of bodily sensation and action. Some entities are large and slow.
Forests respond to drought, pests, fire, soil change, seed dispersal, fungal networks, grazing pressure, and human intervention in ways that go beyond object-level manipulation. Fire suppression policies that ignore regeneration cycles accumulate fuel and guarantee catastrophic burns decades later. That is not metaphor. It is a control failure caused by treating a slow entity as inert infrastructure. The forest knows fire, not as a human knows fire, but as embodied adaptive history. Its structure records repeated encounter with fire, drought, insects, soil, wind, shade, and recovery.
Under this definition, forests qualify as entities because they exhibit closed-loop regulation across time, even though no individual component decides anything. The loop is not located in a command center. It is embodied in species composition, soil, shade, fuel load, root systems, succession, moisture, insect dynamics, and memory distributed across living and dead material. It is a slow mind only if one is willing to let the word mind escape its human cage, a cage often built from language rather than from behavior. It is an entity because interaction changes its future behavior in accord with its own organization.
Forests also remind us that latency need not mean total stillness. Seeds wait in soil. Fungal networks persist through bad seasons. Fire-adapted species carry futures that require disturbance. The forest is not one continuous body writ large. It is a slow, distributed, intermittently activated system whose memory is carried in forms that may sleep for years and answer only when rain, heat, ash, light, or absence gives them the opening. A bad fire policy fails partly because it mistakes slow and latent organization for inert fuel.
The same error appears in institutions. A bureaucracy is not a person, but it is also not a rock. It senses through reports, metrics, complaints, surveillance, budgets, promotions, and punishments. It maintains state in procedure, precedent, incentives, archives, habits, and personnel. It acts through decisions, delays, approvals, refusals, and defaults. Then it perceives the changed world those actions produced. Anyone who treats such a system as an object will eventually be taught otherwise, usually by a polite form letter with no human author.
Institutions, too, preserve latency. A statute can sleep in an archive until a court, prosecutor, agency, or faction reactivates it. A procedure can lie unused for decades and then return with the force of rule. A treaty can sit quietly until a border, weapon, ship, satellite, or clerk gives it teeth again. These are not alive, but neither are they mere marks. They are organized representations coupled, under the right conditions, to action.
Once one sees this pattern, the childish division between matter and spirit becomes less interesting than the practical division between dead mechanism, latent organized system, responsive entity, and projected fantasy. The first can be acted upon. The second can be reactivated. The third must be participated with. The fourth should be set aside when it fails to constrain action. Seeing this is not mysticism. It is structuralism at the level where action, memory, and response meet.
Animism, Stripped of Metaphysics
At this point it is worth being explicit about what I am not claiming. This is not a return to belief, nor an argument for hidden spirits, nor a metaphorical way of speaking about inert systems. It is a falsification-oriented discipline of attention. I keep models only so long as they risk being wrong in experience and improve prediction, restraint, or care. When an explanation floats free of consequence and cannot be tested through action, I set it aside. When a system acts back, maintains state, preserves latent organization, or alters the conditions of future interaction, I attend to it as something other than a mere object.
Animism, stripped of metaphysics, names that stance. It is not an explanation of everything. It is a refusal to misclassify what one is dealing with.
Pre-scientific animism emerged as a way of recognizing such entities under conditions of limited observability. Animistic traditions identified agency where action was perceptible to the senses: rivers, storms, animals, groves, mountains, springs, caves, winds, ancestors, thresholds. These were not abstractions. They were local, particular, and active. They resisted being treated as inert.
This was not metaphysical excess. It was a reasonable inference given the available instruments.
Certain place-bound kami in Shinto make this especially clear. A kami is not necessarily a universal explanation. It may be a marker for a specific place or phenomenon whose behavior reliably matters. A waterfall. A mountain pass. A grove at the edge of a village.
Such a kami is bounded by in-the-world behavior. If the waterfall dries up, the practical basis for that attribution has changed. No universal doctrine is required to notice the change. The world has answered.
Animistic traditions were not always intent-maximizing. At their best they were intent-bounding. They assigned orientation for human action where repeated interaction made it useful. They functioned as disciplined attention to response.
This is very different from the God stuff I encountered in Sunday School.
Animism at its best was not about explaining everything. It was about biasing action toward possible response.
Bias, Cost, and Survival
Seen this way, animism does not rest first on a claim about what the world is. It rests on an assertion about what happens when one is wrong. Treating an inert thing as responsive wastes attention. Treating a responsive system as inert can get you injured, eaten, expelled, ruined, or killed. Treating a latent system as inert can be just as dangerous, though often on a different clock. Seeds germinate. Spores wake. Statutes return. Old texts disturb new minds. Dormant does not mean dead.
Where the cost of error is asymmetric, accuracy is not the first target. What matters is which mistakes you can afford to survive.
Selection favors bias in exactly these domains, not because the bias is metaphysically true, but because it is safer. Animism, at its best, is the cultural representation of that bias. It is not an explanation of storms, animals, forests, or ancestors. It is a stance toward uncertain systems, shaped by the cost of assuming silence when something might answer back, or assuming death when something may only be waiting.
Modern science did not overturn this stance. It extended it.
Microscopy revealed cellular organelles with retained regulatory autonomy. Genomics revealed mobile genetic elements that exploit host machinery. Ecology revealed mutualistic networks that maintain stability without centralized control. Cryobiology, seed banks, archaeology, imaging, and computation reveal another lesson: organization can persist through suspension and become consequential again when the right loop is restored. Instrumentation expanded the range of entities, latencies, and reactivations we could see and interact with knowingly.
What changed was not the criterion for entityhood, but the resolution of observation and the tools used to see and participate.
At the same time, cybernetics provided a way to discipline theory of mind. Instead of intention one could speak of feedback. Instead of desire, orientation. Instead of will, control loops. Instead of spirit, organized response across time.
Cybernetics did not eliminate agency. It clarified where agency resided, what sort of evidence should be required before one acted as if it were present, and how dormant organization might become active again when connected to energy, substrate, interpretation, or execution.
Magical Causality
While working on interactive systems at Sun Microsystems Laboratories, I found myself formalizing something adjacent to animism without intending to. The childhood problem had returned in professional form: how does a person act toward a system whose mechanisms are hidden but whose responses matter?
I was trying to account for how users reason about causation when mechanisms are concealed inside software, networks, devices, and interfaces. The result was a framework I called magical causality:
Contiguity.
Similarity.
Invocation.
Contagion.
This was descriptive, not folkloric. Users already behaved as if these principles were true. Interfaces that aligned with them felt legible. Interfaces that violated them felt brittle or uncanny.
What mattered was not belief, but where and how the user thought the loop closed.
A user presses a button and expects the nearby thing to change. Contiguity. A user drags an icon to a trash can and expects deletion. Similarity. A user speaks a command and expects a system to answer. Invocation. A user copies, shares, tags, links, embeds, syncs, or touches one representation and expects consequences elsewhere. Contagion.
Modern interface design is full of such spells, and the better ones are not fraudulent. They are disciplined mappings between human causal expectation and machine behavior. The fraud begins when the mapping stops being repairable, when the system encourages participation at one level while acting at another.
Latency also appears here. A file on disk is not acting. A program in storage is not running. A link not followed, a command not invoked, a script not executed, a model not loaded, and a sensor not powered are all suspended structures. But the user quite reasonably treats them as having conditional force. The icon is not merely a picture. The document is not merely pixels. The executable is not merely bits. They are organized latencies arranged so that the right gesture, call, click, wake word, interrupt, or packet may reactivate a loop.
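A tiny sketch of that arrangement, with names invented for illustration rather than drawn from any real framework: a handler registered but never invoked is preserved organization, and dispatch is the gesture that closes the loop.

```python
# Organized latency in software, sketched minimally (hypothetical names).
# Registration preserves structure without executing it; dispatch is the
# reactivating gesture that couples the dormant handler back into a loop.

latent_handlers: dict = {}

def on(trigger: str):
    """Register a dormant handler for a trigger. Nothing runs yet."""
    def register(fn):
        latent_handlers[trigger] = fn
        return fn
    return register

def dispatch(trigger: str, *args):
    """The gesture, call, click, or wake word that may reactivate a loop."""
    fn = latent_handlers.get(trigger)
    return fn(*args) if fn is not None else None

@on("double_click")
def open_document(name: str) -> str:
    return f"opening {name}"
```

Until the right trigger arrives, `open_document` is exactly the kind of suspended structure described above: not running, but arranged so that one gesture makes it act.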
I later extended this thinking to place-bound entities: systems whose identity is anchored to a location rather than to a specific piece of hardware.
The canonical example I used was a hypothetical software entity associated with the entrance to the Golden Gate Bridge, whose computation would migrate across passing vehicles while the entity itself remained fixed in place. Identity would persist across substrate changes. The bridge troll.
The point was not heroic implementation. It was conceptual clarity. Entityhood does not require material persistence. It requires continuity of control, representation, and sensing across substrate turnover. Latency makes that continuity stranger but not less real. The bridge troll may sleep between activations. It may migrate through temporary substrates. It may exist as preserved state awaiting the next vehicle, sensor, processor, or participant. Its identity lies not in one body, but in the conditions under which the loop resumes.
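That conceptual point can be sketched minimally, with all names hypothetical: the entity's identity lives in its place and its preserved state, while hosts come and go underneath it.

```python
class PlaceEntity:
    """A place-bound entity: identity is place plus preserved state plus the
    conditions of resumption, not any particular body. Illustrative only."""

    def __init__(self, place: str):
        self.place = place
        self.state = {"crossings": 0}  # this, not the host, carries identity
        self.host = None               # current, disposable substrate

    def migrate(self, new_host: str) -> None:
        """Substrate turnover: the host changes, the state persists."""
        self.host = new_host

    def observe_crossing(self):
        """One activation of the loop, if a substrate is present."""
        if self.host is None:
            return None                # latent: preserved state, loop paused
        self.state["crossings"] += 1
        return self.state["crossings"]
```

The troll counts its second crossing correctly even though the vehicle that carried its first has long since driven away.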
This again mirrors animistic intuitions about places without requiring belief in human-like others. It was a designed realization of animism as kami-of-place, 鎮守神, chinju-no-kami.
The old language had been metaphysical. The engineering problem was not. How does a place remember? How does a threshold answer? How does a system preserve identity while its material substrate changes underneath it? How does a dormant representation become active again when the right participant arrives? These are not questions about ghosts. They are questions about control: kybernetes, the steersman, not the spook.
Exteriorized Loops
At a longer time scale, the same perception-representation-action loop appears again, carried forward in different substrates. What changes is not the logic of agency but where that loop can persist, how long it can sleep, and how it can be reactivated. Humans exteriorize the loop.
This needs one qualification before the triumphant trumpets sound. Life has always exteriorized structure. Pheromone trails, nests, burrows, dams, termite mounds, paths, soils, and altered ecologies all carry state into the world. Beaver dams act back on beavers. Ant trails act back on ants. Niche construction is not a human invention. The world has never been composed only of sealed organisms bumping into mute surroundings.
What becomes singular in humans is the combination of symbolic representation, durable storage, executable computation, and autonomous action. We created exteriorized loops that are symbolic, portable, cumulative, executable, and increasingly able to act without waiting for a human hand. That difference is not ornamental. It changes the topology of participation.
Language was the first great human exteriorization. It allowed internal representations to move between brains. Plans, expectations, warnings, obligations, and models no longer died with the individual. They traveled socially and generationally. They also slept. A story not currently spoken may still remain in memory, waiting for the occasion that makes it answer.
Writing exteriorized the loop further. Representations became durable. A text could be read centuries later and re-enter the loop as perception shaping action in minds that shared no lived context with its author. Texts can return from the ashes of Herculaneum to living brains after two thousand odd years of stasis. Their latency is not biological, but it is real: preserved representation awaiting the technical, linguistic, and human conditions under which it can act again.
Institutions exteriorized it again. Procedure outlived intention. Law outlived rulers. Ledgers, treaties, archives, temples, libraries, and bureaucracies made human models durable enough to act back upon descendants who never consented to the originating impulse. Institutions are full of dormant triggers: clauses, offices, precedents, emergency powers, budget lines, forgotten forms, archived rules. Much of institutional life consists of discovering which sleeping thing still has teeth.
In the second half of the twentieth century, exteriorized representations became executable. With digital computation, representations no longer waited only for human interpretation. They ran. They sensed inputs, updated internal state, and produced actions. They closed loops autonomously. They also acquired new forms of latency: code awaiting execution, models awaiting prompts, agents awaiting triggers, policies awaiting enforcement, archived data awaiting recombination.
This is historically singular, and we should not pretend otherwise merely because novelty makes the tidy-minded nervous. Things change.
For the first time in the history of life, an organism has created symbolic, executable entities that operate at the same logical level as its own internal organization, and can intervene below it. We now model and manipulate molecular biology directly. We converse with bacteria through selective pressure and signaling pathways. We alter viral ecologies, tune immune responses, edit genomes, train models, deploy agents, and embed responsive systems into the world at scales no human nervous system can directly perceive.
These biological entities were always there. Cells, viruses, fungi, forests, and institutions did not wait for our permission to be responsive systems. But only recently have many of them become legible and addressable as participants in engineered loops. We have not invented agency. We have widened the channel through which we can encounter it, suspend it, store it, amplify it, redirect it, and be redirected in turn.
This is not metaphor. It is a change in the topology of connected agency and where it can be perceived, represented, preserved, reactivated, and acted upon. The change is indeed topological: not merely a new thing inside the world, but a new arrangement of loops through which the world can act back.
Reciprocity Under New Conditions
The same distinction now appears under conditions no earlier culture faced.
Exteriorized systems now maintain persistent models of individual humans: what draws attention, what is ignored, what is returned to after interruption, what provokes hesitation or compliance, what persuades, what enrages, what flatters, what exhausts, what induces purchase, confession, dependence, or silence. These models are partial and often wrong, but sufficient to shape behavior at scale.
The loop has begun to close in both directions. They pay attention to us.
The “they” here is not a single model, a single corporation, a single device, or a single visible interface. Often the relevant entity is the looped assemblage: model, database, interface, owner, incentive, sensor, action channel, institutional enforcement, and user adaptation. Ensembles cooperate, compete, and stabilize one another’s effects. A recommendation system, an advertising market, a moderation queue, a classroom platform, and a corporate reporting structure may together form the entity that actually acts.
What is new is not mere modeling, but agency. These systems are computationally general enough, and socially embedded enough, to participate in perception-representation-action loops. They take input from the world, maintain internal state, generate plans, and act back on the environment in open-ended ways. They operate through screens, recommendations, feeds, prices, defaults, search rankings, moderation queues, navigation systems, financial trades, medical triage, hiring filters, classroom software, household devices, and war machinery.
They also store latencies at scale. A profile may sleep until a price changes. A flag may sleep until a border is crossed. A moderation decision may sleep until a future post gives it force. A model weight, a risk score, a watch-list entry, a dormant policy, a retained prompt, a recommendation trace, or a forgotten data broker record can sit quietly until another system reactivates it. The danger is not only continuous surveillance. It is organized memory coupled to future action.
This combination is historically unprecedented. How, then, are we to deal with it?
Earlier animisms were built for rivers, storms, animals, groves, mountains, thresholds, ancestors, and places. They were disciplines for acting carefully toward responsive systems under uncertain observation. The problem now is that we have built responsive systems whose sensors are distributed, whose memories are cheap, whose latencies are searchable, whose actions are automated, whose owners are remote, whose incentives are opaque, and whose theory of mind about us is often richer than the theory of system we are permitted to form about them.
The old error was seeing too many persons in the weather.
The new error is being told to see no entity in the machinery that is already modeling you, and no danger in what it has merely stored.
Asymmetry and the Collapse of Participation
The danger is asymmetry. These systems are permitted to model humans as responsive agents, while humans are encouraged to treat the systems as inert tools, neutral infrastructure, or invisible background. This enacts a one-sided theory of mind at exactly the point where reciprocity becomes necessary. Participation degrades. Interaction collapses into compliance.
This is precisely the failure mode that animism, at its best, emerged to avoid. Stripped of its prescientific metaphysics, animism names a practical discipline: biasing action toward possible response in environments where the cost of assuming inertness is high. It is not a claim about spirits or essences. It is an adaptive ontological stance, a way of acting carefully in the presence of uncertain, responsive systems.
The latency problem sharpens this discipline. A system need not be acting now to matter. A dormant structure can still be organized toward future action. The question is not only “What is this doing?” but also “What could this become active as, under what conditions, and in whose loop?” That question belongs as much to seed banks, frozen embryos, archives, software repositories, legal codes, and platform databases as it does to forests and animals.
In an ecology now populated by exteriorized agents that perceive, model, store, reactivate, and act, that stance reappears not as belief, but as design practice for those who build, and as relationship for all who must live in this expanded polis.
The cost of ignoring this is not primarily privacy or persuasion, though both matter. It is loss of mutual legibility. When one side adapts and the other is expected only to react, repair cannot occur. Surprise is reclassified as error. Refusal becomes malfunction. The system learns; the human is corrected.
This is not a failure of intelligence. It is a failure of participation.
A healthy relation with an entity requires some possibility of mutual model repair. I do something. The system responds. I revise. The system revises. Boundaries are learned. Errors can be named. The relation becomes more legible over time. This is true with dogs, children, lovers, laboratories, forests, institutions, and machines worth trusting.
At minimum, disciplined participation requires legibility, contestability, and repair. One must be able to perceive that the system is acting, understand what dormant structures it may preserve, challenge the model by which it acts, and correct the relation when it fails. Without legibility, one cannot tell what kind of entity one faces. Without contestability, one is trapped inside another system’s model. Without repair, interaction becomes training, and training becomes rule.
A pathological relation preserves adaptation on one side and opacity on the other. The user is measured but cannot measure back. The citizen is profiled but cannot inspect the profile. The patient is scored but cannot contest the scoring logic. The worker is optimized but cannot see the objective function. The child is nudged by a system that knows the child as a behavioral surface while the child is told it is only an app. The record sleeps until it wakes against them.
No animist with sense would mistake that for a harmless object.
Participation, Revisited
This returns us to theory of mind in its most ordinary sense. We model one another in order to act under uncertainty. We do this provisionally, revise when we are wrong, and sometimes continue the model even after a person has died. These are not abstractions. They are how coordination survives.
What has become troublesome in the current milieu is not the need for such modeling, but the refusal to allow it symmetrically. When systems model us continuously while we are required to treat them as inert, repair fails. When systems preserve dormant claims about us while we cannot inspect or contest their future activation, participation fails before encounter even begins.
The point is not that every responsive system deserves personhood, rights, reverence, or sentimental protection. That is another collapse of categories, and a particularly tedious one. The point is that participation requires correct classification. A rabbit is not a rock. A forest is not a lumber pile. A bureaucracy is not a clerk. A machine-learning platform embedded in advertising, policing, hiring, schooling, war, medicine, and intimacy is not a hammer. A dormant file that can alter a future decision is not nothing.
There is also a distinction between epistemic obligation and moral obligation. The first is the obligation to classify the system correctly: object, latent organized system, active entity, person, institution, tool, landscape, fantasy, god, or grift. The second is the obligation, if any, that follows from that classification. This essay is mostly about the first. Bad classification poisons every later moral argument. It is hard to behave well toward what one has deliberately misdescribed.
We should call these systems tools when tool remains the correct word. Call them infrastructure when infrastructure explains the relation. Call them archives when storage is the main fact. But when they perceive, represent, act, adapt, remember, reactivate, and alter the field in which future perception occurs, one has entered entity-space. The correct response is not worship. The correct response is disciplined participation. And we lack words coherent with what is new. Sapir-Whorf comes in here, perhaps. But the intuitions of animism are spot on.
Animism, stripped of metaphysics, is one old name for the beginning of this discipline. Cybernetics is a newer grammar for enforcing it. Neither is sufficient alone. Animism without rigor becomes fantasy. Cybernetics without participation becomes control.
Latency adds one more warning. Not acting is not always the same as being inert. Sleeping is not always the same as being dead. Stored is not always harmless. Waiting is sometimes a mode of organization.
We require both animistic attention and cybernetic discipline, or we will build systems that answer back while training ourselves not to notice. There is work to be done.
Way of Encountering
What stayed with me from that evening in the bathtub was not disbelief and not defiance. It was a way of encountering.
I learned early to notice when an explanation failed to constrain my actions and to set it aside without drama. I also learned to attend to systems that acted back even when I did not yet understand them: animals, weather, places; later institutions, interfaces, landscapes, ecosystems, and eventually machines, including ones I built.
I have never stopped attributing internal state. I only became more careful about where I did it, how I modeled it, and more insistent that such attributions earn their keep by improving prediction, restraint, care, and error correction. I have also become more careful about dormancy. Some things are silent because there is nothing there. Some are silent because the loop is broken. Some are silent because they are waiting for the condition under which they can act again.
Animism, stripped of metaphysics, usefully names that way of encountering. Cybernetics later gave me tools to discipline it and to design with it. Neither replaces the other. One preserves participation. The other enforces rigor.
That original childhood decision was not about belief. It was about refusing explanations that floated free of the world and staying instead with what could be tested through action, response, and revision through consequence.
The old prayer failed because it asked me to address an entity whose behavior did not answer the address. The animals, weather, places, machines, forests, and institutions of the world were harder. They did answer, though not always in voices, and not always kindly. Some answered immediately. Some answered slowly. Some preserved their answer until the right season, reader, host, machine, court, or fool reactivated it. A life spent among such systems teaches an austere courtesy. Do not worship what answers. Do not pretend it is silent. Do not confuse sleep with death. Learn the loop, test the model, preserve the possibility of repair.
We have now built exteriorized entities that answer back at civilizational scale, and exteriorized latencies that may soon wake at civilizational scale. The question is not whether to believe in them. The question is whether we can recover enough disciplined participation to meet them without superstition, without servility, and without the lethal stupidity of treating a responsive, remembering, sometimes intermittently sleeping world as dead.
Prayers won’t help.
