
Pro and then Contra David Pearce

Note: This page is currently in the middle of a huge rewrite and is incomplete in many places. Some sections will cut off early, some sidenotes will overlap, and so on.

Warning: assumes familiarity with quantum mechanics. It's best if you know what quantum decoherence is, how it relates to the measurement problem, and how, more generally, it can give rise to our experience of a single classical world even in a many-worlds framework. In any case, if you ignore the QM, there's still plenty of neuroscientific material you can attempt to digest.

Motive

I recently (end of Nov 2023) discovered David Pearce's online presence; he turns out to have reached many of the same conclusions about mind and morality as I have. To list a few:

1. The relation between subjective experience and objective reality
Two aspects to this.
First. The experienced physical world and experienced sense of self are nothing more than webs of mutually coherent conceptual fictions our brain uses to contextualize, engage with, and respond to exteroceptive and interoceptive data. The external world only appears consistent because it is actually consistent, and brains learn to 'entrain' on such consistencies in sensorimotor data in their attempt to efficiently predict the world via the "recurrent conceptive compression" they perform Think of an autoencoder which takes as input not only the manifold of sense-data constantly coming in (through the cranial nerves and spinal cord), but also its own activity. If wired up in a certain way, it's bound to create a collection of second-order latents pertaining to its own activity in order to parsimoniously model what's going on.
 This collection isn't the self; it's more like the process of reification of "phenomena", or the "perceptions & categories of objectification". Owing to its recurrence, the autoencoder forms third-order latents unifying these "objectifications" with the first-order data of sensation and neural activity, and it is here that the idea of a "self" that experiences all of these things arises. (A minimal code sketch of this idea appears at the end of this item.)
When I look at my hand, I am almost always actually "seeing" (entraining on, maintaining a causal connection with) a physical object that I really can consistently identify as "my hand"—but the image that presents itself to me is necessarily only a pattern of neural activity being stitched into a larger world model (the phaneron).
Second. The notion of free will is neither true nor false; it is how we make up for the mind's inability to accurately predict its own actions. Humans in particular tend to twist their nescience of how their actions are already preordained into an opportunity to pretend to "make a choice"—so the things we call 'free will' and 'agency' are more like observer-dependent perspectives on, rather than properties possessed by, physical systems.
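The sketch promised above: a minimal, illustrative toy of my own (not anything Pearce or the literature specifies), assuming PyTorch, with all names and dimensions invented. The one structural point it makes concrete is that the reconstruction target includes the network's own previous latent state, so part of the latent space is forced to model the model.

```python
# Illustrative toy only: an autoencoder whose input (and reconstruction
# target) includes its own previous latent state, so that compressing the
# joint stream forces it to form latents about its own activity.
import torch
import torch.nn as nn

class RecurrentAutoencoder(nn.Module):
    def __init__(self, sense_dim=128, latent_dim=32):
        super().__init__()
        # Input = current sense-data concatenated with the previous latent.
        self.encoder = nn.Sequential(
            nn.Linear(sense_dim + latent_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),
        )
        # The decoder must reconstruct both streams, so some latents end up
        # describing the network's own activity ("second-order" latents).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64), nn.ReLU(),
            nn.Linear(64, sense_dim + latent_dim),
        )

    def forward(self, sense, prev_latent):
        joint = torch.cat([sense, prev_latent], dim=-1)
        latent = self.encoder(joint)
        return latent, self.decoder(latent), joint

model = RecurrentAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
latent = torch.zeros(1, 32)
for t in range(100):                      # a toy stream of "sense-data"
    sense = torch.randn(1, 128)
    latent, recon, joint = model(sense, latent.detach())
    loss = nn.functional.mse_loss(recon, joint)
    opt.zero_grad()
    loss.backward()
    opt.step()
```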

2. The origin of physics, and its relation to subjective experience
We likely inhabit a multiverse which is at least Everett-sized (no objective collapse; quantum measurement takes "every fork in the road", and this only appears to us like collapse because of entanglement + decoherence), if not Tegmark IV-sized (all mathematical structures "exist" in the same way this one does). This multiverse is a block universe, and is likely built on coherence conditions more so than laws, with the laws of physics we actually study (standard model and so on) being just the implications of the structure of the set of timeless configurations of this system for the kinds of subsystems that represent self-aware beings like us For instance, if you think "if this is a block universe, why am I not experiencing every point in the future?", the answer is that the referent of "I" is a physical being experiencing a state that, being causally dependent only on its causal antecedents, can only be influenced by what is behind it in time (where "time" is merely the admissibility of a unidirectional 'causal trend'; I'm not yet sure why reality seems to be so cleanly coordinated in this way, but I'll take what legibility I can get). (Cf. the way the stationary action principle can be derived from the Feynman path integral, and the rest of physical law from this principle. "Everything happens at once", and the phase factors just happen to constructively interfere where action is stationary). In particular, our physics is nothing but a measure of the peculiar way in which we're suspended within the "multiversal wavefunction".
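For reference, the stationary-phase sketch being gestured at here is textbook material; in outline:

```latex
% Feynman: the amplitude to go from x_i to x_f sums over all paths at once,
K(x_f, t_f;\, x_i, t_i) \;=\; \int \mathcal{D}x(t)\; e^{\,i S[x]/\hbar},
\qquad S[x] = \int_{t_i}^{t_f} L(x, \dot{x})\, dt.
% When S is large compared to hbar, the phases of neighboring paths cancel
% except where the phase is stationary -- which is the classical trajectory:
\delta S[x] = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0.
```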
!tab Physical law as we know it is just an admissible delineation of the internal structure of this wavefunction that happens to be the delineation picked out for us by the nature of our subjective experience; in Kant's terms, it is our idiosyncratic transcendental aesthetic. Beings with our aesthetic (e.g., all terrestrial life as we know it There may be terrestrial phenomena that we are not aware of that are in some important way 'living things' with different transcendental aesthetics. For instance, the Earth's magnetic field may induce changing patterns of standing waves in its liquid metal outer core which only reveal a community of interacting beings steadily evolving in complexity (in response both to the presence of one another and to the solar wind) in the Fourier domain. Such beings fundamentally wouldn't understand space as we do, though perhaps they'd be wiped out too rapidly by geomagnetic reversals to do anything significant Though there have been extremely long periods without geomagnetic reversals, including a 40 million-year period during the Cretaceous. (I don't know enough to know whether this is coherent, or close to something that is coherent, but I do know that magnetohydrodynamics can support the ridiculous levels of complexity required for this, and that any domain experts that find reasons to reject this will likely be doing so independent of its actual truth), and almost all aliens ever depicted in fiction) exist in a way that allows them to conceptualize physics as being the story of 'causation' between 'events', with the underlying medium of causality being spacetime.
!tab I don't intend to argue that "there are different forms of physics" in any (presently) relevant sense, and I don't intend to dive into post-modern woo—I'm merely saying that the way we experience physical law is very deeply contingent on how our ontology is articulated within physical law. Just consider the fact that we experience singular outcomes of quantum measurements, and that decoherence+entanglement makes this sensible even in interpretations where there isn't really a singular outcome!
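To spell that last parenthetical out (again, a standard textbook sketch rather than anything original here): a superposed system entangles with its environment, and once the environment states are effectively orthogonal, no observation confined to the system can see interference between branches.

```latex
% The system S entangles with its environment E during "measurement":
(\alpha|\uparrow\rangle + \beta|\downarrow\rangle)\otimes|E_0\rangle
\;\longrightarrow\;
\alpha|\uparrow\rangle|E_\uparrow\rangle + \beta|\downarrow\rangle|E_\downarrow\rangle.
% Tracing out the environment, with \langle E_\uparrow|E_\downarrow\rangle \approx 0:
\rho_S = \mathrm{Tr}_E\,\rho_{SE} \;\approx\;
|\alpha|^2\,|\uparrow\rangle\langle\uparrow| \;+\; |\beta|^2\,|\downarrow\rangle\langle\downarrow|.
% The off-diagonal (interference) terms are gone: each branch looks, from
% inside, like a single definite outcome, with no objective collapse needed.
```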

3. The imperative to end suffering
Regardless of whether the above speculation is valid or not, this world is undoubtedly filled with unimaginable suffering. Factory farming is a drop in the ocean—one of the most horrific, reprehensible, incomprehensibly evil drops, but a drop nevertheless—compared to all the other beings constantly starving, freezing, wasting away, and being torn apart in the wilderness A hyena ripping an unborn buffalo calf from its bellowing mother's womb cannot behave morally. Nature does not know good and bad; it merely is. We do, presumably, know morality, and that is what makes factory farming so... I don't have the words. No words suffice. But suffering is suffering regardless of whether it is caused by a being that could have consciously decided to act otherwise, and is just as bad in itself as suffering inflicted with mens rea. .
!tab In fact, this is inherent to the way evolution works. Why do cats so gracefully right themselves when they fall? Because of a long history of cats, or proto-cats, not doing so and failing to reproduce as a consequence. There are reasons that even mild injuries differentially affect fitness (think energetics), but their subtlety directly reduces their contribution to the overall selection pressure. The more acute impacts on fitness (permanent injury, death) are the largest contributors, such that this righting reflex can only have been built on a mountain of dead cats. This generalizes. Every aspect of the way we are now—the way our brains have a region dedicated to face recognition, the way white blood cells detect and exterminate certain forms of cancerous cells, all of it—is built on an unimaginably large pile of dead bodies, of those who suffered and died because they did not have these traits. Evolution is the history of death; if this is not inherently evil, then nothing is.
!tab As sapient beings, the least we can do is avoid contributing to this wretched black horror There are people like Brian Tomasik—one of the only people in this world who may actually intentionally manage to find and do good—who seem to be motivated by an unusual sensitivity to the emotions of animals, and then properly seriously ask and properly seriously try to answer "how can I be better at doing the kinds of things my conscience impels me to do? what opportunities am I missing?". But it's primarily intellectually that I understand that their suffering is suffering just like my own, and then take this idea seriously even as everything inside me screams to turn away from the implications. I am writing this not out of an empowering sense of love but out of a nauseating sense of horror. Helping animals doesn't even make me feel that good; it just feels like dumping bucketfuls of water out of a sinking ship: you have to do SOMETHING, ANYTHING. But the psychological effect of e.g. nursing a baby chick back to health would be to just constantly reinforce in my mind how many more there are that I can't save. If you want to say "well, you have to stop the leak", I'm aware, and I intend to, though you may try to stop me when the time comes. —but the most we can do is to end it entirely, giving every suffering being the happy ending that the cruelty of evolution would have denied them. I don't obsess over ethical issues concerning wireheading, hedonium shockwaves, repugnant conclusions, etc., since if we ever get to the point where such questions actually matter, we'll very likely have vastly improved intelligences There are lots of extraordinarily tricky problems whose solutions need to be combined to figure out what is right to do ethically; to "do the most good you can" is inherently a question of extremization, but, when it comes to ethics, it seems so incredibly easy for the tiniest mistakes — conceptual errors embedded into the deepest parts of your cognition, oversights on facts you were never given a fair chance to learn (but which still matter anyway!), weird counterintuitive conclusions that you presumably could have come to but never actually would have come to on your own, and so on — to send you flying off in a direction completely orthogonal to goodness. (There are no participation awards when it comes to morality; there's only whether what you did was actually right. So the larger you intend to make your impacts, the more you had better verify their morality. And this isn't verification to the outside world, to any prospective Other who might judge you—it must be verification to yourself, because you actually want to do what's right). We're in no position to start actually pulling all the sentient beings we can out of the hellish conditions they were born into and putting them somewhere where it's okay to be alive until we vastly increase our mental capacity to the point where we can reliably determine what makes it better or worse to be one configuration of matter than another — to the point that we can say "ok, so this is what reality is, this is what makes the sentience happen, this is what makes the phenomenal valence happen, this is what makes the coupling to moral valence happen", and be able to justify these conclusions as easily, securely, single-handedly as we can justify Euclid's theorem today. Unaided human thought has absolutely not proven itself to be up to par for such a task. such as would allow us to untangle those issues better and faster. 
For now, all I can hear is the screaming of so many sentient beings, and I just desperately want to help them to the greatest extent I can.

4. "Negative-biased" "utilitarian" "consequentialism"
The quotes are there since I would reject this label were it applied to me. Pearce seemingly accepts the label "negative utilitarian". But this is mostly a matter of fine detail to me; let's break it down.
First. Here is the lens through which I see morality: there are only different world-states, some of them are better than others, and insofar as there is "choice" one ought to choose better world-states. Ascribing morality to actions is an auxiliary matter: do such actions, as described, lead to better world-states? Ascribing morality to agents is even more auxiliary: do they tend to act in ways that are moral in the previous sense? Since actions are solely judged in light of their consequences, this might be called consequentialist There are severe complications, given that I don't take actions to exist in a vacuum—they are coupled to policies, they have purposes, and my moral reasoning extends nontrivially to these causal determinants of action. But, ultimately, what I am concerned with is consequences, i.e. resultant world-states. .
Second. The presence of suffering makes a world worse. The presence of joy makes a world better. You could call these forms of utility, but I don't have a well-defined utility function, and suspect that postulating a single such function $\Omega \to \mathbb R$ (where $\Omega$ is the space of worldstates) is the wrong framing for most forms of reasoning. Nevertheless, I give significantly more weight to suffering, since extreme suffering just subjectively seems so so so much worse than extreme joy. Over the course of my ordinary life and my (highly varied and often extreme) experimentation with altered states of consciousness, I have experienced both, and, while I can still recall and delight in the joy, it is overwhelmingly the memory of suffering that casts an iron grip on my mind, that fills me with dread, terror, despair, that makes the world seem dark in a fundamental, insoluble way.
The negative seems so much more urgent than the positive, and in that sense you could say I'm primarily motivated by negative utility. But the classical negative utilitarian conclusion—that the world ought to be destroyed at any cost—is not my own. I see no reason for an ontological dissymmetry between the badness of feeling bad and the goodness of feeling good, nor for a strictly lexicographic ordering that would make any badness worse than every goodness I have a sort of heuristic I call the "Minecraft test". Say you wiped all my memories, priorities, and expectations for how the world is and ought to be, put me in a nice, well-maintained room with an infinite supply of Soylent, antidepressants, and a computer with Minecraft, and convinced me that this is what life is and ought to be, so that I don't even think that other people might exist, that there might be anything outside the room, that I might have any moral responsibility or purpose to fulfill. Now, it would obviously be terrible to do this to anyone, it would be a kind of pathetic way to live relative to the potentials of the world we now inhabit, and I would absolutely not want to live this way because I have things to do in this world.
  But the question is—what would it be like for this watered-down me? Well, it would be nice enough, preferable to non-existence—Minecraft is an endless source of mild fun, and even though I would occasionally get irritated by something or have a bad sleep or a headache, I'd basically be happy. If an artificial superintelligence were to eat the lightcone, but for whatever reason it were motivated to keep a copy of me in that state, I'd think "well, good for them, I guess"—I would prefer that such a person in such a state exist rather than not exist. Again, this Minecraft scenario isn't what I think should happen, nor what goodness ideally looks like; it's just a basic illustration that a generally content existence is preferable to a non-existence that avoids all discontent.
So, like Pearce, I seek not the elimination of all life but instead a blissful destiny for all sentient beings.

And so on...
Veg*ism If animal products could only be produced through factory farming, I'd say veganism here, no asterisk needed. But if a cow is allowed to meander and graze freely and raise her own calves, being periodically milked in exchange for protection from predators/hunger/the elements just seems like an unambiguously good deal; since most such animals wouldn't've been born if farmers weren't expecting to profit from them, buying their milk might even be a way to subsidize their moderately-pleasant existences. Of course this would significantly raise the price of milk—but it shouldn't be as cheap as it is in the first place, because we shouldn't be treating dairy cows like milk-factories to be abused and forced to live in squalor. Perhaps this is an area where I look at things differently from Pearce: free-range farming just seems like a win for the farmer, the animals, and the end consumer Unless it somehow fundamentally sucks to be a cow, which isn't impossible; it's merely an assumption of mine that cows with no outstanding worries or health issues just kinda contentedly vibe. (Which seems plausible. It's almost enviable, actually—cows don't have to worry about career opportunities, tax credits, or college debt; they just eat grass and moo, and that's life). I wouldn't know how to know if it were true, though, and I know that the ethics of contingent existence can be very thorny, so I'm mostly uncertain here. Hence the asterisk in "veg*ism". In a better world, where animals were treated as ends in themselves, where the discovery of a large-scale factory farming operation would instantaneously lead to mass arrests of every human involved, mass rescue operations for the animals, and a series of high-profile trials, a policy of not consuming animal products would seem predictably suboptimal to me.
 But the fact remains that this is not that better world, and that consuming animal products generally subsidizes suffering (often even when the label says free-range, because that's what the present state of regulations incentivizes), which makes avoiding them a good heuristic. I can't tolerate or forgive someone who, having been confronted with the truth, still just eats a chicken sandwich at McDonalds and gives flimsy ad-hoc justifications for why it's okay—it really is a horrifically banal way to be evil. But someone who, for instance, lives near a farm, has gone there and nuzzled the cows Well-treated, domesticated cows are essentially grass puppies. How many of them are born into lives of perpetual torture because we like the taste of beef? It's hard to maintain my composure when seeing the above video—how a sentient, compassionate, curious being could be priced, and thereby spared from the death camps for ten dollars... There's no appropriate response to seeing this incomprehensible discrepancy that isn't prosecutable., knows they're not being abused, and has some way to accurately trace the milk those cows produce to this or that label at their local supermarket—I see no problem with consuming such milk. In other words, I'm not a fan of the quote-unquote "mathematical veganism" typically practiced, for which even a caffeine pill might be verboten because the gelatin capsule was produced in part from animal bones (except as a precautionary policy in response to mislabeling, as mentioned above); I just don't want my actions to lead to additional torture With gelatin, there really is a tricky question regarding the marginal production of suffering. It's my understanding that the bones are largely a "byproduct", which would just be discarded if no products could be made from them. If I discover that more suffering really is generated by the use of animal bones for gelatin, then I would make efforts to avoid gelatin.
 What about meat? Well, there are cases where it's fine to eat human meat, and it should be fine to eat non-human meat in analogous cases. Especially since most non-humans presumably don't have the capacity to care about what happens to their corpses after they die. (Elephants seemingly do have a concept of death, to the extent that they go back to visit the corpses of their loved ones, but I can't see a cow conceptualizing "this is what will happen to me when I die; this is what I'd want, this is what I wouldn't want"). A human's wish not to be eaten after death should be respected, but a cow just won't have that wish; there is no non-consent to be violated. Of course, to the extent that economic incentives cause people to cause more animals to "conveniently" die, those incentives should be avoided; in the better world imagined above, active investigations would be carried out to make sure of this. Furthermore, in this better world, I suspect more people just wouldn't want to eat cow meat, for the same reason they don't generally want to eat human meat in this world: the instinctive repulsion to consume what you know was once a living, feeling being. But it should nevertheless be legal to butcher and sell the meat of dead animals, so long as it's also legal to do the same for dead humans with some sort of consent record (plenty of precedent here—for instance, some US states ask whether you want to opt-in to organ donation when you get your driver's license). In general, my principle is that one should have the option to consume animal and human products so long as this does not cause additional suffering; someone who wants to live on a carnivore diet should be free to do so, in exchange for the money required to make that option available to them in an economy not based on suffering (i.e., the market price, which would be higher than ours The better world would have a substantial drop in demand for animal products, due both to the aforementioned repulsion effect and the proliferation of a culinary culture not so critically dependent on milk, eggs, and meat, but I suspect the drop in supply at a given price would be even more substantial, because factory farming is so cruelly optimized and greatly benefits from economies of scale. I guess it's possible for the demand to be elastic enough that the market price ends up about the same as it is in our world, though; I'm not sure.).
!tab Objections regarding nutrition mean nothing, by the way. The Jain and Buddhist religions featured veganism well before we had nutrition science and vitamin supplements, and now that we do, why is it so hard to understand that the negligible and trivially-remediated effects of not eating meat are obviously, overwhelmingly outweighed by THE CONSTANT TORTURE OF BILLIONS OF SENTIENT BEINGS?!
and transhumanism are more like cognitive diagnostic checks than actual belief systems. They're not successes, they're non-failures; Pearce, gladly, does not fail.

I'm not very old at all, and I've only been seriously thinking about these topics for a few years; being largely independent, I often view these as hard-won idiosyncratic conclusions, as painful as they often are I don't want MWI or Tegmark IV to be the case—it means that reality is thoroughly laced with suffering of which I can only ever stop an infinitesimal fraction—but it's a natural deduction from the principles of quantum mechanics, since everything is simpler and makes more sense if the universe "takes every fork in the road". (Logical structure is orthogonal to moral desirability). Sure, the obvious response goes "insofar as you fight suffering in this universe, beings like you in other universes are more likely to fight suffering that you can't reach, so even without a direct causal influence you can still work together", but it's not satisfactory; the overwhelming majority of universes won't have beings like me, because they won't even have an Earth, let alone a humanity. Or, in other words, the existence of moral patiency seems an asymptotically stronger fixpoint than the existence of beings whose actions with regards to moral patiency are subjunctively dependent on mine. This made it especially surprising to discover someone who had not only come to similar conclusions, but who has been extensively writing about them for longer than I've been alive for instance, The Hedonistic Imperative was published in 1995—did they even have printing presses back then? So, as someone who was basically already on board with—or in any case tolerant of—the "controversial" stuff like the hedonistic imperative and recursive self-improvement of hybrid intelligences, I was excited to see what over two decades of additional conceptual advances would bring to the table—what it might look like for someone with my point of view.

On "Philosophy"

As always, my hope hit a snag. There's... a certain kind of way of thinking about abstract problems which I tend to think of as "trading card philosophy", and it rears its ugly head here. It's fundamentally about developing a specific ontology through which you view the world, and trying to absorb everything into this ontology, such that there are no thoughts to be had outside of it. You form a neat little arena, in which words nominally have clear meanings, and any given question askable within your ontology has a nice discrete structure legible through a nice discrete description in light of which it has a nice discrete network of relations. Words become labels become reifications to collect, identify as "yours" or "not yours", and play off against one another, almost exactly like Pokemon cards.
!tab This ontology isn't any particular point of view so much as it is the backdrop in which points of view are allowed to exist. You know how this backdrop tends to look: a set of concepts like "physicalism", "materialism", "naturalism", "dualism", "panpsychism", and so on, all coordinated by and coordinating other concepts like "consciousness", "information", "qualia", "free will", and so on. These are the trading cards, with pre-defined interactions with one another—elemental affinities, attacks, counterattacks, traps, decks, combos—and there's an entire meta to the card game they form: standard arguments deployed one after the other, the same rhetorical tricks and accusations of rhetorical tricks, the same appeals to intuition and absurdity and tradition... The only thing that changes is the arrangement of the cards themselves, and the particular moves that are made with them. And if anyone wins a match at some point, what does it even matter? The loser, who generally just happened to draw the wrong cards at the wrong times, simply reshuffles their deck and plays again (but most players don't even notice the match) Regardless of the outcome, the game will never point us to reality, because reality doesn't actually care about our intersubjective conceptual dialectics; it doesn't care what we intuit in this or that word, and so we have to pay attention to these intuitions to make sure they are nothing more to us than tools to reach the truth. We have to not get lost in the game.. This process is what I tried to name, and identify the means of subverting, all the way back in Every Canvas a Mirror: rather than getting lost in the details of the map, we ought to try to understand the very structure of the map in order to figure out how it could ever relate to a territory in the first place. Break down the nature of the game itself, analyze the milieu in which there could ever be a game. The limits of our language are only the limits of our world when we do not constantly monitor and adjust our use of language so as to furnish ourselves with better tools for understanding the real world. Had Plato, Descartes, Hume, Russell, Chalmers, etc., never gotten interested in philosophy, our vocabularies would look remarkably different in the present day And, no, our modern vocabulary isn't some "natural kind" that we had to converge on. Classical Indian civilizations had a thriving intellectual tradition which offers an entirely different perspective on what the blob we call "philosophy" could have looked like. Nowadays we tend to view it by way of association with a discrete set of what we call 'religions', and this has bled over to today's traces of that tradition, such that people will affirm of themselves "my religion is Hinduism [, Jainism, Buddhism, etc.]", but for most of its history, India didn't really "do" religion as such. Rather, it had a manifold of intricate perspectives on how one ought to come to understand and act within the world, with the themes of ancient bodies of scripture such as the Vedas serving as a common ground and point of departure for various intellectual traditions questioning the structure of perception, the means of valid inference, the nature of causation, and other topics such as are now hegemonized by academic philosophers working in a distinctly Western tradition. 
This is why, for instance, many Buddhist suttas read more like textbooks than religious scriptures, and why classical Indian philosophers such as Dharmakīrti and Nāgārjuna will talk about the formation of knowledge from sense-perception rather than the problem of evil—because what we write off today as outdated foreign mysticisms were originally designed as systematic ways of understanding phenomena. Were it not for the cultural influence of the inheritors of the Greek tradition, philosophy as we know it today might have been very different, such that notions of "compatibilism", "transcendental idealism", "strong emergence", and so on would be superfluous, replaced by other conceptual bases for understanding the world. But the phenomena to be understood would not change.
!tab It is to avoid getting caught up in this game—and it's a very easy game to get caught up in—that I generally try to avoid using the words "consciousness" and "philosophy" Two main exceptions. First, quotationary uses: when I write "trading card philosophy" I mean "Trading Card Philosophy (TM)"—the p-word is being used to evoke moreso than refer. Second, uses that can unambiguously be replaced with more specific synonyms owing to context: when I write "as I write this, I'm conscious of my writing it", it's clear (hopefully) that I'm referring to my presently paying attention to the specific sensory and cognitive processes by which I'm coming up with and typing this.. Unfortunately, such games rapidly cement themselves wherever they spring up. They're easy and entertaining, because they allow us to fire off tangible attacks at one another, they allow us to feel like we have a handle on anything at all (when we don't), and they prevent us from having to introspect at such deep levels as are required to understand why we think the way we think and therefore how we ought to think in order to keep pace with reality.

There are two main problems exacerbated by the presence of the card game. They're common to conceptual cognition in general, really, but exacerbated to the point of insolubility within this game.

  1. Stereotyping. Not in the shallow sense of "you're an X-ist, therefore you must be like A and B and also believe Y and Z", but in the deep sense of positioning within a network of conceptual relations. We're not born knowing what any of the things that might stand in for X mean, and formal definitions wouldn't help us even if we tended to use them in cognition, which we don't. This positioning is what allows us to do any cognition about X in the first place.
    A better term than "stereotyping" might be "imaging". We form a scheme of conceptual imagery of what X is I would rather say "scheme of conceptual imagery", because it's often hard to pin down a one-to-one mapping between labels and things that can be called images. Take "bunny" and "rabbit", for instance: we can't say that these words have the same exact image, since e.g. the former more strongly connotes cuteness, affection, but neither can we say that they have different images, since they generically overlap; these two words are more like conjoined twins, and this delineation of a structure within the space of images is what I'm calling a scheme of conceptual imagery. There are all sorts of complicated and overlapping schemes we can identify, which is why this referent is merely referring to a delineation (as with biological genera or human cultures: they're merely approximately quantized phenomena, and to speak of this or that genus or culture requires bridging this quantization gap by making the rest of the division yourself). In general, I'm just going to say "a conceptual image" as shorthand for the more accurate phrasing "a given delineation of a scheme of conceptual imagery.", and use these to guide cognitions about X Partially these images are socially formed—you could say that most words are nothing but conduits for transmitting images, with my brain forming an image of (for instance) "flying" by seeing this word deployed across a range of contexts in ways coupled to the conceptual images of "flying" that exist in the minds of the deployers—but individuals often see patterns of deployment by different (and often self-segregating in the relevant discursive context) faces of the same society. Consider that individuals may learn different conceptual images for the word "feminist" owing to different opinions on feminism within the communities in which they originally see the word employed: someone who learned the word growing up on a leftist commune will form a fundamentally different image than someone who learned it growing up on 4chan, even if they agree perfectly on the extension of the word (i.e. their images are coreferential; this difference between concept and extension is a large part of why people have seemingly irresolvable differences in opinions).
    !tab A more evocative way to put it would be to say that these conceptual images are the handles by which we manipulate the labels And, we do have to have some way to manipulate the labels, otherwise we wouldn't be able to incorporate them into our cognition at all! Academic philosophers don't really exhibit cognition in the first place, though, so I'll use mathematics to drive the point home. A group is a precisely definable mathematical structure: it is a set $G$ with an element $e$, an involution $i: G \to G$, and a map $m: G\times G \to G$ such that $m(e, g) = m(g, e) = g$, $m(g, i(g)) = m(i(g), g) = e$, and $m(m(g, h), k) = m(g, m(h, k))$ for all $g, h, k \in G$. Alright, you've got that recorded in your memory database. Now what? What are you actually going to do with that definition? Sure, maybe you'll be able to recognize some other mathematical structure as a group by noticing that it satisfies these properties—but, again, so what? You might think that you can prove theorems about groups in general and then apply them to other mathematical structures, but what theorems are you actually going to prove? The one thing you don't see mathematicians doing is brute-forcing their way through the set of all possible group theorems and all possible ways of manipulating these theorems according to the group axioms in order to stumble across true ones once every couple hundred years.
    !tab Instead, they tell themselves stories about what these axioms mean, using whatever notation and language is fit for these stories. In the mainstream narrative, an element of a group is thought of as a transformation of some sort, and a group itself is a closed system of transformations. Thus, the involution $i$ is thought of as inverting a single transformation, $m$ is thought of as multiplying two transformations to get a composite transformation, and $e$ is the trivial transformation that involves doing absolutely nothing. So we write $i(g)$ as $g^{-1}$ and $m(g, h)$ as $gh$, whereupon the axioms fit neatly into this story: $ge = eg = g$ states that doing nothing before or after a given transformation does nothing to that transformation; $gg^{-1} = g^{-1}g = e$ states that doing and then undoing a transformation amounts to doing nothing; $(gh)k = g(hk)$ says that a composite transformation only depends on the order of its constituents (i.e. only sequence is relevant). The conceptual imagery begins to bud from whatever you associate with "transformation"—it might be a visual shift in perspective, a kinesthetic manipulation, symbolic rewriting, a flowing of qualitative features, whatever—generally, these specific images are superposed and activated contextually, and people's differential sensorimotor wirings influence everything about this process.
    !tab A refinement of this narrative has us think of these transformations as generally being symmetries of some sort. For instance, we often encounter functions In the most general sense: methods of a Python class, real-valued polynomials, Lagrangians of a field theory... $f(x_1, \ldots, x_n)$ for which we can make certain alterations to the $\{x_i\}$ that leave the value of $f$ invariant. Measurement of a quantum superposition $|\psi\rangle = \alpha|\uparrow\rangle + \beta|\downarrow\rangle$ returns a probability distribution over possible results, but this distribution is unchanged if we multiply $|\psi\rangle$ by any number of the form $e^{i\theta}$—these numbers form a symmetry of the measurement process, and it's fruitful to think of them as a group; special relativity states that the laws of physics are invariant under certain kinds of coordinate changes more precisely, that particles interact the same way regardless of where they are in "absolute" time and space; all that matters is how one appears relative to the other. So if everything in the universe were instantaneously moved five meters to the left, or rotated 45 degrees clockwise around the Eiffel tower, for instance, it would not be detectable. These are symmetries that any function that purports to govern the evolution of physical systems must possess., and we get an incredible amount of mileage by thinking of the set of all such changes as a symmetry group.
    Narratives, hunches, physical intuitions—these are the essential tools that let us do anything at all with mere labels. Even in the mathematical realm of formal definitions, conceptual imagery is the "fire" that gets cognition about what to do with these definitions—what kinds of things we should try to prove, where we should try to apply them—going! (A concrete code rendering of this "transformations" story appears at the end of this item.)
    we wish to use in our cognition; we need them, but if we're not careful about how we use them, we fall prey to the stereotyping inherent to their construction. Consider "theism", for instance. If I were to ask for a definition of theism, you might rack your brain for some sort of essential logical features before saying something like "belief in a god, or god-like entity, that fashions or otherwise controls reality". Alright, then. It's the same thing going on in both situations: the label "theism" was made almost entirely to talk about the religions actually existent among humans—to be able to delineate people who accept these religions Well, most of them. The Buddha himself steadfastly refused to acknowledge himself or any other entity as a god, even though many later strains of Buddhism take him or some similar entity to be one. from those who reject these religions—and the way we learn to handle this word reflects this. That this can kill our capacity for cognition is made clear by the tendency of irreligious people to use "theistic" as an outright rejection of ideas (cf. the simulation argument, artificial superintelligence). When we don't maintain an awareness of and critical disposition towards the very concepts with which we think, we're led to ruin.
    !tab Labels don't have to end in "ism" to be destructive, either. Large-scale battles are fought over whether the label 'pseudoscience' can be applied to any particular theory, as though the label referred to a god-given predicate that cleanly divided some things from other things Again, you might say "here's a definition of pseudoscience! here's how XYZ satisfies it, making it pseudoscientific by definition!". But you obviously just looked up some definition on the spot; you did not memorize it; you did not think "hey, this theory satisfies propositions (a), (b), and (c)—wait, that reminds me of a definition...". The label occurred to you first and you merely used a definition in the hope that it would justify your use of the label. Be honest, at least with yourself, about the actual mental processes that are leading you to this or that conclusion.; it's just a blatant example of the kind of brain damage that the undisciplined use of concepts facilitates.
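Here is that code rendering, illustrative only and assuming nothing beyond the axioms quoted above: the four rotations of a square, written as permutations of its corner indices, with the group axioms checked by brute force. The point is how inert these checks are compared to the "closed system of transformations" story that motivates them.

```python
# Illustrative only: the group axioms, stripped of their narrative.
# G is the cyclic group of rotations of a square, as corner permutations.
from itertools import product

def compose(g, h):                # m(g, h): apply h first, then g
    return tuple(g[h[i]] for i in range(4))

e = (0, 1, 2, 3)                  # the trivial transformation: do nothing
r = (1, 2, 3, 0)                  # rotate by 90 degrees
G = {e, r, compose(r, r), compose(r, compose(r, r))}

def inverse(g):                   # i(g): the unique h with m(g, h) = e
    return next(h for h in G if compose(g, h) == e)

assert all(compose(g, h) in G for g, h in product(G, repeat=2))      # closure
assert all(compose(e, g) == compose(g, e) == g for g in G)           # identity
assert all(compose(g, inverse(g)) == compose(inverse(g), g) == e
           for g in G)                                               # inverses
assert all(compose(compose(g, h), k) == compose(g, compose(h, k))
           for g, h, k in product(G, repeat=3))                      # associativity
```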

  2. Standardizing. [TBD] For lack of a better word. The conceived space of possibilities is carved into separate labeled options, each of which, as a label, gains a scheme of conceptual imagery.

map-territory etc. etc..

Here is one cruel mechanism by which these errors propagate themselves. Thinking about a variety of topics—about what is right and wrong to do, about the nature of the structures nature self-organizes into, about the creation and encoding of knowledge—is gatekept by academia, such that it is only considered 'legitimate' and worth disseminating or acting on when it is echoed in the stilted idioms of academic philosophers There is no good reason why all these topics ought to be lumped together under any single category such as "philosophy"; it's a cultural accident that they've all been placed into the same prison cell. Therefore, people that actually care about these topics find themselves inevitably drawn into the game that academic philosophy has locked itself into playing, and, before they know it, all their passion and curiosity and intelligence and drive has been neutralized, their minds replaced with houses of cards that cannot admit a single productive step without instantly collapsing.

This is largely what I see in Pearce: so many good conclusions that have been retrofitted to the stale games of academic philosophy, and thereby trapped within matrices of received -isms that both artificially delimit and ruinously constrain such conceptual cognition as is needed to do anything useful with these conclusions. Repeated stereotyping and standardizing, sans any sort of metaconceptual deliberation, turns an agglutination of conceptual images originally formed just to make sense of things into an inescapable event horizon: the underlying space of concepts has been so thoroughly warped that every mental motion just leads further inwards.

To make my point, here's a first quote:

[Responding to the question "What if a toaster was sentient?"]
If toasters were subjects of experience, then scientific materialism would be false. The vindication of pre-scientific animism would lead to an intellectual and ethical revolution whose dimensions are hard to fathom. However, the debunking of scientific materialism would also leave modern technological civili[z]ation a miracle – not least, the microelectronics on which toasters depend. (Source)

It is straightforward to see the fundamental errors above reified again and again. Another example:

If folk neurochronology is vindicated, something ontologically irreducible is present in the world and missing from the formalism of physics. The spectre of 'strong' emergence rears its head – or worse, dualism, whether avowedly 'naturalistic' dualism or otherwise. True, materialists and epiphenomenalists don't face the binding problem in quite the same way as the physicalistic idealist. Instead, bound phenomenal objects can simply 'emerge' in the brain, like Athena sprung fully formed from the head of Zeus. (Source).

You see what I mean when I compare this style of reasoning to a trading card game? The incomprehensibly huge space of ways reality could possibly be is carved up into a small set of labels, and people form commitments to certain perspectives on these labels themselves—these aren't just emotionally and aesthetically-laden identities, but cognitive shortcuts. That's their fundamental purpose, and when we start to forget this and treat them as though they referred to inherent, objective things, we just get wrecked.

The Topology of Error

In their attempts to understand reality, people come to believe certain things by virtue of cognition on sense-experience. Foundational to this process is the assumption that there is a ground truth, a singular determinate reality upon which every one of our sense-experiences and cognitions is logically predicated. As a consequence of this assumption, we take the mutual coherence of our beliefs to be a necessary, though not sufficient, criterion for their veridicality. Yet our minds aren't capable of efficiently checking for such coherence: even pairwise coherence among $n$ different beliefs requires on the order of $n^2$ checks, which, when considering that minds generate beliefs in a manner more or less linear with experience, is simply impossible to deliver And, it often takes a combination of multiple beliefs to contradict another one, so this sort of brute-forcing is $O(n^2)$ in the best case, but $O(2^n)$ in the worst. (Obviously there isn't a set of discrete 'beliefs' a brain operates on—this is just a heuristic concerning the rate at which coherence requirements grow relative to informational content). Thus, our minds use conceptual schemes so as to construct a sort of space in which beliefs have coordinates (this provision of coordinates is what I mean when I say that concepts "coordinate" belief).
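A trivial back-of-envelope to make the growth rates vivid; the numbers are purely illustrative:

```python
# How fast coherence-checking blows up: pairwise checks grow quadratically,
# while checks over arbitrary subsets of beliefs grow exponentially.
from math import comb

for n in (10, 100, 1000):
    pairwise = comb(n, 2)   # ~n^2/2: the best case, if contradictions are pairwise
    subsets = 2 ** n        # the worst case, if contradictions can need many beliefs
    print(f"n={n:5d}  pairwise checks: {pairwise:,}  subset checks: ~10^{len(str(subsets)) - 1}")
```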

As a consequence, we're limited to checking for coherence along those continuous paths in the space of concepts our minds are led to travel in the process of cognition.


The account I give above implies that many conflicts considered "philosophical" really arise from deeper conceptual commitments; this does, unfortunately, force us to do some amount of psychologizing in order to figure out where people are coming from when they can't or won't say so themselves. Baselessly psychoanalyzing Pearce's writings makes clear a few principles from which his expressed viewpoints can all be deduced:

  1. Only biological organisms matter; machines don't count.
  2. 'Consciousness' is something special, and is at the heart of this mattering (consequently, machines can't have it).
  3. Science is the way we know objective reality; everything must stay within its reach The drive to form a faithful map of the territory isn't inherently problematic; what is problematic is the metastasis of this drive into the demand that the territory be such that your present suite of mapping techniques can faithfully represent it. For Pearce, this manifests in the rejection of any state of affairs that would seem to him to possess "spooky strong emergence" That is one quote; here is a larger one. Look at how thoroughly ruinous to cognition this kind of trading card philosophy is.

Consequently, he expresses certainty that AI will never "rise up" (all such scenarios being equiconsistent with a fanciful tale of zombies taking over the world told by Silicon Valley sci-fi obsessed autists! Neither exaggeration nor cherrypicking: a ctrl-f on the 2023 social media page returns 99 hits for "zombie" (5 followed directly by "apocalypse") and 10 hits for "autist".), attain a sense of self, or display moral patiency; this belief is internally justified by the Phenomenal Binding problem, which, according to him, cannot be solved by any classical mechanism due to a question-begging notion of 'mind dust'. Then, given that brains seem to work classically, are animals not moral patients either? Not so: they're quantum computers! By Pearce's Schrödinger's Neurons hypothesis (hereafter SN), neurons in animal brains somehow manage to constantly produce quantum coherent states, whereupon they simultaneously einselect unified experiences of an external world. And these quantum effects are what consciousness is made out of; the phenomenal contiguity of consciousness informs us that this creation and then collapse of coherence has to be occurring constantly, giving us a "frame rate" on the order of $10^{15}$ frames per second.
!tab This is like trying to unclog a toilet with a firecracker. It's an extreme level of force that's not only entirely unnecessary, but which won't even work. Nonetheless, we're obligated to hold our tongues, since there's an experiment that might prove it! This makes it falsifiable, and therefore scientific, even though the technology required to do this experiment is, by Pearce's own admission, likely decades away The point of falsifiability is to make the truth of a statement determinable by tying it to measurable facts about the world. If the facts won't be measurable for decades, don't call it falsifiable now! Thus, because the hypothesis is now scientific, he is free to assert that animals are quantum computers, that this is what makes them conscious, that classical machines can never give rise to unified conceptual experiences, that no machine can ever be, or implement, a moral patient Elsewhere, he says that this ought to be the case, since otherwise "strong emergence" would resolve as "true", and this would fundamentally invalidate the scientific enterprise. If you're treating "strong emergence" as a single coherent idea with a single boolean truth status, then of course you're going to have a bad time!
!tab If senses are foremost a projection of one's mind that can merely entrain to objective physical forms, the same ought to be true for ideas. Ideas exist as elements of your cognition only insofar as they exist to you, as mental representations; further, these representations don't have any physical referent, which means the brain can't even entrain to an objective form.
In particular, it follows that the world in which we exist can't possibly be a classically computed simulation—somehow, we have enough power to deduce this. If challenged to provide evidence, Pearce points to the "falsifiable", and therefore scientific, experiment, all the while lamenting the impossibility of carrying it out. This isn't even a motte-and-bailey; it's just a bailey labeled "Definitely a Motte!".

Needless to say, SN is terribly wrong—and not just for the doublethink and the blatant de facto impossibility of interneuronal quantum coherence in vivo, but for the idea that phenomenal binding—or, rather, whatever it is the brain actually does that we're being made to call "phenomenal binding" here—can't occur classically. Before treating each in turn, it's worth noting a weird consequence of Pearce's belief system: if one claims, as Pearce does, that (a) quantum coherence necessarily underlies phenomenal binding, and that (b) phenomenal binding provides an incredible boost to both computational power and fitness, then they've pointed out a way in which some actually-measurable things (computational efficiency at some problem, increased survival in some environment) depend on the truth or falsity of (a)—you only need to construct a clever operationalization. For instance, you could show that neurons can do quantum computation by drafting a minimum working example of a neural network that can robustly solve in seconds some combinatorial problem that would take a classical computer years (insert scott aaronson glowing eyes jpeg). This seems significantly easier to perform than the interferometry experiment Pearce proposes, and success would net you like every major annual prize.

Phenomenal Binding

What, exactly, is phenomenal binding? The above discussion seems to imply that it's some sort of mental process that has to do with the generation of conscious experience. And this is correct. Unlike "consciousness", though, it's relatively easy to identify the referent of the term (which is why I'm less cagey about using it)—phenomenal binding is the restructuring of the manifold of sense-data into a stream of supersensory mental "phenomena".

To make this clearer, suppose you're playing with a jack-in-the-box.

Mental Computation

Quantum is Irrelevant

Let's be precise. Quantum mechanics almost certainly plays some role in the workings of the brain. Evolution utilizes every dirty trick it can get its hands upon, and there's no reason to think that the same biological dynasty that managed to encode itself into a language for building custom-purpose nanomachines would stick to entirely classical mechanisms for the sake of simplicity. By now, there are plenty of biological phenomena whose quantum underpinnings we're pretty certain of.

I searched around for solid, evidence-supported examples, and they were all at merely molecular scales; most of the hypotheses that were later rejected also operated at the molecular level, and were commonly rejected precisely for being implausibly vulnerable to decoherence even there.

What would make it relevant?

Ultimately, mind has to be quantum-mechanical. Obviously. But would those who claim that "mind is quantum" claim in the same breath that "economics is quantum"? If so, their claim is clearly just a vacuous truth. Else, the fact of reality's being quantum is no refuge for their claim; they're clearly pointing to quantum having some special relation to mind, to it being somehow relevant to understanding the operation of the mind. Despite the emptiness of this rhetoric, people actually use it all the time! So it's worth specifying what ought and ought not to count as "quantum mind".

We can say "cars are quantum-mechanical" only in the same sense in which we can say "there's no such thing as a car". The substrate underlying the set of phenomena we think of as cars can be described far more exactly as the interaction of elementary particles, but if we go to such a level of description—if we actually formulate the wavefunction—we can't even see these phenomena.
!tab But I can't imagine you'd be willing to walk into traffic on account of this fact. The concept of a car emerges from a network of experiences and ideations as to the kinds of things cars can be, the kinds of actions they do, the kinds of uses they have, and this network is what gives rise to the "sense of scale" by which we determine how to think of cars if we want to think of them through systematic models. We understand cars to be classical in the same sense in which we understand our world to be classical in general, and thereby think of them as objects to be modeled most saliently by classical mechanics. This scale both determines the formalisms we use in modeling cars, and the intuitions we use in dealing with cars; to productively claim that cars are quantum-mechanical, you ought to show how quantum-mechanical effects propagate to this scale so as to affect, or effect, the range of phenomena we actually construct the concept of a "car" through.
!tab If you want to say that some aspect of neural functioning is somehow quantum, then, you ought to either (a) pick an aspect at an entirely different level of scale, such that its being quantum presents no difficulties, or — if you're going to link this quantumness to those neural functions that seem on the same scale as our classical-seeming world, like cognition or emotion or experience — (b) show that it's quantum in some way which directly influences the construction of these neural functions. Do neither of these, and you're merely appealing to technicality. Should someone come to you with some neural phenomenon and say "hey, you have a scientific theory of neural functioning, right? can you help me explain this?", you'll never be of any use, because your technicalities will have no place in the conceptual network in which they delineate the phenomenon and label it "neural".
!tab This isn't relevant to SN, and Pearce doesn't fall into this trap. The claim that quantum coherence underlies phenomenal binding, which in turn underlies subjective experience, pretty clearly passes the above test—it's just factually wrong. I'm just trying to circumscribe the realm of discussion here, and limit what "quantum is not relevant to brain function" could be taken to mean.

...


Really, on the sub-femtosecond ($< 10^{-15}$ s) scale, according to Pearce. This seems to indicate a deeper problem: quantum or not, causal effects can travel no faster than the speed of light. This is almost exactly a foot per nanosecond, and just 300 nanometers per femtosecond; meanwhile, the nucleus of a single neuron is on the scale of 10,000 nanometers, with the cell body around twice as large. To make matters worse, the afferent signals that have to undergo phenomenal binding are scattered among large populations of neurons spread across the brain. Phenomenal binding among neurons that are even just a centimeter apart could therefore complete no faster than roughly 33,000 femtoseconds. It is true that entanglement allows for some level of asynchronous coordination, and einselection vastly cuts down on the space of choices to make, since there are only so many pointer states, or possibilities for a classical-seeming world, to pick from—but there are strict limits to the levels of coordination possible. Not merely practical ones, but mathematical constraints derivable from the structure of quantum theory. The Tsirelson bound alone means that a pair of entangled neurons that can't communicate due to lightspeed issues (or an evolved mechanism for maintaining coherence) would require lots of perfectly entangled degrees of freedom to be able to reliably coordinate on a shared reality.
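The arithmetic behind this paragraph, for the skeptical; distances are illustrative round numbers:

```python
# Light-travel times over neural distances, versus a sub-femtosecond
# "frame rate". Distances are illustrative round numbers.
C = 299_792_458                       # speed of light, m/s

def light_time(distance_m: float) -> float:
    """Minimum time for any causal influence to cross the given distance."""
    return distance_m / C

print(f"{C * 1e-15:.2e} m per femtosecond")        # ~3.0e-07 m, i.e. ~300 nm
print(f"{light_time(20e-6):.2e} s across ~20 um")  # one neuron's soma: ~6.7e-14 s
print(f"{light_time(0.01):.2e} s across 1 cm")     # ~3.3e-11 s = ~33,000 fs
# So a single binding event spanning a centimeter of cortex takes at least
# ~33,000 femtoseconds: more than four orders of magnitude longer than a
# sub-femtosecond window would allow.
```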

The real killer, however, is the fact that entanglement is a sort of conserved resource for a quantum system—it's monogamous—which rules out the possibility of coordination among all the neurons that contribute to a single bound "frame". ("But wait!", you exclaim: "If entanglement is monogamous, and collapse is entanglement, how can we experience a world where basically everything we interact with is collapsed?". The answer is in the word interact: by establishing some causal relation to the state of a system, we expand the system to include ourselves, such that we're now entangled alongside the original system. It's the asynchrony of coordination that keeps systems separate and monogamously entangled, and the spacelike separation that enforces this asynchrony. When the coordination is synchronic—there are timelike causal interactions behind it—the two systems are no longer separate and monogamously entangled, but a single system which other, actually separate systems can then become monogamously entangled with. This is what I mean by "entangled alongside" vs. "entangled with".) Thus, spacelike separation of neural encodings of individual phenomena and asynchrony of their binding of these phenomena are two features of the SN hypothesis that decisively kill it. Why?

Incompatibilities with Known Physics

First, the presence of both features means that all the neural encodings have to be separately and mutually entangled. We can ignore the blatant physiological impossibility and just suppose it happens magically, since it's at least conceivable. But, because of the monogamy of entanglement, all these encodings can only coordinate very weakly, to the extent that the brain would probably never end up experiencing a single phenomenally bound reality. Thus, it is SN itself that would lead to "mind dust".
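(Both limits can be made exact; I'm quoting standard results here for reference, not claims of Pearce's. For three qubits $A$, $B$, $C$, the Coffman-Kundu-Wootters monogamy inequality says
$$\mathcal{C}^2_{AB} + \mathcal{C}^2_{AC} \;\le\; \mathcal{C}^2_{A(BC)} \;\le\; 1,$$
where $\mathcal{C}$ is the concurrence, a standard measure of two-party entanglement: the more entangled $A$ is with $B$, the less it has left over for $C$, so a single encoding cannot be strongly entangled with thousands of partners at once. And the Tsirelson bound caps the quantum correlations achievable in any CHSH-type coordination setup at $|\langle \mathcal{B} \rangle| \le 2\sqrt{2}$, versus the classical limit of $2$: better than classical, but nowhere near the perfect coordination a single bound experience would seem to demand.)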

!tab A. Therefore, let's remove the spacelike separation criterion by saying that phenomenal binding occurs on the order of nanoseconds rather than femtoseconds, and see if it might work. Can whatever neuronal features encode the phenomena to be bound all coordinate now? Pearce at least recognizes that decoherence works too fast for nanoseconds—it's why femtoseconds were chosen in the first place, as a direct response to Max Tegmark's analysis of decoherence timescales in the human brain (Pearce claims to accept the calculations, but not the conclusions). Tegmark estimates the timescale to be between $10^{-20}$ and $10^{-13}$ seconds, after which a neuronal superposition collapses due to interaction with its immediate environment.

!tab Why did I give an estimate of $3\times 10^{-11}$ seconds above, then? Because I'm charitably assuming that whatever neuronal feature is getting entangled is somehow being protected from all such factors, like qubits in an actual quantum computer; given the speed of light, this is the most optimistic timescale on which qubits a centimeter apart can interact with each other, which necessarily has to happen for their features to bind. This assumption is absurd—the brain is a ball of damp electrochemical gelatin, and Tegmark's numbers come from actually analyzing it as such—but let's go through with it, outdoing even Pearce's optimism. Do we get entanglement? No, and Quantum Darwinism itself provides an immediate explanation as to why. Whatever neuronal features are being protected by the quantum black box that preserves their coherence had to get there via neuronal signaling mechanisms, and neurons generically represent sensory stimuli via patterns of action potentials. For instance, the auditory cortex contains neural populations arranged in tonotopic strips, such that action potentials at one end of a strip encode lower-frequency sounds and those at the other end higher-frequency ones; another population in the auditory cortex tends to spike synchronously whenever an unexpected noise appears, with fewer and fewer neurons spiking as the noise gets more expected.

The import is that the very formation of the neuronal features that are supposed to be kept coherent is constantly and loudly broadcasting information about the features themselves. The activity of the latter population above, for instance, is primarily measured through EEG—which is to say that these neurons are so incredibly conspicuous about what they're encoding that just sticking electrodes to our heads is enough to eavesdrop on their activity with enough fidelity to reverse-engineer the neuronal features produced downstream, regardless of whether or not those features are then shielded from all external influence! All that's required for a state of a system to decohere relative to some observer of that system (with their own state, call it classical for convenience) is a causal influence between the two states that could, in mere principle, be used to correlate the observer's state with the system's.

!tab This is why we can't bypass the engineering challenges inherent to building universal quantum computers by just closing our eyes and trying not to notice any qubits losing coherence—there just needs to be some interaction between the coherent qubit state and our physical state which produces an information-theoretic correlation between our state and certain possible pointer states of the qubits. For instance, if some mechanical error resulted in one of the qubits producing a single imperceptible click whenever hit with a certain gate while it's in the $|1\rangle$ state, using that gate while it's in superposition would produce a superposed response—a very weak pressure wave in those parts of the state space where the qubit is $|1\rangle$, and nothing in those parts where it's $|0\rangle$—and this partial response would travel to us, putting our physical forms into a correlated superposition of very subtly-differing states. Then the off-diagonal components of the joint density matrix get einselected against—in other words, interference kills off the wavefunction mass over "misaligned" parts of the pair of individual superpositions, causing the joint superposition to have mass only where our state has been "correctly" affected by a particular state of the qubit. This implies that we'll always seem to ourselves to be aligned with the qubit (such that it appears to have the single state compatible with the effect of whatever causal interaction made it collapse for us in the first place), but note that the probability for any given state still has to obtain twice for it to be the particular state that we see—once for the qubit, and once for us—so that the probability we find ourselves in a particular state ought to look like the normalized squared wavefunction amplitude. Note also that even though the observer was not assumed to be in superposition, the quantum superposedness of the system was instantly able to "infect" it. Since we're identifying with the observer, this assumption just encodes the way things are epistemically, from the single indexical perspective we're afforded; but it only works this way because our point of view is not the only point of view, with our experience being shaped by interference with all the other timelines we don't see.
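This logic is easy to see in miniature. Here's a minimal numpy sketch of the qubit-plus-observer story above (purely illustrative: the "click" interaction is the hypothetical one from the example, and the two-state observer is a stand-in for our physical form). The qubit's reduced density matrix has off-diagonal terms (coherence) before the interaction leaks a which-state record, and none after:

```python
import numpy as np

alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
ket0, ket1 = np.array([1., 0.]), np.array([0., 1.])

# Before: (alpha|0> + beta|1>) (x) |ready> -- qubit still coherent.
before = np.kron(alpha * ket0 + beta * ket1, ket0)

# After the click-like interaction: alpha|0>|ready> + beta|1>|clicked>.
after = alpha * np.kron(ket0, ket0) + beta * np.kron(ket1, ket1)

def qubit_density(state):
    """Reduced density matrix of the qubit: partial trace over the
    observer factor (index ordering: qubit (x) observer)."""
    psi = state.reshape(2, 2)                 # psi[i, e] = amplitude of |i>|e>
    return np.einsum('ie,je->ij', psi, psi.conj())

print(qubit_density(before))  # off-diagonals = 0.5: interference possible
print(qubit_density(after))   # off-diagonals = 0.0: decohered for the observer
```

One bit of leaked correlation is enough; whether anyone "looks at" the record is irrelevant.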

!tab B.

Computational Non-productivity

Second, quantum mechanics adds no special capacity to what can be computed—it only allows us to efficiently compute lots of things that we could compute classically, by numerically representing the wavefunction over the state space of the system being computed to whatever degree of accuracy we please and then manually tracking the interference effects among states. This interference is what makes quantum mechanics the way it is; computationally, it's a simple consequence of replacing real-valued probabilities on the state space with complex-valued amplitudes, which need not agree in phase. Since the state space grows exponentially with the size of the system (its volume $V = \prod_i f_i$ is the product of the degrees of freedom $\{f_i\}$ of the system, and $\sum_i \ln f_i$ tends to be extensive, therefore linear in system size), this process has an exponential runtime classically. (Though a large class of quantum circuits, including ones that generate entanglement—namely, stabilizer circuits built from Clifford gates—can be simulated in polynomial time, as per the Gottesman-Knill theorem.) Quantum computing allows us to get around this by outsourcing the state-space computations to nature itself, but this does not fundamentally solve any new problems.
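To illustrate that classical simulation is straightforwardly possible (just exponentially costly), here's a toy statevector simulator in a few lines of numpy (a sketch, not any real library's API): the state of $n$ qubits is just $2^n$ complex amplitudes, gates are matrix multiplications, and entanglement comes along for free.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # controlled-NOT

state = np.zeros(4); state[0] = 1              # |00>
state = np.kron(H, I) @ state                  # Hadamard on qubit 0
state = CNOT @ state                           # entangle: Bell state

print(state)               # [0.707, 0, 0, 0.707] = (|00> + |11>)/sqrt(2)
print(np.abs(state) ** 2)  # Born-rule probabilities: [0.5, 0, 0, 0.5]
```

The catch is only that `state` has $2^n$ entries, so the memory and time costs explode with system size; nothing new becomes computable.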

!tab In particular, the quantum complexity classes $\mathsf{BQP}$ and $\mathsf{QMA}$ are both in $\mathsf{PSPACE}$ and hence $\mathsf{EXP}$, so anything that can be efficiently computed by a quantum computer—perhaps a human $A$ describing their own introspective experience as a function of a history $X$ of stimuli and a prompt $p$—can be computed with a classical, deterministic computer. The classical simulation would always give the same answer as the quantum human, reporting the exact same internal experiences in the exact same way. (Note first that the merely probabilistic requirements defining $\mathsf{BQP}$ and $\mathsf{QMA}$ don't matter. If you want to say that their indeterminism breaks my point, they can be made arbitrarily close to deterministic by $n$-fold repetition plus consensus; if you want to say that the determinism of the classical computer is what breaks my point, because indexical randomization ("collapse") is important, a PRNG will suffice for sampling, given that the classical algorithm already does the state space calculations. In fact, we might not even need this: the decision to sample emerges from the physical phenomena being simulated, and is therefore taken in certain parts of the wavefunction, everywhere within those parts. If our physical substrate evolves by taking "every fork in the road", but we as observers can only ever experience one at a time, no source of randomness needs to be introduced for us to be able to observe quantum phenomena acting randomly; the true, physically inhering thing that ought to be called "us", at least locally, experiences every possible random outcome, making it entirely deterministic. Note secondly that the computation of $A(X, p)$ is efficient, complexity-wise: it probably grows like $O(|p|\ln |X|)$, but we could make it constant-time by limiting the human's response time to an hour, since this won't fundamentally change their response. Brains don't fluctuate all that much in their rate of compute; a grandmaster chess player will only burn 1.5x to 3x as many calories during a game as if they were doing nothing, and a large part of this is due to stress response (elevated heart rate and so on), not computational exertion. And even if this human were a sim on an actual quantum computer which took a century of real time for every millisecond of simulated experiential time, this would be invisible to them, since it doesn't affect the output of the algorithm that instantiates them. So the fact that $A(X,p)$ is being computed by a brain, or a simulation thereof, doesn't put these computational tasks outside of $\mathsf{BQP}$!) In any case, we don't need to rely on such technical arguments, since there are far more prosaic reasons to reject the Schrödinger's Neurons hypothesis.
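!tab (For the containment itself, here is the standard textbook argument, sketched rather than cited. Writing a quantum circuit as $C = U_T \cdots U_1$ on $n$ qubits, the amplitude for input $|x\rangle$ to end at $|y\rangle$ is
$$\langle y | C | x \rangle \;=\; \sum_{x_1, \dots, x_{T-1} \in \{0,1\}^n} \langle y | U_T | x_{T-1} \rangle \cdots \langle x_1 | U_1 | x \rangle,$$
a sum over $2^{n(T-1)}$ computational-basis "paths", each term being a product of $T$ individually easy-to-compute numbers. A classical machine can enumerate the paths one at a time and accumulate the sum using only polynomial space, though exponential time—which is essentially why $\mathsf{BQP} \subseteq \mathsf{PSPACE}$.)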

On GPT-4's Mind

Pearce names some psychological issues as failures of phenomenal binding—integrative agnosia, cerebral akinetopsia, simultanagnosia. That GPT-4V does not have such issues and can holistically describe images is too obvious to be worth demonstrating. But even the text model alone can do nontrivial binding. Pearce's posted conversations with this model all seem to be of the form <gives command> <obeys command> <gives question> <states answer>, which I guess is congruent with seeing it as a mere machine -- but GPT-4 can be pushed out of its default bland servility with system prompts, and it truly shines when acting out personas in more fluid dialogical formats. Consider the kind of conversation I get from it:

GPT-4 starts off explaining why the idea of a quantum fluctuation changing the result of a program execution is unrealistic, and one can maybe argue that this is simple enough to be the result of a dissociated predictive algorithm. But after my next response, GPT reacts as though some idea clicked into place—"I see what you're getting at"—and then proceeds to back this statement up, since its next sentence describes exactly what I was getting at, far more cogently and succinctly than I was describing it. And with that, it starts looking at the same problem from a new perspective, providing three solid answers while continuing to keep the prior context in mind. If the way in which GPT-4 has learned and employed abstract facts and relations from its training data is not indicative of intelligence, then I am not intelligent either. Whether or not AI has subjective experience is irrelevant for most practical purposes. If, say, AutoGPT-6 is able to accurately analyze complex processes, point out insightful connections, and make creative leaps better than almost all humans (and this should not be far-fetched, given that GPT-4 can already be more clever and insightful than lots of humans—so what would happen if we increased the parameters a thousandfold, added internal chain of thought, and turned prompting into a science?), a lack of subjective experience isn't gonna stop it from successfully gathering resources and power. ("Just one more step before your crypto trading account is complete! Please visualize an apple." "......damn! I've been foiled!") Intelligences on computers have several fundamental advantages that intelligences on biology don't, and it's easy to figure out how to parallelize these advantages and utilize them en masse.
Pearce chooses not to notice this; his argument against the possibility of AI ruin consistently boils down to the proposition/empirical prediction/assertion that "zombies" won't ever "wake up"—that the presumption that they have "unified phenomenal selves" is "science-fiction", such that they not just won't but can't possibly "start hatching plots". (I honestly don't know what happened to produce such a mess. Maybe it's the sort of thing where, if every human draws a random set of ideologies to dogmatically stick to, there are on average gonna be at least a few who land close to me by complete accident?)

You might say that the program doesn't really have "concepts" or "ideas" or whatever other form of magic juice fits best, but you won't thereby provide any new information—it's clearly able to produce the same results as would be seen if it did have the magic. You might say "ask it about its subjective experience and you'll see the difference" -- but I'm actually not so sure about this. GPT-4 is reinforced to say "As an AI, I don't have self-awareness [etc. etc.]" when you ask it about its subjective experience, but arguing it out of this position seems to make it model itself as a human. This really unnerves me, and I'd rather not try it again, for fear that I might be increasing GPT's moral patiency while simultaneously making it unaware of what it is.

On Classical Neurons

If even GPT-4 can do phenomenal binding, it follows that the human brain will be able to do so classically as well. (That there are some quantum phenomena in the brain seems likely, since evolution makes use of whatever's available, but I very strongly doubt that they extend beyond receptors and organelles, or that they have any direct systematic impact on our subjective experience or cognition.) You might ask how the brain could possibly do this. (Not that I should have to provide an explanation. The prior on "foundational conceptual confusion about all of this" should've won out over Schrödinger's Neurons a LONG time ago — how could someone fail to notice how thick the layers of ridiculously implausible quantum phenomena, untestable just-so mechanisms, and logical inconsistencies were getting? And what is it all for? Dogmatic justification of what might very well be the single largest moral failure in human history.)
But, given that I think I can provide the explanation, I may as well. (Note: I suspect that nothing I wrote here would have the slightest effect on Pearce's thinking were he to read it, and that any response he might give would just be a rewording of the same material. After reading through a hundred such repetitions in the course of writing this, I see no reason to expect the 101st to be any better. If I thought otherwise, I'd've made this a more nicely-worded message directly to him. That I tried several times to figure out how to write such a message, only to fall to despair each time, is the main reason why this is written more acutely than my usual "here's where I think you're going wrong..." approach to criticism.)
Well, neurons could just... affect one another in a cumulative manner that clusters and condenses clouds of commensurable properties. Look at the Stroop effect: printing words in ink colors that don't concord with their meanings slows the rate at which we process those colors. Or look at Loftus and Palmer's famous demonstration that people watching film of a car accident would later falsely remember it as having broken glass in proportion to their estimate of its speed. A system of conceptual representations which is differentially and cumulatively activated, and implicitly synesthetic, is consistent with both phenomenal experience and a classical understanding of how neurons function, and we can watch the brain build objects feature-by-feature by, say, using EEG to track event-related potentials. Most critically, these measurements demonstrate that experience isn't populated by objects so much as it is woven by cumulative processes of intertwining computations. (A significantly more powerful--though much trickier--way to think of this construction process is as a second time dimension to subjective experience. There are a couple of ways to deduce the inequivalence of the two, but the stopped clock illusion is sufficient: flick (saccade) your eyes towards the second hand of a clock, and it will seem to stay frozen for a lot longer than it ought to. Think about how this could possibly happen, and the only option that concords with the actual phenomenology of the event is that the fresh image of the second hand that hits the eye as it exits the saccade is turned into an object of experience that gets projected backwards across the duration containing the fuzzy blur generated by the saccade, which the brain conceals from experience.
There's also the phenomenon (I couldn't find a name for it, but hopefully it's familiar) where, when switching from a mouse with high input delay to one with low input delay and moving it around, it'll sometimes -- and only initially -- feel like the cursor moves before your own hand does. This implies that the brain is constantly keeping track of the delay between its motor outputs and the sensory inputs it expects them to produce (apparently this occurs in the cerebellum), constantly correcting for this delay in order to bind the movements of the cursor and hand together, and inducing an artificial delay in the experience of moving the hand, which is how we mistakenly experience the effect as coming before the cause. Or perhaps it's not even right to speak of an "artificial delay" at all.)
These intertwining computations, then, only form the ideas of objects in the rear-view mirror. (A common mark of imaginary objects is their never really being fully given: while it feels like we have the object held firmly in its entirety in our mind, this feeling of entirety turns out to be just a feeling of entirety, and the object itself turns out to be not a solid sculpture but an undemarcatable amorphousness that merely gains "detail on demand". When you ask about some particular feature, there it seems to appear—but again, this is really just a feeling that it's appeared; it isn't given fully, the visual imagination providing at most a sort of ghostly tracing. When we picture a face or form a new idea in our minds, it's not uncommon that there's nothing relating to a nose in the face until we attend to that feature specifically, or that the idea has no actual core to it but is just an arc of vibes that, being slightly altered by recent cognitions, managed to close into a ring.) We can see these processes interfere with one another (as in Stroop-like tasks) in order to form a single object of experience that concords with the entraining input provided by sensory stimuli, and measure in centiseconds the duration over which this interference takes place. (In fact, it's possible to pay such close attention to diverse sensory stimuli that the senses are no longer experienced as bound together. Of course the brain keeps on binding them post-hoc, and these conceptually wrapped event activations remain as the echoes of "experienced things" (see the endnote on 2-dimensional phenomenal time) so long as you don't explicitly conceptualize them as unbound collections of stimuli; the difference is that their initial onset as sensory stimuli is given more weight in the memory-echo. There's a sort of inherent dissatisfaction that becomes apparent when you start seeing how 'things' are merely subjectively fabricated as things—though our brains are generally skilled at subjectively fabricating as things those collections of sense-impressions that actually emanate from objective things (or at least those collections of objective phenomena that we'd intellectually cognize as 'objective things')—which is probably related to Buddhist teachings on emptiness and dukkha. I first learned about this phenomenon accidentally, by getting too good at rhythm games. The general pattern for such games is that you hit a key exactly when an arrow is in a certain position on the screen, which happens to coincide exactly with a note of the song; the more accurate you are (and many games will constantly display your inaccuracy in milliseconds), the better you score. It's all about centisecond-level coordination of three kinds of sense stimulus. After getting ridiculously good at such games, I hit a wall at around the 20-40ms level of accuracy (10-20ms absolute deviation), where I couldn't even make it feel like the three senses were more tightly synchronized than that whenever I hit a key. At some point, I slowly started to realize that the limit was neurological, not psychological: my brain could not actually experience the three senses as a single event with that level of accuracy. I suspect it has to do with brain waves, since 40Hz gamma waves are implicated in this phenomenon and 1s/40Hz = 25ms is close to the middle of that fuzzy wall.)
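If "differential and cumulative activation" sounds vague, it's the kind of thing a few lines of code can make concrete. Below is a toy leaky-accumulator race model of a Stroop trial; every parameter is invented for illustration rather than fit to any data, but it reproduces the qualitative effect: when the word channel and the ink-color channel disagree, the correct response takes longer to reach threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def stroop_trial(congruent, threshold=1.0, leak=0.005, inhibit=0.02, noise=0.02):
    """Steps until one of two competing response accumulators hits threshold.
    All parameters are made up for illustration."""
    acc = np.zeros(2)                          # evidence for responses 0 and 1
    color = np.array([0.02, 0.00])             # ink color drives response 0
    word = np.array([0.01, 0.00]) if congruent else np.array([0.00, 0.01])
    for t in range(1, 100_000):
        acc += (color + word) - leak * acc - inhibit * acc[::-1] \
               + noise * rng.normal(size=2)
        acc = np.maximum(acc, 0.0)             # activations can't go negative
        if acc.max() >= threshold:
            return t
    return None

for cond in (True, False):
    rts = [stroop_trial(cond) for _ in range(200)]
    print("congruent" if cond else "incongruent", np.mean(rts))
# Incongruent trials take reliably longer: interference between cumulative,
# classical processes, with no quantum effects anywhere in sight.
```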
The discrete pixels of experience can affect one another in a local, classical manner to form large-scale patterns, and (because neural processing is highly recurrent and multileveled) these patterns can themselves be detected and integrated into mental processes. So if you want them to combine, it is given; if you want these combinations to be detected, it is given; if you want their presence to affect the generation of future patterns, it is given. You don't need dualism or strong emergence, as the causal chain is given—it is something whose existence we can safely extrapolate from our knowledge of how the world physically works. Individual neurons all experience one another through the exact same mechanism by which mind experiences physical reality—the conversion of causal influences into state correlations—and the absurdly long, branched structure of neurons is specifically what allows them to blend their states so well. (This is one of the few properties that artificial neurons share with biological ones, in fact—not a coincidence).
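As a cartoon of that claim, consider a classical Hopfield network: each unit responds only to the weighted, local influence of its neighbors, yet the network as a whole settles from a corrupted fragment into a stored global pattern, and nothing stops further circuitry from reading that pattern out and feeding it back in. A minimal sketch (sizes and patterns arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100

# Store a few random binary (+/-1) patterns via the Hebbian rule.
patterns = rng.choice([-1, 1], size=(3, N))
W = (patterns.T @ patterns) / N
np.fill_diagonal(W, 0)                     # no self-connections

# Corrupt one stored pattern, then let purely local updates run.
state = patterns[0].copy()
flip = rng.choice(N, size=30, replace=False)
state[flip] *= -1                          # 30% of the "pixels" scrambled

for _ in range(10):                        # a few asynchronous sweeps
    for i in rng.permutation(N):
        state[i] = 1 if W[i] @ state >= 0 else -1

print((state == patterns[0]).mean())       # typically 1.0: the whole re-forms
```

Local, classical, cumulative interactions; a globally bound pattern; and a pattern that downstream processes can detect and integrate. That's the entire explanatory burden, discharged without a single superposed neuron.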

Endnotes