Discovering The World Beyond Words
(Published Oct. 9, 2022 by Mariven)
A Conceptual Lens
As we go about life, we interpret reality through concepts. We interpret our real selves through concepts, interpreting this very interpretation through the concepts of “interpretation” and “concept”. Thus is objective truth filtered through a conceptual lens before it reaches usOf course, this holds for logical truths as well, as will be explored further down this page..
What do we see when we focus our minds beyond this lens? We see that it systematically deludes us in our analysis of reality, which is not itself made of whatever concepts we happen to see it through. This is the subject of this article.
What do we see when we focus our minds on the lens itself? What do we see when we manage to remove the lens? That'll be addressed in the future.
Noticing Ambiguity
A. 'Mu' as Vacuity
The Japanese character 無, “mu”, is used to indicate that something fails to be -- Wiktionary gives the definitions “nothing, nothingness, null, nil, not”. In the Buddhist tradition, though, it carries two extra meanings. One is a characteristic of the formal condition of mind, which will concern us later; the meaning that concerns us now interprets 無 as an answer to a question, roughly analogous to “N/A”. To give this answer is to say that the question is so fundamentally confused that no straight answer could possibly be a true one. It has a connotation of _absence_, and often indicates that the question wrongly reifies some actually-absent concept, rests on an implicit assumption about an actually-absent state of something, or so on. Thus, it might be used in any of the following situations:
-
If I’m asked “is the present king of France bald?”, I could answer 無. Answering yes or no would seem to accept the existence of a present king of France, when such a person is actually absent
From the standpoint of formal logic, the situation is ambiguous. It is (vacuously) _true_ that !c{every person who is the present king of France is bald}{$\forall x(pkF(x) \Rightarrow bald(x))$}, but it is _false_ that !c{there is a person who is the pkF and is bald}{$\exists x(pkF(x)\wedge bald(x))$}. Linguistic utterances don’t immediately and faithfully convert into logical propositions, so such antinomies are to be expected; a small sketch of this contrast follows the list below..
-
If I chug a liter of coffee at night, and, upon having trouble sleeping four hours later despite feeling calm, ask myself “what am I worried about that’s keeping me up?”, I ought to think 無 given that it’s not any _worry_ that’s keeping me up — the hypothetical worry is absent — but all the coffee I drank.
-
If I’m asked “did you know that Jesus loves you?”, I could !c{answer 無}{Mentally, at least!}: since I’m not Christian and therefore don’t believe that a Jesus capable of loving me !c{exists}{he’s _at best_ a pile of bones}, I can’t truthfully answer affirmatively, whereas answering negatively would wrongly imply that I’m just learning that Jesus loves me.
-
Sometimes it can depend pretty finely on the wording of the question. If I’m asked “do mice go to heaven?”, I can comfortably reply “no”, but if I’m asked “does _every_ mouse go to heaven?”, replying “no” would seem to indicate that I believe that some mice go to heaven and others to hell, so I’d prefer to answer 無.
-
This pair of contrasting examples reveals perhaps most clearly the essential subjectivity of the thing: I’m comfortable saying that mice don’t go to heaven because I don’t believe in a heaven (for mice, at least), yet by this rationale I could say that the present king of France is _not_ bald because there is no hairstyle that the present king of France has, there being no such king. The difference that causes me to reject the latter question is the greater strength of the _implication of existence_ that I perceive a yes-or-no answer to it to carry; this is why I also reject the question of whether every mouse goes to heaven.
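For the logical footnote in the first example, here is a minimal sketch in Python; the little collection of people and its fields are purely hypothetical, and the only point is that the universal reading comes out vacuously true while the existential one comes out false.

```python
# A minimal sketch (hypothetical data) of the first example's footnote: when
# nobody is the present king of France, the universal claim is vacuously true
# while the existential claim is false.
people = [
    {"name": "Alice", "is_pkF": False, "bald": False},
    {"name": "Bob",   "is_pkF": False, "bald": True},
]

# "Every person who is the pkF is bald": pkF(x) => bald(x) for every x.
universal = all((not p["is_pkF"]) or p["bald"] for p in people)

# "There is a person who is the pkF and is bald": some x with pkF(x) and bald(x).
existential = any(p["is_pkF"] and p["bald"] for p in people)

print(universal, existential)  # True False
```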
B. 'Mu' as Ambiguity
There is a softer way in which one can take ‘mu’ — not as indicating a fundamental vacuity of the question but instead a critical ambiguity within it, some vulnerability which tends to cause answers to it to end up miscommunicating something. Let’s write this use as µ (the Greek letter mu).
-
If you ask me whether cats are better than dogs, I may reply µ, as you could mean a bunch of things by “better” — cats make better pets in my opinion, but dogs are better morally — and if I were to automatically slot in my own meaning for the word before giving a yes-or-no answer, I’d certainly fail to communicate what I mean by my answer to you, and possibly cause a protracted conflict by introducing an unspoken discrepancy in our uses of a specific word.
-
If you ask me whether acts that don’t harm anyone can nevertheless be immoral, I may reply µµµµµµµµµ — the number of ambiguities in that question is inconceivably large.
!tab
For instance, take a murder attempt that fails silently, e.g. suppose you tried to poison someone’s lemonade, but the poison you bought from the dark web was really just a tiny vial of carbonated water.
-
If I think of morality as something that can be ascribed to actions independent of the people who perform them, then this does not seem immoral, since absolutely nothing happened except for some lemonade being negligibly dilutedAnd, hey, maybe it was too strong to begin with, so that you’re actually doing something beneficial; maybe the carbon adds a nice kick. Could it be immoral to make good lemonade?.
-
If instead I think of morality as something that must be ascribed to the act insofar as it was performed by a particular actor, then this does seem immoral, as you were doing it with the intent to commit murder.
This is how it intuitively appears to me at the time of writing, and I won’t pretend that my moral intuitions are justified or static, but the fact that such a small change in the meaning of the sentence can flip my answer demonstrates the danger here: people do seem to differ on which of the above ways they interpret “immoral act”It's a non-obvious change too, lying not in any particular word but in the _binding_ of the word ‘immoral’ to the word ‘acts’. Of course, there are more common yet subtle differential interpretations of "immoral act". If you're feeling devious, you can bake them into questions like thumbtacks into raisin bread to create seemingly-irresolvable arguments., but they generally don’t have the capacity to see how they interpret it or to realize that it may be the cause of their problems in a given disagreement about morality; thus, they deadlock.
!tab
And this is ignoring all the different ambiguities, such as whether ‘harm’ means direct harm to particular people, or includes the sort of harm indirectly caused by e.g. increasing your future likelihood of placing a burden on family members or the collective welfare.
C. Concept and Definition
The same ambiguity is present in most questions concerning many particular concepts, ranging from the highly abstract (“sentience”) to the apparently obvious (“knowledge”): our individual conceptions of them are highly detailed in all sorts of ways not immediately present to us, as are the means by which they’re activated in certain contexts via communication or cognition more generally, and these details color our judgements in ways that we do not directly perceive but only invent ex post facto justifications for
*THIS IS TO BE EXPECTED FROM FIRST PRINCIPLES*. Given that the brain is an incomprehensibly complex product of stochastically driven natural selection selecting for evolutionary fitness above all else, we can expect no safe harbor in our conceptions of its function — the reality of biology simply does not limit itself to the patterns we happen to cognize, being infinitely more complex and subtle. In particular, one must expect that whatever part of the reality of the brain is indeed responsible for the things we call concepts does not simply carve “concepts” out as a natural kind standing on their own and merely “importing” other functions from the brain, but rather sort of has them indirectly through many other complexes of phenomena, some of which may vaguely look like scattered components of the faculty of conception. Every biological abstraction is like this, ontological spaghetti with an irrational Hausdorff dimension. The _entire history_ of the study of biology, especially evolutionary and molecular biology, is people impaling themselves on this bitter lesson _over and over_ without _ever actually adapting their conceptive frameworks to it_.
!tab
There is _some_ kindness to biology, given for instance by the modularity common to evolved systemsSee Alon’s actually-amazing book _Systems Biology_, esp. the final chapter, but it is never complete, being scarred in the battle between optimization for particular function (whence non-modularity) and optimization for general ease of construction, maintenance, and design storage/transmission/transclusion (whence modularity). The hands are _not exactly_ copies of the feet, for instance, as the former had to adapt to their particular function of fine manipulation while the latter had to adapt to their particular function of weight-bearing
The identicality (modulo reflection) of the left and right hands is a good example of a victory for modularity; handedness can largely be relegated to a neural difference not concerning the design of the hands themselves..
!tab
Another example: our brains are capable of learning how to control foreign implements as though they were our own limbs. For instance, when I play a video game I’ve played for a long time, I don’t think about how I should manipulate the _controller_ with my fingers to make the character do what I want, and I’m generally not even conscious of the controller unless something weird happens with it. I just will the character to do something, and my muscles interface with the controller in the way that makes that happenI have a running hypothesis that degradation of this skill, rather than sociocultural change, is actually the main reason that older people have more trouble adapting to new technologies; when the instinctive motor learning doesn't kick in, it becomes a frustrating cognitive task to map goal to input to movement (have you ever tried to trim your hair with scissors in a mirror?)..
!tab
Given the existence of this incredibly powerful faculty, one might conjecture that we use it to learn how to operate our limbs shortly after birth, rather than knowing how to use them right out of the box. Perhaps it’s a form of facilitated variation, with the neural mechanisms responsible for this faculty having evolved a very long time ago, early in the history of chordates, and made it easier for them to adapt to all sorts of niches — flying with wings, running with two large legs or four small legs, swimming with flippers or fins; if such adaptation was otherwise particularly difficult for evolution, starting off with the general faculty would’ve vastly increased the fitness of chordates in general.
!tab
However, supposing that this really is the case, we should expect that for any given genus/family/etc. defining a given body form, it would be advantageous in the short term to adapt this general learning system to that particular form so its members can get a head start on flying and hopping. Hence, there’s a conflict between global and local optimization trends that in practice produces a fractal interplay between modularity and non-modularity.
.
In other words, every concept can be seen as having a _liquid aspect_ through which it diffuses into and mixes with other concepts, cognitive processes, patterns of sense-phenomena, and all other kinds of mental phenomena; the liquid _refracts and blurs_ our image of both the concept itself and any given employment of it, so as to alter our cognition in an essentially uncontrolled way that is for the most part invisible to us except, if we’re careful observers, as vague changes in the “tint” and “warping” of the way a concept is made to appear through its employmentIn other words, we look at how the mind autonomously shapes the concept as it is used, trying not to get in the way. (It's difficult to get the hang of).. This conceptive liquidity
As explained elsewhere, it’s confusing to use the adjective “conceptual” to mean “existing as or within concepts” _and_ “pertaining to concept formation, development, and usage”, so I use the word conceptive for the latter (comporting with its established meaning). For instance, a conceptual pattern is a pattern found in the content of various concepts, such as a common reference point which they’re thought in relation to, while a conceptive pattern is a pattern found in the structure or development of concepts, such as a common habit of branching out from visualizations. Both of these are different from a concept of a pattern, such as seasonal temperature change.
!tab
In any case, I mean to contrast the phenomenon of conceptive liquidity — the tendency of concepts to have a liquid aspect — with that of conceptive solidity, which roughly consists in a network of phenomenologically clear signs guiding the sorts of recognitions and applications of the concept I am consciously aware of. The solid in e.g. the concept of a dog would include among its many mental images things like “wagging its tail when happy”, “shedding hair”, “putting its paw in your hand when you reach out to it”, “having a wide range of different breeds”, “chihuahuas are especially unpleasant ones”, and so on, for each of these is a particular image that can consciously come to mind when I’m induced to think of dogs for whatever reason. (The last example is meant to steer you away from the impression that the solid component is a collection of objective facts). It’s obviously harder to catalogue the liquid, but it includes among its ranks things like the way I preconsciously avoid getting too close to dogs because of a vague image of unpleasantness-caused-by-shedding-hairs-caused-by-contactI have a slight allergy, such that _hugging_ a dog would probably make me a bit itchy and irritated; shaking a dog's hand or walking next to one wouldn't, but unless I'm consciously approaching a dog, my mind will nevertheless project an unpleasant-itchy-field around the dog which it steers to avoid..
!tab
Note that this contrast isn’t a clean split, but instead a continuum, so that ontologically speaking we’d do better to say “this is _more_ solid, _more_ liquid, _less_ solid, _less_ liquid” than “this _is_ solid, this _is_ liquid”. But it’s just as when someone contrasts sharp knives with dull knives despite sharpness being a continuum: the categorical phrase “sharp knives will do this well” is easily substituted for the continuous phrases “knives, insofar as they are sharp, will do this well” or “the sharpness of a knife enhances its facility at this”.
is the cause of a significant portion of internal mental confusions and external communicative breakdowns.
Thus, if I'm being honest with myself, I ought to say:
- How do I know what I know? µ.
- Are numbers real? µ.
- Is that which is true _necessarily_ true? µ.
- What is the nature of intelligence? µ.
- What are the conditions required for a system to be sentient? µ.
In each of these cases, I can’t convince myself that I don’t actually have multiple determinations (or, in a geometric manner of speaking, a broad space) of the essential concepts involved in the case in a manner that would lead me to different answers based on which !c{determination}{or mixture thereof, as sometimes happens} happened to come to me by chance. For instance, it matters significantly whether I consider the question “are numbers real?” with a conception of reality in which thoughts are real and so are particles, or with one in which particles are real and thoughts are not; I can only extrapolate via induction the existence of a whole ecosystem of partial determinations of the concept of reality, each of which affects my analysis of the question in some possibly important way yet only a few of which I can actually seeArguments over Platonism generally come from a failure to understand that we each have _slightly different spaces of conceptions_ of ‘real’, ‘number’, and so on, and we therefore get different answers when honestly applying them to answer a question like “are numbers real?”. If everyone is given a function $f$, and mine is $f(x) = x^2+1$ while yours is $f(x) = \left(x+\frac1x\right)^2$, it’s ridiculous for us to debate whether $f(1)$ is $2$ or $4$ or to debate whether $f(0)$ is $1$ or inconceivable/incoherent/nonexistent.. So what can I answer to “are numbers real?”, or any of these other questions, but “µ”?
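Spelling out the arithmetic already implicit in that footnote’s example (nothing here beyond the two functions it names): $f_{\text{mine}}(1) = 1^2 + 1 = 2$ while $f_{\text{yours}}(1) = (1 + \tfrac{1}{1})^2 = 4$, and $f_{\text{mine}}(0) = 0^2 + 1 = 1$ while $f_{\text{yours}}(0) = (0 + \tfrac{1}{0})^2$ has no value at all. Same name, different determinations; the apparent disagreement over $f(1)$ or $f(0)$ is not a disagreement about any one function.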
We might call this the µ-problem: we cannot generally be sure that our concepts are so-determined as to allow for particular answers to any of the questions those concepts allow us to ask. To be precise, we might call this particular form of it the internal µ-problem, contrasting it with the problem of discrepancies in conceptions among different _people_, the external µ-problem.
Don’t think that you can escape this by assigning definitions to the words you use to represent certain concepts in order to reason about them in a logical, propositional manner as is done in mathematics. For:
-
Whence the definition? Originally, it must be a rigid descriptor of the solid content of some pre-existing concept, in which case it is shaped by the concept rather than shaping the concept. It must match the concept in some sense if it is to be a definition _of_ the concept, and yet if it is to be useful it must allow us to alter, expand, divide, revise the very concept it is a definition of as we reason about the concept through it — in other words, it must necessarily engage in some sort of conceptive dialectical process.
!tab
This tends to be how mathematical concepts are “domesticated” via their attaching to definitions, with the mathematician (a) sketching out a series of formal desiderata given in terms of already-domesticated mathematical concepts, which desiderata are to be satisfied by an object to which the new concept applies, (b) playing with various definitions of the new concept until they find one that kinda works, and then (c) taking this concept-definition pair through the wringer via the creation of various propositions and proofs until the definition seems to correspond to the concept, applying as a single unit to the mathematician’s further conceptions
For instance, suppose I’m doing some topological work, trying to compute some numbers associated to some spaces; I have a brute-force way to calculate them, but it's a lot of tedious grinding, so I look for shortcuts wherever I can find them - maybe I'll notice that for some spaces two different parts of the calculation will cancel each other out and therefore don't have to be worked out explicitly. After a while, my mind will start to group these shortcuts into patterns and start to see certain patterns in spaces which indicate that I can probably use a certain kind of shortcut. The grouping itself will generally be unconscious, though I may very well consciously think “ok, so this is one of _those_ kinds of spaces”, tying in the “those” with the almost-entirely liquid concept my mind has formed of a particular pattern of space to which a particular pattern of shortcut may be applied. (It is the liquid concept formation, the grouping, which is unconscious).
!tab
If in one particular case the pattern ends up presenting itself very clearly, such that I end up explicitly noting what it is about a certain space that allows me to take a shortcut, and what exactly that shortcut consists in, I may have a feeling of sudden realization as one of the liquid concepts my mind’s been accumulating suddenly gains a solid core. At this point, I may make a definition which attempts to solidify a much greater region of that concept: “A space has X property when for every point p we can find a corresponding point q such that the proposition $\varphi(p, q)$ holds”. I’d follow this up with the essential point of the definition, i.e. the shortcut (call it S) in the computational process (P) for the number (N) of the spaces I’ve been trying to compute, making this into a proposition as: “For every X space, applying shortcut S to traditional process P still yields the correct number N”. In practice, I may not even name the property, just keep referring to the space as one of _those_ spaces, but with a much more solid _those_ in mind; for clarity, though, let’s call it X-ness.
!tab
I’ll then go back through some of the previous computations I’ve been doing, looking for other spaces which were X and seeing what kinds of computations I did for them. If indeed I did use S in every case where the space was X, or at least I could've used it, I’d start trying to prove the proposition in full generality. Usually, though, whether it’s when I’m searching through previous computations or trying to find a general proof, I’ll find some case where the number N of an X space can’t be found through S — every time this happens, I’ll be forced to redefine the mathematical objects X and S, and will do so by trying first to conceptualize what went wrong, what in my concept is inapplicable: does it essentially fail here because the space “has singularities” or is “anisotropic” or “isn’t smooth” or so on, or because the space necessitates a sort of “mirror image” of the shortcut or an “application of it to multiple different components and combination of the results”? Once I’ve gotten some sort of conceptual grasp, I’ll try to come up with some new formal definitions, usually by making X more specific or by making S more general, and continue with either the checking of previous examples or the general proof.
!tab
This will continue to happen, with the mathematical definition being a sort of interface through which the concept interacts with and is altered by objective truth; eventually, hopefully, I’ll end up at a formal definition together with a provable proposition which intuitively concord with a revised concept of the property and shortcut, this being one end of the dialectic. (It may continue later, if I find some useful generalization of the property in which a weaker version of the shortcut, maybe a sort of ‘essential movement’, can be applied, or if I find some useful additional specification in which a stronger version of the shortcut, which maybe simplifies some of the more convoluted steps, can be applied. No doubt you’ve encountered such ‘essential movements’ before, if you do anything rigorous: if you do computers, think of the essential step in a dynamic programming algorithm, “the X of this is the extremum of the Xes of these smaller parts combined with this enlargement term” (a small sketch of this movement appears just before the next section); if you do math, think of the essential step in the proof of the uncountability of the reals, namely the postulation of a hypothetical enumeration and construction of a new non-enumerated real via the diagonal argument; if you don’t do either of these, you better get started).
.
!tab
How could a dialectical process domesticate non-mathematical concepts? I propose that it must link to some complete, unchanging, and logically infinite form outside of the mind (for a domestication to be possible even when we don’t know where it’ll end up), and that the only such form is to be found in mathematics. First, let’s look more closely at mathematical conceptions.
-
Mathematics is only able to domesticate concepts such as “space” by seeing which of the suggestions that cognition generates through the concept are supported by formal analysis, modifying both the concept and the cognitive patterns in response to the results of this reasoning so as to learn the form of and select against unsupported suggestions
There _is_ a logically coherent mathematical object which can be reached through mere conceptive cognition, just as there are physically coherent real objects which can be known through conceptive cognition. That cognition is conceptive does not _entirely_ cut it off from external worlds; I call the necessity of accounting for this the transcendental problem.
. In this way, the concept becomes amenable to formal analysis. It’s the formal aspect of the analysis — the objectivity of the logical movements made — which is the driving force behind this domestication; while it’s true that the vast majority of mathematicians don’t usually reason in pure formality, they reason in a sense which tends to be _very close_ to formalization and thus good at domestication nevertheless.
!tab
However, a mental concept which we attempt to capture via definitions does not by virtue of having definitions automatically _conform_ to them; it’s still lying in its feral state within our minds. Unless a significant amount of time has been spent agonizing over said formal definitions, it’s unlikely they actually have domesticated anything — and, in fact, nobody _really_ tries for full domestication. If all I think when I think ‘topological space’ is ‘set equipped with collection of specified subsets…’, if I do not picture any sort of blobbiness, extension, cohesion, etc., the concept has become _useless_ for me, as all I can do with it is analyze it through propositional logic, and the definition in itself suggests nothing for me to do with it. Productive mathematical thought happens at the _intersection_ of definition and concept, so the internal µ remains, bringing with it all sorts of invisible effects on our cognition.
-
Even if you _do_ manage to formalize your concepts to such an extent as to be able to control the effects of the generally-invisible details that condition your cognition of it, there’s no reason to expect that you'll be legible to anyone who has not gone through the same tedious process. So even if you remove the µ for yourself (the internal µ), you have not necessarily removed it for the people you may wish to converse about the concept with, and therefore have not removed the µ standing between you and others (the external µ).
This is a problem that we cannot simply throw off — nor is it a problem that we _ought_ to throw off, for it’s part of what allows our thoughts to be productive.
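Before moving on: for the dynamic-programming ‘essential movement’ quoted in the aside above, here is a minimal sketch, using the standard rod-cutting recurrence with made-up prices; it illustrates the movement only, and is not anything from the topological story itself.

```python
# A minimal sketch of the "essential movement" of dynamic programming quoted
# above: the X of a thing is the extremum of the Xes of its smaller parts,
# combined with an enlargement term. Here X is revenue, the smaller parts are
# shorter rods, and the enlargement term is the price of the piece cut off.
from functools import lru_cache

prices = {1: 1, 2: 5, 3: 8, 4: 9}  # hypothetical prices for pieces of each length

@lru_cache(maxsize=None)
def best_revenue(n: int) -> int:
    """Best revenue obtainable from a rod of length n (classic rod cutting)."""
    if n == 0:
        return 0
    # extremum (max) over the smaller parts, plus the enlargement term prices[i]
    return max(prices[i] + best_revenue(n - i) for i in prices if i <= n)

print(best_revenue(4))  # 10: cut into two pieces of length 2 (5 + 5)
```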
The Non-Kind World
Reason tends to assume that anything it can ask it can find an answer to, and that this answer is simple in structure: if the question may be _interpreted_ as bivalent, having either one answer or another, Reason postulates that it _is_ bivalent — this failing, it assumes that the situation is such that one of the two answers may be selected as “close enough”. If there are many possible answers, it tends to assume that one among these can be selected as the “real” answer — if it is somehow decisively shown that no perfect answer exists, it tends to assume that it can get one “real closest approximation”. If there are many possible perspectives or definitions each of which yield their own answer, it tries to pick one perspective as the “canonical” one, thereby obtaining one answer. One, always one. It assumes that words have definite meanings, that statements have definite logical forms, that things in general can be made systematic and boiled down to single, clear answers. Even those who consciously deny this manage to assume it at every opportunity, because they don’t consciously realize — or, at least, care — that it takes deep mental alteration to gain even the slightest hope of defying the natural illusions to which the mind patterns its cognitions.
“Whose fault was the plane crash?” — why do you assume that there was a _who_ whose fault it was? Do you not see that, in reality, it happened because of a multitude of happenstance events, a plurality of essentially random coincidences, a series of ordinary reactions, a chain of simple misunderstandings, all combining in some boringly predictable way to cause the accident? The reality of the situation is right there, and _it is no more than it exactly is_. If there is an agent to whom we can attribute an _exceptionality_ of behavior, such as the pilot who, uncoerced and unprovoked, axes his copilot and points the plane straight down, then we’re in one of the rare cases where we can _reasonably_ pin the fault on one single person.
More commonly, there are multiple agents with less exceptionality — the repairman who, hungry, forgets to double-check a few bolts in an engine because he’d like to finish up quickly to get lunch; the inspector who, having slept poorly enough to earn a crick in his neck, fails to look up high enough and see the loose bolts hanging; the new pilot who, interpreting the ergonomically induced tension he feels as anxiety, is nudged into panic by the loss of the left engine even though he trained for recovery even when both engines go down, his mind racing right over the procedure his instructor failed to properly drill into him; the air traffic controller who, hardly being able to make out the pilot’s panicked, stuttering voice over a poorly engineered tin can radio, mishears a couple words so as to misunderstand the situation and give directions to an airfield where local winds prevent the pilot from approaching correctly with just the right engine working.
Maybe you want to blame one person, saying somehow they were “essentially” the one at fault. Obviously, this is wrong, but I doubt that’ll stop you. Maybe you’ll be a bit more cautious and say here that the blame is shared by multiple specific people, but of course the number of people depends on how wide a net of exceptionality you cast, there being no canonical size to this. Furthermore, the more details we consider in judging the exceptionality of each person’s behavior, the less exceptional it ends up being! The more powerfully you condition on someone’s psychology and situation, the less surprising their actions will be; the more powerfully you condition on someone’s upbringing and genetic tendencies, the less surprising their psychology will be; and so on. Don’t bother creating some ad-hoc counterfactuality condition that, as you fail to see, just spreads the ambiguity around so you can’t see it as easily. _There is nothing more to the situation than what it exactly is_, and there is no notion of fault inherent to the bare reality in which this situation obtains! It’s an illusion which sticks to reality through the transcendental equivalent of static electricity.
“What is the function of the prefrontal cortex?” — there is no exact function. You can approximate, and if you’re lucky this approximation will give you decent answers to a significant range of questions concerning its behavior or connectivity, but you must expect reality to defy your conceptual approximations when you look closely enough, because reality is not _built_ according to your concepts, only thereby _interpreted_. Of course, there is no real thing that is ‘the’ prefrontal cortex either — can you exhibit it? No, you can only point to a part of any given real person’s brain, and you will have pointed at something which is unique in structure and connectivity (as brains are) and therefore slightly different in function from the “corresponding” part of any other real person’s brain.
“Is this supplement healthy?” — why do you assume it either will be or won’t be healthy? I’ll leave out the narrative here to skip to the conclusion: _it will make the alteration to you that it specifically will make to you specifically, and no more_. To the extent that you can know this alteration and carve out some part of it which you can coherently assimilate under your image of health, that’s a bonus — there is no notion of ‘healthy’ inherent to the reality in which the, say, plant underlying the supplement is produced, so you don’t get a “yes” or “no” when you ask whether it’s healthy, but another ontological fractal. This holds even when the supplement is designed to be ‘healthy’. Not that I avoid supplements on this basis — there is some transcendence. The _formal condition_ of the question, when a precise and entire answer is demanded, is illegibility, but humans manage to make do for approximate and limited purposes.
The ubiquity of these indefinable fractal interplays hiding beneath the shallow words we wrongly assume are supported by definable categories is what I mean when I speak of the “non-kindness” of the world. It’s an overloaded phrase, ‘kind’ here meaning both type/class/category and nice/pleasant/easygoing. In general, reason assumes that it is living in an epistemically _kind_ world — or a world where it can neatly, coherently classify things according to their kind. This is false. The world is epistemically cruel, or _non-kind_, such that any assignment of things to kinds is always loose at the seams, always partially incoherent. When we do not realize this, we set ourselves up for failure as we repeatedly aim for a perfect answer that just does not exist.
There is no guarantee...
- That a question should have a definite answer.
- That a word should have a definite meaning.
- That a meaning should have a definite encoding in words.
- That an utterance can be made unambiguous and clear.
- That a concept can be made unambiguous and clear.
- That a concept can veridically capture the world.
- That a concept can be veridically communicated.
Humans systematically neglect the _incongruity between conceptivity and reality_, thereby ending up in constant confusion and conflict over fictions. We do this because we do not have the mental tools required to navigate a non-kind world. These tools are what I'm building.
Endnotes