Linguistic Corrections + LessWrong (1870 Words)


Preface

This is a very confusing essay, to the point where I'm almost tempted to trim it down. I haven't — I think the progression of thoughts is useful — but you absolutely have to read How I Write before reading this.

You also probably have to read Linguistics (and Advice, which, while not strictly necessary, is kind of part two of this three-part series on language).

This started as a commentary on LessWrong, but halfway through I realized an issue in my thinking that sprang from it, so what you see is the development of my beliefs on the topic in real time. This is valuable, but also probably a bit confusing — sorry about that. Keep it in mind as you read.

It's likely that at some point this essay will stop making sense. Because the terms I use all mean things (broadly speaking), it's very easy to read it in bad faith (in the Advice sense) — be careful of this, and make sure you actually know what I'm talking about as I go.

My Strawman

In my Linguistics essay, I ended with a note about why I don't like LessWrong:

But they then set aside observation as less valid. I claim this misses a fundamental part of what language does, and vastly limits the truths the LessWrong community can actually reach.

To me the best example of this is rule 1 of rational discourse: "Don't say straightforwardly false things." I think the community would consider a lot of what I say "false." That's the core issue. It misses the point of what we're really doing here with all these words.

I don't think this claim in and of itself is entirely wrong. LessWrong as a whole does miss a lot of what language does. But the mental strawman I had constructed was that they were trying to access all of truth through language. And they're not. There are certainly members of the community who think this way, and I still believe this is a very problematic trap to fall into. But in reading Duncan Sabien's analysis of the site, I realized that I misunderstood their fundamental aim.

What I Missed

The thesis of Linguistics is that language is both definitional and associational, and I claimed that the associational form was much more valuable and was how language should be used. I still think this is mostly true. This is, at the very least, how language should be used in the context of an essay like this, where I'm trying to convey the experience of a realization.

But what if we were in the same place in our thinking? What if we had relatively similar experiences? Then how would we progress our ideas? Associative language probably does a passable job — it could optimally express my thought process. But it seems to me that the thought process in question in this case is me trying to rigorously reason to a new idea. And I think in terms of experiences (or, put differently, thoughts are experiences). So what I'm doing is mentally using experience to simulate logic as best as I can, and then using associative language to describe these ideas as best as possible. But… What if I just used definitional language to actually describe the logic? That sounds like a pretty good idea, no?

All this to say: what I missed about LessWrong is that definitional language is also very useful. Even if LessWrong fails to appreciate associative language, they certainly understand the value of definitional language. And they're not claiming it's the only valuable thing, they're just trying to make a community where everyone has the common goal of being really good at rationality, and can catch each other's mistakes, because rationality is, at the very least, an extremely valuable skill.

"But wait," you say, "Lachlan, don't you really like the essays on LessWrong? And those induce experiences (to use your terminology) in a sense, even though it's purely definitional language. What's up with that? In fact, can't you just apply the argument for why rationality is better for reaching new truths to explaining truths too? That is, if all your theses are things you reached by experience simulating logic, shouldn't you give us the logical path to the theses instead of your experiential explanation?" This is a very good point.

Wait A Minute

This point is so good, in fact, that I had to re-read Linguistics and think for a while. The understanding that I came to was that the way I reached the thesis of the essay was not through logic, it was through experiencing the conclusion. I then broke down that leap into a few smaller steps that, while non-rigorous, I understood to be true. When I wrote the essay, I further separated those steps into a full intuitive picture, without resorting to full logic. This is similar to the difference between understanding why a mathematical theorem is true intuitively (i.e. you connect a few mathematical experiences with where you started and they get you to the answer, even if the connections themselves might not be fully justified), and actually writing a rigorous proof.
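To make that difference concrete, here's a toy example (mine, not from Linguistics or LessWrong). The intuitive argument that the sum of two even numbers is even is "both numbers pair up, so their sum pairs up." The rigorous version has to name the pairs and justify the arithmetic explicitly. Here's a minimal sketch in Lean 4, assuming a toolchain where the omega tactic is available, and using a made-up IsEven definition rather than the library one:

```lean
-- Illustration only: a homemade notion of "even" (n is two of something).
def IsEven (n : Nat) : Prop := ∃ k, n = k + k

-- The rigorous counterpart of "they both pair up, so the sum pairs up":
-- extract the two pairings, exhibit the combined pairing, and let
-- linear arithmetic (omega) close the bookkeeping.
theorem isEven_add {m n : Nat} (hm : IsEven m) (hn : IsEven n) : IsEven (m + n) :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by omega⟩
```

The intuition and the proof say the same thing; the proof just refuses to leave any step to experience.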

What I'm saying in Advice and Linguistics, then, is closer to "when explaining a math proof, it's more helpful to explain why it's true than to recite a proof" than "theorems cannot be rigorously shown." This is where both the analogy and the term "experience" break down. Because it is very much the case that not all experiences can be rigorously shown — we must thus split "experiences" into "logical experiences" and "illogical experiences." What this essay refers to is mainly logical experiences, and thus this distinction is important to make.

The question is mostly answered, but it seems to me that one step requires a bit more work. The math analogy broadly makes sense, but it seemingly breaks down a bit earlier than I suggested. The reason it's okay to work with intuitions in math is that those intuitions are about rigorously defined things — any intuitive step you take can be rigorized, because your intuitions are about how mathematical objects "behave," and those behaviors are based on definitions. To rephrase slightly: mathematical intuitions are expressible with logic. This is a massive issue — one of the core ideas in Linguistics was that in language, there was logic and experience, and they were fundamentally different things — they couldn't be expressed in terms of each other.

This brings me to a (rather obvious, in hindsight) mistake I made: there is no such thing as definitional language. What do I mean by that? I've been representing what LessWrong does as purely logical linguistics. But obviously that's not possible — language is defined culturally, societally, even personally — there's no escaping experiential associations. The dictionary isn't logical, it's cultural. What LessWrong actually does is hone the art of approximating logic using language as best as possible. They try to create terms that are as objective as possible, and remove as much of the subjective experiential association as possible. They think Wittgenstein should have been in the car instead of Camus so that he never got to write his later works.

This solves our problem — it's now clear why the math analogy holds up: just as mathematical intuition is based on rigor, linguistic rigor is secretly made of experiences. So what LessWrong essays try to do (whether they know it or not) is describe experiences using the most modular, general, and objective building blocks they can so as to best imitate logic.

This is what makes them nice to read, even if they usually say very little: they reach conclusions with a very high degree of certainty.

Recall the thought experiment I mentioned earlier where I was trying to collaboratively reach new truths. I construed using associative language for this as "what I'm doing is mentally using experience to simulate logic as best as I can, and then using associative language to describe these ideas as best as possible," and from this drew the conclusion that it would be better to use definitional language. But in light of the fact that definitional language doesn't exist, what I was really proposing was using associative language that approximates logic as well as possible. While this is not strictly bad, it's also less obviously good. When explaining math, you don't want to rely too heavily on intuition at the risk of being confusing, but you also want to be as intuitive as possible so as not to get tangled in rigor. In the same way, it seems like the best way to explain experiences is with rigor where needed, but otherwise without — that is, approximately in the way put forward in "How I Write."

Back to LessWrong (and Other Thoughts)

I think my original point actually stands: as a site for reaching truth, they fall short, but as a place to hone logic-approximating skills, they're essential. The math analogy holds up: most of being good at math comes from intuition, but it's completely useless unless you have a sufficient ability to rigorize. Similarly, it's very hard to reach meaningful truths with logic-approximation (and most truths on LessWrong are, in an experiential sense, not that "deep"), but the skill trained there is essential for validating intuitive steps.

It's important to note that "validating" means something slightly different here than in math. Yes, part of it is rigorizing, but it's also about separating from experience as much as possible, or at least understanding where experience comes from. I think the best example of why this is so important is ethics, where a single bad intuitive step sends you back to ethical square 1: moral relativism (e.g. intuitionism), because societal norms are so deeply baked into most of our experiences.

Two more questions pop up (though they're really the same): can logic approximation describe all experiences, and what do we do with our brand new definitions for experience? The answer to the first is yes, but really no. In theory, logic approximation and deep experiences are made of the same stuff. But if you want to rigorously describe what it's like to cry on the bus and enjoy it, good luck. You can probably do it, but I suspect it will take hundreds of thousands of words at least (in the same way all math proofs are technically expressible in first-order logic, but I ain't readin allat). Similarly, "logical experiences" and "illogical experiences" are not well defined in theory (or at least illogical experiences = $\varnothing$), but in practice the distinction is an approximate line drawn at "reasonably expressible via logic approximation." This is qualitative, but a corollary of this essay is that literally every definition is also qualitative, so that's probably fine.

Math and Philosophy

A brief final note: as I wrote this and developed my ideas, it truly struck me how identical (but almost antithetical, in a sense) math and philosophy are. The two take the exact same approach to building layers of truth on top of themselves, except one is purely logical despite its approximation of human experience, while the other is purely experiential despite its imitation of logic. It adds a lot of depth to the beauty of both, I think.

This is actually so much the case that I find myself falling into the fallacy here of assuming they're perfectly opposed, and thus drawing conclusions about philosophy from the nature of math. Most of the ideas I did actually think through, but it's a tempting mistake to make.