NOTE: I put this under draft day because I'm hopeful someone will read it. I have no problem with its presentation being a rough draft.

I think EAs don't quantify much, or very well, but they claim to update their knowledge in a quantifiable sense. In doing so, they offer a bogus description of what they do when learning, along with a cultish social ritual for how they do it, occasionally adding an increased appetite for betting as part of the exercise of their epistemology, or an emphasis on specific ideas about the future, as needed. I explain this below. Here I am referring to quantifying probabilistic uncertainty, rather than other uses of mathematics.

Perspectives on how or why EAs quantify beliefs and decisions include:

One - The tendency for EAs to "update" or quantify various kinds of beliefs has a structure that I have seen before, in a different group of people. I am familiar with this suspension of disbelief around discussion of mental mechanisms, and with the commitment to a group dynamic of offering each other communication rituals specific to the in-group during conversation with other members. The communication rituals address and reify a false idea of how a mental mechanism works, and they lend an aura to the interaction, as though it is something special, earned, or developed through practice. It's something you find in any in-group or cult. The EA updating ritual is a ritual. You have a discussion. You want to "update," given that the other person made you feel convinced. You let them know that you will "update" on the new information. EAs make assertions like "People updated after event X" or "I haven't updated on this in a while, but then I found this information from you." You know to discuss your updating with people who will understand what you mean and appreciate it.

Two - There is another perspective on it: that EAs are sometimes quantifying their feelings of doubt, uncertainty, or conviction with respect to beliefs. I find this perspective appealing as an interpretation of EAs acting in good faith to "update" or quantify beliefs or decisions. Good introspection will let you notice degrees or shades of feeling intensity. Your certainty can indeed increase and decrease. It shouldn't surprise anyone, however, that feelings do not always align with our conscious knowledge; hence our usual awareness of our "irrational" feelings. Measuring the intensity of feelings is important for making them more accessible to your way of thinking, whatever awareness, insight, or trigger they represent. Rather than being an all-or-nothing phenomenon, feelings become more nuanced and granular as their level of intensity gets measured and labeled. However, while feelings might tell you something about what you believe in the moment, they don't always tell you what you should believe based on facts or evidence.

CAUTION: Typically, feelings are influenced by motives or beliefs in ways that the person having them cannot control when trying to "think clearly." That is a good reason to use external cognitive aids (for example, a written list of evidence or a pros-and-cons write-up) for some kinds of argument analysis and decision-making; otherwise, your feelings will filter out some of what you need to consciously consider.

Three - There's a perspective that EAs are actually betting addicts, and that their ethics and philosophy reflect an unconscious commitment to risk-seeking behavior. To the extent that is true, you would expect to see a lot of betting behavior among EAs, and I don't have strong evidence of that. Most EAs seem committed to career interests, not betting. However, the prevalence of betting metaphors in EA epistemics is alarming. Words like "casinos," "cashing out," and "odds" show up in most discussions, and serious debates apparently require betting money to resolve. "Updating," from this perspective, is like updating a bookie's betting odds, with the EA as the bookie or the bettor.

Four - EDIT: I almost forgot this perspective, though it is one I've held for a while. By predicting existential risks with specific probabilities, EAs emphasize either an outcome or its contrary. For example, EAs emphasize a path into the future that is ambiguous with respect to the consequences of developing AGI. Supposedly, AGI will either threaten our existence or guarantee a bright one for us. There's a chance of each outcome, and by selecting a specific probability for one, either it or its contrary can be emphasized. Does AGI safety require desperate work, or is AGI going to bring prosperity and happiness to everybody? It seems like either way your AGI research job is vital. In the case of Climate Destruction, on the other hand, the consensus among EAs is that its odds of causing an existential crisis are quite low. Instead, EAs emphasize a contrary interpretation of our current path into the future: a status quo approach of economic development and free markets leading to technological utopia, provided AGI safety concerns are solved. Since the estimated probability of a climate apocalypse is not testable and requires no real proof, the choice of number simply lends emphasis to a worldview or agenda. In the case of Climate Destruction, a crisis whose presence threatens the status quo of economic, social, and business development, the agenda is to maintain cash flow, and the worldview is one that the wealthy hold. The choice of message about Climate Destruction simply reflects that self-serving behavior.

NOTE: EAs do mention tail risks going up, and they sometimes mention climate change as a tail risk. Discussing tail risks seems to be another way of claiming, "Yeah, we're on the right path, but it's getting riskier to stay on it!" With respect to AGI harming everyone and climate destruction causing an apocalypse, I am scared of both, and I don't think either one's contrary deserves following an ambiguous path toward it. Our current path is the wrong path.

Those are a few critical perspectives on EA updating. Speaking for myself, I prefer a categorical statement of belief followed by learning to constrain the belief to contexts in which it is not contradicted. If there are no such contexts, I move on. There are more and less efficient methods to constrain a belief. However, by "belief", I mean something that I find tolerable to assert as part of a conscious deliberation about what belongs in my ontology, or a conscious investigation of truths about the world. Murkier beliefs that I hold are found somewhere in my unconscious, as intuitions or heuristics or nagging feelings, where they typically remain, surfacing every so often to clear my head or irritate me a bit, lol.

TIP: I have been trying to rid my own self-talk of the cultural tic of referring to possibilities and probabilities when what I mean are plausible ideas (plausibilities, haha) and matches to contexts. I don't necessarily recommend it to you unless you would like some "deprogramming." Alternative terms for "context" include "schema," "prototype," and "situation." Contexts have variations or nuances in specific instances, and the more a specific instance varies from some prototypical form, the more the word "ignorance" describes my knowledge about it. "Ignorance," not "uncertainty." If you want to explicitly quantify less, you can try out what I'm doing. For example, you could answer the question, "How should I constrain my belief about this topic?" or the question, "How similar is this context to any that I compare it to? What are the differences?"

Comments

I'm not sure if I understand where you're coming from, but I'd be curious to know: do you think similarly of EAs who are Superforecasters or have a robust forecasting record?

In my mind, updating may as well be a ritual, but if it's a ritual that allows us to better track reality, then there's little to dislike about it. As an example of how precise numerical reasoning could help, the book Superforecasting describes how rounding Superforecasters' predictions (interpreting a .67 probability of X happening as a .7 probability) increases the error of the predictions. The book also includes many other examples where I think numerical reasoning confers a sizable advantage on its user.
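To make "increases the error" concrete, here is a minimal sketch using the Brier score, a standard error metric for probability forecasts. The forecast/outcome pairs are invented for illustration; whether rounding hurts in general depends on how much information the extra digit carries, but with these made-up numbers the rounded forecasts do come out slightly worse.

```python
# Illustrative sketch (not data from the book): how rounding probabilistic
# forecasts can worsen the Brier score. The records below are invented.

def brier(forecast: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

# Hypothetical (forecast, outcome) pairs.
records = [(0.67, 1), (0.72, 1), (0.33, 0), (0.28, 0), (0.67, 0), (0.91, 1)]

raw = sum(brier(p, o) for p, o in records) / len(records)
rounded = sum(brier(round(p, 1), o) for p, o in records) / len(records)

print(f"mean Brier score, raw forecasts:     {raw:.4f}")
print(f"mean Brier score, rounded to tenths: {rounded:.4f}")
```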

Thank you for the question!

In my understanding, superforecasters claim to do well at short-term predictions, in the range of several years, and with respect to a few domains. That is not my own judgement speaking; it comes from some discussion that I read about them. They have no reason to update on their own forecasts outside a certain topic and time domain, so to speak. I can track down the references and offer them if you like, but I think the problem I'm referring to is well-known.

I want to learn more about the narrow context and math in which superforecasters are considered "accurate" versus in error, and why that is so. Offering odds as a prediction is not the same as offering a straightforward prediction, and interpreting and using odds as a prediction is not the same as acting on a prediction. I suspect there's a mistaken analogy between what superforecasters actually do and what it would mean to assign subjective probabilities to credences in general.

EAs offer probabilities for just about any credence, but especially for credences whose truth is very hard to ever determine, such as their belief in an upcoming existential harm. Accordingly, I don't believe the mystique that superforecasters have can rub off on EAs, and superforecaster success certainly cannot.

Other approaches serve different purposes. Fermi estimates, for example, where you estimate the size of something by breaking it down into components and multiplying, are a good way to get a better estimate of whatever is being estimated. But I don't consider that an attempt by an EA to assign a probability to a credence in a typical context, and that is all my critique was focused on.
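To make the Fermi idea concrete, here is a minimal sketch of that kind of decomposition. The scenario and every number in it are invented assumptions (the classic "piano tuners in a large city" exercise), not data.

```python
# A minimal sketch of a Fermi estimate: break a quantity into rough factors
# and multiply. All numbers below are illustrative assumptions.

population        = 3_000_000   # assumed city population
people_per_house  = 2.5         # assumed household size
pianos_per_house  = 1 / 20      # assume 1 in 20 households owns a piano
tunings_per_year  = 1           # assume each piano is tuned about once a year
tunings_per_tuner = 4 * 5 * 50  # assume 4 tunings/day, 5 days/week, 50 weeks/year

households = population / people_per_house
pianos = households * pianos_per_house
tunings_needed = pianos * tunings_per_year
tuners = tunings_needed / tunings_per_tuner

print(f"Rough estimate: about {tuners:.0f} piano tuners")
```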

Statistical estimation is used in a lot of domains, but not in the domain of beliefs. If I were randomly sampling production runs in a factory to estimate how many widgets come off the line with construction errors, I wouldn't be thinking about it the way I think here about EA updating. EAs don't do much random sampling as part of offering their subjective probabilities. While it might be useful to check for background distributions relevant to event occurrences, under the assumption that some outcomes are determined by a population distribution, most of what happens in the world is not random in that sense, and our feelings and beliefs are, for the most part, not about the interpretation of such random events. An EA working in a factory would do what everybody else does: some random sampling, not using their "priors" to guess a percentage and thus a probability of a random widget having a construction error.
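For concreteness, here is a minimal sketch of the kind of sampling estimate I mean. The defect rate and sample size are invented, the simulated sample just stands in for pulling widgets off a real line, and the normal-approximation interval is one conventional choice rather than the only one.

```python
# A minimal sketch of the factory example: estimate the defect rate from a
# random sample rather than from a guessed "prior". Data are simulated;
# defect_rate_true and sample_size are illustrative assumptions.

import random

random.seed(0)
defect_rate_true = 0.03   # unknown in practice; used only to simulate the line
sample_size = 500         # widgets pulled at random from the production run

sample = [random.random() < defect_rate_true for _ in range(sample_size)]
defects = sum(sample)
estimate = defects / sample_size

# Rough 95% interval via the normal approximation to the binomial.
margin = 1.96 * (estimate * (1 - estimate) / sample_size) ** 0.5

print(f"estimated defect rate: {estimate:.3f} +/- {margin:.3f}")
```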

I don't object to numerical reasoning. It's really the "updating" that I think is dubious. There are a lot of great equations in mathematics, any of which you could build a community around, and Bayes' theorem is, I guess, one of them, but it has gone wrong in EA.
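For reference, here is what the textbook Bayes' theorem update actually computes. The prior and likelihoods are invented numbers; the contested part, in my view, is where such numbers come from, not the arithmetic itself.

```python
# A textbook Bayes' theorem update, to show what "updating" computes
# mechanically. The prior and likelihoods below are invented for illustration.

def bayes_update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior P(H | E) from P(H), P(E | H), and P(E | not H)."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

prior = 0.20  # assumed prior credence in hypothesis H
posterior = bayes_update(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
print(f"prior {prior:.2f} -> posterior {posterior:.2f}")
```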

Thanks for taking the time to respond! I find your point of view more plausible now that I understand it a little bit better (though I'm still not sure how convincing I find it overall).

Sure, and thank you for being interested in what I have written here.

I didn't offer an argument meant to convince, more a listing of perspectives on what is actually happening around EA "updating". For example, to know that an EA is confusing judgments of feeling intensity with judgments of truth probability, I would have to have evidence that they are acting in good faith to "update" in the first place, rather than (unconsciously) pursuing some other agenda. As another example, to know that an EA has a betting problem, the psychological kind, I would have to see pathological betting behavior on their part (for example, betting that ruined their finances or their relationships). Different EAs are typically doing different things with their "updating" at different times. Some of them are sure to have a bit of risk-seeking that is unhealthy and distorts their perspective, others are doing their best to measure their feelings of certainty as a fractional number, and still others are being cynical with their offerings of numbers. If the superforecasters among the EAs are doing so well with their probability estimates, I wish they would offer some commentary on free will, game theory, and how they model indeterminism. I would learn something.

If there were a tight feedback loop about process, if everyone understood not only the math but also the evidence that moves credence numbers, and if there were widespread agreement that a new bit of evidence X should move the probability of credence Y by an amount Z, then I could believe that there was systematic care about epistemics in the community. I could believe that EA folks are always training to improve their credence probability assignments. I could believe that EA folks train their guts, hone their rationality, and sharpen their research skills, all at the same time.

But what actually goes on in EA seems very subjective. Anyone can make up any number and claim their evidence justifies it, without really having to prove anything; in fact, it's hard to make a case that is not subjective. There's a spread of numbers on most credences, or, where there isn't, I look at the numbers and see most people quoting somebody else. Meanwhile, everyday credences are just treated as regular beliefs (binary, or constrained with experience), and there is plenty of secondary gain potential. So, no, I don't believe that EA folks are using their probability estimates well at all.

NOTE: By "secondary gain potential", I mean gain from an action other than the action's ostensible gains. A person doing updating for its secondary gains might be developing:

  • feelings of belonging in the EA community.
  • a sense of control over their feelings of doubt or certainty.
  • an addiction to betting or other financial risk-taking.
  • a (false) sense of efficacy in rational arguments.
  • or something else that they would have to tell you.

I suspect that most EAs are being sloppy in their offerings of probabilities and credence levels. They offer something closer to epistemic status metrics for their own statements, metrics that rely on poor evidence or no evidence, just a feeling, maybe of worry or disappointment at failing others' expectations, or the converse: a high metric reflecting a poor judgement of the quality of their own work, typically benchmarked against other work in EA, unsurprisingly. EA folks choose numbers to emphasize a message to their audience that reflects what they're comfortable sharing, not what they actually believe. There's some real controversy among EAs, but it apparently comes from outside the community, through people like me.

So as I said, this whole updating thing has gone wrong in EA. EA as a research community would do better to drop updating and the Bayesian kick until it has a better handle on the contexts suitable for them. If superforecasting offers those contexts, then great!

Let's see superforecasters address existential risk with any success. Let's just see them do the research first, so that they have some clear preliminary answers to what the dangers could shape up to be. Presuming that they are using forecasting techniques rather than foresight, that should be interesting.

NOTE: Once I learn more about superforecasting, I might constrain my beliefs about how EAs update to exclude superforecasters working in their successful forecasting domains. I would do that right now, but I'm suspicious of the superforecaster reputation, so I'll wait until I can learn more.
