Jan_Kulveit

3792 · Joined Dec 2017

Bio

Studying behaviour and interactions of boundedly rational agents, AI alignment and complex systems.

Research fellow at Future of Humanity Institute, Oxford. Other projects: European Summer Program on Rationality. Human-aligned AI Summer School. Epistea Lab.

Sequences (1)

Learning from crisis

Comments (201)

Seems worth trying

At the same time, I don't think the community post / frontpage attention mechanism is the core of what's going on, which is, in my guess, often best understood as a fight between memeplexes over hearts and minds.

The quality of reasoning in the text seems somewhat troublesome. Using two paragraphs as an example:

On Halloween this past year, I was hanging out with a few EAs. Half in jest, someone declared that the best EA Halloween costume would clearly be a crypto-crash — and everyone laughed wholeheartedly. Most of them didn’t know what they were dealing with or what was coming. I often call this epistemic risk: the risk that stems from ignorance and obliviousness, the catastrophe that could have been avoided, the damage that could have been abated, by simply knowing more. Epistemic risks contribute ubiquitously to our lives: We risk missing the bus if we don’t know the time, we risk infecting granny if we don’t know we carry a virus. Epistemic risk is why we fight coordinated disinformation campaigns and is the reason countries spy on each other.

Still, it is a bit ironic for EAs to have chosen ignorance over due diligence. Here are people who (smugly at times) advocated for precaution and preparedness, who made it their obsession to think about tail risks, and who doggedly try to predict the future with mathematical precision. And yet, here they were, sharing a bed with a gambler against whom it was apparently easy to find allegations of shady conduct. The affiliation was a gamble that ended up putting their beloved brand and philosophy at risk of extinction.


It appears that a chunk of Zoe's epistemic risk bears a striking resemblance to financial risk. For instance, if one simply knew more about tomorrow's stock prices, they could sidestep all stock market losses and potentially become stupendously rich.

This highlights the fact that gaining knowledge in certain domains can be a difficult task, with big hedge funds splashing billions and hiring some of the brightest minds just to gain a slight edge in simply knowing a bit more about asset prices. It extends to having more info about which companies may go belly up or engage in fraud.

Acquiring more knowledge comes at a cost. Processing knowledge comes at a cost. Choosing ignorance is mostly not a result of recklessness or EA institutional design but a practical choice given the resources required to process information. It's actually rational for everyone to ignore most information most of the time (this is standard econ; see rational inattention and the extensive literature on the topic).

One real question in this space is whether EAs have allocated their attention wisely. The answer seems to be "mostly yes." In the case of FTX, heavyweights like Temasek, Sequoia Capital, and SoftBank, with billions on the line, did their due diligence but still missed what was happening. Expecting EAs to be better evaluators of FTX's health than established professional investors is somewhat odd. EAs, like everyone else, face the challenge of allocating attention, and their expertise lies in "using money for good" rather than "evaluating the health of big financial institutions". For the typical FTX grant recipient to assume they need to be smarter than Sequoia or SoftBank about FTX would likely not be a sound decision.

Also: Confido works for intuitively eliciting probabilities

I think this is a weird response to what Buck wrote. Buck also isn't paid either to reform the EA movement or to respond to criticism on the EA Forum, and he decided to spend his limited time expressing how things realistically look from his perspective.

I think it is good if people write responses like that, and such responses should be upvoted, even if you disagree with the claims. Downvotes should not express 'I disagree', but 'I don't want to read this'.

Even if you believe EA orgs are horrible and should be completely reformed, in my view you should be glad that Buck wrote his comment, as it gives you a better idea of what people like him may think.

It's important to understand that the alternative to this comment is not Buck writing a detailed 30-page response. The alternative is, in my guess, just silence.

Thanks for all the care and effort which went into writing this!

At the same time, while reading, my reactions most of the time were "this seems a bit confused", "this likely won't help", or "this seems to miss the fact that there is someone somewhere close to the core EA orgs who understands the topic pretty well and has a different opinion".

Unfortunately, illustrating this in detail for the whole post would be a project of ... multiple weeks.

At the same time, I thought it could be useful to discuss at least one small part in detail, to illustrate what the actual in-the-detail disagreement could look like.

I've decided to write a detailed response to a few paragraphs about rationality and Bayesianism. This is, from my perspective, not a cherry-picked part of the original text which is particularly wrong, but a part which seems representatively wrong/confused. I picked it for convenience, because I can argue and reference it particularly easily.
 

Individual Bayesian Thinking (IBT) is a technique inherited by EA from the Rationalist subculture, where one attempts to use Bayes’ theorem on an everyday basis. You assign each of your beliefs a numerical probability of being true and attempt to mentally apply Bayes’ theorem, increasing or decreasing the probability in question in response to new evidence. This is sometimes called “Bayesian epistemology” in EA, but to avoid confusing it with the broader approach to formal epistemology with the same name we will stick with IBT.

This seems like a pretty strange characterization. Even though I have participated in multiple CFAR events, teach various 'rationality techniques', and know a decent amount about Bayesian inference, I think this is misleading/confused.

What's called "Bayesian epistemology" in rationalist circles is basically the common understanding of the term:
- you don't hold beliefs to be true or false, but have credences in them
- normatively, you should update these credences based on evidence, and the proper rule for that is Bayes' rule; this is intractable in practice, so you use various types of approximations (a minimal numeric sketch of a single update follows below)
- you should strive for coherent beliefs if you don't want to be Dutch-booked
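
(To be concrete about what the normative rule says, here is a minimal sketch of a single update; the numbers are made up purely for illustration, and nobody is suggesting you run this in your head:)

```python
# Minimal single-step Bayes-rule update of a credence; numbers are made up.

def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H) and P(E | not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Credence 0.3 in a hypothesis; the observed evidence is 4x more likely if it is true.
print(round(update(prior=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2), 3))  # 0.632
```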

It's important to understand that in this frame it is a normative theory. Bayes' theorem, in this perspective, is not some sort of "minor aid for doing some sort of likelihood calculation" but a formal foundation for a large part of epistemology.

The view that you believe different things to different degrees, and that these credences basically are Bayesian probabilities and are normatively governed by the same theory, isn't an 'EA' or 'rationalist' thing but a standard Bayesian take (cf. Probability Theory: The Logic of Science, E. T. Jaynes).

Part of what Eliezer's approach to 'applied rationality' aimed for was taking Bayesian epistemology seriously and applying this frame to improve everyday reasoning.

But this is almost never done by converting your implicit probability distributions to numerical credences, doing the explicit numerical math, and blindly trusting the result!

What's done instead is:
- noticing that your brain already internally uses credences and probabilities all the time; you can easily access your "internal" (/S1/...) probabilities in an intuitive way by asking yourself questions like "how surprised would you be to see [a pink car today | Donald Trump's reelection | a SpaceX rocket landing in your backyard]" (the idea that brains do this is pretty mainstream, e.g. Confidence as Bayesian Probability: From Neural Origins to Behavior, Meyniel et al.)
- noticing your brain often clearly does something a bit similar to what Bayesians suggest as the normative idea - e.g. if two SpaceX rockets had already landed in your backyard today, you would be way less surprised by the third one
- noticing there is often a disconnect between this intuitive / internal / informal calculation and the explicit, verbal reasoning (cf. alief/belief)
- ...and using all of that to improve both the "implicit" and "explicit" reasoning!

The actual 'techniques' derived from this are often implicit. For example, one actual technique is: imagine you are an alien who has landed in one of two worlds. They differ in that in one a proposition is true, and in the other the opposite is true. You ask yourself what the world would look like in each case, and then look at the actual world.

For example, consider the proposition "democratic voting is the optimal way to make decisions in organizations": what would the world where this is true look like? There are parts of the world with intense competition between organizations, e.g. companies in highly competitive industries optimizing hard, measurable things. In the world where the proposition is true, I'd expect a lot of voting in these companies. We don't see that, which decreases my credence in the proposition.

It is relatively easy to see how this is both connected to Bayes and not asking people to do any explicit odds multiplications.
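
(For illustration only, and with made-up numbers for the voting example above, this is roughly the formal move the intuition corresponds to, in the odds form of Bayes' rule:)

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# All numbers are made-up illustrations for the voting example above.

prior_odds = 1.0          # start at even odds for the proposition
likelihood_ratio = 1 / 5  # the observation ("competitive companies mostly don't vote internally")
                          # is taken to be 5x more likely if the proposition is false
posterior_odds = prior_odds * likelihood_ratio
posterior_probability = posterior_odds / (1 + posterior_odds)
print(posterior_odds, round(posterior_probability, 2))  # 0.2, 0.17
```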

There is nothing wrong with quantitative thinking, and much of the power of EA grows from its dedication to the numerical. However, this is often taken to the extreme, where people try to think almost exclusively along numerical lines, causing them to neglect important qualitative factors or else attempt to replace them with doubtful or even meaningless numbers because “something is better than nothing”. These numbers are often subjective “best guesses” with little empirical basis.[27]


While some people make the error of trying to replace complex implicit calculations with over-simplified spreadsheets of explicit numbers, this paragraph seems to conflate multiple things together as "numerical" or "quantitative".

Assuming fairly standard cognitive science and neuroscience, at some level all thinking is "numerical", including thinking which feels intuitive or qualitative. People usually express such thinking in words like "I strongly feel" or "I'm pretty confident".

The classical rationalist move in such cases is to try to make the implicit explicit: e.g., if you are fairly confident, at what odds would you be willing to bet on it?
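
(A minimal sketch of how a willingness to bet cashes out as an implied credence; the odds here are just illustrative:)

```python
# Cashing out "fairly confident" as a bet; the numbers are illustrative.

def implied_credence(stake: float, win: float) -> float:
    """If you would risk `stake` to win `win`, your implied credence is at least stake / (stake + win)."""
    return stake / (stake + win)

# Willing to bet 4:1 on the claim, i.e. risk $4 to win $1:
print(implied_credence(stake=4, win=1))  # 0.8
```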

When done correctly, the ability and willingness to do that mostly exposes what's already there. People already act based on implicit credences and likelihoods, even if they don't try to express them as probability distributions and you don't have access to them.

E.g., when some famous experts recommended a 'herd immunity' strategy to deal with covid, using strong and confident words, such recommendations actually were subjective "best guesses" with little empirical basis. The same is actually true for many expert opinions on policy topics!

The rationalist habit of reporting credences and predictions using numbers basically exposes many things to the possibility of being proved wrong, and exposes many personal best guesses for what they are.

Yes, for someone who isn't used to this at all, this may create a fake aura of 'certainty', because in common communication the use of numbers often signals 'this is more clear' and the use of words signals 'this is more slippery'. But this is just a communication protocol.

Yes, as I wrote before, some people may make the mistake of trying to convert some basic things to numbers and, as the next step, replacing their brains with spreadsheets of Bayes formulas, but this does not seem common, at least in my social neighborhood.

For instance, Bayesian estimates are heavily influenced by one’s initial figure (one’s “prior”), which, especially when dealing with complex, poorly-defined, and highly uncertain and speculative phenomena, can become subjective (based on unspecified values, worldviews, and assumptions) to the point of arbitrary.[28] This is particularly true in existential risk studies where one may not have good evidence to update on.

I would be curious how the authors imagine that non-Bayesian thinking which does not depend on any priors internally works.

We assume that, with enough updating in response to evidence, our estimates will eventually converge on an accurate figure. However, this is dependent on several conditions, notably well-formulated questions, representative sampling of (accurate) evidence, and a rigorous and consistent method of translating real-world observations into conditional likelihoods.[29] This process is very difficult even when performed as part of careful and rigorous scientific study; attempting to do it all in your head, using rough-guess or even purely intuitional priors and likelihoods, is likely to lead to more confidence than accuracy.

This seems confused (the "common response" mentioned below applies here exactly). How do you imagine that, for example, a group of people looking at a tree manages to agree on seeing a tree? The process of converting raw sensory data to the tree-hypothesis is way more complicated than a typical careful and rigorous scientific study, and also way more reliable than a typical published scientific study.

Again: correctly understood, the applied rationalist idea is not to replace our mind's natural ways of recognizing a tree with a process where you would assign numbers to statements like "green in the upper left part of the visual field" and do explicit Bayesian calculation in an S2 way, but just to be ... less wrong.

This is further complicated by the fact that probabilities are typically distributions rather than point values – often very messy distributions that we don’t have nice neat formulae for. Thus, “updating” properly would involve manipulating big and/or ugly matrices in your head. Perhaps this is possible for some people.

A common response to these arguments is that Bayesianism is “how the mind really works”, and that the brain already assigns probabilities to hypotheses and updates them similarly or identically to Bayes’ rule. There are good reasons to believe that this may be true. However, the fact that we may intuitively and subconsciously work along Bayesian lines does not mean that our attempts to consciously “do the maths” will work.

I think the "common response" is partially misunderstood here? The common response does not imply you can consciously explicitly multiply the large matrices or do the exact  Bayesian inferences, any more than someone a catching a ball would be consciously and explicitly solving the equations of motion.

The correct ideas here are:
- you can often make some parts or results of the implicit subconscious calculations explicit and numeric (cf forecasting, betting, ...)
- the implicit reasoning is often biased and influenced by wishes and wants
- explicitly stating things or betting on things sometimes exposes the problems
- explicit reasoning can be good for that
- explicit reasoning is also good for understanding what the normatively good move is in simple or idealized cases
- on the other hand, explicit reasoning alone is computationally underpowered for almost anything beyond very simple models (compare how many FLOPs your brain is using vs. how fast you can explicitly multiply numbers)
- what you usually need to do is use both, and watch for flaws
 

In addition, there seems to have been little empirical study of whether Individual Bayesian Updating actually outperforms other modes of thought, never mind how this varies by domain. It seems risky to put so much confidence in a relatively unproven technique.

Personally, I don't know anyone who would propose that people should do the "Individual Bayesian Thinking" mode of thought in the way you describe, and I don't see much reason to make a study of this. Also, while a lot of people in EA orgs subscribe to basically Bayesian epistemology, I don't know anyone who tries to live by "IBT", so you should probably be less worried about the risks from its use.

The process of Individual Bayesian Updating can thus be critiqued on scientific grounds, 

So, to me, this is characteristic of - and, frankly, annoying about - the whole text. I don't think you have properly engaged with Bayesian epistemology, state-of-the-art applied rationality practice, or relevant cognitive science. "Critiqued on scientific grounds" sounds serious and authoritative ... but where is the science?
 

but there is also another issue with it and hyper-quantitative thinking more generally: motivated reasoning. With no hard qualitative boundaries and little constraining empirical data, the combination of expected value calculations and Individual Bayesian Thinking in EA allows one to justify and/or rationalise essentially anything by generating suitable numbers.


This is both sad and funny. One of the good things about rationalist habits and techniques is that stating explicit numbers often allows one to spot and correct motivated reasoning. In relation to existential risk and similar domains, the hope is often that by practicing this in domains with good feedback and bets which can be empirically evaluated, you get better at thinking clearly ... and this will at least partially generalize to epistemically more challenging domains.

Yes, you can overdo it, or do stupid or straw versions of this. Yes, it is not perfect.

But what's the alternative? Honestly, in my view, in many areas of expertise the alternative is to state views, claims, and predictions in a sufficiently slippery and non-quantitative way that it is very difficult to clearly disprove them.

Take, for example, your text and claims about diversity. Given the way you are using it, it seems anyone trying to refute the advice on empirical grounds would have a really hard time, and you would always be able to write some story about why some dimension of diversity is not important, or why some other piece of research states something else. (It seems a common occurrence in the humanities that some confused ideas basically never die, unless they lose support on the level of the 'sociology of science'.)

Bottom line:
- these 8 paragraphs did not convince me of any mistake people at e.g. FHI may be making
- the suggestion that "Bayes' theorem should be applied where it works" is pretty funny; I guess Bayesians wholeheartedly agree with this!
- suggestions like "studies of circumstances under which individual subjective Bayesian reasoning actually outperforms other modes of thought" seem irrelevant given the lack of understanding of actual 'rationality techniques'
- we have real-world evidence that getting better at some of the traditional "rationalist" skills makes you better at least at some measurable things, e.g. forecasting

I suspect ... that even what I see as a wrong model of the 'errors of EA' may point to some interesting evidence. For example, maybe some EA community builders are actually teaching "individual Bayesian thinking" as a technique you should use in the way described?

In the spirit of the communication style you advocate for ... my immediate emotional reaction to this is "Eternal September has arrived".

I dislike my comment being summarized as "brings up the "declining epistemics" argument to defend EA orgs from criticism".  In the blunt style you want, this is something between distortion and manipulation. 

On my side, I wanted to express my view on the Wytham debate, and I wrote a comment expressing my views on it.

I also dislike the way my comment is straw-manned by selective quotation.

In the bullet point following "The discussion often almost completely misses the direct, object-level way of looking at this, even if just at a back-of-the-envelope-estimate level," I do explicitly acknowledge the possible large effects of higher-order factors.
 

In contrast, a large fraction of the attention in the discussion seems to be spent on topics which are both two steps removed from the actual thing, and very open to opinions. Where by one step removed I mean e.g. "how was this announced" or "how was this decided", and by two steps removed e.g. "what will be the impact of how this was announced on the sentiment of the Twitter discussion". While I do agree such considerations can have large effects, driving decisions by this type of reasoning in my view moves people and orgs into the sphere of pure PR, spin, and appearance.

What I object to is a combination of
1. ignoring the object level, or discussing it in a very lazy way
2. focusing on the 2nd order ... not in a systematic way, but mostly based on salience and emotional pull (e.g., how will this look on Twitter)

Yes, it is a simple matter to judge where this leads in the limit. We have a bunch of examples of what the discourse looks like when completely taken over by these considerations - e.g., political campaigns. Words have little meaning connected to physical reality, but are mostly tools in the fight for the emotional states and minds of other people.

Also: while "those with high quality epistemics usually agree on similar things" is a distortion that makes the argument personal, about people, in reality, yes, good reasoning often converges to similar conclusions.

Also: "It's a given that the path of catering to a smaller group of people with higher quality epistemics will have more impact than spreading the core EA messaging to a larger group of people with lower quality epistemics"

No, it's not a given. Just, so far, effective altruism was about using evidence and reason to figure out how to benefit others as much as possible, and acting based on that. Based on the thinking so far, it was decidedly not trying to be a mass movement making our core insights more appealing to the public at large.

In my view, no one has yet figured out what the appealing-to-the-masses, don't-need-to-think-much version of effective altruism should look like in order to actually be good.

(edit: Also, I quite dislike the frame-manipulation move of shifting from "epistemic decline of the community" to "less intelligent or thoughtful people joining". You can imagine a randomized experiment where you take two groups of equally intelligent and thoughtful people and have them join communities with different styles of epistemic culture (e.g. physics, and multi-level marketing). You will get very different results. While you seem to interpret a lot of things as being about people (are they smart? have they studied philosophy?), I think it's often much more about norms.)

I will try to paraphrase; please correct me if I'm wrong about this: the argument is that this particular bikeshed is important because it provides important evidence about how EA works, how trustworthy the people are, or what the levels of transparency are. I think this is a fair argument.

At the same time I don't think it works in this case, because while I think EA has important issues, this purchase does not really illuminate them.

Specifically, object-level facts about this bikeshed:

  • do not provide all that much evidence, beyond basic facts like "people involved in this have access to money"
  • the things they tell you are mostly boring
  • they provide some weak positive evidence about the people involved being sane and reasonable
  • it is unclear how much the evidence provided by this generalizes to nuclear reactors
     

At the object level, you don't need precise numbers and long spreadsheets to roughly evaluate it. As I gestured at, in late 2021 the "x-risk-reduction" area had billions of dollars committed to it, fewer than a thousand people working on it, and good experience with progress made at in-person events. Given the ~low-millions-of-pounds effective cost of the purchase and the marginal costs of time and money, it seems like a sensible decision. In my view this conclusion does not strongly depend on priors about EA; you can reach it by doing a quick calculation and a few Google searches.
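
(A minimal sketch of the kind of quick calculation I mean; every number below is an illustrative placeholder, not an actual figure from the purchase:)

```python
# Back-of-the-envelope sanity check; every input is an illustrative placeholder.

committed_funding_usd = 5e9   # order-of-magnitude funding committed to the area
people_in_area = 1000         # rough headcount working on x-risk reduction
funding_per_person = committed_funding_usd / people_in_area
print(f"~${funding_per_person:,.0f} of committed funding per person")  # ~$5,000,000

venue_effective_cost = 5e6    # placeholder for a "low millions" effective cost
print(f"venue effective cost is ~{venue_effective_cost / funding_per_person:.1f}x the committed funding per person")
```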

Things about the process seem mostly boring. How it went seems to be:
1. some people thought an events venue near Oxford was a sensible, even if uncertain, bet
2. they searched for venues
3. selected a candidate
4. got funding
5. EVF decided to fiscally sponsor this
6. the venue was bought
7. this was not announced with fanfare
8. boring things like reconstruction work started?

(Disclosure about step 2: I had seen the list of candidate venues, and actually visited one other place on the list. The process was in my view competent and sensible, for example in that it involved talking with potential users of the venue.)

What this tells us about the people involved seems ...not much, but mostly weakly positive?

1. it seems the decision process involved some willingness to explore and do uncertain things; this is better than the EA strawman of comparing every option to bednets
2. it seems based on an understanding of real-world event organization
3. the decision to not announce it with fanfare seems sensible
4. my impression is that the counterfactual PR impacts, if this had been announced with fanfare, pre-FTX, would have been worse

In contrast, some of the things critics of the decision ask for seem pretty unreasonable to me. For example:
1. discussing property purchases before they are made
2. creating a splash of publicity immediately after it was purchased
3. getting EA forum users somehow involved in the process
4. semi-formal numerical estimates of impact

I do think that what it does illuminate is a tension between

  • global poverty reduction EA memes, which include stuff like comparing purchases to lives saved, and a moral duty to do something about it
  • x-risk-reduction EA memes, which include stuff like willingness to spend a lot of money to influence something important
  • rationality memes, which emphasize that spending $1000 to save 1h of time in the morning, and spending 1h to save $30 in the afternoon, is perhaps not an optimal decision pattern (see the toy numbers below)
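
(Spelling out that last point with toy numbers, purely as illustration:)

```python
# Toy illustration of an inconsistent implied value of time; numbers are made up.

morning_value_per_hour = 1000 / 1   # paid $1000 to save one hour
afternoon_value_per_hour = 30 / 1   # spent one hour to save $30
print(morning_value_per_hour, afternoon_value_per_hour)  # 1000.0 30.0
# Acting as if your time is worth both $1000/h and at most $30/h on the same day
# suggests at least one of the two decisions is probably a mistake.
```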

And I do think it is somewhere between really PR-tricky and a PR nightmare to have all of this under one brand. If this is the main point, then yes, Wytham is a piece of evidence, but this seemed clear much sooner.

With nuclear reactors, I don't see a strong case for how this evidence generalizes, in either direction.

 

For me, unfortunately, the discourse surrounding Wytham Abbey seems like a sign of epistemic decline of the community, or at least of the EA Forum.
 

  • The amount of attention spent on this seems to be a textbook example of bikeshedding

    Quoting Parkinson: "The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved." A reactor is so vastly expensive and complicated that an average person cannot understand it (see ambiguity aversion), so one assumes that those who work on it understand it. However, everyone can visualize a cheap, simple bicycle shed, so planning one can result in endless discussions because everyone involved wants to implement their own proposal and demonstrate personal contribution.

    In the case of EAs, there are complicated, high-stakes things, for example what R&D efforts to support around AI. This has a scale of billions of dollars now, much higher stakes in the future, and there is a lot to understand.

    In contrast, absolutely anyone can easily form opinions about the appropriateness of a manor house purchase, based on reading a few tweets.
     
  • Repeatedly, the tone of the discussion is a bit like "I've read a tweet by Émile Torres, I got upset, and I'm writing on the EA Forum". Well, tweets by Émile Torres are known to be an unreliable source of information, and are often optimized to create a negative emotional response. (To not single out Torres, this is also true for many other tweets.)
     
  • The discussion often almost completely misses the direct, object-level way of looking at this, even if just at a back-of-the-envelope-estimate level. On this level venue purchases were sensible:
    • the amount of EA x-risk-reduction money per person in early 2022 was pretty high
    • organising events and setting up venues is labour intensive, often more difficult to delegate than people assume, and often constrained by the time of people who have high opportunity costs
    • on the margin, you can often trade money, time, work ... and this trade seems to make sense
    • apparently, multiple people reached a similar conclusion; apart from Wytham, there was for example a different venue purchased in the Bay Area, and another near Prague (note: I'm leading the org fiscally sponsoring the latter project)
       
  • In contrast, a large fraction of the attention in the discussion seems to be spent on topics which are both two steps removed from the actual thing, and very open to opinions. Where by one step removed I mean e.g. "how was this announced" or "how was this decided", and by two steps removed e.g. "what will be the impact of how this was announced on the sentiment of the Twitter discussion". While I do agree such considerations can have large effects, driving decisions by this type of reasoning in my view moves people and orgs into the sphere of pure PR, spin, and appearance.
     

Nice post, but my rough take is:
 

  • it's relatively common for markets to be inefficient but unexploitable; trading on "everyone dies" seems a clear case of hard-to-exploit inefficiency
  • markets are not magic; impacts of one-off events with complex consequences are difficult to price in, and what all the magical market aggregation boils down to is a bunch of human brains doing the trades; e.g. I was able to beat the market and get an n-times return at the point where markets were insane about covid; later, I talked about it with someone at one of the giant hedge funds, and the simple explanation is that, while they were looking into it, at some point I knew more about covid than they were able to assemble
    • examples of such hard-to-predict events are e.g. the capabilities and impacts of a specific model
  • the dichotomy of 30% growth / everyone dies is unrealistic for trading purposes
    • near term, there are various outcomes like "industry X gets disrupted" or "someone loses their job due to automation" or "war"
      • if you anticipate fears of this type dominating in the next 10 years, you should price in many people increasing their savings and borrowing less
         

When the discussion is roughly at the level of 'seems to me obviously worth doing', it seems to me fine to state dissent of the form 'often seems bad or not working to me'.

Stating an opinion is not 'appeal to authority'. I think in many cases it's useful to know what people believe, and if I have to choose between a forum where people state their beliefs openly and more often, and a forum where people state beliefs only when they are willing to write a long and detailed justification, I prefer the first.

I'm curious in which direction you think the supposed 'conflict of interest' points:

I'm employed at the same institution (FHI) where Zoe works, and we were part of the same RSP program (although in different cohorts). This mostly creates some incentive not to criticize Zoe's ideas publicly, and would preclude me from e.g. reviewing Zoe's papers, because of favourable bias.

Also ... I think that while being a stakeholder in a grant to buy a cheap and cost-saving events venue has not much to do with the topics in question, it mostly creates some incentive to be silent, because by engaging critically with the topic you increase the risk that someone will summon an angry Twitter mob to attack you.

Overall ... it's probably worth noticing that people like you, strong-downvoting my comment (now at karma 5, yours at 12), are the side actually trying to silence the critic here, while agreement with "it is surprising that some of Carla Zoe Kremer’s reforms haven’t been implemented" or vague criticisms of "EA leadership" are what's in vogue on the EA Forum now.
