
Linch

@ EA Funds
25241 karma · Joined · Working (6-15 years) · openasteroidimpact.org

Comments (2705)

Thanks. I appreciate your kind words.

IMO if EA Funds isn't representative of EA, I'm not sure what is.

I think there's a consistent view where EA is about doing careful, thoughtful analysis with uniformly and transparently high rigor, communicating those analyses transparently and legibly, and (almost) always making decisions according to such analyses as well as strong empirical evidence. Under that view, GiveWell (and for that matter JPAL) is much more representative of what EA ought to be about than what at least LTFF tries to do in practice.

I don't know how popular the view I described above is. But I definitely have sympathy towards it.
 

Thanks. I agree here that "criminals" seems a more plausible interpretation of what he said than "woke activists." I also definitely sympathize with an unthinking tweet, written in the moment, being misinterpreted, especially by people on the EA Forum.

I think generally though it's easy to misunderstand people, and if people respond to clarify, you should believe what they say they meant to say, not your interpretation of what they said. 

I agree this is true in general. I think we might have different underlying probabilities of how accurate that model is however. In particular, I find it rather plausible that people pushing for "edgy" political beliefs will intentionally backtrack when challenged. I also have a cached view that this type of strategic ambiguity is particularly popular among the alt-right (not saying that other political factions are innocent here). 

And in this particular case, I'd note that the incentive for falsifying what he meant is massive.

Again, I don't know Richard and how strong his desire is to always be consistently candid about what he means. It's definitely possible that he's unusually truth-seeking (my guess is that some of his defenders will point to that as one of his chief virtues). I'm just saying that you should not exclude deception from the hypothesis space in situations similar to this one.

I appreciate that you replied! I'm sorry if I was rude. I think you're not engaging with what I actually said in my comment, which is pretty ironic. :) 

(E.g., there are multiple misreadings. I've never interacted with you before, so I don't really know whether they're intentional.)

"Woke activist" was not my first, second, or third interpretation of that quote fwiw. (In decreasing order I would've said "mentally ill/crazy people", "black people", "people Hanania generically doesn't like" when I first read the tweet). I did remember flagging to myself at the time I first saw the tweet/it blew up that people went to the racism interpretation too quickly, but decided it was not a battle I was particularly excited to fight. I don't find this type of exegesis particularly fun in the majority of contexts, even aside from the unpleasant source material. (I do find the self-censorship mildly regrettable). Now that I've learned greater context re: his past writings, I'd lean towards the racism interpretation being the most plausible. 

Separately, I also don't think interpreting that statement as racism towards Black people is the maximally uncharitable interpretation. 

Also, it was clearly not about Manifest (though it is nonetheless very cringe).

Thanks for this! You might also be interested in the results of FORESIGHT, and other work within the intelligence communities, about how to warn (H/T @Alex D). 

My understanding is that warning correctly is surprisingly hard; there are many times when (correct) warnings are misunderstood, not heeded, or not even really heard.

There's a temptation ex post to blame the warned for not paying sufficient attention, of course, but it'd also be good for people interested in raising alarms ("warners") to make their warnings as clear, loud, and unambiguous as possible. 

(very minor) as a native Chinese speaker, associating "yang" 阳 (literally, sun) with black feels really discordant/unnatural to me. 

To be clear, I also have high error bars on whether traversing 5 OOMs of algorithmic efficiency in the next five years is possible, but that's because of (a) high error bars on diminishing returns to algorithmic gains, and (b) a tentative model that most algorithmic gains in the past were driven by compute gains, rather than exogenous to them. Algorithmic improvements in ML seem much more driven by the "f-ck around and find out" paradigm than by deep theoretical or conceptual breakthroughs; if we model experimentation gains as a function of quality-adjusted researchers multiplied by compute multiplied by time, it's obvious that the compute term is the one that's growing the fastest (and thus the thing that drives the most algorithmic progress).
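A minimal sketch of that model in Python, with made-up growth rates purely for illustration (the 20%/year researcher growth and 4x/year compute growth below are hypothetical numbers, not estimates from anywhere):

```python
# Toy model (illustrative assumptions only): experimentation-driven algorithmic
# progress scales with quality-adjusted researchers x compute x time.
researchers_growth = 1.2  # hypothetical: research workforce grows 20%/year
compute_growth = 4.0      # hypothetical: frontier training compute grows 4x/year

researchers, compute = 1.0, 1.0
for year in range(1, 6):
    researchers *= researchers_growth
    compute *= compute_growth
    experimentation = researchers * compute  # experimentation capacity per unit time
    print(f"year {year}: researchers x{researchers:.2f}, "
          f"compute x{compute:.0f}, experimentation x{experimentation:.0f}")

# After 5 years the researcher term is up ~2.5x while the compute term is up
# ~1000x, so compute dominates the growth of the product.
```

Under any growth rates in this ballpark, the compute term swamps the researcher term, which is the sense in which compute "drives" experimentation-based algorithmic progress.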

I didn't read all the comments, but Order's are obvious nonsense, of the "(a+b^n)/n = x, therefore God exists" tier. E.g., take this comment:

But something like 5 OOMs seems very much in the realm of possibilities; again, that would just require another decade of trend algorithmic efficiencies (not even counting algorithmic gains from unhobbling).

Here he claims that 100,000x improvement is possible in LLM algorithmic efficiency, given that 10x was possible in a year. This seems unmoored from reality - algorithms cannot infinitely improve, you can derive a mathematical upper bound. You provably cannot get better than Ω(n log n) comparisons for sorting a randomly distributed list. Perhaps he thinks new mathematics or physics will also be discovered before 2027?

This is obviously invalid. The existence of a theoretical complexity upper bound (which, incidentally, Order doesn't have numbers for) doesn't mean we are anywhere near it numerically. Those aren't even the same level of abstraction! Furthermore, we have clear theoretical proofs for how fast sorting can get, without (AFAIK) any such theoretical limits for learning. "Algorithms cannot infinitely improve" is irrelevant here; it's the slightly more mathy way to say a deepity like "you can't have infinite growth on a finite planet," without actual relevant semantic meaning.[1]

Numerical improvements happen all the time, sometimes by OOMs. No "new mathematics or physics" required.
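To make this concrete (my illustration, not anything from the thread): the Ω(n log n) comparison bound constrains asymptotics, not constants, so two sorts in the same complexity class can differ by orders of magnitude in wall-clock time. A minimal sketch:

```python
# Illustration: an asymptotic lower bound says nothing about how far current
# implementations are from it numerically. Both sorts below are O(n log n),
# yet they differ by a large constant factor in runtime.
import random
import time

def plain_merge_sort(xs):
    # A deliberately naive O(n log n) merge sort.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = plain_merge_sort(xs[:mid]), plain_merge_sort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.random() for _ in range(200_000)]

t0 = time.perf_counter(); plain_merge_sort(data); t1 = time.perf_counter()
sorted(data);                                     t2 = time.perf_counter()

print(f"naive merge sort: {t1 - t0:.3f}s; built-in sort: {t2 - t1:.3f}s")
# Same complexity class, yet typically a 10-100x runtime gap -- and neither
# number tells you how close ML training is to any theoretical limit.
```

And for learning algorithms we don't even have the analogue of the Ω(n log n) proof, so "a bound exists" does no work in telling you how much efficiency is left on the table.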

Frankly, as a former active user of Metaculus, I feel pretty insulted by his comment. Does he really think no one on Metaculus took CS 101? 

  1. ^

    It's probably true that every apparently "exponential" curve becomes a sigmoid eventually, but knowing this fact doesn't let you time the transition. You need actual object-level arguments and understanding, and even then it's very, very hard (as people arguing against Moore's Law, or for "you can't have infinite growth on a finite planet," found out).

Thanks for elucidating your thoughts more here.

"There are good guys and bad guys. We need to continue to make it clear that the good guys are good, and should be reluctant to draw attention to their downsides."

I agree this would be a (very) bad epistemic move. One thing I want to avoid is disincentivizing broadly good moves just because their costs are more obvious/sharp to us. There are of course genuinely good reasons to criticize mostly good but flawed actors (such people are more amenable to criticism, so criticism of them is more useful, and their decisions are more consequential). And of course there are alternative framings under which critical feedback is more clearly a gift, which I would want us to move more towards.

That said, all of this is hard to navigate well in practice.
