@JWS asked the question: why do EA critics hate EA so much? Are all EA haters just irrational culture warriors?
I genuinely don't know if this is an interesting/relevant question that's unique to EA. To me, the obvious follow-up question is whether EA is unique or special in attracting this (average) level of vitriol in critiques of us. Like, is the answer to "why is so much EA criticism hostile and lazy" the same as the answer to "why is so much criticism, period, hostile and lazy?" Or are there factors specific to EA that are relevant here?
I haven't been su...
Sure, social aggression is a rather subjective call. I do think decoupling/locality norms are relevant here. "Garden variety incompetence" may not have been the best choice of words on Sean's part,[1] but it did seem like a) a locally scoped comment specifically answering a question that people on the forum understandably had, b) much of it empirically checkable (other people formerly at FHI, particularly ops staff, could present their perspectives re: relationship management), and c) Bostrom's capacity as director is very much relevant to the discussi...
It wasn't carefully chosen. It was the term used by the commenter I was replying to. I was a little frustrated, because it was another example of a truth-seeking enquiry by Milena getting pushed down the track of only-considering-answers-in-which-all-the-agency/wrongness-is-on-the-university-side (including some pretty unpleasant options relating to people I'd worked with ('parasitic egregore'/'siphon money')).
>Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a bureaucratic parasitic e...
Interesting example! I don't know much about Tate, but I understand him as a) only "influential" in a very ephemeral way, in the way that e.g. pro wrestlers are, and b) only influential among people who themselves aren't influential.
It's possible we aren't using the word "influential" in the same way. E.g. implicit in my understanding of "influential" is something like "having influence on people who matter" whereas maybe you're just defining it as "having influence on (many) people, period?"
I claim that on net FHI would've brought more prestige to Oxford than the other way around, especially in the counterfactual world where it thrived/was allowed to thrive (which might be impractical for other reasons).
I might think of FHI as having borrowed prestige from Oxford. I think it benefited significantly from that prestige. But in the longer run it gets paid back (with interest!).
That metaphor doesn't really work, because it's not that FHI loses prestige when it pays it back -- but I think the basic dynamic of it being a trade of prestige at different points in time is roughly accurate.
I might not be tracking all the exact nuances, but I'd have thought that prestige is ~just legible influence aged a bit, in the same way that old money is just new money aged a bit. I model institutions like Oxford as trying to play the "long game" here.
The point I’m trying to make is that there are many ways you can be influential (including towards people that matter) and only some of them increase prestige. People can talk about your ideas without ever mentioning or knowing your name, you can be a polarising figure who a lot of influential people like but who it’s taboo to mention, and so on.
I also do think you originally meant (or conveyed) a broader meaning of influential - as you mention economic output and the dustbins of history, which I would consider to be about broad influence.
Erm, looking at the accomplishments of FHI, I'd be genuinely surprised if random philosophers from Oxford would be nearly as influential going forward. "It's the man that honors the medal."
The vast majority of academic philosophy at prestigious universities will be relegated to the dustbins of history; FHI's work is quite plausibly an exception.
To be clear, this is not a knock on philosophy; I'd guess that total funding for academic philosophy in the world is on the order of $1B, i.e. roughly 0.001% of a ~$100T world economy. Most things that are 0.001% of the world economy won't be remembered much 100 years from now. I'd guess philosophy in general punches well above its weight here, but base rates are brutal.
...I do not consider myself to be under the obligation that all negative takes I share about an organization...
Fwiw I think part of the issue I had[1] with your comment is that it came across as much more aggressive and personal, rather than as a critique of an organization. I do think the bar for critiquing individuals ought to be moderately higher than the bar for critiquing organizations, particularly when the critique comes from a different place/capacity[2] than strictly necessary for the conversation[3].
I expect some other pe...
I'd expect there would be some details of some applications that wouldn't be appropriate to share on a public forum, though.
Hopefully grantees can opt in/out as appropriate! They don't need to share everything.
Grantees are obviously welcome to do this. That said, my guess is that this will make the forum less enjoyable/useful for the average reader, rather than more.
This entire thread just demonstrates how confused and useless it is to argue "by definition", or argue about term definitions.
You keep putting words in people's mouths lmao. Nobody said "by definition" before you did. (Control-F for "by definition" if you don't believe me.)
I did not miss your "if." I didn't think it was necessary to go into a semantic dive because I thought the analogy would be relatively clear. Let me try again:
In general, when someone says X group is Y, a reasonable interpretation is that members of X group are more likely t...
The comment you're replying to has somewhat sloppy language and reasoning. Unfortunately your comment managed to be even worse.
If white supremacists are by definition non-respectful to non-white people, and Hanania appears fairly respectful to non-white people, perhaps that allows us to conclude that Hanania does not, in fact, qualify for your definition of "white supremacist"?
This line of reasoning is implausible. If having a single nonwhite person over on a podcast without being rude is strong evidence against white supremacy, trusting nonwhite people en...
(Appreciate the upvote!)
At a high level, I'm of the opinion that we practice better reasoning transparency than ~all EA funding sources outside of global health, e.g. a) I'm responding to your thread here and other people have not, b) (I think) people can have a decent model of what we actually do rather than just an amorphous positive impression, and c) I make an effort to politely deliver messages that most grantmakers are aware of but don't say because they're worried about flak.
It's really not obvious that this is the best use of limited re...
Hmm, I still think your numbers are not internally consistent but I don't know if it's worth getting into.
Really late to respond to this! Just wanted to quickly say that I've been mulling over this question for a while and don't have clear/coherent answers; hope other people (at EAIF and elsewhere) can comment with either more well-thought-out responses or their initial thoughts!
Less importantly,
> In any case, EA Funds' mean amount granted is 76.0 k$, so 52 words/grant would result in 0.684 word/k$ (= 52/76.0), which is lower than the 1.57 word/k$ I estimated above
You previously said:
> The mean length of the write-up of EA Funds' grants is 14.4 words
So I'm a bit confused here.
Also, for both LTFF and EAIF, when I looked at mean amount granted in the past, it was under $40k rather than $76k. I'm not sure how you got $76k. I suspect at least some of the difference comes from our Global Health and Development fund skewing the mean upwards. Ou...
Thanks for engaging as well. I think I disagree with much of the framing of your comment, but I'll try my best to only mention important cruxes.
EDIT: I think there's a database issue, when I try to delete this comment I think it might also delete a comment in a different thread. To be clear I still endorse this comment, just not its location.
This analysis can't be right. The most recent LTFF payout report alone is 13000 words, which covered 327 grantees, or an average of 40 words/grant (in addition to the other information in eg the database).
EDIT: You say:
> ...EA Funds' EA Forum posts only cover a tiny minority of their grants, and the number above would not be affected much if there were a few
(I'm also not sure your list is comprehensive, eg Longview only has 12 writeups on their website and you say they "have write-ups roughly as long as Open Philanthropy," but I'm pretty sure they gave out more grants than that (and have not written about them at all).)
(I work at EA Funds)
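(For readers following the numbers in this thread, here is a minimal sketch of the disputed arithmetic. All inputs are the commenters' own estimates quoted above, not independently verified data.)

```python
# Sanity-checking the information-per-dollar figures quoted in this thread.
# All inputs are the commenters' estimates, not verified data.

ce_words_per_kusd = 81.8        # Vasco's estimate for Charity Entrepreneurship
ea_funds_words_per_kusd = 1.57  # Vasco's estimate for EA Funds

print(ce_words_per_kusd / ea_funds_words_per_kusd)  # ~52.1, the quoted ratio

# Linch's counter-check: the latest LTFF payout report alone is ~13,000 words
# for 327 grants, or ~40 words/grant, vs the 14.4 words/grant quoted earlier.
print(13_000 / 327)  # ~39.8 words/grant
```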
- Charity Entrepreneurship (CE) produces an in-depth report for each organisation it incubates (see CE’s research).
- Effective Altruism Funds has write-ups of 1 sentence for the vast majority of the grants of its 4 funds.
These seem like pretty unreasonable comparisons unless I'm missing something. Like entirely different orders of magnitude. For context, Long-Term Future Fund (which is one of 4 EA Funds) gives out about 200 grants a year.
If I understand your sources correctly, CE produces like 4 in-depth reports a cycle (a...
Hi Linch,
I estimated CE shares 52.1 (= 81.8/1.57) times as much information per amount granted as EA Funds:
Some 2nd edition book titles:
What We Owe to Shrimp
The Crustacean Precipice
The Most Good You Can Do (for Shrimp)
Supershrimp
Shrimping Good Better
Deep Sea Utopia
@Alexander_Berger Happy to explore a win-win-win opportunity! We are already in communications with VCs, but we'd love to get some philanthropic interest as well! Not donating to us will be an astronomical waste.
For $50M, we'd even consider giving you a permanent seat on our Board of Concurrers!
Working as a safety engineer at Lockheed Martin is a great idea! If for no other reason than career capital. Working for a few years as a junior safety engineer at Lockheed Martin can probably be a great skill-building opportunity that will later place you well for working at a high-impact startup like Open Asteroid Impact.
he got (and many white-collar criminals get) significantly less than his culpability level and harm caused would predict.
What do you think is the correct level of punishment for white-collar crimes based on harm? If we only look at first-order effects [1], even stealing $1B is just really bad, consequentially. If we use a simple VSL framework (at a standard value of a statistical life of ~$10M, $1B ≈ 100 statistical lives), it's equivalent to ~100 murders.
But of course, this is very much not how the justice system currently operates, so overall I'm pretty confused.
[1] And not looking at second order effects of his crimes e.g. c...
I think this is a fair compromise between what the prosecutors wanted and what the defense wanted; I don't have an opinion on what's the "correct" level of punishment for this type of crime. My guess is that if I did a first-principles analysis his crime is either the type of thing that gets ~5 years or something that gets life imprisonment without parole, but I'm not confident and also I don't see much value in forming my own independent impression on optimal deterrence theory, given that it's not decision-relevant to me at all.
(speaking for myself)
I had an extended discussion with Scott (and to a lesser extent Rachel and Austin) about the original proposed market mechanism, which iiuc hasn't been changed much since.
I'm not particularly worried about final funders losing out here; if anything, I remember being paternalistically worried that the impact "investors" didn't know what they were getting into, in that they appeared to be taking on more risk without getting a risk premium.
But if the investors, project founders, Manifold, etc, are happy to come to this arrangement with...
If I had more time and energy I'd probably make some more evidenced claims about Meta issues, and how things like SBF, sexual misconduct cases or Nonlinear could have been helped with more of #2 than #1 but don't have the time or energy (I'm also less sure about this claim).
At the risk of saying the obvious, literally every single person in Alameda and FTX's inner circle worked in large for-profit corporations out of college and before Alameda/FTX. (SBF: Jane Street, Gary Wang: Google, Caroline Ellison: Jane Street, Nishad Singh: Facebook/Met...
I believe we changed the text a bunch in August/early September. I think there were a few places we didn't catch the first time, and we made more updates in ~the following month (September). AFAIK we no longer have any (implicit or explicit) commitments for response times anywhere; we only mention predictions and aspirations.
Eg here's the text near the beginning of the application form:
...The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks. However,
I'd be a bit surprised if you could find people on this forum who (still) work at Cohere. Hard to see a stronger signal to interview elsewhere than your CEO explaining in a public memo why they hate you.
but making an internal statement about it to your company seems really odd to me? Like why do your engineers and project managers need to know about your anti-EA opinions to build their products?
I agree it's odd in the sense that most companies don't do it. I see it as an attempt to enforce a certain kind of culture (promoting conformity, discouragement of d...
Thank you for your detailed, well-informed, and clearly written post.
America has about five times more vegetarians than farmers — and many more omnivores who care about farm animals. Yet the farmers wield much more political power.
This probably doesn't address your core points, but the most plausible explanation to me is that vegetarians on average just care a lot less about animal welfare than farmers care about their livelihoods. Most people have many moral goals that compete with one another, as well as with more mundane concerns (which ...
Minor, but: searching on the EA Forum, your post and Quentin Pope's post are the only posts with the exact phrase "no evidence" (EDIT: in the title, which weakens my point significantly, but it still holds). The closest other match on the first page is There is little (good) evidence that aid systematically harms political institutions, which to my eyes seems substantially more caveated.
Over on LessWrong, the phrase is more common, but the top hits are multiple posts that specifically argue against the phrase in the abstract. So overall I would not consider i...
The point is not that 1.5 is a large number for a single variable -- it is -- the point is that 2.7x is a ridiculous number.
2.7x is almost exactly the amount world GDP per capita has changed in the last 30 years. Obviously some individual countries (e.g. China) have had bigger increases in that window.
30 years isn't that long in the grand scheme of things; it's far shorter than most lifetimes.
(EDIT: nvm, this is false; the chart said "current dollars", which I thought meant inflation-adjusted, but it's actually not inflation-adjusted.)
Makes sense! I agree that fast takeoff + short timelines makes my position outlined above much weaker.
e.g. decisions and character traits of the CEO of an AI lab will explain more of the variance in outcomes than decisions and character traits of the US President.
I want to flag that if an AI lab and the US gov't are equally responsible for something, the comparison will still favor the AI lab CEO, as lab CEOs have much greater control over their companies than the President has over the USG.
I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.
Thanks! I don't have much expertise or deep analysis here, just sharing/presenting my own intuitions. Definitely think this is an important question that analysis may shed some light on. If somebody with relevant experience (eg DC insider knowledge, or academic study of US political history) wants to cowork with me to analyze things more deeply, I'd be happy to collab.
I can try, though I haven't pinned down the core cruxes behind my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals. Arguably the biggest deals around by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit at th...
I want to separate out:
My comment was just suggesting that (1) might be superfluous (under some set of assumptions), without having a position on (2).
I broadly agree that making sure gov'ts do the right things is really important. If only I knew what they are! One reasonably safe (though far from definitely robustly safe) action is better education and clearer communications:
> Conversely, we may be underestimating the value of clear conversati...
One perspective that I (and I think many other people in the AI Safety space) have is that AI Safety people's "main job," so to speak, is to safely hand off the reins to our value-aligned, weakly superintelligent AI successors.
This involves:
a) Making sure the transition itself goes smoothly and
b) Making sure that the first few generations of our superhuman AI successors are value-aligned with goals that we broadly endorse.
Importantly, this likely means that the details of the first superhuman AIs we make are critically important. We may not be able to, ...
My default story is one where government actors eventually take an increasing (likely dominant) role in the development of AGI. Some assumptions behind this default story:
1. AGI progress continues to be fairly concentrated among a small number of actors, even as AI becomes percentage points of GDP.
2. Takeoff speeds (from the perspective of the State) are relatively slow.
3. Timelines are moderate to long (after 2030 say).
If what I say is broadly correct, I think this may have some underrated downstream implications. For example, we may be currently...
Additional musing this made me think of: there's also the consideration that the next-best candidate also has a counterfactual, and if they're aligned, they'll probably end up doing something else impactful if they don't take this job.
Agreed! If you or other people want to read more about issues with naive counterfactuals, I briefly discuss them here.
Quick update:
(My own guesses only)
For what it's worth, my guess is that a key reason people aren't giving as much to our GH&D fund is the influx of healthy competitors. Ie, it's not offering much of a differentiated product compared to what people could get elsewhere. I haven't interviewed our donors about this so I can't be sure, but my impression is that when the GH&D fund first launched, there weren't any plausible competitors for donors in the niche of "I want to figure out my own cause prioritization, but I want to defer to external experts to source and p...
(I work for EA Funds)
Re 2: Yes this is correct. It does not include institutional funds. I'm also not sure if it includes non-cash donations from individuals either; I think the dashboard was created back when there was only one way to donate to EA Funds. I'll check.
(Thanks for your hard work on the post btw!)
A relevant reframing here is whether having a PhD provides a high Bayes factor update to being hired. Eg, if people with and without PhDs each have a 2% chance of being hired, and ">50% of successful applicants had a PhD" is true only because most applicants have a PhD, then you should probably not include this. But if 1 in 50 applicants is hired overall, rising to 1 in 10 for applicants with a PhD and falling to 1 in 100 for those without, then the PhD is a massive evidential update even if there is no causal effect.
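(A minimal sketch of this framing, using only the hypothetical numbers from the comment above; the ratio of conditional hire rates is what drives the size of the update.)

```python
# Minimal sketch of the evidential-update framing above. The numbers are
# the hypothetical ones from the comment, not real hiring data.

def hire_rate_ratio(p_hire_given_phd: float, p_hire_given_no_phd: float) -> float:
    """How strongly a PhD shifts an applicant's chance of being hired."""
    return p_hire_given_phd / p_hire_given_no_phd

# Scenario 1: both groups are hired at 2%. ">50% of hires had a PhD" can
# still be true if most applicants have PhDs, but the PhD is uninformative.
print(hire_rate_ratio(0.02, 0.02))  # 1.0 -> probably not worth listing

# Scenario 2: 1 in 10 with a PhD vs 1 in 100 without (1 in 50 overall).
# A 10x difference -> a large evidential update, even with no causal effect.
print(hire_rate_ratio(0.10, 0.01))  # 10.0
```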