This is who I thought would be responsible too, along with the CEO of CEA, to whom they report (and those working for the FTX Future Fund, although their conflicts of interest mean they can't give an unbiased evaluation). But since the FTX catastrophe, the community health team has apparently broadened its mandate to include "epistemic health" and "Special Projects", rather than narrowing it to focus just on catastrophic risks to the community, which would seem to make EA less resilient in one regard than it was before.
Of course I'm not necessarily saying th...
Surely one obvious person with this responsibility was Nick Beckstead, who became President of the FTX Foundation in November 2021. That was the key period when EA partnered with FTX. Beckstead had long experience in grantmaking, credibility, and presumably the incentive and ability to do due diligence. It seems clear to me from these podcasts that MacAskill (and, to a lesser extent, the more junior employees who joined later) deferred to Beckstead.
^In summarising Why They Do It, Will says that most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud". And that may well be true on average, but we know quite a lot about the details of this case, which I believe point us in a different direction.
In this case, the other defendants have said they knew what they were doing was wrong: that they were misappropriating customers' assets and investing them. That weighs somewhat against...
Quote: (and clearly they calculated incorrectly if they did)
I am less confident that, if an amoral person applied cost-benefit analysis properly here, it would lead to "no fraud" as opposed to "safer amounts of fraud." The risk of getting busted from less extreme or less risky fraud would seem considerably less.
Hypothetically, say SBF misused customer funds to buy stocks and bonds, and limited the amount he misused to 40 percent of customer assets. He'd need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to hon...
Great comment.
Will says that most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud".
I agree with your analysis but I think Will also sets up a false dichotomy. One's inability to conceptualize or realize that one's actions are wrong is itself a sign of being a bad apple. To simplify a bit, at one end of the "high integrity to really bad" continuum, you have morally scrupulous people who constantly wond...
There is also the theoretical possibility of disbursing more dollars per hour of staff capacity.
I think you can get closer to dissolving this problem by considering why you're assigning credit. Often, we're assigning some kind of finite financial rewards.
Imagine that a group of n people have all jointly created $1 of value in the world, and that if any one of them did not participate, there would only be $0 of value. Clearly, we can't give $1 to all of them, because then we would be paying $n to reward an event that only created $1 of value, which is inefficient. If, however, only the first guy (i=1) is an "agent" that responds to incenti...
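To make the inefficiency concrete, here's a minimal sketch in Python (hypothetical numbers; the per-person split at the end is my illustration, not a claim from the comment):

```python
# n people jointly create $1 of value; each is essential, so each person's
# counterfactual impact is the full $1.
n = 5
total_value = 1.0

# Naive approach: pay everyone their counterfactual impact.
counterfactual_credit = [total_value] * n
print(sum(counterfactual_credit))  # 5.0 -- we'd pay $5 to reward $1 of value

# One alternative: divide the credit so total payouts equal the value created
# (with symmetric, equally essential contributors this is also the Shapley split).
shared_credit = [total_value / n] * n
print(sum(shared_credit))  # 1.0
```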
Hi Polkashell,
There are indeed questionable people in EA, as in all communities. EA may be worse in some ways, because of its utilitarian bent, and because many of the best EAs have left the community in the last couple of years.
I think it's common in EA for people to:
What can make such events more traumatic is if EA has become the source of their livelihood, meaning, f...
It may not be worth becoming a research lead under many worldviews.
I'm with you on almost all of your essay, regarding the advantages of a PhD, and the need for more research leads in AIS, but I would raise another kind of issue - there are not very many career options for a research lead in AIS at present. After a PhD, you could pursue:
Thanks for engaging with my criticism in a positive way.
Regarding how timely the data ought to be, I don't think live data is necessary at all - it would be sufficient in my view to post updated information every year or two.
I don't think "applied in the last 30 days" is quite the right reference class, however, because by definition the averages will ignore all applications that have been waiting for over one month. I think the most useful kind of statistics would:
I had a similar experience with a 4-month wait (uncalibrated grant decision timelines on the website) and unresponsiveness to email with LTFF, and I know a couple of people who had similar problems. I also found it pretty "disrespectful".
It's hard to understand a) why they wouldn't list the empirical grant timelines on their website, and b) why those timelines would have to be so long.
I think it could be good to put these numbers on our site. I liked your past suggestion of having live data, though it's a bit technically challenging to implement - but the obvious MVP (as you point out) is to have a bunch of stats on our site. I'll make a note to add some stats (though maintaining this kind of information can be quite costly, so I don't want to commit to doing this).
In the meantime, here are a few numbers that I quickly put together (across all of our funds).
Grant decision turnaround times (mean, median):
I had a similar experience in spring 2023, with an application to EAIF. The fundamental issue was the very slow process from application to decision. This was made worse by poor communication.
There is an "EA Hotel", which is decently-sized, very intensely EA, and very cheap.
Occasionally it makes sense for people to accept very low cost-of-living situations. But a person's impact is usually a lot higher than their salary. Suppose that a person's salary is x, their impact 10x, and their impact is 1.1 times higher when they live in SF, due to proximity to funders and AI companies. Then you would have to cut costs by 90% to make it worthwhile to live elsewhere. Otherwise, you would essentially be stepping over dollars to pick up dimes.
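A minimal sketch of that break-even arithmetic, assuming the person's cost to funders is roughly their salary x and that "impact elsewhere" is the SF figure divided by 1.1:

```python
# Hypothetical numbers from the example above (salary x normalised to 1).
salary = 1.0                        # x: rough cost of supporting the person
impact_sf = 10.0                    # impact of 10x when living in SF
impact_elsewhere = impact_sf / 1.1  # impact is 1.1x higher in SF

impact_lost_by_moving = impact_sf - impact_elsewhere  # ~0.91x
# Moving is only worthwhile if the cost savings exceed the impact lost,
# i.e. costs would need to fall by roughly 90% of the salary.
required_cost_cut = impact_lost_by_moving / salary
print(f"{required_cost_cut:.0%}")  # ~91%
```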
Of course there are some theoretical reasons for growing fast. But theory only gets you so far on this issue. Rather, this question depends on whether growing EA is promising currently (I lean against) compared to other projects one could grow. Even if EA looks like the right thing to build, you need to talk to people who have seen EA grow and contract at various rates over the last 15 years, to understand which modes of growth have been healthier, and have contributed to real gains in capacity, rather than just an increase in raw numbers. In my experience, one ...
Yes, they were involved in the first, small iteration of EAG, but their contributions were small compared to the human capital that they consumed. More importantly, they were a high-demand group that caused a lot of people serious psychological damage. For many, it has taken years to recover a sense of normality. They staged a partial takeover of some major EA institutions. They also gaslit the EA community about what they were doing, which confused and distracted decent-sized subsections of the EA community for years.
I watched The Master a couple of mont...
Interesting point, but why do these people think that climate change is going to cause likely extinction? Again, it's because their thinking is politics-first. Their side of politics is warning of a likely "climate catastrophe", so they have to make that catastrophe as bad as possible - existential.
I think that disagreement about the size of the risks is part of the equation. But it's missing what is, for at least a few of the prominent critics, the main element - people like Timnit, Kate Crawford, and Meredith Whittaker are bought into leftie ideologies focused on things like "bias", "prejudice", and "disproportionate disadvantage". So they see AI as primarily an instrument of oppression. The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.
Obviously this is not wha...
I disagree because I think these people would be in favour of action to mitigate x-risk from extreme climate change and nuclear war.
I guess you're right, but even so I'd ask:
I'd read "offboarding the projects which currently sit under the Effective Ventures umbrella. This means CEA, 80,000 Hours, Giving What We Can and other EV-sponsored projects will transition to being independent legal entities" as "all of them" but now I'm less sure.
Hmm, OK. Back when I met Ilya, around 2018, he was radiating excitement that his next idea would create AGI, and didn't seem sensitive to safety worries. I also thought it was "common knowledge" that his interest in safety increased substantially between 2018 and 2022, which is why I was unsurprised to see him in charge of superalignment.
Re Elon-Zillis, all I'm saying is that it looked to Sam like the seat would belong to someone loyal to him at the time the seat was created.
You may well be right about D'Angelo and the others.
Nitpicks:
Re 2: It's plausible, but I'm not sure that this is true. Points against:
Causal Foundations is probably 4-8 full-timers, depending on how you count the small-to-medium slices of time from various PhD students. Several of our 2023 outputs seem comparably important to the deception paper:
2 - I'm thinking more of the "community of people concerned about AI safety" than EA.
1,3,4 - I agree there's uncertainty, disagreement and nuance, but I think if NYT's (summarised) or Nathan's version of events is correct (and they do seem to me to make more sense than other existing accounts) then the board look somewhat like "good guys", albeit ones that overplayed their hand, whereas Sam looks somewhat "bad", and I'd bet that over time, more reasonable people will come around to such a view.
It's a disappointing outcome - it currently seems that OpenAI is no more tied to its nonprofit goals than before. A wedge has been driven between the AI safety community and OpenAI staff, and to an extent, Silicon Valley generally.
But in this fiasco, we at least were the good guys! The OpenAI CEO shouldn't control its nonprofit board, or compromise the independence of its members, who were doing broadly the right thing by trying to do research and perform oversight. We have much to learn.
Hey Ryan :)
I definitely agree that this situation is disappointing, that there is a wedge between the AI safety community and Silicon Valley mainstream, and that we have much to learn.
However, I would push back on the phrasing “we are at least the good guys” for several reasons. Apologies if this seems nitpicky or uncharitable 😅 it just caught my attention and I hoped to start a dialogue.
Yeah I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected applicants and fired staff much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside.
It looks like, on net, people disagree with my take in the original post.
I just disagreed with the OP because it's a false dichotomy; we could just agree with the true things that activists believe, and not the false ones, and not go based on vibes. We desire to believe that mech-interp is mere safety-washing iff it is, and so on.
On the meta-level, anonymously sharing negative psychoanalyses of people you're debating seems like very poor behaviour.
Now, I'm a huge fan of anonymity. Sometimes one must criticise a vindictive organisation or a political orthodoxy, and anonymity is needed to avoid unjust social consequences.
In other cases, anonymity is inessential: one simply wants to debate in an aggressive style while avoiding the just social consequences of doing so. When anonymous users misbehave, we think worse of anonymous users in general. If people always write anonymously, the...
I'm sorry, but it's not an "overconfident criticism" to accuse FTX of investing stolen money, when this is something that 2-3 of the leaders of FTX have already pled guilty to doing.
This interaction is interesting, but I wasn't aware of it (I've only reread a fraction of Hutch's messages since knowing his identity) so to the extent that your hypothesis involves me having had some psychological reaction to it, it's not credible.
Moreover, these psychoanalyses don't ring true. I'm in a good headspace, giving FTX hardly any attention. Of course, I am not...
Creditors are expected by Manifold markets to receive only 40c on each dollar that was invested on the platform (I didn't notice this info in the post when I previously viewed it). And we do know why there is money missing: FTX stole it and invested it in their hedge fund, which gambled it away.
There's also a fairly robust market for (at least larger) real-money claims against FTX, with prices around 35-40 cents on the dollar. I'd expect recovery to be somewhat higher in nominal dollars, because it may take some time for distributions to occur and that is presumably priced into the market price. (Anyone with a risk appetite for buying large FTX claims probably thinks their expected rate of return on their next-best investment choice is fairly high, implying a fairly high discount rate is being applied here.)
The annual budgets of Bellingcat and Propublica are in the single-digit millions. (The latter has had negative experiences with EA donations, but is still relevant for sizing up the space.)
It's hard to say, but the International Federation of Journalists has 600k members, so maybe there are 6M journalists worldwide, of which maybe 10% are investigative journalists (600k IJs). If they are paid around $50k/year, that's $30B spent on IJ.
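A minimal sketch of that Fermi estimate (the multipliers are the comment's guesses, not established figures; the 10x scale-up from IFJ membership to all journalists is an assumption):

```python
# Fermi estimate of worldwide spending on investigative journalism (rough guesses).
ifj_members = 600_000                     # International Federation of Journalists
journalists_worldwide = ifj_members * 10  # guess: IFJ covers ~10% of all journalists
investigative_share = 0.10                # guess: ~10% do investigative work
avg_salary_usd = 50_000                   # guess: ~$50k/year

investigative_journalists = journalists_worldwide * investigative_share  # 600k
annual_spend = investigative_journalists * avg_salary_usd                # $30B
print(f"~{investigative_journalists:,.0f} IJs, ~${annual_spend / 1e9:.0f}B/year")
```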
Surely from browsing the internet and newspapers, it's clear that less than 1% (<60k) of journalists are "investigative". And I bet that half of the impact comes from an identifiable 200-2k of them, such as former Pulitzer Prize winners, Propublica, Bellingcat, and a few other venues.
I hope this is just cash and not a strategic partnership, because if it is, then it would mean there is now a third major company in the AGI race.
It seems pretty clear that Amazon's intent is to have state of the art AI backing Alexa. That alone would not be particularly concerning. The problem would be if Amazon has some leverage to force Anthropic to accelerate capabilities research and neglect safety - which is certainly possible, but it seems like Anthropic wants to avoid it by keeping Amazon as a minority investor and maintaining the existing governance structure.
I interpret it as broadly the latter based on the further statements in the Twitter thread, though I could well be wrong.
There are also big incentive gradients within longtermism:
(Disclosure: I decided to work in biorisk and not AI)
this is something that ends up miles away from 'winding down EA', or EA being 'not a movement'.
To be clear, winding down EA is something I was arguing we shouldn't be doing.
I feel like we're closer to agreement here, but on reflection the details of your plan here don't sum up to 'end EA as a movement' at all.
At a certain point it becomes semantic, but I guess readers can decide, when you put together:
JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”? Ryan, do you have a sense of what that would concretely look like?
Well I'm not sure it makes sense to try to fit all EAs into one professional community that is labelled as such, since we often have quite different jobs and work in quite different fields. My model would be a patchwork of overlapping fields, and a professional network that often extends between them.
It could make sense for there to be a community focused on "effe...
Roughly yes, with some differences:
Basically, why do mid-sized companies usually not spawn cults and socially harm their members the way movements like EA and the animal welfare community sometimes do? I think it's because movements by their nature try to motivate members tow...
Scott's already said what I believe
Yes, I had this exact quote in mind when I said in Sect 5 that "Religions can withstand persecution by totalitarian governments, and some feel just about as strongly about EA."
People would believe them, want to co-ordinate on it. Then they'd want to organise to help make their own ideas more efficient and boom, we're just back to an EA movement all over again.
One of my main theses is supposed to be that people can and should coordinate their activities without acting like a movement.
...I still want concern about red
Thanks for explaining your viewpoints Ryan. I think I have a better understanding, but I'm still not sure I grok it intuitively. Let me try to repeat what I think is your view here (with the help of looking at some of your other quick takes)
...note for readers, this is my understanding of Ryan's thoughts, not what he's said
1 > The EA movement was directly/significantly causally responsible for the FTX disaster, despite being at a small scale (e.g. "there are only ~10k effective altruists")
2 > We should believe that without reform, similar catastro
A lot of the comments seem fixated on, and wanting to object to, the idea of "reputational collapse" in a way that I find hard to relate to. This wasn't a particularly load-bearing part of my argument; it was only used to argue that the idea that EA is a particularly promising way to get people interested in x-risk has become less plausible. Which was only one of three reasons not to promote EA in order to promote x-risk. Which was only one of many strategic suggestions.
That said, I find it hard not to notice that the reputation of, and enthusiasm for EA ha...
I think there was perhaps some miscommunication around your use and my interpretation of "collapse". To me it implies that something is at an unrecoverable stage, like a "collapsed" building or support for a presidential candidate "collapsing" in a primary race. In your pinned quick take you posit that Effective Altruism as a brand may be damaged to an unrecoverable extent, which makes me feel this is the right reading of your post, or at least a justified interpretation.
***
I actually agree with a lot of your claims in your reply. For exampl...
Firstly, these people currently know the least, and many of them will hear more in the future, such as when SBF's trial happens, when the Michael Lewis book is released, or when some of the nine film/video adaptations come out.
I think this is an underappreciated point. If you look at Google Trends for Theranos, the public interest didn't really get going until a few years after the fraud was exposed, when popular podcasts, documentaries and TV series started dropping. I think the FTX story is about as juicy as that one. I could easily see a film about FTX becoming the next "th...
1 - it's because Sam was publicly claiming that the trading platform and the traders were completely firewalled from one another, and had no special info, as would normally (e.g. in the US) be legally required to make trading fair, but which is impossible if the CEOs are dating
2 - I'm not objecting to the spending. It was clear at the time that he was promoting an image of frugality that wasn't accurate. One example here, but there are many more.
3 - A lot of different Alameda people warned some people at the time of the split. For a variety of reasons, I beli...
1 - Oh I see. So who knew that Sam and Caroline continued to date while claiming that FTX and Alameda were completely separate?
2 - You link to a video of a non-EA saying that Sam drives a corolla and also has a shot of his very expensive-looking apartment...what about this is misleading or inaccurate? What did you expect the EAs you have in mind to 'disclose' - that FTX itself wasn't frugal? Was anyone claiming it was? Would anyone expect it to have been? Could you share some of your many actual examples?
3 - (I don't think you've addressed anything I said ...
I agree the primary role of EAs here was as victims, and that presumably only a couple of EAs intentionally conspired with Sam. But I wouldn't write it off as just social naivete; I think there was also some negligence in how we boosted him, e.g.:
- Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
- Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.
FWIW, these examples feel hindsight-bias-y to me. They have the flavour of "we now know this information was significant, so of course at the time people should have known this and done something about it". If I put myself in the shoes of the "some EAs" in these examples, it's not clea...
I think it's worth emphasizing that if "naive consequentialism" just means sometimes thinking the ends justify the means in a particular case, and being wrong about it, then that extends into the history of scandals far far beyond groups that have ever been motivated by explicitly utilitarian technical moral theory.
Oh, I definitely agree that the guilt narrative has some truth to it too, and that the final position must be some mix of the two, with somewhere between a 10/90 and 90/10 split. But I'd definitely been neglecting the 'we got used' narrative, and had assumed others were too (though aprilsun's comment suggests I might be incorrect about that).
I'd add that for different questions related to the future of EA, the different narratives change their mix. For example, the 'we got used' narrative is at its most relevant if asking about 'all EAs except Sam'. But if...
There are various reasons to believe that SBF's presence in EA increased the chance that FTX would happen and thrive:
If someone told me about a temporary 5x increase in volume that understandably messed things up, I would think they were talking about a couple month timeframe, not 8 months to 2 years. Surely there’s some point at which you step back and realise you need to adapt your systems to scale with demand? E.g. automating deadline notifications.
It’s also not clear to me that either supply or demand for funding will go back to pre-LTFF levels, given the increased interest in AI safety from both potential donors and potential recipients.
I think the core of the issue is that there's unfortunately somewhat of a hierarchy of needs for a grant-making org. That you're operating at size, and in diverse areas, with always-open applications, and using part-time staff is impressive, but people will still judge you harshly if you're struggling to perform your basic service.
Regarding these basics, we seem to agree that an OpenPhil alternative should accurately represent its evaluation timelines on its website, and should give an updated timeline when the stated grant decision time passes (at least o...
That you're struggling with the basics is what leads me to say that LTFF doesn't "have it together".
Just FWIW, this feels kind of unfair, given that like, if our grant volume didn't increase by like 5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of "the basics".
Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge the LTFF performance on the basis of such an unprecedented context. My guess is things will settle into some healthy r...
The level of malfunctioning that is going on here seems severe:
The website still says LTFF will respond in 8 weeks (my emphasis)
Oof. Apologies, I thought we'd fixed that everywhere already. Will try to fix asap.
but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.
Yeah I think this is very fair. I do think the funding ecosystem is pretty broken in a bunch of ways and of course we're a part of that; I'm reminded of Luke Muehlhauser's old comment about how MIRI's operations got a lot better after he read Nonpr...
I agree. Early-career EAs are more likely to need to switch projects, less likely to merit multi-year funding, and have - on average - less need for job-security. Single-year funding seems OK for those cases.
For people and orgs with significant track records, however, it seems hard to justify.
Yes, because when you are an at-will employee, the chance that you will still have income in n years tends to be higher than if you had to apply to renew a contract, and you don't need to think about that application. People are typically angry if asked to reapply for their own job, because it implies that their employer might want to terminate them.
I'm focused on how the best altruistic workers should be treated, and if you think that giving them job insecurity would create good incentives, I don't agree. We need the best altruistic workers to be rewarded not just better than the less productive altruists, but also better than those pursuing non-altruistic endeavours. It would be hard to achieve this if they do not have job security.
Yes, that's who I meant when I said "those working for the FTX Future Fund"