All of RyanCarey's Comments + Replies

Yes, that's who I meant when I said "those working for the FTX Future Fund"

This is who I thought would be responsible too, along with the CEO of CEA, to whom they report (and those working for the FTX Future Fund, although their conflicts of interest mean they can't give an unbiased evaluation). But since the FTX catastrophe, the community health team has apparently broadened their mandate to include "epistemic health" and "Special Projects", rather than narrowing it to focus just on catastrophic risks to the community, which would seem to make EA less resilient in one regard than it was before.

Of course I'm not necessarily saying th... (read more)

Surely one obvious person with this responsibility was Nick Beckstead, who became President of the FTX Foundation in November 2021. That was the key period where EA partnered with FTX. Beckstead had long experience in grantmaking, credibility, and presumably both the incentive and the ability to do due diligence. Seems clear to me from these podcasts that MacAskill (and to a lesser extent the more junior employees who joined later) deferred to Beckstead.

In summarising Why They Do It, Will says that most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud". And that may well be true on average, but we know quite a lot about the details of this case, which I believe point us in a different direction.

In this case, the other defendants have said they knew what they were doing was wrong - that they were misappropriating customers' assets and investing them. That weighs somewhat against... (read more)

Quote: (and clearly they calculated incorrectly if they did)

I am less confident that, if an amoral person applied cost-benefit analysis properly here, it would lead to "no fraud" as opposed to "safer amounts of fraud." The risk of getting busted from less extreme or less risky fraud would seem considerably less.

Hypothetically, say SBF misused customer funds to buy stocks and bonds, and limited the amount he misused to 40 percent of customer assets. He'd need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to hon... (read more)

Great comment. 

Will says that most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud".

I agree with your analysis, but I think Will also sets up a false dichotomy. One's inability to conceptualize or realize that one's actions are wrong is itself a sign of being a bad apple. To simplify a bit, at one end of the "high integrity to really bad" continuum, you have morally scrupulous people who constantly wond... (read more)

There is also the theoretical possibility of disbursing a larger number of $ per hour of staff capacity.

I think you can get closer to dissolving this problem by considering why you're assigning credit. Often, we're assigning some kind of finite financial rewards. 

Imagine that a group of n people have all jointly created $1 of value in the world, and that if any one of them did not participate, there would only be $0 of value. Clearly, we can't give $1 to all of them, because then we would be paying $n to reward an event that only created $1 of value, which is inefficient. If, however, only the first guy (i=1) is an "agent" that responds to incenti... (read more)
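
A minimal formalisation of the inefficiency described above, covering only the part stated before the cut-off (the i=1 "agent" case is not reconstructed here): since removing any one participant drops the value from $1 to $0, each participant's counterfactual contribution is $1, so paying everyone that full amount costs

\[
n \times \$1 = \$n \gg \$1 \quad \text{(the total value actually created), for } n > 1 .
\]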

Answer by RyanCarey, Apr 08, 2024

Hi Polkashell,

There are indeed questionable people in EA, as in all communities. EA may be worse in some ways, because of its utilitarian bent, and because many of the best EAs have left the community in the last couple of years.

I think it's common in EA for people to:

  • have high hopes for EA, and have them be dashed when their preferred project is defunded, when a scandal breaks, and so on.
  • burn out, after they give a lot of effort to a project. 

What can make such events more traumatic is if EA has become the source of their livelihood, meaning, f... (read more)

Julia tells me "I would say I listed it as a possible project rather than calling for it exactly."]

It actually was not just neutrally listed as a "possible" project, because it was the fourth bullet point under "Projects and programs we’d like to see" here.

It may not be worth becoming a research lead under many worldviews. 

I'm with you on almost all of your essay, regarding the advantages of a PhD, and the need for more research leads in AIS, but I would raise another kind of issue - there are not very many career options for a research lead in AIS at present. After a PhD, you could pursue:

  1. Big RFPs. But most RFPs from large funders have a narrow focus area - currently it tends to be prosaic ML, safety, and mechanistic interpretability. And having to submit to grantmakers' research direction somewhat def
... (read more)
8
L Rudolf L
1mo
(A) Call this "Request For Researchers" (RFR). OpenPhil has tried a more general version of this in the form of the Century Fellowship, but they discontinued this. That in turn is a Thiel Fellowship clone, like several other programs (e.g. Magnificent Grants). The early years of the Thiel Fellowship show that this can work, but I think it's hard to do well, and it does not seem like OpenPhil wants to keep trying.

(B) I think it would be great for some people to get support for multiple years. PhDs work like this, and good research can be hard to do over a series of short few-month grants. But also the long durations just do make them pretty high-stakes bets, and you need to select hard not just on research skill but also the character traits that mean people don't need external incentives.

(C) I think "agenda-agnostic" and "high quality" might be hard to combine. It seems like there are three main ways to select good people: rely on competence signals (e.g. lots of cited papers, works at a selective organisation), rely on more-or-less standardised tests (e.g. a typical programming interview, SATs), or rely on inside-view judgements of what's good in some domain. New researchers are hard to assess by the first, I don't think there's a cheap programming-interview-but-for-research-in-general that spots research talent at high rates, and therefore it seems you have to rely a bunch on the third. And this is very correlated with agendas; a researcher in domain X will be good at judging ideas in that domain, but less so in others.

The style of this that I'd find most promising is:

  1. Someone with a good overview of the field (e.g. at OpenPhil) picks a few "department chairs", each with some agenda/topic.
  2. Each department chair picks a few research leads who they think have promising work/ideas in the direction of their expertise.
  3. These research leads then get collaborators/money/ops/compute through the department.

I think this would be better than a grab-bag o
6
AdamGleave
1mo
This is an important point. There's a huge demand for research leads in general, but the people hiring & funding often have pretty narrow interests. If your agenda is legibly exciting to them, then you're in a great position. Otherwise, there can be very little support for more exploratory work. And I want to emphasize the legible part here: you can do something that's great & would be exciting to people if they understood it, but novel research is often time-consuming to understand, and these are time-constrained people who will not want to invest that time unless they have a strong signal it's promising.

A lot of this problem is downstream of very limited grantmaker time in AI safety. I expect this to improve in the near future, but not enough to fully solve the problem.

I do like the idea of a more research agenda agnostic research organization. I'm striving to have FAR be more open-minded, but we can't support everything so are still pretty opinionated to prioritize agendas that we're most excited by & which are a good fit for our research style (engineering-intensive empirical work). I'd like to see another org in this space set-up to support a broader range of agendas, and am happy to advise people who'd like to set something like this up.

Thanks for engaging with my criticism in a positive way.

Regarding how timely the data ought to be, I don't think live data is necessary at all - it would be sufficient in my view to post updated information every year or two.

I don't think "applied in the last 30 days" is quite the right reference class, however, because by-definition, the averages will ignore all applications that have been waiting for over one month. I think the most useful kind of statistics would:

  1. Restrict to applications from n to n+m months ago, where n>=3
  2. Make a note of what percent
... (read more)
2
calebp
1mo
Oh, I thought you might have suggested the live thing before, my mistake. Maybe I should have just given the 90-day figure above. (That approach seems reasonable to me)

I had a similar experience with LTFF: a four-month wait (against uncalibrated grant decision timelines on the website) and unresponsiveness to email, and I know a couple of people who had similar problems. I also found it pretty "disrespectful".

It's hard to understand a) why they wouldn't list the empirical grant timelines on their website, and b) why the timelines would have to be so long.

I think it could be good to put these numbers on our site. I liked your past suggestion of having live data, though it's a bit technically challenging to implement - but the obvious MVP (as you point out) is to have a bunch of stats on our site. I'll make a note to add some stats (though maintaining this kind of information can be quite costly, so I don't want to commit to doing this).

In the meantime, here are a few numbers that I quickly put together (across all of our funds).

Grant decision turnaround times (mean, median):

  • applied in the last 30 days = 14 d
... (read more)

I had a similar experience in spring 2023, with an application to EAIF. The fundamental issue was the very slow process from application to decision. This was made worse by poor communication.

There is an "EA Hotel", which is decently-sized, very intensely EA, and very cheap.

Occasionally it makes sense for people to accept very low cost-of-living situations. But a person's impact is usually a lot higher than their salary. Suppose that a person's salary is x, their impact 10x, and their impact is 1.1 times higher when they live in SF, due to proximity to funders and AI companies. Then you would have to cut costs by 90% to make it worthwhile to live elsewhere. Otherwise, you would essentially be stepping over dollars to pick up dimes.
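
For concreteness, here is one way to reconstruct the arithmetic; the 10x impact and 1.1 multiplier come from the paragraph above, while reading 10x as the person's impact while in SF and treating living costs as roughly equal to the salary x are my added assumptions:

\[
\text{impact forgone by leaving SF} \;\approx\; 10x - \frac{10x}{1.1} \;\approx\; 0.9x ,
\]

so relocating only pays off if it saves at least about 0.9x, i.e. cuts living costs of roughly x by about 90%.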

3
Chris Leong
4mo
One advantage of the EA hotel, compared to a grant, for example, is that selection effects for it are surprisingly strong. This can help resolve some of the challenges of evaluation.

Of course there are some theoretical reasons for growing fast. But theory only gets you so far, on this issue. Rather, this question depends on whether growing EA is promising currently (I lean against) compared to other projects one could grow. Even if EA looks like the right thing to build, you need to talk to people who have seen EA grow and contract at various rates over the last 15 years, to understand which modes of growth have been healthier, and have contributed to gained capacity, rather than just an increase in raw numbers. In my experience, one ... (read more)

Yes, they were involved in the first, small, iteration of EAG, but their contributions were small compared to the human capital that they consumed. More importantly, they were a high-demand group that caused a lot of people serious psychological damage. For many, it has taken years to recover a sense of normality. They staged a partial takeover of some major EA institutions. They also gaslit the EA community about what they were doing, which confused and distracted decent-sized subsections of the EA community for years.

I watched The Master a couple of mont... (read more)

4
Habryka
4mo
I agree with a broad gist of this comment, but I think this specific sentence is heavily underselling Leverage's involvement. They ran the first two EA Summits, and also were heavily involved with the first two full EA Globals (which I was officially in charge of, so I would know).

Interesting point, but why do these people think that climate change is going to cause likely extinction? Again, it's because their thinking is politics-first. Their side of politics is warning of a likely "climate catastrophe", so they have to make that catastrophe as bad as possible - existential.

4
Daniel_Friedrich
4mo
That seems like an extremely unnatural thought process. Climate change is the perfect analogy - in these circles, it's salient both as a tool of oppression and an x-risk. I think far more selection of attitudes happens through paying attention to more extreme predictions, rather than through thinking / communicating strategically. Also, I'd guess people who spread these messages most consciously imagine a systemic collapse, rather than a literal extinction. As people don't tend to think about longtermistic consequences, the distinction doesn't seem that meaningful.

AI x-risk is more weird and terrifying, and it goes against the heuristics that "technological progress is good", "people have always feared new technologies they didn't understand" and "the powerful draw attention away from their power". Some people for whom AI x-risk is hard to accept happen to overlap with AI ethics. My guess is that the proportion is similar in the general population - it's just that some people in AI ethics feel particularly strong & confident about these heuristics.

Btw I think climate change could pose an x-risk in the broad sense (incl. 2nd-order effects & astronomic waste), just one that we're very likely to solve (i.e. the tail risks, energy depletion, biodiversity decline or the social effects would have to surprise us).

I think that disagreement about the size of the risks is part of the equation. But it's missing what is, for at least a few of the prominent critics, the main element - people like Timnit, Kate Crawford, and Meredith Whittaker are bought into leftie ideologies focused on things like "bias", "prejudice", and "disproportionate disadvantage". So they see AI as primarily an instrument of oppression. The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.

Obviously this is not wha... (read more)

I disagree because I think these people would be in favour of action to mitigate x-risk from extreme climate change and nuclear war.

I guess you're right, but even so I'd ask:

  • Is it 11 new orgs, or will some of them stick together (perhaps with CEA) when they leave? 
  • What about other orgs not on the website, like GovAI and Owain's team? 
  • Separately, are any teams going to leave CEA?

Related to (1) is the question: which sponsored projects are definitely being spun out?

I'd read "offboarding the projects which currently sit under the Effective Ventures umbrella. This means CEA, 80,000 Hours, Giving What We Can and other EV-sponsored projects will transition to being independent legal entities" as "all of them" but now I'm less sure.

Hmm, OK. Back when I met Ilya, about 2018, he was radiating excitement that his next idea would create AGI, and didn't seem sensitive to safety worries. I also thought it was "common knowledge" that his interest in safety increased substantially between 2018-22, and that's why I was unsurprised to see him in charge of superalignment.

Re Elon-Zillis, all I'm saying is that it looked to Sam like the seat would belong to someone loyal to him at the time the seat was created.

You may well be right about D'Angelo and the others.

5
gwern
5mo
Hm, maybe it was common knowledge in some areas? I just always took him for being concerned. There's not really any contradiction between being excited about your short-term work and worried about long-term risks. Fooling yourself about your current idea is an important skill for a researcher. (You ever hear the joke about Geoff Hinton? He suddenly solves how the brain works, at long last, and euphorically tells his daughter; she replies: "Oh Dad - not again!")
  1. The main thing that I doubt is that Sam knew at the time that he was gifting the board to doomers. Ilya was a loyalist and non-doomer when appointed. Elon was I guess some mix of doomer and loyalist at the start. Given how AIS worries generally increased in SV circles over time, more likely than not some of D'Angelo, Hoffman, and Hurd moved toward the "doomer" pole over time.
3
gwern
5mo
Ilya has always been a doomer AFAICT, he was just loyal to Altman personally, who recruited him to OA. (I can tell you that when I spent a few hours chatting with him in... 2017 or something? a very long time ago, anyway - I don't remember him dismissing the dangers or being pollyannaish.) 'Superalignment' didn't come out of nowhere or surprise anyone about Ilya being in charge. Elon was... not loyal to Altman but appeared content to largely leave oversight of OA to Altman until he had one of his characteristic mood changes, got frustrated and tried to take over. In any case, he surely counts as a doomer by the time Zilis is being added to the board as his proxy. D'Angelo likewise seems to have consistently, in his few public quotes, been concerned about the danger. A lot of people have indeed moved towards the 'doomer' pole but much of that has been timelines: AI doom in 2060 looks and feels a lot different from AI doom in 2027.

Nitpicks:

  1. I think Dario and others would've also been involved in setting up the corporate structure
  2. Sam never gave the "doomer" faction a near majority. That only happened because 2-3 "non-doomers" left and Ilya flipped.
2
gwern
5mo
  1. I haven't seen any coverage of the double structure or Anthropic exit which suggests that Amodei helped think up or write the double structure. Certainly, the language they use around the Anthropic public benefit corporation indicates they all think, at least post-exit, that the OA double structure was a terrible idea (eg. see the end of this article).
  2. You don't know that. They seem to have often had near majorities, rather than being a token 1 or 2 board members. By most standards, Karnofsky and Sutskever are 'doomers', and Zilis is likely a 'doomer' too as that is the whole premise of Neuralink and she was a Musk representative (which is why she was pushed out after Musk turned on OA publicly and began active hostilities like breaking contracts with OA). Hoffman's views are hard to characterize, but he doesn't seem to clearly come down as an anti-doomer or to be an Altman loyalist. (Which would be a good enough reason for Altman to push him out; and for a charismatic leader, neutralizing a co-founder is always useful, for the same reason no one would sell life insurance to an Old Bolshevik in Stalinist Russia.) If I look at the best timeline of the board composition I've seen thus far, at a number of times post-2018, it looks like there was a 'near majority' or even outright majority. For example, 2020-12-31 has either a tie or an outright majority for either side depending on how you assume Sutskever & Hoffman (Sutskever?/Zilis/Karnofsky/D'Angelo/McCauley vs Hoffman? vs Altman/Brockman), and with the 2021-12-31 list the Altman faction needs to pick up every possible vote to match the existing 5 'EA' faction (Zilis/Karnofsky/D'Angelo/McCauley/Toner vs Hurd?/Sutskever?/Hoffman? vs Brockman/Altman), although this has to be wrong because the board maxes out at 7 according to the bylaws, so it's unclear how exactly the plausible majorities evolved over time.
Linch
5mo

Re 2: It's plausible, but I'm not sure that this is true. Points against:

  1. Reid Hoffman was reported as being specifically pushed out by Altman: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-privately-unhappy-about-leaving-openais-board 
  2. Will Hurd is plausibly quite concerned about AI Risk[1]. It's hard to know for sure because his campaign website is framed in the language of US-China competition (and has unfortunate-by-my-lights suggestions like "Equip the Military and Intelligence Community with Advanced AI"), but I think a lot of the pr
... (read more)

Causal Foundations is probably 4-8 full-timers, depending on how you count the small-to-medium slices of time from various PhD students. Several of our 2023 outputs seem comparably important to the deception paper: 

  • Towards Causal Foundations of Safe AGI, The Alignment Forum - the summary of everything we're doing.
  • Characterising Decision Theories with Mechanised Causal Graphs, arXiv - the most formal treatment yet of TDT and UDT, together with CDT and EDT in a shared framework.
  • Human Control: Definitions and Algorithms, UAI - a paper arguing that corrig
... (read more)
2
Gavin
5mo
excellent, thanks, will edit

What if you just pushed it back one month - to late June?

4
Eli_Nathan
5mo
Open to it for 2025, though looks like at least Oxford will still have exams then (exams often stretch until 1–2 weeks after the end of term). But early July might work and we can look into what dates we can get when we start booking.

2 - I'm thinking more of the "community of people concerned about AI safety" than EA.

1,3,4- I agree there's uncertainty, disagreement and nuance, but I think if NYT's (summarised) or Nathan's version of events is correct (and they do seem to me to make more sense than other existing accounts) then the board look somewhat like "good guys", albeit ones that overplayed their hand, whereas Sam looks somewhat "bad", and I'd bet that over time, more reasonable people will come around to such a view.

4
Brennan.W
5mo
2- makes sense!

1,3,4- Thanks for sharing (the NYT summary isn’t working for me unfortunately) but I see your reasoning here that the intention and/or direction of the attempted ouster may have been “good”. However, I believe the actions themselves represent a very poor approach to governance and demonstrate a very narrow focus that clearly didn’t appropriately consider many of the key stakeholders involved. Even assuming the best intentions, in my perspective, when a person has been placed on the board of such a consequential organization and is explicitly tasked with helping to ensure effective governance, the degree to which this situation was handled poorly is enough for me to come away believing that the “bad” of their approach outweighs the potential “good” of their intentions.

Unfortunately it seems likely that this entire situation will wind up having a back-fire effect from what was (we assume) intended by creating a significant amount of negative publicity for and sentiment towards the AI safety community (and EA). At the very least, there is now a new (all male 🤔 but that’s a whole other thread to expand upon) board with members that seem much less likely to be concerned about safety. And now Sam and the less cautious cohort within the company seem to have a significant amount of momentum and good will behind them internally which could embolden them along less cautious paths.

To bring it back to the “good guy bad guy” framing. Maybe I could buy that the board members were “good guys” as concerned humans, but “bad guys” as board members. I’m sure there are many people on this forum who could define my attempted points much more clearly in specific philosophical terms 😅 but I hope the general ideas came through coherently enough to add some value to the thread. Would love to hear your thoughts and any counter points or alternative perspectives!

It's a disappointing outcome - it currently seems that OpenAI is no more tied to its nonprofit goals than before. A wedge has been driven between the AI safety community and OpenAI staff, and to an extent, Silicon Valley generally.

But in this fiasco, we at least were the good guys! The OpenAI CEO shouldn't control its nonprofit board, or compromise the independence of its members, who were doing broadly the right thing by trying to do research and perform oversight. We have much to learn.

Hey Ryan :)

I definitely agree that this situation is disappointing, that there is a wedge between the AI safety community and Silicon Valley mainstream, and that we have much to learn.

However, I would push back on the phrasing “we are at least the good guys” for several reasons. Apologies if this seems nit picky or uncharitable 😅 just caught my attention and I hoped to start a dialogue

  1. The statement suggests we have a much clearer picture of the situation and factors at play than I believe anyone currently has (as of 22 Nov 2023)
  2. The “we” phrasing seem
... (read more)
6
Jason
5mo
Good points in the second paragraph. While it's common in both nonprofits and for-profits to have executives on the board, it seems like a really bad idea here. Anyone with a massive financial interest in the for-profit taking an aggressive approach should not be on the non-profit's board. 

Yeah I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected & fired applicants much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside.

5
Ben_West
5mo
Hmm, this doesn't seem obvious to me – if you care more about people's success then you are more willing to give offers to people who don't have a robust resume etc., which is going to lead to a lower hit rate than usual.

It looks like, on net, people disagree with my take in the original post. 

I just disagreed with the OP because it's a false dichotomy; we could just agree with the true things that activists believe, and not the false ones, and not go based on vibes. We desire to believe that mech-interp is mere safety-washing iff it is, and so on.

On the meta-level, anonymously sharing negative psychoanalyses of people you're debating seems like very poor behaviour. 

Now, I'm a huge fan of anonymity. Sometimes, one must criticise some vindictive organisation or political orthodoxy, and anonymity is needed to avoid unjust social consequences.

In other cases, anonymity is inessential. One wants to debate in an aggressive style, while avoiding the just social consequences of doing so. When anonymous users misbehave, we think worse of anonymous users in general. If people always write anonymously, the... (read more)

I'm sorry, but it's not an "overconfident criticism" to accuse FTX of investing stolen money, when this is something that 2-3 of the leaders of FTX have already pled guilty to doing.

This interaction is interesting, but I wasn't aware of it (I've only reread a fraction of Hutch's messages since knowing his identity) so to the extent that your hypothesis involves me having had some psychological reaction to it, it's not credible. 

Moreover, these psychoanalyses don't ring true. I'm in a good headspace, giving FTX hardly any attention. Of course, I am not... (read more)

Creditors are expected by Manifold markets to receive only 40c on each dollar that was invested on the platform (I didn't notice this info in the post when I previously viewed it). And we do know why there is money missing: FTX stole it and invested it in their hedge fund, which gambled it away and lost it.

There's also a fairly robust market for (at least larger) real-money claims against FTX, with prices around 35-40 cents on the dollar. I'd expect recovery to be somewhat higher in nominal dollars, because it may take some time for distributions to occur and that is presumably priced into the market price. (Anyone with a risk appetite for buying large FTX claims probably thinks their expected rate of return on their next-best investment choice is fairly high, implying a fairly high discount rate is being applied here.)

9
Nicky Pochinkov
7mo
I've added Manifold markets and more details from the book, which is not to be fully trusted at face value. Though they spent/lost a lot of money and misused funds in Alameda, they had huge amounts of money, so the book figures suggest that might not account for all the customer funds that were lost (if one writes off VC investment and similar).

It is a bit disheartening to see that some readers will take the book at face value.

The annual budgets of Bellingcat and Propublica are in the single-digit millions. (The latter has had negative experiences with EA donations, but is still relevant for sizing up the space.)

It's hard to say, but the International Federation of Journalists has 600k members, so maybe there exist 6M journalists worldwide, of which maybe 10% are investigative journalists (600k IJs). If they are paid like $50k/year, that's $30B used for IJ.

Surely from browsing the internet and newspapers, it's clear that less than 1% (<60k) of journalists are "investigative". And I bet that half of the impact comes from an identifiable 200-2k of them, such as former Pulitzer Prize winners, Propublica, Bellingcat, and a few other venues.

4
Jaime Sevilla
7mo
Yeah in hindsight that is probably about right. It'd be interesting to look at some of these high profile journalists, and see if they are well supported to do impactful journalism or if they have to spend a lot of time on chasing trends to afford working on IJ pieces.

Anthropic is small compared with Google and OpenAI+Microsoft.

5
calebp
7mo
Ah, I thought you were implying that Anthropic weren't already racing when you were actually pointing at Amazon (a major company) joining the race. I agree that Anthropic is not a "major" company. It seems pretty overdetermined to me that Amazon and Apple will either join the race by acquiring a company or by reconfiguring teams/hiring. I'm a bit confused about whether I want it to happen now, or later. I'd guess later.

I would, however, not downplay their talent density.

I hope this is just cash and not a strategic partnership, because if it is, then it would mean there is now a third major company in the AGI race.

It seems pretty clear that Amazon's intent is to have state of the art AI backing Alexa. That alone would not be particularly concerning. The problem would be if Amazon has some leverage to force Anthropic to accelerate capabilities research and neglect safety - which is certainly possible, but it seems like Anthropic wants to avoid it by keeping Amazon as a minority investor and maintaining the existing governance structure.

6
calebp
7mo
Um, conditional on any AI labs being in a race, in what way are Anthropic not already racing?

I interpret it as broadly the latter based on the further statements in the Twitter thread, though I could well be wrong.

There are also big incentive gradients within longtermism:

  • To work on AI experiments rather than AI theory (higher salary, better locations, more job security)
  • To work for a grantmaker rather than a grantee (for job security), and
  • To work for an AI capabilities company rather than outside (higher salary)
5
Vaidehi Agarwalla
7mo
These are all great points. I was planning to add this into the main post, but I don't think it ended up in the final draft - so thanks for raising this! 
9
NickLaing
7mo
Wow very well put. This is the one that scares me the most out of these three, and I think there could be more exploring to be done as to first, how strong an incentive this might be, and then how that incentive can change people's view on their job and AI: "To work for an AI capabilities company rather than outside (higher salary)"

I know it's a side note not directly related to the original question, but I would be interested to see data comparing:

  1. Safety researchers' pdoom who work for AI capabilities companies vs. those who work for independent safety orgs (this might have been done already)
  2. What proportion of AI safety people who started working for capabilities orgs have moved on over time (I would call it defected) to working more on capabilities than alignment.
  • To work in AI instead of other areas (higher salary, topic is shiny)

(Disclosure: I decided to work in biorisk and not AI)

this is something that ends up miles away from 'winding down EA', or EA being 'not a movement'.

To be clear, winding down EA is something I was arguing we shouldn't be doing.

I feel like we're closer to agreement here, but on reflection the details of your plan here don't sum up to 'end EA as a movement' at all.

At a certain point it becomes semantic, but I guess readers can decide, when you put together:

... (read more)

JWS, do you think EA could work as a professional network of “impact analysts” or “impact engineers” rather than as a “movement”? Ryan, do you have a sense of what that would concretely look like?

Well I'm not sure it makes sense to try to fit all EAs into one professional community that is labelled as such, since we often have quite different jobs and work in quite different fields. My model would be a patchwork of overlapping fields, and a professional network that often extends between them.

It could make sense for there to be a community focused on "effe... (read more)

Roughly yes, with some differences:

  • I think the disasters would scale sublinearly
  • I'm also worried about Leverage and various other cults and disasters, not just FTX.
  • I wouldn't think of the separate communities as "movements" per se. Rather, each cause area would have a professional network of nonprofits and companies.

Basically, why do mid-sized companies usually not spawn cults and socially harm their members the way movements like EA and the animal welfare community sometimes do? I think it's because movements by their nature try to motivate members tow... (read more)

Scott's already said what I believe

Yes, I had this exact quote in mind when I said in Sect 5 that "Religions can withstand persecution by totalitarian governments, and some feel just about as strongly about EA."

People would believe them, want to co-ordinate on it. Then they'd want to organise to help make their own ideas more efficient and boom, we're just back to an EA movement all over again.

One of my main theses is supposed to be that people can and should coordinate their activities without acting like a movement.

I still want concern about red

... (read more)
JWS
7mo

Thanks for explaining your viewpoints Ryan. I think I have a better understanding, but I'm still not sure I grok it intuitively. Let me try to repeat what I think is your view here (with the help of looking at some of your other quick takes)

note for readers, this is my understanding of Ryan's thoughts, not what he's said

 1 > The EA movement was directly/significantly causally responsible for the FTX disaster, despite being at a small scale (e.g. "there are only ~10k effective altruists")

2 > We should believe that without reform, similar catastro

... (read more)

A lot of the comments seem fixated on, and wanting to object to, the idea of "reputational collapse" in a way that I find hard to relate to. This wasn't a particularly load-bearing part of my argument; it was only used to argue that the idea that EA is a particularly promising way to get people interested in x-risk has become less plausible. Which was only one of three reasons not to promote EA in order to promote x-risk. Which was only one of many strategic suggestions.

That said, I find it hard not to notice that the reputation of, and enthusiasm for EA ha... (read more)

JWS
7mo

I think there was perhaps some miscommunication around your use and my interpretation of "collapse". To me it implies that something is at an unrecoverable stage, like a "collapsed" building or support for a presidential candidate "collapsing" in a primary race. In your pinned quick take you posit that Effective Altruism as a brand may be damaged to an unrecoverable extent, which makes me feel this is the right reading of your post, or at least a justified interpretation.

***

I actually agree with a lot of your claims in your reply. For exampl... (read more)

Firstly, these people currently know least, and many of them will hear more in the future, such as when SBF's trial happens, the Michael Lewis book is released, or some of the nine film/video adaptations come.

I think this is an underappreciated point. If you look at Google Trends for Theranos, the public interest didn't really get going until a few years after the fraud was exposed, when popular podcasts, documentaries and tv series started dropping. I think the FTX story is about as juicy as that one. I could easily see a film about FTX becoming the next "th... (read more)

1 - it's because Sam was publicly claiming that the trading platform and the traders were completely firewalled from one another, and had no special info, as would normally (e.g. in the US) be legally required to make trading fair, but which is impossible if the CEOs are dating

2 - I'm not objecting to the spending. It was clear at the time that he was promoting an image of frugality that wasn't accurate. One example here, but there are many more.

3 - A lot of different Alameda people warned some people at the time of the split. For a variety of reasons, I beli... (read more)

1 - Oh I see. So who knew that Sam and Caroline continued to date while claiming that FTX and Alameda were completely separate?

2 - You link to a video of a non-EA saying that Sam drives a Corolla and also has a shot of his very expensive-looking apartment...what about this is misleading or inaccurate? What did you expect the EAs you have in mind to 'disclose' - that FTX itself wasn't frugal? Was anyone claiming it was? Would anyone expect it to have been? Could you share some of your many actual examples?

3 - (I don't think you've addressed anything I said ... (read more)

See here. Among people who know EA as well as I do, many - perhaps 25% - are about as pessimistic as me, and some of the remainder have conflicts of interest, or have left.

7
aprilsun
7mo
Interesting. I think I know EA as well as you do and know many EAs who know EA as well as you do and as I said, you're the person most committed to this narrative that I could think of even before you made this post (no doubt there are others more pessimistic I haven't come across, but I'd be very surprised if it was 25%+). I also can't think of any who have left, but perhaps our respective circles are relevantly skewed in some way (and I'm very curious about who has!). Point taken, though, that many of us have 'conflicts of interest,' if by that you mean 'it would be better for us if EA were completely innocent and thriving' (although as others have pointed out, we are an unusually guilt-prone community).

I agree the primary role of EAs here was as victims, and that presumably only a couple of EAs intentionally conspired with Sam. But I wouldn't write it off as just social naivete; I think there was also some negligence in how we boosted him, e.g.:

  • Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
  • Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.
  • Despite warnings from early-Alameda peo
... (read more)
  • Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
  • Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.

FWIW, these examples feel hindsight-bias-y to me. They have the flavour of "we now know this information was significant, so of course at the time people should have known this and done something about it". If I put myself in the shoes of the "some EAs" in these examples, it's not clea... (read more)

I think it's worth emphasizing that if "naive consequentialism" just means sometimes thinking the ends justify the means in a particular case, and being wrong about it, then that extends into the history of scandals far, far beyond groups that have ever been motivated by explicitly utilitarian technical moral theory.

-3
LukeDing
7mo
I think this is a very good summary.
7
aprilsun
7mo
As you've linked me to this comment elsewhere, I'll respond.

  • I'm not sure why EAs who knew about Caroline and Sam dating would have felt the need to 'disclose' this? (Unless they already suspected the fraud but I don't think I've seen anyone claim that anyone did?)
  • Sam and FTX seem about as frugal as I'd expect for an insanely profitable business and an intention to give away the vast majority of tens of billions of dollars, too frugal if anything (although I actually think my disagreeing with you here is a point in favour of your broader argument -- the more Sam splurged on luxuries, the less it looks like he was motivated by any kind of altruism)
  • What financial and other support did FTX receive after the early Alameda split by people who were 'warned'? (There may well be a substantial amount, I just don't think I've heard of any.) I'll note, though, that it's easy to see as 'warnings' now what at the time could have just looked like a messy divorce.
  • Admittedly I don't know much about how these things work, but yet again this looks like hindsight bias to me. Would this have been a priority for you if you had no idea what was coming? If the firewalling was delayed for whatever reason, would you have refused to award grants until it had been? Perhaps. But I don't think it's obvious. Also wouldn't they still be vulnerable to clawbacks in any case?
  • Again, I just don't think I know anything about this. In fact I don't think I knew Sam was under government investigation before November. I'd be curious if you had details to share, especially regarding how serious the allegations were at the time and which meetings he was invited to.

Oh, I definitely agree that the guilt narrative has some truth to it too, and that the final position must be some mix of the two, with somewhere between a 10/90 and 90/10 split. But I'd definitely been neglecting the 'we got used' narrative, and had assumed others were too (though aprilsun's comment suggests I might be incorrect about that).

I'd add that for different questions related to the future of EA, the different narratives change their mix. For example, the 'we got used' narrative is at its most relevant if asking about 'all EAs except Sam'. But if... (read more)

There are various reasons to believe that SBF's presence in EA increased the chance that FTX would happen and thrive:

  • Only ~10k/10B people are in EA, while they represent ~1/10 of history's worst frauds, giving a risk ratio of about 10^5:1, or 10^7:1, if you focus on an early cohort of EAs (a worked version of this ratio appears after this list). This should give an immediate suspicion that P(FTX thrives | SBF in EA)/P(FTX thrives | SBF not in EA) is very large indeed.
  • Sam decided to do ETG due to conversations with EA leaders. 
  • EA gave Alameda a large majority of its funding and talent.
  • EA gave FTX at least 1-2 of the other leaders of the company.
  • ETG was a big part of Sam's public image and source of his reputation.
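
A rough reconstruction of the ratio in the first bullet: the ~10k EA and ~10B world-population figures and the ~1/10 share of the worst frauds are taken from the bullet, while the ~100-person early-cohort size is my own assumption, used only to recover the 10^7 figure.

\[
\frac{P(\text{in EA} \mid \text{worst fraud})}{P(\text{in EA})}
\approx \frac{1/10}{10^{4}/10^{10}}
= \frac{10^{-1}}{10^{-6}}
= 10^{5},
\qquad
\frac{1/10}{10^{2}/10^{10}} = 10^{7} \ \text{for an early cohort of} \sim 100 .
\]
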
2
Michael_PJ
7mo
This seems wildly off to me - I think the strength of the conclusion here should make you doubt the reasoning! I think that the scale of the fraud seems like a random variable uncorrelated with our behaviour as a community. It seems to me like the relevant outcome is "producing someone able and willing to run a company-level fraud"; given that, whether or not it's a big one or a small one seems like it just adds (an enormous amount of) noise.  How many people-able-and-willing-to-run-a-company-level-fraud does the world produce? I'm not sure, but I would say it has to be at least a dozen per year in finance alone, and more in crypto. So far EA has got 1. Is that above the base rate? Hard to say, especially if you control for the community's demographics (socioeconomic class, education, etc.).

Ok, it makes sense that a temporary 5x in volume can really mess you up.

If someone told me about a temporary 5x increase in volume that understandably messed things up, I would think they were talking about a couple month timeframe, not 8 months to 2 years. Surely there’s some point at which you step back and realise you need to adapt your systems to scale with demand? E.g. automating deadline notifications.

It’s also not clear to me that either supply or demand for funding will go back to pre-FTX-collapse levels, given the increased interest in AI safety from both potential donors and potential recipients.

I think the core of the issue is that there's unfortunately somewhat of a hierarchy of needs from a grantmaking org. That you're operating at size, and in diverse areas, with always-open applications, and using part-time staff is impressive, but people will still judge you harshly if you're struggling to perform your basic service.

Regarding these basics, we seem to agree that an OpenPhil alternative should accurately represent their evaluation timelines on the website, and should give an updated timeline when the stated grant decision time passes (at least o... (read more)

That you're struggling with the basics is what leads me to say that LTFF doesn't "have it together".

Just FWIW, this feels kind of unfair, given that like, if our grant volume didn't increase by like 5x over the past 1-2 years (and especially the last 8 months), we would probably be totally rocking it in terms of "the basics". 

Like, yeah, the funding ecosystem is still recovering from a major shock, and it feels kind of unfair to judge the LTFF performance on the basis of such an unprecedented context. My guess is things will settle into some healthy r... (read more)

The level of malfunctioning that is going on here seems severe:

  • The two month average presumably includes a lot of easy decisions, not just hard ones.
  • The website still says LTFF will respond in 8 weeks (my emphasis)
  • The website says they may not respond within an applicant's preferred deadline. But what it should actually say is that LTFF also may not respond within their own self-imposed deadline.
  • And then the website should indicate when, statistically, it does actually tend to give a response.
  • Moreover, my understanding is that weeks after these self-impose
... (read more)
7
abergal
7mo
Hey Ryan: - Thanks for flagging that the EA Funds form still says that the funds will definitely get back in 8 weeks; I think that's real bad. - I agree that it would be good to have a comprehensive plan-- personally, I think that if the LTFF fails to hire additional FT staff in the next few months (in particular, a FT chair), the fund should switch back to a round-based application system. But it's ultimately not my call.
4
NunoSempere
7mo
This blogpost of mine: Quick thoughts on Manifund’s application to Open Philanthropy might be of interest here.

The website still says LTFF will respond in 8 weeks (my emphasis)

Oof. Apologies, I thought we'd fixed that everywhere already. Will try to fix asap.

but there's a part of me that thinks if you can't even get it together on a basic level, then to find that OpenPhil alternative, we should be looking elsewhere.

Yeah I think this is very fair. I do think the funding ecosystem is pretty broken in a bunch of ways and of course we're a part of that; I'm reminded of Luke Muehlhauser's old comment about how MIRI's operations got a lot better after he read Nonpr... (read more)

I agree. Early-career EAs are more likely to need to switch projects, less likely to merit multi-year funding, and have - on average - less need for job-security. Single-year funding seems OK for those cases.

For people and orgs with significant track records, however, it seems hard to justify.

Yes, because when you are an at-will employee, the chance that you will still have income in n years tends to be higher than if you had to apply to renew a contract, and you don't need to think about that application. People are typically angry if asked to reapply for their own job, because it implies that their employer might want to terminate them.

I'm focused on how the best altruistic workers should be treated, and if you think that giving them job insecurity would create good incentives, I don't agree. We need the best altruistic workers to be rewarded not just better than the less productive altruists, but also better than those pursuing non-altruistic endeavours. It would be hard to achieve this if they do not have job security.

5
Austin
8mo
I'm sympathetic to treating good altruistic workers well; I generally advocate for a norm of much higher salaries than is typically provided. I don't think job insecurity per se is what I'm after, but rather allowing funders to fund the best altruistic workers next year, rather than being locked into their current allocations for 3 years.

The default in the for-profit sector in the US is not multi-year guaranteed contracts but rather at-will employment, where workers may leave a job or be fired for basically any reason. It may seem harsh in comparison to norms in other countries (eg the EU or Japan), but I also believe it leads to more productive allocation of workers to jobs.

Remember that effective altruism exists not to serve its workers, but rather to effectively help those in need (people in developing countries, suffering animals, people in the future!) There's instrumental benefits in treating EA workers well in terms of morale, fairness, and ability to recruit, but keep in mind the tradeoffs of less effective allocation schemes.