All of RyanCarey's Comments + Replies

Yeah, the cost of cheap shared housing is something like $20k/yr in 2026 dollars, whereas your impact would be worth a lot more than that, either because you are making hundreds of thousands of post-tax dollars per year, or because you're forgoing those potential earnings to do important research or activism. Van-living is usually penny-wise, but pound-foolish.

Is this very different from $100k/yr of GDP/cap adjusted for purchasing power differences?

1
Lydia Nottingham
I think so. There are parts of my current life that were not accessible to a $100k/yr-earner in 2019, for example, [LMs on tap](https://lydianottingham.substack.com/p/vitae-per-person-vpp-a-new-way-of/comment/197131937). Coastal elites also enjoy a large amount of climate-controlled space: I think this is a big deal, and not available to a $100k/yr-earner in Texas. 'Vitae per person' is apparently similar to [Rawls' notion of primary goods](https://substack.com/home/post/p-183800417), with a focus on material ones.

Any updates on this, now that a couple of years have passed? Based on the website, I guess you decided not to hire a chair in the end? Also, was there only $750k granted in 2025?

It would do your readers a service if you chose a title that explains what your post is arguing.

1
Will_Davison
Could you suggest an alternative title? Maybe you're thinking something like: "we should set up a pilot charity to formalise the promise - predicted to save 915 QALYs per dollar"?

Another relevant comment:

Overall a nice system card for Opus 4! It's a strange choice to contract Apollo to evaluate sabotage, get feedback that "[early snapshot] schemes and deceives at such high rates that we advise against deploying this model....", and then not re-contract Apollo for final evals.

I think we should keep our eye on the most important role that online EA (and adjacent) platforms have played historically. Over the last 20 years, there have always been one or two key locations for otherwise isolated EAs and utilitarians to discover like minds online, get feedback on their ideas, and then become researchers or real-life contributors. Originally, it was (various incarnations of) Felicifia, and then the EA Forum. The rationalist community benefited in similar ways from the extropians mailing list, SL4, Overcoming Bias and LessWrong. The sheer ... (read more)

4
Sarah Cheng 🔸
I appreciate this comment a lot, thank you! I broadly agree with this! :) I personally care a lot about keeping the Forum community alive. Although I ultimately care about impact, I think it's possible that we can do so while also spending our marginal resources on other projects (such as EA Funds). Yeah, I mentioned in my post that I don't know how likely the Forum is to turn into a bulletin board by default. I have the feeling that it was naturally moving in that direction last year, and I think that without some external push to make EA more salient, that's just what would happen to an online discussion platform by default. For example, you can see this kind of thing happening pretty often in Slack workspaces. I think if you lose enough authors, you eventually hit a threshold where the platform no longer feels like a community of people (i.e. people view it as "the place where orgs post updates"), and that change in perception heavily discourages people from discussing things. I think we need to be attentive to how visitors view "what the EA Forum is about".

Nice, I'll look forward to reading this!

How is EAIF performing in the value proposition that it provides to funding applicants, such as the speed of decisions, responsiveness to applicants' questions, and applicants' reported experiences? Historically your sister fund was pretty bad to applicants, and some were really turned off by the experience.

9
Jamie_Harris
I don't think our capacity has been as stretched as LTFF's. We get fewer applications. I'd guess the median application wait time is around 4 weeks. It feels somewhat uninformative to share a mean, because sometimes there are substantial delays due to:

  • applicants themselves being unresponsive to our emails, or saying they need several weeks to send us some follow-up info
  • logistical complexities on some specific applications.

I haven't looked these things up though; let me know if you're keen for a more precise answer. As for applicant questions: likewise, I personally don't get many of these. I answer them when I do, even if sometimes more briefly than I'd like to be able to. I haven't asked Harri about his experience though. (I'm intrigued to see these things described as "the value proposition to funding applicants". I would have seen the value proposition more as 'funding for EA infrastructure projects, even for small amounts', with these other elements more as secondary parts of the 'experience'. Of course, this still matters.)

I guess a lot of these faulty ideas come from the role of morality as a system of rules for putting up boundaries around acceptable behaviour, and for apportioning blameworthiness more so than praiseworthiness. Similar to how the legal system usually gives individuals freedom so long as they're not doing harm, our moral system mostly speaks to harms (rather than benefits) from actions (rather than inaction). By extension, the basis of the badness of these harms has to be a violation of "rights" (things that people deserve not to have done to them). Insofar ... (read more)

Yeah, insofar as we accept biased norms of that sort, it's really important to recognize that they are merely heuristics. Reifying (or, as Scott Alexander calls it, "crystallizing") such heuristics into foundational moral principles risks a lot of harm.

(This is one of the themes I'm hoping to hammer home to philosophers in my next book. Besides deontic constraints, risk aversion offers another nice example.)

Another relevant dimension is that the forum (and Groups) are the most targeted to EAs, so they will be most sensitive to fluctuations in the size of the EA community, whereas 80k will be the least sensitive, and Events will be somewhere in-between.

Given this and the sharp decline in applications to events, it seems like the issue is really a decrease in the size of, or enthusiasm in, the EA community, rather than anything specific to the forum.

I'm sure I have some thoughts, but to begin with, it would be helpful for understanding what's going on if the dashboard told us how 2024 went for the events and groups teams.

9
Will Aldred
Agree. Although, while the Events dashboard isn't up to date, I notice that the EAG team released a table in a post last month which does have complete 2024 data: EAG applicant numbers were down 42% from 2022 to 2024,[1] a comparable decline to that in monthly Forum users (down 35% from November 2022's peak to November 2024).[2] To me, this is evidence that the dropping numbers are driven by changes in the larger zeitgeist rather than by any particular thing the Events or Online team is doing (as @Jason surmises in his comment above).
  1. ^ (3460/5988) x 100% = 58% (2 s.f.)
  2. ^ (3561/5509) x 100% = 65% (2 s.f.) Note that, in a surprising (to me) coincidence, the absolute numbers of annual EAG applicants and monthly EA Forum users are very similar.

Worth noting that although high EA salaries increase the risk to EA organisations, they reduce risk to EA individuals, because people can spend less than their full salary, thereby saving for a time when EA funding dries up.

(In general, the salaries which I will work for in EA go up with funding uncertainty, not down, because indeed it means future funding is more likely to dry up, and I have to pay the high costs of a career transition, or self-fund for many years)

I think the core issue is that the lottery wins you government dollars, which you can't actually spend freely. Government dollars are simply worth less, to Pablo, than Pablo's personal dollars. One way to see this is that if Pablo could spend the government dollars on the other moonshot opportunities, then it would be fine that he's losing his own money.

So we should stipulate that after calculating abstract dollar values, you have to convert them, by some exchange rate, to personal dollars. The exchange rate simply depends on how much better the opportunit... (read more)

Also Nick Bostrom, Nick Beckstead, Will MacAskill, Ben Todd, some of whom have been lifelong academics.

Probably different factors in different cases.

It sounds like you would prefer the rationalist community prevent its members from taking taboo views on social issues? But in my view, an important characteristic of the rationalist community, perhaps its most fundamental, is that it's a place where people can re-evaluate the common wisdom, with a measure of independence from societal pressure. If you want the rationalist community (or any community) to maintain that character, you need to support the right of people to express views that you regard as repulsive, not just the views that you like. This could be different if the views were an incitement to violence, but proposing a hypothesis for socio-economic differences isn't that.

dr_s
10
2
3

Well, it's complicated. I think in theory these things should be open to discussion (see my point on moral philosophy). But now suppose that hypothetically there was incontrovertible scientific evidence that Group A is less moral or capable than Group B. We should still absolutely champion the view that wanting to ship Group A into camps and exterminate them is barbaric and vile, and that instead the humane and ethical thing to do is help Group A compensate for their issues and flourish at the best of their capabilities (after all, we generally hold this v... (read more)

In my view, what's going on is largely these two things:

[rationalists etc] are well to the left of the median citizens, but they are to the right of [typical journalists and academics]

Of course. And:

biodeterminism... these groups are very, very right-wing on... eugenics, biological race and gender differences etc. - but on everything else they are centre-left.

Yes, ACX readers do believe that genes influence a lot of life outcomes, and favour reproductive technologies like embryo selection, which are right-coded views. These views are actually not restr... (read more)

3
dr_s
The problem is that this is really a short step away from "certain races have lower IQ and it's kinda all there is to it to explain their socio-economic status", and I've seen many people take that step. Roko and Hanania, whom I mentioned explicitly, absolutely do so publicly and repeatedly.
6
David Mathers🔸
The race stuff is much more right-coded than some of the other genetic/disability stuff. 

This was just a "where do you rate yourself from 1-10" type question, but you can see more of the questions and data here.

1
dr_s
So the thing with self-identification is that I think it might suffer from a certain skew. I think there's fundamentally a bit of a stigma on identifying as right wing, and especially extreme right wing. Lots of middle class, educated people who perceive themselves as rational, empathetic and science-minded are more likely to want to perceive themselves as left wing, because that's what left wing identity used to prominently be until a bit over 15 years ago (which is when most of us probably had our formative youth political experiences). So someone might resist the label even if in practice they are on the right half of the Overton window. It must be noted, though, that in some cases this might just be the result of the Overton window moving around them - and I definitely have the feeling that we now have a more polarized distribution anyway.

I think the trend you describe is mostly an issue with "progressives", i.e. "leftists" rather than an issue for all those left of center. And the rationalists don't actually lean right in my experience. They average more like anti-woke and centrist. The distribution in the 2024 ACX survey below has perhaps a bit more centre-left and a bit less centre and centre-right than the rationalists at large but not by much, in my estimation.

1
dr_s
Fair! I think it's hard to fully slot rationalists politically because, well, the mix of high decoupling and generally esoteric interests makes for some unique combinations that don't fit neatly in the standard spectrum. I'd definitely qualify myself as centre-left, with some more leftist-y views on some aspects of economics, but definitely bothered by the current progressive vibe, which I hesitate to call "woke", since that term is abused to hell, but am also not sure what else to call it, since they obstinately refuse to give themselves a political label or even recognise that they constitute a noteworthy distinct political phenomenon at all. How was this survey done, by the way? Self ID or some kind of scored test?

There is one caveat: if someone acting on behalf of an EA organisation truly did something wrong which contributed to this fraud, then obviously we need to investigate that. But I am not aware of any evidence to suggest that happened.

I tend to think EA did. Back in September 2023, I argued:

EA contributed to a vast financial fraud, through its:

  • People. SBF was the best-known EA, and one of the earliest 1%. FTX’s leadership was mostly EAs. FTXFF was overwhelmingly run by EAs, including EA’s main leader, and another intellectual leader of EA. 
  • R
... (read more)

Their suggestions are relatively abstract, but you might consider reading Katja and Robin on the general topic of whether to focus on contributing money vs other things when you're young.

Yes, that's who I meant when I said "those working for the FTX Future Fund"

This is who I thought would be responsible too, along with the CEO of CEA, whom they report to (and those working for the FTX Future Fund, although their conflictedness means they can't give an unbiased evaluation). But since the FTX catastrophe, the community health team has apparently broadened their mandate to include "epistemic health" and "Special Projects", rather than narrowing it to focus just on catastrophic risks to the community, which would seem to make EA less resilient in one regard than it was before.

Of course I'm not necessarily saying th... (read more)

Surely one obvious person with this responsibility was Nick Beckstead, who became President of the FTX Foundation in November 2021. That was the key period where EA partnered with FTX. Beckstead had long experience in grantmaking, credibility, and presumably incentive/ability to do due diligence. Seems clear to me from these podcasts that MacAskill (and to a lesser extent the more junior employees who joined later) deferred to Beckstead.

In summarising Why They Do It, Will says that, usually, most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud". And that may well be true on average, but we know quite a lot about the details of this case, which I believe point us in a different direction.

In this case, the other defendants have said they knew what they were doing was wrong: that they were misappropriating customers' assets and investing them. That weighs somewhat against ... (read more)

(This comment is basically just voicing agreement with points raised in Ryan’s and David’s comments above.) 

One of the things that stood out to me about the episode was the argument[1] that working on good governance and working on reducing the influence of dangerous actors are mutually exclusive strategies, and that the former is much more tractable and important than the latter. 

Most “good governance” research to date also seems to focus on system-level interventions,[2] while interventions aimed at reducing the impacts of individuals... (read more)

9
random
Interesting discussion. In the interview, MacAskill mentioned Madoff as an example of the idea that it’s not about "bad apples." [1] Giving Madoff as an example in this context doesn’t make sense to me. But maybe MacAskill was meaning to say that it's not about "bad apples that are identified as such before/at the time of their fraud"? That would be the only interpretation that makes sense to me, because Madoff sounds like he really was a "bad apple" based on the info in Why They Do It. Here's what Soltes says about Madoff in Why They Do It (quoted from the audiobook, with emphasis added):   1. ^ Here’s a quote from MacAskill (emphasis added): 

Quote: (and clearly they calculated incorrectly if they did)

I am less confident that, if an amoral person applied cost-benefit analysis properly here, it would lead to "no fraud" as opposed to "safer amounts of fraud." The risk of getting busted from less extreme or less risky fraud would seem considerably less.

Hypothetically, say SBF misused customer funds to buy stocks and bonds, and limited the amount he misused to 40 percent of customer assets. He'd need a catastrophic stock/bond market crash, plus almost all depositors wanting out, to be unable to hon... (read more)

Great comment. 

Will says that, usually, most fraudsters aren't just "bad apples" or doing "cost-benefit analysis" on their risk of being punished. Rather, they fail to "conceptualise what they're doing as fraud".

I agree with your analysis but I think Will also sets up a false dichotomy. One's inability to conceptualize or realize that one's actions are wrong is itself a sign of being a bad apple. To simplify a bit, on the one end of the spectrum of the "high integrity to really bad continuum", you have morally scrupulous people who constantly wond... (read more)

There is also the theoretical possibility of disbursing a larger number of $ per hour of staff capacity.

I think you can get closer to dissolving this problem by considering why you're assigning credit. Often, we're assigning some kind of finite financial reward.

Imagine that a group of n people have all jointly created $1 of value in the world, and that if any one of them did not participate, there would only be $0 of value. Clearly, we can't give $1 to all of them, because then we would be paying $n to reward an event that only created $1 of value, which is inefficient. If, however, only the first guy (i=1) is an "agent" that responds to incenti... (read more)
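A minimal sketch of the arithmetic above, in Python (the function names and the n = 5 example are illustrative, not from the original comment): n symmetric contributors jointly create $1 and each is individually necessary, so paying everyone their full counterfactual impact costs $n in total, while a budget-balanced equal split pays out only the $1 that was actually created.

```python
# Illustrative sketch: total payouts under two credit-assignment rules for
# n symmetric contributors who jointly create $1 of value and are each
# individually necessary (value drops to $0 if any one of them is removed).

def counterfactual_total(n: int, joint_value: float = 1.0) -> float:
    """Total paid if each contributor receives their full counterfactual impact."""
    # Each person's counterfactual impact equals the whole joint value,
    # since the value falls to $0 without them.
    return n * joint_value

def equal_split_total(n: int, joint_value: float = 1.0) -> float:
    """Total paid if the joint value is split equally among contributors."""
    return n * (joint_value / n)

if __name__ == "__main__":
    n = 5
    print(counterfactual_total(n))  # 5.0 -- pays $5 to reward $1 of value
    print(equal_split_total(n))     # 1.0 -- budget-balanced
```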

Answer by RyanCarey
17
6
1

Hi Polkashell,

There are indeed questionable people in EA, as in all communities. EA may be worse in some ways, because of its utilitarian bent, and because many of the best EAs have left the community in the last couple of years.

I think it's common in EA for people to:

  • have high hopes in EA, and have them be dashed, when their preferred project is defunded, when a scandal breaks, and so on. 
  • burn out, after they give a lot of effort to a project. 

What can make such events more traumatic is if EA has become the source of their livelihood, meaning, f... (read more)

Julia tells me "I would say I listed it as a possible project rather than calling for it exactly."]

It actually was not just neutrally listed as a "possible" project, because it was the fourth bullet point under "Projects and programs we’d like to see" here.

It may not be worth becoming a research lead under many worldviews. 

I'm with you on almost all of your essay, regarding the advantages of a PhD, and the need for more research leads in AIS, but I would raise another kind of issue - there are not very many career options for a research lead in AIS at present. After a PhD, you could pursue:

  1. Big RFPs. But most RFPs from large funders have a narrow focus area - currently it tends to be prosaic ML, safety, and mechanistic interpretability. And having to submit to grantmakers' research direction somewhat def
... (read more)
8
L Rudolf L
(A) Call this "Request For Researchers" (RFR). OpenPhil has tried a more general version of this in the form of the Century Fellowship, but they discontinued this. That in turn is a Thiel Fellowship clone, like several other programs (e.g. Magnificent Grants). The early years of the Thiel Fellowship show that this can work, but I think it's hard to do well, and it does not seem like OpenPhil wants to keep trying.

(B) I think it would be great for some people to get support for multiple years. PhDs work like this, and good research can be hard to do over a series of short few-month grants. But also the long durations just do make them pretty high-stakes bets, and you need to select hard not just on research skill but also the character traits that mean people don't need external incentives.

(C) I think "agenda-agnostic" and "high quality" might be hard to combine. It seems like there are three main ways to select good people: rely on competence signals (e.g. lots of cited papers, works at a selective organisation), rely on more-or-less standardised tests (e.g. a typical programming interview, SATs), or rely on inside-view judgements of what's good in some domain. New researchers are hard to assess by the first, I don't think there's a cheap programming-interview-but-for-research-in-general that spots research talent at high rates, and therefore it seems you have to rely a bunch on the third. And this is very correlated with agendas; a researcher in domain X will be good at judging ideas in that domain, but less so in others. The style of this that I'd find most promising is:

  1. Someone with a good overview of the field (e.g. at OpenPhil) picks a few "department chairs", each with some agenda/topic.
  2. Each department chair picks a few research leads who they think have promising work/ideas in the direction of their expertise.
  3. These research leads then get collaborators/money/ops/compute through the department.

I think this would be better than a grab-bag o
6
AdamGleave
This is an important point. There's a huge demand for research leads in general, but the people hiring & funding often have pretty narrow interests. If your agenda is legibly exciting to them, then you're in a great position. Otherwise, there can be very little support for more exploratory work. And I want to emphasize the legible part here: you can do something that's great & would be exciting to people if they understood it, but novel research is often time-consuming to understand, and these are time-constrained people who will not want to invest that time unless they have a strong signal it's promising.

A lot of this problem is downstream of very limited grantmaker time in AI safety. I expect this to improve in the near future, but not enough to fully solve the problem.

I do like the idea of a more research-agenda-agnostic research organization. I'm striving to have FAR be more open-minded, but we can't support everything, so we are still pretty opinionated, prioritizing agendas that we're most excited by & which are a good fit for our research style (engineering-intensive empirical work). I'd like to see another org in this space set up to support a broader range of agendas, and am happy to advise people who'd like to set something like this up.

Thanks for engaging with my criticism in a positive way.

Regarding how timely the data ought to be, I don't think live data is necessary at all - it would be sufficient in my view to post updated information every year or two.

I don't think "applied in the last 30 days" is quite the right reference class, however, because by-definition, the averages will ignore all applications that have been waiting for over one month. I think the most useful kind of statistics would:

  1. Restrict to applications from n to n+m months ago, where n>=3
  2. Make a note of what percent
... (read more)
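For concreteness, here is a hypothetical sketch of the kind of statistics suggested above (the DataFrame and its column names are invented for illustration, and since the second item is truncated, the "percent undecided" figure is just one plausible reading): restrict to applications submitted between n and n+m months ago, with n >= 3, so that recent, still-pending applications don't drag the averages down, and report the share still undecided alongside the wait times.

```python
# Hypothetical sketch of wait-time statistics over an older application window.
# Assumes a DataFrame with 'applied_at' and 'decided_at' columns
# ('decided_at' is NaT for applications still pending).
import pandas as pd

def wait_time_stats(apps: pd.DataFrame, n: int = 3, m: int = 3) -> dict:
    now = pd.Timestamp.now()
    # Applications submitted between n and n+m months ago (assumes non-empty).
    window = apps[
        (apps["applied_at"] <= now - pd.DateOffset(months=n))
        & (apps["applied_at"] >= now - pd.DateOffset(months=n + m))
    ]
    decided = window.dropna(subset=["decided_at"])
    waits = (decided["decided_at"] - decided["applied_at"]).dt.days
    return {
        "median_wait_days": waits.median(),
        "mean_wait_days": waits.mean(),
        "percent_undecided": 100 * (1 - len(decided) / len(window)),
    }
```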
2
calebp
Oh, I thought you might have suggested the live thing before, my mistake. Maybe I should have just given the 90-day figure above. (That approach seems reasonable to me)

I had a similar experience with LTFF: a four-month wait (with uncalibrated grant decision timelines on the website) and unresponsiveness to email, and I know a couple of people who had similar problems. I also found it pretty "disrespectful".

It's hard to understand a) why they wouldn't list the empirical grant timelines on their website, and b) why they would have to be so long.

I think it could be good to put these number on our site. I liked your past suggestion of having live data, though it's a bit technically challenging to implement - but the obvious MVP (as you point out) is to have a bunch of stats on our site. I'll make a note to add some stats (though maintaining this kind of information can be quite costly, so I don't want to commit to doing this).

In the meantime, here are a few numbers that I quickly put together (across all of our funds).

Grant decision turnaround times (mean, median):

  • applied in the last 30 days = 14 d
... (read more)

I had a similar experience in spring 2023, with an application to EAIF. The fundamental issue was the very slow process from application to decision. This was made worse by poor communication.

There is an "EA Hotel", which is decently-sized, very intensely EA, and very cheap.

Occasionally it makes sense for people to accept very low cost-of-living situations. But a person's impact is usually a lot higher than their salary. Suppose that a person's salary is x, their impact 10x, and their impact is 1.1 times higher when they live in SF, due to proximity to funders and AI companies. Then you would have to cut costs by 90% to make it worthwhile to live elsewhere. Otherwise, you would essentially be stepping over dollars to pick up dimes.
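A back-of-the-envelope sketch of that arithmetic, using the comment's hypothetical numbers (treating annual SF cost of living as roughly one salary is an added assumption for illustration):

```python
# Illustrative sketch of the break-even arithmetic in the comment above.
# Salary, impact multiple, and SF uplift are the comment's hypothetical
# numbers; the SF cost-of-living figure is an assumption for illustration.

salary = 100_000                      # x: annual salary (arbitrary units)
impact_elsewhere = 10 * salary        # impact worth ~10x per year outside SF
impact_sf = 1.1 * impact_elsewhere    # impact ~1.1 times higher in SF

impact_given_up = impact_sf - impact_elsewhere   # = 1.0 * salary

cost_sf = salary  # assumption: SF living costs roughly a full salary

# Fraction of SF costs that would have to be cut just to break even on a move.
breakeven_cut = impact_given_up / cost_sf
print(f"{breakeven_cut:.0%}")  # 100%: only cuts in the ~90%+ range come close
```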

3
Chris Leong
One advantage of the EA hotel, compared to a grant, for example, is that selection effects for it are surprisingly strong. This can help resolve some of the challenges of evaluation.

Of course there are some theoretical reasons for growing fast. But theory only gets you so far, on this issue. Rather, this question depends on whether growing EA is promising currently (I lean against) compared to other projects one could grow. Even if EA looks like the right thing to build, you need to talk to people who have seen EA grow and contract at various rates over the last 15 years, to understand which modes of growth have been healthier, and have contributed to gained capacity, rather than just an increase in raw numbers. In my experience, one ... (read more)

Yes, they were involved in the first, small, iteration of EAG, but their contributions were small compared to the human capital that they consumed. More importantly, they were a high-demand group that caused a lot of people serious psychological damage. For many, it has taken years to recover a sense of normality. They staged a partial takeover of some major EA institutions. They also gaslit the EA community about what they were doing, which confused and distracted decent-sized subsections of the EA community for years.

I watched The Master a couple of mont... (read more)

4
Habryka [Deactivated]
I agree with a broad gist of this comment, but I think this specific sentence is heavily underselling Leverage's involvement. They ran the first two EA Summits, and also were heavily involved with the first two full EA Globals (which I was officially in charge of, so I would know).

Interesting point, but why do these people think that climate change is going to cause likely extinction? Again, it's because their thinking is politics-first. Their side of politics is warning of a likely "climate catastrophe", so they have to make that catastrophe as bad as possible - existential.

4
Daniel_Friedrich
That seems like an extremely unnatural thought process. Climate change is the perfect analogy - in these circles, it's salient both as a tool of oppression and an x-risk. I think far more selection of attitudes happens through paying attention to more extreme predictions, rather than through thinking / communicating strategically. Also, I'd guess people who spread these messages most consciously imagine a systemic collapse, rather than a literal extinction. As people don't tend to think about longtermistic consequences, the distinction doesn't seem that meaningful. AI x-risk is more weird and terrifying and it goes against the heuristics that "technological progress is good", "people have always feared new technologies they didn't understand" and "the powerful draw attention away from their power". Some people, for whom AI x-risk is hard to accept happen to overlap with AI ethics. My guess is that the proportion is similar in the general population - it's just that some people in AI ethics feel particularly strong & confident about these heuristics. Btw I think climate change could pose an x-risk in the broad sense (incl. 2nd-order effects & astronomic waste), just one that we're very likely to solve (i.e. the tail risks, energy depletion, biodiversity decline or the social effects would have to surprise us).

I think that disagreement about the size of the risks is part of the equation. But it's missing what is, for at least a few of the prominent critics, the main element - people like Timnit, Kate Crawford, and Meredith Whittaker are bought into leftie ideologies focused on things like "bias", "prejudice", and "disproportionate disadvantage". So they see AI as primarily an instrument of oppression. The idea of existential risk cuts against the oppression/justice narrative, in that it could kill everyone equally. So they have to oppose it.

Obviously this is not wha... (read more)

I disagree because I think these people would be in favour of action to mitigate x-risk from extreme climate change and nuclear war.

I guess you're right, but even so I'd ask:

  • Is it 11 new orgs, or will some of them stick together (perhaps with CEA) when they leave? 
  • What about other orgs not on the website, like GovAI and Owain's team? 
  • Separately, are any teams going to leave CEA?

Related to (1) is the question: which sponsored projects are definitely being spun out?

I'd read "offboarding the projects which currently sit under the Effective Ventures umbrella. This means CEA, 80,000 Hours, Giving What We Can and other EV-sponsored projects will transition to being independent legal entities" as "all of them" but now I'm less sure.

Hmm, OK. Back when I met Ilya, about 2018, he was radiating excitement that his next idea would create AGI, and didn't seem sensitive to safety worries. I also thought it was "common knowledge" that his interest in safety increased substantially between 2018-22, and that's why I was unsurprised to see him in charge of superalignment.

Re Elon/Zilis, all I'm saying is that it looked to Sam like the seat would belong to someone loyal to him at the time the seat was created.

You may well be right about D'Angelo and the others.

5
gwern
Hm, maybe it was common knowledge in some areas? I just always took him for being concerned. There's not really any contradiction between being excited about your short-term work and worried about long-term risks. Fooling yourself about your current idea is an important skill for a researcher. (You ever hear the joke about Geoff Hinton? He suddenly solves how the brain works, at long last, and euphorically tells his daughter; she replies: "Oh Dad - not again!")
  1. The main thing that I doubt is that Sam knew at the time that he was gifting the board to doomers. Ilya was a loyalist and non-doomer when appointed. Elon was I guess some mix of doomer and loyalist at the start. Given how AIS worries generally increased in SV circles over time, more likely than not some of D'Angelo, Hoffman, and Hurd moved toward the "doomer" pole over time.
3
gwern
Ilya has always been a doomer AFAICT, he was just loyal to Altman personally, who recruited him to OA. (I can tell you that when I spent a few hours chatting with him in... 2017 or something? a very long time ago, anyway - I don't remember him dismissing the dangers or being pollyannaish.) 'Superalignment' didn't come out of nowhere or surprise anyone about Ilya being in charge. Elon was... not loyal to Altman but appeared content to largely leave oversight of OA to Altman until he had one of his characteristic mood changes, got frustrated and tried to take over. In any case, he surely counts as a doomer by the time Zilis is being added to the board as his proxy. D'Angelo likewise seems to have consistently, in his few public quotes, been concerned about the danger. A lot of people have indeed moved towards the 'doomer' pole but much of that has been timelines: AI doom in 2060 looks and feels a lot different from AI doom in 2027.

Nitpicks:

  1. I think Dario and others would've also been involved in setting up the corporate structure
  2. Sam never gave the "doomer" faction a near majority. That only happened because 2-3 "non-doomers" left and Ilya flipped.
2
gwern
1. I haven't seen any coverage of the double structure or Anthropic exit which suggests that Amodei helped think up or write the double structure. Certainly, the language they use around the Anthropic public benefit corporation indicates they all think, at least post-exit, that the OA double structure was a terrible idea (eg. see the end of this article).
2. You don't know that. They seem to have often had near majorities, rather than being a token 1 or 2 board members. By most standards, Karnofsky and Sutskever are 'doomers', and Zilis is likely a 'doomer' too, as that is the whole premise of Neuralink and she was a Musk representative (which is why she was pushed out after Musk turned on OA publicly and began active hostilities like breaking contracts with OA). Hoffman's views are hard to characterize, but he doesn't seem to clearly come down as an anti-doomer or to be an Altman loyalist. (Which would be a good enough reason for Altman to push him out; and for a charismatic leader, neutralizing a co-founder is always useful, for the same reason no one would sell life insurance to an Old Bolshevik in Stalinist Russia.) If I look at the best timeline of the board composition I've seen thus far, at a number of times post-2018, it looks like there was a 'near majority' or even outright majority. For example, 2020-12-31 has either a tie or an outright majority for either side depending on how you assume Sutskever & Hoffman (Sutskever?/Zilis/Karnofsky/D'Angelo/McCauley vs Hoffman? vs Altman/Brockman), and with the 2021-12-31 list the Altman faction needs to pick up every possible vote to match the existing 5 'EA' faction (Zilis/Karnofsky/D'Angelo/McCauley/Toner vs Hurd?/Sutskever?/Hoffman? vs Brockman/Altman), although this has to be wrong because the board maxes out at 7 according to the bylaws, so it's unclear how exactly the plausible majorities evolved over time.
Linch
18
2
0
1
1

Re 2: It's plausible, but I'm not sure that this is true. Points against:

  1. Reid Hoffman was reported as being specifically pushed out by Altman: https://www.semafor.com/article/11/19/2023/reid-hoffman-was-privately-unhappy-about-leaving-openais-board 
  2. Will Hurd is plausibly quite concerned about AI Risk[1]. It's hard to know for sure because his campaign website is framed in the language of US-China competition (and has unfortunate-by-my-lights suggestions like "Equip the Military and Intelligence Community with Advanced AI"), but I think a lot of the pr
... (read more)

Causal Foundations is probably 4-8 full-timers, depending on how you count the small-to-medium slices of time from various PhD students. Several of our 2023 outputs seem comparably important to the deception paper: 

  • Towards Causal Foundations of Safe AGI, The Alignment Forum - the summary of everything we're doing.
  • Characterising Decision Theories with Mechanised Causal Graphs, arXiv - the most formal treatment yet of TDT and UDT, together with CDT and EDT in a shared framework.
  • Human Control: Definitions and Algorithms, UAI - a paper arguing that corrig
... (read more)
2
technicalities
excellent, thanks, will edit

What if you just pushed it back one month - to late June?

4
Eli_Nathan
Open to it for 2025, though looks like at least Oxford will still have exams then (exams often stretch until 1–2 weeks after the end of term). But early July might work and we can look into what dates we can get when we start booking.

2 - I'm thinking more of the "community of people concerned about AI safety" than EA.

1,3,4 - I agree there's uncertainty, disagreement and nuance, but I think if NYT's (summarised) or Nathan's version of events is correct (and they do seem to me to make more sense than other existing accounts), then the board looks somewhat like "good guys", albeit ones that overplayed their hand, whereas Sam looks somewhat "bad", and I'd bet that over time, more reasonable people will come around to such a view.

4
Brennan W.
2- makes sense! 1,3,4- Thanks for sharing (the NYT summary isn’t working for me unfortunately) but I see your reasoning here that the intention and/or direction of the attempted ouster may have been “good”. However, I believe the actions themselves represent a very poor approach to governance and demonstrate a very narrow focus that clearly didn’t appropriately consider many of the key stakeholders involved. Even assuming the best intentions, in my perspective, when a person has been placed on the board of such a consequential organization and is explicitly tasked with helping to ensure effective governance, the degree to which this situation was handled poorly is enough for me to come away believing that the “bad” of their approach outweighs the potential “good” of their intentions. Unfortunately it seems likely that this entire situation will wind up having a back-fire effect from what was (we assume) intended by creating a significant amount of negative publicity for and sentiment towards the AI safety community (and EA). At the very least, there is now a new (all male 🤔 but that’s a whole other thread to expand upon) board with members that seem much less likely to be concerned about safety. And now Sam and the less cautious cohort within the company seem to have a significant amount of momentum and good will behind them internally which could embolden them along less cautious paths. To bring it back to the “good guy bad guy” framing. Maybe I could buy that the board members were “good guys” as concerned humans, but “bad guys” as board members. I’m sure there are many people on this forum who could define my attempted points much more clearly in specific philosophical terms 😅 but I hope the general ideas came through coherently enough to add some value to the thread. Would love to hear your thoughts and any counter points or alternative perspectives!

It's a disappointing outcome - it currently seems that OpenAI is no more tied to its nonprofit goals than before. A wedge has been driven between the AI safety community and OpenAI staff, and to an extent, Silicon Valley generally.

But in this fiasco, we at least were the good guys! The OpenAI CEO shouldn't control its nonprofit board, or compromise the independence of its members, who were doing broadly the right thing by trying to do research and perform oversight. We have much to learn.

Hey Ryan :)

I definitely agree that this situation is disappointing, that there is a wedge between the AI safety community and Silicon Valley mainstream, and that we have much to learn.

However, I would push back on the phrasing “we are at least the good guys” for several reasons. Apologies if this seems nit picky or uncharitable 😅 just caught my attention and I hoped to start a dialogue

  1. The statement suggests we have a much clearer picture of the situation and factors at play than I believe anyone currently has (as of 22 Nov 2023)
  2. The “we” phrasing seem
... (read more)
6
Jason
Good points in the second paragraph. While it's common in both nonprofits and for-profits to have executives on the board, it seems like a really bad idea here. Anyone with a massive financial interest in the for-profit taking an aggressive approach should not be on the non-profit's board. 

Yeah, I think EA just neglects the downside of career whiplash a bit. Another instance is how EA orgs sometimes offer internships where only a tiny fraction of interns will get a job, or hire and then quickly fire staff. In a more ideal world, EA orgs would value rejected & fired applicants much more highly than non-EA orgs do, and so low-hit-rate internships and rapid firing would be much less common in EA than outside.

5
Ben_West🔸
Hmm, this doesn't seem obvious to me – if you care more about people's success then you are more willing to give offers to people who don't have a robust resume etc., which is going to lead to a lower hit rate than usual.

It looks like, on net, people disagree with my take in the original post. 

I just disagreed with the OP because it's a false dichotomy; we could just agree with the true things that activists believe, and not the false ones, and not go based on vibes. We desire to believe that mech-interp is mere safety-washing iff it is, and so on.

On the meta-level, anonymously sharing negative psychoanalyses of people you're debating seems like very poor behaviour. 

Now, I'm a huge fan of anonymity. Sometimes, one must criticise some vindictive organisation, or political orthodoxy, and it's needed, to avoid some unjust social consequences.

In other cases, anonymity is inessential. One wants to debate in an aggressive style, while avoiding the just social consequences of doing so. When anonymous users misbehave, we think worse of anonymous users in general. If people always write anonymously, the... (read more)

I'm sorry, but it's not an "overconfident criticism" to accuse FTX of investing stolen money, when this is something that 2-3 of the leaders of FTX have already pled guilty to doing.

This interaction is interesting, but I wasn't aware of it (I've only reread a fraction of Hutch's messages since knowing his identity) so to the extent that your hypothesis involves me having had some psychological reaction to it, it's not credible. 

Moreover, these psychoanalyses don't ring true. I'm in a good headspace, giving FTX hardly any attention. Of course, I am not... (read more)

-14
aprilsun

Creditors are expected by Manifold markets to receive only 40c on each dollar that was invested on the platform (I didn't notice this info in the post when I previously viewed it). And we do know why there is money missing: FTX stole it and invested it in their hedge fund, which gambled the money away and lost it.

There's also a fairly robust market for (at least larger) real-money claims against FTX, with prices around 35-40 cents on the dollar. I'd expect recovery to be somewhat higher in nominal dollars, because it may take some time for distributions to occur and that is presumably priced into the market price. (Anyone with a risk appetite for buying large FTX claims probably thinks their expected rate of return on their next-best investment choice is fairly high, implying a fairly high discount rate is being applied here.)

-48
aprilsun
9
Nicky Pochinkov
I've added Manifold markets and more details from the book, which are not to be fully trusted at face value. Though they spent/lost a lot of money, and misused funds in Alameda, they had huge amounts of money, so the book figures suggest that might not account for all the customer funds lost (if one writes off VC investment and similar).

It is a bit disheartening to see that some readers will take the book at face value.
