If I calculated correctly, in the fully funded version, stipends would be 76% of the cost. Not quite >80% but close. I think I agree that stipends are not much more than 20% of the value.
Basically I agree with you that stipends are the least cost-effective part of AISC. This is why stipends are lowest on the funding priority list.
However, it is possible for stipends to be less necessary than the rest but still worth paying. They are in the budget because, if someone wants to fund them, we would like to hand out stipends.
I think giving stipen...
If you read this post and you decide that the reasons why AISC is not getting funded are not good reasons for not funding AISC, then you have a donation opportunity!
Unless donors don’t care about optics at all, paying Remmelt’s salary is a difficult ask.
There is an easy fix to this. You can donate anonymously.
Perhaps they could add an appendix to their funding proposal where they answer some common objections they would expect people to have
Correctly guessing what misconceptions others will have is hard. But discussions on earlier drafts of this post did inspire us to start drafting something like that. Thanks.
A colleague of mine said that [if you want to attract high-profile research leads], “you are only as strong as your weakest project” - which I thought was well put.
We're not trying to attract high-profile research leads. We're trying to start worthwhile p...
- Most of these suggestions are based on speculation. I'd like a bit more evidence that they would actually make a difference before restructuring. Funders are welcome to reach out to us.
Responding to myself.
There is one thing (that is mentioned in the post) we know is getting in the way of funding, which is Remmelt's image. But there wouldn't be an AISC without Remmelt.
I don't expect pretending to be two different programs would help much.
However, donating anonymously is an option. We have had anonymous donations in the past from people who don't want to entangle their reputation with ours.
Donors want to know that if they donate to keep it alive, you're going to restructure the program towards something more financially viable
Hi Linda! CEA's EAGx Coordinator here. This is definitely not a policy, and I also want everyone to know about events at the earliest date so they can make arrangements to attend. It's one of my biggest goals to increase the lead-time for events, for both organizers and attendees, and I'm hoping that we'll be able to publicly announce more 2025 events soon.
Typically, we add an event to the webpage as soon as the event is "officially confirmed," which is usually as soon as the contract with the venue is signed. This contract procedure sometimes drags on and...
At the time of writing, www.aisafety.camp goes to our new website while aisafety.camp goes to our old website. We're working on fixing this.
If you want to spread information about AISC, please make sure to link to our new webpage, and not the old one.
I don't think it's too 'woo'/new age-y. Lots of EAs are meditators. There are literally meditation sessions happening at EAG London this week.
Also, Qualia Research Institute (qri.org) is EA or at least EA adjacent.
(What org is or isn't EA is pretty vague)
Also, isn't enlightenment notoriously hard to reach? I.e. it takes years of dedicated meditation. Most humans probably don't have both the luxury and the discipline to spend that much time. Even if it's real (I think it is), there is probably lower-hanging fruit to pick.
My guess is that helping someone go from depressed to normal is a bigger step in suffering reduction than going from normal to enlightened. Same for lifting someone out of poverty.
However, I have not thought about this a lot.
I agree with this comment.
If EA and ES both existed, I expect the main focus areas would be very different (e.g. political change is not a main focus area in EA, but would be in ES), but (if harmful tribalism can be avoided) the movements don't have to be opposed to each other.
I'm not sure why ES would be against charter cities. Are charter cities bad for unions?
Scandinavia didn’t become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures.
I expect a serious intellectual movement...
I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises.
I apologise and I will try to be more careful in the future.
One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.
Below is the story from someone wh...
There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.
By posting this publicly I already found out that they did the same to Neel Nanda. Neel thought that in his case this was "extremely reasonable". I'm not sure why, and I've just asked some follow-up questions.
I get from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong for not responding to more...
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.
Thanks for sharing.
What did the other grantmaker (the one who gave you y) think of this?
Were they aware of your OpenPhil grant ...
I have a feature removal suggestion.
Can the notification menu please go back to being like LW?
The LW version (which EA Forum used to have too) is more compact, which gives a better overview. I also prefer when karma and notifications are separate. I don't want to see karma updates in my notification dropdown.
From the linked report:
We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage.
Here's a story I recently heard from someone I trust:
An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before...
I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises.
I apologise and I will try to be more careful in the future.
One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.
Below is the story from someone wh...
[I work at Open Philanthropy] Hi Linda, thanks for flagging this. After checking internally, I'm not sure what project you're referring to here; generally speaking, I agree with you/others in this thread that it's not good to fully funge against incoming funds from other grantmakers in the space after agreeing to fund something, but I'd want to have more context on the specifics of the situation.
It totally makes sense that you don’t want to name the source or project, but if you or your source would feel comfortable sharing more information, feel free to...
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.
In theory, you can imagine OpenPhil wanting to fund their "fair share" of a project, evenly split across all other interested grantmakers....
If this was for any substantial amount of money I think it would be pretty bad, though it depends on the relative size of the OP grants and SFF grants.
I think most of the time you should just let promised funding be promised funding, but there is a real and difficult coordination problem here. The general rule I follow when I have been a recommender on the SFF or Lightspeed Grants has been that when I am coordinating with another funder, and we both give X dollars a year but want to fund the organization to different levels (let's call them level A f...
Thanks for sharing, Linda!
After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from Survival and Flourishing Fund (SFF).
I very much agree Open Phil breaking a promise to provide funding would be bad. However, I assume Open Phil asked about alternative sources of funding in the application, and I wonder whether the promise to provide funding was conditional on the other sources not being successful.
I understand posting this here, but for following up specific cases like this, especially second hand I think it's better to first contact OpenPhil before airing it publicly. Like you mentioned there is likely to be much context here we don't have, and it's hard to have a public discussion without most of the context.
"There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment, would probably have ended up with me not taking action at all"
That's a fair comment. I understand the importance of ov...
Here are the other career coaching options on the list, in case you want to connect with our colleagues.
I do think AISF is a real improvement to the field. My apologies for not making this clear enough.
The 80,000 Hours syllabus = "Go read a bunch of textbooks". This is probably not ideal for a "getting started" guide.
You mean MIRI's syllabus?
I don't remember what 80k's one looked like back in the day, but the one that is up now is not just "Go read a bunch of textbooks".
I personally used CHAI's one and found it very useful.
Also, sometimes you should go read a bunch of textbooks. Textbooks are great.
Week 0: Even though it is a theory course, it would likely be useful to have some basic understanding of machine learning, although this would vary depending on the exact content of the course. It might or might not make sense to run a week 0 depending on most people's backgrounds.
I would recommend having a week 0 with some ML and RL basics.
I did a day 0 ML and RL speed run at the start of two of my AI Safety workshops at the EA Hotel in 2019. Were you there for that? It might have been recorded, but I have no idea where it might have ended up. Althoug...
I was surprised to read this:
In 2020, the going advice for how to learn about AI Safety for the first time was:
- Read everything on the alignment forum. [...]
- Speak to AI safety researchers. [...]
MIRI, CHAI and 80k all had public reading guides since at least 2017, when I started studying AI Safety.
So it seems like at least part of the problem was that these...
I'm updating the AI Safety Support - Lots of Links page, and came across this post when following trails of potentially useful links.
Are you still doing coaching, and if "yes" do you want to be listed on the lots of links page?
I'm guessing that what Marius means by "AISC is probably about ~50x cheaper than MATS" is that AISC is probably ~50x cheaper per participant than MATS.
Our cost per participant is $0.6k - $3k USD
50 times this would be $30k - $150k per participant.
I'm guessing that MATS is around 50k per person (including stipends).
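To make the arithmetic explicit, here's a minimal sketch. All figures are the rough estimates quoted in this thread, not official numbers from either program:

```python
# Rough sanity check of the "~50x cheaper" claim.
# Figures are the rough estimates from this thread, not official numbers.
aisc_cost_per_participant = (600, 3_000)   # USD per participant (AISC's own estimate)
multiplier = 50                             # Marius's "~50x cheaper"

implied_other_cost = tuple(c * multiplier for c in aisc_cost_per_participant)
print(implied_other_cost)  # (30000, 150000) -> $30k - $150k per participant

# A guess of ~$50k per MATS participant (including stipends) sits inside this
# range, so the ~50x figure seems at least plausible.
```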
Here's where the $12k-$30k USD comes from:
...Dollar cost per new researcher produced by AISC
- The organizers have proposed $60–300K per year in expenses.
- The number of non-RL participants of programs have increased from 32 (AISC4) to 130...
5. Overall, I think AISC is less impactful than e.g. MATS even without normalizing for participants. Nevertheless, AISC is probably about ~50x cheaper than MATS. So when taking cost into account, it feels clearly impactful enough to continue the project. I think the resulting projects are lower quality but the people are also more junior, so it feels more like an early educational program than e.g. MATS.
This seems correct to me. MATS is investing a lot in a few people. AISC is investing a little in many people.
I also agree with all the other points.
From Lucius Bushnaq:
I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there's a good chance I would never have become an AI notkilleveryoneism researcher.
Full comment here: This might be the last AI Safety Camp — LessWrong
Thanks for this comment. To me this highlights how AISC is very much not like MATS. We're very different programs doing very different things. MATS and AISC are both AI safety upskilling programs, but we are using different resources to help different people with different aspects of their journey.
I can't say where AISC falls in the talent pipeline model, because that's not how the world actually works.
AISC participants have obviously heard about AI safety, since they would not have found us otherwise. But other than that, people are all over th...
I don't like this funnel model, or any other funnel model I've seen. It's not wrong exactly, but it misses so much that it's often more harmful than helpful.
For example:
I don't have a nice-looking replacement for the funnel. If I had a nice clean model like this, it would probably be just as bad. The real world is just very messy.
...
- All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier. Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers. I count 52 participants in the last AISC so this seems like a pretty poor rate, especially given that 2022 and 2023 cohorts (#7 and #8) could both have published by now.
- [...] They also use the number of AI alignment researchers created as an impo
The impact assessment was commissioned by AISC, so it is not independent.
Here are some evaluations not commissioned by us:
If you have suggestions for how AISC can get more people to do more independent evaluations, please let me know.
- Why does the founder, Remmelt Ellen, keep posting things described as "content-free stream of consciousness", "the entire scientific community would probably consider this writing to be crankery", or so obviously flawed it gets -46 karma? This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding.
I see your concern.
Remmelt and I have different beliefs about AI risk, which is why the last AISC was split into two st...
But on the other hand, I regularly meet alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive.
Naive question, but does AISC have enough of such past alumni that you could meet your current funding need by asking them for support? It seems like they'd be in the best position to evaluate the program and know that it's worth funding.
- MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.
There is so much wrong here that I don't even know where to start (i.e. I don't know what the core cruxes are), but I'll give it a try.
AISC is not MATS because we're not trying to be MATS.
MATS is trying to find the best people and have them mentored by the best mentors, in the best environment. This is...
How does the conflictedness compare to the conflictedness (if any) you would feel if you were a business performing services for Meta?
To me, selling services to a bad actor feels significantly more immoral than receiving their donation, since selling a service to them is much more directly helpful to them.
(This is not a comment on how bad Meta is. I do not have an informed opinion on this.)
The culture of “when in doubt, apply” combined with the culture of “we can do better things with our time than give feedback,” combined with lack of transparency regarding the statistical odds of getting funded, is a dangerous mix that creates resentment and harms the community.
Agree!
I believe this is a big contributor to burnout and to people leaving EA.
See also: The Cost of Rejection — EA Forum (effectivealtruism.org)
However, I don't think the solution is more feedback from grantmakers. The vetting bottleneck is a big part of the problem. Requiring mor...
I would advise to just ask for feedback from anyone in one's EA network you think have some understanding of grantmaker perspectives. For example, if 80k hrs advisors, your local EA group leadership and someone you know working at an EA org
Most people in EA don't have anyone in their network with a good understanding of grantmakers' perspectives.
I think that "your local EA group leadership" usually don't know. The author of this post is a national group founder, and they don't have a good understanding of what grantmakers want.
A typical lunch c...
Disagree.
I think this section illustrated something important that I would not have properly understood without a real demonstration with real facts about a real person. It hits differently emotionally when it's real, and given how important this point is, and how emotionally charged everything else is, I think I needed this demonstration for the lesson to hit home for me.
I also don't think this is retaliation. If that was the goal Kat could have just ended the section after making Ben look maximally bad, and not adding the clarifying context.
I also don't think this is retaliation. If that was the goal Kat could have just ended the section after making Ben look maximally bad, and not adding the clarifying context.
This is not true. If Kat had just left in the section making Ben look bad, everyone would have been like "What? Where is the evidence for this? This seems really bad."
The way it is written, it still leaves many people with an impression, but alleviates any burden of proof that Kat would have had.
You might still think it's a fine rhetorical tool to use, but I think it's clear that Kat of course couldn't have just put the accusations into the post without experiencing substantial backlash and scrutiny of her claims.
I wrote this in response to Ben's post:
...Thanks for writing this post.
I've heard enough bad stuff about Nonlinear from before that I was seriously concerned about them. But I did not know what to do, especially since part of their bad reputation is about attacking critics, and I don't feel well positioned to take that fight.
I'm happy some of these accusations are now out in the open. If it's all wrong and Nonlinear is blame free, then this is their chance to clear their reputation.
I can't say that I will withhold judgment until more evidence come
- The Nonlinear team should have gotten their replies up sooner, even if in pieces. In the court of public opinion, time/speed matters. Muzzling up and taking ~3 months to release their side of the story comes across as too polished and buttoned up.
Strong disagree.
A) Sure, all else equal, speed would have been better. But take the hypothesis that NL is mostly innocent as true for a moment: getting such a post written about you must be absolutely terrible. If it was me, I'd probably not be in a good shape to write anything in response very quickly...
EA Forum feature request
(I'm not sure where to post this, so I'm writing it here)
1) Being able to filter for multiple tags simultaneously. Mostly I want to be able to filter for "Career choice" + any other tag of my choice. E.g. AI or Academia to get career advice specifically for those career paths. But there are probably other useful combos too.
(Just for future reference, I think “EA Forum feature suggestion thread” is the designated place to post feature requests.)
What caused the restriction?
I'm noticing I'm confused. I have no hypothesis for what could cause that sort of restriction.