I've asked for more information and will share what I find, as long as I have permission to do so.
Given the order of things, and the fact that you did not have use for more money, this seems indeed reasonable. Thanks for the clarification.
There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.
By posting this publicly I already found out that they did the same to Neel Nanda. Neel said that in his case he thought this was "extremely reasonable". I'm not sure why, and I've just asked some follow-up questions.
I get from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong for not responding to more...
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.
Thanks for sharing.
What did the other grantmaker (the one who gave you y) think of this?
Were they aware of your OpenPhil grant ...
I have a feature removal suggestion.
Can the notification menu please go back to being like LW?
The LW version (which EA Forum used to have too) is more compact, which gives a better overview. I also prefer when karma and notifications are separate. I don't want to see karma updates in my notification dropdown.
From the linked report:
We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage.
Here's a story I recently heard from someone I trust:
An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before...
I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises.
I apologise and I will try to be more careful in the future.
One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.
Below the story from someone wh...
[I work at Open Philanthropy] Hi Linda - thanks for flagging this. After checking internally, I'm not sure what project you're referring to here; generally speaking, I agree with you and others in this thread that it's not good to fully funge against incoming funds from other grantmakers in the space after agreeing to fund something, but I'd want to have more context on the specifics of the situation.
It totally makes sense that you don’t want to name the source or project, but if you or your source would feel comfortable sharing more information, feel free to...
Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.
In theory, you can imagine OpenPhil wanting to fund their "fair share" of a project, evenly split across all other interested grantmakers....
If this was for any substantial amount of money I think it would be pretty bad, though it depends on the relative size of the OP grants and SFF grants.
I think most of the time you should just let promised funding be promised funding, but there is a real and difficult coordination problem here. The general rule I follow when I have been a recommender on the SFF or Lightspeed Grants has been that when I am coordinating with another funder, and we both give X dollars a year but want to fund the organization to different levels (let's call them level A f...
Thanks for sharing, Linda!
After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from Survival and Flourishing Fund (SFF).
I very much agree Open Phil breaking a promise to provide funding would be bad. However, I assume Open Phil asked about alternative sources of funding in the application, and I wonder whether the promise to provide funding was conditional on the other sources not being successful.
I understand posting this here, but for following up on specific cases like this, especially second-hand, I think it's better to first contact OpenPhil before airing it publicly. As you mentioned, there is likely much context here we don't have, and it's hard to have a public discussion without most of the context.
"There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment would probably have ended up with me not taking action at all"
That's a fair comment. I understand the importance of ov...
Here are the other career coaching options on the list, in case you want to connect with your colleagues.
I do think AISF is a real improvement to the field. My apologies for not making this clear enough.
The 80,000 Hours syllabus = "Go read a bunch of textbooks". This is probably not ideal for a "getting started" guide.
You mean MIRI's syllabus?
I don't remember what 80k's one looked like back in the day, but the one that is up now is not just "Go read a bunch of textbooks".
I personally used CHAI's one and found it very useful.
Also, sometimes you should go read a bunch of textbooks. Textbooks are great.
Week 0: Even though it is a theory course, it would likely be useful to have some basic understanding of machine learning, although this would vary depending on the exact content of the course. It might or might not make sense to run a week 0 depending on most people's backgrounds.
I would recommend having a week 0 with some ML and RL basics.
I did a day 0 ML and RL speed run at the start of two of my AI Safety workshops at the EA Hotel in 2019. Were you there for that? It might have been recorded, but I have no idea where it might have ended up. Althoug...
I was surprised to read this:
In 2020, the going advice for how to learn about AI Safety for the first time was:
- Read everything on the alignment forum. [...]
- Speak to AI safety researchers. [...]
MIRI, CHAI and 80k all had public reading guides since at least 2017, when I started studying AI Safety.
So it seems like at least part of the problem was that these...
I'm updating the AI Safety Support - Lots of Links page, and came across this post when following trails of potentially useful links.
Are you still doing coaching, and if "yes" do you want to be listed on the lots of links page?
I'm guessing that what Marius means by "AISC is probably about ~50x cheaper than MATS" is that AISC is probably ~50x cheaper per participant than MATS.
Our cost per participant is $0.6k - $3k USD
50 times this would be 30k - 150k per participant.
I'm guessing that MATS is around 50k per person (including stipends).
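As a rough sanity check of the "~50x cheaper" figure, here is the arithmetic spelled out (all numbers are the estimates quoted in this thread; the MATS figure is my guess, not an official number):

```python
# Rough sanity check of the "~50x cheaper" claim.
# All figures are estimates from this thread, in USD;
# the MATS number is a guess, not an official figure.
aisc_low, aisc_high = 600, 3_000   # AISC cost per participant ($0.6k - $3k)
mats = 50_000                      # guessed MATS cost per person, incl. stipends

ratio_low = mats / aisc_high       # 50_000 / 3_000, roughly 17x
ratio_high = mats / aisc_low       # 50_000 / 600, roughly 83x
print(f"MATS is roughly {ratio_low:.0f}x-{ratio_high:.0f}x "
      f"more expensive per participant than AISC")
```

The "~50x" figure sits comfortably inside that range.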
Here's where the $12k-$30k USD comes from:
...Dollar cost per new researcher produced by AISC
- The organizers have proposed $60–300K per year in expenses.
- The number of non-RL participants of programs has increased from 32 (AISC4) to 130...
5. Overall, I think AISC is less impactful than e.g. MATS even without normalizing for participants. Nevertheless, AISC is probably about ~50x cheaper than MATS. So when taking cost into account, it feels clearly impactful enough to continue the project. I think the resulting projects are lower quality but the people are also more junior, so it feels more like an early educational program than e.g. MATS.
This seems correct to me. MATS is investing a lot in few people. AISC is investing a little in many people.
I also agree with all the other points.
From Lucius Bushnaq:
I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there's a good chance I would never have become an AI notkilleveryoneism researcher.
Full comment here: This might be the last AI Safety Camp — LessWrong
Thanks for this comment. To me this highlights how AISC is very much not like MATS. We're very different programs doing very different things. MATS and AISC are both AI safety upskilling programs, but we are using different resources to help different people with different aspects of their journey.
I can't say where AISC falls in the talent pipeline model, because that's not how the world actually works.
AISC participants have obviously heard about AI safety, since they would not have found us otherwise. But other than that, people are all over th...
I don't like this funnel model, or any other funnel model I've seen. It's not wrong exactly, but it misses so much that it's often more harmful than helpful.
For example:
I don't have a nice-looking replacement for the funnel. If I had a nice clean model like this, it would probably be just as bad. The real world is just very messy.
...
- All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier. Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers. I count 52 participants in the last AISC so this seems like a pretty poor rate, especially given that 2022 and 2023 cohorts (#7 and #8) could both have published by now.
- [...] They also use the number of AI alignment researchers created as an impo
The impact assessment was commissioned by AISC, so it is not independent.
Here are some evaluations not commissioned by us
If you have suggestions for how AISC can get more people to do more independent evaluations, please let me know.
- Why does the founder, Remmelt Ellen, keep posting things described as "content-free stream of consciousness", "the entire scientific community would probably consider this writing to be crankery", or so obviously flawed it gets -46 karma? This seems like a concern especially given the philosophical/conceptual focus of AISC projects, and the historical difficulty in choosing useful AI alignment directions without empirical grounding.
I see your concern.
Remmelt and I have different beliefs about AI risk, which is why the last AISC was split into two st...
But on the other hand, I've regularly met alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive.
Naive question, but does AISC have enough of such past alumni that you could meet your current funding need by asking them for support? It seems like they'd be in the best position to evaluate the program and know that it's worth funding.
- MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.
There is so much wrong here, I don't even know how to start (i.e. I don't know what the core cruxes are) but I'll give it a try.
AISC is not MATS because we're not trying to be MATS.
MATS is trying to find the best people and have them mentored by the best mentors, in the best environment. This is...
How does the conflictedness compare to the conflictedness (if any) you would feel if you were a business performing services for Meta?
To me, selling services to a bad actor feels significantly more immoral than receiving their donation, since selling a service to them is much more directly helpful to them.
(This is not a comment on how bad Meta is. I do not have an informed opinion on this.)
The culture of “when in doubt, apply” combined with the culture of “we can do better things with our time than give feedback,” combined with lack of transparency regarding the statistical odds of getting funded, is a dangerous mix that creates resentment and harms the community.
Agree!
I believe this is a big contributor to burnout and people leaving EA.
See also: The Cost of Rejection — EA Forum (effectivealtruism.org)
However, I don't think the solution is more feedback from grantmakers. The vetting bottleneck is a big part of the problem. Requiring mor...
I would advise just asking for feedback from anyone in your EA network who you think has some understanding of grantmaker perspectives - for example, 80k advisors, your local EA group leadership, or someone you know working at an EA org.
Most people in EA don't have anyone in their network with a good understanding of grantmakers' perspectives.
I think that "your local EA group leadership" usually don't know. The author of this post is a national group founder, and they don't have a good understanding of what grant makers want.
A typical lunch c...
I think paying a friendly outsider would be the best option. I don't expect I have much say in this, since I don't have much spare money, so I will not be the one hiring. But I would like TracingWoodgrains to look into the Nonlinear story.
Disagree.
I think this section illustrated something important that I would not have properly understood without a real demonstration, with real facts about a real person. It hits differently emotionally when it's real, and given how important this point is, and how emotionally charged everything else is, I think I needed this demonstration for the lesson to hit home for me.
I also don't think this is retaliation. If that was the goal Kat could have just ended the section after making Ben look maximally bad, and not adding the clarifying context.
I also don't think this is retaliation. If that was the goal Kat could have just ended the section after making Ben look maximally bad, and not adding the clarifying context.
This is not true. If Kat had just left in the section making Ben look bad, everyone would have been asking "what? Where is the evidence for this? This seems really bad."
The way it is written, it still leaves many people with an impression, but removes any burden of proof that Kat would otherwise have had.
You might still think it's a fine rhetorical tool to use, but I think it's clear that Kat of course couldn't have just put the accusations into the post without experiencing substantial backlash and scrutiny of her claims.
I wrote this in response to Ben's post
...Thanks for writing this post.
I've heard enough bad stuff about Nonlinear from before that I was seriously concerned about them. But I did not know what to do, especially since part of their bad reputation is about attacking critics, and I don't feel well positioned to take that fight.
I'm happy some of these accusations are now out in the open. If it's all wrong and Nonlinear is blame free, then this is their chance to clear their reputation.
I can't say that I will withhold judgment until more evidence comes
- The Nonlinear team should have gotten their replies up sooner, even if in pieces. In the court of public opinion, time/speed matters. Muzzling up and taking ~3 months to release their side of the story comes across as too polished and buttoned up.
Strong disagree.
A) Sure, all else equal, speed would have been better. But take the hypothesis that NL is mostly innocent as true for a moment: getting such a post written about you must be absolutely terrible. If it were me, I'd probably not be in good shape to write anything in response very quickly...
As far as I know, the reason AISS shut down was 100% because of lack of funding. However, it's not so easy to just start things up again. People who don't get paid tend to quit and move on.
EA Forum feature request
(I'm not sure where to post this, so I'm writing it here)
1) Being able to filter for multiple tags simultaneously. Mostly I want to be able to filter for "Career choice" + any other tag of my choice. E.g. AI or Academia to get career advice specifically for those career paths. But there are probably other useful combos too.
(Just for future reference, I think “EA Forum feature suggestion thread” is the designated place to post feature requests.)
Reading this post is very uncomfortable in an uncanny-valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off.
I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to money and the rest of us.
Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and how their funding comes with lots of strings attached. ...
But also, every local community organiser I talk to talks about how CEA is controlling and that their funding comes with lots of strings attached.
(Just wanted to add a counter datapoint: I have been a local community organizer for several years and this has not been my experience.)
I wasn't sure about the 'do-ocracy' thing either. Of course, it's true that no one's stopping you from starting whatever project you want - I mean, EA concerns the activities of private citizens. But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project (I speak from experience). In that sense, EA feels quite like a big, conventional organisation.
The type of AI we are worried about is an AI that pursues some kind of goal, and if you have a goal, then self-preservation is a natural instrumental goal, as you point out in the paperclip maximiser example.
It might be possible that someone builds a superintelligent AI that doesn't have a goal. Depending on your exact definition, GPT-4 could be counted as superintelligent, since it knows more than any human. But it's not dangerous (by itself) since it's not trying to do anything.
You are right that it is possible for something that is intellig...
In addition, if I were getting career-related information from a community builder, that community builder's future career prospects depended on getting people like me to choose a specific career path, and that fact was neither disclosed nor reasonably implied, I would feel misled by omission (at best).
As far as I know, this is exactly what is happening.
Can we address critiques of the DALY framework by selecting moral weighting frameworks that are appropriate for our particular applications, addressing methodological critiques when they get raised, and taking care to contextualize our usage of a particular framework? - Maybe.
I'm pretty sure the answer is "No, we can't". The whole point of DALYs is that they let us compare completely different interventions; if you replace them with something different in each context, you lose exactly that.
I think the best we can do is to calibrate it better, but a...
I recently had a conversation with a local EA community builder. Like many local community builders, they got their funding from CEA. They told me that their continued funding was conditional on scoring high on the metric of how many people they directed towards longtermist career paths.
If this is in fact how CEA operates, then I think this is bad, because of the reasons described in this post. Even though I'm in AI Safety, I value EA being about more than X-risk prevention.
Hey Linda,
I'm head of CEA's groups team. It is true that we care about career changes - and it is true that our funders care about career changes. However, it is not true that this is the only thing we care about. There are lots of other things we value; for example, grant recipients have started effective institutions, set up valuable partnerships, and engaged with public sector and philanthropic bodies. This list is not exhaustive! We also care about the welcomingness of groups, and we care about groups not using "polarizing techniques".
In terms of ...
I think the specific list of orgs you picked is a bit ad hoc, but also OK.
It looks like you've chosen to focus on research orgs specifically, plus overview resources. I think this is a reasonable choice.
Some orgs that would fit on the list (i.e. other research orgs), are
* Conjecture
* Orthogonal
* CLR
* Convergence
* Aligned AI
* ARC
There are also several important training and support orgs that are not on your list (AISC, SERI MATS, etc.). But I think it's probably the right choice to just link to aisafety.training, and let people find various progra...
There are lots more AI Safety orgs and initiatives. Not sure if it would be practical to add them all.
See here for many of them: aisafety.world
Is this still an impact market? It looks to me like this is primarily just a fundraising platform. I'm not complaining - I think EA should have a fundraising platform! I'm just confused.
Overall I think this is a good post. However, this part surprised me.
However, I am personally worried about people skill-building for a couple of years and then not switching to doing the most valuable alignment work they can, because it can be easy to justify that your work is helping when it isn’t. This can happen even at labs that claim to have a safety focus! Working at any of Anthropic, DeepMind, Redwood Research, or OpenAI seems like a safe bet though.
I agree with the first bit. I'm also worried that people motivated to help with alignment end ...
For me personally, the core of Effective Altruism is "it's not about you". Everything else follows from there.
This is very much in contrast to other cultures of altruism I have encountered, which focus very much on the mental state of the giver. When you stop questioning whether you are pure and have the right motives, etc., and just focus on results, that's when you get EA.
But also, don't be 100% altruistic. Some of your efforts should be about you. If you only take care of yourself for instrumental reasons, you will systematically under-invest in yourself. So be genuinely egoistic with some part of your effort, where "be egoistic" just means "do whatever you want".
Thanks, that clarifies things.
I'm still not sure what you mean by org. Do you count CEA as an org, or EVF as an org?
I think in terms of projects and people and funding. Legal orgs are just another part of the infrastructure that supports funding and people.
I think it would be great if AI Safety Support were given enough funding to hire 50 people, and used that funding to provide financial security to lots of existing projects. Although that is heavily biased by the fact that I personally know and trust the people running AISS, and that their work s...
I've been Alice. I had some experiences within EA that led me to take a year-long EA-leave. When I left, I did not know for how long, or if I would come back. This was definitely the right thing for me to do. If you're Alice and you feel you need to take a step back, then you are probably correct. Even if you can't exactly articulate why, you are probably correct. If the EA network is net positive for you and your work, then you will be back.
I'm talking about increasing the number of large organisations.
I'm confused about what you are suggesting exactly. When reading the post, I assumed that you were suggesting more centralisation in general. If there were a competitor to CEA, I would not call that "more centralisation". Although maybe it depends on how we get there from here?
If several small orgs join together to form a new big org, that would seem like going towards more centralisation. But if someone starts a new org that grows into a large org, which competes with an existing large org, that would look li...