Linda Linsefors

@ AI Safety Camp
1998 karma · Joined · London, UK


Hi, I am a Physicist, Effective Altruist and AI safety student/researcher/organiser
Resume - Linda Linsefors - Google Docs


Topic contributions

I agree with this comment.

If EA and ES both existed, I expect the main focus areas to be very different (e.g. political change is not a main focus area in EA, but would be in ES), but (if harmful tribalism can be avoided) the movements don't have to be opposed to each other. 

I'm not sure why ES would be against charter cities. Are charter cities bad for unions? 

Scandinavia didn’t become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures.

I expect a serious intellectual movement that aims to uplift the world to Scandinavian standards to actually learn about Scandinavian society, and what makes it work. 

“Real socialism hasn’t been tried either!” the Effective Samaritan quips back. “Every attempt has always been co-opted by ruling elites who used it for their own ends. The closest we’ve gotten is Scandinavia which now has the world’s highest standards of living, even if not entirely socialist it’s gotta count for something!”

I'm guessing that "socialism" here means something like Marxism? Since this is the type of socialism that "has not been really tried" according to some, and also the type of socialism that usually ends up in dictatorship. 

Scandinavian socialism did not come from Marxism. 
Source: How Denmark invented Social Democracy (youtube.com)

I'm not a historian, and I have not fact-checked the above video in any way. But it fits with other things I've heard, and my own experience of Swedish vs. US attitudes. 

I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises. 

I apologise and I will try to be more careful in the future. 

One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.

Below is the story from someone who was involved. They have asked to stay anonymous; please respect this. 

The short version of the story is: (1) we applied to OP for funding, (2) late 2022/early 2023 we were in active discussions with them, (3) at some point, we received 200k USD via the SFF speculator grants, (4) then OP got back confirming that they would fund us with the amount for the "lower end" budget scenario minus those 200k.

My rough sense is similar to what e.g. Oli describes in the comments. It's roughly understandable to me that they didn't want to give the full amount they would have been willing to fund without other funding coming in. At the same time, it continues to feel pretty off to me that they let the SFF speculator grant 1:1 replace their funding, without even talking to SFF at all -- since this means that OP got to spend a counterfactual 200k on other things they liked, but SFF did not get to spend additional funding on things they consider high priority.

One thing I regret on my end, in retrospect, is not pushing harder on this, including clarifying to OP that the SFF funding we received was partially unrestricted, i.e. it wasn't tied to funding only the specific program that OP gave us (restricted) funding for. But, importantly, I don't think I made that sufficiently clear to OP and I can't claim to know what they would have done if I had pushed for that more confidently.

I've asked for more information and will share what I find, as long as I have permission to do so.

Given the order of things, and the fact that you did not have use for more money, this does indeed seem reasonable. Thanks for the clarification.

There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.

By posting this publicly I already found out that they did the same to Neel Nanda. Neel thought that in his case this was "extremely reasonable". I'm not sure why, and I've just asked some follow-up questions.

I get from your response that you think 45% is a good response rate, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong by not responding to more emails. They have other important work to do. But I also have other important work to do. I'm also not doing anything wrong by not spending extra time figuring out who on their staff to contact and sending a private email, which, according to your data, has a 55% chance of ending up ignored.

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.

Thanks for sharing. 

What did the other grantmaker (the one who gave you y) think of this?

Were they aware of your OpenPhil grant when they offered you funding?

Did OpenPhil roll back your grant because you did not have use for more than X, or for some other reason?

I have a feature removal suggestion.

Can the notification menu please go back to being like LW?

The LW version (which the EA Forum used to have too) is more compact, which gives a better overview. I also prefer when karma and notifications are separate. I don't want to see karma updates in my notification dropdown.

From the linked report:

We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage. 

Here's a story I recently heard from someone I trust:

An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from the Survival and Flourishing Fund (SFF). When OpenPhil found out about this, they rolled back the amount of money they would pay to this project, by the exact amount that this project was promised by SFF, rendering the SFF grant meaningless. 

I don't think this is OK behaviour, and definitely not what you do to get more funders involved. 


Is there some context I'm missing here? Or has there been some misunderstanding? Or is this as bad as it looks?


I'm not going to name either the source or the project publicly (they can name themselves if they want to), since I don't want to get anyone else into trouble, or risk their chances of getting OpenPhil funding. I also want to make clear that I'm writing this on my own initiative. 

There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment would probably have ended up with me not taking action at all, and I think this sort of thing is worth calling out. 


Edit: I've partly misunderstood what happened. See comment below for clarification. My apologies. 

Here are the other career coaching options on the list, in case you want to connect with our colleagues. 
