All of Linda Linsefors's Comments + Replies

I misunderstood the order of events, which does change the story in important ways. The way OpenPhil handled this is not ideal for encouraging other funders, but there were no broken promises. 

I apologise and I will try to be more careful in the future. 

One reason I was too quick on this is that I am concerned about the dynamics that come with having a single overwhelmingly dominant donor in AI Safety (and other EA cause areas), which I don't think is healthy for the field. But this situation is not OpenPhil's fault.

Below the story from someone wh... (read more)

I've asked for more information and will share what I find, as long as I have permission to do so.

Given the order of things, and the fact that you did not have use for more money, this does indeed seem reasonable. Thanks for the clarification.

There are benefits to having this discussion in public, regardless of how responsive OpenPhil staff are.

By posting this publicly I already found out that they did the same to Neel Nanda. Neel thought that in his case this was "extremely reasonable". I'm not sure why and I've just asked some follow-up questions.

I get from your response that you think 45% is a good response record, but that depends on how you look at it. In the reference class of major grantmakers it's not bad, and I don't think OpenPhil is doing something wrong for not responding to more... (read more)

7
Vasco Grilo
1mo
I agree. I was not clear. I meant that, for this case, I think "public criticism after private criticism" > "public criticism before private criticism" > "public criticism without private criticism" > "private criticism without public criticism". So I am glad you commented if the alternative was no comment at all. Yes, I would say the response rate is good enough to justify getting in touch (unless we are talking about people who consistently did not reply to past emails). At the same time, I actually think people at Open Phil might be doing something wrong by not replying to some of my emails assuming they read them, because it is possible to reply to an email in 10 s. For example, by saying something like "Thanks. Sorry, but I do not plan to look into this.". I guess people assume this is as bad or worse than no reply, but I would rather have a short reply, so I suppose I should clarify this in future emails.

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.


Thanks for sharing. 
 

What did the other grantmaker (the one who gave you y) think of this?

Were they aware of your OpenPhil grant ... (read more)

9
Neel Nanda
1mo
I got the OpenPhil grant only after the other grant went through (and wasn't thinking much about OpenPhil when I applied for the other grant). I never thought to inform the other grant maker after I got the OpenPhil grant, which maybe I should have in hindsight out of courtesy? This was covering some salary for a fixed period of research, partially retroactive, after an FTX grant fell through. So I guess I didn't have use for more than X, in some sense (I'm always happy to be paid a higher salary! But I wouldn't have worked for a longer period of time, so I would have felt a bit weird about the situation)

I have a feature removal suggestion.

Can the notification menu please go back to being like LW?

The LW version (which EA Forum used to have too) is more compact, which gives a better overview. I also prefer when karma and notifications are separate.  I don't want to see karma updates in my notification dropdown.

From the linked report:

We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage. 

Here's a story I recently heard from someone I trust:

An AI Safety project got their grant application approved by OpenPhil, but still had more room for funding. After OpenPhil promised them a grant but before... (read more)


[I work at Open Philanthropy] Hi Linda, thanks for flagging this. After checking internally, I’m not sure what project you’re referring to here; generally speaking, I agree with you/others in this thread that it's not good to fully funge against incoming funds from other grantmakers in the space after agreeing to fund something, but I'd want to have more context on the specifics of the situation.

It totally makes sense that you don’t want to name the source or project, but if you or your source would feel comfortable sharing more information, feel free to... (read more)

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.

In theory, you can imagine OpenPhil wanting to fund their "fair share" of a project, evenly split across all other interested grantmakers.... (read more)

If this was for any substantial amount of money I think it would be pretty bad, though it depends on the relative size of the OP grants and SFF grants. 

I think most of the time you should just let promised funding be promised funding, but there is a real and difficult coordination problem here. The general rule I follow when I have been a recommender on the SFF or Lightspeed Grants has been that when I am coordinating with another funder, and we both give X dollars a year but want to fund the organization to different levels (let's call them level A f... (read more)

Thanks for sharing, Linda!

After OpenPhil promised them a grant but before it was paid out, this same project also got a promise of funding from Survival and Flourishing Fund (SFF).

I very much agree Open Phil breaking a promise to provide funding would be bad. However, I assume Open Phil asked about alternative sources of funding in the application, and I wonder whether the promise to provide funding was conditional on the other sources not being successful.

I understand posting this here, but for following up specific cases like this, especially second hand, I think it's better to first contact OpenPhil before airing it publicly. As you mentioned, there is likely to be much context here that we don't have, and it's hard to have a public discussion without most of the context.

"There is probably some more delicate way I could have handled this, but anything more complicated than writing this comment, would probably have ended up with me not taking action at all"

That's a fair comment. I understand the importance of ov... (read more)

Here are the other career coaching options on the list, in case you want to connect with our colleagues. 

4
Linda Linsefors
1mo
Here are the other career coaching options on the list, in case you want to connect with our colleagues.
  * AI Safety Quest - Navigation Calls
  * Arkose
  * 80,000 hours - Career Coaching

I do think AISF is a real improvement to the field. My apologies for not making this clear enough.

 

The 80,000 Hours syllabus = "Go read a bunch of textbooks". This is probably not ideal for a "getting started" guide.

You mean MIRI's syllabus? 

I don't remember what 80k's one looked like back in the day, but the one that is up now is not just "Go read a bunch of textbooks".

I personally used CHAI's one and found it very useful.

Also, sometimes you should go read a bunch of textbooks. Textbooks are great. 

Week 0: Even though it is a theory course, it would likely be useful to have some basic understanding of machine learning, although this would vary depending on the exact content of the course. It might or might not make sense to run a week 0 depending on most people's backgrounds.

I would recommend having a week 0 with some ML and RL basics. 

I did a day 0 ML and RL speed run at the start of two of my AI Safety workshops at the EA hotel in 2019. Were you there for that? It might have been recorded, but I have no idea where it might have ended up. Althoug... (read more)

2
Chris Leong
1mo
I was there for an AI Safety workshop, I can't remember the content though. Do you know what you included?

I was surprised to read this: 

In 2020, the going advice for how to learn about AI Safety for the first time was:

  1. Read everything on the alignment forum. [...]
  2. Speak to AI safety researchers. [...]


MIRI, CHAI and 80k have all had public reading guides since at least 2017, when I started studying AI Safety.

So it seems like at least part of the problem was that these... (read more)

2
Chris Leong
1mo
I didn't know that CHAI or 80,000 Hours had recommended material. The 80,000 Hours syllabus = "Go read a bunch of textbooks". This is probably not ideal for a "getting started" guide.

I'm updating the AI Safety Support - Lots of Links page, and came across this post when following trails of potentially useful links. 

Are you still doing coaching, and if "yes" do you want to be listed on the lots of links page?

1
jeffreyyun
1mo
Just saw this, yes I am! :D

For what it's worth, I think it was good that Thomas brought this up so that we could respond. 

1
Remmelt
3mo
Also see further discussion on LessWrong here and here.

I'm guessing that what Marius means by "AISC is probably about ~50x cheaper than MATS" is that AISC is probably ~50x cheaper per participant than MATS.

Our cost per participant is $0.6k - $3k USD

50 times this would be $30k - $150k per participant. 
I'm guessing that MATS is around $50k per person (including stipends).


Here's where the $12k-$30k USD comes from:

Dollar cost per new researcher produced by AISC

  • The organizers have proposed $60–300K per year in expenses. 
  • The number of non-RL participants of programs have increased from 32 (AISC4) to 130
... (read more)

5. Overall, I think AISC is less impactful than e.g. MATS even without normalizing for participants. Nevertheless, AISC is probably about ~50x cheaper than MATS. So when taking cost into account, it feels clearly impactful enough to continue the project. I think the resulting projects are lower quality but the people are also more junior, so it feels more like an early educational program than e.g. MATS. 

This seems correct to me. MATS is investing a lot in few people. AISC is investing a little in many people. 

I also agree with all the other points. 

From Lucius Bushnaq:

I was the private donor who gave €5K. My reaction to hearing that AISC was not getting funding was that this seemed insane. The iteration I was in two years ago was fantastic for me, and the research project I got started on there is basically still continuing at Apollo now. Without AISC, I think there's a good chance I would never have become an AI notkilleveryoneism researcher. 

Full comment here: This might be the last AI Safety Camp — LessWrong

Thanks for this comment. To me this highlights how AISC is very much not like MATS. We're very different programs doing very different things. MATS and AISC are both AI safety upskilling programs, but we are using different resources to help different people with different aspects of their journey. 

I can't say where AISC falls in the talent pipeline model, because that's not how the world actually works. 

AISC participants have obviously heard about AI safety, since they would not have found us otherwise. But other than that, people are all over th... (read more)

I don't like this funnel model, or any other funnel model I've seen. It's not wrong exactly, but it misses so much that it's often more harmful than helpful. 

For example:

  • If you actually talk to people, their story is not this linear, and that is important. 
  • The picture makes it look like AISC, MATS, etc. are interchangeable, or just different-quality versions of the same thing. This is very far from the truth. 

I don't have a nice-looking replacement for the funnel. If I had a nice clean model like this, it would probably be just as bad. The real world is just very messy.

We have reached out to them and gotten some donations. 

  • All but 2 of the papers listed on Manifund as coming from AISC projects are from 2021 or earlier. Because I'm interested in the current quality in the presence of competing programs, I looked at the two from 2022 or later: this in a second-tier journal and this in a NeurIPS workshop, with no top conference papers. I count 52 participants in the last AISC so this seems like a pretty poor rate, especially given that 2022 and 2023 cohorts (#7 and #8) could both have published by now.
  • [...] They also use the number of AI alignment researchers created as an impo
... (read more)

The impact assessment was commissioned by AISC, not independent.

Here are some evaluations not commissioned by us:

If you have suggestions for how AISC can get more people to do more independent evaluations, please let me know.

I see your concern. 

Remmelt and I have different beliefs about AI risk, which is why the last AISC was split into two st... (read more)

2
Remmelt
3mo
The “Do Not Build Uncontrollable AI” area is meant for anyone to join who have this concern. The purpose of this area is to contribute to restricting corporations from recklessly scaling the training and uses of ML models. I want the area to be open for contributors who think that: 1. we’re not on track to solving safe control of AGI; and/or 2. there are fundamental limits to the controllability of AGI, and unfortunately AGI cannot be kept safe over the long term; and/or 3. corporations are causing increasing harms in how they scale uses of AI models. After thinking about this over three years, I now think 1.-3. are all true. I would love more people who hold any of these views to collaborate thoughtfully across the board!

But on the other hand, I've regularly met alumni who tell me how useful AISC was for them, which convinces me AISC is clearly very net positive. 

Naive question, but does AISC have enough of such past alumni that you could meet your current funding need by asking them for support? It seems like they'd be in the best position to evaluate the program and know that it's worth funding.

  • MATS has steadily increased in quality over the past two years, and is now more prestigious than AISC. We also have Astra, and people who go directly to residencies at OpenAI, Anthropic, etc. One should expect that AISC doesn't attract the best talent.


There is so much wrong here, I don't even know how to start (i.e. I don't know what the core cruxes are) but I'll give it a try. 

AISC is not MATS because we're not trying to be MATS. 

MATS is trying to find the best people and have them mentored by the best mentors, in the best environment. This is... (read more)

How does the conflictedness compare to the conflictedness (if any) you would feel if you were a business performing services for Meta?

To me, selling services to a bad actor feels significantly more immoral than receiving their donation, since selling a service to them is much more directly helpful to them.

(This is not a comment on how bad Meta is. I do not have an informed opinion on this.)

The culture of “when in doubt, apply” combined with the culture of “we can do better things with our time than give feedback,” combined with lack of transparency regarding the statistical odds of getting funded, is a dangerous mix that creates resentment and harms the community.

Agree!
I believe this is a big contributor to burnout and to people leaving EA.

See also: The Cost of Rejection — EA Forum (effectivealtruism.org)

 

However, I don't think the solution is more feedback from grant makers. The vetting bottleneck is a big part of the problem. Requiring mor... (read more)

2
Jeff Kaufman
3mo
DM'd!

I would advise to just ask for feedback from anyone in one's EA network you think have some understanding of grantmaker perspectives. For example, if 80k hrs advisors, your local EA group leadership and someone you know working at an EA org

Most people in EA don't have anyone in their network with a good understanding of grant makers' perspectives. 

I think that "your local EA group leadership" usually don't know. The author of this post is a national group founder, and they don't have a good understanding of what grant makers want. 

A typical lunch c... (read more)

1
David T
3mo
And even if you happen to have access to people with relevant knowledge, all the arguments against the actual grantmakers offering feedback applies more strongly to them: * its time consuming, more so because they're reading the grant app in addition to their job rather than as part of it * giving "it makes no sense" feedback is hard, more so when personal relationships are involved and the next question is going to be "how do I make it make sense?" * people might overoptimize for feedback, which is a bigger problem when the person offering the feedback has more limited knowledge of current grant selection priorities I get that casually discussing at networking events might eliminate the bottom 10% of ideas (if everyone pushes back on your idea that ballet should be a cause area or that building friendly AI in the form of human brain emulation is easy, you probably shouldn't pursue it), but I'm not sure how "networking" can possibly be the most efficient way of improving actual proposals. Unless - like in industrial funding - there's a case for third party grant writer / project manager types that actually help people turn half decent ideas into well-defined fundable projects for a share of the fund? 
3
Ulrik Horn
3mo
Good point, perhaps I have been especially lucky then as a newcomer to direct EA work and grant applications. I guess that makes me feel even more gratitude for all the support I have received including people helping both discuss project ideas as well as help review grant applications.

I think paying a friendly outsider would be the best option. I don't expect I have much say in this, since I don't have much spare money, so I will not be the one hiring. But I would like TracingWoodgrains to look into the Nonlinear story. 

My current understanding is that OpenPhil is very unlikely to give us money. 

2
Chris バルス
2mo
I have read the posts related to your funding situation, and I still haven't fully figured out why OF wouldn't fund you. Would you like to bring light to the reason why, if you know? 

Disagree.

I think this section illustrated something important, that I would not have properly understood without a real demonstration with real facts about a real person. It hits different emotionally when it's real, and given how important this point is, and how emotionally charged everything else is, I think I needed this demonstration for the lesson to hit home for me. 

I also don't think this is retaliation. If that was the goal, Kat could have just ended the section after making Ben look maximally bad, and not added the clarifying context.

I also don't think this is retaliation. If that was the goal, Kat could have just ended the section after making Ben look maximally bad, and not added the clarifying context.

This is not true. If Kat had just left in the section making Ben look bad, everyone would have been "what? Where is the evidence for this? This seems really bad?". 

The way it is written it still leaves many people with an impression, but alleviates any burden of proof that Kat would have had.

You might still think it's a fine rhetorical tool to use, but I think it's clear that Kat of course couldn't have just put the accusations into the post without experiencing substantial backlash and scrutiny of her claims.

I wrote this in response to Ben's post 

Thanks for writing this post.

I've heard enough bad stuff about Nonlinear from before that I was seriously concerned about them. But I did not know what to do. Especially since part of their bad reputation is about attacking critics, and I don't feel well positioned to take that fight.

I'm happy some of these accusations are now out in the open. If it's all wrong and Nonlinear is blame free, then this is their chance to clear their reputation. 

I can't say that I will withhold judgment until more evidence come

... (read more)
  1. The Nonlinear team should have gotten their replies up sooner, even if in pieces. In the court of public opinion, time/speed matters. Muzzling up and taking ~3 months to release their side of the story comes across as too polished and buttoned up.


Strong disagree. 

A) Sure, all else equal, speed would have been better. But take the hypothesis that NL is mostly innocent as true for a moment: getting such a post written about you must be absolutely terrible. If it was me, I'd probably not be in good shape to write anything in response very quickly... (read more)

As far as I know, the reason AISS shut down was 100% because of lack of funding. However, it's not so easy to just start things up again. People who don't get paid tend to quit and move on.

I don't understand how to do this on your search page. 

3
Sarah Cheng
9mo
"Filter by topics" lets you search for and select any number of topics, and the results will show anything that has all of the selected topics. Hope that helps!

EA Forum feature request

(I'm not sure where to post this, so I'm writing it here)

1) Being able to filter for multiple tags simultaneously. Mostly I want to be able to filter for "Career choice" + any other tag of my choice. E.g. AI or Academia to get career advice specifically for those career paths. But there are probably other useful combos too. 

3
Sarah Cheng
9mo
Thanks for the feedback Linda! I believe you can accomplish this using the topic filters on our current search page, but please let me know if you run into any issues.

(Just for future reference, I think “EA Forum feature suggestion thread” is the designated place to post feature requests.)

  • Someone could set up a leadership fast-track program.


How is this on the decentralisation list? 

8
Larks
10mo
Yes, I generally think of things like a meritocratic officer corps as being a pro-centralisation move, vs relying on personal connections and military aristocrats with independent sources of legitimacy. 
3
[anonymous]
10mo
I think this is related to: "Distance myself from the idea that I’m “the” face of EA...Trying to correct this will hopefully be a step in the direction of decentralisation on the perception and culture dimensions....I’m also going to try to provide even more support to other EA and EA-aligned public figures, and have spent a fair amount of time on that this year so far. "

Reading this post is very uncomfortable in an uncanny valley sort of way. A lot of what is said is true and needs to be said, but the overall feeling of the post is off. 

I think most of the problem comes from blurring the line between how EA functions in practice for people who are close to money and the rest of us. 

Like, sure, EA is a do-ocracy, and I can do whatever I want, and no one is stopping me. But also, every local community organiser I talk to talks about how CEA is controlling and that their funding comes with lots of strings attached. ... (read more)

But also, every local community organiser I talk to talks about how CEA is controlling and that their funding comes with lots of strings attached.

(Just wanted to add a counter datapoint: I have been a local community organizer for several years and this has not been my experience.)

I wasn't sure about the 'do-ocracy' thing either. Of course, it's true that no one's stopping you from starting whatever project you want - I mean, EA concerns the activities of private citizens. But, unless you have 'buy-in' from one of the listed 'senior EAs', it is very hard to get traction or funding for your project (I speak from experience). In that sense, EA feels quite like a big, conventional organisation.

The type of AI we are worried about is an AI that pursues some kind of goal, and if you have a goal, then self-preservation is a natural instrumental goal, as you point out in the paperclip maximiser example. 

It might be possible that someone builds a superintelligent AI that doesn't have a goal. Depending on your exact definition, GPT-4 could be counted as superintelligent, since it knows more than any human. But it's not dangerous (by itself) since it's not trying to do anything. 

You are right that it is possible for something that is intellig... (read more)

In addition, if I were getting career-related information from a community builder, that community builder's future career prospects depended on getting people like me to choose a specific career path, and that fact was neither disclosed nor reasonably implied, I would feel misled by omission (at best).

As far as I know, this is exactly what is happening. 

Can we address critiques of the DALY framework by selecting moral weighting frameworks that are appropriate for our particular applications, addressing methodological critiques when they get raised, and taking care to contextualize our usage of a particular framework? - Maybe.

I'm pretty sure the answer is "No, we can't". The whole point of DALY is that it lets us compare completely different interventions. If you replace it with something that is different in each context, you have not replaced it.

I think the best we can do is to calibrate it better, buy a... (read more)

2
MHR
10mo
Sorry if my comment was unclear. I don't mean that we should use a different set of weights when looking at different interventions, I mean that we should use different weighting frameworks depending on the types of questions we are trying to ask. If we're trying to quantify the impacts of different interventions on health outcomes, the post-2010 DALY scale might be reasonable. If we're trying to quantify the impacts of different interventions on wellbeing, then WELLBYs might be reasonable. If we value improvements in health outcomes independent of their impact on subjective wellbeing, then some type of blended framework (e.g. GiveWell's moral weighting scheme) might make sense.  I'll return to the RP Moral Weights Project as an example of what I'm critiquing (the Moral Weight Project is fantastic in lots of ways, I don't mean to say the whole project is bad because of this one critique). For the project, the authors are trying to develop weights that express animals' changes in hedonic wellbeing in terms of human DALYs. But it's not clear that DALYs are a coherent unit for what they're trying to measure. The give trying to "estimate the welfare gain from, say, moving layer hens from cages to a cage-free system" as an example of the kind of application they're looking at. But locking a human in a cage wouldn't obviously change the number of DALYs gained in the world, at least under the post-2010 definition. For that application, a unit that included subjective wellbeing would make a lot more sense. That's the kind of thing I'm trying to get at.  But I do agree with you that asking disabled people about their experiences and incorporating those results into whatever weighting scale we use is a very valuable step!

I recently had a conversation with a local EA community builder. Like many local community builders, they got their funding from CEA. They told me that their continued funding was conditioned on scoring high on the metric of how many people they directed towards longtermist career paths. 

If this is in fact how CEA operates, then I think this is bad, because of the reasons described in this post. Even though I'm in AI Safety, I value EA being about more than X-risk prevention.

Hey Linda,

I'm head of CEA's groups team. It is true that we care about career changes - and it is true that our funders care about career changes.  However it is not true that this is the only thing that we care about. There are lots of other things we value, for example grant-recipients have started effective institutions, set up valuable partnerships, engaged with public sector and philanthropic bodies. This list is not exhaustive! We also care about the welcomingness of groups, and we care about groups not using "polarizing techniques".

In terms of ... (read more)

8
Jason
10mo
In addition, if I were getting career-related information from a community builder, that community builder's future career prospects depended on getting people like me to choose a specific career path, and that fact was neither disclosed nor reasonably implied, I would feel misled by omission (at best). By analogy, let's say I went to a military recruiter and talked to them at length about opportunities in various branches of the military. Even though they identified themselves as a generic military recruiter, they secretly only got credit for promotion if I decided to join the Navy. I would feel entitled to proactive disclosure of that information, and would feel misled if I got a pro-Navy pitch without such disclosure.  (I am not saying I would feel misled if the community builder were evaluated on getting people to make EA career choices more broadly. I think it's pretty obvious that recruiting is part of the mission and that community builders may be evaluated on that. Likewise, I wouldn't feel misled if the military recruiter didn't tell me they were evaluated on how many people they recruited for the military as a whole.)

I think the specific list of orgs you picked is a bit ad hoc but also ok. 

It looks like you've chosen to focus on research orgs specifically, plus overview resources. I think this is a reasonable choice. 

Some orgs that would fit on the list (i.e. other research orgs) are:
* Conjecture
* Orthogonal
* CLR
* Convergence
* Aligned AI
* ARC

There are also several important training and support orgs that are not on your list (AISC, SERI MATS, etc). But I think it's probably the right choice to just link to aisafety.training, and let people find various progra... (read more)

1
mariekedev
10mo
Thanks, this is great. I added all the orgs you listed and  removed the confusing arrow. I'll go over the list with a few AI experts from our community to check if the selected orgs/initiatives can be better selected and organised. 

There are lots more AI Safety orgs and initiatives. Not sure if it would be practical to add them all.

See here for many of them: aisafety.world

3
mariekedev
10mo
Thanks! I added 'Many more, see aisafety.world' to the branch.  Are there particular orgs and initiatives that you'd put in the branch itself as well that are currently missing, so they stand out more? 

Is this still an impact market? It looks to me like this is primarily just a fundraising platform. I'm not complaining. I think EA should have a fundraising platform! I'm just confused.

8
Dawn Drescher
1y
Hiii! Thanks! Yeah, what’s a market and what isn’t… I’m used to a rather wide definition from economics, but we did briefly consider whether we should use a different or sub-brand (like ranking.impactmarkets.io or so) for this project. The idea is that, if all goes well, we roll out something like the carbon credit markets but for all positive impact via a three-phase process: 1. In the first phase we want to work with just the donor impact score. Any prizes will be attached to such a score and basically take the shape of follow-on donations. This is probably a market to the extent that Metaculus is a market. They say “sort of” and prefer the term “prediction aggregator.” So maybe we’re currently an impact aggregator. 2. In the second phase, we want to introduce a play money currency that we might call “impact credit” or “impact mark.” The idea is to reward people with high scores with something that they can transfer within the platform so that incentives for donors will be controlled less and less by the people with the prize money and increasingly by the top donors who have proved their mettle and received impact credits as a result. We’ll start moving in that direction if we get something like 100+ monthly active users. Metaculus would probably consider this an “impact market” and Manifold Markets even has it in its name. But rebranding away from “market” and then maybe rebranding back towards “market” a year later seemed unwise to us. 3. Eventually, and this brings us to the third phase, we want to understand the legal landscape well enough to allow trade of impact credits against dollars or other currencies. We would like for impact credits to enjoy the same status that carbon credits already have. They should function like generalized carbon credits. I think at this point the resulting market will be widely considered a literal “market.” This is much more of a long-term vision though.

Overall I think this is a good post. However, this part surprised me.

However, I am personally worried about people skill-building for a couple of years and then not switching to doing the most valuable alignment work they can, because it can be easy to justify that your work is helping when it isn’t. This can happen even at labs that claim to have a safety focus! Working at any of Anthropic, DeepMind, Redwood Research, or OpenAI seems like a safe bet though. 

I agree with the first bit. I'm also worried that people motivated to help with alignment end ... (read more)

For me personally, the core of Effective Altruism is "it's not about you". Everything else follows from there.

This is very much in contrast to other cultures of altruism I have encountered, which focus very much on the mental state of the giver. When you stop questioning whether you are pure and have the right motives, etc., and just focus on results, that's when you get EA.

But also, don't be 100% altruistic. Some of your efforts should be about you. If you only take care of yourself for instrumental reasons, you will systematically underinvest in yourself. So be just genuinely egoistic with some parts of your effort, where "be egoistic" just means "do whatever you want". 

Thanks, that clarifies things.

I'm still not sure what you mean by org. Do you count CEA as an org, or EVF as an org?

I think in terms of projects and people and funding. Legal orgs are just another part of the infrastructure that supports funding and people. 

I think it would be great if AI Safety Support were given enough funding to hire 50 people, and used that funding to provide financial security to lots of existing projects. Although that is heavily biased by the fact that I personally know and trust the people running AISS, and that their work s... (read more)

2
DavidNash
1y
In my example I was more referring to orgs like EVF, but I imagine if EA was more centralised there would be a range of larger orgs, some more like EVF and others more like Open Phil, who aren't incubating projects.

I've been Alice. I had some experiences within EA that led me to take a year-long EA-leave. When I left I did not know for how long, or if I would come back. This was definitely the right thing for me to do. If you're Alice and you feel you need to take a step back, then you are probably correct. Even if you can't exactly articulate why, you are probably correct. If the EA network is net positive for you and your work, then you will be back. 

I'm talking about increasing the number of large organisations.

I'm confused about what you are suggesting exactly. When reading the post I assumed that you were suggesting more centralisation in general. If there were a competitor to CEA, I would not call that "more centralisation". Although maybe it depends on how we get there from here?

If several small orgs join together to form a new big org, that would seem like going towards more centralisation. But if someone starts a new org that grows into a large org, which competes with an existing large org, that would look li... (read more)

2
DavidNash
1y
I think it would be better to have 20 organisations with about 50 people each than 3 organisations with 50 people and then everyone else working as individuals. One organisation with 1000 people would probably be the worst option.