Aaron_Scher

I'm Aaron. I've done university group organizing at the Claremont Colleges for a bit. My current cause prioritization is AI Alignment.


Comments

Global health is important for the epistemic foundations of EA, even for longtermists

This is great and I’m glad you wrote it. For what it’s worth, the evidence from global health does not appear to me strong enough to justify high credence (>90%) in the claim “some ways of doing good are much better than others” (maybe operationalized as "the top 1% of charities are >50x more cost-effective than the median", but I made up these numbers).

The DCP2 (2006) data (cited by Ord, 2013) gives the distribution of the cost-effectiveness of global health interventions. This is not the distribution of the cost-effectiveness of possible donations you can make. The data tells us that treatment of Kaposi Sarcoma is much less cost-effective than antiretroviral therapy in terms of avoiding HIV-related DALYs, but it tells us nothing about the distribution of charities, and therefore does not actually answer the relevant question: of the options available to me, how much better are the best than the others?

If there is one charity focused on each of the health interventions in the DCP2 (and they are roughly equally good at turning money into the interventions) – and therefore one action corresponding to each intervention – then it is true that the very best ways of doing good available to me are much better than average.

The other extreme is that the most cost-effective interventions were funded first (or people only set up charities to do the most cost-effective interventions), and therefore the best opportunities still available are very close to average cost-effectiveness. I expect we live somewhere between these two extremes, and that there are more charities set up for antiretroviral therapy than for Kaposi Sarcoma.
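To make these two extremes concrete, here is a toy simulation (the lognormal shape and every number in it are made up for illustration; this is not the DCP2 data). It compares the best-to-median ratio when donation options mirror the interventions against the case where the most cost-effective interventions are already fully funded:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy cost-effectiveness for 1,000 hypothetical interventions (e.g., DALYs averted per $1,000).
# The lognormal shape and all parameters here are invented for illustration only.
effectiveness = np.sort(rng.lognormal(mean=0.0, sigma=1.5, size=1000))[::-1]  # best first

def best_vs_median(options):
    """How much better is the best available option than the median available option?"""
    return options.max() / np.median(options)

# Extreme 1: one charity per intervention, so donation options mirror the interventions.
print("one charity per intervention:", round(best_vs_median(effectiveness), 1))

# Extreme 2: the most cost-effective interventions are already fully funded,
# so only the bottom 90% remain as donation opportunities.
print("top 10% already funded:", round(best_vs_median(effectiveness[100:]), 1))
```

The first ratio comes out far larger than the second, which is why the distribution of charities (not of interventions) is the one that answers the relevant question.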

The evidence that would change my mind is if somebody publicly analyzed the cost-effectiveness of all (or many) charities focused on global health interventions. I have been meaning to look into this but haven’t yet gotten around to it. It’s a great opportunity for the Red Teaming Contest, and others should try to do this before me. My sense is that GiveWell has done some of this but only publishes the analysis for its recommended charities; and it probably only looks at charities it expects to be better than average – so it wouldn’t have a representative data set.

Is the time crunch for AI Safety Movement Building now?

The edit is key here. I would consider running an AI-safety arguments competition (in order to do better outreach to graduate-level-and-above researchers) to be a form of movement building, and one for which crunch time could be the last five years before AGI (although earlier is probably better for norm changes).

One value add of compiling good arguments is that if there is a period of panic following advanced capabilities (some form of fire alarm), it will be really helpful to have existing, high-quality arguments and resources on hand to help direct this panic into positive actions.

This all said, I don't think Chris's advice applies here: 

I would be especially excited to see people who are engaged in general EA movement building to pass that onto a successor (if someone competent is available) and transition towards AI Safety specific movement building.

I think this advice likely doesn't apply because the models/strategies for this sort of AI Safety field building are very different from those of general EA community building (e.g., university groups), the background knowledge is quite different, the target population is different, the end goal is different, etc. If you are a community builder reading this and you want to transition to AI Safety community building but don't know much about it, probably the best thing you can do is spend >20 hours learning about AI Safety. The AGISF curricula are pretty great.

We should expect to worry more about speculative risks

I’m a bit confused by this post. I’m going to summarize the main idea back, and I would appreciate it if you could correct me where I’m misinterpreting.

Human psychology is flawed in such a way that we consistently estimate the probability of existential risk from each cause to be ~10% by default. In reality, the probability of existential risk from particular causes is generally less than 10% [this feels like an implicit assumption], so gaining more information about a risk causes us to decrease our worry about it. We can get more information about easier-to-analyze risks, so we update our probabilities downward after getting this correcting information, but for hard-to-analyze risks we do not get such correcting information, so we remain quite worried. AI risk is currently hard to analyze, so we remain in this state of prior belief (although the 10% figure varies by individual; it could be 50% or 2%).

I’m also confused about this part specifically: 

initially assign something on the order of a 10% credence to the hypothesis that it will by default lead to existentially bad outcomes. In each case, if we can gain much greater clarity about the risk, then we should think there’s about a 90% chance this clarity will make us less worried about it

 – why is there a 90% chance that more information leads to less worry? Is this assuming that 90% of risks have P(Doom) < 10%, and the other 10% of risks have P(Doom) ≥ 10%?
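For what it's worth, here is the toy arithmetic behind one possible reading of that sentence (my own gloss, not necessarily what the post intends): if my credence that a given risk leads to doom by default is 10%, and "much greater clarity" would essentially settle the question one way or the other, then the clarity should reveal "no doom" (and so reduce my worry) about 90% of the time, since my expected credence after looking has to equal my credence before looking.

```python
# One possible reading of the quoted claim; the 10% is the post's own illustrative figure.
prior_doom = 0.10  # default credence that this risk leads to existentially bad outcomes

# Suppose "much greater clarity" essentially resolves the question:
# it confirms the risk with probability equal to my prior, and rules it out otherwise.
p_clarity_confirms = prior_doom
p_clarity_reassures = 1 - prior_doom

# My expected credence after clarity equals the prior,
# but 90% of the time I end up less worried.
expected_posterior = p_clarity_confirms * 1.0 + p_clarity_reassures * 0.0
print(p_clarity_reassures, expected_posterior)  # 0.9 0.1
```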

On funding, trust relationships, and scaling our community [PalmCone memo]

A solution that doesn’t actually work but might be slightly useful: slow the lemons by making EA-related funding less appealing than the alternatives.

One specific way to do this is to pay less than industry pays for similar positions: an altruistic pay cut. Lightcone, the org Habryka runs, does this: “Our current salary policy is to pay rates competitive with industry salary minus 30%.” At the full-time employment level, this seems like one way to dissuade people who are primarily interested in money, at least assuming they are qualified and hardworking enough to get a job in industry with similar ease.

Additionally, it might help to frame university group organizing grants in the big scheme of the world. For instance, as I was talking to somebody about group organizing grants, I reminded them that the amount of money they would be making (which I probably estimated at a couple thousand dollars per month) is peanuts compared to what they’ll be earning in a year or two when they graduate from a top university with a median salary of ~$80k. It also seems relevant to emphasize that you actually have to put time and effort into organizing a group for a grant like this; it’s not free money – it’s money in exchange for time/labor. Technically it’s possible to do nothing and pretty much be a scam artist, but I didn’t want to say that.

This solution doesn’t work for a few reasons. One is that it only focuses on one issue – the people who are actually in it for themselves. I expect we will also have problems with well-intentioned people who just aren’t very good at stuff. Unfortunately, this seems really hard to evaluate, and many of us deal with imposter syndrome, so self-evaluation/selection seems bad.

This solution also doesn’t work because it’s hard to assess somebody’s fit for a grant, meaning it might remain easier to get EA-related money than other money. I claim that it is hard to evaluate somebody’s fit for a grant in large part because feedback loops are terrible. Say you give somebody some money to do some project. Many grants have some product or deliverable that you can judge for its output quality, like a research paper. Some EA-related grants have this, but many don’t (e.g., paying somebody to skill up might have deliverables like a test score, but it might not). Without some form of deliverable, how do you know if your grant was any good? I don’t know; maybe somebody who does grantmaking has an idea on this. More importantly, a lot of the bets people in this community are taking are low chance of success, high EV. If you expect projects to fail a lot, then failure on past projects is not necessarily a good indicator of somebody’s fit for new grants (in fact, it’s likely good to keep funding high-EV, low-P(success) projects, depending on your risk tolerance). So this makes it difficult to actually make EA-related money harder to get than other money.
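As a toy illustration of that last point (all numbers invented): a grant with a small chance of a big payoff can have a higher expected value than a safe grant, so a track record full of failures can still be consistent with a sensible funding policy.

```python
# Invented numbers: expected value of a risky grant vs. a safe grant.
def expected_value(p_success, value_if_success):
    return p_success * value_if_success

risky = expected_value(p_success=0.05, value_if_success=1000)  # fails 95% of the time
safe = expected_value(p_success=0.95, value_if_success=30)     # almost always "succeeds"

print(risky, safe)  # 50.0 28.5 -- the risky grant is the better bet in expectation
```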

We Ran an AI Timelines Retreat

Good question. Short answer: despite being an April Fools' post, that post seems to encapsulate much of what Yudkowsky actually believes – so the social context is that the post is joking in its tone and content, but not so much in the author's underlying attitude; sorry I can't link to anything to further substantiate this. I believe Yudkowsky's general policy is to not put numbers on his estimates.

Better answer: here is a somewhat up-to-date database of predictions about existential risk from some folks in the community. You'll notice these are far below near-certainty.

One of the studies listed in the database is this one, in which a few researchers put the chance of doom pretty high.

What would you like to see Giving What We Can write about?

#17 in the spreadsheet is "How much do charities differ in impact?"

I would love to see an actual distribution of charity cost-effectiveness. As far as I know, that doesn't exist. Most folks rely on Ord (2013), which gives the distribution of health interventions, but it says nothing about where charities actually do their work.

The AI Messiah

I really enjoyed this comment, thanks for writing it Thomas!

Is it still hard to get a job in EA? Insights from CEA’s recruitment data

Thanks for writing this up and making it public. A couple of comments:

On average 45 applications were submitted to each position.

CEA Core roles received an average of 54 applications each; EOIs received an average of 53 applications each.

Is the first number a typo? Shouldn't it be ~54?

 

Ashby hires 4% of applicants, compared to 2% at CEA

...

Overall, CEA might be slightly more selective than Ashby’s customers, but it does not seem like the difference is large

Whether this is "large" is obviously subjective. When I read this, I see 'CEA is twice as selective as industry over the last couple of years'. Therefore my conclusion is something like: yes, it is still hard to get a job in EA, as evidenced by CEA being around twice as selective as industry for some roles; there are about 54 applicants per role at CEA. I think the summary of this post should be updated to say something like "CEA is more competitive than, but in the same ballpark as, industry".

EA needs money more than ever

Congrats on your first forum post!! Now, in EA Forum style, I’m going to disagree with you... but really, I enjoyed reading this and I’m glad you shared your perspective on this matter. I’m sharing my views not to tell you you’re wrong but to add to the conversation and maybe find a point of synthesis or agreement. I'm actually very glad you posted this.

I don’t think I have an obligation to help all people. I think I have an obligation to do as much good as possible with the resources available to me. This means I should specialize my altruistic work in the areas with the highest EV or marginal return. This is not directly related to the number of morally valuable beings I care about. I don’t think that now valuing future humans means I have additional obligations. What changes is the bar for what’s most effective.

Say I haven’t learned about longtermism, I think GiveWell is awesome, and I am a person who feels obligated to do good. Maybe I can save lives at ~$50,000 per life by donating to GiveDirectly. Then I keep reading and find that AMF saves lives for ~$5,000 per life. I want to do the most good, so I give to AMF, maximizing the positive impact of my donations.

Then I hear about longtermism and I get confused by the big numbers. But after thinking for a while, I decide that there are some cost-effective things I can fund in the longtermism or x-risk reduction space. I pull some numbers out of thin air and decide that a $500 donation to the LTFF will save one life in expectation.

At this point, I think I should do the most good possible per resource, which means donating to the LTFF[1].

My obligation, I think, is to do the most good I can on the margin. What longtermism changes for me is the cost-effectiveness bar that needs to be cleared. Prior to longtermism, it’s about $5,000 per life saved, via AMF. Now it’s about $500, but with some caveats. Importantly, increasing the pool of money is still good, because it is still good to prevent kids dying of malaria; however, this is not the best use of my money.

Importantly, efficiency still matters. If LTFF saves lives for $500 and NTI saves lives for $400 (number also pulled out of thin air), I should give to NTI, all else equal.
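A minimal sketch of the "bar" framing, using only the made-up cost figures from this comment plus a hypothetical $10,000 budget (none of these are real estimates): the rule is just to fund the option with the lowest cost per life, and learning about a cheaper option raises the bar rather than adding a new obligation.

```python
# Made-up cost-per-life figures from this comment; none of these are real estimates.
cost_per_life = {
    "GiveDirectly": 50_000,
    "AMF": 5_000,
    "LTFF": 500,
    "NTI": 400,
}

best = min(cost_per_life, key=cost_per_life.get)  # the current "bar"
budget = 10_000  # hypothetical budget

print(best, cost_per_life[best])      # NTI 400
print(budget / cost_per_life[best])   # ~25 lives in expectation at the current bar
print(budget / cost_per_life["AMF"])  # 2 lives if I stopped at the pre-longtermism bar
```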

I somewhat agree with you about

“Wow, we need to help current people, current animals, and future people and future animals, all with a subset of present-day resources. What a tremendous task”

However, I think it’s better to act according to “do the most good I can with my given resources, targeting the highest EV or marginal return areas”. Doing good well requires making sacrifices, and the second framing better captures this requirement.

Maybe a way I would try to synthesize my view and your conclusion is as follows: we have enormous opportunities to do good, more than ever before. If saving lives is cheaper now than ever before, then the alternatives are relatively more expensive. That is, wasting $500 used to mean forgoing only 0.1 lives; now it means forgoing a whole life. This makes wasting our resources even worse than it used to be.

Edit: Also, thank you for writing your post, because it gave me an opportunity to reflect on my own beliefs about this. :)

  1. ^

    Although realistically I would diversify because of moral uncertainty, the psychological benefits of doing good with p~1, empirical uncertainty about how good the LTFF is, the social benefits of giving to near-term causes, wanting to remain connected to current suffering, the fact that it intuitively seems good, etc.
