robirahman

Data science graduate student. Member of Effective Altruism DC and Harvard EA.

Comments

A Red-Team Against the Impact of Small Donations

If the financial capital is $46B and the population is 10k, the average person's career capital is worth about $5M of direct impact (as opposed to the money they'll donate)? I have a wide confidence interval, but that seems reasonable. I'm curious to see how many people currently going into EA jobs will still be working in them 30 years later.
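
Spelling out the division behind that figure (treating the $46B as spread evenly across the ~10k people):

$$\frac{\$46\text{B}}{10{,}000\ \text{people}} \approx \$4.6\text{M per person,}$$

which rounds to about $5M.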

A Red-Team Against the Impact of Small Donations

Sorry, I didn't mean to imply that biorisk does or doesn't have "fast timelines" in the same sense as some AI forecasts. I was responding to the point that "if [EA organization] is a good use of funds, why doesn't OpenPhil fund it?" was being answered with the claim that OpenPhil isn't funding much in the present (disbursing 1% of their assets per year, a really low rate even for a highly patient funder) because they think they will find better things to fund in the future. That explanation seems wrong to me.

A Red-Team Against the Impact of Small Donations

> At face value, [an EA organization] seems great. But at the meta-level, I still have to ask, if [organization] is a good use of funds, why doesn't OpenPhil just fund it?

> Open Phil doesn't fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.

This is highly implausible. First of all, if it were true, it would imply that instead of funding things now, they should just fundraise and sit on their piles of cash until they discover those opportunities.

But it also implies they have (in my opinion, excessively) high confidence that the hinge of history and astronomical waste arguments are all wrong, and that transformative AI is farther away than most forecasters believe. If someone is going to invent AGI in 2060, we have only a limited amount of time to alter the probability that it goes well versus badly for humanity.

When you're working on global poverty, perhaps you'd want to hold off on donations if your investments are growing by 7% per year while the GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today.
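
In one-year terms, and assuming a marginal bednet's value falls roughly in line with recipient-country growth, the gain from waiting is about

$$\frac{1.07}{1.02} - 1 \approx 4.9\%,$$

i.e. the 107 bednets you could fund next year would be worth roughly 105 of today's bednets.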

For x-risks this seems totally implausible. What's the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private-sector AI R&D has been growing by 27% per year since 2015, and I really don't think alignment progress has outpaced that. If the time until AGI is limited and short, then we're actively falling behind. I don't think their investments or effectiveness are increasing fast enough for this explanation to make sense.

Open Thread: November 2021

I noticed something at EAG London which I want to promote to someone's conscious attention. Almost no one at the conference was overweight, even though the attendees were mostly from countries where overweight and obesity rates range from 50-80% and 20-40%, respectively. I estimate that I interacted with 100 people, of whom 2 were overweight; a rough calculation after the list below suggests this is very unlikely to be chance. Here are some possible explanations; if the last one is true, it is potentially very concerning:

1. effective altruism is most common among young people, who have lower rates of obesity than the general population
2. effective altruism is correlated with veganism, which leads to generally healthy eating, which leads to lower rates of diseases including obesity
3. effective altruists have really good executive function, which helps resist the temptation of junk food
4. selection effects: something about effective altruism doesn't appeal to overweight people
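
As a rough check that this isn't just sampling noise, here's a sketch that assumes attendees were an unbiased, independent draw from populations with the overweight rates above (scipy used only for the binomial tail):

```python
# Probability of meeting at most 2 overweight people among 100, if attendees
# were a random sample from a population with the given overweight rate.
# Purely illustrative; assumes independent draws and an unbiased sample.
from scipy.stats import binom

for base_rate in (0.5, 0.65, 0.8):
    p = binom.cdf(2, 100, base_rate)  # P(X <= 2), X ~ Binomial(n=100, p=base_rate)
    print(f"base rate {base_rate:.0%}: P(at most 2 of 100) ≈ {p:.1e}")
```

Even at the 50% end of the range, that probability is on the order of 10^-27, so the skew is real and calls for one of the explanations above (or some combination of them).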

It's clearly bad that EA has low representation of religious adherents and underprivileged minorities. Without getting into the issue of missing out on diverse perspectives, it's also directly harmful in that it limits our talent and donor pools. Churches receive over $50 billion in donations each year in the US alone, an amount that dwarfs annual outlays to all effective causes. I think this topic has been covered on the forum before from the religion and ethnicity angles, but I haven't seen it for other types of demographics.

If we're somehow limiting participation to the 3/10ths of the population with a BMI under 25, are we needlessly keeping out 7/10ths of the people who might otherwise work to effectively improve the world?

Managing COVID restrictions for EA Global travel: My plans + request for other examples

Additional suggestion: don't just have a photo of your vaccine card on your phone; physically bring it or scan and print a copy.

Managing COVID restrictions for EA Global travel: My plans + request for other examples

Thanks for the writeup! I'm following this process but going to the UK a few days earlier, so I'll try this out and provide results before you leave.

I ordered a 2-day COVID test and received a booking reference number. My flight arrives in London on Friday, so tomorrow morning I will fill out the passenger locator form.

Edit, 2021-10-20: Submitted all my info to the UK gov website and got a passenger locator form. I'll update tomorrow when boarding the plane.

2021-10-21: Will be departing from the US for the UK on Thursday evening.

2021-10-22: Will be arriving in London on Friday morning.