Robi Rahman

Data Scientist @ Stanford Institute for Human-Centered Artificial Intelligence
Working (0-5 years experience)
506 · New York, NY, USA · Joined Aug 2021



Data scientist working on the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.


EAs are counterfactually responsible for DeepMind?


I have a standing offer available to anyone reading this: pick an effective altruism topic that hasn't been written about in Spanish, and I'll bet you $20 I can write a better piece than you can. I'm not a native Spanish speaker, so this bet should be easy to win! Name a neutral judge we can both agree on, we'll show them both of our articles anonymously, and they pick the winner. Some topics I would be especially excited to try for this contest:
- a sixty-second elevator pitch about risks from artificial intelligence
- the drowning child argument
- explanation of GiveWell's top charity programs and why we support those

What do you think are the most harmful parts of EA?

> We might need to limit capacity at certain events (whereas previously we always accepted people if they were above a certain bar).

What do you mean by this? I don't get why the policy shouldn't be "always accept people if they are above a certain bar". Perhaps the bar should change, but it feels to me like the obvious way of deciding whom to accept should be something like "for everyone who wants to attend, estimate how much value it would produce for them to attend, estimate how costly it would be, and then accept everyone for whom the value exceeds the cost". It sounds like you're suggesting doing something else--what other policy are you suggesting following?

It sounds like previously the policy was that they had some threshold, decided in advance of setting up the conferences - let's say everyone above this threshold is a Qualified Effective Altruist - and their policy was to open applications for every event and make all the conferences big enough to admit every QEA who applies. But now there is less funding and there are more QEAs, so individual conferences may have to have higher bars for entry, and they might not admit you even if they think you're qualified according to the previous standard.

Apparently catering at conferences typically costs more like $50-80 per person per meal, according to the EAGxBerkeley postmortem thread.

I agree with all of your thoughts here, and want to remark in support of the intuitions.

There's a classic math question known as the handshake problem, which asks and answers things like: in a group of n people, how many unique handshakes can take place if each person shakes the hand of all the other n-1 people? If n=1, there's only one person and they can't shake anyone else's hand, so 0 handshakes take place. If n=2, the second person can shake the first person's hand, so there's 1 handshake. If n=3, the third person shakes hands with each of the first two people, who also shake hands with each other, for a total of 3 handshakes. In general, the solution for n people is (n-1) + (n-2) + ... + 2 + 1 + 0 = n*(n-1)/2. The most important insight or takeaway is that this expression is proportional to the square of the number of people, so the number of handshakes grows quadratically as more people are added.
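A quick sketch of this in Python, checking the closed-form formula n*(n-1)/2 against a brute-force count of the unordered pairs:

```python
from itertools import combinations

def handshakes(n: int) -> int:
    """Closed-form count of unique handshakes among n people."""
    return n * (n - 1) // 2

def handshakes_brute(n: int) -> int:
    """Brute force: count every unordered pair of people directly."""
    return sum(1 for _ in combinations(range(n), 2))

# The two agree for small n, confirming the formula.
for n in range(10):
    assert handshakes(n) == handshakes_brute(n)

print([handshakes(n) for n in (1, 2, 3, 20, 30)])  # [0, 1, 3, 190, 435]
```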

I think fellowships strongly experience this effect, where adding one more participant to a small group will let that person meet the others in the group and provide a valuable connection to those few people, but adding one more participant to a large group provides a potential valuable connection to a lot more people. Going from 30 participants to 31 has a lot more potential upside than going from 20 participants to 21.
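To make the marginal-value point concrete (a toy calculation, not data from any actual event): the number of new potential connections created by one additional participant equals the current group size, so the marginal gain grows linearly even though the total grows quadratically.

```python
def handshakes(n: int) -> int:
    # Unique pairs among n people: n choose 2.
    return n * (n - 1) // 2

def marginal_connections(n: int) -> int:
    # New connections created by adding one participant to a group of n.
    return handshakes(n + 1) - handshakes(n)

print(marginal_connections(20))  # 20: going from 20 to 21 participants
print(marginal_connections(30))  # 30: going from 30 to 31 participants
```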

(These effects don't continue forever, because there are different limiting factors that become relevant. Adding one attendee to a 1500-person conference doesn't quadratically increase the value of the conference because the first 1500 people are already limited by the number of 1-on-1s they have time for during the event. And you can't just change the fellowship from 30 people to 60 and expect it to be 4x as good; at that point, you're going to start needing more space and everyone is going to be far from some of the other people, so there's an asymptotic bound to the value that is less than quadratic.)

Surely the heart is still endorsed by the author!

I strongly disagree with your implication that "these things" (presumably "sexism, racism, and other toxic ideologies" as mentioned in the original post) are "accepted" within this movement, and I'm tired of stuff like this being brought up and distracting us from the mission we're all here for, which is to help others.

Who are the EAs claiming that race-and-IQ conversations are untouched by white supremacism? I have never seen an effective altruist claim anything like that.

> Discussions about race and IQ always focus on black people having lower average scores on IQ tests than white people, and tend not to discuss claims by race scientists that white people have lower average scores than Asians.

Discussions about race and IQ that are instigated by white supremacists often mention results showing that black people score lower than white people while omitting results showing that white people score lower than Asians; discussions in academic psychology are more likely to mention all of those results together. And I've never seen anyone in effective altruism mention anything on this topic, until this post and your comment just now.

I don't see how it's relevant to effective altruism. Looking for group differences on IQ tests doesn't seem to help with fundraising, preventing pandemics, or distributing bednets, so unsurprisingly it never comes up here.

I clicked the link about pig thumping but that wasn't mentioned anywhere on the linked page. You might want to update that one.
