Prabhat Soni

I'm an undergrad studying CS, applied math and economics at IIIT Delhi, India. Please don't hesitate to PM me! You can also reach me at psoni1019 at gmail dot com

Comments

Is the current definition of EA not representative of hits-based giving?

Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P

Is the current definition of EA not representative of hits-based giving?

I think this excerpt from the Ben Todd on the core of effective altruism (80k podcast) sort of answers your question:

Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.

Arden Koehler: If it turns out that we should consult a crystal ball in order to find out if that’s the best way, then we should do that?

Ben Todd: Yeah.

Arden Koehler: Okay. Yeah. So again, very abstract: whatever it is that turns out to be the best way of figuring out how to do the most good.

Ben Todd: Yeah. I mean, in general, you have this just big question of how narrow or broad to make the definition of effective altruism and it is a difficult thing to say.

I don't think this is an "official definition" (e.g., one endorsed by CEA), but I think (or at least hope!) that CEA is working on a more complete definition of EA.

Can the EA community copy Teach for America? (Looking for Task Y)

Task Y candidate: Fellowship facilitator for EA Virtual Programs

EA Virtual Programs runs intro fellowships, in-depth fellowships, and The Precipice reading groups (plus occasional other programs). The time commitment for facilitators is generally 2-5 hours per week (depending on the particular program).

EA intro fellowships (and similar programs) have been successful at minting engaged EAs. Returns diminish only slightly even when accepting applicants with not-so-strong applications, since the application process does not predict future engagement well (see this and this). Thus, if a fellowship/reading group has to reject people, significant value is lost. Rejected applicants generally re-apply at low rates (despite being encouraged to!).

Uncertainties:

  • Is EA Virtual Programs short on facilitators? I don't know. The answer would presumably change post-COVID (IMO it could shift in either direction), so in the interest of future-proofing this answer, I have not tried to determine the current demand for facilitators.
  • Will EA Virtual Programs exist post-COVID? An organizer at EA Virtual Programs informally said that nothing concrete has been decided yet, but the project was probably leaning towards continuing in some capacity. It is not clear to me whether there will even be significantly fewer applicants post-COVID (since most(?) university groups are running their fellowships independently right now).

I know of at least a few working professionals (i.e., non-students) who facilitate for EA Virtual Programs, which I take as evidence that this can be a Task Y.

Rationality as an EA Cause Area

Thanks for explaining your views further! This seems about right to me, and I think this is an interesting direction that should be explored further.

Rationality as an EA Cause Area

I think rationality should not be considered a separate cause area, but it perhaps deserves to be a sub-cause area of EA movement building and AI safety.

  1. It seems very unlikely that promoting rationality (and hoping some of those folks would be attracted to EA) is more effective than promoting EA in the first place.
  2. I am unsure whether it is more effective to grow the number of people interested in AI safety by promoting rationality or by directly reaching out to AI researchers (or other things one might do to grow the AI safety community).

Also, the post title is misleading, since one interpretation is that making people more rational is intrinsically valuable (or that increased rationality would make their lives happier). While this is likely true, it would probably be an ineffective intervention.

"Hinge of History" Refuted (April Fools' Day)

Strong upvote. This post caused me to deprioritize longtermism and shift my focus to presently alive beings.

Contact us

Do you have a preference on whether to contact you or contact JP Addison (the programmer of the EA Forum) for technical bugs?

Join our collaboration for high quality EA outreach events (OFTW + GWWC + EA Community)

What is the minimum threshold of expected attendees required for GWWC/OFTW to be interested in collaborating?
