All of Prabhat Soni's Comments + Replies

Is the current definition of EA not representative of hits-based giving?

Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P

Is the current definition of EA not representative of hits-based giving?

I think this excerpt from "Ben Todd on the core of effective altruism" (80k podcast) sort of answers your question:

Ben Todd: Well yeah, just quickly on the definition, my definition didn’t have “Using evidence and reason” actually as part of the fundamental definition. I’m just saying we should seek the best ways of helping others through whatever means are best to find those things. And obviously, I’m pretty keen on using evidence and reason, but I wouldn’t foreground it.

Arden Koehler: If it turns out that we should consult a crystal ball in order to fi

... (read more)
Venkatesh (+1, 5mo): Thanks for linking to the podcast! I hadn't listened to this one before and ended up listening to the whole thing and learning quite a bit. I just wonder whether Ben actually had some means in mind other than evidence and reasoning, though. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But in case something other than evidence and reason actually already exists, I find it really important to know.
Can the EA community copy Teach for America? (Looking for Task Y)

Task Y candidate: Fellowship facilitator for EA Virtual Programs

EA Virtual Programs runs intro fellowships, in-depth fellowships, and The Precipice reading groups (plus occasional other programs). The time commitment for facilitators is generally 2-5 hours per week (depending on the particular program).

EA intro fellowships (and similar programs) have been successful at minting engaged EAs. There are large diminishing returns even in selecting applicants with a not-so-strong application since the application process does not predict future engagement well (... (read more)

yiyang (+1, 25d): I think this sounds right! This makes me feel like we should also pay particular attention to making sure the facilitator experience is great too. Organising local intro EA programs can also be a great Task Y candidate.
akrivka (+2, 1mo): This is a great idea, and I think the current form of EAVP can support this!
Rationality as an EA Cause Area

Thanks for explaining your views further! This seems about right to me, and I think this is an interesting direction that should be explored further.

Rationality as an EA Cause Area

I think rationality should not be considered a separate cause area, but perhaps deserves to be a sub-cause area of EA movement building and AI safety.

  1. It seems very unlikely that promoting rationality (and hoping some of those folks would be attracted to EA) is more effective than promoting EA in the first place.
  2. I am unsure whether it is more effective to grow the number of people interested in AI safety by promoting rationality or by directly reaching out to AI researchers (or other things one might do to grow the AI safety community).

Also, the post... (read more)

casebash (+2, 6mo): Part of my model is that there is decreasing marginal utility as you invest more effort in one form of outreach, so there can be significant benefit in investing small amounts of resources in alternate forms of outreach.
"Hinge of History" Refuted (April Fools' Day)

Strong upvote. This post caused me to deprioritize longtermism and shift my focus to presently alive beings.

Contact us

Do you have a preference on whether we should contact you or JP Addison (the programmer of the EA Forum) for technical bugs?

Aaron Gertler (+3, 7mo): Please contact me for those, and I'll forward them to JP as appropriate. Thanks!
Join our collaboration for high quality EA outreach events (OFTW + GWWC + EA Community)

What is the minimum threshold of expected attendees required for GWWC/OFTW to be interested in collaborating?

A ranked list of all EA-relevant (audio)books I've read

I was looking for books on rationality. My top 4 shortlist was:

  • Rationality: From AI to Zombies by Eliezer Yudkowsky
  • Predictably Irrational by Dan Ariely
  • Decisive by Chip Heath and Dan Heath (This covers a lot of concepts EAs are familiar with, such as confirmation bias and overconfidence, so I didn't feel it would add much to my knowledge base.)
  • Thinking, Fast and Slow by Daniel Kahneman (More focused on cognitive biases than on rationality in general.)

I ended up going with Rationality: From AI to Zombies.

List of Introductory EA Presentations

Hey, I know this post is very old. But in case someone stumbles across it, the best presentation for introducing EA, in my opinion, is:

Prabhat Soni's Shortform

Yep, that's what comes to my mind at least :P

Needed EA-related Articles on the English Wikipedia

Apparently existential risk does not have its own Wikipedia article.

Some related concepts like human extinction, global catastrophic risks, existential risk from AGI, and biotechnology risk do have their own Wikipedia articles. On closer inspection, hyperlinks for "existential risk" on Wikipedia redirect to the global catastrophic risk Wiki page. A lot of Wiki articles have started using the term "existential risk". Should there be a separate article for existential risk?

Stanford EA has Grown During the Pandemic; Your Group Can Too

Another awesome (and low-effort for organizers) way to socialise is the EA Fellowship Weekend (which probably didn't exist when Kuhan wrote this post).

Evidence on correlation between making less than parents and welfare/happiness?

BTW Jessica, the $75K figure from Kahneman's paper that you mentioned is from 2010. After adjusting for inflation, that's ~$90K in 2021 dollars (the exact number depends on which inflation calculator you use).
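
The adjustment can be sanity-checked with a quick calculation. The CPI values below are approximate annual averages I'm assuming for illustration; any inflation calculator will give a slightly different answer.

```python
# Rough inflation adjustment of Kahneman's $75K (2010) figure to 2021
# dollars. CPI-U values are approximate annual averages (assumed here).
CPI_2010 = 218.1
CPI_2021 = 271.0

nominal_2010 = 75_000
adjusted_2021 = nominal_2010 * CPI_2021 / CPI_2010
print(round(adjusted_2021))  # roughly 93,000, i.e. ~$90K as stated
```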

Prabhat Soni's Shortform

Socrates' case against democracy

https://bigthink.com/scotty-hendricks/why-socrates-hated-democracy-and-what-we-can-do-about-it

Socrates makes the following argument:

  1. Just like we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, or trained firefighters to use fire engines, we should only allow informed voters to vote in elections.
  2. "The best argument against democracy is a five minute conversation with the average voter". Half of American adults don’t know that each state gets two senators and two thirds don’t know w
... (read more)
evelynciara (+1, 8mo): What's the proposed policy change? Making understanding of elections a requirement to vote?
Big List of Cause Candidates

Sorry, you're right; the link I provided earlier isn't very relevant (that was the only EA Forum article on WBE I could find). I was thinking something along the lines of what Hanson wrote. Especially the economic and legal issues (this and the last 3 paragraphs in this; there are other issues raised in the same Wiki article as well). Also Bostrom raised significant concerns in Superintelligence, Ch. 2 that if WBE was the path to the first AGI invented, there is significant risk that unfriendly AGI will be created (see the last set of bullet points in this... (read more)

NunoSempere (+2, 9mo): Ok, cheers, will add.
BrianTan (+2, 9mo): Hey Prabhat, yeah I'm aware of Kurzgesagt, and am happy they have videos on topics related to EA. But they've never specifically mentioned EA or GiveWell yet. I think either of those happening could have a large effect.
[comment deleted] (+1, 9mo)
NunoSempere (+2, 9mo): In that context, this seems maybe like just a pathway for reducing long-term risks from malevolent actors? Or are you thinking more of Age of Em or something else which Hanson wrote?
Prabhat Soni's Shortform

A film titled "Superintelligence" was released in November 2020. Could it raise risks?

Epistemic status: There's a good chance I'm overthinking this, and overestimating the risk.

Superintelligence [Wiki] [Movie trailer]. When you Google "Superintelligence", the top results are no longer those relating to Nick Bostrom's book but rather this movie. A summary of the movie:

When a powerful superintelligence chooses to study Carol, the most average person on Earth, the fate of the world hangs in the balance. As the AI decides whether to enslave, save or destroy hu

... (read more)
BrianTan (+2, 9mo): I don't know if it would raise risks, and I haven't watched the movie (only the trailer), but I'm disappointed about this movie. Superintelligence is a really important concept, and they turned it into a romantic action comedy film, making it seem like a not-so-serious topic. The film also didn't do well among critics: it has a 29% approval rating on Rotten Tomatoes. I think there's nothing we can do about the movie at this point, though.
Prabhat Soni's Shortform

Kurzgesagt – In a Nutshell is a popular YouTube channel. A lot of its content is EA-adjacent. The most viewed videos in a bunch of EA topics are ones posted by Kurzgesagt. The videos are also of very high quality. Has anyone tried collaborating with them or supporting them? I think it could be high impact (although careful evaluation is probably required).

 

Most of their EA-adjacent videos:

... (read more)
Aaron Gertler (+5, 9mo): Some fairly prominent people in EA have spoken with the person behind this channel about creating an introductory EA video. I'm not sure whether such a video is actually in the works, though. (I imagine that sponsoring one of these is quite expensive.) Sorry for the lack of further detail; I don't know much else, beyond the fact that this meeting happened.
List of EA-related email newsletters

Thanks for this! RationalNewsletter is the only rationality-related newsletter I could find.

Prabhat Soni's Shortform

Cause prioritisation for negative utilitarians and other downside-focused value systems: It is interesting to note that reduction of extinction risk is not very high-impact in downside-focused value systems.

MichaelStJules (+4, 9mo): Some things that are extinction risks are also s-risks, or at least risk causing a lot of suffering, e.g. AI risk and large-scale conflict. See Common ground for longtermists [https://centerforreducingsuffering.org/common-ground-for-longtermists/] by Tobias Baumann for the Center for Reducing Suffering. But ya, downside-focused value systems typically accept the procreation asymmetry [https://en.wikipedia.org/wiki/Asymmetry_(population_ethics)], so future people not existing is not bad in itself.
RobertDaoust (+2, 9mo): It is very high-impact when survival is considered indispensable for maintaining control over nature, to prevent negative values from coming back after extinction.
Rationality as an EA Cause Area

Promoting effective altruism promotes rationality in certain domains. And, improving institutional decision making is related to improving rationality. But yeah, these don't cover everything in improving rationality.

How much does a vote matter?

Thanks Nathan, this was helpful!

Are we neglecting education? Philosophy in schools as a longtermist area

Hi Jack, thanks for writing this. I read this post when it was published a few months ago, so I may not remember everything written in this post.

I have another related proposal: moral science (~ ethics) education for primary and middle school students. Moral science is often taught to students up to 8th grade (at least it was taught in my school). So, moral science education in schools is already tractable.

I would classify this under broadly promoting positive moral values. The current set of moral values is far from ideal, and EAs could have an impact by c... (read more)

Some thoughts on EA outreach to high schoolers

Hey Jack, thanks for the reply. Yeah, I agree that it's not obvious which of the two is more promising.

Practical ethics given moral uncertainty

Thanks! After so long, I finally understood moral uncertainty :P

Institutions for Future Generations

Hey, thanks for writing this. There are some age/time-related reforms that you have mentioned: Longer Election Cycles, Legislative Youth Quotas, Age Limits on Electorate, Age-weighted Voting, Enfranchisement of the Young, and Guardianship Voting for the Very Young.

These reforms would only promote "short longtermism" (i.e. next 50-100 years) while what we actually care about is "cosmic longtermism" (i.e. next ~1 billion years). What are your thoughts on this?

Prabhat Soni's Shortform

Hey, thanks for your reply. By the Pareto Principle, I meant something like "80% of the good is achieved by solving 20% of the problem areas". If this is easy to misinterpret (like you did), then it might not be a great idea :P The idea of a fat-tailed distribution of the impact of interventions might be a better alternative, maybe?

xccf (+2, 1y): That sounds harder to misinterpret, yeah.
Prabhat Soni's Shortform

I've never seen anyone explain EA using the Pareto Principle (80/20 rule). The cause prioritisation / effectiveness part of EA is basically the Pareto principle applied to doing good. I'd guess 25-50% of the public knows of the Pareto principle. So, I think this might be a good approach. Thoughts?

Max_Daniel (+2, 1y): See here [https://forum.effectivealtruism.org/posts/2XfiQuHrNFCyKsmuZ/max_daniel-s-shortform?commentId=ziN5zSaNz9Xsy6QQs] for some related material, in particular Owen Cotton-Barratt's talk Prospecting for Gold and the recent paper by Kokotajlo & Oprea.
xccf (+4, 1y): That's a good point; it's not a connection I've heard people make before, but it does make sense. I'm a bit concerned that the message "you can do 80% of the good with only 20% of the donation" could be misinterpreted:

  • I associate the Pareto principle with saving time and money. EA isn't really a movement about getting people to decrease the amount of time and money they spend on charity, though; if anything, probably the opposite.
  • To put it another way, the top opportunities identified by EA still have room for more funding. So the mental motion I want to instill is not about shaving away your low-impact charitable efforts; it's more about doubling down on high-impact charitable efforts that are underfunded (or discovering new high-impact charitable efforts).
  • We wouldn't want to imply that the remaining 20% of the good is somehow less valuable. It is more costly to access, but in principle, if all of the low-hanging altruistic fruit is picked, there's no reason not to move on to the higher-hanging fruit. The message "concentrate your altruism on the 80% and don't bother with the 20%" could come across as callous. I would rather make a positive statement that you can do a lot of good surprisingly cheaply than a negative statement that you shouldn't ever do good inefficiently.

Nevertheless, I think the 80/20 principle could be a good intuition pump for the idea that results are often disproportionate with effort, and I appreciate your brainstorming :)
Prabhat Soni's Shortform

Does a vaccine/treatment for malaria exist? If yes, why are bednets more cost-effective than providing the vaccine/treatment?

Linch (+9, 1y): There's only one approved malaria vaccine, and it's not very good [https://en.wikipedia.org/wiki/Malaria_vaccine] (it requires 4 shots and gives only a ~36% reduction in the number of cases). Anti-mosquito bednets have an additional advantage over malaria vaccines in being able to prevent mosquito-borne diseases other than malaria, though I don't know how big a deal this is in practice (e.g. I don't know how often the same area will have yellow fever and malaria).
Prabhat Soni's Shortform

Is it high impact to work in AI policy roles at Google, Facebook, etc? If so, why is it discussed so rarely in EA?

lifelonglearner (+3, 1y): I see it discussed sometimes in AI safety groups. There are, for example, safety-oriented teams at both Google Research and DeepMind. But I agree it could be discussed more.
Prabhat Soni's Shortform

Hmm interesting ideas. I have one disagreement though, my best guess is that there are more rationalist people than altruistic people.

I think around 50% of the people who study some quantitative/tech subject and have a good IQ qualify as rationalist (is this an okay proxy for rationalist people?). And my definition of an altruistic person is someone who makes career decisions primarily for altruistic reasons.

Based on these definitions, I think there are more rationalist people than altruistic people. Though this might be biased, since I study at a tech college (i.e. more rationalists) and live in India (i.e. fewer altruistic people, presumably because people tend to become altruistic when their basic needs are met).

Prabhat Soni's Shortform

Among rationalist people and altruistic people, on average, which of them are more likely to be attracted to effective altruism?

This has practical uses. If one type of person is significantly more likely to be attracted to EA, on average, then it makes sense to target them in outreach efforts (e.g. at university fairs).

I understand that this is a general question, and I'm only looking for a general answer :P (but specifics are welcome if you can provide them!)

markus_over (+7, 1y): I don't have or know of any data (which doesn't mean much, to be fair), but my hunch would be that rationalist people who haven't heard of EA are, on average, probably more open to EA ideas than the average altruistic person who hasn't heard of EA. While altruistic people might generally agree with the core ideas, they may be less likely to actually apply them to their actions. It's a vague claim, though, and I make these assumptions because, of the few dozen EAs I know personally, I'd very roughly assume 2/3 of them come across as more rationalist than altruistic (if you had to choose which of the two they are), plus I'd further assume that in the general population more people will appear to be altruistic than rationalist. If rationalists are more rare in the general population, yet more common among EAs, that would seem like evidence for them being a better match, so to speak. These are all just guesses without much to back them up, so I too would be interested in what other people think (or know).
Prabhat Soni's Shortform

Hmm, this is interesting. I think I broadly agree with you. I think a key consideration is that humans have a good-ish track record of living/surviving in deserts, and I would expect this to continue.

Prabhat Soni's Shortform

Thanks Ryan for your comment!

It seems like we've identified a crux here: what will be the total number of people living in Greenland in 2100 / world with 4 degrees warming?

 

I have disagreements with some of your estimates.

The total drylands population is 35% of the world population

Large populations currently reside in places like India, China, and Brazil. These regions are not currently drylands, but could become drylands (and possibly desertified) in the future. Thus, the 35% figure could increase.

So less than 10% of those from drylands hav

... (read more)
RyanCarey (+5, 1y): I'm not sure you've understood how I'm calculating my figures, so let me show how we can set a really conservative upper bound for the number of people who would move to Greenland. Based on current numbers, 3.5% of the world population are migrants, and 6% are in deserts. So that means less than 3.5/9.5 = 37% of desert populations have migrated. Even if half of those had migrated because of the weather, that would be less than 20% of all desert populations. Moreover, even if people migrated uniformly according to land area, only 1.4% of migrants would move to Greenland (that's the fraction of land area occupied by Greenland). So an ultra-conservative upper bound for the number of people migrating to Greenland would be 1B × 0.37 × 0.2 × 0.014 = 1M. So my initial status-quo estimate was 1e3, and my ultra-conservative estimate was 1e6. It seems pretty likely to me that the true figure will be 1e3-1e6, whereas 5e7 is certainly not a realistic estimate.
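
Ryan's upper-bound arithmetic can be reproduced directly. This is a sketch using only the assumptions stated in his comment, not independent data:

```python
# Reproducing the ultra-conservative upper bound from the comment above.
# Every input is an assumption stated in the comment, not measured data.
affected = 1e9            # people newly in drylands/desert
migrant_frac = 0.37       # < 3.5/9.5: upper bound on share who migrate
weather_frac = 0.2        # generous share of migrants moving for climate
greenland_share = 0.014   # Greenland's fraction of world land area

upper_bound = affected * migrant_frac * weather_frac * greenland_share
print(f"{upper_bound:.1e}")  # 1.0e+06, i.e. ~1M, far below 5e7
```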
Prabhat Soni's Shortform

High impact career for Danish people: Influencing what will happen with Greenland

EDIT: Comments give a good counter-argument against my views!

Climate change could get really bad. Let's imagine a world with 4 degrees of warming. This would probably mean mass migration of billions of people to Canada, Russia, Antarctica and Greenland.

Out of these, Canada and Russia will probably have fewer decisions to make, since they already have large populations and will likely see a smooth transition into billion+ people countries. Antarctica could be promising to influence,... (read more)

RyanCarey (+3, 1y): The total drylands population is 35% [https://www.un.org/en/events/desertification_decade/whynow.shtml] of the world population (~6% from desert/semi-desert). The total number of migrants, however, is 3.5% [https://en.wikipedia.org/wiki/Human_migration] of the world population. So less than 10% of those from drylands have left. But most such migrants move because of politics, war, or employment rather than climate. The number leaving because of climate is less (and possibly much less) than 5% of the drylands population. So suppose a billion people newly found themselves in drylands or desert, and that 5% migrated, making 50M migrants. Probably too few of these people will go to any one country, let alone Greenland, to make it into a new superpower. But let's run the numbers for Greenland anyway. Of the world's 300M migrants, Greenland currently has only ~10k. So of an extra 50M, Greenland could be expected to take ~2k, so I'm coming in 5-6 orders of magnitude lower than the 1B figure. It does still have some military relevance, and it would be good to keep it neutral, or at least out of the hands of China/Russia.
Some thoughts on EA outreach to high schoolers

Another approach targeting high-schoolers that I can think of is promoting philosophy education in schools. How does EA outreach in schools compare with this?

jackmalde (+1, 1y): Hi Prabhat, I'm a bit late responding, but that was my article and I do have some thoughts on how promoting general philosophy education compares to EA outreach.

On the one hand, whilst philosophy could in theory become part of the core curriculum and be taught on a regular basis, this is unlikely to be true of EA. It is difficult for EA outreach to be made consistent for students, which might make it hard for students to stay engaged. Therefore I think that general philosophy wins on a "consistency" metric. However, having dedicated EA teachers at schools could (possibly) allow for more consistent EA outreach.

On the other hand, there is the question of how direct (to EA) the teaching is. On this metric, obviously EA outreach wins. Despite this, there is a question over how useful EA outreach might actually be to high-schoolers in terms of how decision-relevant it would be for them. As raised by Ben Todd in another comment, it might be that most of what we can say to students ("do technical subjects" etc.) is already fairly well known. Perhaps the best approach with younger students is instead to introduce people to a philosophical way of thinking more generally, with an EA slant where possible (e.g. Singerian-style practical ethics), with a view to EA outreach further down the line. Therefore on a "usefulness" metric I'm not entirely sure which approach wins.

Overall I think both approaches have promise, but I would be very happy for people to explore further.
RyanCarey's Shortform

I'd be curious to discuss whether there's a case for Moscow. 80,000 Hours lists being a Russia or India specialist under "Other paths we're excited about". The case would probably revolve around Russia's huge nuclear arsenal and efforts to build AI. If climate change were to become really bad (say 4+ degrees of warming), Russia (along with Canada and New Zealand) would become the new hub for immigration given its geography, and this alone could make it one of the most influential countries in the world.

Prabhat Soni's Shortform

Some good, interesting critiques of effective altruism.

Short version: read https://bostonreview.net/forum/logic-effective-altruism/peter-singer-reply-effective-altruism-responses (5-10 mins)

Longer version: start reading from https://bostonreview.net/forum/peter-singer-logic-effective-altruism (~ 1 hour)

I think these critiques are fairly comprehensive. They probably cover like 80-90% of all possible critiques.

Benjamin_Todd (+8, 1y): This is a big topic, but I think these critiques mainly fail to address the core ideas of EA (that we should seek the very best ways of helping), and instead criticise related ideas like utilitarianism or international aid. On the philosophy end of things, more here: https://forum.effectivealtruism.org/posts/hvYvH6wabAoXHJjsC/philosophical-critiques-of-effective-altruism-by-prof-jeff
A central directory for open research questions

Yep, that's what I meant by "open source"! Awesome to hear you're taking this forward!

MichaelA (+3, 1y): Update: Effective Thesis have now basically done both of the things you suggested (you can see the changes here [https://effectivethesis.org/agendas/]). So thanks for the suggestions!
A central directory for open research questions

Hey, thanks for putting this together. I think it would be quite valuable to have these lists be put up on Effective Thesis's research agenda page. My reasoning for this is that Effective Thesis's research agenda page probably has more viewers than this EA Forum post or the Google Doc version of this post.

Additionally, if you agree with the above, I'd be curious to hear your thoughts on how we could make Effective Thesis's research agenda page open source?

MichaelA (+2, 1y): I think those are both good ideas! (This is assuming that by "open source" you mean something like "easy for anyone to make suggestions to, in a way that lets the page be efficiently expanded and updated". Did you have something else in mind?) I don't know the Effective Thesis people personally (though what they're doing seems really valuable to me). But I've now contacted them via their website, with a message quoting your comment and asking for their thoughts.