WilliamKiely

May 2022: My team's entry to the FLI Worldbuilding Contest is a Top 20 finalist! We'd appreciate it if you left us feedback on our entry by mid-June for the next stage of the contest. Thanks! https://worldbuild.ai/W-0000000476/


I was an organizer for EA Austin from 2017-2020. Feel free to reach out to me about anything.

https://www.admonymous.co/will


Comments

Proposal: Impact List -- like the Forbes List except for impact via donations

Misc thoughts:

Doing credible cost-effectiveness estimates for all of the world's top philanthropists (by dollar amount donated) who might plausibly make the list seems very time-intensive.

Supposing the list became popular, I imagine people would commonly ask "Why is so-and-so not on the list?", so there'd be a need for a companion list of the most-asked-about people who are unexpectedly absent, with justifications for why each is not included. After a few minutes of thinking about it, I'm still not sure how to avoid this. Figuring out how to celebrate top philanthropists (by impact) without claiming to be exhaustive, and without having people disagree with the rankings, seems hard.

Proposal: Impact List -- like the Forbes List except for impact via donations

Considerations in the opposite direction:

  • Value of information of initial investments in the project. If it's not looking good after a year, the project can be abandoned when <<$10M has been spent.
  • 80/20 rule: It could influence one person to become the new top EA funder, and this could represent a majority of the money moved to high-cost-effectiveness philanthropy.
  • It could positively influence the trajectory of EA giving, such that capping the influence at 10 years doesn't capture a lot of the value. E.g. Some person who is a child now becomes the next SBF in another 10-20 years, in part due to the impact the list has on the culture of giving.
Proposal: Impact List -- like the Forbes List except for impact via donations
A very simple expected value calculation

Your estimate seems optimistic to me because:

(a) It seems likely that even in a wildly successful case of EA going more mainstream Impact List could only take a fraction of the credit for that. E.g. If 10 years from now the total amount of money committed to EA (in 2022 dollars) increased from its current ~$40B to ~$400B, I'd probably only assign about 10% or so of the credit for that growth to a $1M/year (2022 dollars) Impact List project, even in the case where it seemed like Impact List played a large role. So that's maybe $36B or so of donations the $10M investment in Impact List can take credit for.

(b) When we're talking about hundreds of billions of dollars, there's significant diminishing marginal value to the money being committed to EA. So turn the $36B into $10B or something (I'm not sure of the appropriate discount). Then we're talking a 0.1%-1% chance of that. So that's $10M-$100M of value.
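A minimal sketch of the arithmetic in (a) and (b), where every figure (the 10% credit share, the ~$10B discounted value, and the 0.1%-1% probability) is the comment's illustrative assumption, not data:

```python
# Rough sketch of the expected-value estimate above.
# All numbers are illustrative assumptions from the comment, not data.

current_ea_funds = 40e9      # ~$40B currently committed to EA (2022 dollars)
optimistic_ea_funds = 400e9  # hypothetical total after 10 years
growth = optimistic_ea_funds - current_ea_funds  # $360B of growth

credit_share = 0.10  # fraction of that growth credited to Impact List
credited_donations = growth * credit_share  # $36B

# Diminishing marginal value: treat the $36B as worth ~$10B (assumed discount).
discounted_value = 10e9

p_low, p_high = 0.001, 0.01  # 0.1%-1% chance this scenario plays out
ev_low = discounted_value * p_low    # ~$10M
ev_high = discounted_value * p_high  # ~$100M

print(f"EV range: ${ev_low/1e6:.0f}M-${ev_high/1e6:.0f}M vs. ~$10M cost")
```

On these assumptions the expected value roughly brackets the $10M investment, which is why the conclusion below hinges on team quality rather than on the raw numbers.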

If a good team can be assembled, it does seem worth funding to me, but it doesn't seem as clear-cut as your estimate suggests.

What important truth do very few people agree with you on?

Related: In October 2017, "What important truth do very few effective altruists agree with you on?" was asked in the main Effective Altruism Facebook group and got 389 comments. (This is Peter Thiel's contrarian question applied to EAs.)

IPTi for malaria: a promising intervention with likely room to scale

Thank you, Miranda, the context you provided is indeed very helpful and satisfies my curiosity.

I also want to add that all of GiveWell's recent public communication I've seen has frankly been outstanding (e.g. on rollover funding). I'm really impressed and appreciate the great work you all are doing; keep it up!

Messy personal stuff that affected my cause prioritization (or: how I started to care about AI safety)

Thanks for sharing, Julia. I think this sort of post is valuable for helping individuals make better cause prioritization decisions. A related post is Claire Zabel's How we can make it easier to change your mind about cause areas.

Providing these insights can also help us understand why others might not be receptive to working on EA causes, which can be relevant for outreach work.

(Erin commented "people aren’t gonna like EA anyways – I’ve gotten more cynical", but I'm optimistic that an EA community that better understands stories like yours could do things differently to make people more receptive to caring about certain causes on the margin.)

Erin Braid's Shortform

Interesting suggestion. I'm not familiar with anyone doing a donation match like this.

It seems like having a default charity for matching money to go to could be counterproductive to the matcher's goals. E.g. Every.org wanted to get more people to use their platform to donate, but many people don't really find it more valuable for money to be directed to one charity over another. EAs are different in that regard: while we're certainly not unique in caring which charities money goes to, I think many people might think "Why should I donate when the money is already going to go to charity?" and decide not to participate.

While generally I wouldn't advise people to do donation matches, would it be good for organizations already running them to make cash transfers the default use of the money if matching donors don't direct it elsewhere? Maybe. One benefit might be that it just gets people to think more about the value of directing money to one organization versus another, instead of merely thinking that they're raising more money for a charity of their choice.

[$20K In Prizes] AI Safety Arguments Competition

(Alternatively if it's not too long but just needs to be one paragraph, use this version:)

The British mathematician I. J. Good, who worked with Alan Turing on Allied code-breaking during World War II, is remembered for making this important insight in a 1966 paper: "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously." Today far more people are taking this concern seriously. For example, Shane Legg, co-founder of DeepMind, recently remarked: "If you go back 10-12 years ago the whole notion of Artificial General Intelligence was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [But] every year [the number of people who roll their eyes] becomes less."

[$20K In Prizes] AI Safety Arguments Competition

(For policy makers and tech executives. If this is too long, shorten it by ending it after the I. J. Good quote.)

The British mathematician I. J. Good, who worked with Alan Turing on Allied code-breaking during World War II, is remembered for making this important insight in a 1966 paper:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. It is curious that this point is made so seldom outside of science fiction. It is sometimes worthwhile to take science fiction seriously.

I. J. Good expressed concern that we might not be able to keep this superintelligent machine under our control, and he recognized that this concern was worth taking seriously despite it usually only being discussed in science fiction. History has proven him right: today far more people are taking this concern seriously. For example, Shane Legg, co-founder of DeepMind, recently remarked:

If you go back 10-12 years ago the whole notion of Artificial General Intelligence was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [But] every year [the number of people who roll their eyes] becomes less.
Which Post Idea Is Most Effective?
9. Massively scalable project-based community building idea

If your idea for this is good, this might be the highest-value post you could write from this list.

20 and 21 (before you get too familiar with EA thinking and possibly forget your origin story) also seem high value.

If 17 is a novel practical idea it's probably also worth writing about.

8 and 16 interest me.
