All of Yitz's Comments + Replies

Arguments for Why Preventing Human Extinction is Wrong

I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important to securing the best possible fate of the universe (that being a quick and painless destruction, or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.

Announcing a contest: EA Criticism and Red Teaming

I’m really excited about this, and look forward to participating! Some questions—how will you determine which submissions count as “winners” vs. “runners-up” vs. “honorable mentions”? I’m confused about what the criteria for differentiating the categories are. Also, are there any limits on how many submissions can make each category?

Sam Bankman-Fried should spend $100M on short-term projects now

I didn't focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues, and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to more easily do that.

Sam Bankman-Fried should spend $100M on short-term projects now

Fair enough; I didn’t mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near the lower bound he would have to spend (on projects with clear, measurable results) if he wants to become known as “that effective altruism guy” rather than “that cryptocurrency guy.”

It's hard to imagine him not being primarily seen as a crypto guy while he's regularly going to Congress to talk about crypto, and lobbying for a particular regulatory regime. Gates managed this by not running Microsoft any more, it might take a similarly big change in circumstances to get there for SBF.

Sam Bankman-Fried should spend $100M on short-term projects now

Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—we should do things for “direct, non-reputational reasons,” and actions done for reputational reasons wo... (read more)

Sam Bankman-Fried should spend $100M on short-term projects now

Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly—it also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference even if one of them does significantly impact the world in the short term.

Arguments for Why Preventing Human Extinction is Wrong

If it were just me (and maybe a few other similarly-minded people) in the universe, however, and if I were reasonably certain it would actually do what it said on the label, then I may very well press it. What about you, for the version I presented for your philosophy?

Teo sums it up pretty well here: []
Arguments for Why Preventing Human Extinction is Wrong

Excellent question! I wouldn’t, but only because of epistemic humility—I would probably end up consulting with as many philosophers as possible and seeing how close we could come to a consensus decision regarding what to practically do with the button.

If it were just me (and maybe a few other similarly-minded people) in the universe, however, and if I were reasonably certain it would actually do what it said on the label, then I may very well press it. What about you, for the version I presented for your philosophy?
How can we make Our World in Data more useful to the EA community?
Answer by Yitz · May 29, 2022

I'm not sure if you're still actively monitoring this post, but the Wikipedia page on the Lead–crime hypothesis [] could badly use some infographics! My favorite graph on the subject is this one (from []; I like it because it shows this isn't just localized to one area), but I'm pretty sure it's under copyright, unfortunately.

Monthly Overload of EA - June 2022

Love this newsletter, thanks for making it :)

Arguments for Why Preventing Human Extinction is Wrong

One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.

That's an interesting way of looking at it. That view seems nihilistic, and like it could lead to hedonism, since if our only purpose is to make sure we completely destroy ourselves and the universe, nothing really matters.
My first effective altruism conference: 10 learnings, my 121s and next steps

Thanks for the post—It was really amazing talking with you at the conference :)

Was great to meet and hang out with you Yitz!
Arguments for Why Preventing Human Extinction is Wrong

We already know that we can create net positive lives for individuals

Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong)

That is a good point. I was actually considering that when making my statement. I suspect self-delusion might be at the core of the belief of many individuals who think their lives are net positive. In order to adapt to/avoid great emotional pain, humans might self-delude when faced with the question of whether their life is overall positive.

Even if it is not possible for human lives to be net positive, my first counterargument would still hold, for two different reasons. First, we'd still be able to improve the lives of other species. Second, it would still be valuable to prevent the much more negative lives that might arise if other kinds of humans were allowed to evolve in our absence. It might be difficult to ensure our extinction was permanent: even if we took care to make ourselves extinct in a way we thought we couldn't come back from, it's possible that within, say, a billion years the universe would change in such a way as to make the spark of life that would lead to humans happen again. Cosmological and extremely long-term processes might undo any precautions we took. Alternatively, maybe the different kinds of humans that would evolve in our absence would be more capable of having positive lives than we are.

I don't think I'm familiar with anything by Thomas Ligotti. I'll look into his work.
Arguments for Why Preventing Human Extinction is Wrong

If you could push a button and all life in the universe would immediately, painlessly, and permanently halt, would you push it?

Would you cleanse the entire universe with that utilitronium shockwave, which is a no less relevant thought experiment pertaining to CU?
Some unfun lessons I learned as a junior grantmaker

I think it’s okay to come off as a bit insulting in the name of better feedback, especially when you’re unlikely to be working with them long-term.

If you come across as insulting, someone might say you're an asshole to everyone they talk to for the next five years, which might make it harder for you to do other things you'd hoped to do.

I agree, and like I said, I'm sure those sentences can be massively improved. I prefer to have my feelings a little hurt than remain in the dark as to why a grant didn't get accepted.
Some unfun lessons I learned as a junior grantmaker

my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice

Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend x amount of time researching them further, and calculating the ratio of how often further research makes you change your mind.

Guided by the Beauty of One’s Philosophies: Why Aesthetics Matter

I would strongly support doing this—I have strong roots in the artistic world, and there are many extremely talented artists online that I think could potentially be of value to EA.

What are examples where extreme risk policies have been successfully implemented?
Answer by YitzMay 16, 20228

Fixing the Ozone Layer should provide a whole host of important insights here.

Should we be hiring more “unqualified” people?

what's stopping random people from just going after the bounties themselves?

Simple answer—they don’t know the bounties exist. Bounties are usually only posted in local EA groups, and if you’re outside of those groups, even if you’re looking for bounties to collect, the amount of effort it would take to find out about our community’s bounty postings would be prohibitively high (and there’s plenty of lower-hanging fruit in search space). Likewise, many large companies hire recruiters to expressly go out and find talent, rather than hoping that talent finds them. The market is efficient, but it is not omniscient.

Should we be hiring more “unqualified” people?

Are you sure that all the problems we’re facing are necessarily difficult in the sort of way a non-expert would be bad at? I don’t have the time right now to search through past bounties, but I remember a number of them involved fairly simple testable theories which would simply take a lot of time and effort, but not expertise.

Peter Wildeford · 2mo
I can't think of any problem area where I'd be excited to actively hire a ton of people without vetting or supervision, but I agree that just because I can't think of one doesn't mean that one doesn't exist. Also, as you and others mention, giving out prizes or bounties could work well if you have an area where you could easily evaluate the quality of a piece of work.
EA and the current funding situation

That’s a fair point, thank you for bringing that up :)

EA and the current funding situation

How bad is it to fund someone untrustworthy? Obviously if they take the money and run, that would be a total loss, but I doubt that’s a particularly common occurrence (you can only do it once, and it would completely shatter your social reputation, so even unethical people don’t tend to do that). A more common failure mode would seem to be apathy, where once funded not much gets done, because the person doesn’t really care about the problem. However, if something gets done instead of nothing at all, then that would probably be (a fairly weak) net positive. The rea... (read more)

I think it's easier than it might seem to do something net negative, even ignoring opportunity cost. For example: actively competing with some other, better project; interfering with politics or policy incorrectly; creating a negative culture shift in the overall ecosystem; etc. Besides, I don't think the attitude that our primary problem is spending down the money is prudent. This is putting the cart before the horse, and as Habryka said, might lead to people asking "how can I spend money quickly?" rather than "how can I ambitiously do good?" EA certainly has a lot of money, but I think people underestimate how fast $50 billion can disappear if it's mismanaged (see, for an extreme example, Enron).
The Mystery of the Cuban missile crisis

Thanks for the excellent analysis! It’s notable that if your theory is correct, it wasn’t a single person here making an irrational decision, but two entire command structures being so blinded by emotional thinking that nobody thought even to suggest that Cuba wouldn’t change anything in terms of defensive/offensive capabilities.

Has anyone actually talked to conservatives* about EA?

I’m curious if you have any friends who identify as “far right” or “alt-right”—do their views on EA substantially differ?

Ariel Simnegar · 2mo
I have friends who'd identify as "deeply conservative" who I'd include in my above answer, but I'd opine that "deeply conservative" is a significantly different characterization from "far right" in modern American politics. For example, conservative values support upholding the integrity of institutions, not insurrection and/or attempts to overturn democratic elections. Unfortunately I can't give you an informed answer on "far right" or "alt-right" types.
The AI Messiah

I’m curious on what exactly you see your opinions as differing here. Is it just how much to trust inside vs outside view, or something else?

I'm not sure that it's purely "how much to trust inside vs outside view," but I think that is at least a very large share of it. I also think the point on what I would call humility ("epistemic learned helplessness") is basically correct. All of this is by degrees, but I think I fall more to the epistemically humble end of the spectrum when compared to Thomas (judging by his reasoning). I also appreciate any time that someone brings up the train to crazy town, which I think is an excellent turn of phrase that captures an important idea.
A tale of 2.75 orthogonality theses

As a singular data point, I’ll submit that until reading this article, I was under the impression that the Orthogonality thesis is the main reason why researchers are concerned.

John G. Halstead · 2mo
agreed! some evidence of that in my comment
Joseph Lemien's Shortform

Please do! I'd absolutely love to read that :)

The team at EA for Jews is growing — apply now or refer others!

This is hilarious; I was literally thinking yesterday that we should be reaching out to the Orthodox/Modern Orthodox Jewish community, and was going to write a post on that today! Happy to know this already exists :)

May I ask what your long-term plans are?

Hi Yitz! Our long-term plan -- at a very high level -- is to (1) do outreach to Jewish communities and spread EA ideas to those communities, and (2) build a community of Jews involved in EA. We currently have a number of projects in the works, including EA fellowships and materials aimed at different Jewish audiences (including different denominations as well as demographics such as b'nai mitzvah-age Jews). I'll send you a DM with more info and my calendly if you'd like to learn more!
Yitz's Shortform

I need to book plane tickets for EAGx Prague before they get prohibitively expensive, but I’ve never done this before and haven’t been able to get myself to actually go through the process for some reason. Any advice for what to do when you get “stuck” on something that you know will be pretty easy once you actually do it?

It's a pretty easy process; you could also just get someone else to do it for you.
Some people I know have found that committing to a friend that they'd donate $X to an ineffective charity to be a sufficient motivator. I've had mixed results myself.
Sounds like an ugh field []. Spencer Greenberg also had a podcast episode [] on motivation recently, including backchaining to your ultimate motivations through a series of "why" questions in order to access more motivating feelings. My random advice would be to book a friend, or maybe some EA who's done it before, to walk you through the process and provide their flight-booking wisdom (whether a pretense, actually useful, or both), like "you have to pay for a checked bag both ways, so maybe it's better to upgrade to the seat with a free checked bag."
What would you like to see Giving What We Can write about?
Answer by Yitz · Apr 29, 2022

I’d be interested in reading about the impact of artistic careers!

Paula Amato · 2mo
Related: I am interested in the impact of spending discretionary income on purchasing art.
Help us make civilizational refuges happen

Quick note that I misread "refuges" as "refugees," and got really confused. In case anyone else made the same mistake, this post is talking about bunkers, not immigrants ;)

Free-spending EA might be a big problem for optics and epistemics

Very strongly agree with you here. I also agree that the positives tend to outweigh the negatives, and I hope that this leads to more careful, but not less, giving.

Free-spending EA might be a big problem for optics and epistemics

+1 here as well; a frugality option would be an amazing thing to normalize, especially if we can get it going as a thing beyond the world of EA (which may be possible if we get some good reporting on it).

+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.


I think I would actually be for this, as long as the resolution criteria can be made clear, and at least in the beginning it can only be for people who already have a large online presence.

One potential issue is that if the resolution criteria are worded the wrong way, perhaps something like "there will be at least one news article which mentions negative allegations against person X," it may encourage unethical people to purposely spread false negative allegations in order to game the market. The resolution criteria would therefore have to be thought through very carefully so that sort of thing doesn't happen.

Nathan Young · 3mo
Sure, that's a failure mode. I would only support it if the resolution criteria were around verified accusations. Mere accusations cannot be enough.
Issues with centralised grantmaking

Posted on my shortform, but thought it’s worth putting here as well, given that I was inspired by this post to write it:

Thinking about what I’d do if I were a grantmaker that others wouldn’t do. One course of action I’d strongly consider is to reach out to my non-EA friends—most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high value, and who live around the world—and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently have. I’d expect some of t

... (read more)
Yitz's Shortform

Thinking about what I’d do if I were a grantmaker that others wouldn’t do (inspired by []). One course of action I’d strongly consider is to reach out to my non-EA friends—most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high value, and who live around the world—and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently have. I’d expect some of them to be in... (read more)

Leftism virtue cafe's Shortform

I share your concerns; what would you recommend doing about it though? One initiative that may come in handy here is the increased focus on getting “regular people” to do grantmaking work, which at least helps spread resources around somewhat. Not sure there’s anything we can do to stop the general bad incentives of acting for the sake of money rather than altruism. For that, we just need to hope most members of the community have strong virtues, which tbh I think we’re pretty good about.

EAGxBoston: Updates and Info from the Organizing Team

Unfortunately I missed the application deadline, but I look forward to hearing great things from those who will attend!

I'm actually working on that right now! Still in the exploratory stage,  but hoping to get a scalable program going which will allow for significantly more work in that space. :)

FTX Future Fund and Longtermism

Do we know how much impact Sam Bankman-Fried’s personal philosophy is going to have on FTX’s grant-making choices? This is a lot of financial power for a single organization to have, so I expect the makeup of the core team to have an outsized effect on the rest of the movement.

Valid question!
The Future Fund’s Project Ideas Competition

Could lead to a disincentive to post more controversial ideas there, though.

The Future Fund’s Project Ideas Competition

This would have to be a separate project from my proposed direct Wikipedia editing, but I'd be very much in support of this (I see the efforts as being complementary).

We're announcing a $100,000 blog prize

I think even "technically flawed" critiques could actually be very useful, because developing more easily accessible arguments against them will probably be helpful in the future. (Disclaimer: I'm currently on a sleeping med making me feel slightly loopy, so apologies if the above doesn't make sense.)