I’m really excited about this and look forward to participating! Some questions: how will you determine which submissions count as “winners” vs. “runners-up” vs. “honorable mentions”? I’m not clear on what the criteria for differentiating the categories are. Also, is there any limit on how many submissions can make it into each category?
I didn't focus on it in this post, but I genuinely think that the most helpful thing to do involves showing proficiency in achieving near-term goals, as that both allows us to troubleshoot potential practical issues, and allows outsiders to evaluate our track record. Part of showing integrity is showing transparency (assuming that we want outside support), and working on neartermist causes allows us to more easily do that.
Fair enough; I didn’t mean to imply that $100M is exactly the amount that needs to be spent, though I would expect it to be near the lower bound of what he would have to spend (on projects with clear, measurable results) if he wants to become known as “that effective altruism guy” rather than “that cryptocurrency guy”.
It's hard to imagine him not being primarily seen as a crypto guy while he's regularly going to Congress to talk about crypto and lobbying for a particular regulatory regime. Gates managed this by no longer running Microsoft; it might take a similarly big change in circumstances for SBF to get there.
Within the domain of politics (and to a lesser degree, global health), PR impact makes an extremely large difference in how effective you’re able to be at the end of the day. If you want, I’d be happy to provide data on that, but my guess is you’d agree with me there (please let me know if that isn’t the case). As such, if you care about results, you should care about PR as well. I suspect that your unease mostly lies in the second half of your response—we should do things for “direct, non-reputational reasons,” and actions done for reputational reasons wo...
Other than the donations towards helping Ukraine, I’m not sure there’s any significant charity on the linked page that will have really noticeable effects within a year or two. For what I’m talking about, there needs to be an obvious difference made quickly—it also doesn’t help that those are all pre-existing charities under other people’s names, which makes it hard to say for sure that it was SBF’s work that made the crucial difference even if one of them does significantly impact the world in the short term.
If it was just me (and maybe a few other like-minded people) in the universe, however, and if I was reasonably certain the button would actually do what it said on the label, then I might very well press it. What about you, for the version I presented for your philosophy?
Excellent question! I wouldn’t, but only because of epistemic humility—I would probably end up consulting with as many philosophers as possible and see how close we can come to a consensus decision regarding what to practically do with the button.
I'm not sure if you're still actively monitoring this post, but the Wikipedia page on the Lead-crime hypothesis (https://en.wikipedia.org/wiki/Lead%E2%80%93crime_hypothesis) could badly use some infographics!! My favorite graph on the subject is this one (from https://news.sky.com/story/violent-crime-linked-to-levels-of-lead-in-air-10458451; I like it because it shows this isn't just localized to one area), but I'm pretty sure it's under copyright unfortunately.
Love this newsletter, thanks for making it :)
One possible “fun” implication of following this line of thought to its extreme conclusion would be that we should strive to stay alive and improve science to the point at which we are able to fully destroy the universe (maybe by purposefully paperclipping, or instigating vacuum decay?). Idk what to do with this thought, just think it’s interesting.
Thanks for the post; it was really amazing talking with you at the conference :)
We already know that we can create net positive lives for individuals
Do we know this? Thomas Ligotti would argue that even most well-off humans live in suffering, and it’s only through self-delusion that we think otherwise (not that I fully agree with him, but his case is surprisingly strong)
If you could push a button and all life in the universe would immediately, painlessly, and permanently halt, would you push it?
I think it’s okay to come off as a bit insulting in the name of better feedback, especially when you’re unlikely to be working with them long-term.
If you come across as insulting, someone might tell everyone they talk to for the next five years that you're an asshole, which might make it harder for you to do other things you'd hoped to do.
my best guess is that more time delving into specific grants will only rarely actually change the final funding decision in practice
Has anyone actually tested this? It might be worthwhile to record your initial impressions on a set number of grants, then deliberately spend a fixed amount of time researching each one further, and calculate how often the further research makes you change your mind.
I would strongly support doing this—I have strong roots in the artistic world, and there are many extremely talented artists online that I think could potentially be of value to EA.
Fixing the Ozone Layer should provide a whole host of important insights here.
I would be in favor of this!
what's stopping random people from just going after the bounties themselves?
Simple answer—they don’t know the bounties exist. Bounties are usually only posted in local EA groups, and if you’re outside of those groups, even if you’re looking for bounties to collect, the amount of effort it would take to find out about our community’s bounty postings would be prohibitively high (and there’s plenty of lower-hanging fruit in search space). Likewise, many large companies hire recruiters to expressly go out and find talent, rather than hoping that talent finds them. The market is efficient, but it is not omniscient.
Are you sure that all the problems we’re facing are necessarily difficult in the sort of way a non-expert would be bad at? I don’t have the time right now to search through past bounties, but I remember a number of them involved fairly simple, testable ideas that would mostly take a lot of time and effort, not expertise.
That’s a fair point, thank you for bringing that up :)
How bad is it to fund someone untrustworthy? Obviously if they take the money and run, that would be a total loss, but I doubt that’s a particularly common occurrence (you can only do it once, and it would completely shatter your social reputation, so even unethical people don’t tend to do that). A more common failure mode would seem to be apathy, where once funded not much gets done, because the person doesn’t really care about the problem. However, if something gets done instead of nothing at all, then that would probably be a (fairly weak) net positive. The rea...
Thanks for the excellent analysis! It’s notable that if your theory is correct, it wasn’t a single person making an irrational decision, but two entire command structures so blinded by emotional thinking that nobody thought to even suggest that Cuba wouldn’t change anything in terms of defensive/offensive capabilities.
I’m curious if you have any friends who identify as “far right” or “alt-right”—do their views on EA substantially differ?
I’m curious where exactly you see your opinions differing here. Is it just how much to trust the inside vs. outside view, or something else?
As a singular data point, I’ll submit that until reading this article, I was under the impression that the Orthogonality thesis is the main reason why researchers are concerned.
Please do! I'd absolutely love to read that :)
This is hilarious; I was literally thinking yesterday that we should be reaching out to the Orthodox/Modern Orthodox Jewish community, and was going to write a post on that today! Happy to know this already exists :)
May I ask what your long-term plans are?
I need to book plane tickets for EAGx Prague before they get prohibitively expensive, but I’ve never done this before and haven’t been able to get myself to actually go through the process for some reason. Any advice for what to do when you get “stuck” on something that you know will be pretty easy once you actually do it?
I’d be interested in reading about the impact of artistic careers!
Quick note that I misread "refuges" as "refugees," and got really confused. In case anyone else made the same mistake, this post is talking about bunkers, not immigrants ;)
Very strongly agree with you here. I also agree that the positives tend to outweigh the negatives, and I hope that this leads to more careful, but not less, giving.
+1 here as well; a frugality option would be an amazing thing to normalize, especially if we can get it going as a thing beyond the world of EA (which may be possible if we get some good reporting on it).
+1. One concrete application: Offer donation options instead of generous stipends as compensation for speaking engagements.
I think “unfunded ideas” would be a great title for a tag!
I think I would actually be for this, as long as the resolution criteria can be made clear and, at least in the beginning, it’s limited to people who already have a large online presence.
One potential issue: if the resolution criteria are worded the wrong way, perhaps something like "there will be at least one news article which mentions negative allegations against person X," it may encourage unethical people to try to purposely spread false negative allegations in order to game the market. The resolution criteria would therefore have to be thought through very carefully so that sort of thing doesn't happen.
Posted on my shortform, but thought it’s worth putting here as well, given that I was inspired by this post to write it:
Thinking about what I’d do if I was a grantmaker that others wouldn’t do (inspired by https://forum.effectivealtruism.org/posts/AvwgADnkdxynknYRR/issues-with-centralised-grantmaking). One course of action I’d strongly consider is to reach out to my non-EA friends—most of whom are fairly poor, are artists/game developers whose ideas/philosophies I consider high value, and who live around the world—and fund them to do independent research/work on EA cause areas instead of the minimum-wage day jobs many of them currently have. I’d expect some of them to be in...
I share your concerns; what would you recommend doing about it though? One initiative that may come in handy here is the increased focus on getting “regular people” to do grantmaking work, which at least helps spread resources around somewhat. Not sure there’s anything we can do to stop the general bad incentives of acting for the sake of money rather than altruism. For that, we just need to hope most members of the community have strong virtues, which tbh I think we’re pretty good about.
Unfortunately I missed the application deadline, but I look forward to hearing great things from those who will attend!
I'm actually working on that right now! Still in the exploratory stage, but hoping to get a scalable program going which will allow for significantly more work in that space. :)
Do we know how much impact Sam Bankman-Fried’s personal philosophy is going to have on FTX’s grant-making choices? This is a lot of financial power for a single organization to have, so I expect the makeup of the core team to have an outsized effect on the rest of the movement.
This could create a disincentive to post more controversial ideas there, though.
This would have to be a separate project from my proposed direct Wikipedia editing, but I'd be very much in support of this (I see the efforts as being complementary)
I think even "technically flawed" critiques could actually be very useful, because developing arguments against that which are more easily accessible will probably be helpful in the future. (disclaimer I'm currently on a sleeping med making me feel slightly loopy, so apologies if the above doesn't make sense)
I don’t think that would imply that nothing really matters, since reducing suffering and maximizing happiness (as well as good ol’ “care about other human beings while they live”) could still be valid sources of meaning. In fact, ensuring that we do not become extinct too early would be extremely important for ensuring the best possible fate of the universe (that being a quick and painless destruction, or whatever), so just doing what feels best at the moment probably would not be a great strategy for a True Believer in this hypothetical.