I didn't downvote, and the comment is now at +12 votes and +8 agreement (not sure where it was before), but my guess is it would be more upvoted if it were worded more equivocally (e.g., "I think the evidence suggests climate change poses...") and had links to the materials you reference (e.g., "[link] predicted that the melting of the Greenland ice cap would occur..."). There also may be object-level disagreements (e.g., some think climate change is an existential risk for humans in the long run or in the tail risks, such as where geoengineering might be ...
I see a lot of people's worldviews updating this week based on the collapse of FTX. One view I think people in EA may be neglecting to update towards is pessimism about the expected value of the long-term future. Doing good is hard. As Tolstoy wrote, "All happy families are alike; each unhappy family is unhappy in its own way." There is also Yudkowsky: "Value is fragile"; Burns: "The best-laid plans of mice and men often go awry"; von Moltke: "No plan survive...
"EA's are, despite our commitments to ethical behaviour, perhaps no more trustworthy with power than anyone else."
I wonder if "perhaps no more trustworthy with power than anyone else" goes a little too far. I think the EA community made mistakes that facilitated FTX misbehavior, but that is only one small group of people. Many EAs have substantial power in the world and have continued to be largely trustworthy (and thus less newsworthy!), and I think we have evidence like our stronger-than-average explicit commitments to use power for good and the critical...
Fair point. I think, in a knee-jerk reaction, I adjusted too far here. At the very least, it seems that EAs who aim to do good with power are at least somewhat more likely to do so than people who just want power for power's sake. It's still a downward adjustment on my part for the EV of EA politicians, but not to zero relative to the median candidate of the same political party.
Well, my understanding now is that it is very structurally different (not just reputationally or culturally different) from publicly traded stock: the tiny trading volume, the guaranteed price floor, probably other things. If it were similar, I think I would probably have much less of that concern. This does imply standard net worth calculations for Sam Bankman-Fried were poor estimates, and I put a decent chance on Forbes/Bloomberg/etc. making public changes to their methodology because of this (maybe 7% chance? very low base rate).
I've updated a little toward this being less concerning. Thanks.
That makes sense.
Hmm, I do think in the absence of the leverage, having wealth in FTT was kind of reasonable, and the leverage was the primary thing that enabled the whole thing to implode this quickly.
I was still surprised by Alameda not having a more diversified portfolio, but I think it's basically accurate to model FTT as stock, and it's not that crazy to have a lot of your wealth in your own stock (and for it to be hard for you to exit that position, since it looks really suspicious if you sell a lot of your own stock).
But I do agree that there was probabl...
I strongly agree with this. In particular, it seems that the critiques of EA in relation to these events are much less focused on the recent fraud concern than EAs are in their defenses. I think we are choosing the easiest thing to condemn and distance ourselves from, in a very concerning way. Deliberately or not, our focus on the outrage against recent fraud distracts onlookers and community members from the more serious underlying concerns that weigh more heavily on our behavior given their likelihood.
The 2 most pressing to me are the possibilities (i) t...
For what it's worth, as someone who said in another thread that I do think there were concerns about Sam's honesty circulating, I don't know of anyone I have ever talked to who expressed concern about the money being held primarily in FTT, or who would have predicted anything close to the hole in the balance sheet that we now see.
I heard people say that we should assume Sam's net wealth is high-variance, given that crypto is a crazy industry, but I think you are overstating the degree to which people were aware of the incredible leverage in FTX's...
This is great data to have! Thanks for collecting and sharing it. I think the Sioux Falls (Metaculus underestimate of the 48% ban support) and Swiss (Metaculus overestimate of the 37% ban support) factory farming ban proposals are particularly interesting opportunities to connect this survey data to policy results. I'll share a few scattered, preliminary thoughts to spark discussion, and I hope to see more work on this topic in the future.
Thanks for reading and engaging with our work!
Rather than further praising or critiquing the FTX/Alameda team, I want to flag my concern that the broader community, including myself, made a big mistake in the "too much money" discourse and subsequent push away from earning to give (ETG) and fundraising. People have discussed Open Philanthropy and FTX funding in a way that gives the impression that tens of billions are locked in for effective altruism, despite many EA nonprofits still insisting on their significant room for more funding. (There has been some pushback, and my impression that...
Thanks for going into the methodological details here.
I think we view "double-counting" differently, or I may not be sufficiently clear in how I handle it. If we take a particular war as a piece of evidence, which we think fits into both "Historical Harms" and "Disvalue Through Intent," and it is overall -8 evidence on the EV of the far future, but it seems 75% explained through "Historical Harms" and 25% explained through "Disvalue Through Intent," then I would put -6 weight on the former and -2 weight on the latter. I agree this isn't very precise, and I...
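To make the apportioning concrete, here is a minimal sketch of the calculation, using the hypothetical numbers from the war example above (the function name is just for illustration):

```python
# Split one piece of evidence's total weight across the factors it supports,
# so the same event isn't double-counted across categories.
def apportion(total_weight, shares):
    """Divide a single evidence weight among explanatory factors.

    shares: {factor: fraction explained}; fractions should sum to 1.
    """
    assert abs(sum(shares.values()) - 1) < 1e-9, "fractions must sum to 1"
    return {factor: total_weight * frac for factor, frac in shares.items()}

# A war judged as -8 evidence overall, 75% explained by "Historical Harms"
# and 25% by "Disvalue Through Intent":
weights = apportion(-8, {"Historical Harms": 0.75, "Disvalue Through Intent": 0.25})
print(weights)  # {'Historical Harms': -6.0, 'Disvalue Through Intent': -2.0}
```

The point of the constraint is just that the fractions sum to one, so the event contributes its total weight exactly once across the categories.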
This is helpful data. Two important axes of variation here are:
- Time, where this has fortunately become more frequently discussed in recent years
- Involvement, where I speak a lot with artificial intelligence and machine learning researchers who work on AI safety but not global priorities research; often their motivation was just reading something like Life 3.0. I think these people tend to have thought through crucial considerations less than, say, people on this forum.
Trade 3 is removing a happy person, which is usually bad in a person-affecting view, possibly bad enough that the trade isn't worth $0.99, in which case the agent avoids being Dutch booked.
Hi Khorton, I wouldn't describe it as stepping back into the community, and I don't plan on doing that, regardless of this issue, unless you consider occasional posts and presentations or socializing with my EA friends as such. This post on the EV of the future was just particularly suited for the EA Forum (e.g., previous posts on it), and it's been 3 years since I published that public apology and have done everything asked of me by the concerned parties (around 4 years since I was made aware of the concerns, and I know of no concerns about my behavior since then).
I'm not planning to comment more here. This is in my opinion a terrible place to have these conversations, as Dony pointed out as well.
[Edit: I've now made some small additions to the post to better ensure readers do not get the impressions that you're worried about. The substantive content of the post remains the same, and I have not read any disagreements with it, though please let me know if there are any.]
Thanks for clarifying. I see the connection between both sets of comments, but the draft comments still seem more like 'it might be confusing whether this is about your experience in EA or an even-coverage history', while the new comments seem more like 'it might give the impression ...
Thanks. I agree with essentially all of this, and I left a comment with details: https://forum.effectivealtruism.org/posts/ZbdNFuEP2zWN5w2Yx/ryancarey-s-shortform?commentId=oxodp9BzigZ5qgEHg
I would reiterate that this was only on my website for a few weeks, and I removed it as soon as I got the negative feedback. [Edit: As I say in my detailed comment, I viewed the term "co-founder" in terms of the broad base of people who built EA as a social movement. Others read it as a narrower term, such as the 1-3 co-founders of a typical company or nonprofit. Now I...
You removed it after people called you out on it being bullshit and you know it isn't true.
[Edit: I've now made some small additions to the post to better ensure readers do not get the impressions that you're worried about. The substantive content of the post remains the same, and I have not read any disagreements with it, though please let me know if there are any.]
I think I agree with essentially all of this, though I would have preferred if you gave this feedback when you were reading the draft because I would have worded my comments to ensure they don't give the impression you're worried about. I strongly agree with your guess that EA would...
I think I agree with essentially all of this, though I would have preferred if you gave this feedback when you were reading the draft because I would have worded my comments to ensure they don't give the impression you're worried about.
If it seemed to you like I was raising different issues in the draft, then each to their own, I guess. But these concerns were what I had in mind when I wrote comments like the following:
...> 2004–2008: Before I found other EAs
If you're starting with this, then you should probably include "my" in the title (or similar) becau
Hi John, just to clarify some inaccuracies in your two comments:
- I’ve never harassed anyone, and I’ve never stated or implied that I have. I have apologized for making some people uncomfortable with “coming on too strong” in my online romantic advances. As I've said before in that Apology, I never intended to cause any discomfort, and I’m sorry that I did so. There have, to my knowledge, been no concerns about my behavior since I was made aware of these concerns in mid-2018.
- I didn’t lie on my website. I had (in a few places) described myself as a ...
Hi Jacy, you said in your apology "I am also stepping back from the EA community more generally, as I have been planning to since last year in order to focus on my research."
I haven't seen you around since then, so was surprised to see you attend an EA university retreat* and start posting more about EA. Would you describe yourself as stepping back into the EA community now?
*https://twitter.com/jacyanthis/status/1515682513280282631?s=20&t=reRvYxXCs2z-AvszF31Gng
It's great to know where your specific weights differ! I agree that each of the arguments you put forth are important. Some specifics:
This is super interesting. Thanks for writing it. Do you think you're conflating several analytically distinct phenomena when you say (i) "Fanaticism is the idea that we should base our decisions on all of the possible outcomes of our actions no matter how unlikely they are ... EA fanatics take a roughly maximize expected utility approach" and (ii) "Fanaticism is unreasonable"?
For (i), I mainly have in mind two approaches "fanatics" could be defined by: (ia) "...
This is a brief shortform post to accompany "The Future Might Not Be So Great." These are just some scattered thoughts on the prioritization of quality risks not quite relevant enough to go in the post itself. Thanks to those who gave feedback on the draft of that post, particularly on this section.
...People ask me to predict the future, when all I want to do is prevent it. Better yet, build it. Predicting the future is much too easy, anyway. You look at the people around you, the street you stand on, th
Jamie Harris at Sentience Institute authored a report on "Social Movement Lessons From the US Anti-Abortion Movement" that may be of interest.
That's right that we don't have any ongoing projects exclusively on the impact of AI on nonhuman biological animals, though much of our research includes that, especially the outer alignment idea of ensuring an AGI or superintelligence accounts for the interests of all sentient beings, including wild and domestic nonhuman biological animals. We also have several empirical projects where we collect data on both moral concern for animals and for AI, such as on perspective-taking, predictors of moral concern, and our recently conducted US nationally ...
Good points! This is exactly the sort of work we do at Sentience Institute on moral circle expansion (mostly for farmed animals from 2016 to 2020, but since late 2020, most of our work has been directly on AI—and of course the intersections), and it has been my priority since 2014. Also, Peter Singer and Yip Fai Tse are working on "AI Ethics: The Case for Including Animals"; there are a number of EA Forum posts on nonhumans and the long-term future; and the harms of AI and "smart farming" for farmed animals are a common topic, such as this recent article th...
Oh, sorry, I was thinking of the arguments in my post, not (only) those in your post. I should have been more precise in my wording.
Thank you for the reply, Jan, especially noting those additional arguments. I worry that your article neglects them in favor of less important/controversial questions on this topic. I see many EAs taking the "very unlikely that [human descendants] would see value exactly where we see disvalue" argument (I'd call this the 'will argument,' that the future might be dominated by human-descendant will and there is much more will to create happiness than suffering, especially in terms of the likelihood of hedonium over dolorium) and using that to justify a very
...Thanks for posting on this important topic. You might be interested in this EA Forum post where I outlined many arguments against your conclusion, the expected value of extinction risk reduction being (highly) positive.
I do think your "very unlikely that [human descendants] would see value exactly where we see disvalue" argument is a viable one, but I think it's just one of many considerations, and my current impression of the evidence is that it's outweighed.
Also FYI the link in your article to "moral circle expansion" is dead. We work on that approach at
...I remain skeptical of how much this type of research will influence EA-minded decisions, e.g. how many people would switch donations from farmed animal welfare campaigns to humane insecticide campaigns if they increased their estimate of insect sentience by 50%? But I still think the EA community should be allocating substantially more resources to it than it does now, and you seem to be approaching it in a smart way, so I hope you get funding!
I'm especially excited about the impact of this research on general concern for invertebrate sentience (e.g. esta
...[1] Cochrane mass media health articles (and similar):
I can't think of anything that isn't available in a better form now, but it might be interesting to read for historical perspective, such as what it looks like to have key EA ideas half-formed. This post on career advice is a classic. Or this post on promoting Buddhism as diluted utilitarianism, which is similar to the reasoning a lot of utilitarians had for building/promoting EA.
The content on Felicifia.org was most important in my first involvement, though that website isn't active anymore. I feel like forum content (similar to what could be on the EA Forum!) was important because it's casually written and welcoming. Everyone was working together on the same problems and ideas, so I felt eager to join.
Just to add a bit of info: I helped with THINK when I was a college student. It wasn't the most effective strategy (largely, it was founded before we knew people would coalesce so strongly into the EA identity, and we didn't predict that), but Leverage's involvement with it was professional and thoughtful. I didn't get any vibes of cultishness from my time with THINK, though I did find Connection Theory a bit weird and not very useful when I learned about it.
I get it pretty frequently from newcomers (maybe in the top 20 questions for animal-focused EA?), but everyone seems convinced by a brief explanation: even though most individual consumption changes don't lead to a purchasing change, each one still has a small chance of triggering a big purchasing change.
Yes, terraforming is a big way in which close-to-WAS scenarios could arise. I do think it's smaller in expectation than digital environments that develop on their own and thus are close-to-WAS.
I don't think terraforming would produce ecosystems very different from today's wildlife, e.g. ones designed without predation and disease.
Ultimately I still think the digital, not-close-to-WAS scenarios seem much larger in expectation.
I'd qualify this by adding that the philosophical-type reflection seems to lead in expectation to more moral value (positive or negative, e.g. hedonium or dolorium) than other forces, despite overall having less influence than those other forces.
Thanks for commenting, Lukas. I think you, Brian Tomasik, and others affiliated with FRI have thought more about this than I have, and I basically defer to your views here, especially because I haven't heard any reasonable people disagree with this particular point. Namely, I agree that there seems to be an inevitable tradeoff here.
I just took it as an assumption in this post that we're focusing on the far future, since I think basically all the theoretical arguments for/against that have been made elsewhere. Here's a good article on it. I personally mostly focus on the far future, though not overwhelmingly so. I'm at something like 80% far future, 20% near-term considerations for my cause prioritization decisions.
This may take a few decades, but social change might take even longer.
To clarify, the post isn't talking about ending factory farming. And I don't think anyone in the E...
Hm, yeah, I don't think I fully understand you here either, and this seems somewhat different than what we discussed via email.
My concern is with (2) in your list. "[T]hey do not wish to be convinced to expand their moral circle" is extremely ambiguous to me. Presumably you mean they -- without MCE advocacy being done -- wouldn't put in wide-MC* values or values that lead to wide-MC into an aligned AI. But I think it's being conflated with, "they actively oppose" or "they would answer 'no' if asked, 'Do you think your values are wr...
I personally don't think WAS is as similar to the most plausible far future dystopias, so I've been prioritizing it less even over just the past couple of years. I don't expect far future dystopias to involve as much naturogenic (nature-caused) suffering, though of course it's possible (e.g. if humans create large numbers of sentient beings in a simulation, but then let the simulation run on its own for a while, then the simulation could come to be viewed as naturogenic-ish and those attitudes could become more relevant).
I think if one wants something very...
Those considerations make sense. I don't have much more to add for/against than what I said in the post.
On the comparison between different MCE strategies, I'm pretty uncertain which are best. The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by mo...
The main reasons I currently favor farmed animal advocacy over your examples (global poverty, environmentalism, and companion animals) are that (1) farmed animal advocacy is far more neglected, (2) farmed animal advocacy is far more similar to potential far future dystopias, mainly just because it involves vast numbers of sentient beings who are largely ignored by most of society.
Wild animal advocacy is far more neglected than farmed animal advocacy, and it involves even larger numbers of sentient beings ignored by most of society. If the superiority o...
Thanks! That's very kind of you.
I'm pretty uncertain about the best levers, and I think research can help a lot with that. Tentatively, I do think that MCE ends up aligning fairly well with conventional EAA (perhaps it should be unsurprising that the most important levers to push on for near-term values are also most important for long-term values, though it depends on how narrowly you're drawing the lines).
A few exceptions to that:
Digital sentience probably matters the most in the long run. There are good reasons to be skeptical we should be advocating
I'm sympathetic to both of those points personally.
1) I considered that, and in addition to time constraints, I know others haven't written on this because there's a big concern of talking about it making it more likely to happen. I err more towards sharing it despite this concern, but I'm pretty uncertain. Even the detail of this post was more than several people wanted me to include.
But mostly, I'm just limited on time.
2) That's reasonable. I think all of these boundaries are fairly arbitrary; we just need to try to use the same standards across cause ar...
That makes sense. If I were convinced hedonium/dolorium dominated to a very large degree, and that hedonium was as good as dolorium is bad, I would probably think the far future was at least moderately +EV.
Yeah, I think that's basically right. I think moral circle expansion (MCE) is closer to your list items than extinction risk reduction (ERR) is because MCE mostly competes in the values space, while ERR mostly competes in the technology space.
However, MCE is competing in a narrower space than just values. It's in the MC space, which is just the space of advocacy on what our moral circle should look like. So I think it's fairly distinct from the list items in that sense, though you could still say they're in the same space because all advocacy competes for ...
Thanks for the comment! A few of my thoughts on this:
Presumably we want some people working on both of these problems, some people have skills more suited to one than the other, and some people are just going to be more passionate about one than the other.
If one is convinced non-extinction civilization is net positive, this seems true and important. Sorry if I framed the post too much as one or the other for the whole community.
...Much of the work related to AIA so far has been about raising awareness about the problem (eg the book Superintelligence), a
Thanks for this summary. While there are many disanalogies between historical examples and current events, I think it's easy for us to neglect the historical evidence and try to reinvent wheels.