Stuart Buck asks:
“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk's purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”
Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way...
Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him. Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his re...
Tiny nit: I didn't and don't read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much.
FWIW I find the self-indulgence angle annoying when journalists bring it up; it's entirely possible for Sam to have been reckless, stupid, and even malicious without wanting to see personal material gain from it. Moreover, I think it leads others to learn the wrong lessons—as you note in your other comment, the fraud was committed by multiple people with seemingly good intentions; we should be looking more at the non-material incentives (reputation, etc.) and enabling factors of recklessness that led them to justify risks in the service of good outcomes (again, as you do below).
A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.
In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept the...
Thanks for writing up these thoughts, Will; it's great to see you weighing in on these topics.
I’m unclear on one point (related to Elizabeth’s comments) around what you heard from former Alameda employees when you were initially learning about the dispute. Did you hear any concerns specifically about Sam’s unethical behavior, and if so, did these concerns constitute a nontrivial share of the total concerns you heard?
I ask because in this comment and on Spencer’s podcast (at ~00:13:32), you characterize the concerns you heard about almost identically....
My understanding is that this wasn't a benign management dispute, it was an ethical dispute about whether to disclose to investors that Alameda had misplaced $4m. SBF's refusal to do so sure seems of a piece with FTX's later issues.
It seems there was a lot of information floating around, but no one saw it as their responsibility to check whether SBF was fine, and there was no central person for information to be given to. Is that correct?
Has anything been done to change this going forward?
I broadly agree with the picture and it matches my perception.
That said, I'm also aware of specific people who held significant reservations about SBF and FTX throughout the end of 2021 (though perhaps not in 2022 anymore), based on information that was distinct from the 2018 disputes. This involved things like:
The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s...
Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism.
I agree with this. Failing that, I feel strongly that CEA should change its name. There are costs to having a leader / manager / "coordinator-in-chief", and costs to not having such an entity; but the worst of both worlds is to have ambiguity about whether a person or org is filling that role. Then you end up with situations like "a bunch of EAs sit on their hands because they expect so...
- Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before.
- Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive? Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.
Wanting to push back against this a little bit:
There are very strong consequentialist reasons for acting with integrity
we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests
It concerns me a bit that when legal risk appears, suddenly everyone gets very pragmatic in a way that I am not sure feels the same as integrity or truth-seeking. It feels a bit similar to how pragmatic we all were around FTX during the boom. Feels like in crises we get a bit worse at truth-seeking and integrity, though I guess many communities do. (Sometimes it feels ...
Hi Yarrow (and others on this thread) - this topic comes up on the Clearer Thinking podcast, which comes out tomorrow. As Emma Richter mentions, the Clearer Thinking podcast is aimed more at people in or related to EA, whereas Sam Harris's wasn't; it was up to him what topics he wanted to focus on.
Thanks! Didn't know you're sceptical of AI x-risk. I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.
Thanks so much for those links, I hadn't seen them!
(So much AI-related stuff coming out every day, it's so hard to keep on top of everything!)
This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role.
Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next d...
Hi Will,
What is especially interesting here is your focus on an all hazards approach to Grand Challenges. Improved governance has the potential to influence all cause areas, including long-term and short-term, x-risks, and s-risks.
Here at the Odyssean Institute, we're developing a novel approach to these deep questions of governing Grand Challenges. We're currently running our first horizon scan on tipping points in global catastrophic risk and will use this as a first step of a longer-term process which will include Decision Making under Deep U...
As someone who a) is skeptical of X-risk from AI, but b) thinks there is a non-negligible (even if relatively low, maybe 3-4%) chance we'll see 100 years of progress in 15 years at some point in the next 50 years, I'm glad you're looking at this.
I'm really excited about Zach coming on board as CEA's new CEO!
Though I haven't worked with him a ton, the interactions I have had with him have been systematically positive: he's been consistently professional, mission-focused and inspiring. He helped lead EV US well through what was a difficult time, and I'm really looking forward to seeing what CEA achieves under his leadership!
Thank you so much for your work with EV over the last year, Howie! It was enormously helpful to have someone so well-trusted, with such excellent judgment, in this position. I’m sure you’ll have an enormous positive impact at Open Phil.
And welcome, Rob - I think it’s fantastic news that you’ve taken the role!
I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.
Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for ther...
Thanks so much for all your hard work on CEA/EV over the many years. You have been such a driving force in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours, before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations, and I'm really looking forward to much more of that to come!
Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.
You definitely changed my life by co-creating the Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".
To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 8...
Thanks so much for your work, Will! I think this is the right decision given the circumstances and that will help EV move in a good direction. I know some mistakes were made but I still want to recognize your positive influence.
I'm eternally grateful to you for getting me to focus on the question of "how to do the most good with our limited resources?".
I remember how I first heard about EA.
The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”
It was October 2013, midterms week at Tufts Uni...
Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously---or not at all---if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you has been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.
Thank you for all of your hard work over many years, Will. I've really valued your ability to slice through strategic movement-building questions, your care and clear communication, your positivity, and your ability to inspire massive projects off the ground. I think you've done a lot of good. I'm excited for you to look after yourself, reflect on what's next, and keep working towards a better world.
Thank you for all your work, and I'm excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.
Thanks for all your work over the last 11 years Will, and best of luck on your future projects. I have appreciated your expertise on and support of EA qua EA, and would be excited about you continuing to support that.
(My personal views only, and like Nick I've been recused from a lot of board work since November.)
Thank you, Nick, for all your work on the Boards over the last eleven years. You helped steward the organisations into existence, and were central to helping them flourish and grow. I’ve always been impressed by your work ethic, your willingness to listen and learn, and your ability to provide feedback that was incisive, helpful, and kind.
Because you’ve been less in the limelight than me or Toby, I think many people don’t know just how crucial a role you playe...
Hey,
I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s corr...
At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss.
I wonder how this would look different from the current status quo:
What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA.
Or, the ideal form for the AI safety community might not be a "movement" at all! This would be one of the most straightforward ways to ward off groupthink and related harms, and it has been possible for other cause areas: global health work, for instance, mostly doesn't operate as a social movement.
As someone who is extremely pro investing in big-tent EA, my question is, "what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"
I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their...
Most of the researchers at GPI are pretty sceptical of AI x-risk.
Not really responding to the comment (sorry), just noting that I'd really like to understand why these researchers at GPI and careful-thinking AI alignment people - like Paul Christiano - have such different risk estimates! Can someone facilitate and record a conversation?
This isn't answering the question you ask (sorry), but one possible response to this line of criticism is for some people within EA / longtermism to more clearly state what vision of the future they are aiming towards. Because this tends not to happen, it means that critics can attribute particular visions to people that they don't have. In particular, critics of WWOTF often thought that I was trying to push for some particular narrow vision of the future, whereas really the primary goal, in my mind at least, is to keep our options open as much as po...
This is a good point, and it's worth pointing out that increasing the value of the future (conditional on survival) is always good, whereas increasing the probability of survival is only good if the future is of positive value. So risk aversion reduces the value of increasing the probability of survival relative to increasing the value of the future, provided we put some probability on a bad future.
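To make the structure explicit, here's a minimal sketch, using p for the probability of avoiding existential catastrophe and v for the value of the future conditional on survival (my own shorthand for this thread, not notation from the essay):

```latex
% Minimal sketch (shorthand for this thread, not the essay's notation):
% expected value of the future = survival probability times conditional value.
\[
\mathbb{E}[V] \;=\; p \cdot \mathbb{E}[v]
\]
% Raising E[v] increases E[V] whatever the sign of v (since p > 0), whereas
% raising p increases E[V] only if E[v] > 0. And if we are risk averse and
% assign some probability to a bad future (v < 0), raising p also increases
% exposure to that bad outcome, which further favours improving v over
% increasing p.
```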
Agree this is worth pointing out! I've a draft paper that goes into some of this stuff in more detail, and I make this argument.
Another potential argument for trying to improve the value of the future is that, plausibly at least, the value lost as a r...
One common issue with “existential risk” is that it’s so easy to conflate it with “extinction risk”. It seems that even you end up falling into this use of language. You say: “if there were 20 percentage points of near-term existential risk (so an 80 percent chance of survival)”. But human extinction is not necessary for something to be an existential risk, so 20 percentage points of near-term existential risk doesn’t entail an 80 percent chance of survival. (Human extinction may also not be sufficient for exis...
In footnote 14 you say: “It has also been suggested (Sandberg et al 2016, Ord 2021) that the ultimate physical limits may be set by a civilisation that expands to secure resources but doesn’t use them to create value until much later on, when the energy can be used more efficiently. If so, one could tweak the framework to model this not as a flow of intrinsic value over time, but a flow of new resources which can eventually be used to create value.”
This feels to me that it would really be changing the framework considerably, rather t...
Like the other commenter says, I feel worried that v(.) refers to the value of “humanity”. For similar reasons, I feel worried that existential risk is defined in terms of humanity’s potential.
One issue is that it’s vague what counts as “humanity”. Homo sapiens count, but what about:
I’m not sure where you draw the line, or if there is a principled place to draw the ...
I felt like the paper gave enhancements short shrift. As you note, they are the intervention that most plausibly competes with existential risk reduction, as they scale with the value of the future.
You say: “As with many of these idealised changes, they face the challenge of why this wouldn’t happen eventually, even without the current effort. I think this is a serious challenge for many proposed enhancements.”
I agree that this is a serious challenge, and that one should have more starting scepticism about the persistence of enhancements compared with ...
“While the idea of a gain is simple — a permanent improvement in instantaneous value of a fixed size — it is not so clear how common they are.”
I agree that gains aren’t where the action is, when it comes to longterm impact. Nonetheless, here are some potential examples:
These plausibly have two sources of longterm value. The first is that future agents might have slightly better lives...
You write: “How plausible are speed-ups? The broad course of human history suggests that speed-ups are possible,” and, “though there is more scholarly debate about whether the industrial revolution would have ever happened had it not started in the way it did. And there are other smaller breakthroughs, such as the phonetic alphabet, that only occurred once and whose main effect may have been to speed up progress. So contingent speed-ups may be possible.”
This was the section of the paper I was most surprised / confused by. You seemed open to speed-...
I broadly agree with the upshots you draw, but here are three points that make things a little more complicated:
Continued exponential growth
As you note: (i) if v(.) continues to grow exponentially, then advancements can compete with existential risk reduction; (ii) such continued exponential growth seems very unlikely.
However, it seems above 0 probability that we could have continued exponential growth in v(.) forever, including at the end point (and perhaps even at a very fast rate, like doubling every year). And, if so, then the total val...
Hi Toby,
Thanks so much for doing and sharing this! It’s a beautiful piece of work - characteristically clear and precise.
Remarkably, I didn’t know you’d been writing this, or had an essay coming out that volume! Especially given that I’d been doing some similar work, though with a different emphasis.
I’ve got a number of thoughts, which I’ll break into different comments.
Thanks Will, these are great comments — really taking the discussion forwards. I'll try to reply to them all over the next day or so.
Strong upvote on this - it’s an issue that a lot of people have been discussing, and I found the post very clear!
There’s lots more to say, and I only had time to write something quickly but one consideration is about division of effort with respect to timelines to transformative AI. The longer AI timelines are, the more plausible principles-led EA movement-building looks.
Though I’ve updated a lot in the last couple of years on transformative-AI-in-the-next-decade, I think we should still put significant probability mass on “long” timelines (e.g. more than ...
Thanks for this comment; I found it helpful and agree with a lot of it. I expect the "university groups are disproportionately useful in long timelines worlds" point to be useful to a lot of people.
On this bit:
EA is more adaptive over time... This is much more likely to be relevant in long timelines worlds
This isn't obvious to me. I would expect that short timeline worlds are just weirder and changing more rapidly in general, so being adaptive is more valuable.
Caricature example: in a short timeline world we have one year from the first sentient LLM ...
Thanks! I agree that we are already (kind of) doing most of these things. So the question is whether further centralisation is tractable (and desirable). Like I say, it seems to me the big thing is whether there's someone, or some group of people, who really wants to make that further centralisation happen. (E.g. I don’t think I’d be the right person even if I wanted to do it.)
Some things I didn't understand from your bullet-point list:
Having most of the resources come from one place
By “resources” do you primarily mean funding? (I'll assume ...
Yeah, sorry, I wrote the comment quickly and "resources" was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.
I think the critical-info-in-private thing actually pushes strongly towards centralization, because when the info leaks, the "decentralized people" have a high-salience moment where they realize that what's happening privately isn't what they thought was happening publicly; they feel slightly lied to or betrayed, and lose perceived empowerment and engagement.
Maybe what’s going on here is vagueness, and me being unclear.
Jeff’s clarification is helpful. I could have just dropped “part of the EA movement or” and the sentence would have been clearer and better.
The key thing I was meaning in this context is: “Is a project engaging in EA movement-building, such that it would make sense that they at least potentially have obligations or responsibilities towards the EA movement as a whole?” The answer is clearly “no” for LEEP (for example), and “yes” for CEA. On that question, I would say “no” for GovAI, Lo...
“to the extent that the text above is breaking down centralization into sub-dimensions, and then impliedly taking something like the mean score of sub-domains to generate an overall centralization score.”
Thanks for pointing this out! I didn't intend my post to be taking the mean score across sub-domains; I agree that of the dimensions I list, decision-making power is the most important sub-dimension. (Though the dimensions are interrelated: If you can’t tightly control group membership, or if there isn’t legal ownership, that limits decision-making power ...
Thanks for this comment, it’s very inspiring!
One thought I had is that do-ocracy (as opposed to “someone will have got this covered, right?”) describes other areas, as well as EA. On the recent 80k podcast, Lennart Heim describes a similar dynamic within AI governance:
“at some point, I would discover that compute seems really important as an input to these AI systems — so maybe just understanding this seems useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s worki...
Honestly, it does seem like it might be challenging, and I welcome ideas on things to do. (In particular, it might be hard without sacrificing lots of value in other ways. E.g. going on big-name podcasts can be very, very valuable, and I wouldn’t want to indefinitely avoid doing that - that would be too big a cost. More generally, public advocacy is still very valuable, and I still plan to be “a” public proponent of EA.)
The lowest-hanging fruit is just really hammering the message to journalists / writers I speak to; but there’s not a super tight corr...
CEA distributes books at scale, right? Seems like offering more different books could boost name recognition of other authors and remove a signal of emphasis on you. This would be far from a total fix, but is very easy to implement.
I haven't kept up with recent books, but back in 2015 I preferred Nick Cooney's intro to EA book to both yours and Peter Singer's, and thought it was a shame it got a fraction of the attention.
Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:
The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on th...
Some quick thoughts:
I'm glad that you are stepping down from EV UK and focusing more on global priorities and cause prioritisation (and engaging on this forum!). I have a feeling, given your philosophy background, that this will move you to focus more where you have a comparative advantage. I can't wait to read what you have to say about AI!
I'm curious about the ways you're thinking of mitigating being seen as the face of/spokesperson for EA
I think this is an excellent question and hasn’t (yet) received the discussion it deserves. Below are a few half-baked thoughts.
The last couple of years have significantly increased my credence that we’ll see explosive growth as a result of AI within the next 20 years. If this happens, it’ll raise a huge number of different challenges; human extinction at the hands of AI is obviously one. But there are others, too, even if we successfully avoid extinction, such as by aligning AI or coordinating to ensure that all powerful AI systems are limited in their ca...
It's very interesting to have your views on this.
Another question: Would you be worried that the impact of humanity on the world (more precisely, industrial civilization) could be net-negative if we aligned AI with human values?
One of my fears is that if we include factory farms in the equation, humanity causes more suffering than wellbeing, simply because animals are more numerous than humans and often have horrible lives. (If we include wild animals, this gets more complicated.)
So if we were to align AI with human values only, this would boost fac...
Will -- many of these AGI side-effects seem plausible -- and almost all are alarming, with extremely high risks of catastrophe and disruption to almost every aspect of human life and civilization.
My main take-away from such thinking is that human individuals and institutions have very poor capacity to respond to AGI disruptions quickly, decisively, and intelligently enough to avoid harmful side-effects. Even if the AGI is technically 'aligned' enough not to directly cause human extinction, its downstream technological, economic, and cultural side-effects s...
Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.
Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 m...
Going to be honest and say that I think this is a perfectly sensible response and I would do the same in Will's position.
Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.
Could you share the link to your last shortform post? (it seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake?)
Thanks for asking! Still not entirely determined - I’ve been planning some time off over the winter, so I’ll revisit this in the new year.
I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.
I’m still in the process of understanding what happened, and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.
I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the rig...
It's not the paramount concern and I doubt you'd want it to be, but I have thought several times that this might be pretty hard for you. I hope you (and all of the Future Fund team and, honestly all of the FTX team) are personally well, with support from people who care about you.
Do you plan to comment in a few weeks, a few months, or not planning to comment publicly? Or is that still to be determined?
Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft.
Thanks Will!
My dad just sent me a video of the Yom Kippur sermon this year (relevant portion starting roughly here) at the congregation I grew up in. It was inspired by longtermism and specifically your writing on it, which is pretty cool. This updates me emotionally toward your broad strategy here, though I'm not sure how much I should update rationally.
Hi - thanks for writing this! A few things regarding your references to WWOTF:
The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
...this still leaves open the qu
>The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones...
I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avo...
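To spell out the probabilistic step with toy numbers (my own illustration of the structure of the argument, using the figures from the comment above; "one in a trillion trillion trillion" is $10^{-36}$):

```latex
% Toy illustration of the objection to lexical views under uncertainty.
% If preventing a suffering life lexically outweighs any amount of positive
% wellbeing, and the view is extended to choices under uncertainty in the
% natural way, then:
\[
\underbrace{10^{-36}}_{\text{tiny probability}} \times \; (\text{1 suffering life prevented})
\;\;\succ\;\;
\underbrace{1}_{\text{certainty}} \times \; (10^{12} \text{ lives of bliss})
\]
% i.e. the gamble with a one-in-a-trillion-trillion-trillion chance of
% preventing a suffering life is ranked above guaranteeing a trillion
% blissful lives, which is the implication being objected to.
```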
The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
You seem to be using a different definition of the Asymmetry than Magnus is, and I'm not sure it's a much more common one. On Magnus's definition (which is also used by e.g. Chappell; Holtug, Nils (2004), "Person-affecting Moralities"; and ...
It’s because we don’t get to control the price - that’s down to the publisher.
I’d love us to set up a non-profit publishing house or imprint that could mean that we would have control over the price.
It would be a very different book if the audience had been EAs. There would have been a lot more on prioritisation (see response to Berger thread above), a lot more numbers and back-of-the-envelope calculations, a lot more on AI, a lot more deep philosophy arguments, and generally more of a willingness to engage in more speculative arguments. I’d have had more of the philosophy essay “In this chapter I argue that..” style, and I’d have put less effort into “bringing the ideas to life” via metaphors and case studies. Chapters 8 and 9, on population ethics a...
Yes, we got extensive advice on infohazards from experts on this and other areas, including from people who have both domain expertise and thought a lot about how to communicate about key ideas publicly given info hazard concerns. We were careful not to mention anything that isn’t already in the public discourse.
To be clear - these are a part of my non-EA life, not my EA life! I’m not sure if something similar would be a good idea to have as part of EA events - either way, I don’t think I can advise on that!
Some sorts of critical commentary are well worth engaging with (e.g. Kieran Setiya’s review of WWOTF); in other cases, where criticism is clearly misrepresentative or strawmanning, I think it’s often best not to engage.
I think it’s a combination of multiplicative factors. Very, very roughly:
To illustrate quantitatively (with normal weekly wellbeing on a +10 to -10 scale) with pretty made-up numbers, it feels like an average week used to b...
Huge question, which I’ll absolutely fail to do proper justice to in this reply! Very briefly, however:
On talking about this publicly
A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.”
Shortly a...
I've had quite a few disagreements with other EAs about this, but I will repeat it here, and maybe get more downvotes. But I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves if it would be wise for us to do differently.
EA is part of a real world which isn't necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do - it impacts our ability to get donations, to carry out projects, to influen...