All of William_MacAskill's Comments + Replies

On talking about this publicly

A number of people have asked why there hasn’t been more communication around FTX. I’ll explain my own case here; I’m not speaking for others. The upshot is that, honestly, I still feel pretty clueless about what would have been the right decisions, in terms of communications, from both me and from others, including EV, over the course of the last year and a half. I do, strongly, feel like I misjudged how long everything would take, and I really wish I’d gotten myself into the mode of “this will all take years.” 

Shortly a... (read more)

I've had quite a few disagreements with other EAs about this, but I will repeat it here, and maybe get more downvotes. I've worked for 20 years in a multinational and I know how companies deal with potential reputational damage, and I think we need to at least ask ourselves whether it would be wise for us to do differently. 

EA is part of a real world which isn't necessarily fair and logical. Our reputation in this real world is vitally important to the good work we plan to do - it impacts our ability to get donations, to carry out projects, to influen... (read more)

7
Nathan Young
7d
Do you think the legal advice was correct? Or does it seem possible to you that it was wrong? If it was worth spending X million on community building, it feels like it may have been worth risking X/5 on lawsuits to avoid quite a lot of frustration. It seems like when there is a crisis, the rationalists perhaps talk too much (the SSC NYT thing, perhaps), but EA elites clam up and suddenly go all "due diligence". I'm not sure that's the right call either. (Not that I would do better.)

Elon Musk

Stuart Buck asks:

“[W]hy was MacAskill trying to ingratiate himself with Elon Musk so that SBF could put several billion dollars (not even his in the first place) towards buying Twitter? Contributing towards Musk's purchase of Twitter was the best EA use of several billion dollars? That was going to save more lives than any other philanthropic opportunity? Based on what analysis?”

Sam was interested in investing in Twitter because he thought it would be a good investment; it would be a way of making more money for him to give away, rather than a way... (read more)

8
titotal
7d
I think this is an indication that the EA community may have had a hard time seeing through tech hype. I don't think this is a good sign now that we're dealing with AI companies who are also motivated to hype and spin. The linked idea is very obviously unworkable. I am unsurprised that Elon rejected it and that no similar thing has taken off. First, as usual, it could be done cheaper and easier without a blockchain. Second, Twitter would be giving people a second place to see their content where they don't see Twitter's ads, thereby shooting themselves in the foot financially for no reason. Third, while Facebook and Twitter could maybe cooperate here, there is no point in an interchange between other sites like TikTok and Twitter, as they are fundamentally different formats. Fourth, there's already a way for people to share tweets on other social media sites: it's called "hyperlinks" and "screenshots". Fifth, how do you delete your bad tweets that are ruining your life if they remain permanently on the blockchain? 

How I publicly talked about Sam 

Some people have asked questions about how I publicly talked about Sam, on podcasts and elsewhere. Here is a list of all the occasions I could find where I publicly talked about him.  Though I had my issues with him, especially his overconfidence, overall I was excited by him. I thought he was set to do a tremendous amount of good for the world, and at the time I felt happy to convey that thought. Of course, knowing what I know now, I hate how badly I misjudged him, and hate that I at all helped improve his re... (read more)

Tiny nit: I didn't and don't read much into the 80k comment on liking nice apartments. It struck me as the easiest way to disclose (imply?) that he lived in a nice place without dwelling on it too much. 

huw
8d

FWIW I find the self-indulgence angle annoying when journalists bring it up; it's reasonable for Sam to have been reckless, stupid, and even malicious without wanting to see personal material gain from it. Moreover, I think it leads others to learn the wrong lessons: as you note in your other comment, the fraud was committed by multiple people with seemingly good intentions; we should be looking more at the non-material incentives (reputation, etc.) and enabling factors of recklessness that led them to justify risks in the service of good outcomes (again, as you do below).

What I heard from former Alameda people 

A number of people have asked about what I heard and thought about the split at early Alameda. I talk about this on the Spencer podcast, but here’s a summary. I’ll emphasise that this is me speaking about my own experience; I’m not speaking for others.

In early 2018 there was a management dispute at Alameda Research. The company had started to lose money, and a number of people were unhappy with how Sam was running the company. They told Sam they wanted to buy him out and that they’d leave if he didn’t accept the... (read more)

Thanks for writing up these thoughts Will, it is great to see you weighing in on these topics.

I’m unclear on one point (related to Elizabeth’s comments) around what you heard from former Alameda employees when you were initially learning about the dispute. Did you hear any concerns specifically about Sam’s unethical behavior, and if so, did these concerns constitute a nontrivial share of the total concerns you heard? 

I ask because in this comment and on Spencer’s podcast (at ~00:13:32), you characterize the concerns you heard about almost identically.... (read more)

My understanding is that this wasn't a benign management dispute, it was an ethical dispute about whether to disclose to investors that Alameda had misplaced $4m. SBF's refusal to do so sure seems of a piece with FTX's later issues. 

It seems there was a lot of information floating around but no one saw it as their responsibility to check whether SBF was fine and there was no central person for information to be given to. Is that correct? 

Has anything been done to change this going forward? 

Jonas V
7d

I broadly agree with the picture and it matches my perception. 

That said, I'm also aware of specific people who held significant reservations about SBF and FTX throughout the end of 2021 (though perhaps not in 2022 anymore), based on information that was distinct from the 2018 disputes. This involved things like:

  • predicting a 10% annual risk of FTX collapsing, with FTX investors and the Future Fund (though not customers) losing all of their money (later edited to: FTX investors, the Future Fund, and possibly customers), 
    • [edit: I checked my prediction logs and I actua
... (read more)

Lessons and updates

The scale of the harm from the fraud committed by Sam Bankman-Fried and the others at FTX and Alameda is difficult to comprehend. Over a million people lost money; dozens of projects’ plans were thrown into disarray because they could not use funding they had received or were promised; the reputational damage to EA has made the good that thousands of honest, morally motivated people are trying to do that much harder. On any reasonable understanding of what happened, what they did was deplorable. I’m horrified by the fact that I was Sam’s... (read more)

2
RobBensinger
6d
Metaculus isn't a prediction market; it's just an opinion poll of people who use the Metaculus website.

Since writing that post, though, I now lean more towards thinking that someone should “own” managing the movement, and that that should be the Centre for Effective Altruism.

I agree with this. Failing that, I feel strongly that CEA should change its name. There are costs to having a leader / manager / "coordinator-in-chief", and costs to not having such an entity; but the worst of both worlds is to have ambiguity about whether a person or org is filling that role. Then you end up with situations like "a bunch of EAs sit on their hands because they expect so... (read more)

  • Going even further on legibly acting in accordance with common-sense virtues than one would otherwise, because onlookers will be more sceptical of people associated with EA than they were before. 
    • Here’s an analogy I’ve found helpful. Suppose it’s a 30mph zone, where almost everyone in fact drives at 35mph. If you’re an EA, how fast should you drive?  Maybe before it was ok to go at 35, in line with prevailing norms. Now I think we should go at 30.

 

Wanting to push back against this a little bit:

  • The big issue here is that SBF was reckless
... (read more)

There are very strong consequentialist reasons for acting with integrity

 

we should be a lot more benevolent and a lot more intensely truth-seeking than common-sense morality suggests

It concerns me a bit that when legal risk appears, suddenly everyone gets very pragmatic in a way that I am not sure feels the same as integrity or truth-seeking. It feels a bit similar to how pragmatic we all were around FTX during the boom. Feels like in crises we get a bit worse at truth-seeking and integrity, though I guess many communities do. (Sometimes it feels ... (read more)

Hi Yarrow (and others on this thread) - this topic comes up on the Clearer Thinking podcast, which comes out tomorrow. As Emma Richter mentions, the Clearer Thinking podcast is aimed more at people in or related to EA, whereas Sam Harris's wasn't; it was up to him what topics he wanted to focus on. 

Thanks! Didn't know you're sceptical of AI x-risk. I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally. 

7
David Mathers
3mo
Yeah. I actually work on it right now (governance/forecasting, not technical stuff obviously) because it's the job that I managed to get when I really needed a job (and it's interesting), but I remain personally skeptical. Though it is hard to tell the difference in such a speculative context between 1 in 1000 (which probably means it is actually worth working on in expectation, at least if you expect X-risk to drop dramatically if AI is negotiated successfully and have totalist sympathies in population ethics) and 1 in 1 million* (which might look worth working on in expectation if taken literally, but is probably really a signal that it might be way lower for all you know). I don't have anything terribly interesting to say about why I'm skeptical: just boring stuff about how prediction is hard, and your prior should be low on a very specific future path, and social epistemology worries about bubbles and ideas that pattern-match to religious/apocalyptic, combined with a general feeling that the AI risk stuff I have read is not rigorous enough to (edit, missing bit here) overcome my low prior.

'I wonder if there's a correlation between being a philosopher and having low AI x-risk estimates; it seems that way anecdotally.'

I hadn't heard that suggested before. But you will have a much better idea of the distribution of opinion than me. My guess would be that the divide will be LW/rationalist versus not. "Low" is also ambiguous of course: compared to MIRI people, or even someone like Christiano, you, or Joe Carlsmith probably have "low" estimates, but they are likely a lot higher than AI X-risk "skeptics" outside EA.

*Seems too low to me, but I am of course biased. 

Thanks so much for those links, I hadn't seen them! 

(So much AI-related stuff coming out every day, it's so hard to keep on top of everything!)

This is a quick post to talk a little bit about what I’m planning to focus on in the near and medium-term future, and to highlight that I’m currently hiring for a joint executive and research assistant position. You can read more about the role and apply here! If you’re potentially interested, hopefully the comments below can help you figure out whether you’d enjoy the role. 

Recent advances in AI, combined with economic modelling (e.g. here), suggest that we might well face explosive AI-driven growth in technological capability in the next d... (read more)

Hi Will,

What is especially interesting here is your focus on an all hazards approach to Grand Challenges. Improved governance has the potential to influence all cause areas, including long-term and short-term, x-risks, and s-risks. 

Here at the Odyssean institute, we’re developing a novel approach to these deep questions of governing Grand Challenges. We’re currently running our first horizon scan on tipping points in global catastrophic risk and will use this as a first step of a longer-term process which will include Decision Making under Deep U... (read more)

6
Spencer Becker-Kahn
3mo
Perhaps at the core there is a theme here that comes up a lot which goes a bit like: Clearly there is a strong incentive to 'work on' any imminent and unavoidable challenge whose resolution could require or result in "hard-to-reverse decisions with important and long-lasting consequences". Current x-risks have been established as sort of the 'most obvious' such challenges (in the sense that making wrong decisions potentially results in extinction, which obviously counts as 'hard-to-reverse' and the consequences of which are 'long-lasting'). But can we think of any other such challenges or any other category of such challenges? I don't know of any others that I've found anywhere near as convincing as the x-risk case, but I suppose that's why the example project on case studies could be important? Another thought I had is kind of: Why might people who have been concerned about x-risk from misaligned AI pivot to asking about these other challenges? (I'm not saying Will counts as 'pivoting' but just generally asking the question). I think one question I have in mind is: Is it because we have already reached a point of small (and diminishing) returns from putting today's resources into the narrower goal of reducing x-risk from misaligned AI?
8
jackva
3mo
Thanks for the update, Will! As you are framing the choice between work on alignment and work on grand challenges/non-alignment work needed under transformative AI, I am curious how you think about pause efforts as a third class of work. Is this something you have thoughts on?

As someone who is a) skeptical of X-risk from AI, but b) thinks there is a non-negligible (even if relatively low, maybe 3-4%) chance we'll see 100 years of progress in 15 years at some point in the next 50 years, I'm glad you're looking at this. 

9
Ryan Greenblatt
3mo
FWIW many people are already very interested in capability evaluations related to AI acceleration of AI R&D.  For instance, at the UK AI Safety Institute, the Loss of Control team is interested in these evaluations. Some quotes: Introducing the AI Safety Institute: Jobs

I'm really excited about Zach coming on board as CEA's new CEO! 

Though I haven't worked with him a ton, the interactions I have had with him have been systematically positive: he's been consistently professional, mission-focused and inspiring. He helped lead EV US well through what was a difficult time, and I'm really looking forward to seeing what CEA achieves under his leadership!

Thank you so much for your work with EV over the last year, Howie! It was enormously helpful to have someone so well-trusted, with such excellent judgment, in this position. I’m sure you’ll have an enormous positive impact at Open Phil.

And welcome, Rob - I think it’s fantastic news that you’ve taken the role!

I mentioned a few months ago that I was planning to resign from the board of EV UK: I’ve now officially done so.

Since last November, I’ve been recused from the board on all matters associated with FTX and related topics, which has ended up being a large proportion of board business. (This is because the recusal affected not just decisions that were directly related to the collapse of FTX, but also many other decisions for which the way EV UK has been affected by the collapse of FTX was important context.) I know I initially said that I’d wait for ther... (read more)

Thanks so much for all your hard work on CEA/EV over the many years. You have been such a driving force over the years in developing the ideas, the community, and the institutions we needed to help make it all work well. Much of that work over the years has happened through CEA/EV, and before that through Giving What We Can and 80,000 Hours before we'd set up CEA to house them, so this is definitely in some sense the end of an era for you (and for EV). But a lot of your intellectual work and vision has always transcended the particular organisations and I'm really looking forward to much more of that to come!

Will - of course I have some lingering reservations but I do want to acknowledge how much you've changed and improved my life.

You definitely changed my life by co-creating Centre for Effective Altruism, which played a large role in organizations like Giving What We Can and 80,000 Hours, which is what drew me into EA. I was also very inspired by "Doing Good Better".

To get more personal -- you also changed my life when you told me in 2013 pretty frankly that my original plan to pursue a Political Science PhD wasn't very impactful and that I should consider 8... (read more)

Thanks so much for your work, Will! I think this is the right decision given the circumstances and that will help EV move in a good direction. I know some mistakes were made but I still want to recognize your positive influence.

I'm eternally grateful for getting me to focus on the question of "how to do the most good with our limited resources?". 

I remember how I first heard about EA.

The unassuming flyer taped to the philosophy building wall first caught my eye: “How to do the most good with your career?”

It was October 2013, midterms week at Tufts Uni... (read more)

Thanks for all of your hard work on EV, Will! I’ve really appreciated your individual example of generosity and commitment, boldness, initiative-taking, and leadership. I feel like a lot of things would happen more slowly or less ambitiously - or not at all - if it weren’t for your ability to inspire others to dive in and act on the courage of their convictions. I think this was really important for Giving What We Can, 80,000 Hours, Centre for Effective Altruism, the Global Priorities Institute, and your books. Inspiration, enthusiasm, and positivity from you have been a force-multiplier on my own work, and in the lives of many others that I have worked with. I wish you all the best in your upcoming projects.

Thank you for all of your hard work over many years, Will. I've really valued your ability to slice through strategic movement-building questions, your care and clear communication, your positivity, and your ability to simply inspire massive projects off the ground. I think you've done a lot of good. I'm excited for you to look after yourself, reflect on what's next, and keep working towards a better world.

Thank you for all your work, and I'm excited for your ongoing and future projects, Will - they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank - I know you won't be alone in having found this last year challenging mental-health-wise, and it can help to hear others be open about it.

Thanks for all your work over the last 11 years Will, and best of luck on your future projects. I have appreciated your expertise on and support of EA qua EA, and would be excited about you continuing to support that.

(My personal views only, and like Nick I've been recused from a lot of board work since November.)

Thank you, Nick, for all your work on the Boards over the last eleven years. You helped steward the organisations into existence, and were central to helping them flourish and grow. I’ve always been impressed by your work ethic, your willingness to listen and learn, and your ability to provide feedback that was incisive, helpful, and kind.

Because you’ve been less in the limelight than me or Toby, I think many people don’t know just how crucial a role you playe... (read more)

Hey,

I’m really sorry to hear about this experience. I’ve also experienced what feels like social pressure to have particular beliefs (e.g. around non-causal decision theory, high AI x-risk estimates, other general pictures of the world), and it’s something I also don’t like about the movement. My biggest worries with my own beliefs stem around the worry that I’d have very different views if I’d found myself in a different social environment. It’s just simply very hard to successfully have a group of people who are trying to both figure out what’s corr... (read more)

At the moment, I’m pretty worried that, on the current trajectory, AI safety will end up eating EA. Though I’m very worried about what the next 5-10 years will look like in AI, and though I think we should put significantly more resources into AI safety even than we have done, I still think that AI safety eating EA would be a major loss.

I wonder how this would look different from the current status quo:

  • Wytham Abbey cost £15m, and its site advertises it as basically being primarily for AI/x-risk use (as far as I can see it doesn't advertise what it's been u
... (read more)

What should be done? I have a few thoughts, but my most major best guess is that, now that AI safety is big enough and getting so much attention, it should have its own movement, separate from EA.

Or, the ideal form for the AI safety community might not be a "movement" at all! This would be one of the most straightforward ways to ward off groupthink and related harms, and it has been possible for other cause areas; for instance, global health work mostly doesn't operate as a social movement.

As someone who is extremely pro investing in big-tent EA, my question is, "what does it look like, in practice, to implement 'AI safety...should have its own movement, separate from EA'?"

I do think it is extremely important to maintain EA as a movement centered on the general idea of doing as much good as we can with limited resources. There is serious risk of AIS eating EA, but the answer to that cannot be to carve AIS out of EA. If people come to prioritize AIS from EA principles, as I do, I think it would be anathema to the movement to try to push their... (read more)

Most of the researchers at GPI are pretty sceptical of AI x-risk.


Not really responding to the comment (sorry), just noting that I'd really like to understand why these researchers at GPI and careful-thinking AI alignment people - like Paul Christiano - have such different risk estimates!  Can someone facilitate and record a conversation? 

This isn't answering the question you ask (sorry), but one possible response to this line of criticism is for some people within EA / longtermism to more clearly state what vision of the future they are aiming towards. Because this tends not to happen, it means that critics can attribute to people particular visions that they don't actually hold. In particular, critics of WWOTF often thought that I was trying to push for some particular narrow vision of the future, whereas really the primary goal, in my mind at least, is to keep our options open as much as po... (read more)

1
aprilsun
8mo
What's wrong with the Long Reflection and Paretopia? I think they're great! A name doesn't have to reference all key aspects of the thing - you can just pick one. And reflecting is what people will actually be doing, so it's a good one to pick. We can still talk about the need for the Long Reflection to be a time of existential security, keeping options open and ending unnecessary suffering. And then Paretopia just sounds like a better version of Paretotopia.  But if you're sure these won't work, I vote Pretopia and Potatopia.

This is a good point, and it's worth pointing out that increasing v̄ is always good whereas increasing τ is only good if the future is of positive value. So risk aversion reduces the value of increasing τ relative to increasing v̄, provided we put some probability on a bad future.

Agree this is worth pointing out! I've a draft paper that goes into some of this stuff in more detail, and I make this argument. 

Another potential argument for trying to improve  is that, plausibly at least, the value lost as a r... (read more)

Existential risk, and an alternative framework


One common issue with “existential risk” is that it’s so easy to conflate it with “extinction risk”. It seems that even you end up falling into this use of language. You say: “if there were 20 percentage points of near-term existential risk (so an 80 percent chance of survival)”. But human extinction is not necessary for something to be an existential risk, so 20 percentage points of near-term existential risk doesn’t entail an 80 percent chance of survival. (Human extinction may also not be sufficient for exis... (read more)

7
Toby_Ord
9mo
I think this is a useful two factor model, though I don't quite think of avoiding existential risk just as increasing τ. I think of it more as increasing the probability that it doesn't just end now, or at some other intermediate point. In my (unpublished) extensions of this model that I hint at in the chapter, I add a curve representing the probability of surviving to time t (or beyond), and then think of raising this curve as intervening on existential risk. 
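One natural way to formalise the extension described above (a sketch of one possible reading, not the published model): write S(t) for the probability that humanity survives, with its potential intact, to time t, and v(t) for the instantaneous value conditional on survival. Then

\[
\mathbb{E}[V] \;=\; \int_0^{\infty} S(t)\,v(t)\,\mathrm{d}t ,
\]

so reducing existential risk corresponds to raising the curve S(t), while enhancements and gains act on v(t) directly.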
7
Toby_Ord
9mo
In this case I meant 'an 80 percent chance of surviving the threat with our potential intact', or of 'our potential surviving the threat'. While this framework is slightly cleaner with extinction risk instead of existential risk (i.e. the curve may simply stop), it can also work with existential risk, as while the curve continues after some existential catastrophes, it usually only sweeps out a small area. This does raise a bigger issue if the existential catastrophe is that we end up with a vastly negative future, as then the curve may continue in very important ways after that point. (There are related challenges pointed out by another commenter where our impacts on the intrinsic value of other animals may also continue after our extinction.) These are genuine challenges (or limitations) for the current model. One definitely can overcome them, but the question would be the best way to do so while maintaining analytic tractability.
8
JackM
9mo
This is a good point, and it's worth pointing out that increasing v̄ is always good whereas increasing τ is only good if the future is of positive value. So risk aversion reduces the value of increasing τ relative to increasing v̄, provided we put some probability on a bad future.

What do you mean by civilisation? Maybe I'm nitpicking but it seems that even if there is a low upper bound on value for a civilisation, you may still be able to increase v̄ by creating a greater number of civilisations e.g. by spreading further in the universe or creating more "digital civilisations".
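For concreteness, a minimal sketch of the decomposition being discussed (assuming, as in the chapter's framework, that the value of the future is the integral of v(t) over its duration τ, with v̄ the average value):

\[
V \;=\; \int_0^{\tau} v(t)\,\mathrm{d}t \;=\; \bar{v}\,\tau ,
\qquad
\frac{\partial V}{\partial \bar{v}} \;=\; \tau \;>\; 0 ,
\qquad
\frac{\partial V}{\partial \tau} \;=\; \bar{v} .
\]

Increasing v̄ always adds value, but increasing τ adds value only when v̄ > 0, which is why risk aversion (plus some probability of a bad future) discounts τ-increasing interventions.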

Flow vs fixed resources

In footnote 14 you say: “It has also been suggested (Sandberg et al 2016, Ord 2021) that the ultimate physical limits may be set by a civilisation that expands to secure resources but doesn’t use them to create value until much later on, when the energy can be used more efficiently. If so, one could tweak the framework to model this not as a flow of intrinsic value over time, but a flow of new resources which can eventually be used to create value.”

This feels to me like it would really be changing the framework considerably, rather t... (read more)

7
Toby_Ord
9mo
You may be right that this is more than a 'tweak'. What I was trying to imply is that the framework is not wildly different. You still have graphs, integrals over time, decomposition into similar variables etc — but they can behave somewhat differently. In this case, the resources approach is tracking what matters (according to the cited papers) faithfully until expansion has ended, but then is indifferent to what happens after that, which is a bit of an oversimplification and could cause problems. I like your example of speed-up in this context of large-scale interstellar settlement, as it also brings another issue into sharp relief. Whether thinking in terms of my standard framework or the 'tweaked' one, you are only going to be able to get a pure speed-up if you increase the travel speed too. So simply increasing the rate of technological (or social) progress won't constitute a speed-up. This happens because in this future, progress ceases to be the main factor setting the rate at which value accrues. 

Humanity

Like the other commenter says, I feel worried that v(.) refers to the value of “humanity”. For similar reasons, I feel worried that existential risk is defined in terms of humanity’s potential.

One issue is that it’s vague what counts as “humanity”. Homo sapiens count, but what about:

  • A species that Homo sapiens evolves into 
  • “Uploaded” humans
  • “Aligned” AI systems
  • Non-aligned AI systems that nonetheless produce morally valuable or disvaluable outcomes

I’m not sure where you draw the line, or if there is a principled place to draw the ... (read more)

2
Toby_Ord
9mo
The term 'humanity' is definitely intended to be interpreted broadly. I was more explicit about this in The Precipice and forgot to reiterate it in this paper. I certainly want to include any worthy successors to Homo sapiens. But it may be important to understand the boundary of what counts. A background assumption is that the entities are both moral agents and moral patients - capable of steering the future towards what matters and of being intrinsically part of what matters. I'm not sure if those assumptions are actually needed, but they were guiding my thought. I definitely don't intend to include alien civilisations or future independent earth-originating intelligent life. The point is to capture the causal downstream consequences of things in our sphere of control. So the effects of us on alien civilisations should be counted, and any effects we have on whether any earth species evolves after us, but it isn't meant to be a graph of all value in the universe. My methods wouldn't work for that, as we can't plausibly speed that up, or protect it all etc (unless we were almost all the value anyway).

Humanity

Like the other commenter says, I feel worried that v(.) refers to the value of “humanity”. For similar reasons, I feel worried that existential risk is defined in terms of humanity’s potential.

One issue is that it’s vague what counts as “humanity”. Homo sapiens count, but what about:

  • A species that Homo sapiens evolves into 
  • “Uploaded” humans
  • “Aligned” AI systems
  • Non-aligned AI systems that nonetheless produce morally valuable or disvaluable outcomes

I’m not sure where you draw the line, or if there is a principled place to draw the ... (read more)

Humanity

Like the other commenter says, I feel worried that v(.) refers to the value of “humanity”. For similar reasons, I feel worried that existential risk is defined in terms of humanity’s potential.

One issue is that it’s vague what counts as “humanity”. Homo sapiens count, but what about:

  • A species that Homo sapiens evolves into 
  • “Uploaded” humans
  • “Aligned” AI systems
  • Non-aligned AI systems that nonetheless produce morally valuable or disvaluable outcomes

I’m not sure where you draw the line, or if there is a principled place to draw the ... (read more)

1
aprilsun
8mo
I saw someone had downvoted this. I think it's because you posted it three times.

Enhancements

I felt like the paper gave enhancements short shrift. As you note, they are the intervention that most plausibly competes with existential risk reduction, as they scale with τ.

You say: “As with many of these idealised changes, they face the challenge of why this wouldn’t happen eventually, even without the current effort. I think this is a serious challenge for many proposed enhancements.”

I agree that this is a serious challenge, and that one should have more starting scepticism about the persistence of enhancements compared with ... (read more)

4
Toby_Ord
9mo
I think I may have been a bit too unclear about which things I found more promising than others. Ultimately the chapter is more about the framework, with a few considerations added for and against each of the kinds of idealised changes, and no real attempt to be complete about those or make all-things-considered judgments about how to rate them. Of the marginal interventions I discuss, I am most excited about existential-risk reduction, followed by enhancements. As to your example, I feel that I might count the point where the world became permanently controlled by preference utilitarians as an existential catastrophe - locking in an incorrect moral system forever. In general, lock-in is a good answer for why things might not happen later if they don't happen now, but too much lock-in of too big a consequence is what I call an existential catastrophe. So your example is good as a non-*extinction* case, but to find a non-existential one, you may need to look for examples that are smaller in size, or perhaps only partly locked-in?

Gains

“While the idea of a gain is simple — a permanent improvement in instantaneous value of a fixed size — it is not so clear how common they are.”

I agree that gains aren’t where the action is, when it comes to longterm impact. Nonetheless, here are some potential examples:

  • Species loss / preservation of species
  • Preservation of historical information
  • Preservation of Earth’s ecosystem
  • Preservation of areas of natural importance, like Olympus Mons

These plausibly have two sources of longterm value. The first is that future agents might have slightly better lives... (read more)

9
Toby_Ord
9mo
As you say, there is an issue that some of these things might really be enhancements because they aren't of a fixed size. This is especially true for those that have instrumental effects on the wellbeing of individuals, since if those effects increase with total population or with the wellbeing level of those individuals, then they can be enhancements. So cases where there is a clearly fixed effect per person and a clearly fixed number of people who benefit would be good candidates. As are cases where the thing is of intrinsic non-welfarist value. Though there is also an issue that I don't know how the intrinsic value of art, environmental preservation, species types existing, or knowledge is supposed to interact with time. Is it twice as good to have a masterpiece or landscape or species or piece of knowledge for twice as long? It plausibly is. So at least on accounts of value where things scale like that, there is the possibility of acting like a gain. Another issue is if the effects don't truly scale with the duration of our future. For example, on the longest futures that seem possible (lasting far beyond the lifetime of the Sun), even a well preserved site may have faded long before our end point. So many candidates might act like gains on some durations of our future, but not others.

Speed-ups

You write: “How plausible are speed-ups? The broad course of human history suggests that speed-ups are possible,” and, “though there is more scholarly debate about whether the industrial revolution would have ever happened had it not started in the way it did. And there are other smaller breakthroughs, such as the phonetic alphabet, that only occurred once and whose main effect may have been to speed up progress. So contingent speed-ups may be possible.”

This was the section of the paper I was most surprised / confused by. You seemed open to speed-... (read more)

7
Toby_Ord
9mo
I think "open to speed-ups" is about right. As I said in the quoted text, my conclusion was that contingent speed-ups "may be possible". They are not an avenue for long-term change that I'm especially excited about. The main reason for including them here was to distinguish them from advancements (these two things are often run together) and because they fall out very natural as one of the kinds of natural marginal change to the trajectory whose value doesn't depend on the details of the curve.  That said, it sounds like I think they are a bit more likely to be possible than you do. Here are some comments on that. One thing is that it is easier to have a speed-up relative to another trajectory than to have one that is contingent — which wouldn't have happened otherwise. Contingent speed-ups are the ones of most interest to longtermists, but those that are overdetermined to happen are still relevant to studying the value of the future and where it comes from. e.g. if the industrial revolution was going to happen anyway, then the counterfactual value of it happening in the UK in the late 1700s may be small, but it is still an extremely interesting event in terms of dramatically changing the rate of progress from then onwards compared to a world without an industrial revolution. Even if v(.) hits a plateau, you can still have a speed-up, it is just that it only has an impact on the value achieved before we would have hit the value anyway. That could be a large change (e.g. if the plateau isn't reached in the first 1% of our lifetime), but even if it isn't, that doesn't stop this being a speed-up, it is just that changing the speed of some things isn't very valuable, which is a result that is revealed by the framework. Suppose v(.) stops growing exponentially with progress and most of its increase is then governed by growing cubically as a humanity's descendants settle the cosmos. Expanding faster (e.g. by achieving a faster travel speed of 76%c instead of 75%c) cou

Advancements 

I broadly agree with the upshots you draw, but here are three points that make things a little more complicated:

Continued exponential growth

As you note: (i) if v(.) continues exponentially, then advancements can compete with existential risk reduction; (ii) such continued exponential growth seems very unlikely.

However, it seems above 0 probability that we could have continued exponential growth in v(.) forever, including at the end point (and perhaps even at a very fast rate, like doubling every year).  And, if so, then the total val... (read more)

7
Toby_Ord
9mo
 Good point about the fact that I was focusing on some normal kind of economic trajectory when assessing the difficulty of advancements and delays. Your examples are good, as is MichaelStJules' comment about how changing the timing of transformative AI might act as an advancement.
7
Toby_Ord
9mo
>Aliens

You are right that the presence or absence of alien civilisations (especially those that expand to settle very large regions) can change things. I didn't address this explicitly because (1) I think it is more likely that we are alone in the affectable universe, and (2) there are many different possible dynamics for multiple interacting civilisations and it is not clear what is the best model. But it is still quite a plausible possibility and some of the possible dynamics are likely enough and simple enough that they are worth analysing.

I'm not sure about the details of your calculation, but have thought a bit about it in terms of Jay Olson's model of cosmological expanding civilisations (which is roughly how Anders and I think of it, and similar to the model Hanson et al. independently came up with). On this model, if civilisations expand at a constant fraction of c (which we can call f), the average distance between independently arising civilisations is D light years, and civilisations permanently hold all locations they reach first, then delaying by 1 year loses roughly 3f/D of the resources they could have reached. So if D were 1 billion light years, and f were close to 1, then a year's delay would lose roughly 1 part in 300 million of the resources. So on my calculation, it would need to be an average distance of about 3 million light years or less, to get the fraction lost down to 1 part in 1 million. And at that point, the arrangement of galaxies makes a big difference. But this was off-the-cuff and I could be overlooking something.
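As a rough check of the arithmetic above, under the stated assumptions (expansion speed f as a fraction of c, average separation D in light years):

\[
\frac{3f}{D} \;\approx\; \frac{3 \times 1}{10^{9}} \;=\; 3\times 10^{-9} \;\approx\; \frac{1}{3.3\times 10^{8}}
\quad \text{(about 1 part in 300 million per year of delay)},
\]

and requiring the loss to be at most 1 part in a million gives

\[
\frac{3f}{D} \;\le\; 10^{-6}
\;\;\Longleftrightarrow\;\;
D \;\le\; 3f \times 10^{6} \;\approx\; 3 \text{ million light years}.
\]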
7
Toby_Ord
9mo
>Continued exponential growth

I agree that there is a kind of Pascalian possibility of very small probabilities of exponential growth in value going for extremely long times. If so, then advancements scale in value with v̄ and with τ. This isn't enough to make them competitive with existential risk reduction ex ante, as they are still down-weighted by the very small probability. But it is perhaps enough to cause some issues. Worse is that there is a possibility of growth in value that is faster than an exponential, and this can more than offset the very small probability. This feels very much like Pascal's Mugging and I'm not inclined to bite the bullet and seek out or focus on outcomes like this. But nor do I have a principled answer to why not. I agree that it is probably useful to put under the label of 'fanaticism'. 

Hi Toby,

Thanks so much for doing and sharing this! It’s a beautiful piece of work - characteristically clear and precise.

Remarkably, I didn’t know you’d been writing this, or had an essay coming out that volume! Especially given that I’d been doing some similar work, though with a different emphasis. 

I’ve got a number of thoughts, which I’ll break into different comments.

Thanks Will, these are great comments — really taking the discussion forwards. I'll try to reply to them all over the next day or so.

Strong upvote on this - it’s an issue that a lot of people have been discussing, and I found the post very clear!

There’s lots more to say, and I only had time to write something quickly but one consideration is about division of effort with respect to timelines to transformative AI. The longer AI timelines are, the more plausible principles-led EA movement-building looks.

Though I’ve updated a lot in the last couple of years on transformative-AI-in-the-next-decade, I think we should still put significant probability mass on “long” timelines (e.g. more than ... (read more)

4
Chris Leong
9mo
This might be true for technical work. It's less true for things like trying to organise a petition or drum up support for a protest.

Thanks for this comment; I found it helpful and agree with a lot of it. I expect the "university groups are disproportionately useful in long timelines worlds" point to be useful to a lot of people.

On this bit:

EA is more adaptive over time... This is much more likely to be relevant in long timelines worlds

This isn't obvious to me. I would expect that short timeline worlds are just weirder and changing more rapidly in general, so being adaptive is more valuable. 

Caricature example: in a short timeline world we have one year from the first sentient LLM ... (read more)

6
JP Addison
10mo
I like this comment a whole bunch. I've recently started thinking of this as a playing-to-your-outs strategy, though without the small probabilities that that implies. One other factor in favor of believing that long timelines might happen, and that those worlds might be good worlds to focus on, would be that starting very recently it's begun to look possible to actually slow down AI. In those worlds, it's presumably easier to pay an alignment tax, which makes those worlds more likely to survive.

Thanks! I agree that we are already (kind of) doing most of these things. So the question is whether further centralisation is tractable (and desirable). Like I say, it seems to me the big thing is whether there's someone, or some group of people, who really wants to make that further centralisation happen. (E.g. I don’t think I’d be the right person even if I wanted to do it.)

Some things I didn't understand from your bullet-point list:


Having most of the resources come from one place


By “resources” do you primarily mean funding?  (I'll assume ... (read more)

Yeah, sorry, I wrote the comment quickly and "resources" was overloaded. My first reference to resources was intended to be money; the second was information like career guides and such.

I think the critical-info-in-private thing is actually super impactful towards centralization, because when the info leaks, the "decentralized people" have a high-salience moment where they realize that what's happening privately isn't what they thought was happening publicly; they feel slightly lied to or betrayed, and lose perceived empowerment and engagement.

Maybe what’s going on here is vagueness, and me being unclear.

Jeff’s clarification is helpful. I could have just dropped “part of the EA movement or” and the sentence would have been clearer and better. 

The key thing I was meaning in this context is: “Is a project engaging in EA movement-building, such that it would make sense that they at least potentially have obligations or responsibilities towards the EA movement as a whole?” The answer is clearly “no” for LEEP (for example), and “yes” for CEA. On that question, I would say “no” for GovAI, Lo... (read more)

“​to the extent that the text above is breaking down centralization into sub-dimensions, and then impliedly taking something like the mean score of sub-domains to generate an overall centralization score.”

Thanks for pointing this out! I didn't intend my post to be taking the mean score across sub-domains; I agree that of the dimensions I list, decision-making power is the most important sub-dimension. (Though the dimensions are interrelated: If you can’t tightly control group membership, or if there isn’t legal ownership, that limits decision-making power ... (read more)

5
Jason
10mo
I think it is toward the more centralized side of the spectrum than that. I tentatively place it somewhere between the Scouts and sports organizations.

The Spectrum

I think the spectrum makes sense, with two caveats. First, these organizations/movements differ on certain sub-dimensions. Reasonable people could come up with different rankings based on how they weight the various sub-dimensions. Second, in some examples there are significant differences between the centralization of the organization per se and how much influence that organization has over a broader field of human activity. I think we're mainly trying to figure out how centralized EA is as a field of endeavor, not as an organization (since it isn't one). Thus, my model gives significant weight to the field-influence interpretation, especially by considering how feasible it is to seriously practice the field of activity (e.g., basketball) apart from the centralized structure. However, I've tried not to write off the organizational level entirely.

Comparison to Relatively Decentralized Religious Groups

I'll take the Southern Baptist Convention (in the US) as an example of a "fairly decentralised religious group." It is on the decentralized end of Protestantism (which was one of your examples), but that seems fair if EA is to be placed between those groups and some social movements. In addition, there are a large number of Baptist and other churches in the US that aren't part of anything larger than themselves, or are part of networks even weaker than the SBC. Recently, the SBC kicked out some churches for having female pastors. Getting kicked out of the SBC is very rare, which is itself evidence of lower control, but the main consequence for those churches is basically . . . they can't advertise themselves as part of the SBC. There's no trademark on "Baptist," or centralized control over who joins an SBC church (that is decided by leaders in the individual church). Movement in/out of the SBC,

Thanks for this comment, it’s very inspiring!

One thought I had is that do-ocracy (as opposed to “someone will have got this covered, right?”) describes other areas, as well as EA. On the recent 80k podcast, Lennart Heim describes a similar dynamic within AI governance:

“at some point, I would discover that compute seems really important as an input to these AI systems — so maybe just understanding this seems useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s worki... (read more)

Honestly, it does seem like it might be challenging, and I welcome ideas on things to do. (In particular, it might be hard without sacrificing lots of value in other ways. E.g. going on big-name podcasts can be very, very valuable, and I wouldn’t want to indefinitely avoid doing that - that would be too big a cost. More generally, public advocacy is still very valuable, and I still plan to be “a” public proponent of EA.)

The lowest-hanging fruit is just really hammering the message to journalists / writers I speak to; but there’s not a super tight corr... (read more)

4
Charlie_Guthmann
10mo
Have you thought about not doing interviews?

CEA distributes books at scale, right? Seems like offering more different books could boost name recognition of other authors and remove a signal of emphasis on you. This would be far from a total fix, but is very easy to implement.

I haven't kept up with recent books, but back in 2015 I preferred Nick Cooney's intro to EA book to both yours and Peter Singer's, and thought it was a shame it got a fraction of the attention.

Thanks! And I agree re comparative advantage!

Hey - I’m starting to post and comment more on the Forum than I have been, and you might be wondering about whether and when I’m going to respond to questions around FTX. So here’s a short comment to explain how I’m currently thinking about things:

The independent investigation commissioned by EV is still ongoing, and the firm running it strongly preferred me not to publish posts on backwards-looking topics around FTX while the investigation is still in-progress. I don’t know when it’ll be finished, or what the situation will be like for communicating on th... (read more)

Some quick thoughts:

  • Thanks for your work, it's my sense you work really really hard and have done for a long time. Thank you
  • Thanks for the emotional effort. I guess that at times your part within EA is pretty sad, tiring, stressful. I'm sad if that happens to you
  • I sense you screwed up in trusting SBF and in someone not being on top of where the money was moving in FTXFF. It was an error. Seems worth calling an L an L (a loss a loss). This has caused a little harm to me personally and I forgive you. Sounds fatuous but feels important to say. I'm pretty conf
... (read more)

I'm glad that you are stepping down from EV UK and focusing more on global priorities and cause prioritisation (and engaging on this forum!). I have a feeling, given your philosophy background, that this will move you to focus more where you have a comparative advantage. I can't wait to read what you have to say about AI!

I'm curious about ways you might mitigate being seen as the face of/spokesperson for EA

I think this is an excellent question and hasn’t (yet) received the discussion it deserves. Below are a few half-baked thoughts.

The last couple of years have significantly increased my credence that we’ll see explosive growth as a result of AI within the next 20 years. If this happens, it’ll raise a huge number of different challenges; human extinction at the hands of AI is obviously one. But there are others, too, even if we successfully avoid extinction, such as by aligning AI or coordinating to ensure that all powerful AI systems are limited in their ca... (read more)

1
vaniver
10mo
Might be a good time to update Are We Living At The Most Influential Time in History?.
8
AnonymousTurtle
11mo
So nice to see you back on the forum! I agree with most of your comment, but I am very surprised by some points:

Does this mean that you consider plausible an improvement in productivity of ~100,000x in a 5-year period in the next 20 years? As in, one hour of work would become more productive than 40 years of full-time work 5 years earlier? That seems significantly more transformative than most people would find plausible.

I'm really surprised to read this. Wouldn't interstellar travel close to the speed of light require a huge amount of energy, and a level of technological transformation that again seems much higher than most people expect? At that point it seems unlikely that concepts like "defense-dominant" or "controlling resources" (I assume the matter of the systems?) would still be meaningful, or at least in a way predictable enough to make regulation written before the transformation useful.

If AI goes badly, you could make the exact same argument in the opposite direction. Wouldn't those two effects cancel out, given that we're so uncertain about AI's effects on humans?

I don't understand the theory of change for people at AI labs impacting the global factory farming market (including CEOs, but especially the technical staff). After some quick googling, the global factory farming market size is around 2 trillion dollars. Being able to influence that significantly would imply a valuation of AI labs that's very significantly larger than the one implied by the current market.
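As a quick sense check of the ~100,000x figure (assuming roughly 2,000 working hours per year, a number not given in the comment above):

\[
40 \text{ years} \times 2{,}000 \ \tfrac{\text{hours}}{\text{year}} \;=\; 80{,}000 \text{ hours} \;<\; 100{,}000 ,
\]

so a ~100,000x productivity multiplier would indeed make one hour of work more productive than 40 years of earlier full-time work.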

It's very interesting to have your views on this.

Another question: Would you be worried that the impact of humanity on the world (more precisely, industrial civilization) could be net-negative if we aligned AI with human values?

One of my fears is that if we include factory farms in the equation, humanity causes more suffering than wellbeing, simply because animals are more numerous than humans and often have horrible lives. (If we include wild animals, this gets more complicated.) 
So if we were to align AI with human values only, this would boost fac... (read more)

3
David Mathers
11mo
'Relatedly, laws around capital ownership. If almost all economic value is created by AI, then whoever owns the aligned AI (and hardware, data, etc) would have almost total economic power. Similarly, if all military power is held by AI, then whoever owns the AI would have almost total military power. In principle this could be a single company or a small group of people. We could try to work on legislation in advance to more widely share the increased power from aligned AI.'

I'm a bit worried that even if on paper ownership of AI is somehow spread over a large proportion of the population, people who literally control the AI could just ignore this. 

Will -- many of these AGI side-effects seem plausible -- and almost all are alarming, with extremely high risks of catastrophe and disruption to almost every aspect of human life and civilization.

My main take-away from such thinking is that human individuals and institutions have very poor capacity to respond to AGI disruptions quickly, decisively, and intelligently enough to avoid harmful side-effects. Even if the AGI is technically 'aligned' enough not to directly cause human extinction, its downstream technological, economic, and cultural side-effects s... (read more)

2
RAB
11mo
On point 2, re: defense-dominant vs. offense-dominant future technologies - even if technologies are offense-dominant, the original colonists of a solar system are likely to maintain substantial control over settled solar systems, because even if they tend to lose battles over those systems, antimatter or other highly destructive weapons can render the system useless to would-be conquerors. In general I expect interstellar conflict to look vaguely Cold War-esque in the worse cases, because the weapons are likely to be catastrophically powerful, hard to defend against (e.g. large bodies accelerated to significant fractions of lightspeed), and visible after launch, with time for retaliation (if slower than light).

Given the TIME article, I thought I should give you all an update. Even though I have major issues with the piece, I don’t plan to respond to it right now.

Since my last shortform post, I’ve done a bunch of thinking, updating and planning in light of the FTX collapse. I had hoped to be able to publish a first post with some thoughts and clarifications by now; I really want to get it out as soon as I can, but I won’t comment publicly on FTX at least until the independent investigation commissioned by EV is over. Unfortunately, I think that’s a minimum of 2 m... (read more)

1
Milan_Griffes
1y
When is the independent investigation expected to complete? 

Going to be honest and say that I think this is a perfectly sensible response and I would do the same in Will's position.

Thank you for sharing this. I think lots of us would be interested in hearing your take on that post, so it's useful to understand your (reasonable-sounding) rationale of waiting until the independent investigation is done.

Could you share the link to your last shortform post? (It seems like the words "last shortform post" are linking to the Time article again, which I'm assuming is a mistake.)

Thanks for asking! Still not entirely determined - I’ve been planning some time off over the winter, so I’ll revisit this in the new year.

I’ve been thinking hard about whether to publicly comment more on FTX in the near term. Much for the reasons Holden gives here, and for some of the reasons given here, I’ve decided against saying any more than I’ve already said for now.

I’m still in the process of understanding what happened,  and processing the new information that comes in every day. I'm also still working through my views on how I and the EA community could and should respond.

I know this might be dissatisfying, and I’m really sorry about that, but I think it’s the rig... (read more)

It's not the paramount concern and I doubt you'd want it to be, but I have thought several times that this might be pretty hard for you. I hope you (and all of the Future Fund team and, honestly, all of the FTX team) are personally well, with support from people who care about you.

Do you plan to comment in a few weeks, a few months, or not planning to comment publicly? Or is that still to be determined?

Hi Eli, thank you so much for writing this! I’m very overloaded at the moment, so I’m very sorry I’m not going to be able to engage fully with this. I just wanted to make the most important comment, though, which is a meta one: that I think this is an excellent example of constructive critical engagement — I’m glad that you’ve stated your disagreements so clearly, and I also appreciate that you reached out in advance to share a draft. 

4
Greg_Colbourn
1y
Hi Will, really hope you can find time to engage. I think the points discussed are pretty cruxy for overall EA strategy! A 3% chance of AI takeover, and a 33% chance of TAI, by 2100 seems like it would put you in contention for winning your own FTX AI Worldview Prize[1] arguing for a <7% chance of P(misalignment x-risk | AGI by 2070) (assuming ~2 points of the 9% [3%/33%] conditional risk fall in the 2070-2100 window).

1. ^ If you were eligible
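A minimal sketch of the conditional-probability arithmetic in this comment (the headline numbers are the commenter's; the split of the risk across time windows is their stated assumption):

```python
# Back-of-the-envelope version of the comment's arithmetic.
p_takeover_by_2100 = 0.03   # stated: 3% chance of AI takeover by 2100
p_tai_by_2100 = 0.33        # stated: 33% chance of transformative AI by 2100

# Risk conditional on TAI arriving by 2100
p_takeover_given_tai = p_takeover_by_2100 / p_tai_by_2100       # ~0.09 (9%)

# Commenter's assumption: ~2 percentage points of that fall in 2070-2100
risk_before_2070 = p_takeover_given_tai - 0.02                  # ~0.07 (7%)

print(f"P(takeover | TAI by 2100) ≈ {p_takeover_given_tai:.0%}")
print(f"Implied conditional risk before 2070 ≈ {risk_before_2070:.0%}")
# roughly 9% and 7% respectively, i.e. around the prize's <7% threshold
```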

Thanks Will!

My dad just sent me a video of the Yom Kippur sermon this year (relevant portion starting roughly here) at the congregation I grew up in. It was inspired by longtermism and specifically your writing on it, which is pretty cool. This updates me emotionally toward your broad strategy here, though I'm not sure how much I should update rationally.

Hi - thanks for writing this! A few things regarding your references to WWOTF:

> The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)

I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

> this still leaves open the qu

... (read more)

> The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)

> I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones... (read more)

8
david_reinstein
2y
I'm struggling to interpret this statement. What is the underlying sense in which pain and pleasure are measured in the same units and are thus 'equal even though the pain is morally weighted more highly'? Knutson states the problem well IMO.[1] Maybe you have some ideas and intuition into how to think about this?

1. Thanks MSJ for this reference.
I think such views have major problems, but I don’t talk about those problems in the book. (Briefly: If you think that any X outweighs any Y, then you seem forced to believe that any probability of X, no matter how tiny, outweighs any Y. So: you can either prevent a one in a trillion trillion trillion chance of someone with a suffering life coming into existence, or guarantee a trillion lives of bliss. The lexical view says you should do the former. This seems wrong, and I think doesn’t hold up under moral uncertainty, either. There are ways of avo
... (read more)
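To make the structure of that objection concrete, here is a minimal sketch (an illustration with made-up magnitudes; the lexical view is modelled crudely as an unboundedly large weight on suffering lives):

```python
# Crude model of the objection to lexical views under uncertainty.
import math

p_suffering_life = 1e-36           # one in a trillion trillion trillion chance
blissful_lives_guaranteed = 1e12   # a trillion lives of bliss

# Non-lexical view: any finite trade-off ratio, however extreme (made-up number)
suffering_weight = 1e6             # one suffering life counts a million times more

expected_disvalue_prevented = p_suffering_life * suffering_weight   # 1e-30
print(expected_disvalue_prevented < blissful_lives_guaranteed)      # True:
# with any finite weight, guaranteeing the blissful lives wins.

# Lexical view: in effect suffering_weight goes to infinity, so preventing the
# 1e-36 chance "wins" no matter how many blissful lives are forgone, which is
# the implication the comment above finds implausible.
print(math.inf * p_suffering_life > blissful_lives_guaranteed)      # True
```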
> The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

You seem to be using a different definition of the Asymmetry than Magnus is, and I'm not sure it's a much more common one. On Magnus's definition (which is also used by e.g. Chappell; Holtug, Nils (2004), "Person-affecting Moralities"; and ... (read more)

3
MichaelStJules
2y
Thank you for clarifying! This is true for utility/social welfare functions that are additive even over uncertainty (and maybe some other classes), but not in general. See this thread of mine. Is this related to lexical amplifications of nonlexical theories like CU under MEC? Or another approach to moral uncertainty? My impression from your co-authored book on moral uncertainty is that you endorse MEC with intertheoretic comparisons (I get the impression Ord endorses a parliamentary approach from his other work, but I don't know about Bykvist). 

It’s because we don’t get to control the price - that’s down to the publisher.

I’d love us to set up a non-profit publishing house or imprint that could mean that we would have control over the price.

It would be a very different book if the audience had been EAs. There would have been a lot more on prioritisation (see response to Berger thread above), a lot more numbers and back-of-the-envelope calculations, a lot more on AI, a lot more deep philosophical argument, and generally more of a willingness to engage in more speculative arguments. I’d have had more of the philosophy essay “In this chapter I argue that...” style, and I’d have put less effort into “bringing the ideas to life” via metaphors and case studies. Chapters 8 and 9, on population ethics a... (read more)

Yes, we got extensive advice on infohazards from experts in this and other areas, including from people who both have domain expertise and have thought a lot about how to communicate key ideas publicly given infohazard concerns. We were careful not to mention anything that isn’t already in the public discourse.

2
freedomandutility
2y
Good to know, thanks!

To be clear - these are a part of my non-EA life, not my EA life!  I’m not sure if something similar would be a good idea to have as part of EA events - either way, I don’t think I can advise on that!

Some sorts of critical commentary are well worth engaging with (e.g. Kieran Setiya’s review of WWOTF); in other cases, where criticism is clearly misrepresentative or strawmanning, I think it’s often best not to engage.

4
Devin Kalish
2y
In a sense I agree, but clearly to whom? If it is only clear to us, this might be too convenient an excuse for ignoring critics for a movement to allow itself, and at any rate, leaving criticism unaddressed will allow misconceptions to spread.

I think it’s a combination of multiplicative factors. Very, very roughly:

  • Prescribed medication and supplements: 2x improvement
  • Understanding my own mind and adapting my life around that (including meditation, CBT, etc): 1.5x improvement 
  • Work and personal life improvements (not stressed about getting an academic job, doing rewarding work, having great friends and a great relationship): 2x improvement 

To illustrate quantitatively (with normal weekly wellbeing on a +10 to -10 scale) with pretty made-up numbers, it feels like an average week used to b... (read more)
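A quick arithmetic sketch of how the factors listed above compound (an illustration only; the multipliers are the ones given in the comment, and the combination is assumed to be a simple product):

```python
# Compounding the rough multiplicative factors listed above.
factors = {
    "medication and supplements": 2.0,
    "understanding my own mind / adapting life": 1.5,
    "work and personal life improvements": 2.0,
}

overall = 1.0
for multiplier in factors.values():
    overall *= multiplier

print(f"Combined improvement ≈ {overall:g}x")   # 2 x 1.5 x 2 = 6x
```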

Huge question, which I’ll absolutely fail to do proper justice to in this reply! Very briefly, however:  

  • I think that AI itself (e.g. language models) will help a lot with AI safety.
  • In general, my perception of society is that it’s very risk-averse about new technologies, has very high safety standards, and governments are happy to slow down the introduction of new tech. 
  • I’m comparatively sceptical of ultra-fast takeoff scenarios, and of very near-term AGI (though I think both of these are possible, and that’s where much of the risk lies), w
... (read more)