See here: https://80000hours.org/podcast/episodes/david-wallace-many-worlds-theory-of-quantum-mechanics/
Basically, you can treat the fraction of worlds as equivalent to probability, so there is little apparent need to change anything if MWI turns out to be true.
Employers, suppliers, etc. should be safe. Although the underlying law is complex, at a high level a clawback is possible when (in Wikipedia's description of "constructive fraud") the transfer "took place for less than reasonably equivalent value at a time when the debtor was in a distressed financial condition." If I sell my labor (or widgets) to Charity X and receive a fair market wage or price in return, then the transfer took place for reasonably equivalent value and all creditors can generally pound sand.
It can get more complex, though. Let's say I am ...
I meant broad sense existential risk, not just extinction. The first graph is supposed to represent a specific worldview where the relevant form of existential risk is extinction, and extinction is reasonably likely. In particular I had Eliezer Yudkowsky's views about AI in mind. (But I decided to draw a graph with the transition around 50% rather than his 99% or so, because I thought it would be clearer.) One could certainly draw many more graphs, or change the descriptions of the existing graphs, without representing everyone's thoughts on the function m...
Thanks for this post. I wonder if it would be good to somehow target different subcultures outside of EA with messaging corresponding to their nearest-neighbor EA subculture. To some extent I guess this already happens, but maybe there is an advantage to explicitly thinking about it in these terms.
What category would you put ideas like the unilateralist's curse or Bostrom's vulnerable world hypothesis? They seem like philosophical theories to me, but not really moral theories (and I think they attract a disproportionate amount of criticism).
I don't share this view, and I agree that it is weird. But maybe the feeling behind it is something like: if I, personally, were in extreme poverty I would want people to prioritize getting me material help over mental health help. I imagine I would be kind of baffled and annoyed if some charity was giving me CBT books instead of food or malaria nets.
That's just a feeling though, and it doesn't rigorously answer any real cause prioritization question.
MacAskill (who I believe coined the term?) does not think that the present is the hinge of history. I think the majority view among self-described longtermists is that it is. But the term unites everyone who cares about things that are expected to have large effects on the long-run future (including but not limited to existential risk).
I think the term's agnosticism about whether we live at the hinge of history and whether existential risk in the next few decades is high is a big reason for its popularity.
Some loose data on this:
Of the ~900 people who responded to my Twitter poll about whether we live in the most important century, about 1/3 said "yes," about 1/3 said "no," and about 1/3 said "maybe."
The original EA materials (at least the ones that I first encountered in 2015 when I was getting into EA) promoted evidence-based charity, that is, making donations to causes with very solid evidence. But the formal definition of EA is equally or more consistent with hits-based charity: making donations with limited or equivocal evidence but large potential upside, with the expectation that you will eventually hit the jackpot.
I think the failure to separate and explain the difference between these things leads to a lot of understandable confusion and anger.
A question about this--do you work at the University of Canterbury now, or will you be supervising these students remotely?