I give the MWI a probability greater than 0.5 of being correct, but as far as I can tell, there isn't any way to generate more value out of it. There isn't any way to create more branches. You can only choose to be intentional and explicit about creating new identifiable branches, but that doesn't mean you've created more branches. The branching happens regardless of human action.
Someone with a better understanding of this please weigh in.
I believe Sam Harris is working on an NFT project for people who have taken the GWWC pledge, so that would be one example.
Academia seems like the highest leverage place one could focus on. Universities are to a large extent social status factories, and so aligning the status conferred by academic learning and research with EA objectives (for example, by creating an 'EA University') could be very high impact. Also relates to the point about 'institutions.'
"...cryptocurrencies makes stopping the funding of terrorists basically impossible."
No. Really, really, no. I could talk a lot more about this, but if you think terrorist groups can manage infosec well enough to overcome concerted attacks by the NSA, or Mossad, or FSB, etc., you're fooling yourself.
"Impossible" might be an exaggeration, but it does seem to make it much easier. That's also what the article you link to suggests. Edit: Are you skeptical because of the on/off ramps, the security of terrorists' computer infrastructure, or something else?
...
I feel like a number of these maybe could be fitted under a single very large organization. Namely:
Basically, a big EA research university with forecasting, policy research, and ML/AI safety departments.
I'd also add a non-profit and for-profit startup incubator. I think universities would be much better if they made it possible to try something entrepreneurial without having to fully drop out.
In my experience, EAs tend to be pretty dissatisfied with the higher education system, but I interpreted the muted/mixed response to my post on the topic as a sign that my experience might have been biased, or that despite the dissatisfaction, there wasn't any real hunger for change. Or maybe a sense that change was too intractable.
Though I might also have done a poor job at making the case.
My speculative, cynical, maybe unfair take is that most senior EAs are so enmeshed in the higher education system, and sunk so much time succeeding in it, that they're ...
The very quick summary: Japan was closed off from the rest of the world until 1853, when the US forced it to open up. This triggered major reforms: the Shogunate was overthrown, imperial rule was restored, and in less than a century Japan went from an essentially medieval economic and social structure to a modern industrial economy.
I don't know of any books exclusively focused on it, but it's analyzed in Why Nations Fail and Political Order and Political Decay.
I have argued for a more "mutiny" (edit: maybe "exit" is a better word for it) style theory of change in higher education, so I really like the idea of an EA university where learning would be guided by a genuine sense of purpose, curiosity, and ambition to improve the world rather than by a zero-sum competition for prestige and a need to check boxes in order to get a piece of paper. That said, I realize that many EAs probably don't share my antipathy towards the current higher education system.
...One downside of EA universities I can think of is that it might
Another example that comes to mind is Japan's Meiji Restoration. I don't think it fits neatly in any of the categories. It’s a combination of mutiny, steering and rowing. But just like the American revolution, I think it illustrates that very rapid and disruptive change in political and economic systems can be undertaken successfully.
The ability to maintain or improve steering and rowing seems to comprise two important preconditions for a successful mutiny.
The various revolutions that swept Eastern Europe and led to the end of the Soviet Union also seem...
I agree that the US revolution was unusual and in many ways more conservative than other revolutions.
I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.
I feel like this is really underselling what happened, though I guess it might be subjective. Sure, they didn't try to reinvent government, culture and the economy completely from scratch, but it was still the move from a monarchy to the first modern liberal constitutional republic.
If something dangerous occurs when driving, slamming on the brakes is often a pretty good heuristic, regardless of the specific nature of the danger.
What if you're being chased by a dragon?
...I think we can make a similar analogy for Anchoring, because some of the same reasons that make Steering more attractive now than in the past also apply to Anchoring. If there are an unusually large number of icebergs up ahead, or you are afraid the Mutineers will steer us towards them, or you are attempting to moor up alongside a larger vessel, reducing speed could b
I agree with this. I was just pushing back against the "somewhere between never-before-done and impossible" characterization. Mutiny definitely goes wrong more often than not, and just blindly smashing things without understanding how they work, and with no real plan for how to replace them is a recipe for disaster.
Certainly, but I still think that it counts as an example of a successful "mutiny." If overthrowing the government and starting a new country isn't mutiny, I don't know what is. And I don't think anyone sympathetic to the mutiny theory of change wants to restart from the state of nature and reinvent all of civilization completely from scratch.
I would suggest that the feasibility of managing it once you've smashed all the working pieces is somewhere between never-before-done and impossible.
How does the American Revolution fit into this? Wasn't the US basically created from scratch, and now is arguably the most successful country in the world?
Ideally, people would get the opportunity to get up to speed, "bridge the inferential gap" and get to start thinking about how to have an impact full time during their undergraduate studies. The way most university programs are set up right now, people spend years on often irrelevant content and wasteful busywork. I was thus pleased to see Ben Todd and Will MacAskill mention the idea of creating some kind of EA University during their EAG appearances.
See also my own “case for education.”
If the Everett interpretation is true, then all experiences are already amplified exponentially. Unless I'm missing something, a QC doesn't deserve any special consideration. It all adds up to normality.
You're right. The questions of moral realism and hedonistic utilitarianism do make me skeptical about QRI's research (as I currently understand it), but doing research starting from uncertain premises definitely can be worthwhile.
Thanks for the response. I guess I find the idea that there is such a thing as a platonic form of qualia or valence highly dubious.
A simple thought experiment: for any formal description of "negative valence," you could build an agent that acts to maximize this "negative valence" form yet, viewed from the outside, acts exactly like a human maximizing happiness (something like a "philosophical masochist"). It seems to me that it's impossible to define positive and negative valence independently of the environment the agent is embedded in.
Disclaimer: I'm not very familiar with either QRI's research or neuroscience, but in the spirit of Cunningham's Law:
QRI's research seems to be predicated on the idea that moral realism and hedonistic utilitarianism are true. I'm very skeptical of both, and I think QRI's time would be better spent working on the question of whether these starting assumptions are true in the first place.
Hi Samuel,
I’d say there’s at least some diversity of views on these topics within QRI. When I introduced STV in PQ, I very intentionally did not frame it as a moral hypothesis. If we’re doing research, best to keep the descriptive and the normative as separate as possible. If STV is true it may make certain normative frames easier to formulate, but STV itself is not a theory of morality or ethics.
One way to put this is that when I wear my philosopher’s hat, I’m most concerned about understanding what the ‘natural kinds’ (in Plato’s terms) of qualia are. If...
Thanks for writing this post. I'm glad this incident is getting addressed on the EA forum. I agree with most of the points being made here.
However, I'm not sure that 'becoming more attentive to various kinds of diversity' and maintaining norms that allow for 'the public discussion of ideas likely to cause offense' have to be at odds. In mainstream political discourse it often sounds like they are, but I would like to think that EA might be able to balance these two concerns without making any significant concessions.
The reason I think this might ...
I'm not sure what kinds of programmes or certifications you're thinking of, but as far as I know, if someone wants to learn maths, physics, economics, ML... or just almost any kind of academic subject, universities are literally the only option. There is no middle path between learning things completely on your own and jumping through all the higher education hoops. And even if a person manages to learn some subject up to a graduate level, there is no way to get it recognized.
The reason I posted this here is because I think there is an interesting contradi
...Here are some resources about the case against the current education system for those not familiar with the arguments:
The Case Against Education (LW)
The Case For Dropping Out of College (my best summary of the arguments)
Thanks for the post. I agree that the promotion of democratic institutions as an EA cause area is worth a closer look. I think you might find this EA Forum post by Ben Kuhn interesting: “‘Why Nations Fail’ and the long-termist view of global poverty.”
Though I'm skeptical. A lot of the benefits of democracy require liberal democracy. For example, both Iran and Russia are technically democracies, yet neither seems like a force for domestic welfare or international peace. In The Great Delusion, John Mearsheimer also casts some doubt on the democratic peace theory.
...Yeah, it does sound like he might be open to fund EA causes at some point in the future.
I do think, though, that it is still a good criticism. There is a risk that people who would otherwise pursue weird, idiosyncratic, yet impactful projects are discouraged because such projects can be hard to justify within a simple EA framework. One potential downside risk of 80k's work, for example, is that some people end up being less impactful because they choose the "safe" EA path rather than a more unusual, risky, and, from the EA community's perspective, low-status path.
Related: https://forum.effectivealtruism.org/posts/rrkEWw8gg6jPS7Dw3/the-home-base-of-ea
"But EA orgs can't be inclusive, so we should have a separate social space for EAs that is inclusive. Working at an EA org shouldn't be the only option for one's sanity."
On a meta level, it would be nice if there was some more general advice on this. Even though EA outreach to authoritarian countries is generally viewed as a bad idea (see here), we cannot help that at least some people in these countries will learn about EA and will want to communicate and contribute in some way.
Far be it from me to suggest that anyone do anything dangerous. But EA in itself doesn't seem particularly controversial or dangerous to talk about. In fact, EA Dubai has a public Facebook group, and UAE laws aren't that different from Saudi laws. https://www.facebook.com/groups/1728710337347189/
And here is an EA Middle East group: https://www.facebook.com/groups/1076904819058029/
I imagine that there should be ways to minimize the risks by remaining anonymous, using a VPN and avoiding any "controversial" topics, though I agree that one should be extremely careful with this.
I just want to point out that this seems very, very difficult to me, and I would not recommend counting on staying safe this way unless you really have no other choice.
I know of multiple very smart people who tried to stay anonymous, got caught, and suffered bad consequences. (For instance, see any of the many books about "top hackers.")
Maybe try to create an EA Saudi Arabia social media group and see if you can find people that way. That would allow you to find others who are already interested in EA while staying anonymous.
One potential high impact activity might be to research how best to spread EA ideas in the Muslim world, and how EA should interact with the faith.
Though the hotel isn't trying to have a big public presence so a boring name like CEEALAR might be just right.
It's difficult. You'd probably need a model of every country since state capacity, health care, information access... can vary widely.
If the death rate is really that high, then we should significantly update P(it goes world-scale pandemic) and P(a particular person gets it | it goes world-scale pandemic) downwards, since such a visible danger would push governments and individuals to put a lot of resources towards prevention.
One can also imagine that P(a particular person dies from it | a particular person gets it) will go down over time as resources are spent on finding better treatments and a cure.
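The reasoning above can be sketched as a chain of conditional probabilities: a person's overall risk is the product of the three factors, so prevention responses that shrink the first two terms shrink the total risk multiplicatively even if the death rate stays fixed. A minimal illustration (all numbers are made up for the sake of the example, not real estimates):

```python
# Hypothetical inputs -- purely illustrative, not actual forecasts.
p_pandemic = 0.10   # P(it goes world-scale pandemic)
p_infected = 0.30   # P(a particular person gets it | pandemic)
p_death = 0.02      # P(a particular person dies | gets it)

# Overall personal risk is the product of the three terms.
baseline_risk = p_pandemic * p_infected * p_death
print(f"baseline personal risk: {baseline_risk:.6f}")  # 0.000600

# A visibly high death rate triggers prevention efforts, which
# (in this toy model) halve the first two factors:
updated_risk = (p_pandemic * 0.5) * (p_infected * 0.5) * p_death
print(f"risk after prevention response: {updated_risk:.6f}")  # 0.000150
```

Because the factors multiply, halving each of the first two terms quarters the overall risk, which is why updating on the expected prevention response matters so much here.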
The idea that people always act selfishly is probably a bit extreme. But there's something very important pointed out in this post: considering selfish incentives is extremely important when thinking about how EA can become more sustainable and grow.
Just a few selfish incentives that I see operating within EA: the forum cash prize, reputation gains within the EA community for donating to effective charities, reputation gains for working for an EA org, being part of a community...
The point here is not that these are bad, but that we should acknowledge...
Or you could say that EA is an ideology that has tolerance, open-mindedness and skepticism as some of its highest values. Saying that EA is an ideology doesn't necessarily mean that it shares the same flaws as most other ideologies.
I agree that in practice, EA does have an ideology. The majority of EAs share the assumptions of rationality, scientific materialism, utilitarianism and some form of techno-optimism. This explains why the three cause areas you mention aren't taken seriously by EA. And so if one wants to defend the current focus of most EAs, one also has to defend the assumptions - the ideology - that most EAs have.
However, in principle, EA does not prevent one from adopting the proposed cause areas. If I became convinced that the most effective way to do good was to...
My reading of this post is that it attempts to gesture at the valley of bad rationality.