All of SamuelKnoche's Comments + Replies

I give the MWI a probability of greater than 0.5 of being correct, but as far as I can tell, there isn't any way to generate more value out of it. There isn't any way to create more branches. You can only choose to be intentional and explicit about creating new identifiable branches, but that doesn't mean you've created more branches. The branching happens regardless of human action.

Someone with a better understanding of this please weigh in.

I believe Sam Harris is working on an NFT project for people who have taken the GWWC pledge, so that would be one example.

Academia seems like the highest leverage place one could focus on. Universities are to a large extent social status factories, and so aligning the status conferred by academic learning and research with EA objectives (for example, by creating an 'EA University') could be very high impact. Also relates to the point about 'institutions.'

"...cryptocurrencies makes stopping the funding of terrorists basically impossible."

No. Really, really, no. I could talk a lot more about this, but if you think terrorist groups can manage infosec well enough to overcome concerted attacks by the NSA, or Mossad, or FSB, etc., you're fooling yourself.

"Impossible" might be an exaggeration, but it does seem to make it much easier. That's also what the article you link to suggests. Edit: Are you skeptical because of the on/off ramps, the security of terrorist's computer infrastructure or something else?

... (read more)
4
Davidmanheim
2y
It's not harder to stop, it's easier - blockchain is far easier to trace than cash, which is what is used now. And I'm skeptical for both of those reasons. NatSec people already have large groups working on this, and my understanding is that even the best BTC wallets and mixers aren't actually much help. And despite limited political will, OFAC has added people and addresses, and as far as I'm aware, no one has successfully moved money out of any of the (very few) OFAC-blocked addresses. Maybe that's not pools enforcing the rules, and obviously blocking individual addresses is incredibly naïve as a strategy, but I will note that pools and other key players have every reason not to want to piss off the US Government.

I feel like a number of these maybe could be fitted under a single very large organization. Namely:

  • Max-Planck Society (MPG) for EA research
  • EA university
  • Forecasting Organization
  • EA forecasting tournament
  • ML labs
  • Large think tank

Basically, a big EA research University with a forecasting, policy research and ML/AI safety department.

I'd also add a non-profit and for-profit startup incubator. I think universities would be much better if they made it possible to try something entrepreneurial without having to fully drop out.

In my experience, EAs tend to be pretty dissatisfied with the higher education system, but I interpreted the muted/mixed response to my post on the topic as a sign that my experience might have been biased, or that despite the dissatisfaction, there wasn't any real hunger for change. Or maybe a sense that change was too intractable.

Though I might also have done a poor job at making the case.

My speculative, cynical, maybe unfair take is that most senior EAs are so enmeshed in the higher education system, and sunk so much time succeeding in it, that they're ... (read more)

The very quick summary: Japan used to be closed off from the rest of the world, until 1853, when the US forced it to open up. This triggered major reforms. The Shogun was overthrown and replaced with the emperor, and in less than a century, Japan went from an essentially medieval economic and societal structure to a modern industrial economy.

I don't know of any books exclusively focused on it, but it's analyzed in Why Nations Fail and Political Order and Political Decay.

4
Charles He
2y
This is a good summary. I guess I have heard about this before, because I read a bit about the Qing dynasty and the Sino-Japanese wars. (Note that I haven't read these books, and your comment updates me toward reading them.) Acemoglu and Fukuyama are brilliant, but speaking in the abstract, I am skeptical of drawing too much from Big Idea books. They tend to select facts and line them up to fit their narrative, which doesn't tend to yield robust models and insights if we want to do something else with the underlying history. Instead, it seems ideal to consume several books from several established scholars specialized in Japan and the Meiji Restoration. I will try to search Amazon/Goodreads and maybe report back.

I have argued for a more "mutiny" (edit: maybe "exit" is a better word for it) style theory of change in higher education so I really like the idea of an EA university where learning would be more guided by a genuine sense of purpose, curiosity and ambition to improve the world rather than a zero-sum competition for prestige and a need to check boxes in order to get a piece of paper. Though I realize that many EAs probably don't share my antipathy towards the current higher education system.

One downside of EA universities I can think of is that it might

... (read more)
5
ElliotJDavies
2y
Anecdotally, most EAs I have spoken to about this topic have tended to agree 

Another example that comes to mind is Japan's Meiji Restoration. I don't think it fits neatly in any of the categories. It’s a combination of mutiny, steering and rowing. But just like the American revolution, I think it illustrates that very rapid and disruptive change in political and economic systems can be undertaken successfully.

The ability to maintain, or improve steering and/or rowing seem to be two important preconditions for a successful mutiny.

Also, the various revolutions that swept Eastern Europe and led to the end of the Soviet Union also seem... (read more)

3
Davidmanheim
2y
I've claimed before that the critical enabler for eucatastrophe is having a clear and implementable vision of where you are heading - and that's exactly what is missing in most mutinies. To offer an analogy in the form of misquoting Russian literature: "all [functioning governments] are the same, but each [dysfunctional new attempt at government] is [dysfunctional] in its own way."
1
Charles He
2y
This seems like an important and interesting example that advances your point.  I don't know anything about it. Do you (or anyone else) know a good book or author on the subject?

Thanks for clarifying. I did somewhat misinterpret the intention of your comment.

I agree that the US revolution was unusual and in many ways more conservative than other revolutions.

I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.

I feel like this is really underselling what happened, though I guess it might be subjective. Sure, they didn't try to reinvent government, culture and the economy completely from scratch, but it was still the move from a monarchy to the first modern liberal constitutional republic.

If something dangerous occurs when driving, slamming on the brakes is often a pretty good heuristic, regardless of the specific nature of the danger.

What if you're being chased by a dragon?

I think we can make a similar analogy for Anchoring, because some of the same reasons that make Steering more attractive now than in the past also apply to Anchoring. If there are an unusually large number of icebergs up ahead, or you are afraid the Mutineers will steer us towards them, or you are attempting to moor up alongside a larger vessel, reducing speed could b

... (read more)
8
Larks
2y
Sorry, I think I must have been unclear. I didn't mean to conclude that Anchoring was definitively the best strategy for us to adopt, merely that some of the pro tanto reasons Holden mentioned in favour of Steering also seemed like they should apply to Anchoring.  As you mention, opposed against this are arguments like Aschenbrennerism, that the world would actually be safer if we went faster. And obviously many Anchoring arguments are quite problematic - e.g. an extreme version of stare decisis whereby rules cannot be changed, even gradually, if they are agreed to be wrong.

I agree with this. I was just pushing back against the "somewhere between never-before-done and impossible" characterization. Mutiny definitely goes wrong more often than not, and just blindly smashing things without understanding how they work, and with no real plan for how to replace them is a recipe for disaster.

Certainly, but I still think that it counts as an example of a successful "mutiny." If overthrowing the government and starting a new country isn't mutiny, I don't know what is. And I don't think anyone sympathetic to the mutiny theory of change wants to restart from the state of nature and reinvent all of civilization completely from scratch.

6
Larks
2y
The US revolution is very often considered to be an unusually conservative revolution - even the arch-conservative Burke contemporaneously admired it in many ways. It was much less disruptive than revolutions like in France, Russia or China, which attempted to radically re-order their governments, economies and societies. In a sense I guess you could think of the US revolution as being a bit like a mutiny that then kept largely the same course as the previous captain anyway.

I would suggest that the feasibility of managing it once you've smashed all the working pieces is somewhere between never-before-done and impossible.

How does the American Revolution fit into this? Wasn't the US basically created from scratch, and now is arguably the most successful country in the world?

8
Davidmanheim
2y
For reasons others have pointed out, the American revolution is weaker evidence, but I certainly agree it's at least marginal evidence against my point - or at least, evidence that smaller revolutions are less likely to fail than bigger ones. And as others explained in far more detail, the Americans smashed very little in terms of what made their system work, and invented very little - they just wanted to do things which had been done before, many of which they were already doing to some extent, independently.
9
Charles He
2y
David is probably thinking more about the French Revolution, or the Great Leap Forward. It is difficult to answer this without getting into detail on these issues:

The French Revolution was initially driven by moderate reformers, but spiraled into dysfunction because no revolutionary institution could provide stability. Once all of the original institutions were deleted ("smashed all the working pieces," as David said), leadership fell to power-seeking fanatics with crazy epistemics. There was also fear of both internal and outside forces (the Vendée, the First Coalition) that constantly disrupted governance and fed extreme elements.

The Great Leap Forward was driven by a central leadership with a sort of magical thinking: they had contempt for normal material limits and conventional wisdom, and were certain that productivity would be massively unlocked by smashing landholdings and moving people into communes where they would work together ("smashed all the working pieces"). It looks grotesque now, but China's very low industrial capital and the impressive success of the first five-year plan make the judgement look better. It is also worth comparing Mao's epistemics with beliefs in today's tech boom ("move fast and break things," techno-optimism) that seem to have another explanation in regulatory capture and loose capital markets.

In both situations above, the leaders were obsessed with the systems they opposed. They were certain that if you smashed everything, things would be fixed, but they weren't literate in the nuances of how politics or industry functions. All the leaders were brought to heel at huge human cost, and the mundane, conventional processes they hated were essential in restoring order.

The "American Revolution" was elite-led; they were literally Harvard-, Yale-, Columbia- and Princeton-educated dudes. Their motivation was almost literally not wanting to pay taxes, and there is a credible subtext that the colonials were motivated by British colonial restrictions again
4
Larks
2y
The US inherited a lot of things from before, including its common law legal system; people literally debate the relevance of the 1328 Statute of Northampton in contemporary US court cases. 

Ideally, people would get the opportunity to get up to speed, "bridge the inferential gap" and get to start thinking about how to have an impact full time during their undergraduate studies. The way most university programs are set up right now, people spend years on often irrelevant content and wasteful busywork. I was thus pleased to see Ben Todd and Will MacAskill mention the idea of creating some kind of EA University during their EAG appearances.

See also my own “case for education.”

If the Everett interpretation is true, then all experiences are already amplified exponentially. Unless I'm missing something, a QC doesn't deserve any special consideration. It all adds up to normality.

2
EricBlair
2y
You don't seem to be missing anything: if Everett is true, then this is a whole different issue (future post), and QCs become worth as much as measuring n superpositions (creating n many worlds) and then running a classical simulation. As for decision theory, there are good papers explaining why you do want your decision theory to be normalised IF you don't want many worlds to break your life: David Deutsch (https://arxiv.org/ftp/quant-ph/papers/9906/9906015.pdf) and Hilary Greaves (https://arxiv.org/abs/quant-ph/0312136, much more approachable).

You're right. The questions of moral realism and hedonistic utilitarianism do make me skeptical about QRI's research (as I currently understand it), but doing research starting from uncertain premises definitely can be worthwhile.

Thanks for the response. I guess I find the idea that there is such a thing as a platonic form of qualia or valence highly dubious.

A simple thought experiment: for any formal description of "negative valence," you could build an agent that acts to maximize this "negative valence" form and still acts exactly like a human maximizing happiness when looking from the outside (something like a "philosophical masochist"). It seems to me that it's impossible to define positive and negative valence independently from the environment the agent is embedded in.

5
MikeJohnson
3y
Hi Samuel, I think it’s a good thought experiment. One prediction I’ve made is that one could make an agent such as that, but it would be deeply computationally suboptimal: it would be a system that maximizes disharmony/dissonance internally, but seeks out consonant patterns externally. Possible to make but definitely an AI-complete problem. Just as an idle question, what do you suppose the natural kinds of phenomenology are? I think this can be a generative place to think about qualia in general.

Disclaimer: I'm not very familiar with either QRI's research or neuroscience, but in the spirit of Cunningham's Law:

QRI's research seems to be predicated on the idea that moral realism and hedonistic utilitarianism are true. I'm very skeptical about both, and I think QRI's time would be better spent working on the question of whether these starting assumptions are true in the first place.

5
Linch
3y
I disagree that QRI's comparative advantage, such as it is, is figuring out the correctness of moral realism or hedonistic utilitarianism. "Your philosophers were so preoccupied with whether or not they should, they didn't even stop to think if they could."

Hi Samuel,

I’d say there’s at least some diversity of views on these topics within QRI. When I introduced STV in PQ, I very intentionally did not frame it as a moral hypothesis. If we’re doing research, best to keep the descriptive and the normative as separate as possible. If STV is true it may make certain normative frames easier to formulate, but STV itself is not a theory of morality or ethics.

One way to put this is that when I wear my philosopher’s hat, I’m most concerned about understanding what the ‘natural kinds’ (in Plato’s terms) of qualia are. If... (read more)

Thanks for writing this post. I'm glad this incident is getting addressed on the EA forum. I agree with most of the points being made here.

However, I'm not sure 'becoming more attentive to various kinds of diversity' and maintaining norms that allow for 'the public discussion of ideas likely to cause offense' have to be at odds. In mainstream political discourse it often sounds like they are, but I would like to think that EA might be able to balance these two concerns without making any significant concessions.

The reason I think this might ... (read more)

4
Aaron Gertler
4y
I think this is true, but even if EA discussion might be more productive, I still think trade-offs exist in this domain. Given that the dominant culture in many intellectual spaces holds that public discussion of certain views is likely to cause harm to people, EA groups risk appearing very unwelcoming to people in those spaces if they support discussion of such views.  It may be worthwhile to have these discussions anyway, given all the benefits that come with more open discourse, but the signal will be sent all the same.

I'm not sure what kinds of programmes or certifications you're thinking of, but as far as I know, if someone wants to learn maths, physics, economics, ML... or just almost any kind of academic subject, universities are literally the only option. There is no middle path between learning things completely on your own and jumping through all the higher education hoops. And even if a person manages to learn some subject up to a graduate level, there is no way to get it recognized.

The reason I posted this here is because I think there is an interesting contradi

... (read more)

Here are some resources about the case against the current education system for those not familiar with the arguments:

Bryan Caplan on 80k

The Case Against Education (LW)

The Case For Dropping Out of College (my best summary of the arguments)

Thanks for the post. I agree that the promotion of democratic institutions as an EA cause area is worth a closer look. I think you might find this EA Forum post by Ben Kuhn interesting: "'Why Nations Fail' and the long-termist view of global poverty."

Though I'm skeptical. A lot of the benefits of democracy require liberal democracy. For example, both Iran and Russia are technically democracies, yet neither seems like a force for domestic welfare or international peace. In The Great Delusion, John Mearsheimer also casts some doubt on the democratic peace

... (read more)
9
bryanschonfeld
4y
Thank you so much for these thoughtful comments! A few responses:

1. While there are of course differences of opinion on this issue outside of the research community, the social science research literature universally considers Russia and Iran to be non-democratic (see, for example, the Polity IV project or the recent Acemoglu et al. 2019 democracy dataset). These regimes might be considered "competitive authoritarian" regimes (see Way and Levitsky) or hybrid regimes/"anocracies"; the benefits of democracy stated in the article do not apply to these states. While liberal democracy is likely preferable for outcomes like democratic peace, other outcomes like higher spending on public goods are linked primarily to electoral democracy, rather than to liberal norms.

2. In terms of the democratic peace: it's true that not all scholars agree with the consensus about the democratic peace, though Mearsheimer is definitely an outlier in believing power politics to be the only thing that matters (i.e., he thinks Europe isn't at war because of US troops in Germany). Scholars who have traditionally emphasized power politics (like Robert Jervis) acknowledge that the current situation, in which powerful countries in Europe/Japan/South Korea don't even contemplate war against one another, is unique historically and likely linked to democratic norms.

3. I agree on China. I think pro-democracy aid can be effective when a country is in transition, like a mixed regime (and when a country actually needs aid enough to be influenced). Conditional aid can boost the fragile institutions of new democracies and make democratic consolidation more likely (a very important long-term outcome).

Yeah, it does sound like he might be open to fund EA causes at some point in the future.

I do think, though, that it is still a good criticism. There is a risk that people who would otherwise pursue some weird, idiosyncratic, yet impactful projects might be discouraged by the fact that such projects can be hard to justify from a simple EA framework. One potential downside risk of 80k's work, for example, is that some people might end up being less impactful because they choose the "safe" EA path rather than a more unusual, risky, and, from the EA community's perspective, low-status path.

1
Nathan Young
4y
Let's model it. Currently it seems a very vague risk. If it's a significant risk, it seems worth considering in a way that we could find out if we were wrong. I'd also say things like: * EAs do a lot of projects, many of which are outlandish or not obviously impactful, how does this compare to the counterfactual?

Related: https://forum.effectivealtruism.org/posts/rrkEWw8gg6jPS7Dw3/the-home-base-of-ea

"But EA orgs can't be inclusive, so we should have a separate social space for EA's that is inclusive. Working at an EA org shouldn't be the only option for one's sanity."

On a meta level, it would be nice if there was some more general advice on this. Even though EA outreach to authoritarian countries is generally viewed as a bad idea (see here), we cannot help that at least some people in these countries will learn about EA and will want to communicate and contribute in some way.

Far be it from me to suggest that anyone do anything dangerous. But EA in itself doesn't seem really controversial or dangerous to talk about. In fact, EA Dubai has a public Facebook group, and UAE laws aren't that different from Saudi laws. https://www.facebook.com/groups/1728710337347189/

And here is an EA Middle East group: https://www.facebook.com/groups/1076904819058029/


I imagine that there should be ways to minimize the risks by remaining anonymous, using a VPN and avoiding any "controversial" topics, though I agree that one should be extremely careful with this.

I just want to point out that this seems very, very difficult to me, and I would not recommend trusting "being safe" unless you really have no other choice.

I know of multiple very smart people who have tried to stay anonymous, got caught, and bad things happened. (For instance, read many books on "top hackers")

Maybe try to create an EA Saudi Arabia social media group and see if you can find people that way. That would allow you to find others who are already interested in EA while staying anonymous.

One potential high impact activity might be to research how best to spread EA ideas in the Muslim world, and how EA should interact with the faith.

9
Milan_Griffes
4y
FYI it looks like the Saudi government probably monitors Facebook & Twitter:

Though the hotel isn't trying to have a big public presence, so a boring name like CEEALAR might be just right.

It's difficult. You'd probably need a model of every country since state capacity, health care, information access... can vary widely.

If the death rate is really that high, then we should significantly update P(it goes world-scale pandemic) and P(a particular person gets it | it goes world-scale pandemic) downwards, as it would cause governments and individuals to put a lot of resources towards prevention.

One can also imagine that P(a particular person dies from it | a particular person gets it) will go down with time as resources are spent on finding better treatment and a cure.

1
JustinShovelain
4y
Good points! I agree, but I'm not sure how significant those effects will be. Do you have an idea of how we'd update based on those effects in a principled, precise way?
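One minimal way to make that update concrete is to decompose the risk with the chain rule and apply discount factors for the expected prevention and treatment response. This is only a sketch of the idea, and every number below is hypothetical, chosen purely for illustration:

```python
# Hypothetical sketch: decompose P(a particular person dies from it)
# via the chain rule, then discount each factor for the expected
# response to a high death rate. All numbers are made up.

def p_death(p_pandemic, p_infect_given_pandemic, p_die_given_infect):
    """P(death) = P(pandemic) * P(infected | pandemic) * P(dies | infected)."""
    return p_pandemic * p_infect_given_pandemic * p_die_given_infect

# Naive estimate that ignores any reaction to the death rate.
naive = p_death(0.30, 0.40, 0.03)

# A very deadly disease triggers a response: containment efforts lower
# the first two factors, and treatment research lowers the third.
adjusted = p_death(0.30 * 0.5, 0.40 * 0.7, 0.03 * 0.8)

print(f"naive:    {naive:.4f}")     # naive:    0.0036
print(f"adjusted: {adjusted:.4f}")  # adjusted: 0.0010

assert adjusted < naive
```

The hard, unprincipled part is of course choosing the discount factors (0.5, 0.7, 0.8 here), which would have to come from evidence about how strongly governments and individuals actually respond; the decomposition just makes explicit which factor each effect acts on.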

The idea that people always act selfishly is probably a bit extreme. But there's something very important pointed out in this post: considering selfish incentives is extremely important when thinking about how EA can become more sustainable and grow.

Just a few selfish incentives that I see operating within EA: the forum cash prize, reputation gains within the EA community for donating to effective charities, reputation gains for working for an EA org, being part of a community...

The point here is not that these are bad, but that we should acknowledge... (read more)

2
Gavin
4y
Sure, I agree that most people's actions have a streak of self-interest, and that posterity could serve as this even in cases of sacrificing your life. I took OP to be making a stronger claim, that it is simply wrong to say that "people have altruistic values" as well. There's just something up with saying that these altruistic actions are caused by selfish/social incentives, where the strongest such incentive is ostracism or the death penalty for doing it.
1
[anonymous]
4y
While I think that was a valuable post, the definition of ideology in it is so broad that even things like science and the study of climate change would be ideologies (as kbog points out in the comments). I'm not sure what system or way of thinking wouldn't qualify as an ideology based on the definition used.

Or you could say that EA is an ideology that has tolerance, open-mindedness and skepticism as some of its highest values. Saying that EA is an ideology doesn't necessarily mean that it shares the same flaws as most other ideologies.

I agree that in practice, EA does have an ideology. The majority of EAs share the assumptions of rationality, scientific materialism, utilitarianism and some form of techno-optimism. This explains why the three cause areas you mention aren't taken seriously by EA. And so if one wants to defend the current focus of most EAs, one also has to defend the assumptions - the ideology - that most EAs have.

However, in principle, EA does not prevent one from adopting the proposed cause areas. If I became convinced that the most effective way to do good was to... (read more)

Hotel Rwanda is pretty good.