D0TheMath

1031 karma · Joined Jan 2019 · College Park, MD 20742, USA

Bio

An undergrad at the University of Maryland, College Park, majoring in math.

After finishing The Sequences at the end of 9th grade, I started following the EA community and changed my career plans toward AI alignment. If anyone would like to work with me on this, PM me!

I’m currently starting the EA group at the University of Maryland, College Park.

Also see my LessWrong profile

Sequences (1)

Effective Altruism Forum Podcast

Comments (161)

There's much thought in finance about this. Some general books are:

  1. Options, Futures, and Other Derivatives

  2. Principles of Corporate Finance

And, more particularly, The Black Swan: The Impact of the Highly Improbable, along with Taleb's other work (this is kind of his whole thing).
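To make Taleb's point concrete, here is a minimal sketch in Python. The distributions and the cutoff are my own illustrative choices, not anything from the books: it just shows how much more probability a fat-tailed distribution assigns to extreme events than a thin-tailed normal one.

```python
# Illustrative sketch of the fat-tails point: the specific distributions
# (a standard normal vs. a Student-t with 3 degrees of freedom) and the
# threshold of 5 are assumptions chosen for the example, not from Taleb.
from scipy.stats import norm, t

threshold = 5  # an "extreme" event, five units from the mean

# Survival function = probability of exceeding the threshold.
p_normal = norm.sf(threshold)
p_fat = t.sf(threshold, df=3)

print(f"P(X > {threshold}) under a normal:    {p_normal:.2e}")
print(f"P(X > {threshold}) under a Student-t: {p_fat:.2e}")
print(f"The fat-tailed model rates the event ~{p_fat / p_normal:,.0f}x more likely.")
```

Under these assumptions the fat-tailed model rates the same extreme event tens of thousands of times more likely, which is roughly why models that assume thin tails keep getting blindsided.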

The same standards I'd apply to anything else: a decent track record of such experiments succeeding, and/or a well-supported argument based on (in this case) sound economics.

So far the track record is heavily against it. Indeed, many of the worst calamities in history took the form of "revolution".

Lacking that track record, you need one hell of an argument for why your plan is better, which at minimum likely means basing it on sound economics (which, if you want particular pointers, mostly means the Chicago school, though sufficiently good complexity economics would also be fine).

It makes me sad that I automatically double timing estimates from EA orgs, treat that as the absolute minimum time something could take, and am often still disappointed.

I definitely strongly agree with this. I do think it's slowly, ever so slowly, getting better though.

More broadly I think Anthropic, like many, hasn’t come to final views on these topics and is working on developing views, probably with more information and talent than most alternatives by virtue of being a well-funded company.

It would be remiss not to also mention the large conflict of interest that analysts at Anthropic have when developing these views.

I do dislike this feature of EA, but I don't think the solution is to transition away from a one-grant-at-a-time model. A better approach would probably be exit coaches who help EAs find a new career outside EA, for those who built up a bunch of skills because funders or other generally EA-endorsed sources told them they'd be given money for using those skills for the benefit of the universe.

What talents do you think aren't applicable outside the EAsphere?

(Edit: I also note that I believe 80k should be taken a lot less seriously than they present themselves, and than most EAs take them. Their incorrect claims that EA is talent-constrained are one of many reasons I distrust them.)

Recommendation: retitle this "A collection of paradoxes dealing with utilitarianism." That seems to be what you actually wrote, and it would have had me come to the post thinking "ooo! Fun philosophy discussion" rather than "well, that's a very strong claim… oh, look at that, all the so-called inconsistencies and irrationalities either deal with weird infinite-ethics stuff or are things I can't understand. Time to be annoyed about how the headline is poorly argued for." The latter experience is not useful or fun; the former is nice, depending on the day & company.

My understanding of history says that letting militaries have such power, or initiating violent overthrow by any other means to launch an internal rebellion, usually leads to bad results. Examples include the French, Russian, and English revolutions. Counterexamples possibly include the American Revolution, though notably I struggle to point to anything concrete that would have been different about the world had America had a peaceful break-off, like Canada later did.

Do you know of counterexamples, maybe poor developing nations that became rich developed nations after a rebellion?

I think Habryka has mentioned that Lightcone could withstand a defamation suit, so there’s not a high chance of financially ruining him. I am tentatively in agreement otherwise though.

This seems false. Dramatic increases in life-extension technology have been happening ever since the invention of modern medicine, so it's strange to call the field too speculative to even consider.

I agree with your conclusion but disagree with your reasoning. I think it's perfectly fine, and should be encouraged, to make advances in conceptual clarification that confuse people. Clarifying concepts can often leave people confused about things they weren't confused about before, and this often indicates progress.
