D0TheMath

569 · College Park, MD 20742, USA · Joined Jan 2019

Bio

An undergrad at the University of Maryland, College Park. Majoring in math.

After finishing The Sequences at the end of 9th grade, I started following the EA community and changed my career plans to AI alignment. If anyone would like to work with me on this, PM me!

I’m currently starting the EA group for the University of Maryland, College Park.

Also see my LessWrong profile

Sequences (1)

Effective Altruism Forum Podcast

Comments (88)

I am skeptical that people who are so subject to framing effects that the strategies used in this post are required to convince them of ideas are the kinds of people you should be introducing to your EA group.

The reason EA can do as much good as it can is its high level of epistemic rigor. If you pull in people with a lower-than-EA-average level of epistemic rigor, you lower the ability of EA as a whole to do good. That trade-off may be worthwhile if we're very people-constrained and can't find anyone with an EA-average or greater level of epistemic rigor.

However, though EA is people-constrained, it is also very small, so I very much doubt you can't find anyone with more epistemic rigor than these insanely-framing-effect-susceptible folks. I encourage you, and all other group organizers, to take such people's inability to be convinced by arguments that don't pattern-match to arguments-from-my-tribe as a deep blessing!

It’s far easier to see the irrationalities and possible exploits in other people’s work than in your own; rationalizing an existing world probably takes different skills than creating an interesting one; it’s easier to write and build an audience; and you don’t have to spend so much time explaining the setting, magic system, and other important info.

r/rational put together a spreadsheet of a bunch of rationalist fiction. You should find plenty of EA-related material there: https://docs.google.com/spreadsheets/d/1OEoxYzFeF0UpJmHY5pqHP_Yam-cw9kXDyXZbH6ANJiM/htmlview

Strongly downvoted for reasons stated above.

I know you had a paragraph where you said this, but you didn't actually explain why you thought it or why you thought others were wrong; far more of the article was devoted to arguing that those arguing in favor were inauthentic in their beliefs. It was also argued in a way which gave no insight into why you think the issue is intractable.

Eliezer is cleanly just a major contributor. If he went off the rails tomorrow, some people would follow him (and the community would be better with those few gone), but the vast majority would say “wtf is that Eliezer fellow doing?”. I also don’t think he sees himself as the leader of the community.

Probably Eliezer likes Eliezer more than EA/Rationality likes Eliezer, because Eliezer really likes Eliezer. If I were as smart & good at starting social movements as Eliezer, I’d probably also have an inflated ego, so I don’t take it as too unreasonable of a character flaw.

Seems like your first article doesn’t actually engage with discussions about wild animal suffering in a meaningful way, except to say that you’re unsure whether wild-animal-suffering advocates are authentic in their beliefs. But 1) in my experience they are, and 2) if they’re not but their arguments are still valid, then we should prioritize wild animal suffering anyway, and tell the pre-existing advocates to take their very important cause more seriously.

I’m glad you liked the post, but I wasn’t actually trying to make any points about EA’s weirdness going too far. Most of the points made about electrons here are very philosophically flawed.

I agree the name is non-ideal and doesn't quite capture the differences. A better term may be conventionalists versus non-conventionalists (or, to make both sides stand for something positive, conventionalists versus longtermists).

Conventionalists focus on cause areas like global poverty reduction, animal welfare, governance reforms, improving institutional decision making, and other things which have (to some extent) been done before.

Non-conventionalists focus on cause areas like global catastrophic risk prevention, s-risk prevention, improving our understanding of psychological valence, and other things which have mostly not been done before, or at least have been done comparatively fewer times.

These terms may also be terrible. Many before have tried to prevent the end of the world (see: Petrov) and to prevent s-risks (see: efforts against Nazism and Communism). It's also hard to draw a clear value or epistemic difference between the two divisions. One obvious choice is to say that conventionalists place less trust in inside-view reasoning, but the case that any particular (say) charter city trying out a brand-new organizational structure will be highly beneficial seems to rely on far more inside-view reasoning (economic theory, for instance) than the case that AGI is imminent (simply extrapolate graphs of progress or compute in the field).

It depends on what you mean. If you mean trying to help developing countries achieve the SDGs, then this won't work for a variety of reasons, the most straightforward of which is that using data-based approaches to build statistical models is different enough from cutting-edge machine learning or alignment research that it will very likely be useless to the task, and the vast majority of the benefit from such work is the standard benefit to people living in developing countries.

If you mean advocating for policies which subsidize good safety research, or advocating for interpretability research in ML models, then I think a better term would be "AI governance" or some other term which specifies that it's non-technical alignment work, focused on building institutions which are more likely to use solutions rather than on finding those solutions.

It seems a bit misleading to call many of these “AI alignment opportunities”. AI alignment has to do with the relatively narrow problem of solving the AI control problem (i.e., making it so very powerful models don’t decide to destroy all value in the world), and increasing the chances that society decides to use that solution.

These opportunities are more along the lines of using ML to do good in a general sense.
