D0TheMath

An undergrad at the University of Maryland, College Park, majoring in math.

After finishing The Sequences at the end of 9th grade, I started following the EA community and changed my career plans to AI alignment. If anyone would like to work with me on this, PM me!

I’m currently starting the EA group at the University of Maryland, College Park.

Also see my LessWrong profile

Sequences

Effective Altruism Forum Podcast


Comments

Every moment of an electron's existence is suffering

Strongly downvoted for reasons stated above.

Every moment of an electron's existence is suffering

I know you had a paragraph where you said this, but you didn't actually explain why you thought this or why you thought others were wrong; far more of the article was devoted to stating why you thought those arguing in favor were inauthentic in their beliefs. It was also argued in a way which gave no insight into why you thought the issue was intractable.

On Deference and Yudkowsky's AI Risk Estimates

Eliezer is cleanly just a major contributor. If he went off the rails tomorrow, some people would follow him (and the community would be better off with those few gone), but the vast majority would say “wtf is that Eliezer fellow doing”. I don’t think he sees himself as the leader of the community either.

Probably Eliezer likes Eliezer more than EA/Rationality likes Eliezer, because Eliezer really likes Eliezer. If I were as smart & good at starting social movements as Eliezer, I’d probably also have an inflated ego, so I don’t take it as too unreasonable of a character flaw.

Every moment of an electron's existence is suffering

Your first article doesn’t actually engage with discussions about wild animal suffering in a meaningful way, except to say that you’re unsure whether wild animal suffering advocates are authentic in their beliefs. But 1) in my experience they are, and 2) if they’re not but their arguments are still valid, then we should prioritize wild animal suffering anyway, and tell the pre-existing wild animal suffering advocates to take their very important cause more seriously.

I’m glad you liked the post, but I wasn’t actually trying to make any points about EA’s weirdness going too far. Most of the points made about electrons here are very philosophically flawed.

Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

I agree the name is non-ideal and doesn't quite capture the differences. A better term may be conventionalists versus non-conventionalists (or, to make the two sides stand for something positive, conventionalists versus longtermists).

Conventionalists focus on cause areas like global poverty reduction, animal welfare, governance reforms, improving institutional decision making, and other things which have (to some extent) been done before.

Non-conventionalists focus on cause areas like global catastrophic risk prevention, s-risk prevention, improving our understanding of psychological valence, and other things which have mostly not been done before, or at least have been done comparatively fewer times.

These terms may also be terrible. Many before have tried to prevent the end of the world (see: Petrov) and to prevent s-risks (see: the efforts against Nazism and Communism). It's also hard to draw a clear value difference or epistemic difference between these two divisions. One obvious choice is to say that conventionalists place less trust in inside-view reasoning, but the case that any particular (say) charter city trying out a brand-new organizational structure will be highly beneficial seems to rely on far more inside-view reasoning (economic theory, for instance) than the case that AGI is imminent (which follows from a basic extrapolation of graphs of progress or compute in the field).

20 AI and impact opportunities

It depends on what you mean. If you mean trying to help developing countries achieve the SDGs, then this won't work, for a variety of reasons. The most straightforward is that using data-based approaches to build statistical models is different enough from cutting-edge machine learning or alignment research that it will very likely be useless to the task, and the vast majority of the benefit from such work comes from the standard benefits to people living in developing countries.

If you mean advocating for policies which subsidize good safety research, or advocating for interpretability research in ML models, then I think a better term would be "AI governance" or some other term which specifies that it's non-technical alignment work, focused on building institutions which are more likely to use solutions rather than on finding those solutions.

20 AI and impact opportunities

It seems a bit misleading to call many of these “AI alignment opportunities”. AI alignment has to do with the relatively narrow problem of solving the AI control problem (i.e. ensuring that very powerful models don’t decide to destroy all value in the world) and with increasing the chances that society decides to use that solution.

These opportunities are more along the lines of using ML to do good in a general sense.

EA Obligations versus Financial Security

This framing doesn’t clarify the issue much for me. Why do you think this billionaire would want young professionals to build their safety nets rather than donate? It seems there are considerations (GCRs, the potential monetary ease of building safety nets, low expected returns on some people’s long-term careers, low expectation that people stay involved in EA long-term, etc.) which may flip the calculus for any given person.

How can someone bet against EA?

They could find someone who agrees with Tyler, work out some measurement for the influence EA has on the world, and bet against each other on where that measurement will be several years or decades down the line.

Plausible influence measures include:

  • The amount of money moved to EA-aligned charities
  • The number of people who browse the forum
  • The number of federal politicians &/or billionaires who subscribe to EA ideas
  • Google trends data
  • The number of people who know about EA
  • The number of people who buy Doing Good Better
  • The number of charities that use the SNT framework to argue for more funding on their homepages
  • The number of EA-aligned charities
  • ...

The Culture of Fear in Effective Altruism Is Much Worse than Commonly Recognized

There are a lot of articles I've wanted to write for a long time on how those in effective altruism can help each other do more good and overall change effective altruism to even better than it is now. Yet there is a single barrier left stopping me. It's the culture of fear in effective altruism.

I suggest writing the articles anyway. I predict that unless your arguments are bad, the articles (supposing you write 5 articles) will each have >=0 karma a week after publication, and I'm willing to bet $10 at 1:10 odds that this is the case. We can agree on a neutral party to judge the quality of the articles if you'd like, and we can also adjust the odds and the karma threshold.
