D0TheMath

1098 karma · Joined · College Park, MD 20742, USA
Interests:
Forecasting

Bio

An undergrad at University of Maryland, College Park. Majoring in math.

After finishing The Sequences at the end of 9th grade, I started following the EA community, changing my career plans to AI alignment. If anyone would like to work with me on this, PM me!

I’m currently starting the EA group for the University of Maryland, College Park.

Also see my LessWrong profile

Sequences (1)

Effective Altruism Forum Podcast

Comments (171)

When you start talking about Silicon Valley in particular, you start getting confounders like AI, which has a high chance of killing everyone. But if we condition on that going well, or assume the relevant people won't be working on that, then yes, that does seem like a useful activity. Note, though, that Silicon Valley activities are not very neglected, and you can certainly do better than them by pushing EA money (not necessarily people[1]) into the research areas which are more prone to market failures or are otherwise too "weird" for others to believe in.

On the former, vaccine development & distribution and gene drives are obvious examples which come to mind; both have a commons problem. For the latter, intelligence enhancement.


    1. Why not people? I think EA has a very bad track record of extreme groupthink, caused by a severe lack of intellectual diversity & humility. This is obviously not very good when you're trying to increase the productivity of a field or research endeavor. ↩︎

This seems pretty unlikely to me, tbh. People are just less productive in the developing world than the developed world, and it's much easier to do stuff--including do good--when you have functioning institutions, are surrounded by competent people, and have connections, support structures, etc.

That's not to say sending people to the developed world is bad. Note that you can get lots of the benefits of living in a developed country by simply having the right to live in a developed country, or having your support structure, legal system, or credentials based in a developed country.

Of course, it's much easier to just allow everyone in a developing country to move to a developed country, but assuming the hyper-rationalist bot exists with an open-borders constraint, it seems incredibly obvious to me that what you say would not happen.

I think it seems pretty evil & infantilizing to force people to stay in their home country because you think they’ll do more good there. The most you should do is argue they’ll do more good in their home country than a western country, then leave it up to them to decide.

I will furthermore claim that if you find yourself disagreeing, you should live in the lowest quality of living country you can find, since clearly that is the best place to work in your own view.

Maybe I have more faith in the market here than you do, but I do think that technical, scientific, & economic advancement do in fact have a tendency to make everywhere not only better, but permanently so, even if the spread is slower than we’d like. By forcing the very capable to stay in their home country, we ultimately deprive the world and the future of the great contributions they might make given much better & healthier working conditions.

This is not the central threat, but if you did want a mechanism, I recommend looking into the Krebs cycle.

I do think this is correct to an extent, but also that much moral progress has been made by reflecting on our moral inconsistencies and smoothing them out. I at least value fairness, which is a complicated concept, but I am also actively repulsed by the idea that those closer to me should weigh more in society's moral calculations. Other values I have, like family, convenience, selfish hedonism, friendship, etc., are at odds with this fairness value in many circumstances.

But I think it's still useful to connect the drowning child argument with the parts of me which resonate with it, and think about how much I actually care about those parts of me over other parts in such circumstances.

Human morality is complicated, and I would prefer more people 'round these parts do moral reflection by doing & feeling rather than thinking, but I don't think there's no place for argument in moral reflection.

Even if most aren't receptive to the argument, the argument may still be correct, in which case it's still valuable to argue for and write about.

I agree with you about the bad argumentation tactics of Situational Awareness, but not about the object level. That is, I think Leopold's arguments are both bad, and false. I'd be interested in talking more about why they're false, and I'm also curious about why you think they're true.

Otherwise I think that you are in part spending 80k's reputation in endorsing these organizations

Agree on this. For a long time I've had a very low opinion of 80k's epistemics[1] (both the podcast and the website), and having orgs like OpenAI and Meta on there was a big contributing factor[2].


    1. In particular, they try to present as an authoritative source on strategic matters concerning job selection while not doing the necessary homework to actually claim such status, and they add those clarifications, if they ever do, in articles (and parts of articles) that empirically nobody reads and that I've found are hard to find. ↩︎

    2. Probably second to their horrendous SBF interview. ↩︎

The second two points don’t seem obviously correct to me.

First, the US already has a significant amount of food security, so it's unclear whether cultivated meats would actually add much.

Second, if cultivated meats destroy the animal agriculture industry, this could very easily lead to a net loss of jobs in the economy.

rationalist community kind of leans right wing on average

Seems false. It leans right compared to the extreme left wing, but right compared to the general population? No. It's too libertarian for that. I bet rightists would also say it leans left, and centrists would say it's too extreme. Overall, I think it's just classically libertarian.
