Robi Rahman

Data Scientist @ Epoch
1405 karma · Joined · Working (6-15 years) · New York, NY, USA
www.robirahman.com

Bio

Participation
9

Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.

Comments
209

From an altruistic cause prioritization perspective, existential risk seems to require longtermism

No, it doesn't! Scott Alexander has a great post about how existential risk work is actually perfectly well motivated without appealing to longtermism at all.

When I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?)

working on AI x-risk is mostly about increasing the value of the future, because, in his view, it isn't likely to lead to extinction

Ah yes I get it now. Thanks!

What is maxevas? Couldn't find anything relevant by googling.

Hope I'm not misreading your comment, but I think you might have voted incorrectly, as if the scale is flipped.

[This comment is no longer endorsed by its author]
93% agree

On the current margin, improving our odds of survival seems much more crucial to the long-term value of civilization. My reason for believing this is that some dangerous technologies will likely be invented soon, and they are more likely to lead to extinction in their early years than later on. Therefore, we should currently spend more effort on ensuring survival; if we make it through that early window, we will have plenty of time to improve the value of the future afterward.

(Counterpoint: ASI is the main technology that might lead to extinction, and the period when it's invented might be equally front-loaded in terms of setting values as it is in terms of extinction risk.)
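To make the front-loading intuition concrete, here is a toy calculation in Python. Every hazard rate below is an invented number, chosen purely for illustration and not taken from the argument above; the point is only that cutting extinction risk during a dangerous early window buys more survival probability than an equal cut later.

```python
# Toy model of front-loaded extinction risk. All hazard rates are
# made-up numbers used only to illustrate the argument above.

def survival_probability(hazards):
    """Probability of surviving every period, given a list of
    per-period extinction probabilities."""
    p = 1.0
    for h in hazards:
        p *= 1.0 - h
    return p

# Suppose a dangerous technology raises annual extinction risk to 1%
# for its first 30 years, after which risk settles at 0.05% per year.
baseline     = [0.01] * 30 + [0.0005] * 70
early_halved = [0.005] * 30 + [0.0005] * 70   # halve risk in the risky window
late_halved  = [0.01] * 30 + [0.00025] * 70   # halve risk in the safe period

print(survival_probability(baseline))      # ~0.71
print(survival_probability(early_halved))  # ~0.83
print(survival_probability(late_halved))   # ~0.73
```

The same proportional risk reduction is worth far more during the early high-risk years, which is the sense in which effort on survival is front-loaded.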

stop the EA (or two?) that seem to have joined DOGE and started laying waste to USAID

I'm out of the loop: who's the alleged EA working at DOGE?

The idea of haggling doesn't sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative.

Counterpoint: some people are more price-sensitive than typical consumers, and really can't afford things. If we prohibit or stigmatize haggling, society is leaving value on the table, in terms of sale profits and consumer surplus generated by transactions involving these more financially constrained consumers. (When the seller is a monopolist, they even introduce opportunities like this through the more sinister-sounding practice of price discrimination.)
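As a minimal sketch of the surplus point, here is a toy example in Python. All of the prices and costs are invented numbers, not anything from the discussion above.

```python
# Invented numbers illustrating the value lost when haggling is off
# the table.

posted_price       = 100  # seller's fixed, take-it-or-leave-it price
seller_cost        = 60   # seller's marginal cost
willingness_to_pay = 80   # a price-sensitive buyer's maximum

# No haggling: the buyer can't pay the posted price, so no sale occurs
# and total surplus is 0.

# With haggling, any price between 60 and 80 makes both sides better
# off. Suppose they settle at 70:
haggled_price    = 70
seller_profit    = haggled_price - seller_cost          # 10
consumer_surplus = willingness_to_pay - haggled_price   # 10
total_surplus    = seller_profit + consumer_surplus     # 20, versus 0
```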

I think EAs have the mental strength to handle diverse political views well.

No, I think you would expect EAs to have the mental strength to handle diverse political views, but in practice most of them don't. For example, see this heavily downvoted post about demographic collapse by Malcolm and Simone Collins. Everyone is egregiously misreading it as racist, or else downvoting it because of vague right-wing associations they have with the authors.

If you don't aim to persuade anyone else to agree with your moral framework and take action along with you, you're not doing the most good within your framework.

(Unless your framework says that any good/harm done by anyone other than yourself is morally valueless and therefore you don't care about SBF, serial killers, the number of people taking the GWWC pledge, etc.)

embrace of the "Meat-Eater Problem" inbuilt into both the EA Community and its core ideas

Embrace of the meat-eater problem is not built into the EA community. I'm guessing a large majority of EAs, especially the less engaged ones who don't comment on the Forum, would not take the meat-eater problem seriously as a reason we ought to save fewer human lives.
