Robi Rahman

Data Scientist @ Epoch
1263 karma · Joined Aug 2021 · Working (6-15 years) · New York, NY, USA

Bio


Data scientist working on AI forecasting through Epoch and the Stanford AI Index. GWWC pledge member since 2017. Formerly social chair at Harvard Effective Altruism, facilitator for Arete Fellowship, and founder of the DC Slate Star Codex meetup.

Comments (179)

I don't see Shapley values mentioned anywhere in your post. I think you've made a mistake in how you attribute the value of work that multiple people contributed to, and Shapley values would help you fix it.
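For context, Shapley values split credit for a joint outcome by averaging each contributor's marginal contribution over every order in which the contributors could have joined. A minimal sketch in Python, with toy numbers and hypothetical names, not anything from the post:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over every join order."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition = coalition | {p}
            totals[p] += value(coalition) - before
    return {p: totals[p] / len(orders) for p in totals}

# Toy example: two collaborators who each produce nothing alone
# but 10 units of value together.
v = lambda coalition: 10.0 if len(coalition) == 2 else 0.0
print(shapley_values(["alice", "bob"], v))  # {'alice': 5.0, 'bob': 5.0}
```

In this toy case each person is credited with 5 units, rather than each being credited with the full 10 for a "counterfactual" total of 20.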

I don't really see anything in the article to support the headline claim, and the anonymous sources don't actually work at NIST, do they?

Rather than farmers reinvesting profits from growing plants into animal farming, I think the main avenue of harm is that animal feed is an input to meat production: if the supply of feed increases, the production of meat will increase.

Under preference utilitarianism, it doesn't necessarily matter whether AIs are conscious.

I'm guessing preference utilitarians would typically say that only the preferences of conscious entities matter. I doubt any of them would care about satisfying an electron's "preference" to be near protons rather than ionized.

So you think your influence on future voting behavior is more impactful than your effect on the election you vote in?

Gina and I eventually decided that the data collection process was too time-consuming, and we stopped partway through.

Josh You and I wrote a Python script that searches Google for a list of keywords, saves the text of the web pages in the search results, and then prompts GPT with questions about each page. This would quickly automate the rest of your data collection if you already have the pledge signers in a list. Email me if you want a copy.
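The script itself isn't included here, but a rough sketch of the approach might look like the following (assuming the third-party googlesearch-python package for the search step and the OpenAI Python client for the questions; the model name, prompt, and keyword list are placeholders, not the ones we used):

```python
import requests
from bs4 import BeautifulSoup          # pip install beautifulsoup4
from googlesearch import search        # pip install googlesearch-python
from openai import OpenAI              # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Does this page indicate the person still keeps their pledge?"  # placeholder prompt

def page_text(url: str) -> str:
    """Fetch a page and return its visible text."""
    html = requests.get(url, timeout=10).text
    return BeautifulSoup(html, "html.parser").get_text(separator=" ", strip=True)

def ask_gpt_about(keyword: str, num_results: int = 5) -> list[tuple[str, str]]:
    """Search for a keyword, then ask the model the question about each result."""
    answers = []
    for url in search(keyword, num_results=num_results):
        text = page_text(url)[:8000]  # truncate so the prompt stays small
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": QUESTION},
                {"role": "user", "content": text},
            ],
        )
        answers.append((url, resp.choices[0].message.content))
    return answers

if __name__ == "__main__":
    for kw in ["example pledge signer"]:  # replace with your list of signers
        for url, answer in ask_gpt_about(kw):
            print(kw, url, answer, sep="\t")
```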

The social value of voting in elections is a question where I've seen a lot of good arguments on both sides, and it remains unresolved, with substantial implications for how I should behave. I would really love to see a debate between Holden Karnofsky, Eric Neyman, and Toby Ord on one side and Chris Freiman and Jacob Falkovich on the other.

Context for people who don't follow the authors:

"Why Swing-State Voting is not Effective Altruism" by Jason Brennan and Chris Freiman: https://onlinelibrary.wiley.com/doi/abs/10.1111/jopp.12273

Eric Neyman on voting: https://ericneyman.wordpress.com/?s=vot

"Casting the Decisive Vote" by Toby Ord: https://www.tobyord.com/writing/decisive-vote

"Vote Against" by Jacob Falkovich: https://putanumonit.com/2015/12/30/010-voting/

I don't think this is empirically true. US speed limits are typically set below the safest driving speed for the road, so the expected harm ("micromurders") from moderate speeding is often negative in areas without pedestrians.

I agree. However, isn't there still the danger that, as scientific research is augmented by AI, nanotechnology will become more practical? The steelmanned case for nanotech x-risk would probably argue that various things that are intractable for us now face no theoretical barrier, and could be done if we became slightly better at adjacent techniques.
