KT

Karthik Tadepalli

Economics PhD @ UC Berkeley
3009 karma · Joined · Pursuing a doctoral degree (e.g. PhD) · karthiktadepalli.com

Bio

I research a wide variety of issues relevant to global health and development. I'm always happy to chat - if you think we have similar interests and would like to talk, send me a calendar invite at karthikt@berkeley.edu!

Sequences
1

What we know about economic growth in LMICs

Comments
382

"these people have been the most effective altruists in history" is about them being effective and altruistic, not members of a community called Effective Altruism.

Good to know, what's the source of this info?

Self identification seems like an obvious condition. If you were sharing news that Mr Beast was calling himself an EA, none of these comments would apply.

I'm puzzled: in the footnote, you say that there were 7x more comments than posts, but in the graph in 3.1.1, there are more posts than comments on almost every day, and the number of comments is never more than 2x the number of posts on the same day. How does that add up to 7x more?

FWIW your claim doesn't contradict the main point here, which is that AI governance is a better option to prioritize. The OP says it's because alignment is hard, you say it's because alignment is the default, but both point to the same conclusion in this specific case.

AI companies are constrained by the risk that they might not be able to monetize their products effectively enough to recover the enormous compute costs of training. As an extreme example, if everyone used free GPT but zero people were willing to pay for a subscription, then investors would become significantly less excited by AI companies, because the potential profits they would expect to recover would be lower than if people were willing to buy subscriptions at a high rate.

So I think it's better to frame the impact of a subscription not as "you give OAI $20" but rather "you increase OAI's (real and perceived) ability to monetize its products by 1/(# of subscribers)".

Without rehashing the moral offsetting debate, I seriously doubt that there are any AI safety funding options that provide as much benefit as the harm of enabling OpenAI. This intuition comes from the fact that Open Phil funds a ton of AI safety work, so your money would only be marginal for AI safety work that falls below their funding bar, combined with my anecdotal (totally could be wrong) view that AI safety projects are more limited by manpower than by money.
