dan.pandori

397 karma · Joined May 2021

There are a lot of 'lurkers', but fewer than 30 folks were involved in the yearly holiday matching thread and sheet. Every self-professed EA I talked to at Google was involved in those campaigns, so I think that covers the most involved US Googlers.

It's certainly true that most people donated closer to 5-10% than to Jeff's or Oliver's much higher amounts.

So I think both your explanations are true: there are not that many EAs at Google (though I don't find that surprising), and most donate much less than they likely could. I put myself in that bucket; I donated around 20%, but likely could have given close to twice that. It would have been hard for me to do that in recent years, though, since I switched to Waymo, where I can't sell my stock.

RE: why aren't there more EAs giving this much money: I'm (obviously) not Jeff, but I was at Alphabet for many of the years Jeff was, and, relevantly, I was also involved in the yearly donation matching campaigns. There were around 2-3 other folks who donated amounts similar to Jeff's, and those four-ish people accounted for the majority of EA matching funds at Alphabet.

It's hard to be sure how many people donated outside of giving campaigns, so this might be an undercount. But to get to 1k EAs donating this much, you'd need something like 300 companies with similarly sized EA contingents (at roughly 3-4 such donors per company). I don't think there are 300 companies with as large a (wealthy) EA contingent as Alphabet's, so the fact that Jeff was a strong outlier even at Google explains most of this to me.

I think there are only around 5k individuals as committed to EA as Jeff and his wife are. And making as much money as they did is fairly rare, especially once you account for the likelihood of super-committed folks going into direct work.

Legal or constitutional infeasibility does not always prevent executive orders from being issued (or followed). The US president declaring a state of emergency over AI catastrophic risk (and then forcing large AI companies to stop training large models) sounds at least as constitutionally viable as the attempted executive order for student loan forgiveness.

I agree that this seems fairly unlikely to happen in practice though.

I deeply appreciate the degree to which this comment acknowledges issues and provides alternative organizations that may be better in specific respects. It has given me substantial respect for LTFF.

This feels like a "be the change you want to see in the world" moment. If you want such an event, it seems like you could basically just make a forum post (or quick take) offering 1:1s?

I think basically all of these are being pursued, and many are good ideas. I would be less put off if the post title were 'More people should work on aligning profit incentives with alignment research'; suggesting that no one is doing this seems off base.

This is what I found after a few minutes of Google searching (I'm not endorsing any of these links beyond noting that they claim to do the thing described).

AI Auditing:
https://www.unite.ai/how-to-perform-an-ai-audit-in-2023/

Model interpretability:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-machine-learning-interpretability?view=azureml-api-2

Monitoring and usage:
https://www.walkme.com/lpages/shadow-ai/

Future Endowment Fund sounds a lot like an impact certificate:
https://forum.effectivealtruism.org/posts/4bPjDbxkYMCAdqPCv/manifund-impact-market-mini-grants-round-on-forecasting

I agree that 'utilitarianism' often gets collapsed into meaning some variant of hedonic utilitarianism. I would like to hold philosophical discourse to a higher bar: once someone invokes hedonic utilitarianism specifically, I'm going to hold them to the standard of distinguishing it from, for example, preference utilitarianism.

I agree hedonic utilitarians exist. I'm just saying the utilitarians I've talked to always add more terms than pleasure and suffering to their utility function. Most are preference utilitarians.

I feel like 'valuism' is redefining utilitarianism, and the contrasts with utilitarianism don't seem very convincing. For instance, you define valuism as noticing what you intrinsically value and taking effective action to increase it. That seems identical to a utilitarian whose utility function is composed of what they intrinsically value.

I think you might be defining utilitarianism such that utilitarians are only allowed to care about one thing? That's sort of true, in that utilitarianism generally advocates converting everything onto a common scale, but that scale can measure multiple things. My utility function includes happiness, suffering, beauty, and curiosity as terms; this is totally fine, and a normal part of utilitarian discourse. Most utilitarians I've talked to are total preference utilitarians; I've never met a pure hedonistic utilitarian.

Likewise, I'm allowed to maintain my happiness and mental health as instrumental goals for maximizing utility. This doesn't mean utilitarianism is wrong; it just means we can't pretend to be soulless utility-maximizing robots. I feel like there's a post about folks realizing this at least every few months. Which makes sense! It's an important realization!

Also, utilitarianism doesn't need objective morality any more than any other moral philosophy does, so I didn't understand your objection there.

This comment came across as unnecessarily aggressive to me.

The original post is a newsletter that seems to be trying to paint everyone in their best light. That's a nice thing to do! The epistemic status of the post (hype) also feels pretty clear already.
