I've got a few:
My somewhat uncharitable reaction while reading this was something like "people running ineffective charities are upset that EAs don't want to fund them, and their philosopher friend then tries to argue that efficiency is not that important".
I'm a big fan of ideas like this. One of the things EAs can bring to charitable giving that is otherwise missing from the landscape is risk neutrality: a willingness to bet on high-variance strategies that, taken as a whole in a portfolio, may have the same or hopefully higher expected returns than typical risk-averse charitable spending, which tends to focus on ensuring that no money is wasted to the exclusion of taking the risks necessary to realize benefits.
Taking a predictive processing perspective, we should expect an initial decrease in happiness upon finding oneself living a less expensive lifestyle, because it would be a regular "surprise" violating the expected outcome. Over time, though, this surprise should fade as daily evidence slowly retrains the brain to expect less, so that perceiving the actual conditions carries less negative emotional valence.
However, I'd still expect someone who "fell from grace" like this to be somewhat sadder than a person who rose to the same level of wealth or grew up at it, because they'd have moments of sad nostalgia for better times that the others would lack. This would likely be a small effect and not easily detectable (I'd expect it to be washed out by noise in a study).
Without rising to the level of maliciousness, I've noticed a pattern related to the ones you describe here: sometimes my writing attracts supporters who don't really understand my point and whose statements of support I would not endorse because they misunderstand the ideas. They are easy to tolerate because they say nice things and may come to my defense against people who disagree with me, but much like your many flavors of malicious supporters, they can ultimately have negative effects.
I like the general idea here, but I personally dislike comments that don't tell the reader anything new, so saying the equivalent of "yay" without adding something is likely to get a downvote from me if the comment is upvoted, especially if it rises above more substantial comments.
I was quite surprised to hear how large the Fraunhofer Society is, given that I'd never heard of it before! In and of itself, I think this is a kind of evidence against their effectiveness, although I could also imagine they've turned out some winning innovations as part of contract work, in which case their involvement gets lost because I think of the result as something company X did.
It seems unclear to me that one model emitting more CO2 than one car necessarily implies that AI is likely to have an outsized impact on climate change. Some calculations seem to be missing here: the number of models, the number of cars, how much additional marginal CO2 is being created that isn't accounted for by other segments, and how much marginal impact on climate change is to be expected from the additional CO2 from AI models. With that in hand, we could potentially assess how much additional short-term climate risk AI poses.
Mixed. On the one hand, I feel like I'm less involved because I have less time for engaging with people on the forum and during events and am spending less time on EA-aligned research and writing.
On the other, that's in no small part because I took a job that pays a lot more than my old one, dramatically increasing my ability to give, but also requiring a lot more of my time. So I've sort of transitioned toward an earning-to-give relationship with EA that leaves me feeling more on the outside, but still connected and still benefiting from EA to guide my giving choices and keep me motivated to give rather than keep more for myself.
While I appreciate what the author is getting at, as presented I think it shows a lack of compassion for how difficult it is to do what one reckons one ought to do.
It's true you can simply "choose" to be good, but this is about as easy as saying that, for a wide variety of actions requiring no special skill, all you have to do is choose to do them: wake up early, exercise, eat healthier food when it's readily available, and so on. Despite this, lots of people explicitly choose to do these things and fail anyway. What's up?
The issue lies in what it means to choose. Unless you suppose some strong notion of free will, choosing is actually not that easy to control: there are many complex brain processes essentially competing to determine the next thing you do, so "choosing" looks less like an atomic, free-willed choice spontaneously happening and more like setting up conditions, both in the external world and in your mind, such that a particular choice happens. Getting to the point where you feel you can simply choose to do the right thing all the time requires a tremendous amount of alignment between the different parts of the brain competing to produce your next action.
I think it's best to take this article as a kind of advice. Sometimes the only thing keeping you from doing what you believe you ought to do is a minor hold-up: you don't believe you can do it, and accepting that you can suddenly means you can. But most of the time the fruit will not hang so low, and there will be a lot more to do in order to do what one considers morally best.