I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.
I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
I have only limited resources with which to do good. If I'm not doing good directly through a full-time job, I budget 20% of my income toward doing as much good as possible, and then I don't worry about it after that. If I spend time and money on advocating for a ceasefire, that's time and money that I can't spend on something else.
If you ask me my opinion about whether Israel should attack Gaza, I'd say they shouldn't. But I don't know enough about the issue to say what should be done about it, and I doubt advocacy on this issue would be very effective: "Israel and Palestine should stop fighting" has been more or less the consensus position among the general public for ~70 years, and it still hasn't happened. And I doubt anything I do will have an impact on the same scale as a donation to a GiveWell top charity.
To convince me to advocate for a ceasefire, you have to argue not just that it's good, but that it's the best thing I could be doing. All you've said is that it's good. Why is it the best thing that I could be doing? I'd like this post better if you said more about why it's the best thing. (I doubt I'd end up agreeing, but I appreciate when people make the argument.)
the results here don't depend on actual infinities (infinite universe, infinitely long lives, infinite value)
This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can't handwave away the implications of a finite-everywhere distribution with infinite EV.
(Just an offhand thought, I wonder if there's a way to fix infinite-EV distributions by positing that utility is bounded, but that you don't know what the bound is? My subjective belief is something like, utility is bounded, I don't know the bound, and the expected value of the upper bound is infinity. If the upper bound is guaranteed finite but with an infinite EV, does that still cause problems?)
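To make that last question concrete with a toy example of my own (not from the post): let the utility bound $B$ have $P(B = 2^n) = 2^{-n}$ for $n = 1, 2, 3, \ldots$. Then $B$ is finite with probability 1, but

$$E[B] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty,$$

so the St. Petersburg-style divergence just reappears one level up, in the distribution over the bound.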
I think this subject is very important and underrated, so I'm glad you wrote the post, and you raised some points that I wasn't aware of, and I would like to see people write more posts like this one. The post didn't do as much for me as it could have because I found two of its three main arguments hard to understand:
Some (small-sample) data on public opinion:
These surveys suggest that the average person gives considerably less moral weight to non-human animals than the RP moral weight estimates, although still enough weight that animal welfare interventions look better than GiveWell top charities (and the two surveys differed considerably from each other, with the MTurk survey giving much higher weight to animals across the board).
FWIW I haven't looked much into this but my surface impression is that climate change groups are eager to paint CCC as biased/bad science/climate deniers because (1) they don't like CCC's conclusion that many causes in global health and development are more cost-effective than climate change and (2) they tend to exaggerate the expected harms of climate change, and CCC doesn't.
My impression is that most of Lomborg's critics don't understand his claims—they don't understand the difference between "climate change isn't the top priority" and "climate change isn't real".
From what I've read, Lomborg's beliefs on climate change are in line with John Halstead's Climate Change & Longtermism report.
From the Australia Climate Council link, the most egregious claim I see from Lomborg is "But the [2014 IPCC] report also showed that global warming has dramatically slowed or entirely stopped in the last decade and a half." (The link in the article is broken but I found it via archive.org.) It looks to me like Lomborg's claim is literally true according to Australia Climate Council (I actually thought it was false but apparently I was wrong and Lomborg was right), but possibly misleading. In the context of Lomborg's article, it doesn't look to me like he's trying to claim global warming isn't happening, but that it's exaggerated.
A small thought that occurred to me while reading this post:
In fields where most people do a lot of independent diligence, you should defer to other evaluators more. (Maybe EA grantmaking is an example of this.)
In fields where people mostly defer to each other, you're better off doing more diligence. (My impression is VC is like this—most VCs don't want to fund your startup unless you already got funding from someone else.)
And presumably there's some equilibrium where everyone defers N% of their decisionmaking and does (100-N)% independent diligence, and you should also defer N%.
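As a toy illustration of the first two points (a model I made up, not anything from the post): suppose the community's published consensus partly anchors on a single early evaluation. Then the more everyone else defers to that consensus, the less independent information it contains, and the less weight you should put on it yourself. A minimal sketch in Python:

```python
import numpy as np

# Toy model: evaluators get noisy signals of a project's true quality, and the
# community's published consensus partly anchors on one early evaluation.
# Question: given how much the community defers, how much weight should I put
# on the consensus?

rng = np.random.default_rng(0)
n_evaluators, n_trials, noise = 50, 100_000, 1.0

truth = rng.normal(size=n_trials)
signals = truth + rng.normal(scale=noise, size=(n_evaluators, n_trials))
anchor = signals[0]  # the early evaluation everyone else saw first
my_signal = truth + rng.normal(scale=noise, size=n_trials)

for community_defer in (0.0, 0.5, 0.9):
    # Each community member publishes a blend of their own signal and the
    # anchor; the consensus I observe is the average of those published views.
    published = (1 - community_defer) * signals + community_defer * anchor
    consensus = published.mean(axis=0)

    def mse(my_defer):
        estimate = (1 - my_defer) * my_signal + my_defer * consensus
        return np.mean((estimate - truth) ** 2)

    best = min(np.linspace(0, 1, 101), key=mse)
    print(f"community defers {community_defer:.0%} -> "
          f"my best weight on consensus is about {best:.2f}")
```

On this setup the best weight to put on the consensus falls steadily as the community's deference rises, which is the asymmetry described above; the equilibrium N% would be wherever your best response equals what everyone else is doing.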
How feasible do you think this is? From my outsider perspective, I see grantmakers and other types of application-reviewers taking 3-6 months across the board and it's pretty rare to see them be faster than that, which suggests it might not be realistic to consistently review grants in <3 months.
E.g., the only job application process I've ever done that took <3 months was for a two-person startup.
Thanks, I hadn't gotten to your comment yet when I wrote this. Having read it now, I think your argument sounds solid; my biggest question (which I wrote in a reply to your other comment) is where the eta=0.38 estimate came from.
I think the answer to that question is no, because I don’t trust models like these to advise us on how much risk to take.
How would you prefer to decide how much risk to take?
OP has tried to estimate empirically the spending/impact curvature of a big philanthropic opportunity set – the GiveWell top charities – and ended up with an eta parameter of roughly 0.38.
I would love to see more on this if there are any public writeups or data.
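(For context, I'm assuming eta here is the usual isoelastic curvature parameter, i.e. utility from spending $x$ is $u(x) = x^{1-\eta}/(1-\eta)$; if so, $\eta = 0.38$ means impact scales roughly like $x^{0.62}$, i.e. fairly mild diminishing returns.)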
I may be misinterpreting your argument, but it sounds like it boils down to:
The jump from step 1 to step 2 looks like a mistake to me.
You also seemed to suggest (although I'm not quite sure whether you were actually suggesting this) that if a being cannot in principle describe its qualia, then it does not have qualia. I don't see much reason to believe this to be true—it's one theory of how qualia might work, but it's not the only theory. And it would imply that, e.g., human stroke victims who are incapable of speech do not have qualia because they cannot, even in principle, talk about their qualia.
(I think there is a reasonable chance that I just don't understand your argument, in which case I'm sorry for misinterpreting you.)