Builds web apps (e.g. viewpoints.xyz) and makes forecasts. Currently I have spare capacity.
Talking to people in forecasting to improve my forecasting question generation tool.
Writing forecasting questions on EA topics.
Meeting EAs I become lifelong friends with.
Connecting them to other EAs.
Writing forecasting questions on Metaculus.
Talking to them about forecasting.
I think there is something here about how the kinds of people who are steady hands don't necessarily have great leverage, either in terms of pay or status. But realistically such a person may be very costly to replace or may fill a very valuable role.
In that way, a sensible organisation would increase their pay and (to the extent possible) status, reflecting not on the change in their output from year to year but on how difficult they are to replace: weeks of hiring, months of training, months of management time, and perhaps years of time passing before the function works as well as it previously did.
It is tricky to see how such negotiations can take place properly, but it seems likely to me that the sort of person who makes a steady hand might not be agitating for this. That in turn means those who would stay if paid more and appreciated more don't see that option as available to them.
I sort of think this is a reason not to have EA-endorsed politicians unless someone has really done the due diligence. This is a pretty high-trust community, and people expect something said confidently to have been robustly tested, but political recommendations (and some charity ones, to be fair) seem much less well researched than general discussions of policy etc.
I'm making my way through, but so far I guess it's gonna be @Richard Y Chappell🔸's arguments around ripple effects.
Animals likely won't improve the future for consciousness, but more, healthy humans might.
I haven't read the article fully yet though.
Argument: Nietzschean Perfectionism
@Richard Y Chappell🔸 theorises that:
maybe the best things in life—objective goods that only psychologically complex “persons” get to experience—are just more important than creature comforts (even to the point of discounting the significance of agony?). The agony-discounting implication seems implausibly extreme, but I’d give the view a minority seat at the table in my “moral parliament”
To my (Nathan's) ears, this is either a discontinuous valuation of pleasure and pain across consciousnesses or one that puts far more value at the higher end. On this view, an improvement to the life of a human could be worth an arbitrarily large number of insect lives.
I am willing to discuss (either in the comments or on a call) any of these arguments. I don't think any of them hold much water and I doubt that in total they are enough to shift the weight of what we should do.
I am glad @Henry Howard🔸 wrote them up, but to the extent there is now a big list of arguments I don't find compelling, I am slightly more convinced of my original view.
My response to this is that we can always take medians. And to the extent that the medians, multiplied by the number of animals, suggest this is a very large problem, the burden is on those who disagree to push the estimates down.
There isn't some rule that says extremely wide confidence intervals can be ignored. If anything, extremely wide confidence intervals ought to be inspected more closely, because the true value could lie almost anywhere within them.
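To make the median point concrete, here is a toy sketch. All the numbers below are made up for illustration (they are not from Rethink Priorities or any other study); the point is only that a robust central estimate of a wide range, multiplied by a very large population, can still imply a very large total.

```python
from statistics import median

# Hypothetical per-animal welfare-range estimates spanning a very wide
# confidence interval (illustrative values only, not real research outputs).
welfare_range_estimates = [0.001, 0.005, 0.02, 0.1, 0.4]

# Take the median rather than an optimistic tail value.
median_estimate = median(welfare_range_estimates)

# Multiply by a (hypothetical) population of farmed animals.
population = 10_000_000_000

total_scale = median_estimate * population

print(median_estimate)  # 0.02
print(total_scale)      # 200000000.0
```

Even with the wide interval, the median-based total comes out large, so dismissing the estimates on width alone doesn't settle the question.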
I just sort of think this argument doesn't hold water for me.
Argument: Approximations are too approximate.
@Henry Howard🔸 argues that much of the scholarship that animal welfare estimates are based on has such wide ranges that it doesn't support clear conclusions:
Unfortunately these ranges have such wide confidence intervals that, putting aside the question of whether the methodology and ranges are even valid, it doesn't seem to get us any closer to doing the necessary cost-benefit analyses.
Argument: The money can be spent over a long time and likely will be able to be spent.
The footnote on the main question says:
In total. You can imagine this is a trust that could be spent down today, or over any time period
Likewise @Will Howard🔹 argues that this isn't that significant an additional amount of money anyway:
"$100m in total is not a huge amount (equiv to $5-10m/yr, against a background of ~$200m). I think concern about scaling spending is a bit of a red herring and this could probably be usefully absorbed just by current intervention"
Interesting take. I don't like it.
Perhaps because I like saying overrated/underrated.
But also because overrated/underrated is a quick way to provide information. "Forecasting is underrated by the population at large" is much easier to think of than "Forecasting is probably rated 4/10 by the population at large and should be rated 6/10."
Over/underrated requires about 3 mental queries: "Is it better or worse than my ingroup thinks?" "Is it better or worse than the population at large thinks?" "Am I gonna have to be clear about what I mean?"
Scoring the current and desired status of something requires about 20 queries: "Is 4 fair?" "Is 5 fair?" "What axis am I rating on?" "Popularity?" "If I score it a 4, will people think I'm crazy?"...
Like, in some sense you're right that % forecasts are more useful than "more likely/less likely" and sizes are better than "bigger/smaller", but when dealing with intangibles like status I think it's pretty costly to calculate some status number, so I do the cheaper thing.
Also, would you prefer people used over/underrated less, or would you prefer the people who use over/underrated spoke less? Because I would guess that some chunk of those 50ish karma are from people who don't like the vibe rather than from some epistemic objection. And if that's the case, I think we should have a different discussion.
I guess I think that might come from a frustration around jargon or rationalists in general. And I'm pretty happy to try to broaden my answer beyond over/underrated, just as I would if someone asked me how big a star was and I said "bigger than an elephant". But it's worth noting it's a bandwidth thing, often used because giving exact sizes in status is hard. Perhaps we should have numbers and words for it, but we don't.