or it's funny to write like that if you feel like it. charles raises a fair point that social reactions to a post arrive far in the future, but their value can be many times that of the time you invested. that probably makes more sense for posts than comments though
Agreed that it matters who their potential donor pool is. If I found out that an org had run the event the author describes for highly committed EAs, I would be aghast. But by the standards of what is done to solicit ultra high net worth donors who move millions annually and who are not currently interested in EA, it seems entirely reasonable.
I think that most of this is good analysis: I am not convinced by all of it, but it is universally well-grounded and useful. However, the point about Communicating Risk, in my view, misunderstands the point of the original post and the spirit in which the discussion was happening at the time. It was not framed with the goal of "what should we, a group that includes a handful of policymakers among a much larger readership, be aiming to convince people of?" Rather, I saw it as a personally relevant tool that I used to validate advice to friends and loved ones about when they should personally get out of town. Evaluating the cost in effective hours of life made a comparison they and I could work with: how many hours of my life would I pay to avoid relocating for a month and paying for an AirBnB? I recognize that it's unusual to discuss GCRs this way, and I would never do it if I were writing in a RAND publication (I would use the preferred technostrategic language), but it was appropriate and useful in this context.
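To make the comparison concrete, here is a minimal back-of-envelope sketch of the kind of calculation I mean. All numbers are hypothetical placeholders for illustration, not figures from the original discussion:

```python
# Back-of-envelope "effective hours of life" comparison for deciding
# whether to temporarily relocate ahead of a possible catastrophe.
# Every number below is a made-up placeholder, not a real estimate.

HOURS_PER_YEAR = 365 * 24  # 8,760

def expected_hours_lost(p_event: float, years_of_life_lost: float) -> float:
    """Expected hours of remaining life lost by staying in town."""
    return p_event * years_of_life_lost * HOURS_PER_YEAR

# Hypothetical: a 1-in-10,000 chance of a catastrophe that would cost
# 40 years of remaining life.
risk_hours = expected_hours_lost(1e-4, 40)

# Hypothetical threshold: the hours of life you'd willingly trade to
# avoid a month of disruption and AirBnB costs.
relocation_cost_hours = 20.0

should_relocate = risk_hours > relocation_cost_hours
print(f"Expected hours of life at risk by staying: {risk_hours:.1f}")
print(f"Relocate? {should_relocate}")
```

The point is only that both sides of the decision land in the same unit (hours of one's own life), which makes the trade-off legible to people who would never engage with a probability expressed in isolation.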
Two points, but I want to start with praise. You noticed something important and provided a very useful writeup. I agree that this is an important issue to take seriously.
> While aiming to be the person available when policymakers want expert opinion does not favour more technocratic decision-making, actively seeking to influence policymakers does favour more technocratic decision-making
I don't think that this is an accurate representation of how policymakers operate, either for elected officials or bureaucrats. My view comes from a gestalt of years of talking with congressional aides, bureaucrats in and around DC, and working at a think tank that does policy research. Simply put, there are so many people trying to make their point in any rich democracy that being "available" is largely equivalent to being ignored.
There are exceptions, particularly academics who publish extensively on a topic and gain publicity for it, but most experts who don't actively attempt to participate in governance simply won't influence it. Policymakers have neither the spare time nor the spare energy to reliably seek out outside points of view and ideas.
More importantly, I think that marginal expert influence mostly crowds out other expert influence, and does not crowd out populist impulses. Here I am more speculative, but my sense is that elected officials get a sense of what the expert/academic view is, as one input in a decision-making process that also includes stakeholders, public opinion (based on polling, voting, and focus groups), and party attitudes (activists, other elected officials, aligned media, etc.). Hence an EA org that attempts to change views mostly displaces others occupying a similar social/epistemic/political role, not the influence of public opinion.
On the bureaucracy side, expert input, lawmaker input, and stakeholder input are typically the primary influences when considering policy change. Occasionally the public will notice something and apply pressure, but the Federal Register is very boring, and as the punctuated equilibrium model of politics suggests, most of the time the public isn't paying attention. And bureaucrats usually don't have the extra time and energy to seek out people whose work might be relevant but who aren't actively presenting it. Add that most exciting claims are false, so decision-makers would have to read through entire literatures to be confident in any one claim, and the influence experts cede goes primarily not to populist impulses but to existing stakeholders.
Suggesting that a future without industrialization is morally tolerable does not imply opposition to "any and all" technological progress, but the amount of space left is very small. I don't think they're taking an opinion on the value of better fishhooks.
The paper doesn't explicitly mention economic growth, but it does discuss technological progress, and at points seems to argue or insinuate against it:

> For others who value virtue, freedom, or equality, it is unclear why a long-term future without industrialisation is abhorrent: it all depends on one's notion of potential.

Personally, I consider a long-term future with a 48.6% child and infant mortality rate abhorrent and opposed to human potential, but the authors don't seem bothered by this. They have little enough space to explain how their implied society would handle the issue, though, so I will not critique it excessively.

There is also a repeated implication that halting technological progress is, at a minimum, possible and perhaps desirable:

> Since halting the technological juggernaut is considered impossible, an approach of differential technological development is advocated

> The TUA rarely examines the drivers of risk generation. Instead, key texts contend that regulating or stopping technological progress is either deeply difficult, undesirable, or outright impossible

And "regressing, relinquishing, or stopping the development of many technologies is often disregarded as a feasible option" implies to me that one of those three options is feasible, or at least worth investigating.

While they don't explicitly advocate degrowth, I think it is reasonable to read them as doing so, as John does.
I came here to say this: you have an unusual work position relative to most EAs, and are likely to be unusually good at identifying opportunities in the countries Wave operates in.
Useful context/prior art.
Should we have received a confirmation that our application was successfully received?
I was parsing your comment here as saying that the marginal impact of a GiveWell donation was pretty close to that of GiveDirectly. Here it seems like you don't endorse that interpretation?