I am an attorney in a public-sector position not associated with EA, although I cannot provide legal advice to anyone. My involvement with EA has so far been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, along with some independent reading. I had occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
My -- purely anecdotal -- sense is that GiveWell pays more than many of its social-sector peers.
To put some numbers on this, here is the data from GiveWell's 2023 Form 990 -- these do appear to be the eleven highest-paid employees (which is not always the case on a 990).
I can count at least three types of public participation here --
(3) is the easiest to dispose of in my view. Although surely people have changed their minds about things on Reddit and in similar places given their massive size, I get the impression that debate subreddits and the like accomplish very little for the amount of effort people pour into them. People generally aren't going into these kinds of spaces with open minds. Moreover, the anecdotal poll you mentioned was conducted on Reddit, which would oversample people who are active in such spaces; a truly random poll would presumably find online debate / discussion spaces to be even less important.
In contrast, people do make progress in like-minded spaces (2). But given that you find these spaces draining, the risk of them burning you out and distracting you from more effective activity presumably exceeds any marginal benefit from active participation. No one can do everything. Each person has different aptitudes, passions, and limitations that influence which actions are best suited to them. It sounds like yours are not well aligned with active participation in discussions, and that's fine. It's important that someone conduct in-depth research, communicate it, and discuss it, but it's not important that any given person do so (especially if it doesn't align with their aptitudes, passions, and limitations).
As far as being up to date, I think it's fine to find someone you trust and defer to their judgment as to donation targets. There are respectable reasons to think the end results would be better than trying to do your own research -- especially if you're not feeling motivated to do in-depth research and analysis.
That leaves (1), which is neither of limited utility as in (3), nor something others can clearly substitute for as in (2). It would be ideal if you could briefly mention certain things without feeling preachy, moralizing, or cringe. And it might make you feel better in the long run to take small steps toward publicly living in accordance with your values -- not trying to "convert" other people, but not hiding those values in shame either. Maybe a post on your social media linking to (e.g.) GiveWell and identifying yourself as a donor could be a step in that direction?[1] If anyone thinks that's being "overly self-identified," that's a them problem, not a you problem! But I wouldn't say it is ethically insufficient to be quiet.
Others may have more helpful things to say about how to identify as a vegan in ways that you'd find not too uncomfortable. Whether justified or not, vegans do have a reputation in some circles as being "preachy, moralizing, or overly self-identified by" their veganism. As far as I know, effective givers do not have that kind of general reputation, and posting about a charity to which you donate is a normal thing for people to do at least in my non-EA social circles.
Thanks for the clarification; I struck that bullet point from my comment. Sorry that my phrasing didn't accomplish what I meant to say -- that a non-funding decision would be consistent with anything between the funder being strongly opposed to the organization and the funder concluding that it was just under their bar. I'm glad to hear PauseAI is doing better with fundraising than I thought.
Does the evidence support a conclusion that EAs as a whole have some sort of consensus that is against pause advocacy and/or PauseAI US? The evidence most readily available to me seems mixed.
Holly's posts relating to AI issues have, on net, received significant karma over the past ~2 years, such as:
There's of course much more to EA than the Forum, but its metrics have the advantage of being quantifiable and thus maybe a little less vibes-based than some competing measures.
(If the country being invaded is democratic and holds elections during wartime, this decision would even have collective approval from citizens, since they'd regularly vote on whether to continue their defensive war or change to a government more willing to surrender to the invaders.)
There's some force to this -- but in most cases only a minority of citizens are at risk of conscription. This is usually true on demographics alone (e.g., men 25-60 in Ukraine, and significantly narrower ranges in 20th-century US conscription), and the pool is then narrowed further. A fair number of people who are demographically eligible know they would qualify for discretionary exemptions or mandatory exclusions (e.g., based on disability, occupation, single parenthood, etc.). Those who would reap the benefits of not surrendering to invaders, but would not personally bear the costs of conscription, would be incentivized to vote for more than the optimal amount of conscription (and for undercompensating those who were conscripted).
They weren’t just replicating the effort of experts.
In fact, they were largely building off the efforts of recognized domain experts. See, e.g., this bibliography from 2010 of sources used in the "initial formation of [its] list of priority programs in international aid," and this 2009 analysis of bednet programs.
The U.S. government advice was pretty bad, but I don't think this was from lack of knowledge. I think it was more a deliberate attempt to downplay the effectiveness of masks to mitigate supply issues.
I also wouldn't expect the government to necessarily perform well on getting the truth out there quickly, or on responding well to low-probability / high-impact events by taking positive-EV actions that cause significant disruption to the public. Government officials have to worry about the risk of stoking public panic and similar indirect effects much more than most private individuals, including rationalist thinkers. For example, @Denkenberger🔸 mentions some rationalists figuring out who they wanted to be locked down with on the early side; deciding that the situation warrants this kind of behavior -- like deciding to short the stock market, or most other private-actor moves -- doesn't require consideration of indirect effects the way government statements do. Nor are a political leader's incentives aligned with maximizing expected value in these sorts of situations.
So I'd consider beating the government to be evidence of competence, but not much evidence of particularly early or wise performance by private entities.
Additional reasons this might be true, at least in the EA space: