The professional politicians of the Republican party were not close to siding with Trump. Would the Republican speaker (elected by the median House Republican) see higher expected value in supporting a coup or in rejecting it? The party loses massive membership if it supports one, and gains de facto political power if it wins. But Republicans just want to veto bills, so why transition to a populist regime? It will never be a good choice for the party.
The Republican House Minority Leader, Kevin McCarthy, went on Fox News on November 6 and said, "Donald Trump won this election, so everyone who's listening: do not be quiet. Do not be silent about this. We cannot allow this to happen before our very eyes... Join together and let's stop this." He later signed onto an amicus brief supporting a lawsuit that, if successful, would have overturned the election in four states after the results were already certified. He then voted, along with most of his caucus, to reject certification of the election results in Arizona and Pennsylvania even after the insurrection.
Hi Ramiro, that would be fine, although I recommend you caveat with the context that this is all in development/subject to change/etc. Thanks!
In fairness, David Moss was doing useful things in EA way before me, so I should probably be Ian David NO NOT THAT DAVID Moss!
David, I hate to remind you that EA interventions are supposed to be tractable...
I also found that confusing, for what it's worth.
As part of the working group's activities this year, we're currently in the process of developing a prioritization framework for selecting institutions to engage with. In the course of setting up that framework, we realized that the traditional Importance/Tractability/Neglectedness schematic doesn't really have an explicit consideration for downside risk. So we've added that in the context of what it would look like to engage with an institution. With the caveat that this is still in development, here are some mechanisms we've come up with by which an intervention to improve decision-making could cause more harm than good:
I think all of these risks are very real but also ultimately manageable. The most important way to mitigate them is to approach engagement opportunities carefully and, where possible, in collaboration with people who have a strong understanding of the institutions and/or individual decision-makers within them.
To clarify, when I wrote "without the promise of scale on the other side it's really hard to justify taking risks," I was speaking from the perspective of a founder pouring time and career capital into a project, not a funder deciding whether to fund it.
I generally think that full-time social entrepreneurship (in the sense of being dependent on contributed income) early in one's career is quite risky and a bad idea for most people no matter what context or community you're talking about. I would say that, if anything, EA has made this proposition seem artificially attractive in recent years because of a) the unusual amount of money it's been able to attract to the cause during its first decade of existence and b) the high profile of a few outlier founders in the community who managed to defy the odds and become very successful. But the fundamental underlying reality is that it's really hard to scale anything without a self-sustaining business model, and without the promise of scale on the other side it's really hard to justify taking risks.
With that being said, I do think that risk-taking is really valuable to the community and EA is unusually well positioned to enable it without forcing founders to incur the kinds of costs you're talking about. One option, as tamgent mentioned in another comment, is to encourage entrepreneurship as a side project to be pursued alongside a job, full-time studies, or other major commitment. After all, that's how GiveWell, Giving What We Can, and 80,000 Hours all got started, and the lack of a single founder on the job full-time at the very beginning certainly didn't harm their growth. Another option, as EA Funds is now encouraging, is to make a point of generously funding time-limited experiments or short-term projects that provide R&D value for the community without necessarily setting back a founder or project manager in their career. Finally, EA funders could seek to form stronger relationships with funders outside of the community that are aligned on specific cause areas or other narrow points of interest to be better referral sources and advocates for projects that expect to require significant funds over an extended period.
But coming back to your core point, I would definitely encourage most EAs to pursue full-time employment outside of the EA community, even if they choose to stay within the social sector broadly. It's a vast, vast world out there, and it's all too easy to draw a misleading line from EA's genuinely impressive growth and reach to a wild overestimate of the share of relevant opportunities it represents for anyone trying to make the world a better place.
Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?
My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so. Aside from technologies to prevent an asteroid from hitting the earth and similarly super-rare cataclysmic natural events, I'm hard pressed to think of examples of things that are obviously worth working on that don't meet that test. But I'm happy to be further educated on this subject.
How do you feel about longtermist work that specifically aims at one of the following?
Yeah, that sort of "anti-fragile" approach to longtermism strikes me as completely reasonable, and obviously it has clear connections to the IIDM cause area as well.
A part of it, definitely. At the same time, there are other projects that may not offer much opportunity for innovation but where I still feel I can make a difference because I happen to be good at the thing they want me to do. So a more complete answer to your original question is that I choose and seek out projects based on a matrix of factors: the scale and scope of impact, how likely I am to get the gig, how much of an advantage working with me would offer over the replacement or alternative, how much it would pay, my level of intrinsic interest in the work, how much I would learn from doing it, and how well it positions me for future opportunities I care about.