IanDavidMoss

Comments

On Mike Berkowitz's 80k Podcast

The professional politicians of the Republican party were not close to siding with Trump. Would the Republican Speaker (elected by the median House Republican) see higher expected value in supporting a coup or in rejecting it? The party loses massive membership if it supports one, and gains de facto political power if the coup wins. But Republicans just want to veto bills, so why transition to a populist regime? It would never be a good choice for the party.

The Republican House Minority Leader, Kevin McCarthy, was on Fox News on November 6 saying, "Donald Trump won this election, so everyone who's listening: do not be quiet. Do not be silent about this. We cannot allow this to happen before our very eyes... Join together and let's stop this." He later signed onto an amicus brief supporting a lawsuit that, if successful, would have overturned the election in four states after the results were already certified. He then voted, along with most of his caucus, to reject certification of the election results in Arizona and Pennsylvania after the insurrection.

Share your views on the scope of improving institutional decision-making as a cause area

Hi Ramiro, that would be fine, although I recommend you caveat it with the context that this is all in development/subject to change/etc. Thanks!

Announcing "Naming What We Can"!

In fairness, David Moss was doing useful things in EA way before me, so I should probably be Ian David NO NOT THAT DAVID Moss!

Announcing "Naming What We Can"!

David, I hate to remind you that EA interventions are supposed to be tractable...

EA Funds has appointed new fund managers

I also found that confusing, for what it's worth.

AMA: Ian David Moss, strategy consultant to foundations and other institutions

As part of the working group's activities this year, we're currently in the process of developing a prioritization framework for selecting institutions to engage with. In the course of setting up that framework, we realized that the traditional Importance/Tractability/Neglectedness schematic doesn't include an explicit consideration of downside risk, so we've added downside risk as a factor, framed in terms of what it would look like to engage with a given institution. With the caveat that this is still in development, here are some mechanisms we've come up with by which an intervention to improve decision-making could cause more harm than good:

  • The involvement of people from our community in a strategy to improve an institution's decision-making reduces the chances of that strategy succeeding, or its positive impact if it does succeed
    • (This seems most likely to be a reputation/optics effect, e.g. for whatever reason we are not credible messengers for the strategy or we bring controversy to the effort where it didn't exist before. It will be most relevant where there is already capacity in place among other stakeholders or players in the system to make a change, so that there is something to lose by our getting involved.)
  • The strategy selected leads to worse outcomes than the status quo due to poor implementation or an incomplete understanding of its full implications for the organization
    • (One way I've seen this go wrong is with reforms intended to increase the amount of information available to decision-makers at the expense of some ongoing investment of time. Often, there is insufficient attention put toward ensuring use of the additional information, with the result that the benefits of the reform aren't realized but the cost in time is still there.)
  • A failed attempt to execute on a particular strategy at the next available opportunity crowds out what would otherwise be a more successful strategy in the near future
    • (This one could go either way; sometimes it takes several attempts to get something done and previous pushes help to lay the groundwork for future efforts rather than crowding them out. However, there are definitely cases where a particularly bad execution of a strategy can poison critical relationships or feed into a damaging counter-narrative that then makes future efforts more difficult.)
  • The strategy succeeds in improving decision quality at that particular institution, but it doesn't actually improve world outcomes because of insufficient altruistic intent on the part of the institution
    • (We do define this sort of value alignment as a component of decision quality, but since it's only one element it would theoretically be possible to engage in a way that solely focuses on the technical aspects of decision-making, only to see the improved capability directed toward actions that cause global net harm even if they are good for some of the institution's stakeholders. I think that there's a lot our community can do in practice to mitigate this risk, but in some contexts it will loom large.)

I think all of these risks are very real but also ultimately manageable. The most important way to mitigate them is to approach engagement opportunities carefully and, where possible, in collaboration with people who have a strong understanding of the institutions and/or individual decision-makers within them.
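
For concreteness, here's a minimal sketch of how a scoring rubric along these lines might look. Everything in it is an illustrative assumption on my part (the 1-5 scales, the multiplicative ITN combination, the linear risk discount, and the example numbers), not the working group's actual framework:

```python
from dataclasses import dataclass

@dataclass
class InstitutionAssessment:
    """Hypothetical 1-5 ratings for a candidate institution."""
    importance: float     # scale of the decisions the institution influences
    tractability: float   # how amenable it is to outside engagement
    neglectedness: float  # how few others are already working on it
    downside_risk: float  # likelihood/severity of the failure modes above

def priority_score(a: InstitutionAssessment) -> float:
    """Classic ITN product, discounted linearly by downside risk."""
    # Both the multiplicative form and the linear discount are
    # illustrative choices, not the working group's actual formula.
    return a.importance * a.tractability * a.neglectedness * (1 - a.downside_risk / 5)

# A high-ITN but risky institution vs. a more modest, safer one:
risky = InstitutionAssessment(importance=5, tractability=3, neglectedness=4, downside_risk=4)
safer = InstitutionAssessment(importance=3, tractability=4, neglectedness=3, downside_risk=1)
print(priority_score(risky))  # 60 * 0.2 = 12.0
print(priority_score(safer))  # 36 * 0.8 = 28.8
```

The point of the discount term is just to illustrate how an opportunity that scores well on ITN alone can fall below a safer alternative once downside risk is priced in.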

Is pursuing EA entrepreneurship becoming more costly to the individual?

To clarify, when I wrote "without the promise of scale on the other side it's really hard to justify taking risks," I was talking from the perspective of the founder pouring time and career capital into a project, not a funder deciding whether to fund it.

Is pursuing EA entrepreneurship becoming more costly to the individual?

I generally think that full-time social entrepreneurship (in the sense of being dependent on contributed income) early in one's career is quite risky and a bad idea for most people, no matter what context or community you're talking about. I would say that, if anything, EA has made this proposition seem artificially attractive in recent years because of a) the unusual amount of money it's been able to attract to the cause during its first decade of existence and b) the high profile of a few outlier founders in the community who managed to defy the odds and become very successful. But the underlying reality is that it's really hard to scale anything without a self-sustaining business model, and without the promise of scale on the other side it's really hard to justify taking risks.

With that being said, I do think that risk-taking is really valuable to the community and EA is unusually well positioned to enable it without forcing founders to incur the kinds of costs you're talking about. One option, as tamgent mentioned in another comment, is to encourage entrepreneurship as a side project to be pursued alongside a job, full-time studies, or other major commitment. After all, that's how GiveWell, Giving What We Can, and 80,000 Hours all got started, and the lack of a single founder on the job full-time at the very beginning certainly didn't harm their growth. Another option, as EA Funds is now encouraging, is to make a point of generously funding time-limited experiments or short-term projects that provide R&D value for the community without necessarily setting back a founder or project manager in their career. Finally, EA funders could seek to form stronger relationships with funders outside of the community that are aligned on specific cause areas or other narrow points of interest to be better referral sources and advocates for projects that expect to require significant funds over an extended period.

But coming back to your core point, I would definitely encourage most EAs to pursue full-time employment outside of the EA community, even if they choose to stay within the social sector broadly. It's a vast, vast world out there, and it's all too easy to draw a misleading line from EA's genuinely impressive growth and reach to a wild overestimate of the share of relevant opportunities it represents for anyone trying to make the world a better place.

AMA: Ian David Moss, strategy consultant to foundations and other institutions

Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?

My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so. Aside from technologies to prevent an asteroid from hitting the earth and similarly super-rare cataclysmic natural events, I'm hard pressed to think of examples of things that are obviously worth working on that don't meet that test. But I'm happy to be further educated on this subject.

How do you feel about longtermist work that specifically aims at one of the following?

Yeah, that sort of "anti-fragile" approach to longtermism strikes me as completely reasonable, and obviously it has clear connections to the IIDM cause area as well.

AMA: Ian David Moss, strategy consultant to foundations and other institutions

A part of it, definitely. At the same time, there are other projects that may not offer much opportunity for innovation but where I still feel I can make a difference because I happen to be good at the thing they want me to do. So a more complete answer to your original question is that I choose and seek out projects based on a matrix of factors including the scale/scope of impact, how likely I am to get the gig, how much of an advantage I think working with me would offer them over whatever the replacement or alternative would be, how much it would pay, the level of intrinsic interest I have in the work, how much I would learn from doing it, and how well it positions me for future opportunities I care about.
