Yelnats T.J.

521 karma


Co-founder of Concentric Policies

CE Incubatee 2023

Talk to me about American governance/political systems/democracy


My journey to EA:

  • 2009: start arriving at utilitarian-adjacent ethics
  • Dec 2012: read Peter Singer’s Famine, Affluence, and Morality
  • Circa 2013/14: find my way to EA through googling about Singer and FAaM
  • 2014-2019: in the orbit of EA, i.e. I'd talk to people about morality and utilitarian stuff but wasn't very engaged in the community aside from attending uni club meetings every once in a while.
  • 2020: EAGxVirtual (I’m starting to move from the orbit closer to the actual community)
  • 2022: Dive deep into the community. And now we arrive at the present day. 


Topic contributions

This is devastating to find out. My interactions with Marisa were very limited but quite significant to me.

I rescued a two-month-old street kitten while on the EA Zanzibar residency. When I got back stateside, she tested (faintly) positive for a disease, and my partner did not want to risk exposing our other cats to her. I urgently needed to find a place to stay with her while we waited to re-test her, and I was running out of options.

I had only met Marisa once in-person before. A mutual friend suggested talking to her. She was visiting family for the Easter weekend and let me stay with my kitten at her place. Marisa was invaluable in that moment of personal crisis for me.

Joey has answered this before elsewhere (i.e. why doesn't CE just open programs instead of spinning off charities). The answer is that starting a charity leads to more ownership and thus better results.

I'd also add that many programs in one charity raise the stakes of the leadership's judgement/decision-making. More charities, in a way, act like diversification.

I would start with some version of the drowning child scenario in Peter Singer's Famine, Affluence, and Morality.

Two considerations:
1. does protecting democracy have to be "painting with leftist colors"?
2. even if it was, does the ROI justify it?

On the first, as noted in this EAGxVirtual lightning talk I gave on the US context, the design of the political system is a big upstream cause of authoritarian voter bases and, to an even larger extent, authoritarian politicians. Many of the reforms to the US political system that would in the short to long term reduce the antidemocratic threat are "bi-populist," as I like to call it.

Left- and right-wing populists are generally for campaign finance reform, preventing politicians from becoming lobbyists, having a districting and voting system that enables third parties, etc. There are some notable very partisan exceptions, like moving from the electoral college to a national popular vote and making the Senate less minoritarian.

In the US context, I think disciplined and skilled advocates can keep political system reform a populist/anti-establishment issue and avoid culture war framing. I'm not sure how this generalizes to Europe or Germany. While I don't think it generalizes 100%, I suspect it generalizes at least a little.

On the second, I think the ROI on this issue is uniquely high compared to other political issues. Small-l liberal democracies (e.g. Germany, the United States, Italy) falling into something other than liberal democracy (Hungary is a good example of this) strikes me as patently super bad for the suffering of humans/animals, the long-term future, the EA agenda, and any other issues we might care about. I think this is uniquely true of the United States because it is the world superpower and the leading place for emerging technologies. However, the stakes are high even in places like Germany. How much would be lost from an authoritarian nativist government coming to power and eliminating all German foreign aid alone?

In short, if the Great Powers stop being liberal democracies, a lot of our agenda becomes moot because we will have bigger problems. Instead of working to make German aid more effective, we will be working to get the govt to give any aid at all and to ensure an authoritarian govt doesn't manipulate the political process to never relinquish power. Instead of working to get the United States Government to approach international AI coordination in a way that doesn't lead to an international arms race in capabilities, we will be fighting to keep a strongman from disrupting the global order that underpins that coordination.

I would try to improve the political system, which is upstream of the dysfunctional politics, which is upstream of the bad policy that leads to needless harm in the US. I think you'll get better ROI because the dividends will affect so many other areas EAs care about.

And the results are in. They do save lives.

I think the ironic thing is that many EAs would actually say that morality is subjective, much as your friend claimed it to be. However, the fact that morality is subjective doesn't stop us from adopting EA principles.

And what led us to these principles over all the other ones we could adopt in a universe of subjective morality? It's that we think they are the ones that make the most sense. The drowning child exercise is a powerful example of how most people's moral intuitions logically extrapolate to principles at the core of EA. If that is the case, then we are simply asking people to be consistent with the logic of their own morality rather than telling them to accept these principles as objective morality.

I think the drowning child thought experiment in Peter Singer's Famine, Affluence, and Morality is a great and compelling entry point for people of all walks of life to understand why many EAs are driven to do what they do.

People have different definitions of EA within the EA community. However, the definitions that get more buy-in tend to be simpler. I think the lowest-common-denominator definition of EA is that it is both a set of principles and a community, centered around the belief that 1) we have a moral obligation to do good in the world and 2) we should be very thoughtful about how we do good so that we can end up doing more good.

So when it comes to "programs we'd like to see" including "a comprehensive investigation into FTX<>EA connections / problems," I take it that you disagree with that recommendation. I'd be interested to hear from those who proposed it what they hope to get out of it.

I'm not in an authoritative position to say it would be fruitful. But I had a conversation with someone who knew much more intimately how exposed non-FTX EAs were to SBF/FTX prior to the crash, and they said there are still many people in influential positions in EA who have not been held accountable despite having had enough exposure to have raised a yellow flag (about SBF/FTX governance practices and behavior that on first look would have appeared value-misaligned).

That seems to me like one concrete benefit to the community of having another investigation. I've heard from a few people that the multi-tens-of-millions-dollar penthouse was known to multiple influential EAs, and Lewis's book corroborates that. The penthouse and FTX's sponsorship deals (paying way over market to sponsor an e-sports team, or buying Storybook Brawl) appear to me like the clearest yellow/red flags that should have elicited scrutiny from non-FTX EAs.

As a community member, I'd like to know if influential non-FTX EAs

  1. saw these as yellow/red flags. If not, why not?
  2. If so, what did they do about them?
  3. If they didn't do anything about them, why was that? Did they just assume SBF knew what he was doing, were they afraid of offending him, etc.?

I've heard from one person a rationale for why the penthouse made sense. And there could be more merit to why some of these things that appear to be yellow/red flags actually aren't. Yet this discussion doesn't seem to be happening publicly, and I don't know to what extent it's happening privately, but it strikes me as a discussion that should happen and should be public to the community.

The FTX episode--not whether EAs could have caught the fraud, but whether they should have been scrutinizing FTX/SBF more--is an important reflection on how well the community/movement currently manages itself, especially the orgs and people with the most power in shaping the movement. I.e., there are important lessons to be learned from it, especially since there were known concerns about SBF as early as Alameda (another thing whispered about on the Forum that we didn't get more insight into until Lewis's book).

"It's unlikely that a similar disaster will happen soon, so it might not be particularly urgent to set up programs to prevent similar future disasters."
^ This sentiment reflects why I'm worried that some parts of EA haven't fully learned the lessons of the FTX saga (which some think apply only to those close to FTX and not the broader community, but I've yet to be convinced). When triaging, you can always push off something important that isn't urgent, but this is the slippery slope that leads to never doing it before it's too late. Governance and PR disasters are not always going to be foreseeable.
Also, memory fades with time, which can affect the ability to understand what happened.
