1266 karma · Joined Nov 2014


Agree with this and also with the point below that the EA angle is kind of too complicated to be super compelling for a broad audience. I thought this New Yorker piece's discussion (which involved EA a decent amount in a way I thought was quite fair -- https://www.newyorker.com/magazine/2023/10/02/inside-sam-bankman-frieds-family-bubble) might give a sense of magnitude (though the New Yorker audience is going to be more interested in this sort of nuance than most).

The other factors I think are: 1. to what extent there are vivid new tidbits or revelations in Lewis's book that relate to EA and 2. the drama around Caroline Ellison and other witnesses at trial and the extent to which that is connected to EA; my guess is the drama around the cooperating witnesses will seem very interesting on a human level, though I don't necessarily think that will point towards the effective altruism community specifically.

Yeah, I should have clarified that I knew you're not a native speaker and understand why that motivates your argument, but the harm of being exclusionary stems in part from the fact that not every reader will know that. (Though I think even if every reader did know you were a non-native speaker, it would still create a negative effect (via this exclusionary channel), albeit a smaller one.)

Also, I didn't take your claim to be "investigations should not only take place in cases where their results will be made public." (Which seems to be the implication of your reply above, but maybe I'm misunderstanding.) I don't think "public exposés are useful" implies that you necessarily need to conduct the work required for a public exposé in every case where you suspect wrongdoing.

Should also say, as your friend, that I recognize it sucks to be criticized, especially when it feels like a group pile-on, and I appreciate your making controversial claims even if I don't agree with them.

Linch, surprised you felt like titotal wasn't reading your comment properly, since I feel like they make a version of the basically right argument here, which is about deterrence and the benefits of public knowledge of wrongdoing outside the specific case. Any sort of investigatory/punitive process (e.g. in most legal contexts) will often have resources devoted to it that are very significant compared to the actual potential wrongdoing being interrogated. But having a system that reliably identifies wrongdoing is quite valuable (and even a patchwork system is probably also quite valuable). Plus there are a whole bunch of diffuse positive externalities to information (e.g. not requiring each actor in the system to spend the effort making a private judgment that has a decent chance of being wrong).

I think the broader problem with your argument here is that it's an example of consequentialism struggling to deal with collective action problems/the value of institutions. The idea that all acts can be cashed out into utility (i.e. "world is burning" above) struggles to engage with cases where broader institutions are necessary for an ecosystem to function. To use an example from outside this case: if one evaluates public statements on their individual utility (rather than their descriptive accuracy), it can stymie free inquiry and lead to poorer decision-making. (Not saying this can never be accounted for through a consequentialist or primarily consequentialist theory, but I think it's a persistent and difficult problem.)

I think "you didn't seem to read my comment, which frustrates me" is a better thing to say to someone than "are you a native English speaker?", since it gets at the problem more directly and isn't exclusionary to non-native speakers (which is rude, even if that's not the intention). I also think the instant case should give you pause about the way you're attempting to deal with bad-faith critics, since mentally labeling a critic as poorly comprehending or in bad faith can be a subconscious crutch that lets you miss the thrust of their argument.

EA isn't unitary, so people should individually just try cooperating with them on stuff and being like "actually you're right and AIs not being racist is important," or should try to make inroads on the actors' strike/writers' strike AI issues. Generally, saying "hey, I think you are right" is usually fairly ingratiating.

For what it's worth, a friend of mine had an idea to do Harberger taxes on AI frontier models, which I thought was cool and was a place where you might be able to find common ground with more leftist perspectives on AI.

This is really interesting. Thanks for sharing!

I think:

  1. If you have a lot of influence, articles like this are inevitable.
  2. EAs in AI should really try to make nice with the AI ethics crowd (i.e. help accomplish their goals). That's where the most criticism is coming from. From my perspective their concerns are useful angles of attack into the broader AI safety problem, and if EA policy does not meet the salient needs of present-day people it will be politically unpopular and lose influence (a challenge for the political longtermism agenda more broadly).
  3. I agree about EAs needing to cast a wider net, in really every sense of the term. We also need to be flexible to changing circumstances, particularly in something like AI that is so rapidly moving and where the technology and social consequences are likely to be far different in crucial respects to earlier predictions of them (even if the predictions are mostly true -- this is a very hard dynamic to manage).
  4. The article underscores the dangers of a movement being so deeply connected to one foundation, and I expect we'll see Open Phil becoming more politically controversial (and very possibly perceived as more Soros-esque) fairly soon.
  5. EA is also vulnerable to criticism as an elitist movement, and its interconnection with the AI industry will make it seem biased. 
  6. EA is not a unitary actor and EAs will often have opposing views on things. This makes any sort of reputation management quite challenging.
  7. The most natural precedent for EA is the Freemasons, and people hated them.

Thanks for writing this! Some quick thoughts on possibilities for CEA to consider:

  1. Moving to a Membership Model: I think Open Phil's status as the main customer of CEA (raised above) is a problem and that a move to CEA as a membership organization (with board elected by the membership) could help with this. Membership could be anyone who provides evidence of giving >5% of money to charity (maybe excluding other religious groups) who chooses to register as a member. (You could also create some sort of application process for people outside the 5% donors -- that number just seems to be a useful commitment mechanism). 
  2. Rotating Annual Presidents: One way to get broader buy-in and legitimacy would be to do what professional societies do and have the public face of the organization (the president) rotate each year (or on some regular basis) and then have an executive director who manages the organization's operations. This could also help organize how CEA's board should function (since often professional societies structure their board around the transition from past to future presidents, where the board is made up of next year's president, the current president, the past year's president, and a few other potential candidates for the next year's president). 
  3. Dissociate from FTX: It would probably be good for people who worked at FTX/FTX Foundation to leave the EV/CEA board prior to the Sam Bankman-Fried trial.

Also, a direction for CEA that would interest me would be to search for, evaluate, and highlight historical or current effective altruist projects in the world (i.e. things that are plausibly altruistic, come from outside the "effective altruist" community, and are likely to fall within 1/10th of the GiveWell bar).

Will flag that I think EA should move towards a much more decentralized community/community-building apparatus (e.g. split up EV into separate nonprofits that may contract with the same entity for certain back-office functions). I also think EA community building should be cause neutral/individual centric and not community/cause-centric (i.e. support people who want to be effectively altruistic in their attempt to live a meaningful life rather than drive energy towards effective causes). I think the attempt to sort of be utilitarian all the way down and use the community-building arm to drive towards the most effective goals creates harmful epistemic and political dynamics -- a more neutral and member-empowering approach would be better. 

Thanks, Howie, for posting this. Glad to see an experienced and trustworthy hand at the wheel during a difficult time.

A bleg I have would be for some EA with a bit of time on their hands to take a look at the publicly available UK charitable inquiry incident reports to see what % result in regulatory action (and/or findings of wrongdoing), as well as other useful details as precedent. I think this would be helpful in giving a sense of what to expect for EV UK going forward and what steps should be taken in advance. Based on my very quick and rough perusal of the first five reports listed on the site, it looks like all five inquiries identified misconduct and resulted in regulatory action.

It looks like the Commission does have an ability not to publish finished reports, so it's possible those are an unrepresentative sample of inquiries, but (on a very very preliminary glance) the outlook does not seem especially promising. 

I wrote this up a couple days ago and haven't gotten a chance to post it -- sorry if this is repetitive with other comments made since then.

I admit my reasoning here may be unduly sketchy: I'm trying to act on the view that EA Forum commenting should be mainly recreational. But I was fairly surprised to see my opinion on this FAQ differed sharply from the other comments I read. On the one hand, signing off on a grant to a Holocaust denialist doesn't mean you're a bad person or that your foundation isn't doing good work. On the other, it's a serious lapse in judgment that deserves some sort of root-cause analysis and attempt to fix the problem, which I don't see in the current FAQ -- I find it to be an (understandably) one-sided PR document. That's fine as far as it goes, but for me personally, a good-faith attempt to prove this was an isolated incident needs to go deeper and has to at the very least involve publicly posting the November correspondence rejecting the grant prior to the December media inquiry.

I admittedly don't understand Swedish politics or culture and may be misunderstanding the nature of Nya Dagbladet's political positioning or of the various documents disclosed. But as someone who's run nonprofits for a while, every time I've received a letter like what Future of Life Institute provided Nya Dagbladet, I've received a donation. (Query if FLI has ever issued a letter like this without making a donation). I don't know how much FLI has under management or how it makes grant decisions, but $100K is 1-2 years of someone's salary, so foundations I've worked with have always been very careful not to send clear messages of grantmaking like that unless a final decision had been reached. 

Picture this in an Open Philanthropy context. In my experience, one way Open Phil has provided grants is by making a recommendation to the Silicon Valley Community Foundation (SVCF), which then handles the logistics of making the grant (including due diligence). Imagine that Open Phil sent a recommendation to SVCF to make a $100,000 grant to InfoWars (a far-right purveyor of mistruth) and then decided against providing the money after diligence. That would be alarming! On the one hand, good that the diligence process caught it; on the other, how the hell did they decide that an InfoWars grant would be a good idea?

By my read, the FLI/Nya Dagbladet case seems similar. The FAQ claims that Tegmark was not aware of the organization's far-right sympathies, which seems either (a) a sign of a poor process at FLI or (b) untrue (given that Tegmark's brother had written for Nya Dagbladet on multiple occasions and Tegmark had apparently appeared on a podcast featured on their website and hosted by the same brother). Either way, why is FLI making $100,000 grants (or telling grantees it's making grants) to an outlet tied to Holocaust deniers?

My nonprofit 1Day Sooner has received funding from Jaan Tallinn (a major funder of FLI), and we appreciate that funding and his overall generosity toward good causes. And I do endorse the principle that charitable giving is praiseworthy and should be incentivized (i.e. a foundation's decision-making doesn't have to be perfect for it to be valuable, and the default framing of attention toward charitable giving should be positive rather than negative). I also respect Max Tegmark and find him to be a brilliant scientist. But I worry this could be a place where the discussion on the EA Forum involves tribal affinity politics around an effective altruist identity and is blinded, via high trust, to a more natural explanation that requires a deeper fix.

Thanks for this comment! My argument about community building's particular role is that I think certain "community building" efforts specifically caused the existence of FTX. The founder was urged to work in finance rather than on animal welfare, and then worked at CEA prior to launching Alameda. Alameda/FTX were seen as strategies to expand the amount of funding available to effective altruist causes and were founded and run by a leadership team that identified as effective altruist (including the former CEO of the Centre for Effective Altruism). The initial funding was from major EA donors. To me, the weight of public evidence really points to Alameda as having been incubated by the Centre for Effective Altruism in a fairly clear way.

It's possible that in the absence of Alameda/FTX's existence its niche would have been filled by another entity that would have done similarly bad things, but it seems hard for me to imagine that without institutional EA's backing FTX would have existed.
