All of David Mathers's Comments + Replies

I should probably stop posting on this or reading the comments, for the sake of my mental health (I mean that literally, this is a major anxiety disorder trigger for me.) But I guess I sort of have to respond to a direct request for sources. 

 

Scott's official position on this is agnosticism, rather than public endorsement*. (See here for official agnosticism: https://www.astralcodexten.com/p/book-review-the-cult-of-smart)

However, for years at SSC he put the dreaded neo-reactionaries on his blogroll. And they are definitely race/IQ guys. Meanwhile... (read more)

I think this is too pessimistic: why did a member of Biden's cabinet ask for Christiano for one of the top positions at the US government's AI safety org, if the government will reliably prioritize the sort of factors you cite here to the exclusion of safety? https://www.nist.gov/news-events/news/2024/04/us-commerce-secretary-gina-raimondo-announces-expansion-us-ai-safety

I also think that whether or not the government regulates private AI has little to do with whether it militarizes AI. It's not like there is one dial with "amount of government" and it just gets ... (read more)

I trust EV more than the Charity Commission about many things, but whether EV behaved badly over SBF is definitely not one of them. One judgment here is incredibly liable to distortion through self-interest and ego preservation, and it's not the Charity Commission's. (That's not a prediction that the Charity Commission will in fact harshly criticize EV. I wouldn't be surprised either way on that.)

'also on not "some moral view we've never thought of".'

Oh, actually, that's right. That does change things a bit. 

People don't reject this stuff, I suspect, because there is, frankly, a decently large minority of the community who thinks "black people have lower IQs for genetic reasons" is suppressed forbidden knowledge. Scott Alexander has done a lot, entirely deliberately in my view, to spread that view over the years (although this is probably not the only reason), and Scott is generally highly respected within EA.

Now, unlike the people who spend all their time doing race/IQ stuff, I don't think more than a tiny, insignificant fraction of the people in the com... (read more)

nathan98000 · 3d
Any links to where Scott Alexander deliberately argues that black people have lower IQs for genetic reasons? I've been reading his blog for a decade and I don't recall any posts on this.

Materialism is an important trait in individuals, and plausibly could be an important difference between groups. (Certainly the history of the Jewish people attests to the fact that it has been considered important in groups!) But the horrific recent history of false hypotheses about innate Jewish behavior helps us see how scientifically empty and morally bankrupt such ideas really are.

Coincidentally, I recently came across an academic paper that proposed a partial explanation of the current East Asian fertility crisis (e.g., South Korea's fertility dec... (read more)

I find it easy to believe there was a heated argument but no threats, because it is easy for things to get exaggerated, and the line between telling someone you no longer trust them because of a disagreement and threatening them is unclear when you are a powerful person who might employ them. But I find Will's claim that the conversation wasn't even about whether Sam was trustworthy, or anything related to that, really quite hard to believe. It would be weird for someone to be mistaken or exaggerate about that, and I feel like a lie is unlikely, simply because I don't see what anyone would gain from lying to TIME about this.

Nathan's comment here is one case where I really want to know what the people giving agree/disagree votes intended to express. Agreement/disagreement that the behaviour "doesn't sound like Will"? Agreement/disagreement that Naia would be unlikely to be lying? General approval/disapproval of the comment?

Jonas V · 6d

I disagree-voted because I have the impression that there's a camp of people who left Alameda that has been misleading in their public anti-SBF statements, and has a separate track record of being untrustworthy.

So, given that background, I think it's unlikely that Will threatened someone in a strong sense of the word, and possible that Bouscal or MacAulay might be misleading, though I haven't tried to get to the bottom of it.

Yes, but not at great length. 

From my memory, which definitely could be faulty since I only listened once: 

He admits people did tell him Sam was untrustworthy. He says that his impression was something like "there was a big fight and I can't really tell what happened or who is right" (not a direct quote!). Stresses that many of the people who warned him about Sam continued to have large amounts of money on FTX later, so they didn't expect the scale of fraud we actually saw either. (They all seem to have told TIME that originally also.) Says Sam w... (read more)

Will's expressed public view on that sort of double-or-nothing gamble is hard to actually figure out, but it is clearly not as robustly anti as common sense would require, though it is also clearly a lot LESS positive than SBF's view that you should obviously take it: https://conversationswithtyler.com/episodes/william-macaskill/

(I haven't quoted from the interview, because there is no one clear quote expressing Will's position; text search for "double" and you'll find the relevant stuff.)
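(To make the tension concrete, here's a toy calculation of my own; the p = 0.51 win probability and the bet-your-whole-wealth setup are illustrative assumptions, not anything from the interview:

\[
\mathbb{E}[W_n] = (2p)^n W_0 = 1.02^n\, W_0 \to \infty,
\qquad
\Pr[\text{not ruined after } n \text{ rounds}] = p^n = 0.51^n \to 0 .
\]

Naive expected-value maximization says to keep taking the double-or-nothing bet forever, since expected wealth grows without bound, even though doing so almost surely ends in ruin. That's the gap between SBF's stated view and common sense.)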

Actually, I have a lot of sympathy with what you are saying here. I am ultimately somewhat inclined to endorse "in principle, the ends justify the means, just not in practice" over at least a fairly wide range of cases. I (probably) think in theory you should usually kill one innocent person to save five, even though in practice anything that looks like doing that is almost certainly a bad idea, outside artificial philosophical thought experiments and maybe some weird but not too implausible scenarios involving war or natural disaster. But at the same time... (read more)

I don't necessarily disagree with most of that, but I think it is ultimately still plausible that people who endorse a theory that obviously says in principle bad ends can justify the means are somewhat (plausibly not very much though) more likely to actually do bad things with an ends-justifies-the-means vibe. Note that this is an empirical claim about what sort of behaviour is actually more likely to co-occur with endorsing utilitarianism or consequentialism in actual human beings. So it's not refuted by "the correct understanding of consequentialism mos... (read more)

I don't necessarily disagree with any of that, but the fact that you asserted it implicates that you think it has some kind of practical relevance, which is where I might want to disagree.

I think it's fundamentally dishonest (a kind of naive instrumentalism in its own right) to try to discourage people from having true beliefs because of faint fears that these beliefs might correlate with bad behavior.

I also think it's bad for people to engage in "moral profiling" (cf. racial profiling), spreading suspicion about utilitarians in general based on very speculative... (read more)

The 3% figure for utilitarianism strikes me as a bit misleading on its own, given what else Will said. (I'm not accusing Will of intent to mislead here: he said something very precise that I, as a philosopher, entirely followed; it was just a bit complicated for laypeople.) Firstly, he said a lot of the probability space was taken up by error theory, the view that there is no true morality. So to get what Will himself endorses, whether or not there is a true morality, you have to basically subtract an unknown but large amount for his credence in error th... (read more)

My memory of the podcast (could be wrong, only listened once!) is that Will said that, conditional on error theory being false, his credence in consequentialism is about 0.5.

I think he meant conditional on error theory being false, and also on not "some moral view we've never thought of".

Here's a quote of what Will said starting at 01:31:21: "But yeah, I tried to work through my credences once and I think I ended up in like 3% in utilitarianism or something like. I mean large fractions go to, you know, people often very surprised by this, but large fact... (read more)
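(A purely illustrative calculation, with credences I've made up rather than taken from Will, to show how a small unconditional number can coexist with a large conditional one: suppose

\[
P(\text{error theory}) = 0.5, \quad P(\text{unthought-of view}) = 0.4, \quad P(\text{utilitarianism}) = 0.03 .
\]

Then, conditional on there being a true morality we've already conceived of,

\[
P(\text{utilitarianism} \mid \text{neither}) = \frac{0.03}{1 - 0.5 - 0.4} = 0.3 ,
\]

i.e. 30%, ten times the headline figure.)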

Actually, Wikipedia is characterizing the US bills a bit misleadingly: at least the one I looked at is not a full legal ban on trans women using women's bathrooms, but seemed to only cover schools specifically. 

EclecticAltruism · 8d
A not insignificant portion of funding for transphobia in the UK and EU comes from evangelists in the US. 
huw · 8d

The answer to 2 is almost certainly yes: (binary) trans people in the UK are legally allowed to use a toilet that matches their gender identity, as far as I can tell. The first hit on Google for 'are trans women allowed to use women's toilets in the UK' is an official Metropolitan (i.e. London) Police doc which states:

'If someone (whether binary or non-binary) presents as, say, female then they use the female toilet and vice versa. There is no law or policy prohibiting anyone from using whichever toilet matches their gender identity, and a trans* individual... (read more)

But I do take your point that Will was not the only person involved, or necessarily the most important internally. 

I think part of the issue is that the Time article sort of contains a very serious allegation about Will specifically, which he hasn't (yet*) publicly given evidence against (or clearly denied): namely, Naia Bouscal's allegation that he threatened Tara Mac Aulay. I say "sort of" because saying someone "basically" X-ed, where X-ing is bad, arguably carries the implication that it maybe wasn't really X exactly, but something a bit like it. Which makes it a bit hard to tell exactly what Naia was actually accusing Will of. Alas, as far as I know, she's not said... (read more)

I must say that, given that I know from prior discussion on here that you are not Will's biggest fan, your attempt to be fair here is quite admirable. There should maybe be an "integrity" react button? 

I think that even if you buy that, Will's behavior is still alarming, just in a different way. Why exactly should we, as a community, think of ourselves as being fit to steer public opinion? Weren't we just meant to be experts on charity, rather than everything under the sun? (Not to mention that Musk is not the person I would choose to collaborate with on that, but that's for another day.) Will complains about Sam's hubris, but what could be more hubristic than that?

I remember feeling nervous when I first started working in EA that (otherwi... (read more)

Yeah, I think just buying Twitter to steer the narrative seems quite bad. But like, I have spent a large fraction of my career trying to think of mechanism design for discussion and social media platforms and so my relation to Twitter is I think a pretty healthy "I think I see lots of ways in which you could make this platform much more sanity-promoting" in a way that isn't about just spreading my memes and ideologies. 

Will has somewhat less of that background, and I think would have less justified confidence in his ability to actually make the platform better from a general sanity perspective, though still seems pretty plausible to me he saw or sees genuine ways to make the platform better for humanity.

Unendorsed as I trust Jason on this far more than my own judgment, since he's an actual lawyer.

I strongly suspect Will is trying to avoid being sued.

Even from a purely selfish point of view, explicitly apologising and saying "sorry, I made a mistake in not trusting the people who told me Sam behaved badly at Alameda" would make sense, since it would actually help restore his reputation a bit.

EDIT: Actually the bit about Beckstead here is wrong, I'd misremembered the exact nature of the suit. See Jason's comment below. But Nick Beckstead has already been sued (unsuccessfully) on the grounds that he in some sense endorsed SBF whilst he "should have known" that he was... (read more)

[This comment is no longer endorsed by its author]

Can you link to a discussion of the suit in question? I don't think that would be an accurate characterization of the suit I am aware of (complaint analyzed here). That suit is roughly about aiding SBF in making specific transfers in breach of his fiduciary duty to Alameda. I wouldn't say it is about "endors[ing]" in the ordinary meaning of the word, or that it relied on allegations that Nick should have known about the criminal frauds going on at FTX.

That being said, I do agree more generally that "people who had a role at FTXFF" tend to be at the t... (read more)

If it is the case that MacAskill cannot be forthcoming for valid reasons (opening himself up to legal vulnerability), it would still make sense for us as a community to err on the side of caution and have other leaders, as Chris argues for.

I meant something in between "is" and "has a non-zero chance of being", like assigning significant probability to it (obviously I didn't have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy. 

I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures. Also, even if people only did receive stuff that was more centrally rumour, I feel like we still want to know if any one in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside... (read more)

I feel like "people who worked with Sam told people about specific instances of quite serious dishonesty they had personally observed" is being classed as "rumour" here, which whilst not strictly inaccurate, is misleading, because it is a very atypical case relative to the image the word "rumour" conjures.

I agree with this.

[...] I feel like we still want to know if any one in leadership argued "oh, yeah, Sam might well be dodgy, but the expected value of publicly backing him is high because of the upside". That's a signal someone is a bad leader in my view

... (read more)

(Well, not quite: Wikipedia edits out "or our culture" as an alternative to "form of government".)

Ebenezer Dukakis · 17d
Thank you. Is your thought that "revolution in our culture or system of government" is supposed to be a call for some kind of fascist revolution?

My take is, like a lot of right-leaning people, Hanania sees progressive influence as deep and pervasive in almost all American institutions. From this perspective, a priority on fighting crime even when it means heavily disparate impact looks like a revolutionary change.

Hanania has been pretty explicit about his belief that liberal democracy is generally the best form of government -- see this post for example. If he was crypto-fash, I think he would just not publish posts like that.

BTW, I don't agree with Hanania on everything... for example, the "some humans are in a very deep sense better than other humans" line from the post I just linked sketches me out some -- it seems to conflate moral value with ability.

I find Hanania interesting reading, but the idea that EA should distance itself from him on the margin seems like something a reasonable person could believe. I think it comes down to your position in the larger debate over whether EA should prioritize optics vs intellectual vibrancy.

Here is another recent post (titled "Shut up About Race and IQ") that I struggle to imagine a crypto-Nazi writing. E.g. these quotes:

Please people, do not treat Richard Hanania as some sort of worthy figure who is a friend of EA. He was a Nazi, and whilst he claims he moderated his views, he is still very racist as far as I can tell.

Hanania called for trying to get rid of all non-white immigrants in the US and the sterilization of everyone with an IQ under 90, indulged in antisemitic attacks on the allegedly Jewish elite, and even post his reform was writing about the need for the state to harass and imprison Black people specifically ('a revolution in our culture or form of governmen... (read more)

yanni kyriacos · 14d
This is such a common-sense take that it worries me it needs writing. I assume this is happening over on Twitter (where I don't have an account)? The average non-EA would consider this take extremely obvious, which is partly why I think we should be concerned about the composition of the movement in general.

I'd just like to clarify that my blogroll should not be taken as a list of "worthy figure[s] who [are] friend[s] of EA"!  They're just blogs I find often interesting and worth reading. No broader moral endorsement implied!

fwiw, I found TracingWoodgrains' thoughts here fairly compelling.

ETA, specifically:

I have little patience with polite society, its inconsistencies in which views are and are not acceptable, and its games of tug-of-war with the Overton Window. My own standards are strict and idiosyncratic. If I held everyone to them, I'd live in a lon

... (read more)

Your comment seems a bit light on citations, and didn't match my impression of Hanania after spending 10s of hours reading his stuff. I've certainly never seen him advocate for an authoritarian government as a means of enforcing a "natural" racial hierarchy. This claim stood out to me:

Hanania called for trying to get rid of all non-white immigrants in the US

Hanania wrote this post in 2023. It's the first hit on his substack search for "immigration". This apparent lack of fact-checking makes me doubt the veracity of your other claims.

It seems like ... (read more)

I don't think it makes any sense to punish people for past political or moral views they have sincerely recanted. There is some sense in which it shows bad judgement, but ideology is a different domain from most. I am honestly quite invested in something like 'moral progress'. It's a bit of a naive position to have to defend philosophically, but I think most altruists are too. At least if they are being honest with themselves. Lots of people are empirically quite racist. Very few people grew up with what I would consider to be great values. If someone sincere... (read more)

cata · 18d
I have been extremely unimpressed with Richard Hanania and I don't understand why people find his writing interesting. But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.

Whether someone speaks at Manifest (or is on a blogroll, or whatever) should be about whether they are going to give an interesting talk to Manifest, not about their general moral character. Especially not the moral character of their beliefs, rather than their actions. And really especially not the moral character of things they used to believe.

I'd like to give some context for why I disagree.

Yes, Richard Hanania is pretty racist. His views have historically been quite repugnant, and he's admitted that "I truly sucked back then". However, I think EA causes are more important than political differences. It's valuable when Hanania exposes the moral atrocity of factory farming and defends EA to his right-wing audience. If we're being scope-sensitive, I think we have a lot more in common with Hanania on the most important questions than we do on political issues.

I also think Hanania has excellent tak... (read more)

I think it's pretty unreasonable to call him a Nazi--he'd hate Nazis, because he loves Jews and generally dislikes dumb conservatives.

I agree that he seems pretty racist.

I have very mixed views on Richard Hanania.

On one hand, some of his past views were pretty terrible (even though I believe that you've exaggerated the extent of these views).

On the other hand, he is also one of the best critics of conservatives. Take, for example, this article where he tells conservatives to stop being idiots who believe random conspiracy theories, and another where he tells them to stop scamming everyone. These are amazing, brilliant articles with great chutzpah. As someone quite far to the right, he's able to make these points far more cr... (read more)

Given his past behavior, I think it's more likely than not that you're right about him. Even someone more skeptical should acknowledge that the views he expressed in the past and the views he now expresses likely stem from the same malevolent attitudes.

But about far-left politics being 'not racist', I think it's fair to say that far-left politics discriminates in favor of or against individuals on the basis of race. It's usually not the kind of malevolent racial discrimination of the far-right - which absolutely needs to be condemned and eliminated by society... (read more)

When someone makes the accusation that transhumanism or effective altruism or longtermism or worries about low birth rates is a form of thinly veiled covert racism, I generally think they don’t really understand the topic and are tilting at windmills.

But then I see people who are indeed super racist talking about these topics and I can’t really say the critics are fully wrong. Particularly if communities like the EA Forum or the broader online EA community don’t vigorously repudiate the racism.

Jason · 19d
To clarify, I think when you say "sterilization of everyone under 90" you mean that he favored the "forcible sterilization of everyone with an IQ below 90" (quoting Wikipedia here)?

https://twitter.com/letsgomathias/status/1687543615692636160   (Just so people can get a sense of how very bad his views at least were, and could still be.) 

I think she provided excellent evidence that at least some of your sources are in fact accurately characterized as "Nazi". Did you actually read the article she linked? 

Ives Parr · 19d
I still defend that I am not parroting Nazi arguments or citing Nazi sources. I think that's an inaccurate thing to say, and it is quite an accusation. MQ is not a Nazi journal, and from what I read in the article not even that guy is a Nazi if we are being technical. The logic here seems to be that he is basically a Nazi, so MQ is basically a Nazi journal, so I'm basically parroting Nazi arguments and citing Nazi sources. This is like calling EA a "crypto-scam-funded organization." I especially take issue with it being said that I'm parroting Nazi arguments. Do you think that is a fair assessment after reading my article?

I think the purpose of saying such a thing is to throw mud over the whole article because some citations are to a journal connected to a racist. But this is the worst way to argue: that the person who ran the journal that published a cited article says bad things and associates with bad people is very far from the central point of the argument. Critiques should strike at the heart of the argument instead of introducing moral disgust about some non-central aspect.

If you had a good critique of the empirical or moral claims, you should forward that. The moral arguments are wholly EA -- we should help the world through charitable actions to improve the world's welfare (nothing wrong here!). So, then, this is just a claim that some of the citations are questionable in terms of their empirical quality. Fine, throw out all the MQ citations. I could rewrite my article without them. I am considering doing this.

I still maintain that (1) national measures of cognitive ability are associated with good outcomes, and (2) we can use genetic enhancement to boost them. My overall argument still holds IMO, and so this feels like nitpicking used to distort people's intuitions about my article by introducing a lot of moral disgust and then trying to get me banned. This seems like the opposite of what EAs should do.

This is a meta-level point, but I'd be very, very wary of giving any help to Hanania if he attempts (even sincerely) to position himself publicly as a friend of EA. He was outed a while ago as having moved in genuinely and unambiguously white supremacist political circles for years. And while I accept that repentance is possible, and he claims to have changed (and probably has become less bad), I do not trust someone at all who had to have this be exposed rather than publicly owning up and denouncing his (allegedly) past views of his own accord, especially... (read more)

'Naive consequentialist plans also seem to have increased since FTX, mostly as a result of shorter AI timelines and much more involvement of EA in the policy space.'

This gives me the same feeling as Rebecca's original post: that you have specific information about very bad stuff that you are (for good or bad reasons) not sharing. 

Habryka · 20d
I don't particularly feel like my knowledge here is confidential; it would just take a bunch of inferential distance to cross. I do have some confidential information, but it doesn't feel that load-bearing to me. This dialogue has a bit of a flavor of the kind of thing I am worried about: https://www.lesswrong.com/posts/vFqa8DZCuhyrbSnyx/integrity-in-ai-governance-and-advocacy?revision=1.0.0

Wasn't the OpenAI thing basically the opposite of the mistake with FTX though? With FTX, people ignored what appears to have been a fair amount of evidence that a powerful, allegedly ethical businessperson was in fact shady. At OpenAI, people seem to have got (what they perceived as, but we've no strong evidence they were wrong) evidence that a powerful, allegedly ethically motivated businessperson was in fact shady, so they learnt the lessons of FTX and tried to do something about it (and failed).

George Noether · 20d
To be more clear, I am bringing the OpenAI drama up as it is instructive for highlighting what is and is not going wrong more generally. I don't think the specifics of what went wrong with FTX point at the central thing that's of concern.

I think the key factor behind EA's past and future failures comes down to poor-quality decision-making among those with the most influence, rather than the degree to which everybody is sensitive to someone's shadiness. (I'm assuming we agree FTX and the OpenAI drama were both failures, and that failures can happen even among groups of competent, moral people who act according to the expectations set for them.)

I don't know what the cause of the poor decision-making is. Social norms preventing people from expressing disagreement, org structures, unclear responsibilities, conflicts of interest, lack of communication, low intellectual diversity — it could be one of these, a combination, or maybe something totally different. I think it should be figured out and resolved, though, if we are trying to change the world.

So, if there is an investigation, it should be part of a move to making sure EAs in positions of power will consistently handle difficult situations incredibly well (as opposed to just satisfying people's needs for more specific explanations of what went wrong with FTX). There are many ways in which EA can create or destroy value, and looking just at our eagerness to 'do something' in response to people being shady is a weirdly narrow metric to assess the movement on.

EDIT: would really appreciate someone saying what they disagree with
George Noether · 20d
I think that's why it's informative. If EA radically changes in response to the FTX crisis, then it could easily put itself in a worse position (leading to more negative consequences in the world). The intrinsic problem appears to be in the quality of the governance, rather than a systematic error/blind-spot.

'Am I correct in interpreting your comment as something like "Rebecca says it's costly to say more which might imply she is sitting on not yet disclosed information that might put powerful EAs in a bad light"?'

Yes, that's what I meant. Maybe not "not already disclosed" though. It might just be confirmation that the portrait painted here is indeed fair and accurate: https://time.com/6262810/sam-bankman-fried-effective-altruism-alameda-ftx/  EDIT: I don't doubt that the article is broadly literally accurate, but there's always a big gap between what... (read more)

As an aside, this isn't really action-relevant, but insofar as being involved with the legal system is a massive punishment even when the legal system itself is very likely going to eventually come to the conclusion that you've done nothing legally wrong, that seems bad? Here it also seems to be having a knock-on effect of making it harder to find out what actually happened, rather than being painful but producing useful information.

The suit against Brady also sounds like a complete waste of society's time and money to me.

Jason · 20d
The legal system doesn't know ex ante whether you've done anything wrong, though. It's really hard to set up a system that balances out all the different ways a legal system can be imbalanced.

If you don't give plaintiffs enough leeway to discover evidence for their claims, then tortfeasors will be insufficiently deterred from committing torts. If you go too far (the current U.S. system), you incentivize lawfare, harassment, and legalized extortion of some defendants. Imposing litigation costs / attorney fees on the losers often harms the little guy due to lower ability to shoulder risk & the marginal utility of money. Having parties bear their own costs / fees (generally, the U.S. system) encourages tactics that run up the bill for the other guy. And defendants are more vulnerable to that than plaintiffs as a general rule.

Maybe. Maybe people would talk but for litigation exposure. Or maybe people are using litigation exposure as a convenient excuse to cover the fact that they don't want to (and wouldn't) talk anyway. I will generally take individuals at face value given the difficulty of discerning between the two, though.

Who would be able to sue? Would it really be possible for FTX customers/investors to sue someone for not making public "I heard Sam lies a lot and once misplaced money at Alameda early on and didn't seem too concerned, and reneged on a verbal agreement to share ownership"? Just because someone worked at the Future Fund? Or even someone who worked at EV?

I'd note that Nick Beckstead was in active litigation with the Alameda bankruptcy estate until that was dismissed last month (Docket No. 93). I think it would be very reasonable for anyone who worked at FTXFF to be concerned about their personal legal exposure here. (I am not opining as to whether exposure exists, only that I would find it extremely hard to fault anyone who worked at FTXFF for believing that they were at risk. After all, Nick already got sued!)

It's harder to assess exposure for other groups of people. To your question, there may be a diffe... (read more)

I'm pretty sure that if you're aware of it, Will is. (Not sure about Harris.)

The complaints here seem to be partly about HOW EtG is promoted, rather than how MUCH. Though I am mildly skeptical that people in fact did not warn against doing harm to make money while promoting EtG, and much more skeptical that SBF would have listened if they had done this more. 

True. We should make sure any particular safeguard wasn't in place around how people advocated for it before assuming it would have helped, though. For what it's worth, my sense is that a much more culpable thing was not blowing the whistle on Sam's bad behaviour at early Alameda even after Will and other leaders (I forget exactly who, if it's even known) were informed about it. That mistake was almost certainly far less consequential for the people harmed by FTX (I don't think it would have stopped the fraud; it might have protected EA itself), but I strongly suspect it was more knowably wrong at the time than anything anyone did or said about EtG as a general idea.

I think there are two separate but somewhat intertwined chains of inquiry under discussion here:

  1. A historical inquiry: what happened in this case, what safeguards failed, what would have helped but wasn't in place?
  2. A ~first-principles re-evaluation of EtG based on an update: The catastrophic failure of the supposedly most successful instance of EtG should update us that we underestimated the risk and severity of EtG downsides. That suggests a broader re-examination of potential risks and safeguards, which may look more appropriate than they did before the up
... (read more)

Also, I don't know if Spencer Greenberg's podcast with Will is recorded yet, but if it hasn't been I think he absolutely should ask Will what he thinks the phrase about "extensive and significant mistakes" here actually refers to. EDIT: Having listened (vaguely, while working) to most of the Sam Harris interview with Will, as far as I can tell Harris entirely failed to ask anything about this, which is a huge omission. Another question Spencer could ask Will is: did you specify this topic was off-limits to Harris? 

I felt the Sam Harris interview was disappointingly soft and superficial. To be fair to MacAskill, Harris did an unusually bad job of pushing back and taking a harder line, and so MacAskill wasn't forced to get deeper into it.

And basically nothing about how to avoid a similar situation happening again? Except for a few lines about decentralisation. Quite uninspiring.

I mostly agree with this, and upvoted strongly, but I don't think the scare quotes around "criticism" is warranted. Improving ideas and projects through constructive criticism is not the same thing as speaking truth to power, but it is still good and useful, it's just a different good and useful thing. 

Also, I feel mean for pressing the point against someone who is clearly finding this stressful and is no more responsible for it than anyone else in the know, but I really want someone to properly explain what the warning signs the leadership saw were, who saw them, and what was said internally in response to them. I don't even know how much that will help with anything, to be honest, so much as I just want to know. But at least in theory, anyone who behaved really badly should be removed from positions of power. (And I do mean just that: positions where t... (read more)

Ulrik Horn · 20d
Am I correct in interpreting your comment as something like "Rebecca says it's costly to say more which might imply she is sitting on not yet disclosed information that might put powerful EAs in a bad light"? I did not really pick up on this when reading the OP but your comment got me worried that maybe there is some information that should be made public?

ICYMI: I wrote this in response to a previous "EA leaders knew stuff" story. [Although I'm not sure if I'm one of the "leaders" Becca is referring to, or if the signs I mentioned are what she's concerned about.]

'and think confusion on this issue has indirectly resulted in a lot of harm.'

Can you say a bit more about this?

I don't actually find either all THAT reassuring. The GW blogpost just says most nets are used for their intended purpose, but 30% being used otherwise is still a lot, not to mention they can be used for their intended purpose and then later to fish. The Cold Takes blog post just cites the same data about most nets being used for their intended purpose.

I don't think it's necessary, no. But I do think some early critics of EtG were motivated at least partly by a general anticapitalist case that business or at least finance careers were generically morally problematic in themselves. 

Jason · 22d
Fair, but that wouldn't be a steelmanned -- or even fairly balanced -- version of criticisms of EtG. It's the weaker part of a partial motivation held by some critics.

Wait, why do you think 2 is false for FTX? (Good comment though!) 

huw · 22d
I haven't run the numbers myself but I generally assume that FTX's account-holders were mostly moderately well-off HIC residents (based on roughly imbibed demographics of crypto), and the Future Fund's beneficiaries are by and large worse off. There were probably some number of people who invested their life savings or were otherwise poor to begin with who were harmed more significantly than the beneficiaries of their money. But on the whole it feels like it was an accidental wealth transfer, and much of that harm will be mitigated if they're made whole (but admittedly, the make-whole money just comes from crypto speculation that trades on the gullibility of yet more people). But I'm much less confident in this take; my point is much more around the real harms it caused being worth thinking about.

I had seen the second of these at some point I think, but not the first. 

Any claim that advising people to earn to give is inherently really bad needs to either defend the view that "start a business or take another high paying job" is inherently immoral advice, or explain why it becomes immoral when you add "and give the money to charity" or when it's aimed at EAs specifically. It's possible that can be done, but I think it's quite a high bar. (Which is not to say EtG advice couldn't be improved in ways that make future scandals less likely.) 

huw · 23d

You're right! It's not that ETG is inherently bad (and frankly, I haven't seen anyone make this argument), it's that specific EV-maximising interpretations of ETG cause people to pursue careers that are (1) harmful, (2) net harmful, or (3) too risky to pay off.

Personally, I think FTX was (1) and (3), and probably also (2). I'm not really sure where the bar is, but under any moderately deontological framework (1) is especially concerning, and many of the people EA might want to have a good reputation with believe (1). So that's roughly the worldview-neutral case for caring about strongly rejecting EV-maximising forms of ETG.

Jason · 23d
I don't think "advising people to earn to give is inherently really bad" is necessary to reach the conclusion that there is a case for EA responsibility here. There exist many ideas that are not inherently bad, but yet it is irresponsible to advocate for them in certain ways / without certain safeguards. An ends-justifies-the-means approach to making money was a foreseeable response to EtG advocacy. Did EA actors do enough to discourage that kind of approach when advocating for EtG?