All of Sharmake's Comments + Replies

I basically grant 2, sort of agree with 1, and drastically disagree with 3 (that timelines will be long).

Which makes me a bit weird: while I do have real confidence in the basic story that governments are likely to influence AI a lot, I have my doubts that governments will try to regulate AI seriously, especially if timelines are short enough.

Yeah, at least several comments have much more severe issues than tone or stylistic choices, like rewording ~every claim by Ben, Chloe and Alice, and then assuming that the transformed claims had the same truth value as the original claim.

I'm in a position very similar to Yarrow here: while Kat Woods has mostly convinced me that the most incendiary claims are likely false, and I'm sympathetic to the case for suing Ben and Habryka, there were dangerous red flags in the responses, so much so that I'd stop funding Nonlinear entirely, and I think it's quite bad that Kat Woods responded the way they did.

I unendorsed primarily because, apparently, the board didn't fire Altman because of safety concerns, though I'm not sure this is accurate.

It seems like the board did not fire Sam Altman for safety reasons, but for other reasons instead. Utterly confusing, and IMO it demolishes my previous theory, though a lot of other theories also lost out.

Sources below, with their archive versions included:

https://twitter.com/norabelrose/status/1726635769958478244

https://twitter.com/eshear/status/1726526112019382275

https://archive.is/dXRgA

https://archive.is/FhHUv

While I generally agree that they almost certainly have more information on what happened, which is why I'm not really certain of this theory, my main reason is that, for the most part, AI safety as a cause basically got away with incredibly weak standards of evidence for a long time, until the deep learning era began around 2019, especially with all the evolution analogies, and even now it still tends to have very low standards (though I do believe it's slowly improving). This probably influenced a lot of EA safetyists like Ilya, who almos... (read more)

2
David Mathers
5mo
Why did you unendorse?

If Ilya can say "we're pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn't be trusted to manage this safely" with proof that might work - but then why not say that?

I suspect this is because, quite frankly, the concerns they had about Sam Altman being unsafe on AI had basically no evidence behind them except speculation from the EA/LW forums, which is not nearly enough evidence in the corporate/legal world, and the EA/LW standard of evidence on AI risk being a big deal ... (read more)

8
Lukas_Gloor
5mo
Haha, I was just going to say that I'd be very surprised if the people on the OpenAI board didn't have access to a lot more info than the people on the EA forum or Lesswrong, who are speculating about the culture and leadership at AI labs from the sidelines. TBH, if you put a randomly selected EA from a movement of 1,000s of people in charge of the OpenAI board, I would be very concerned that a non-trivial fraction of them probably would make decisions the way you describe. That's something that EA opinion leaders could maybe think about and address. But I don't think most people who hold influential positions within EA (or EA-minded people who hold influential positions in the world at large, for that matter) are likely to be that superficial in their analysis of things. (In particular, I'm strongly disagreeing with the idea that it's likely that the board "basically had no evidence except speculation from the EA/LW forum". I think one thing EA is unusually good at – or maybe I should say "some/many parts of EA are unusually good at" – is hiring people for important roles who think for themselves and have generally good takes about things and acknowledge the possibility of being wrong about stuff. [Not to say that there isn't any groupthink among EAs. Also, "unusually good" isn't necessarily that high of a bar.])  I don't know for sure what they did or didn't consider, so this is just me going off of my general sense for people similar to Helen or Tasha. (I don't know much about Tasha. I've briefly met Helen but either didn't speak to her or only did small talk. I read some texts by her and probably listened to a talk or two.)

My general thoughts on this can be stated as: I'm mostly of the opinion that EA will survive this, barring something massively wrong like the board members willfully lying or massive fraud by EAs, primarily because most of the criticism is directed at the AI safety wing, and EA is more than AI safety, after all.

Nevertheless, I do think the damage could be real for the AI safety wing, which may have just hit a key limit to its power. In particular, depending on how this goes, I could foresee a reduction in AI safety's power and influence, and IMO this was completely avoidable.

4
JWS
5mo
I think a lot will depend on the board's justification. If Ilya can say "we're pushing capabilities down a path that is imminently highly dangerous, potentially existentially, and Sam couldn't be trusted to manage this safely" with proof, that might work - but then why not say that?[1] If it's just "we decided to go in a different direction", then firing him and demoting Brockman with little to no notice, and without informing their largest business partner and funder, it's bizarre that they took such a drastic step in the way they did.

I was actually writing up my AI-risk-sceptical thoughts and what EA might want to take from that, but I think I might leave that to one side for now until I can approach it with a more even mindset.

1. ^ Putting aside that I feel both you and I are sceptical that a new capability jump has emerged, or that scaling LLMs is actually a route to existential doom

Yeah, this is one of the few times where I believe the EAs on the board likely overreached, because they probably didn't give enough evidence to justify their excoriating statement that Sam Altman was dishonest, and he might be coming back to lead the company.

I'm not sure how to react to all of this, though.

Edit: My reaction is just WTF happened, and why did they completely play themselves? Though honestly, I just believe that they were inexperienced.

I'm not sure how to react to all of this, though.

Kudos for being uncertain, given the limited information available.

(Not something one can say about many of the other comments to this post, sadly.)

The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything. [edit: Re-reading this paragraph it comes off as perhaps mean as well as harsh, which I apologise for]

Yeah, I kind of have to agree with this; I think the Bay Area rationalist scene underrates government competence, though even I was surprised at how little politi... (read more)

Okay, my crux is that the simplicity/Kolmogorov/Solomonoff prior is probably not very malign, assuming we could run it, and in general I find the prior not to be malign except in specific situations.

This is basically because the malignness argument relies on the IMO dubious assumption that the halting oracle can only be used once; notably, once we use the halting/Solomonoff oracle more than once, it loses its malign properties.

More generally, if the Solomonoff Oracle is duplicatable, as modern AIs generally are, then there's a known solution to mitigate... (read more)

I want to flag that I see quite a lot of inappropriate binarization happening here, along with a lot of dismissal of valid third options.

Either they take the extinction risks seriously, or they don't.

There are other important possibilities, like believing that AI progress will help solve or reduce existential risk, thinking that accelerating AI progress is actually the best intervention, etc. More generally, once we make weaker or no assumptions about AI risk, we no longer obtain the binary you've suggested.

So this doesn't ... (read more)

4
Geoffrey Miller
6mo
Sharmake -- in most contexts, your point would be valid, and inappropriate binarization would be a bad thing. But when it comes to AI X-risk, I don't see any functional difference between dismissing AI X risks, and thinking that AI progress will help solve (other?) X risks, or thinking that increasing AI progress with somehow reduce AI X risks. Those 'third options' just seem like they fall into the overall category of 'not taking AI X risk seriously, at all'.  For example, if people think AI progress will somehow reduce AI X risk, that boils down to thinking that 'the closer we get to the precipice, the better we'll be able to avoid the precipice'.  If people think AI progress will somehow reduce other X risks, I'd want a realistic analysis of what those other alleged X risks really are, and how exactly AI progress would help. In practice, in almost every blog, post, and comment I've seen, this boils down to the vague claim that 'AI could help us solve climate change'. But very few serious climate scientists think that climate change is a literal X risk that could kill every living human. 

Yeah, I'd really like to know how they'd respond to information implying they'd have to stop doing something their incentives push them toward, like accelerating AI progress.

I don't think it's very likely, but given the incentives at play, it really matters whether the organization is actually able to at least seriously consider the possibility that the solution to AI safety might be something they aren't incentivized to do, or are actively disincentivized from doing.

4
Geoffrey Miller
6mo
Sharmake -- this is also my concern. But it's even worse than this. Even if OpenAI workers think that their financial, status, & prestige incentives would make it impossible to slow down their mad quest for AGI, it shouldn't matter, if they take the extinction risks seriously. What good would it do for the OpenAI leaders and devs to make a few extra tens of millions of dollars each, and to get professional kudos for creating AGI, if the result of their hubris is total devastation to our civilization and species? Either they take the extinction risks seriously, or they don't. If they do, then there are no merely financial or professional incentives that could rationally over-ride the extinction risks.  My conclusion is that they say they take the extinction risks seriously, but they're lying, or they're profoundly self-deceived. In any case, their revealed preferences are that they prefer a little extra money, power, and status for themselves over a lot of extra safety for everybody else -- and for themselves.

The basic reasoning is that SGD is an extremely powerful optimizer, and even the imperfections of SGD in real life that mesa-optimizers could exploit are detectable without much interpretability progress at all. Also, there is an incentive for capabilities groups to improve SGD, so we have good reason to expect these flaws to become less worrisome over time.

In particular, it is basically immune to acausal trade setups or blackmail setups by mesa-optimizers.

Some choice quotes from Beren's post below:

The key intuition is that gradient descent optimizes the enti

... (read more)

Even though I don't think EA needs to totally replicate outside norms, I do agree that there are good reasons why quite a few norms exist.

I'd say the biggest norms from outside that EA needs to adopt are less porous boundaries between work and dating, and, importantly, actually having normal-ish pay structures and work environments.

Are you saying AIs trained this way won’t be agents?

Not especially. If I had to state it simply, it's that a massive space of instrumental goals isn't useful for capabilities today, and plausibly won't be in the future either, so we have at least some reason not to worry about AI misalignment risk as much as we currently do.

In particular, it means that we shouldn't assume instrumental goals will appear by default, and that we should avoid over-relying on non-empirical approaches like intuition or imagination. We have to take things on a case-by-case basis, rather than using broad jud... (read more)

I agree, but that implies pretty different things than what is currently being done, and still implies that the danger from AI is overestimated, which bleeds into other things.

Basically, kind of. The basic issue is that instrumental convergence, and especially effectively unbounded instrumental convergence, is a central assumption behind why AI is supposedly uniquely dangerous compared to other technologies like biotechnology. If that assumption is false, or at least too weak to make the case for doom (unlike what Superintelligence told you), it matters a lot, because it ruins a lot of our inferences about why AI would likely doom us, like deception or un... (read more)

9
Greg_Colbourn
10mo
Even setting aside instrumental convergence (to be clear, I don't think we can safely do this), there is still misuse risk and multi-agent coordination that needs solving to avoid doom (or at least global catastrophe).
4
Holly_Elmore
10mo
I guess my real question is "how can you feel safe accepting the idea that ML or RL agents won't show instrumental convergence?" Are you saying AIs trained this way won't be agents? Because I don't understand how we could call something AGI that doesn't figure out its own solutions to reach its goals, and I don't see how it can do that without stumbling on things that are generally good for achieving goals. And regardless of whatever else you're saying, how can you feel safe that the next training regime won't lead to instrumental convergence?

I mean, regardless of how much better their papers are in the meantime, does it seem likely to you that those labs will solve alignment in time if they are racing to build bigger and bigger models?

Basically, yes. This isn't to say that there's no chance we're doomed, but contrary to popular EA/rationalist belief, the situation you describe actually has a very good chance of working, like >50%, for a few short reasons:

  1. A vast space for instrumental convergence/instrumental goals isn't incentivized in current AI, and in particular, the essentially unbounded
... (read more)
4
Holly_Elmore
10mo
Does this really make you feel safe? This reads to me as a possible reason for optimism, but hardly reassures me that the worst won’t happen or that this author isn’t just failing to imagine what could lead to strong instrumental convergence (including different training regimes becoming popular).

I want to be clear for the record here that this is enormously wrong, and Greg Colbourn's advice should not be heeded unless someone else checks the facts/epistemics of his announcements, due to past issues with his calls for alarm.

3
Greg_Colbourn
10mo
Or at least link to these "past issues" you refer to.
4
Greg_Colbourn
10mo
I've detailed my reasoning in my posts. They are open for people to comment on them and address things at the object level. Please do so rather than cast aspersions.

I definitely agree with this, and I'm not very happy with the way Omega focuses solely on criticism, at the very least without any balanced assessment.

And given the nature of the problem, some poor initial results should be expected, by default.

I actually would find this at least somewhat concerning, because selection bias/selection effects are my biggest worry with smart people working in an area. If a study area is selected based on non-truthseeking motivations, or if people are pressured to go along with a view for non-truthseeking reasons, then it's very easy to land in nonsense, where the consensus is based totally on selection effects, making it useless to us.

There's a link to the comment by lukeprog below on the worst case scenario for smart people being dominated by selection ef... (read more)

This. I generally also agree with your 3 observations, and the reason I was focusing on truth-seeking is that my epistemic environment tends to reward worrying AI claims more than it probably should, due to negativity bias as well as AI Twitter hype.

What kind of a breakthrough are you envisaging? How do we get from here to 100% watertight alignment of an arbitrarily capable AGI?

Scalable alignment is the main route to aligning a smarter intelligence.

Now, Pretraining from Human Feedback showed that, at least for one of the subproblems of alignment, outer alignment, we managed to make the AI more aligned as it gets more data.

If this generalizes, it's huge news, as it implies we can at least align an AI's goals with human goals as we get more data. This matters because it means that scalable alignment isn... (read more)

5
Greg_Colbourn
1y
Generalizing is one thing, but how can scalable alignment ever be watertight? Have you seen all the GPT-4 jailbreaks!? How can every single one be patched using this paradigm? There needs to be an ever decreasing number of possible failure modes, as power level increases, to the limit of 0 failure modes for a superintelligent AI. I don't see how scalable alignment can possibly work that well. Open AI says in their GPT-4 release announcement that "GPT-4 responds to sensitive requests (e.g., medical advice and self-harm) in accordance with our policies 29% more often." A 29% reduction of harm. This is the opposite of reassuring when thinking about x-risk. (And all this is not even addressing inner alignment!)

No, at the higher end of probability. Things still need to be done. It does mean we should stop freaking out every time a new AI capability is released, and importantly it means we probably don't need to resort to extreme actions, at least not right away.

The reason I'm so confident right now is that I assess a significant probability that the AI developments starting with GPT-4 are a hype cycle; in particular, I am probably >50% confident that most of the flashiest AI results will prove to be overhyped.

In particular, I am skeptical of the general AI hype right now, partly because a lot of capabilities evaluations essentially test models on paper tests rather than real-world tasks, and real-world tasks are much less Goodhartable than paper tests.

Now I'd agree with you conditioning on the end game being 2... (read more)

4
Greg_Colbourn
1y
What kind of a breakthrough are you envisaging? How do we get from here to 100% watertight alignment of an arbitrarily capable AGI?

Climate change is very different in that the totality of all emissions reductions / clean tech development can all add up to solving the problem. AI Alignment is much more all or nothing. For the analogy to hold it would be like emissions rising on a Moore's Law (or faster) trajectory, and the threshold for runaway climate change reducing each year (cf algorithm improvements / hardware overhang), to a point where even a single start up company's emissions (Open AI; X.AI) could cause the end of the world.

Re ARC Evals, on the flip side, they aren't factoring in humans doing things that make things worse - chatGPT plugins, AutoGPT, BabyAGI, ChaosGPT etc all showing that this is highly likely to happen!

We may never get a Fire Alarm of sufficient intensity to jolt everyone into high gear. But I think GPT-4 is it for me and many others. I think this is a Risk Aware Moment (Ram).

I think that we are pretty close to the end game already (maybe 2 years), and that there's very little chance for alignment to be solved / x-risk reduced to acceptable levels in time.

I believe with 95-99.9% probability that this is purely hype, and that we will not in fact see, within 2 years, AI that radically transforms the world or that can essentially do every task needed to automate physical infrastructure.

Given this, I'd probably disagree with this:

Have you considered that the best strategy now is a global moratorium on AGI (hard as that may be)? I

... (read more)

You’re entitled to disagree with short-timelines people (and I do too) but I don’t like the use of the word “hype” here (and “purely hype” is even worse); it seems inaccurate, and kinda an accusation of bad faith. “Hype” typically means Person X is promoting a product, that they benefit from the success of that product, and that they are probably exaggerating the impressiveness of that product in bad faith (or at least, with a self-serving bias). None of those applies to Greg here, AFAICT. Instead, you can just say “he’s wrong” etc.

Also, reversing 95-99.9%, are you ok with a 0.1-5% x-risk?

5
Greg_Colbourn
1y
The "Sparks" paper; chatGPT plugins, AutoGPTs and other scaffolding to make LLM's more agent-like. Given these, I think there's way too much risk for comfort of GPT-5 being able to make GPT-6 (with a little human direction that would be freely given), leading to a foom. Re physical infrastructure, to see how this isn't a barrier, consider that a superintelligence could easily manipulate humans into doing things as a first (easy) step. And such an architecture, especially given the current progress on AI Alignment, would be default unaligned and lethal to the planet.

The point is this distorts the apparent balance of reason - maybe this is like Marxism, or NGDP targetting, or Georgism, or general semantics, perhaps many of which we will recognise were off on the wrong track.

I do note that one of these is not like the others. Marxism is probably way more wrong than any of the other beliefs, and I feel like including the others rather weakens the case here.

2
NunoSempere
1y
Note that you can also think of many other topics people feel strongly about and how the balance of reason looks like e.g., feminist theory, monarchism & divine right of kings, anarcho-capitalist theory, Freudian psychology, Maxwell's theory of electromagnetism, intelligent design, etc.

Honestly, a lot of the problems with politics stem from both its totalizing nature (comparable to strong longtermism) and the fact that emotion hampers political discussions more often than it helps, compared to longtermism.

I'd say that if EA can't handle politics in the general forum, then a subforum for EA politics should be made. Discussions about the politics of EA, or how to do politics effectively, can go there.

Meanwhile, the general EA forum can simply ban political posts and discussions.

Yes, it's a strong measure to ban politics here. But bluntly, in ... (read more)

Finally, I do think that there is a risk of updating too quickly the other way. Back on the original post, some users have responded to this comment saying that it's 'entirely correct'[1], but I don't think it's reasonable to view Expo's piece as a 'major misrepresentation' of what happened - their reporting on the case seems to have been accurate. While 'Option 1' seems to be what happened, there is still a question of how the grant made it to stage 5 out of 7 of FLI's grant-making procedure. It's not the major scandal we feared, but its ramifications are more

... (read more)

My big concern is that permanent harm could be suffered by either EA or its championed causes. Somewhat like how transhumanism became tarred with the brush of racism and eugenics, I worry that things like AI safety or x-risk work could come to be viewed in the same light as racism. And there may be much more at stake than people realize.

The problem is that even without a hinge of history, our impacts, especially in a longtermist framework, are far, far larger than those of previous generations, and we could very well lose all that value if EA or its causes become viewed as badly as, say, eugenics or racism were.

PR/political

you want people to believe a certain thing (even if it's something you yourself sincerely believe), in this case that EA is not racist; it's about managing impressions and reputations (e.g. EA's reputation as not racist)

Your initial comment (and also the Bostrom email statement) both struck me as "performative" in how they demonstrated really harsh and absolute condemnation ("absolutely horrifying", "[no] place in this community", "recklessly flawed and reprehensible" – granted that you said "if true", but the tone and other comments seemed

... (read more)
9
Lumpyproletariat
1y
I think the extent to which nuanced truth does not matter to "most of the world" is overstated.  I additionally think that EA should not be optimizing for deceiving people who belong to the class "most of the world". Both because it wouldn't be useful if it worked (realistically most of the world has very little they are offering) and because it wouldn't work. I additionally think that trying to play nitwit political games at or around each hecking other would kill EA as a community and a movement dead, dead, dead.

Psychologists know IQ is a somewhat mysterious measure (no, scoring lower on an IQ test does not necessarily mean a person is "more stupid"). It is affected by things like income shifts across generations and social position. For Bostrom to even have that opinion as an educated 23-year-old was bad, but to not unequivocally condemn it today - despite the harm it can clearly cause - seems even worse.

I disagree, because I think the evidence from psychology is that IQ is a real measure of intelligence, and while a lot of old tests had high cultural biases, ... (read more)

IMO, ecoterrorism's deaths were primarily from the Unabomber, which was at least 3 deaths and 23 injuries. I may retract my first comment if I can't find more evidence than this.

[This comment is no longer endorsed by its author]
7
Habryka
1y
The unabomber does feel kind of weird to blame on environmentalism. Or like, I would give environmentalism a lot less blame for the unabomber than I would give us for FTX.

Alas, I do think this defense no longer works, given FTX, which seems substantially worse than all the ecoterrorism I have heard about.

I disagree with this, because I believe FTX's harm was way less bad than most ecoterrorism, primarily because of the disutility involved. FTX hasn't actually injured or killed people, unlike a lot of ecoterrorism. It stole billions, which isn't good, but right now no violence is involved. I don't think FTX is good, but so far no violence has been attributed to, or even much advocated by, EAs.

[This comment is no longer endorsed by its author]
3
Habryka
1y
Yeah, doesn't seem like a totally crazy position to take, but I don't really buy it. I bet a lot of people would take a probability of having violence inflicted on them in exchange for $8 billion dollars, and I don't think this kind of categorical comparison of different kinds of harm checks out. It's hard to really imagine the scale of $8 billion dollars, but I am confident that Sam's action have killed, indirectly via a long chain of actions, but nevertheless directly responsibly, at least 20-30 people, which I think is probably more than any ecoterrorism that has been committed (though I am not that confident about the history of ecoterrorism, so maybe there was actually something that got to that order of magnitude?)

Only a small red flag, IMO, because it's rather easy to convince people of alluring falsehoods, and not so easy to convince people of uncomfortable truths.

I don't think there's been a huge scandal involving Will? Sure, there are questions we'd like to see him openly address about what he could have done differently re FTX - and I personally am concerned about his aforementioned influence because I don't want anyone to have that much - but very few if any people here seem to believe he's done anything in seriously bad faith.

I was imagining a counterfactual world where William Macaskill did something hugely wrong.

And yeah, come to think of it, selection may be quite a bit stronger than I thought.

Putting Habryka's claim another way: if Eliezer right now were involved in a huge scandal like, say, SBF or Will Macaskill was, then I think modern LW would mostly handle it pretty well. Not perfectly, but I wouldn't expect nearly the amount of drama that EA's getting. (Early LW from the 2000s or early 2010s would probably do worse, IMO.) My suspicion is that LW would have way less personal drama over Eliezer than, say, EA would over SBF or Nick Bostrom.

3
Arepo
1y
I think there are a few things going on here, not sure how many we'd disagree on. I claim:

* Eliezer has direct influence over far fewer community-relevant organisations than Will does or SBF did (cf comment above that there exist far fewer such orgs for the rationalist community). Therefore a much smaller proportion of his actions are relevant to the LW community than Will's are and SBF's were to the EA community.
* I don't think there's been a huge scandal involving Will? Sure, there are questions we'd like to see him openly address about what he could have done differently re FTX - and I personally am concerned about his aforementioned influence because I don't want anyone to have that much - but very few if any people here seem to believe he's done anything in seriously bad faith.
* I think the a priori chance of a scandal involving Eliezer on LW is much lower than the chance of a scandal on here involving Will because of the selection effect I mentioned - the people on LW are selected more strongly for being willing to overlook his faults. The people who both have an interest in rationality and get scandalised by Bostrom/Eliezer hang out on Sneerclub, pretty much being scandalised by them all the time.
* The culture on here seems more heterogenous than LW. Inasmuch as we're more drama-prone, I would guess that's the main reason why - there's a broader range of viewpoints and events that will trigger a substantial proportion of the userbase.

So these theories support/explain why there might be more drama on here, but push back against the 'no hero-worship/not personality-oriented' claims, which both ring false to me. Overall, I also don't think the lower drama on LW implies a healthier epistemic climate.

Those with high quality epistemics usually agree on similar things

On factual questions, this is how it should be, and this matters. Put another way, it's not a problem for EAs to come to agree on factual questions, absent further assumptions.

Yeah, given that no violence is happening from the people criticizing Bostrom's apology, unlike in actual struggle sessions, I don't understand how they're very comparable.

Also: while "those with high quality epistemics usually agree on similar things" is a distortion making the argument personal, about people, in reality yes, good reasoning often converges to similar conclusions

This. Aumann's Agreement Theorem tells us that Bayesians who have common priors, and who trust each other to be honest, cannot agree to disagree once their estimates are common knowledge.

The in-practice version of this is that a group agreeing on similar views around certain subjects isn't automatically irrational, unless we have outside evidence or one of the theorem's conditions fails.
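To make the mechanism concrete, here is a minimal sketch (my own toy construction with made-up numbers, not anything from the thread) of the communication process behind Aumann-style agreement: two Bayesians with a common prior over a small finite state space repeatedly announce their posteriors for an event and condition on what each other's announcement reveals, and they cannot end up agreeing to disagree.

```python
# Toy sketch (illustrative only): two Bayesians with a COMMON prior who repeatedly
# announce their posteriors for an event, and update on what each other's
# announcement reveals, end up with equal posteriors.

from fractions import Fraction

states = [0, 1, 2, 3, 4, 5]                             # finite state space
prior = {s: Fraction(1, len(states)) for s in states}   # common prior (uniform)
event = {1, 3, 5}                                       # the event both agents care about

# Each agent's private information is a partition of the state space.
partition_a = [{0, 1}, {2, 3}, {4, 5}]
partition_b = [{0, 1, 2}, {3, 4, 5}]

def cell(partition, state):
    """The cell of the partition containing the given state."""
    return next(c for c in partition if state in c)

def posterior(info):
    """P(event | info) under the common prior."""
    return sum(prior[s] for s in info & event) / sum(prior[s] for s in info)

def refine(partition_to_refine, announcing_partition):
    """Refine one agent's partition using what the other's announced posterior reveals."""
    announced = {s: posterior(cell(announcing_partition, s)) for s in states}
    refined = []
    for c in partition_to_refine:
        for value in {announced[s] for s in c}:
            refined.append({s for s in c if announced[s] == value})
    return refined

def communicate(true_state, max_rounds=10):
    part_a, part_b = partition_a, partition_b
    for round_number in range(max_rounds):
        post_a = posterior(cell(part_a, true_state))
        post_b = posterior(cell(part_b, true_state))
        print(f"round {round_number}: A says {post_a}, B says {post_b}")
        if post_a == post_b:
            return post_a                               # agreement reached
        # Both agents announce simultaneously, then update on each other's announcement.
        part_a, part_b = refine(part_a, part_b), refine(part_b, part_a)
    return None

communicate(true_state=3)
# round 0: A says 1/2, B says 2/3
# round 1: A says 1, B says 2/3
# round 2: A says 1, B says 1   -> they agree
```

The point of the toy example is just that persistent disagreement has to come from different priors or from distrusting each other's reports, which is exactly the in-practice caveat above.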

4
Karthik Tadepalli
1y
Aumann's agreement theorem is pretty vacuous because the common prior assumption never holds in important situations, e.g. everyone has different priors on AI risk.

Also, as far as "social capital," comments from this forum are regularly reposted as evidence of what "EA thinks" of a given controversy. If an apology is insufficient and we are all silent, the inference that we think the apology sufficient will be drawn.

And arguably rightly, IMO.

If I were to extract generalizable lessons from the FTX collapse, the major changes I would make are:

  1. EA should stay out of crypto, until and unless the situation improves to the extent that it doesn't have to rely on speculators. One big failure was that EAs thought they could pick winners better than other investors.

  2. Good Governance matters. By and large, EA failed at basic governance tasks, and I think governance needs to be improved. My thoughts are similar to this post:

https://forum.effectivealtruism.org/posts/sEpWkCvvJfoEbhnsd/the-ftx-crisis-highlig... (read more)

My views on what EA should learn from this event are the following:

  1. EA needs to articulate what moral views or moral values it will not accept in the pursuit of its goals. I don't believe EA can consider every moral viewpoint valid, due to the Paradox of Tolerance. Thus, moderators and administrators need to start working out what values or moral viewpoints they will not accept, and they will need to be willing to ban or cancel people who violate this policy.

  2. EA has an apology problem. Titotal has a good post on it, but a lot of apologies tend to be bad. Linking

... (read more)

I don't see it that way. Lots of relatively normal folks put money into FTX. Journalists and VCs were very positive about SBF/FTX.

This. I do think that blaming rationalist culture is mostly a distraction, primarily because way too much normie stuff promoted SBF.

I had a very different opinion of the whole crypto train (that is, crypto needs to at least stop involving real money, if not be banned altogether).

Yes, EA failed. But let's be more careful about suggesting that normies didn't fail here.

I agree that EA failed pretty hard here. My big disagreements are probably on why EA failed, not that EA failed to prevent harm.

1
Wil Perkins
1y
What would you say caused EA to fail?

Many people outside the rat sphere in my life think the whole FTX debacle, for instance, is ridiculous because they don't find SBF convincing at all. SBF managed to convince so many people in the movement of his importance because of his ability to expound and rationalize his opinions on many different topics very quickly. This type of communication doesn't get you very far with normal, run of the mill folks.

I ignored SBF and the crypto crowd; however, I disagree with this, primarily because I think this is predictably overrating how much you wouldn't fal... (read more)

3
Wil Perkins
1y
Your comment minimizes EA's role in getting SBF as far as he got. If you read the now-deleted Sequoia article it's clear that the whole reason he was able to take advantage of the Japan crypto arbitrage is because he knew and could convince people in the movement to help him. Most of the millions who hopped on the crypto/SBF train were blatantly speculating and trying to make money. I see those in EA who fell for it as worse because they were ostensibly trying to do good.

I don't exactly agree with the case that cultural knowledge is as important as Henrich says, though I do credit cultural knowledge with producing increasing returns to scale.

Basically, the problem with neglectedness is that it assumes strictly diminishing returns, like a logarithmic curve. But if a problem has the quality "the more the merrier", i.e. increasing returns to scale, then the neglectedness heuristic breaks down. In other words, leverage matters.
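A toy illustration of the point (my own sketch, with arbitrary made-up functional forms, not anything from the original comment): under logarithmic returns the marginal value of extra resources shrinks as a cause gets more crowded, which is what the neglectedness heuristic assumes, whereas under increasing returns to scale the marginal value grows with crowding, so the heuristic points the wrong way.

```python
# Toy sketch (illustrative only) contrasting diminishing vs. increasing returns
# to scale, which is what the neglectedness heuristic hinges on.

import math

def marginal_value(total_returns, funding, delta=1.0):
    """Approximate value of one extra unit of resources at a given funding level."""
    return (total_returns(funding + delta) - total_returns(funding)) / delta

log_returns = lambda x: math.log(x)            # diminishing returns (neglectedness-friendly)
increasing_returns = lambda x: 0.001 * x ** 2  # increasing returns ("the more the merrier")

for funding in (10, 100, 1000):
    print(f"funding={funding:5d}:"
          f" marginal value (log) = {marginal_value(log_returns, funding):.4f},"
          f" marginal value (increasing) = {marginal_value(increasing_returns, funding):.3f}")

# Under log returns, 100x the funding gives roughly 1/100 the marginal value, so the
# neglected cause looks better at the margin. Under increasing returns, the crowded
# cause has the *higher* marginal value, so neglectedness is the wrong guide.
```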

1
Indra Gesink
1y
Indeed!

I assume you consider Blank Slate doctrine false. Do you believe communism would've worked out in a world where it was true? (My view is that most or all of the problems with communism would remain.)

Yeah, this. The real issues with communism ultimately come down to ignoring that thermodynamics exists. Once you accept that idea, a lot of the other false ideas from communism start to make more sense.

I also think that, even for a high decoupler (which I consider myself to be, though as far as I know I'm not on the autism spectrum) the really big taboos - like race and intelligence - are usually obvious, as is the fact that you're supposed to be careful when talking about them. The text of Bostrom's email demonstrates he knows exactly what taboos he's violating.

And honestly, I think this is a great taboo, for many reasons. I'd argue it's one of the more intelligent taboos from the left.

1
AnonymousQualy
1y
Agreed. I'm no cultural conservative, but norms are important social tools we shouldn't expect to entirely discard.  Anthropologist Joe Henrich's writing really opened my eyes to how norms pass down complex knowledge that would be inefficient for an individual to try to learn on their own.