All of Michael_PJ's Comments + Replies

On the object level, the original question was:

> Are financial statements going to be released? In particular, how much was spent on estate agent fees, maintenance and bills? And the value of the events hosted. Is the reason for the change that EA has less money or that there was an error in the initial reasoning for buying it?

Even given the context I think this is asking too much. I would support a question like "I would love to know what the reasoning was: in particular, was the project financially unsustainable or were there other reasons?". 

Aski...

I read Dean's ask a bit more narrowly, or at least as ambiguous -- although there is a reference to financial statements, the more specific asks concern how much was spent, how much value was achieved, and whether the project was undesirable in retrospect vs. merely the victim of changed circumstances. I don't read him as proposing an audit, though I could be wrong.

The last one may be valuable for others who may be considering future capital expense vs. rental options. The other two are valid things for donors to consider when deciding whether to give to EV...

I think there is a bit of a tendency to assume that it is appropriate to ask for arbitrary amounts of transparency from EA orgs. I don't think this is a good norm: transparency has costs, often significant, and constantly asking for all kinds of information (often with a tone that suggests that it ought to be presented) is, I think, often harmful.

Transparency has costs, but potentially so does opacity (in terms of both loss of trust and reduced consideration for decisions that don't need to be justified externally). Arguably both apply here: the decision obviously wasn't uncontroversial, and the decision-making process sounds quite limited for a significant commitment in a novel area. It's also possible that some of the information that wasn't shared, or subsequent metrics (I don't think anyone is asking for original research here), would actually cast the original decision in a more favourable light.

I al...

Jason
2d

It's on a matter of significant community concern, as well as documented concern/interest among the general public. On average, there are a couple of items at this level each year? I don't think the downsides of "constantly asking for all kinds of information" factor into this particular case.

I wonder if we would benefit from something like a system that hides the karma (and treats it as zero for visibility purposes) of posts that have less than <some quantity> of engagement. That way posts would get a "grace period" before getting hidden.
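
A minimal sketch of what that rule could look like, just to make the idea concrete (all names, types and thresholds here are hypothetical, not taken from the actual Forum codebase):

```typescript
// Hypothetical sketch of a karma "grace period": a post's karma is
// hidden (treated as zero) until the post has accrued some minimum
// amount of engagement. None of these names or thresholds come from
// the real Forum code.
interface Post {
  karma: number;
  commentCount: number;
  views: number;
}

// Rough engagement measure: comments weighted more heavily than views.
function engagementScore(post: Post): number {
  return post.commentCount + post.views / 10;
}

const GRACE_THRESHOLD = 20; // how much engagement ends the grace period

// Karma as used for ranking and display: zero during the grace period,
// the real value afterwards.
function visibilityKarma(post: Post): number {
  return engagementScore(post) < GRACE_THRESHOLD ? 0 : post.karma;
}

// Example: a new post with a few early downvotes still ranks neutrally.
console.log(visibilityKarma({ karma: -6, commentCount: 1, views: 40 })); // 0
```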

Jason
22d
The potential cost there is that there are occasionally some really bad posts that deserve the rapid downvote -- such as those that don't respect important Forum norms, even if not quite to the point of formal moderator action. In those cases, the hail of downvotes allows the Forum to deal with them quickly and without moderator action that could be seen as "censoring" someone. But maybe those kinds of posts are rare enough that the cost is worth incurring.

> Again, I don’t think my picture here is a stretch from the normal English sense of the word “wholesomely”.

The more I read of these essays the less I agree with this. On my subjective authority as a native English speaker, your usage seems pretty far from the normal sense to me. I think what you're gesturing at is a reasonable concept but I think it's quite confusing to call it "wholesome". 

As some evidence, I kept finding myself having to reinterpret sentences to use your meaning rather than what I would consider the more normal meaning. For example, "What is wholesome depends on the whole system." This is IMO kind of nonsensical in normal English.

Owen Cotton-Barratt
1mo
I'm guessing that the word is just used differently in different contexts or circles? Your comment made me wonder how much I was just stuck in my own head about this. So I asked ChatGPT about the sentence you're labelling as nonsensical, and it said: [...] Of course I guess that ChatGPT is pretty good at picking up on meanings which are known anywhere, so this is evidence more that I'm aligning with one existing usage of the word, rather than that all native English speakers will understand it that way (and you're providing helpful evidence against the latter claim).

I don't think we would have been able to use the additional information we would have gained from delaying the industrial revolution, but I think if we could have, the answer might be "yes". It's easy to see in hindsight that it went well overall, but that doesn't mean that the correct ex ante attitude shouldn't have been caution!

100% agree. I think it is almost always better to be honest, even if that makes you look weird. If you are worried about optics, "oh yeah, we say this to get people in but we don't really believe it" looks pretty bad.

I would qualify this statement by saying that it would be nice for OP to have more reasoning transparency, but it is not the most important thing and can be expensive to produce. So it would be quite reasonable for additional marginal transparency to not be the most valuable use of their staff time.

I think if there's anything they should bother to be publicly transparent about in order to subject to further scrutiny, it's their biggest cruxes for resource allocation between causes. Moral weights, theory of welfare and the marginal cost-effectiveness of animal welfare seem pretty decisive for GHD vs animal welfare.

  • Some EAs knew about his relationship with Caroline, which would undermine the public story about FTX<->Alameda relations, but didn't disclose this.
  • Some EAs knew that Sam and FTX weren't behaving frugally, which would undermine his public image, but also didn't disclose.

FWIW, these examples feel hindsight-bias-y to me. They have the flavour of "we now know this information was significant, so of course at the time people should have known this and done something about it". If I put myself in the shoes of the "some EAs" in these examples, it's not clea...

I think the best thing here would have been for much of the information to be shared casually, without needing to justify itself as important. People gossip about relationships and their terrible bosses all the time. I suspect that if that had happened, people would have gathered more clues earlier, enough to make a difference on the margin.

> To reuse your example, if you were the only person the perpetrator of the heist could con into lending their car to act as a getaway vehicle, then that would make P(Heist happens | Your actions) quite a bit higher than P(Heist happens | You acting differently), but you would still be primarily a mark or (minor) victim of the crime.

Yes, this is a good point. I notice that I don't in fact feel very moved by arguments that P(FTX exists | EA exists) is higher, I think for this reason. So perhaps I shouldn't have brought that argument up, since I don't think it's the crux (although I do think it's true, it's just over-determining the conclusion).

> Only ~10k/10B people are in EA, while they represent ~1/10 of history's worst frauds, giving a risk ratio of about 10^5:1, or 10^7:1, if you focus on an early cohort of EAs.
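
(For reference, the quoted ratio appears to come from arithmetic along the following lines. This is a reconstruction on my part, and the ~100-person size of the "early cohort" is an assumed figure, not one given in the thread.)

```latex
% Share of history's worst frauds attributed to EA, divided by
% EA's share of the world population (~10k of ~10B people):
\frac{1/10}{10^{4}/10^{10}} = \frac{10^{-1}}{10^{-6}} = 10^{5}

% Conditioning instead on an early cohort of roughly 10^2 EAs (assumed):
\frac{1/10}{10^{2}/10^{10}} = \frac{10^{-1}}{10^{-8}} = 10^{7}
```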

This seems wildly off to me - I think the strength of the conclusion here should make you doubt the reasoning!

I think that the scale of the fraud seems like a random variable uncorrelated with our behaviour as a community. It seems to me like the relevant outcome is "producing someone able and willing to run a company-level fraud"; given that, whether or not it's a big one or a small on...

I estimated that 1-2% of YCombinator-backed companies commit substantial fraud. It seems hard to make the case that the rate in EA is 10^7x this.

I do think it's an interesting question whether EA is prone to generate Sams at higher than the base rate. I think it's pretty hard to tell from a single case, though.

I am one of the people who thinks that we have reacted too much to the FTX situation. I think as a community we sometimes suffer from a surfeit of agency and we should consider the degree to which we are also victims of SBF's fraud. We got used. It's like someone asking to borrow your car and then using it as the getaway car for a heist. Sure, you causally contributed, but your main sin was poor character judgement. And many, many people, even very sophisticated people, get taken in by charismatic financial con men.

I also think there's too much uncritical ...

There are various reasons to believe that SBF's presence in EA increased the chance that FTX would happen and thrive:

  • Only ~10k/10B people are in EA, while they represent ~1/10 of history's worst frauds, giving a risk ratio of about 10^5:1, or 10^7:1, if you focus on an early cohort of EAs. This should give an immediate suspicion that P(FTX thrives | SBF in EA)/P(FTX thrives | SBF not in EA) is very large indeed.
  • Sam decided to do ETG due to conversations with EA leaders. 
  • EA gave Alameda a large majority of its funding and talent.
  • EA gave FTX at least 1-2 of the other leaders of the company.
  • ETG was a big part of Sam's public image and source of his reputation.
aprilsun
7mo
Anecdotally, among the EAs I've spoken to IRL about all this and among the non-EAs I've spoken to about it, 'EA got used' is by far the more common narrative. I think the mood on this forum is quite different and I thought of the OP as the EA most committed to this 'it's all EA's fault' narrative even before he made this post. So I worry that his post paints a very skewed picture (also obviously because of the research others have mentioned in the comments).

This is a very interesting take, and very well expressed. You could well be right that the narrative that 'we got used' is the most correct simple summary for EAs/EA. And I definitely agree that it is an under-rated narrative. There could even be psychological reasons for that (EAs being more prone to guilt than to embarrassment?).

I note that even if P(FTX exists | EA exists) were quite a bit higher than P(FTX exists | ~EA exists), that could be compatible with your suggested narrative of EAs being primarily marks/victims. To reuse your example, if you were...

Manuel Del Río Rodríguez
7mo
I mostly agree that people seem to have overreacted and castigated themselves about SBF-FTX, but also feel the right amount of reaction should be non-trivial. We aren't just talking about SBF, as the whole affair included other insiders who were arguably as 'true believers' in EA as it is reasonable to expect (like Caroline Ellison), and SBF-FTX became poster-children of the movement at a very high level. But I think you are mostly right: one can't expect omniscience and a level of character-detection amongst EAs when among the fooled were much more cynical, savvy and skeptical professionals in finance. For what it's worth, I feel some EA values might have fueled some of Sam's bad praxis, but weren't the first mover. From what I've read, he absorbed (naive?) utilitarianism and an appetite for high-stakes risk from home. As for the counterfactual of him having ended up where he has without any involvement with EA... I just don't know. The story that is usually told is that his intent was working in charity NGOs before Will MacAskill steered him towards an 'earning to give' path. Perhaps he would have gone into finance anyway after some time. It's very difficult to gauge intentions and mental states - I have never been a fan of Sam's (I discovered his existence, along with that of EA, after and because of the FTX affair), but I can still assume that, if it comes to 'intent', his thoughts were probably more in a naive utilitarian, 'rules are for the sheep, I am smart enough to take dangerous bets and do some amoral stuff towards creating the greater good' frame than 'let me get rich by a massive scam and fleece the suckers'. Power and vanity would probably reinforce these as well.

It seems like we could use the new reactions for some of this. At the moment they're all positive but there could be some negative ones. And we'd want to be able to put the reactions on top level posts (which seems good anyway).

I wonder if the focus on "narrow EA" is a reflection of short AI timelines and/or a belief that we need to make changes sooner rather than later.

It seems to me that "global EA" looks better the longer the future we have. Gains in people compound, and other countries may be much more influential in the future. If Nigeria is a global power in 50 years, then growing a community there now might be a good investment.

I don't think this has a clear answer though, since benefits from actually solving problems sooner can compound too.

I wholeheartedly agree with this post.

I think there has been a bit of over-reacting to recent events. I don't think the damage is that bad, and to some degree I think we've just been unlucky. Maybe we need to do some things differently (e.g. try to project less of an air of certainty, which many critics seem to perceive) but we should also beware the illusion of control.

> You say, in effect, "not that centralised", but, from your description, EA seems highly centralised

> Your argument that it's not centralised seems to be that EA is not a single legal entity

These are two examples, but I generally didn't feel like your reply really engaged with Will's description of the ways in which EA is decentralized, nor his attempt to look for finer distinctions in decentralization. It felt a bit like you just said "no, it is centralised!".

> democracy has the effect of decentralising power.

I don't agree with this at all. IMO democracy often...

I think the average EA might underestimate the extent to which being visible in EA (e.g. speaking at EAG) is seen as a burden rather than an opportunity.

Related: https://www.lesswrong.com/posts/pDzdb4smpzT3Lwbym/my-model-of-ea-burnout?commentId=Xz2xzWEuLAiHsFWzf

Having read this I'm still unclear what the benefit of your restructuring of CEA is. It's not a decentralising move (if anything it seems like the opposite to me); it might be a legitimising move, but is lack of legitimacy an actual problem that we have?

The main other difference I can see is that it might make CEA more populist in the sense of following the will of the members of the movement more. Maybe I'm as much of an instinctive technocrat as you are a democrat, but it seems far from clear to me that that would be good. Nor that it solves a problem we actually have.

James Herbert
9mo
I think the standard arguments for democratic membership associations apply. Increases in: membership engagement, perspective diversity, legitimacy and trust (from POV of members), accountability, transparency, and perhaps also stability (less reliant on individual personalities).

Yes, I think there's a lot of sliding between "decentralised" and "democratic" even though these have pretty much nothing to do with each other.

As a pretty clear example, the open source software community is extremely decentralised but has essentially zero democracy anywhere.

I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.

IME decentralised groups are not usually more transparent; if anything the opposite, as they often have fragmented communication, lots of which is person-to-person.

I would love the community to be more supportive in ways that would help with that. Things I would like:

  • Accept that new projects may be not that great, encourage them to grow and maybe even chip in as well as criticising.
  • Accept and even celebrate failure.
  • Even more incubator style things. I love what CE does here.

A few thoughts:

  • The level of quality and professionalism has risen since the old days which makes it intimidating to contribute your own half-assed thing.
  • Doing things does usually require time, and a lot of the early doing was done by students (and still is!). It's much harder to be that involved when you're older without becoming professionally involved. These days we have a lot more non-students!
  • I think all Will's stuff about the perceived allocation of responsibility and control has a big impact.

I'm not super convinced that the fundraising situation is tougher? It seems much easier to me than it was. Especially for small things we have a decent range of funders.

I also had this reaction, and I think it was mostly just the phrasing. "Break up OP" suggests that we have the power or right to do that, which we definitely don't. I think if the post said "OP could consider breaking itself up" it wouldn't sound like that.

I think the proposal to have significant re-grantors is a more approachable way of achieving something similar, in that it delegates control of some funds.

I think the intention wasn't "have lots of forums where EA topics are discussed", so much as "don't make it sound like the (in practice, one) forum is the only one that can be". 

Thank you! This post says very well a lot of things I had been thinking and feeling in the last year but had not been able to articulate properly.

I think it's very right to say that EA is a "do-ocracy", and I want to focus in on that a bit. You talked about whether EA should become more or less centralized, but I think it's also interesting to ask "Should EA be a do-ocracy?"

My response is a resounding yes: this aspect of EA feels (to me) deeply linked to an underrated part of the EA spirit. Namely, that the EA community is a community of people who not only i...

Thanks for this comment, it’s very inspiring!

One thought I had is that do-ocracy (as opposed to “someone will have got this covered, right?”) describes other areas, as well as EA. On the recent 80k podcast, Lennart Heim describes a similar dynamic within AI governance:

“at some point, I would discover that compute seems really important as an input to these AI systems — so maybe just understanding this seems useful for understanding the development of AI. And I really saw nobody working on this. So I was like, “I guess I must be wrong if nobody’s worki...

What cultural and structural features do you think might contribute to the perceived decline in a just-do-it attitude?

While I think there is considerable merit to what you're saying, I think it's also important to acknowledge the existence of challenges for would-be doers in 2023 that weren't necessarily (as) present in 2008 or 2013. Some of these challenges are related to the presence and/or actions of big organizations and funders (e.g., the de-emphasis on earning to give affecting the universe of potential viable funders for upstarts). Others are relate...

I think the upside is that if it is "generational", people grow up and become more agentic as long as we foster the culture. I was remarking to a friend that it's interesting how people don't want to get up and learn to code to help with AI Safety (given the rates of AI doomerism), but people were willing to go into quant trading at seemingly higher rates to earn to give in early EA.

I definitely think there's a "generational" thing here. For those of us who've been around long enough to see how everything came from nothing but people doing things they thought needed to be done, it's perfectly obvious. But I can very much see how if you join the community today it looks like there are these serious, important organizations who are In Charge. But I do think it's still not really true.

+1.

I was slow to realise that, over the period of just a few years of growth, this bunch of uncertain, scrappy, loosely coordinated students had come to be...

That's much stronger than what I read it as. I think Sjir was saying something more like "if you turn up to a local EA event you should feel welcomed and like you are 'one of the gang' even if you only donate".

The purpose of EAG these days seems a bit murky to me, but it seems to me to be mostly for people who are highly engaged, and I think it's fair to say that if you just donate you are probably not highly engaged (although you might be).

Sjir Hoeijmakers
9mo
Yes I was making a weaker claim, along the lines of what Michael says. I don't have a strong view on EAG's admission policy in particular (I think this is a tricky topic with many considerations). I do however stand by what I say in the recommendations section: "There can be clearer places to go in the EA community for people who give effectively and significantly but aren’t currently in a position to change careers. For example, EA Global could feature more relevant content for them, or be more explicitly career-focused itself to make space for a separate conference/event for this group." I.e. I think EA Global could probably improve in communicating how it serves or doesn't serve people for whom effective giving (currently) is their main pathway to impact.

Great post, I agree with a lot of it. There is definitely a kind of large org sclerosis that can develop, but IME it's generally associated with much larger orgs unless you have very dysfunctional management (which is a risk!).

One missing factor I think is fungibility. It's hard for donations to be funged between organisations, but it's very easy for them to be funged between projects within an organisation. So we might expect donors to have some preference for separate orgs for separate projects.

Ozzie Gooen
9mo
I agree this could be a benefit, but I see it as being fairly minor. I think that the cost that larger organizations would need to incur to make trustable promises of separation can easily be less than the costs of having totally different small organizations for each of these projects.

I think few people disagree with these directly, but I think many people believe or act in ways that are in tension with them. Going through the claims:

  1. Many people in EA have not in fact taken the GWWC pledge. I agree with Sjir that it would be better if more of them did (I would probably be even more forceful and say that you probably should unless you have a good reason not to)
  2. One reason people don't push people in EA to take the pledge is that they don't want to make it seem like you have to do it in order to be in EA. So it's important to clarify that
...
jackva
9mo
Thanks for making this more explicit, this is v helpful!

I think another contributing factor is that as a community we a) prize (and enjoy!) unconventional thinking, and b) have norms that well-presented ideas should be discussed on their merits, even if they seem strange. That means that even the non-insane members of the community often say things that sound insane from a common-sense perspective, and that many insane ideas are presented at length and discussed seriously by the community.

I think experiencing enough of this can disable your "insane idea alarm". Normally, if someone says something that sou...

ChanaMessinger
9mo
I strongly resonate with this; I think this dynamic also selects for people who are open-minded in a particular way (which I broadly think is great!), so you're going to get more of it than usual.

A "together but divided" future is also possible. There are many divisions in e.g. feminism, but everyone still wants to lay claim to the big banner, and will generally regard the others as allies to some degree (even if they write scathing critiques of each other).

I think this is a useful exercise for a few reasons.

  1. It's helpful for outsiders and new people to reconcile different material that they see from different points in time.
  2. It's helpful for people to clarify if they hold a bundle of positions that's associated with a particular wave. The old waves don't necessarily go away, they just cease to dominate. It's quite handy to be able to say "To understand my positions it may help to know that I'm a first-wave EA" or whatever. 

I would be tempted to divide the second wave in two. I think there was a distinct p...

If we interpret an up-vote as "I want to see more of this kind of thing", is it so surprising that people want to see more such supportive statements from high-status people?

I would feel more worried if we had examples of e.g. the same argument being made by different people and the higher-status person getting rewarded more. Even then - perhaps we do really want to see more of high-status people reasoning well in public.

Generally, insofar as karma is a lever for rewarding behaviour, we probably care more about the behaviour of high-status people and so we...

I think a good reading item on the empathy front is this article from a disability rights lawyer about her encounter with Singer. It is a very clear and honest piece, I think about it often.

https://www.nytimes.com/2003/02/16/magazine/unspeakable-conversations.html

Comic Sans is a great font that is very readable even at small font sizes. Surely a community which cares about effectiveness over image will embrace this change.

> I have a (somewhat unfair) vibe of this as the famous people deposit their work into the forum and leave for higher pursuits

I do think there's a big difference in how much various high-status people engage on the forum. And I think that the people who do engage feel like they're more "part of" the community... or at least that small part of it that actually uses the forum! It also gives them more opportunity to say stupid things and get downvoted, very humanising!

(An example that comes to mind is Oliver Habryka, who comments a lot on posts, not just his...

6
Ben Millwood
1y
Yeah I agree that for many people, not engaging is the right choice, I don't intend to suggest that all or even most technical debates or philosophical discussions happen here, just that keeping a sprinkling of them here helps give a more accurate impression of how these things evolve.

This is a great point. I also think there's a further effect, which is that older EAs were around when the current "heroes" were much less impressive university students or similar. Which I think leads to a much less idealising frame towards them.

But I can definitely see that if you yourself are young and you enter a movement with all these older, established, impressive people... hero-worshipping is much more tempting.

Geoffrey Miller
1y
Michael -- interesting point. EA is a very unusual movement in that the founders (Will MacAskill, Toby Ord, etc.) were very young when they launched the movement, and are still only in their mid-30s to early 40s. They got some guidance & inspiration from older philosophers (e.g. Derek Parfit, Peter Singer), but mostly they recruited people even younger than them into the movement ... and then eventually some older folks like me joined as well. So, EA's demographics are quite youth-heavy, but there's also much less correlation between age and prestige in EA than in most moral/activist movements.

I think that's a good example of "why would people do this given what they knew?", I'm not sure it's an example of pedestalising etc. I'm being a bit fussy here because I do think I've seen the specific claim that there was lots of public promotion of Sam and I'm just not sure if it's true.

[anonymous]
1y
Fuss away. E.g.:

  1. Jack Lewars: "to avoid putting single donors on a huge pedestal inside and outside the community" and again "putting [SBF] on a pedestal and making them symbolic of EA"
  2. Gideon Futerman: "making and encouraging Will (and I guess until recently to a lesser extent SBF) the face of EA"
  3. tcheasdfjkl: "while a lot of (other?) EAs are promoting him publicly"
  4. Peter S. Park: "But making SBF the face of the EA movement was a really bad decision"
  5. Devon Fritz: "EA decided to hold up and promote SBF as a paragon of EA values and on of the few prominent faces in the EA community"
  6. Dean Abele: "I don't know if I should stay in EA. I would feel very sad if [Will] publicly praised someone who turned out to be morally bankrupt. Of course, everyone makes mistakes. But still, some trust has been lost."
  7. Peter Wildeford: "The other clear mistake was promoting Sam so heavily as the poster child of EA." [This comment is no longer endorsed by its author]
  8. David_Althaus: "I think it's clear that we put SBF on a pedestal and promoted him as someone worth emulating, I don't really know what to say to someone who disagrees with this."
  9. Habryka: "Some part of EA leadership ended up endorsing SBF very publicly and very strongly despite having very likely heard about the concerns, and without following up on them (In my model of the world Will fucked up really hard here)" and again "[Will] was the person most responsible for entangling EA with FTX by publicly endorsing SBF multiple times"
  10. Jonas Vollmer: "while Nick took SBF's money, he didn't give SBF a strong platform or otherwise promote him a lot...So...Will should be removed"

> to put this guy on a pedestal; to elevate him as a moral paragon and someone to emulate; to tie EA's reputation so closely to his

I always find this claim a bit confusing: did we actually do those things? Are there some specific examples of doing this?

I can think of... the 80k interview and that's about it? I guess engaging with the FTX Foundation was somewhat positive but I don't think it was putting him on a pedestal. In fact when I look back I feel like a lot of the content linking Sam to EA came from people talking to Sam. I may well just not be reme...

I agree with what others have said re: pedestal, so am not going to produce more quotes or anecdotes. I stand by the claim, though. 

I think people may have been inclined to put SBF on a pedestal because earning to give was the main thing people criticized about early EA. People were otherwise pretty supportive of early EA ideas; I mean, it's hard not to support finding more cost-effective global health charities. When SBF emerged, I think this was a bit of a "see, we told you so" moment for EAs who had been around for a long time, especially because S...

Some specific examples of EA leaders putting SBF on a pedestal that I found with a bit of brief digging:

  • At the time FTX blew up, SBF was featured on 80k’s homepage. Also, if you clicked “start here” on that homepage (the first link aside from a subscription form) you were brought to an article that featured SBF as one of three individual profiles.
    • Both of these mentions linked to a more in depth profile of SBF that had been created in 2014 and regularly updated, and clearly “puts him on a pedestal” (“This approach — where he donates a significant proportion
...

ETA July: I regret posting the following comment for several reasons, partly because I got crucial information wrong and failed to put things into context and prevent misunderstandings. Please consider reading my longer explanation at the top of my follow-up comment here. I'm sorry to anyone I upset.

------------------------------------------------------------------

At EAG London 2022, they [ETA: this was an individual without consent of the organizers] distributed hundreds of stickers depicting Sam on a bean bag with the text "what would SBF do?". To my kno...

Yes, I think that him, e.g. being interviewed by 80K didn't make much of a difference. I think that EA's reputation would inevitably be tied to his to an extent given how much money they donated and the context in which that occurred. People often overrate how much you can influence perceptions by framing things differently.

[anonymous]
1y
Also, the 80k interview was done by 80k... has anyone claimed that Rob Wiblin or anyone at 80k knew Sam was "sketchy"? Apparently Rob wasn't even corrected about Sam's non-frugality - he sounds kind of out of the loop.

There is the whole vouching for SBF as prospective purchaser of Twitter:

> You vouch for him?
>
> Very much so! Very dedicated to making the long-term future of humanity go well.

Differently but same idea: men's groups. Part of the problem with masculinity is that men don't actually talk about it. And it's often easiest to learn from people you feel similar to and respect.

EDIT: I see you suggested the same thing further down so I just agree with you :)

I think this is potentially very good and helpful advice, if quite controversial. Alcohol is endemic in our society, but it seriously compromises your judgement! I say this as someone who has historically benefited a lot from alcohol as an aid to getting over social anxiety - it can be useful but it's a very double-edged sword.

That suggests another possible suggestion for community leaders: organise more dry events. Most meetups would probably be totally fine without alcohol, even social mixers (controversial!).

Yeah, I think this would be much better as a poll on a specific course of action. "Something needs to change... nothing in particular, but something" is an easy feeling to fall into.

Nathan Young
1y
Feel free to add some. Also, when no one has said their thinking, I think it's reasonable to be like "we should consider whether something should change".

I agree that this is super confusing. However, I do think that some claims about EA being too centralised have been about there being too few big orgs. I think that's just not true, and I think the confusion between these two points has probably made it difficult to discuss in the past.

All of which is to say: we should probably prefer to use more specific words than just "centralisation" unqualified.

Okay, but different groups and orgs can already have different norms today, right? Nobody is enforcing conformity. The worst that can happen is that CEA can ban you from EAG, so I guess yes it would be nice to have someone else running conferences so you could go to those?

I'm not playing dumb here, I genuinely find it confusing in what ways people feel they are being coerced by a central power in EA.

I don't agree with any of your criteria for "death". All of those sound totally survivable. "EA exits its recent high-growth phase" is very different from dying.

I would modify them to:

  1. Significant year-on-year decreases in funding
  2. Significant year-on-year decreases in self-identifying EAs

i.e. we transition to a negative growth regime and stay there.

And I think we could survive a lot of organizational collapse so I wouldn't even include that.

Just to add a note of optimism: a) people always take recent news too seriously; and b) many people don't read the forum. It's easy to think that everything is gloom if you spend too much time reading the drama on the forum, but most of reality hasn't changed. We still have thousands of people deeply engaged in doing good and their projects are still going as well as they were before. There are problems, sure, but announcing death is extremely premature IMO.

lilly
1y
I appreciate your point, but this isn't consistent with my experience. I find that the Forum seems to be more bullish on EA than both EAs and non-EAs I talk to elsewhere/privately. [Edit: If you feel like it, I’d also appreciate a response to my substantive points. Is it that: (1) Your framework for what it’d look like for EA to be dying is different from mine? (2) You accept my framework, but don’t think EA currently meets the criteria I’ve delineated? And, separately, do you disagree with my point that even if EA dying is unlikely, we should still make a contingency plan?]

What is making things non-federal today? There already are, e.g. groups for Christians in EA which have some quite different ideas to the rest of the movement but coexist pretty peacefully. Is there something more that you would want there?

In (US-style) federalism, the subunits (US states) have quite a bit of power and autonomy. I don't have to worry myself about what Alabama decides about abortion or education. Being a South Carolinian or an Oregonian is a significant part of one's political identity in a sense.

So if, for instance, the "edgy" / "normie" divide became a critical fault line, under federalism you might see substantial meta groups focusing on one side of the line or the other, with, for example, their own high-end conferences and internal networking. It's not a rupture because ...

Yes, it would be very different if he'd said "I'm going to skill up on ML and get coding"!

(I work at Open Phil, speaking for myself)

FWIW, I think this could also make a lot of sense. I don't think Holden would be an individual contributor writing code forever, but skilling up in ML and completing concrete research projects seems like a good foundation for ultimately building a team doing something in AI safety.

Load more