All of MaxRa's Comments + Replies

Just in case someone interested in this has not done so yet: I think Zvi's post about it is worth reading.

https://thezvi.substack.com/p/openai-the-board-expands

Thanks for your work on this, super interesting!

Based on just quickly skimming, this part seems most interesting to me, and I'm inclined to discount the sceptics' bottom line because their points seem relatively unconvincing to me (either unconvincing on the object level, or because I suspect that the sceptics haven't thought deeply enough about the argument to evaluate how strong it is):

We asked participants when AI will displace humans as the primary force that determines what happens in the future. The concerned group’s median date is 2045 and t

... (read more)

either unconvincing on the object level, or because I suspect that the sceptics haven't thought deeply enough about the argument to evaluate how strong it is

 

The post states that the skeptics spent 80 hours researching the topics, and were actively engaged with concerned people. For the record, I have probably spent hundreds of hours thinking about the topic, and I think the points they raise are pretty good. These are high-quality arguments: you just disagree with them.

I think this post pretty much refutes the idea that if skeptics just "thought deeply" they would change their minds. It very much comes down to principled disagreement on the object level issues. 

I agree that things like confirmation bias and myside bias are huge drivers impeding "societal sanity". And I also agree that it won't help a lot here to develop tools to refine probabilities slightly more.

That said, I think there is a huge crowd of reasonably sane people who have never interacted with the idea of quantified forecasting as a useful epistemic practice and a potential ideal to strive towards when talking about important future developments. Like other commenters say, it's currently mostly attracting a niche of people who strive for higher ... (read more)

Thanks, I think that's a good question. Some (overlapping) reasons that come to mind that I give some credence to:

a) relevant markets are simply making an error in neglecting quantified forecasts

  • e.g. COVID was an example where I remember some EA-adjacent people making money because investors were significantly underrating the pandemic's potential
  • I personally find it plausible when looking e.g. at the quality of think tank reports, which seems significantly curtailed by the number of vague propositions that would be much more useful if made more concrete and
... (read more)

I don't think there's actually a risk of CAISID damaging their EA networks here, fwiw, and I don’t think CAISID wanted to include their friendships in this statement.

My sense is that most humans are generally worried about disagreeing with what they perceive to be a social group’s opinion, so I spontaneously don’t think there’s much specific to EA to explain here.

3
CAISID
1mo
You are correct in that I was referring more to the natural risks associated with disagreeing with a major funder in a public space (even though OP have a reputation for taking criticism very well), and wasn't referring to friendships. I could well have been more clear, and that's on me.

I'm really excited about more thinking and grant-making going into forecasting!

Regarding the comments critical of forecasting as a good investment of resources from a world-improving perspective, here some of my quick thoughts:

  1. Systematic meritocratic forecasting has a track record of outperforming domain experts on important questions. Examples: geopolitics (see Superforecasting), public health (see COVID), and IIRC also the outcomes of research studies

  2. In all important domains where humans try to affect things, they are implicitly forecasting all the time a

... (read more)
6
Jason
1mo
Why do you think there is currently little/no market for systematic meritocratic forecasting services (SMFS)? Even under a lower standard of usefulness -- that blending SMFS in with domain-expert forecasts would improve the utility of forecasts over using only domain-expert input -- that should be worth billions of dollars in the financial services industry alone, and billions elsewhere (e.g., the insurance market).

I don't think the drivers of low "societal sanity" are fundamentally about current ability to estimate probabilities. To use a current example, the reason 18% of Americans believe Taylor Swift's love life is part of a conspiracy to re-elect Biden isn't that our society lacks resources to better calibrate the probability that this is true. The desire to believe things that favor your "team" runs deep in human psychology. The incentives to propagate such nonsense are, sadly, often considerable. The technological structures that make disseminating nonsense easier are not going away.

Some other relevant responses:

Scott Alexander writes

My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.

Zvi Mowshowitz writes

Even scaling back the misunderstandings, this is what ambition looks like.

It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to

... (read more)
4
SiebeRozendal
1mo
Thanks, these are good
MaxRa
4mo

Thanks a lot for sharing, and for your work supporting his family and generally helping the people who knew him in processing this loss. I only recently got to know him during the last two EA conferences I attended, but he left a strong impression of being a very kind, caring, and thoughtful person.

Huh, I actually kinda thought that Open Phil also had a mixed portfolio, just less prominently/extensively than GiveWell. Mostly based on hearing once or twice that they were in talks with interested UHNW people, and a vague memory of somebody at Open Phil mentioning being interested in expanding their donors beyond DM&CT...

Cool!

the article is very fair, perhaps even positive!

Just read the whole thing, wondering whether it gets less positive after the excerpt here. And no, it's all very positive. Thank you guys for your work, so good to see forecasting gaining momentum.

1
ElliotJDavies
6mo
Thanks for sharing this, I had the same question

For example, the fact that it took us more than ten years to seriously consider the option of "slowing down AI" seems perhaps a bit puzzling. One possible explanation is that some of us have had a bias towards doing intellectually interesting AI alignment research rather than low-status, boring work on regulation and advocacy.

I'd guess it's also that advocacy and regulation seemed less useful at the margin in most worlds, given the AI timelines people suspected even 3 years ago?

2
David_Althaus
6mo
Definitely!

Hmmm, your reply makes me more worried than before that you'll engage in actions that increase the overall adversarial tone in a way that seems counterproductive to me. :')

I also think we should reconceptualize what the AI companies are doing as hostile, aggressive, and reckless. EA is too much in a frame where the AI companies are just doing their legitimate jobs, and we are the ones that want this onerous favor of making sure their work doesn’t kill everyone on earth.

I'm not completely sure what you refer to with "legitimate jobs", but I generally have t... (read more)

It would be convenient for me to say that hostility is counterproductive but I just don’t believe that’s always true. This issue is too important to fall back on platitudes or wishful thinking.

Also, the way you frame your pushback makes me worry that you'll lose patience with considerate advocacy way too quickly.

I don’t know what to say if my statements led you to that conclusion. I felt like I was saying the opposite. Are you just concerned that I think hostility can be an effective tactic at all?

MaxRa
6mo

Thanks for working on this, Holly, I really appreciate more people thinking through these issues and found this interesting and a good overview of considerations I had previously learned about.

I'm possibly much more concerned than you about politicization and a general vague feeling of downside risks. You write:

[Politicization] is a real risk that any cause runs when it seeks public attention, and unfortunately I don't think there's much we can do to avoid it. Unfortunately, though, AI is going to become politicized whether we get involved in it or not. (I wou

... (read more)

On the discussion that AI will have deficits in expressing care and eliciting trust, I feel like he’s neglecting that AI systems can easily get a digital face and a warm voice for this purpose?

Interesting discussion, thanks! The discussion of AI potentially driving explosive innovations seemed much more relevant than the job replacement you spent most of the time discussing, and at the same time it unfortunately felt much more rushed.

But it’s a kind of thing where, you know, I can keep coming up with new bottlenecks [for explosive innovations leading to economic growth], and [Tom Davidson] can keep dismissing them, and we can keep going on forever.

Relatedly, I'd have been interested in how Michael relates to the Age of Em scenario, in which IIRC explosive i... (read more)

3
Tereza_Flidrova
7mo
Awesome, thanks Max! Hope you will be able to join us for the conference :)

Hey Kieren :) Thanks, yeah, it was intentional but badly worded on my part. :D I adopted your suggestion.

(Very off-hand and uncharitably phrased and likely misleading reaction to the "Holden vs. hardcore utilitarianism" bit, but I thought it's just useful enough to quickly share anyway)

  • Holden's and Rob's takes felt a bit like "Hey, we have these confused ideas of infinities, and then apply them to Utilitarianism and make Utilitarianism confusing ➔ let's throw out Utilitarianism and deprioritize the welfare of future generations relative to what the caring and calculating approach tells us. And maybe even consider becoming nihilists haha, but for real, let's just lea
... (read more)

Fwiw, despite the tournament feeling like a drag at points, I think I kept at it due to a mix of:
a) I committed to it and wanted to fulfill the commitment (which I suppose is conscientiousness),
b) me generally strongly sharing the motivations for having more forecasting, and
c) having the money as a reward for good performance and for just keeping at it.

I was also a participant. I engaged less than I wanted to, mostly due to the amount of effort this demanded and to losing more and more intrinsic motivation.

Some vague recollections:

  • Everything took more time than expected and that decreased my motivation a bunch
    • E.g. I just saw one note that one pandemic-related initial forecast took me ~90 minutes
    • I think making legible notes requires effort and I invested more time into this than others. 
    • Also reading up on things takes a bunch of time if you're new to a field (I think GPT-4 would've especially helped w
... (read more)

OpenAI lobbied the European Union to argue that GPT-4 is not a ‘high-risk’ system. Regulators assented, meaning that under the current draft of the EU AI Act, key governance requirements would not apply to GPT-4. 

Somebody shared this comment from Politico, which claims that the above article is not an accurate representation:

European lawmakers beg to differ: Both Socialists and Democrats’ Brando Benifei and Renew’s Dragoș Tudorache, who led Parliament’s work on the AI Act, told my colleague Gian Volpicelli that OpenAI never sent them the paper, nor re

... (read more)

A simple analogy to humans applies here: Some of our goals would be easier to attain if we were immortal or omnipotent, but few choose to spend their lives in pursuit of these goals.

I feel like the "fairer" analogy would be optimizing for financial wealth, which is arguably as close to omnipotence as a human can get, and a lot of humans actually do pursue this. Further, I might argue that for ~everyone, money is currently much more of a bottleneck than longevity when it comes to pursuing their ultimate goals. And for the rare exceptions (maybe something like the wealthiest 10k people?), those people actually do invest a bunch in their personal longevity? I'd guess at least 5% of them?

I spontaneously thought that the EA forum is actually a decentralizing force for EA, where everyone can participate in central discussions.

So I feel like the opposite, making the forum more central to the broader EA space relative to e.g. CEA's internal discussions, would be great for decentralization. And calling it "Zephyr forum" would just reduce its prominence and relevance.

I think this is a place where the centralisation vs decentralisation axis is not the right thing to talk about. It sounds like you want more transparency and participation, which you might get by having more centrally controlled communication systems.

IME decentralised groups are not usually more transparent, if anything the opposite as they often have fragmented communication, lots of which is person-to-person.

Yeah, seems helpful to distinguish central functions (something lots of people use) from centralised control (few people have power). The EA forum is a central function, but no one, in effect, controls it (even though CEA owns and could control it). There are mods, but they aren't censors.

Moral stigmatization of AI research would render AI researchers undateable as mates, repulsive as friends, and shameful to family members. Parents would disown adult kids involved in AI. Siblings wouldn’t return their calls. Spouses would divorce them. Landlords wouldn’t rent to them. 

I think such a broad and intense backlash against AI research broadly is extremely unlikely to happen, even if we put all our resources on it.

  • AI is way too broad a category, and the examples of potential downsides of some of its applications (like off-putting AI porn or
... (read more)

I'd be very surprised if AI were predominantly considered risk-free in long-timelines worlds. The more AI is integrated into the world, the more it will interact with and cause harmful events/processes/behaviors/etc.; take, for example, the chatbot that apparently facilitated a suicide.

And I take Snoop Dogg's reaction to recent AI progress as somewhat representative of a more general attitude that will get stronger even with relatively slow and mostly benign progress:

Well I got a motherf*cking AI right now that they did made for me. This n***** could ta

... (read more)
8
Davidmanheim
10mo
"Considered risk free" is very different than what I discussed, which is that the broad public will see much more benefit, and have little direct experience of the types of harms that we're concerned about. Weird and novel won't change the public's minds about the technology, if they benefit, and the "more serious people" in the west who drive the narrative, namely, politicians, pundits, and celebrities, still have the collective attention span of a fish. And in the mean time, RLHF will keep LLMs from going rogue, they will be beneficial, and it will seem fine to everyone not thinking deeply about the risk. 

Thanks for sharing, I like how concrete all of this is and think it's generally a really important practice.

One "hack" that came to mind that I think helped me feeling more relaxed about the prospect of even pretty harsh criticism: Think of some worst cases already in advance. Like when you do a project/plan your life, consider the hypotheses that e.g.

  • you should not do this in-theory-good project because you are the wrong person for it, e.g. due to you not being [insert relevant features/skills] enough (yet!)
  • the project you plan to work on will actually ma
... (read more)

Hmm, fwiw, I spontaneously think something like this is overwhelmingly likely. 

Even in the (imo unlikely) case of AI research basically stagnating from now on, I expect AI applications to significantly affect the broader public and not make them think anything close to "what a nothingburger" (e.g. like I've heard happened for nanotechnology). E.g. I'm thinking of things like the broad availability of personal assistants & AI companions, the automation of increasingly many tasks, impacts on education, on the productivity of soft... (read more)

Most news outlets seem to jump on everything he does.

That's where my thoughts went: maybe he and/or CAIS thought that the statement would have a higher impact if reporting focused on other signatories. That Musk thinks AI is an x-risk seems to be fairly public knowledge anyway, so there's no big gain here.

This is so awesome, thank you so much, I'm really glad this exists. The recent shift of experts publicly worrying about AI x-risks has been a significant update for me in terms of hoping humanity avoids losing control to AI.

(but notably not Meta)

Wondering how much I should update from Meta and other big tech firms not being represented on the list. Did you reach out to the signing individuals via your networks, and maybe the network didn't reach some orgs as much? Maybe there are company policies in place that prevent employees of some firms from signing the statement? And is there something specific about Meta that I can read up on (besides Yann LeCun's intransigence on Twitter :P)?

4
Jörg Weiß
10mo
I'm not sure we can dismiss Yann LeCun's statements so easily, mostly because I do not understand how Meta works. How influential is he there? Does he set general policy around things like AI risk? I feel there is this unhealthy dynamic where he represents the leader of some kind of "anti-doomerism" – and I'm under the impression that he and his Twitter crowd do not engage with the arguments of the debate at all. I'm pretty much looking at this from the outside, but LeCun's arguments seem to be so far behind. If he drives Meta's AI safety policy, I'm honestly worried about that. Meta just doesn't seem to be an insignificant player.
Answer by MaxRa, May 25, 2023

I also have the impression that there's a gap, and I would be interested in whether funders are just not prioritizing it much, or whether there's a lack of (sufficiently strong) proposals.

Another AI governance program which just started its second round is Training For Good's EU Tech Policy fellowship, where I think the reading and discussion group part has significant overlap with the AGISF program. (Besides that, it has policy trainings in Brussels plus, for some fellows, a 4-6 month placement at an EU think tank.)

MaxRa
10mo

Thanks for sharing, Luise. I also have some issues with tiredness and probably something burnout-related, and found this helpful to read. E.g. this feels very familiar when I want to engage with more complicated research questions:

That depth of thinking and amount of working memory sounds way too hard right now. I try, but 3 minutes later I give up. I decide to read something instead. I feel the strong desire to sit in a comfy bean bag and get a blanket.

Had to laugh at this one, sounds like torture to me xD 

Even staring at the wall for 10 minutes sounds gre

... (read more)
4
Luise
10mo
Thanks Max! Sounds like a plausible theory that you lost motivation because you pushed yourself too hard. I'd also pay attention to "dumber" reasons like maybe you had more motivation from supervisors/social environment/more achievable goals in the past. Similar to my call to take a vacation, maybe it's worth it for you to only do motivating work (like a side project) for 1.5 weeks and see if the tiredness disappears. All of this with the caveat that you understand your situation a lot better than I do ofc!

How do you evaluate community notes? Multiple times they have given me fairly informative context on some viral tweets, and it seems like they were introduced under Musk.

1
ludwigbald
10mo
Community notes are great, even though they are (still?) restricted to the US. The good thing is that they seem to work fast enough so most tweet impressions are actually annotated.
4
Jackson Wagner
10mo
Community notes seem like a genuinely helpful improvement on the margin -- but coming back to this post a year later, I would say that on net I am disappointed. (Disclaimer -- I don't use twitter much myself, so I can't evaluate people's claims of whether twitter's culture has noticeably changed in a more free-speech direction or etc. From my point of view just occasionally reading others' tweets, I don't notice any change.)

During the lead-up to the purchase, people were speculating about all kinds of ways that Twitter could try to change its structure & business model, like this big idea that it could split apart the database from the user-interface, then allow multiple user-interfaces (vanilla Twitter plus third-party alternatives) to compete and use the database in different ways, including doing the federated censorship that Larks mentioned in his comment. The database would almost become the social version of what blockchains are for financial transactions -- a kind of central repository of everything that everyone's saying, which is then used and filtered and presented in many different ways.

But instead, the biggest change so far has been the introduction of a subscription model. Maybe this is just Step 1 of a larger process (gotta start by stabilizing the company and making it profitable)... but it seems like there is no larger vision for big changes/experiments like this. With a year of hindsight, it seems like Elon's biggest concerns were just the sometimes aggressively left-wing moderation/norms of the site, and the way that the bluecheck system favored certain groups like journalists. It seems like now he's fixed those perceived problems, but it hasn't resulted in a transformative improvement to the platform, and there are simply no more steps in the plan.

So, that's unfortunate. But I am still optimistic that Twitter is interested in experimenting and trying new things -- even if there isn't a concrete vision, I guess I am still optimistic th

Thanks for sharing, that's a refreshingly nice article. :D Big fan of HIA!

“In New Zealand, we’ve got ‘tall poppy syndrome’,” says Inglis, “where those who like to stand out will get cut down. And the New Zealand public love to do that sometimes, which is great because it keeps us humble, but at the same time, it can reduce people’s confidence to put themselves out there and talk about issues which they care about. So we’re trying to work with athletes so they can put themselves out there to deliver messages that they can be really confident in and that the

... (read more)
1
George Timms
1y
Thank you, Max!

I often feel guilty for eating out at restaurants. Especially when meat is involved.

I kinda feel like I personally wouldn't want to use the app like this; it spontaneously feels like I wouldn't fully own the tradeoffs I'm under or something? Like I'd be trying to distract myself from the outcomes of the choices I'm making? If I thought I made the best tradeoff by eating meat now and then, I'd probably just want to cry about it one time and make peace with living in a world that also features this particular cruel tradeoff.

(And now back to reducing x-risks from AI! <3 )

Thanks for the updates, I'm really grateful for your work and wish you all the best for the rest of the year! 

Since this update, we’ve hired 38 more staff members!

Pretty cool to see you growing the team, would be interested in the challenges and lessons learned.

Thanks for your work, and for sharing your thoughts, that all makes sense to me and I'm glad that you seem to have success in making people feel psychologically safe and encouraged to make their ideas happen! (And thanks for reminding me of the Google study)

I'm not yet sure why socials and rationality skill trainings appear to be everything the Berlin crowd wants.

Well, we also have a very popular TEAMWORK speaker series, and I'm part of one highly regarded cause-specific dinner networking thing! :P So maybe I'd indeed guess that this is partially a founder... (read more)

3
Severin
1y
Thanks! Yep, "socials are all people want" is a bit of a hyperbole. In addition to the TEAMWORK talks, we also have the Fake Meat - Real Talk reading/discussion group dinners, and will have a talk at the next monthly social, too. The one-day career workshops sound great, added to the to-do list.

But "the reasonable restrictionist mood is anguish that a tremendous opportunity to enrich mankind and end poverty must go to waste." You might think that restricting immigration is sometimes the lesser evil, but if you don't have this mood, you're probably just ~xenophobic.

I don’t see why they would feel anguish if they don’t believe in the first place that open borders would enrich mankind and end poverty? I guess it works if they value something else, like cultural homogeneity. But even then it seems reasonable not to feel anguish about tradeoffs one... (read more)

For an NVIDIA A100, the on-board memory bandwidth is around 2GB/s

I think this should be 2TB/s? 
 

And ping!

We are working on a piece with more insights on the utilizations and some advice on how to estimate training compute and the connected utilization of the system (link to be added by the end of 2021; ping me if not).

Thanks for this! Your summaries usually cause me to add ~1-2 relevant posts to my reading list, and to remove ~2-3 others for which I feel satisfied just having read your summary. :)

Thanks for your work here, it's a useful overview for the compute metrics project I'm working on with Peter. Minor errors:

Also commonly used is Petaflop/s-day. It's also a quantity of operations. A petaflop/s is  floating point operations per second for one day. A day has . That makes  FLOPs.

  • A petaflop/s-day is 10^15 FLOP/s sustained for one day, i.e. ≈ 8.64 × 10^19 FLOP (quick check below)
  • A day has 86,400 (≈ 10^5) seconds
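As a quick sanity check of that conversion (using only the definitions involved, nothing from the post itself):

$$1\ \text{petaflop/s-day} = 10^{15}\ \text{FLOP/s} \times 86{,}400\ \text{s} \approx 8.64 \times 10^{19}\ \text{FLOP} \approx 10^{20}\ \text{FLOP}$$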
2
MaxRa
1y
I think this should be 2TB/s?    And ping!

Cool, thanks for doing that analysis! I'm wondering whether the scores you derived would be a great additional performance metric to provide to forecasters, specifically

a) the average contribution over all questions, and

b) the individual contribution for each question (rough sketch of what I mean below).
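To make (a) and (b) concrete, here's a minimal sketch of the kind of summary I have in mind, assuming the per-question contribution scores from your analysis were available as a simple table (the column names and numbers here are made up for illustration):

```python
import pandas as pd

# Hypothetical per-question contribution scores, e.g. as derived in the analysis.
# Column names and values are made up for illustration only.
scores = pd.DataFrame({
    "forecaster":   ["A",  "A",   "B",  "B"],
    "question":     ["Q1", "Q2",  "Q1", "Q2"],
    "contribution": [0.12, -0.03, 0.05, 0.20],
})

# a) average contribution over all questions, per forecaster
avg_contribution = scores.groupby("forecaster")["contribution"].mean()

# b) individual contribution for each question, per forecaster
per_question = scores.pivot(index="forecaster", columns="question", values="contribution")

print(avg_contribution)
print(per_question)
```

Both would be easy to surface to forecasters alongside the usual accuracy scores.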

Thanks for writing this up. I saw people asking for upvotes a couple of times, e.g. on Slack channels, from people I'm fairly sure are well-meaning and cooperative and who just haven't considered the problems this behavior causes. I never said anything because I suspected they are pretty self-conscious about getting up/downvotes and about posting on the forum in general (to which I can relate :D), and starting a conversation that touches on those issues seemed a bit too much every time.

FWIW, different communities treat it differently. It's a no-go to ask for upvotes at https://hckrnews.com/ but is highly encouraged at https://producthunt.com/.

Thanks for the comment. I agree that well-meaning and cooperative people sometimes end up vote-brigading (or borderline), and I imagine that there are people who might read this and feel quite bad. I really don't want that. 

I'm just hoping that we can make lots of people aware of this to prevent accidental/uninformed/absent-minded cases of this happening. 

(Then we'd still be left with clearer-cut cases of uncooperative behavior, but if nothing else, the people being asked to vote-brigade might be able to warn the mods more easily with increased awareness of the problem.)

Really appreciate the level of detail you provide on your thinking here! And I’m very glad to hear that it’s been going so well, hope the next year will be even better. :)

4
Alfredo_Parra
1y
Thanks for the feedback, Max! And also for your support in the past. Super appreciated. :)

Fwiw, I think your examples are all based on less controversial conditionals, though, which makes them less informative here. And I also think the topics that are conditioned on in your examples have already received enough analysis that I'm less worried about people making things worse*, as they will be aware of more relevant considerations, in contrast to the treatment in the background discussions that Larks discussed.

*(except the timelines example, which still feels slightly different though as everything seems fairly uncertain about AI strategy)

Hmm good point that my examples are maybe too uncontroversial, so it's somewhat biased and not a fair comparison. Still, maybe I don't really understand what counts as controversial, but at the very least, it's easy to come up with examples of conditionals that many people (and many EAs) likely place <50% credence on, but are still useful to have on the forum:

... (read more)

I also relate a lot due to my PhD experience. Thanks so much for writing this, I’m glad you got out of it as well.

Maybe the lesson here is that we should be more proactive about watching and checking in on other members of the EA community

I think that’s a really good idea. While people saw my struggles during my PhD, I think there was never a real intervention of someone talking it out systematically with me. I haven’t followed up on their work, but maybe this project is covering something like this and is still ongoing/worth expanding? https://forum.e... (read more)

3
zekesherman
1y
I love this. I want to be their ambassador and give speeches in elementary schools.

Thanks for sharing your thoughts, I particularly appreciated you pointing out the plausible connection between experiencing scarcity and acting less prosocially / with less integrity. And I agree that experiencing scarcity in terms of social connections and money is unfortunately still sufficiently common in EA that I'm also pretty worried when people e.g. want to systematically tone down aspects that would make EA less of a community.

Game-theoretically, it makes total sense for people to be a bit untrustworthy while they are in a bad place in their life.

... (read more)
1
Severin
1y
Yep, I agree with that point - being untrustworthy and underresourced are definitely not the same thing.

What do you think about the idea of large donors holding back some of their funding and directly transferring it to the people on the board? Or the donors could maybe earmark some part of their funding for that purpose. Then the people on the board wouldn't have to feel like their income depends on their relationships with people in the org.

7
Renan Araujo
1y
I think the issue is more that such an income would depend on the org's performance or existence even in that arrangement, and that directors should be ready to make hard decisions that could e.g., shut down the organization. Depending on the org in any way would limit their decision power to make such calls.

This comment made me wonder what type of norm you're asking about. From Wiktionary:

  1. That which is normal or typical.
    1. Unemployment is the norm in this part of the country.
  2. A rule that is imposed by regulations and/or socially enforced by members of a community.
    1. Not eating your children is just one of those societal norms.

On second thought, veganism is maybe neither 1 nor 2; at least, this is what the EA survey found 5 years ago about the frequency of veganism and vegetarianism:

39% of the effective altruism population reported being vegan or vegetarian

MaxRa
1y

Fwiw, I've been active in the broader EA community for a couple of years now (mostly in Germany & online), and your examples felt fairly foreign to me, like something I'd cringe at if I heard somebody say them.

  • “You’re not really an EA unless you live frugally so you can donate a lot”
  • “I don’t think we should weigh people’s opinions too heavily unless they actually understand Bayesian reasoning”
  • “Real EAs are vegan”

Same, I have never heard any of these. Perhaps some people are saying these things, but I'd be very surprised to, say, hear anything like this being shared in the EA leaders Slack (not that I'm in it, but from speaking to many EA leaders, they all seem chill).

EAs tend to speak in really nuanced ways, so the furthest I've heard someone go is saying things like "I've found Bayesian reasoning to be an irreplaceable tool and want us to help new EAs learn it and be aware of the value themselves" or "Eating vegan has been shown to increase compassi... (read more)

Same, been active since 2016 and these seem odd to me. I would say anyone who's really interested in the question of how to help others effectively using reason and evidence is an EA.

4
Kirsten
1y
I have occasionally heard people say things like this, but more often I've heard things that sound like this is the underlying assumption. I agree that it would be super cringe for someone to actually come out and say one of those statements!