All of Buck's Comments + Replies

I think it's pretty unreasonable to call him a Nazi--he'd hate Nazis, because he loves Jews and generally dislikes dumb conservatives.

I agree that he seems pretty racist.

Most importantly, it seems to me that the people in EA leadership that I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn't live up to their ethical standards, or because they burned out trying to effect change, and this recent period has been very stressful.

Who on your list matches this description? Maybe Becca if you think she's thoughtful on these issues? But isn't that one at most?

-1
Habryka
19d
Becca, Nicole and Max all stand out as people who I think burned out trying to make things go better around FTX stuff. Also Claire leaving her position worsened my expectations of how much Open Phil will do things that seem bad. Alexander also seems substantially worse than Holden on this dimension. I think Holden was on the way out anyways, but my sense was Claire found the FTX-adjacent work very stressful and that played a role in her leaving (I don't think she agrees with me on many of these issues, but I nevertheless trusted her decision-making more than others in the space).

I think that one reason this isn’t done is that the people who have the best access to such metrics might not think it’s actually that important to disseminate them to the broader EA community, rather than just sharing them as necessary with the people for whom these facts are most obviously action-relevant.

Yeah, I think that might be one reason it isn't done. I personally think that it is probably somewhat important for the community to understand itself better (e.g., the relative progress and growth in different interests/programs/geographies). Especially for people in the community who are community builders, recruiters, founders, etc. I also recognise that it might not be seen as a priority for various reasons, or as risky for other reasons, and I haven't thought a lot about it.

Regardless, if people who have data about the community that they don't want to... (read more)

I think you're right that my original comment was rude; I apologize. I edited my comment a bit.

I didn't mean to say that the global poverty EAs aren't interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell's meticulous reasoning. I've edited my comment to make it sound less like I'm saying that the global poverty EAs are dumb or uninterested in thinking.

But I do stand by the claim that you'll understand EA better if you think of "promote AMF" and "try to reduce AI x-risk" as results of two fairly differe... (read more)

9
NickLaing
6mo
Nice one, makes much more sense now, appreciate the change a lot :), have retracted my comment now (I think it can still be read, haven't mastered the forum even after hundreds of comments...)
Buck · 6mo · 34 · 16 · 4

I don't think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you're advocating here. My oversimplified model of the situation is more like:

  • Some EAs don't feel very persuaded by this kind of reasoning, and end up donating to global development stuff like bednets.
  • Some EAs are moved by this kind of reasoning, and decide not to engage with global development because this kind of reasoning suggests higher impact alternatives. They don't really spend much time thinking about h
... (read more)
5
richard_ngo
6mo
Makes sense, though I think that global development was enough of a focus of early EA that this type of reasoning should have been done anyway. I’m more sympathetic about it not being done after, say, 2017.

Thanks a lot, that makes sense. This comment no longer stands after the edits so I have retracted it; really appreciate the clarification!

(I'm not sure it's intentional, but this comes across as patronizing to global health folks. Saying folks "don't want to do this kind of thinking" is both harsh and wrong. It seems like you suggest that "more thinking" automatically leads people down the path of "more important" things than global health, which is absurd.

Plenty of people have done plenty of thinking through an EA lens and decided that bed nets are a great place ... (read more)

[This comment is no longer endorsed by its author]
Buck · 7mo · 17 · 11 · 7

Note that L was the only example in your list which was specifically related to EA. I believe that that accusation was false. See here for previous discussion.

The situation with person L was deeply tragic. This comment explains some of the actions taken by CEA’s Community Health team as a result of their reports.

Even if most examples are unrelated to EA, if it's true that the Silicon Valley AI community has zero accountability for bad behavior, that seems like it should concern us?

EDIT: I discuss a [high uncertainty] alternative hypothesis in this comment.

8
Lucretia
7mo
I don’t see anything in the post linked in this comment showing, from legitimate sources, that L’s report was false.

I would also be interested in more clarification about how EA relevant the case studies provided might be, to whatever extent this is possible without breaking confidentiality. For example:

We were pressured to sign non-disclosure agreements or “consent statements” in a manipulative “community process”.

this does not sound like the work of the CEA Community Health team, but it would be an important update if it was, and it would be useful to clarify if it wasn't, so people don't jump to the wrong conclusions.

That being said, I think the AI community in the Bay Ar... (read more)

7
J
7mo
OP stated that L’s accusations were dismissed by the EA community.

I believe that these accusations are false. See here for previous discussion.

-2
Lucretia
7mo
I do not see valid claims that L’s report was false on the post you link, and to be totally honest, this comment is a bit of a red flag.
4
J
7mo
OP stated that L’s accusations were dismissed by the EA community. Your link doesn’t provide any proof that they’re false. So it seems like you’re proving OP right.

Can you give some examples of other strategies you think seem better?

eg, some (much lighter) investigation, followed by:

  • denying them power/resources if you are personally in a position to do so
  • talking to the offenders if you think they are corrigible and not retributive
    • alternatively, talking to someone in a senior position/position of authority over the offenders who can deliver the message more sternly etc
  • (if nonprofit) talking to the nonprofit's board if it's not captured
  • (if grad student, and the problems are professional) talking to their advisor if you think the advisor's sympathetic to your concerns
  • (if funded by EA fol
... (read more)
Buck · 8mo · 9 · 7 · 2 · 1 · 1

I think it was unhelpful to refer to “Harry Potter fanfiction” here instead of perhaps “a piece of fiction”—I don’t think it’s actually more implausible that a fanfic would be valuable to read than some other kind of fiction, and your comment ended up seeming to me like it was trying to use the dishonest rhetorical strategy of implying without argument that the work is less likely to be valuable to read because it’s a fanfic.

2
Linch
8mo
Adjusted for popularity or likelihood of recommendation, you might naively expect fiction that someone is presented with to be more likely to stand the test of time than fan fiction, since the selection effects are quite different.
2
Joseph Lemien
8mo
I think that is a fair and accurate criticism. I do view most fan fiction as fairly low quality, but even if that is true it doesn’t imply that all fan fiction is low quality. And I do think that some fiction can be used for self-improvement purposes.

I found Ezra's grumpy complaints about EA amusing and useful. Maybe 80K should arrange to have more of their guests' children get sick the day before they tape the interviews.

I agree that we should tolerate people who are less well read than GPT-4 :P

4
Joseph Lemien
10mo
(I'm writing with a joking, playful, tongue-in-cheek intention) If we are setting the bar at "to join our community you need to be at least as well read as GPT-4," then I think we are setting the bar too high. More seriously: I agree that it isn't impossible for someone to figure out what it means, it is just a bit harder than I would like. Like when someone told me to do a "bow tech" and I had no idea what she was talking about, but it turns out she was just using a different name for a Fermi estimate (a BOTEC).
Buck · 1y · 15 · 5 · 1

I think this is a great question. My answers:

  • I think that some plausible alignment schemes seem like they could plausibly involve causing suffering to the AIs. I think that it seems pretty bad to inflict huge amounts of suffering on AIs, both because it's unethical and because it seems potentially inadvisable to make AIs justifiably mad at us.
  • If unaligned AIs are morally valuable, then it's less bad to get overthrown by them, and perhaps we should be aiming to produce successors who we're happier to be overthrown by. See here for discussion. (Obviously the
... (read more)
2
Vasco Grilo
1y
Hi Buck, are you confident that being overthrown by AIs is bad? I am quite uncertain. For example, maybe most people would say that humans overpowering other animals was good overall.
9
Jason
1y
I'm curious to what extent the value of the "happiness-to-be-overthrown-by" (H2BOB) variable for the unaligned AI that overthrew us would be predictive of the H2BOB value of future generations / evolutions of AI. Specifically, it seems at least plausible that the nature and rate of unaligned AI evolution could be so broad and fast that knowing the nature and H2BOB of the first AGI would tell us essentially nothing about prospects for AI welfare in the long run.
4
JP Addison
1y
I like this answer and will read the link in bullet 2. I'm very interested in further reading in bullet 1 as well.
Answer by Buck · Apr 27, 2023 · 34 · 12 · 1

My attitude, and the attitude of many of the alignment researchers I know, is that this problem seems really important and neglected, but we overall don't want to stop working on alignment in order to work on this. If I spotted an opportunity for research on this that looked really surprisingly good (e.g. if I thought I'd be 10x my usual productivity when working on it, for some reason), I'd probably take it.

It's plausible that I should spend a weekend sometime trying to really seriously consider what research opportunities are available in this space.

My guess is that a lot of the skills involved in doing a good job of this research are the same as the skills involved in doing good alignment research.

Thanks Lizka. I think you mean to link to this video: 

fwiw the two videos linked look identical to me (EAG Bay Area 2023, "The current alignment plan, and how we might improve it")

Holden's beliefs on this topic have changed a lot since 2012. See here for more.

Buck · 1y · 54 · 16 · 1

I really like this frame. I feel like EAs are somewhat too quick to roll over and accept attacks from dishonest bad actors who hate us for whatever unrelated reason.

Yeah, I noticed a huge difference between EAs and my politically active right-wing friends, for whom disingenuous media articles calling you racist are just an occupational hazard. I think especially a lot of younger EAs straight out of college are used to affiliating with moral-language-coded condemnation and find being the recipient, or adjacent to the recipient, very disorientating.

Answer by Buck · Feb 28, 2023 · 20 · 8 · 0

Yes, I think this is very scary. I think this kind of risk is at least 10% as important as the AI takeover risks that I work on as an alignment researcher.

1
diodio_yang
1y
I am almost inclined to believe this will bring more imminent ruin to humanity. Xi and Putin are both aging, but it is likely that they will both live long enough to use AI to significantly extend their power and create a more obedient people. Dictators get more paranoid as they age; I am afraid that the combination of this paranoia and their perceived increase in power will encourage them to wage global war.
Buck · 1y · 12 · 6 · 1

I don't think Holden agrees with this as much as you might think. For example, he spent a lot of his time in the last year or two writing a blog.

Buck · 1y · 26 · 12 · 0

I think it's absurd to say that it's inappropriate for EAs to comment on their opinions on the relative altruistic impact of different actions one might take. Figuring out the relative altruistic impact of different actions is arguably the whole point of EA; it's hard to think of something that's more obviously on topic.

8
ThomasW
1y
Agreed. My sense is that much of the discomfort comes from the tendency that people have to want to have their career paths validated by a central authority. But that isn't the point of 80k. The point of 80k is to direct people towards whatever they think is most impactful. Currently that appears to be mostly x-risk. If you meet some of the people at places like 80k and so forth, I think it's easier to realize that they are just people who have opinions and failings like anyone else. They put a lot of work into making career advising materials, and they might put out materials that say that what you are doing is "suboptimal." If they are right and what you're doing really is clearly suboptimal, then maybe you should feel bad (or not; depends on how much you want to feel bad about not maximizing your altruistic impact). But maybe 80k is wrong! If so, you shouldn't feel bad just because some people who happen to work at 80k made the wrong recommendation.
0
Mack the Knife
1y
Like I've said in many other comments, I don't have a problem with their ranking or the fact that there is a ranking in the first place. And of course they are explicit about their values. But I still think there are ways to push x-risk as the top priority whilst also conveying other cause areas as more valuable than they currently are. Difficult of course, but not impossible. The key problem is that I'm not sure many people discouraged from "less important causes" then happily go into longtermism. I think it's more likely they stop being active altogether (this is my personal impression of course, from my own experiences and many conversations). Because you can't force yourself to care about something when you simply don't - even if you want to and even if that'd be the "best" and most rational thing to do. So people in "less important causes" might be lost altogether and not doing their "less important" but still pretty valuable (I think) work anymore. And that is the concern I wanted to voice. Not all that "absurd", I think.
Buck · 1y · 29 · 12 · 0

Obviously it would have been better if those organizers had planned better. It's not clear to me that it would have been better for the event to just go down in flames; OP apparently agreed with me, which is why they stepped in with more funding.

I don't think the Future Forum organizers have particularly strong relationships with OP.

The main bottleneck I'm thinking of is energetic people with good judgement to execute on and manage these projects.

How come you think that? Maybe I'm biased from spending lots of time with Charity Entrepreneurship folks but I feel like I know a bunch of talented and entrepreneurial people who could run projects like the ones mentioned above. If anything, I would say neartermist EA has a better (or at least, longer) track record of incubating new projects relative to longtermist EA!

Buck · 1y · 20 · 7 · 5

I disagree, I think that making controversial posts under your real name can improve your reputation in the EA community in ways that help your ability to do good. For example, I think I've personally benefited a lot from saying things that were controversial under my real name over the years (including before I worked at EA orgs).

3
RyanCarey
1y
Yes, but you've usually been arguing in favour of (or at least widening the Overton window around) elite EA views vs the views of the EA masses, have been very close to EA leadership, and are super disagreeable - you are unrepresentative on many relevant axes.
Buck · 1y · 26 · 15 · 3

Stand up a meta organization for neartermism now, and start moving functions over as it is ready.

As I've said before, I agree with you that this looks like a pretty good idea from a neartermist perspective.

 Neartermism has developed meta organizations from scratch before, of course.

[...]

which is quite a bit more than neartermism had when it created most of the current meta.

I don't think it's fair to describe the current meta orgs as being created by neartermists and therefore argue that new orgs could be created by neartermists. These were created by ... (read more)

2
Jason
1y
While neartermists may be a  "small fraction" of the pie of "most energetic/ambitious/competent people," that pie is a lot larger than it was in the 2000s. And while funding is not a replacement for good people, it is (to a point) a force multiplier for the good people you have. The funding situation would be much better than it was in the 2000s. In any event, I am inclined to think that many neartermists would accept B-list infrastructure if that meant that the infrastructure would put neartermism first  -- so I don't think the infrastructure would have to be as good. I'm just not sure if there is another way to address some of the challenges the original poster alludes to. For the current meta organizations to start promoting neartermism when they believe it is significantly less effective would be unhealthy from an epistemic standpoint. Taking the steps necessary to help neartermism unlock the potential in currently unavailable talent/donor pools I mentioned above would -- based on many of the comments on this forum -- impair both longtermism's epistemics and effectiveness. On the other hand, sending the message that neartermist work is second-class work is not going to help with the recruitment or retention of neartermists. It's not clear to me what neartermism's growth (or maintenance) pathway is under current circumstances. I think the crux may be that I put a lot of stock in potentially unlocking those pools as a means of creating counterfactual value.  I understand that a split would be sad, although I would view it more as a sign of deep respect in a way -- as an honoring of longtermist epistemics and effectiveness by refusing to ask longtermists to compromise them to help neartermism grow. (Yes, some of the reason for the split may have to do with different needs in terms of willingness to accept scandal risk, but that doesn't mean anyone thinks most longtermists are scandalous.)
Buck · 1y · 29 · 9 · 3

Fwiw my guess is that longtermism hasn’t had net negative impact by its own standards. I don’t think negative effects from AI speed up outweigh various positive impacts (e.g. promotion of alignment concerns, setting up alignment research, and non-AI stuff).

One issue for me is just that EA has radically different standards for what constitutes "impact." If near-term: lots of rigorous RCTs showing positive effect sizes.

 If long-term: literally zero evidence that any long-termist efforts have been positive rather than negative in value, which is a hard enough question to settle even for current-day interventions where we see the results immediately . . .  BUT if you take the enormous liberty of assuming a positive impact (even just slightly above zero), and then assume lots of people in the future, everything has a huge positive impact. 

Buck · 1y · 22 · 10 · 2

and then explains why these longtermists will not be receptive to conventional EA arguments.

I don't agree with this summary of my comment btw. I think the longtermists I'm talking about are receptive to arguments phrased in terms of the classic EA concepts (arguments in those terms are how most of us ended up working on the things we work on).

Buck · 1y · 34 · 14 · 8

Holden Karnofsky on evaluating people based on public discourse:

I think it's good and important to form views about people's strengths, weaknesses, values and character. However, I am generally against forming negative views of people (on any of these dimensions) based on seemingly incorrect, poorly reasoned, or seemingly bad-values-driven public statements. When a public statement is not misleading or tangibly harmful, I generally am open to treating it as a positive update on the person making the statement, but not to treating it as worse news about the

... (read more)

Buck seems to be consistently missing the point. 

Although leaders may say "I won't judge or punish you if you disagree with me", listeners are probably correct to interpret that as cheap talk. We have abundant evidence from society and history that those in positions of power can and do act against them. A few remarks to the contrary should not convince people they are not at risk.

Someone who genuinely wanted to be open to criticism would recognise and address the fears people have about speaking up. Buck's comment of "the fact that people want to hid... (read more)

6
quinn
1y
"say things that feel true but take actual bravery" (as opposed to "perceived bravery" where they're mis-estimating how unpopular a sentiment is) is definitely a high variance strategy for building relationships in which you're valued and respected, unavailable to the risk-intolerant, can backfire. 
Buck · 1y · 36 · 24 · 10

I think you're imagining that the longtermists split off and then EA is basically as it is now, but without longtermism. But I don't think that's what would happen. If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infras... (read more)

My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does--it's not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.

This paragraph feels pretty over the top. When you say "resources" I assume you mean... (read more)

Also worth noting that "all four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — were all major themes in the movement since its relatively early days, including at the very first "EA Summit" in 2013 (see here), and IIRC for at least a few years before then." (Comment by lukeprog)

6
Jason
1y
I think a split proposal is more realistic on a multi-year timeframe. Stand up a meta organization for neartermism now, and start moving functions over as it is ready. (Contra the original poster, I would conceptualize this as neartermism splitting off; I think it would be better to fund and grow new neartermist meta orgs rather than cripple the existing ones with a longtermist exodus. I also think it may be better off without the brand anyway.)  Neartermism has developed meta organizations from scratch before, of course. From all the posts about how selective EA hiring practices are, I don't sense that there is insufficient room to staff new organizations. More importantly, meta orgs that were distanced from the longtermist branch would likely attract people interested in working in GHD, animal advocacy, etc. who wouldn't currently be interested in affiliating with EA as a whole. So you'd get some experienced hands and a good number of new recruits . . . which is quite a bit more than neartermism had when it created most of the current meta. In the end, I think neartermism and longtermism need fundamentally different things. Trying to optimize the same movement for both sets of needs doesn't work very well. I don't think the need to stand up a second set of meta organizations is a sufficient reason to maintain the awkward marriage long-term.
-9
... wild hair
1y
3
harfe
1y
I broadly agree. My guess would be that the people who want an EA-without-longtermism movement would bite that bullet. The kind of EA-without-longtermism movement that is being imagined here would probably need fewer of those things? For example, going to EAG is less instrumentally useful when all you want is to donate 10% of your income to the top recommended charity by GiveWell, and more instrumentally useful when you want to figure out what AI safety research agenda to follow.
Buck · 1y · 13 · 5 · 0

Note that while Caleb is involved with grantmaking, I don’t think he has funded Atlas, so this post isn’t about a grantee of his.

Buck · 1y · 47 · 20 · 13

Note that "lots of people believe that they need to hire their identities" isn't itself very strong evidence for "people need to hide their identities". I agree it's a shame that people don't have more faith in the discourse process.

7
JWS
1y
I agree, but many people - some of whom seem to be highly engaged in the community - clearly believe that they do need to do so. Even if the truth is that they don't, the fact that this belief is widespread is a significant issue for our community. We need to have open mechanisms for feedback on actions and ideas, so that we do end up doing the most good. Tl;dr: You are correct, these are logically separate points. But people not having faith in the process is the important issue imo.

While I do think I maybe disagree with Buck on the actual costs to people speaking openly, I also think there are pretty big gains in terms of trust and reputation to be had by speaking out openly. In my experience it's a kind of increase-in-variance of how people relate to you, with an overall positive mean, with there definitely being some chance of someone disliking you being open, but there also being a high chance of someone being very interested in supporting you and caring a lot about you not being unfairly punished for speaking true things, and rew... (read more)

Buck · 1y · 15 · 10 · 7

Thanks for the specific proposals.

The reasonable person also knows that senior EAs have a lot of discretionary power, and thus there is a significant chance retaliatory action would not be detected absent special safeguards.

FWIW, I think you're probably overstating the amount of discretionary power that senior EAs could use for retaliatory action.

IMO, if you proposition someone, you're obligated to mention this to other involved parties in situations where you're wielding discretionary power related to them. I would think it was wildly inappropriate for a ... (read more)

Thanks, Buck. It is good to hear about those norms, practices, and limitations among senior EAs, but the standard for what constitutes harassment has to be what a reasonable person in other person's shoes would think. The student or junior EA experiences harm if they believe a refusal will have an adverse effect on their careers, even if the senior EA actually lacks the practical ability to create such an effect.[1]

The reasonable student or junior EA doesn't know about undocumented (or thinly documented) norms, practices, and limitations among senior EAs. ... (read more)

For what it's worth, my current vote is for immediate suspension in situations where there are credible allegations that anyone in a grantmaking etc. capacity used such powers in retaliation for rejected romantic or sexual advances. In addition to being illegal, such actions are just so obviously evidence of bad judgement and/or poor self-control that I'd hesitate to consider anyone who acted in such ways a good fit for any significant position of power. I have not thought about the specific question much, but it's very hard for me to imagine any realistic situation where someone with such traits is a good fit for grantmaking.

2
Arepo
1y
Fwiw, someone was just observing on a different thread how many 'burner' or similar accounts have recently been showing up on the forum. So it seems like many junior EAs do in fact believe that being negatively identified by senior EAs could be harmful to their prospects.
Buck · 1y · 18 · 6 · 8

Re 2: You named a bunch of cases where a professional relationship comes with restrictions on sex or romance. (An example you could have given, which I think is almost universally followed in EA, is "people shouldn't date anyone in their chain of management"; IMO this is a good rule.) I think it makes sense to have those relationships be paired with those restrictions. But it's not clear to me that the situation in EA is more like those situations than like these other situations:

  • Professors dating grad students who work at other universities
  • Well-respected artis
... (read more)

So, I think the first question is something like: "Could a reasonable person in the shoes of the lower-status person conclude that rebuffing the overtures of the higher-status person [1] could result in a meaningfully adverse impact on their career due to the higher-status person taking improper action?"[2] 

I think in the vast majority of cases following your three hypotheticals would result in a "no" answer to this question. For instance, most professors have relatively little influence on the operations of other universities, or on the nat... (read more)

Yeah, IMO medals definitely don't suffice for me to think it's extremely likely that someone will be (AFAICT) good at doing research.

Buck · 1y · 21 · 11 · 2

I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.

For example:

  • People who are "highly intelligent" are generally more suitable for projects/jobs/roles.
  • People who agree with the foundational claims underlying a theory of change are more suitable for projects/jobs/roles that are based on that theory of change.

For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to

... (read more)

Thanks for the nuanced response. FWIW, this seems reasonable to me as well:

I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.

Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But th... (read more)

I do not actually endorse this comment above. It is used as an illustration of why a true statement alone might not mean it is "fair game", or a constructive way to approach what you want to say. Here is my real response:

As a random aside, I thought that your first paragraph was totally fair and reasonable and I had no problem with you saying it.

@Buck – As a hopefully constructive point, I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating your critique between a general critique of critical writing on the EA Forum and critiques of specific people (me or the OP author).

I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should

I agree! But given this, I think the two things you mention often feel highly correlated, and it's hard for people to actually know, when you make a statement like that, that there's no negative judgement either from you or from other readers of your statement. It also feels a bit weird to suggest there's no negative judgement if you also think the forum is a better place without their critical writing?

In gen

... (read more)

By “unpleasant” I don’t mean “the authors are behaving rudely”, I mean “the content/framing seems not very useful and I am sad about the effect it has on the discourse”.

I picked that post because it happened to have a good critical comment that I agreed with; I have analogous disagreements with some of your other posts (including the one you linked).

Thanks for your offer to receive critical feedback.

4
weeatquince
1y
Also, I wonder if we should try (if we can find the time) co-writing a post on giving and receiving critical feedback on EA. Maybe we diverge in views too much and it would be a train wreck of a post, but it could be an interesting exercise to try, maybe try to pull out a ToC. I do agree there are things that I think both I and the OP authors (and those responding to the OP) could do better.

Thank you Buck, that makes sense :-)

 

“the content/framing seems not very useful and I am sad about the effect it has on the discourse”

I think we very strongly disagree on this. I think critical posts like this have a very positive effect on  discourse (in EA and elsewhere) and am happy with the framing of this post and a fair amount (although by no means all) of the content. 

I think my belief here is rooted in quite strong lifetime experiences in favour of epistemic humility, human overconfidence especially in the domain of doing good, positi... (read more)

Buck · 1y · 30 · 8 · 2

Thanks for your sincere reply (I'm not trying to say other people aren't sincere, I just particularly felt like mentioning it here).

Here are my thoughts on the takeaways you thought people might have.

  • There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)

As I s... (read more)

  • Why is only "arguable" that you had more power when you were an active grantmaker?

I removed "arguable" from my comment. I intended to communicate that even when I was an EAIF grantmaker, that didn't clearly mean I had "that much" power--e.g. other fund managers reviewed my recommended grant decisions, and I moved less than a million dollars, which is a very small fraction of total EA spending.

  • Do you mean you don't have much power, or that you don't use much power? 

I mean that I don't have much discretionary power (except inside Redwood). I can't unila... (read more)

6 · [anonymous] · 1y
I appreciate the clarification!  It sounds to me that what you're saying is that you don't have any formal power over non-Redwood decisions, and most of your power comes from your ability to influence people. Furthermore, this power can be taken away from you without you having any choice in the matter. That seems fair enough. But then you seem to believe that this means you don't actually have much power? That seems wrong to me. Am I misunderstanding something? 
Buck · 1y · 43 · 19 · 6

I think Lark's response is reasonably close to my object-level position.

My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal "OpenPhil should diversify its grantmaking by... (read more)

Buck · 1y · 51 · 14 · 4

Some things that I might come to regret about my comment:

  • I think it's plausible that it's bad for me to refer to disagreeing with arguments without explaining why.
  • I've realized that some commenters might not have seen these arguments before, which makes me think that there is more of an opportunity for me to explain why I think these arguments are wrong. (EDIT I'm less worried about this now, because other commenters have weighed in making most of the object-level criticisms I would have made.)
  • I was not very transparent about my goal with this comment, whi
... (read more)
5
ChanaMessinger
1y
Fwiw I think there was an acknowledgement of soft power missing.

I consider this a good outcome--I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future (though I can imagine changing my mind on this).

This was a disappointing comment to read fro... (read more)

I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant.

I think this is somewhat unfair. I think it is unfair to describe this OP as "unpleasant", it seems to be clearly and impartially written and to go out of its way to make it clear it is not picking on individuals. Also I feel like you have cherry picked a post from my post history t... (read more)

Buck · 1y · 11 · 6 · 2

One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy in, which I assume is why they sent this publicly.

I think you're right about this and that my comment was kind of unclearly equivocating between the suggestions that aimed at the community and the suggestions that aimed at orgs. (Though the suggestions aimed at the community also give me a vibe of "please, core EA orgs, start telling people that they should be different in these ways" rather than "here is my argument for why people should be different in these ways").

Buck · 1y · 44 · 19 · 11

A self admitted EA leader posting a response poo-pooing a long thought out criticism with very little argumentation

I'm sympathetic to the position that it's bad for me to just post meta-level takes without defending my object-level position.

Thanks for this, and on reading other comments etc, I was probably overly harsh on you for doing so.

I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact.

 

Obviously it's the responsibility of EAs and EA orgs to actively seek out ways that they could do things better. But I'm just noting that it seems unlikely to me that this post will actually persuade EA orgs to do  things differently, and so if the authors had hoped to have impact via that route, they should try another plan instead.

[This comment is no longer endorsed by its author]
Buck · 1y · 41 · 33 · 27

I think that the class of arguments in this post deserve to be considered carefully, but I'm personally fine with having considered them in the past and decided that I'm unpersuaded by them, and I don't think that "there is an EA Forum post with a lot of discussion" is a strong enough signal that I should take the time to re-evaluate a bunch--the EA Forum is full of posts with huge numbers of upvotes and lots of discussion which are extremely uninteresting to me.

(In contrast, e.g. the FTX collapse did prompt me to take the time to re-evaluate a bunch of what I thought about e.g. what qualities we should encourage vs discourage in EAs.)

GideonF · 1y · 62 · 44 · 17

I'd be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don’t agree with all of them, but in general I think they're pretty good.) A self admitted EA leader posting a response poo-pooing a long thought out criticism with very little argumentation, and mostly criticising it on tangential ToC grounds (which you don't think or want to succeed anyway?) seems like it could be construed to be pretty bad faith and problematic. I don’t normally reply like this, but... (read more)

Buck · 1y · 117 · 102 · 87

[For context, I'm definitely in the social cluster of powerful EAs, though don't have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]

This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven't happened, and probably won't happen, is that no-one who would be able to make them happen has decided t... (read more)

I think all your specific points are correct, and I also think you totally miss the point of the post.

You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don't know how our community is run or why.

On top of that, there are actually consequences for complaining or disagreeing too m... (read more)

One irony is that it's often not that hard to change EA orgs' minds. E.g. on the forum suggestion, which is the one that most directly applies to me: you could look at the posts people found most valuable and see if a more democratic voting system better correlates with what people marked as valuable than our current system. I think you could probably do this in a weekend, it might even be faster than writing this article, and it would be substantially more compelling.[1]

(CEA is actually doing basically this experiment soon, and I'm >2/3 chance the resu... (read more)
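
As a rough illustration of the weekend analysis Buck describes, here is a minimal sketch of the correlation comparison, assuming a hypothetical export of posts with a karma-weighted score, a one-person-one-vote score, and a count of how many readers marked each post as most valuable (the file and column names below are made up for illustration):

```python
# A minimal sketch of the comparison described above, under assumed data.
# "forum_posts.csv" and its columns (karma_weighted_score,
# one_vote_per_user_score, times_marked_most_valuable) are hypothetical
# stand-ins for whatever export the forum team could actually produce.
import pandas as pd
from scipy.stats import spearmanr

posts = pd.read_csv("forum_posts.csv")

# Rank correlation between each scoring system and the "marked as most
# valuable" signal; a higher rho means that scoring system tracks reader
# value judgements more closely.
current_rho, _ = spearmanr(posts["karma_weighted_score"],
                           posts["times_marked_most_valuable"])
democratic_rho, _ = spearmanr(posts["one_vote_per_user_score"],
                              posts["times_marked_most_valuable"])

print(f"Karma-weighted score vs. 'most valuable': rho = {current_rho:.2f}")
print(f"One-person-one-vote vs. 'most valuable':  rho = {democratic_rho:.2f}")
```

Whichever scoring column correlates more strongly with the "most valuable" signal would, under this simple test, be the better predictor of what readers actually found valuable.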

I strongly disagree with this response, and find it bizarre.  

I think assessing this post according to a limited number of possible theories of change is incorrect, as influence is often diffuse and hard to predict or measure.  

I agree with freedomandutility's description of this as an "isolated demand for [something like] rigor".
 

2 · [anonymous] · 1y
Can you clarify this statement? I'm confused about a couple of things:
  • Why is it only "arguable" that you had more power when you were an active grantmaker?
  • Do you mean you don't have much power, or that you don't use much power?

There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.

I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focusses on community building/community health. (Put on top as this got quite long; rationale below, but first:)

I think at least a goal of the post is to get community input (I’ve seen in many previous forum posts) to determine the best suggestions without claiming to have all the answers. Quoted from the original post (intro to 'Sug... (read more)

Buck · 1y · 51 · 14 · 4

Some things that I might come to regret about my comment:

  • I think it's plausible that it's bad for me to refer to disagreeing with arguments without explaining why.
  • I've realized that some commenters might not have seen these arguments before, which makes me think that there is more of an opportunity for me to explain why I think these arguments are wrong. (EDIT I'm less worried about this now, because other commenters have weighed in making most of the object-level criticisms I would have made.)
  • I was not very transparent about my goal with this comment, whi
... (read more)

Relatedly, I think a short follow-up piece listing 5-10 proposed specific action items tailored to people in different roles in the community would be helpful. For example, I have the roles of (1) low-five-figure donor, and (2) active forum participant. Other people have roles like student, worker in an object-level organization, worker in a meta organization, object-level org leader, meta org leader, larger donor, etc.  People in different roles have different abilities (and limitations) in moving a reform effort forward.

I think "I didn't walk away w... (read more)

"If elites haven't already thought of/decided to implement these ideas, they're probably not very good. I won't explain why. " 

"Posting your thoughts on the EA Forum is complaining, but I think you will fail if you try to do anything different. I won't explain why, but I will be patronising." 

"Meaningful organisational change  comes from the top down, and you should be more polite in requesting it. I doubt it'll do anything, though." 

Do you see any similarities between your response here and the problems highlighted by the original post... (read more)

If that's your goal, I think you should try harder to understand why core org EAs currently don't agree with your suggestions, and try to address their cruxes. For this ToC, "upvotes on the EA Forum" is a useless metric--all you should care about is persuading a few people who have already thought about this all a lot. I don't think that your post here is very well optimized for this ToC. 

... I think the arguments it makes are weak (and I've been thinking about these arguments for years, so it would be a bit surprising if there was a big update from t

... (read more)

I strongly downvoted this response.

The response says that EA will not change "people in EA roles [will] ... choose not to", that making constructive critiques is a waste of time "[not a] productive ways to channel your energy" and that the critique should have been better "I wish that posts like this were clearer" "you should try harder" "[maybe try] politely suggesting".

This response seems to be putting all the burden of making progress in EA onto those  trying to constructively critique the movement, those who are putting their limited spare time in... (read more)

Jason · 1y · 31 · 14 · 1

I think that's particularly true of some of the calls for democratization. The Cynic's Golden Rule ("He who has the gold, makes the rules") has substantial truth both in the EA world and in almost all charitable movements. In the end, if the people with the money aren't happy with the idea of random EAs spending their money, it just isn't going to happen. And to the extent there is a hint of cutting off or rejecting donors, that would lead to a much smaller EA to the extent it was followed. In actuality, it wouldn't be -- someone is going to take the donor... (read more)

Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!

May just be disagreement but I think it might be a result of a bias of readers to focus on framing instead of engaging with object level views, when it comes to criticisms.

I think it’s fairly easy for readers to place ideas on a spectrum and identify trade offs when reading criticisms, if they choose to engage properly.

I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.

I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.

I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.

It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.

GideonF · 1y · 89 · 52 · 14

One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy in, which I assume is why they sent this publicly. Whilst they are busy, I'd be pretty disappointed if the core EAs didn't read this and take the ideas seriously (I've tried tagging some on Twitter), and if you're correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously, I'd be concerned about where there were places for people to get their ideas taken seriously. I'm luc... (read more)

Buck · 1y · 16 · 2 · 0

Title: Paul Christiano on how you might get consequentialist behavior from large language models
Author: Paul Christiano
URL: https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-directed?commentId=cbJDeSPtbyy2XNr8E
Why it's good: I think lots of people are very wrong about how LLMs might lead to consequentialist behavior, and Paul's comment here is my favorite attempt at answering this question. I think that this question is extremely important.

Buck · 1y · 13 · 10 · 1
  • Snacks also feel more optional

I strongly disagree fwiw; it was seriously inconvenient for me that there weren't snacks at EAGx Berkeley recently. And they're not very expensive compared to catering actual meals, I think.

4
Vaidehi Agarwalla
1y
Good to know a counterpoint - I guess different people have different preferences (e.g. breakfast is very important for me to be focused / able to fully participate in the mornings). I think I would still make the trade-off of having breakfast over snacks (for reasons I mention in response to Neel below). And now that I think about it, breakfast could just be snacks if it's cheaper.
Buck · 1y · 13 · 6 · 9

I don't think we should think of EA as having "not enough money to pay for all the things they're now considering cancelling". Open Phil has enough money for at least a decade of longtermist spending at current rates; the fact that they aren't spending all their money right now means that they think that there will be grant opportunities in the future that are better than grant opportunities that they choose not to make now.

Decisions to cut back on spending on a community-wide level shouldn't be made from the perspective of short-term budgetary constraints... (read more)

3
Davidmanheim
1y
This doesn't match my model of philanthropic portfolio investment management. One key problem is that there is a lot of value in ongoing funding of organizations and projects, and having a donor who can fund you for a decade is easily >20x as valuable as one who will fund a bunch of work all at once, then disappear - and giving an organization a decade worth of funding all at once is far less useful than monitoring and calibrating to the organization's success and needs.

Your first comment sounded like you're criticizing CEA for their allocation of resources. Your second reply now sounds more like you're criticizing funders (like Open Phil) for not increasing CEA's budget. (Or maybe CEA for not asking for a funding increase more aggressively.) I guess the main thing you're saying is that you find it hard to believe everyone is acting optimally if we have to cut back on EAGs in these ways, given that money isn't that tight in the EA movement as a whole.
