110 karma · Joined Oct 2021




I thought I'd work through how my reasoning goes for the provided examples. 

Many of these examples are grants that have later been funded by other grantmakers or private donors.

In my judgement, most of these (very helpful) concrete examples fall under either a) this deserves a public statement, or b) this represents a subjective judgement call where other funders should make that call independently. It's not that I think a private communication is never the right way to handle it, it's just that it seems to me like they usually aren't, even in the examples that are picked out.

The first three examples all involve subjective judgement calls (by a scientist, by yourself, and by an acquaintance), and it would be bad if these judgement calls (especially one made by a mere acquaintance!) propagated via a whisper network instead of other people making independent decisions.

The next two examples involve grantees not delivering on promises. If they involve sufficiently large grants...well, I think a grantmaker ought to state what the impact of their grants was, and if a grant didn't have impact then that should be noted publicly. This shouldn't be an attack on the grantee; it's transparency by the grantmaker about what impact their grant had, and bad grants should be acknowledged. However, in the scenario where the grantee is intended to remain anonymous, I suppose it is fair to propagate that info via whisper network rather than public statement, though I would question the practice of giving large grants to anonymous grantees. For small grants to individuals: if someone failed to deliver once, isn't it best to let it go and let them try again elsewhere with someone else, the way it would be in any other professional realm? If they failed to deliver multiple times, a whisper network is justified. If they seem to be running a scam, then it's time for a public statement.

The rest of the examples, save the last, involve concerns about character... I mean, outright plagiarism and faking data absolutely should be called out publicly. When it's about less substantial and vaguer reputational concerns, I can see the case for private comms a bit more, although even then it's a goldilocks scenario: if the concerns aren't substantially verified, shouldn't others make their judgement calls independently?

(The final example is valid for a private check, but tautologically so - yes, of course, if the rationale for a grantmaker is "LTFF would probably fund it", they ought to check whether LTFF did in fact evaluate and reject it.)

In summary, I think that for the majority of these examples either the public statement should be made or the issue should be dropped, and it's only the very rare borderline case where private communications (all grantmakers secretly talking to each other and deferring to each other) are the way to go.

it is frequently imprudent, impractical, or straightforwardly unethical to directly make public our reasons for rejection.

I think these are all sensible reasons. The trouble is that all of these considerations also apply to the private communication networks proposed as solutions, not in the body of the post but in the comment section (such as a common Slack channel that only funders are on, a norm of checking in with LTFF, etc.).

It seems like a rare scenario that something is, by professional standards, too "private" or too "punching down" for a public statement, but sufficiently public and free of power disparities to be fair game for spreading around the rumor network. And concerns about reifying your subjective choices and fears by applicants that you would share negative information about them arguably become worse when the reification and spread occurs in private, rather than in public. 

I think an expectation that longtermist nonprofit grantmakers talk to each other by default would be an improvement over the status quo.

Talking to each other sounds obviously good in general, but I get the impression that in this context we're talking not about the latest research and best practices but specifically about the communication of sensitive applicant info that would ordinarily be a bit private... If negative evaluations of people that are too time-consuming to bother with and are not made public, not even to the applicant themselves, tend to just disappear or remain privately held, maybe that's basically fine as a status quo? Does negative-tinged information that is too trivial for public statements and formal channels being spread through private informal channels that only the inner grantmaker circle can access really constitute an improvement?

I’ve started feeling super guilty and sad about how much I, and the EA community, have wasted on supporting my participation i

I think that in saying this, you're technically putting an upper bound on the marginal value of a community building staff member: lower than your recruitment and moving costs. That has implications for what you ought to think about community building (vs earning to give an amount greater than your moving costs, or working in a different cause area).

To expand on this in more detail: I think there is something incoherent in saying that it wasn't justified for you in particular to move to the US with the 100% intention to work (which is a much stronger case than flying out conference attendees who might one day work), but that it is justified for you to work on the project now that you have moved. Why not discount the value of working on the project at all, even if you are local, if there's truly such a big supply of other locals ready to do it who could have done just as well that the marginal impact is ultimately less than a moving and recruiting cost? You can probably find a more neglected project to work on instead, one in which no one equally talented would replace you, one important enough that the flight isn't a material consideration, right?

In fact, why do community building and work so hard to recruit anyone? If it's not even worth the cost of flying yourself and your suitcases on site to get one new recruit, why is it worth spending so many hours of much more expensive labor on recruitment in general, regardless of whether recruits are local to an EA hub? I wouldn't be surprised if the cost of the staff time spent to vet, recruit, and hire one local is greater than your moving costs.

Is the scale really so precisely balanced that your flight is what tips it? Probably not. Probably either you're working on entirely the wrong thing, or you're working on the right thing and the flight and moving costs are a rounding error. If money is at such a premium that moving tips the balance, why not instead move to the highest-earning city and earn to give? I bet Boston/NYC jobs pay better than South African ones; you could make back the cost of the move many times over.

So I think, logically speaking, either you're placing the value of community building as net positive but inferior to earning to give an amount greater than your moving and recruiting costs, or you should think the move was net positive relative to the next-best person being hired. There isn't really a coherent narrative where the move wasn't worth it, yet locals who don't need to move should, on the margin, continue to work on community building rather than ETG (or a better direct-work alternative).
(Edit: moving this to its own comment as it is a separate point)


The paragraphs below are partly responding to your framing of the issue here. If you frame it as "we can either have 4x attendees and not be inclusive by sponsoring flights and visas, or 3x attendees and be inclusive", that's persuasive. If you're saying "we can cut the costs of this conference by a large amount by not sponsoring any flights or visas, which means more malaria nets or more AI grants, and I think that's worth it", that's potentially persuasive. But when you frame it as about the project of inclusion in general, then I do feel like you're making a mistake of unevenly placed skepticism here.

I think community builders and those funding/steering community building efforts should be more explicit and open about what their theory of change for global community building is

I do think meta orgs could be clearer about their theory of change, but to get there via questioning the value of diversity seems like an odd reasoning path; the lack of clarity runs so much deeper than that! I feel like there is some selective skepticism going on here. If you apply this skepticism to the bigger picture, I don't see why one ought to zero in on diversity initiatives in particular as the problem.

Firstly, I think it would be illustrative if you said what, in your view, the point of community building is. Community building is inherently pretty vague and diffuse as a path to impact, and why you do it changes what you do.

For instance, suppose you think the point of the community is to recruit new staff. Then I'd say maybe you ought to focus on targeted headhunting specifically rather than community building, or failing that, on training people for roles. As far as non-technical roles go, it doesn't seem like there's a huge shortage of 95th+ percentile, generally high-talent people who want an EA job but don't have one, but there's lots of work to be done in vetting or training them. As far as technical roles go, you can try to figure out who the leaders of relevant technical fields are and recruit them directly. If I wanted to just maximize staff hires, I wouldn't do community building; I'd do headhunting, training, vetting, recruitment, matchmaking, etc. in tight conjunction with the high-impact orgs I was trying to serve.

Or, if you think the point of community building is to have meetings between key players, then why not just invite existing staff members within your specific cause area to a small room? From a networking perspective, community building is too diffuse; there's not much in the way of real professional reasons why the AI safety people and the animal rights people need to meet. You don't need a huge conference or local groups for that.
I think when someone focuses on community building, when someone thinks that's the best way to make change, then (assuming they are thinking from an impact-maximizing perspective at all; I suspect at least some of the resources people direct towards meta have more in common with the psychology of donating to your university or volunteering with your church than with cold utilitarian calculus, and I think that's okay) they're probably thinking of stuff which is quite indirect and harder to quantify, like the value of having people from very different segments of the community who would ordinarily have no reason to meet encounter each other, or the value of providing some local way to connect to EA for everyone who is part of it. For these purposes, being geographically inclusive makes sense.

Questions like whether people could sponsor their own flights depend on how valuable you think that type of community building is; I agree that there's a difference between thinking it's valuable and thinking it's valuable enough to fly everyone in even if they don't have a clear intent to work on something that requires flying in, as you did. If community building is intended to capture soft and not easily quantified effects that don't have an obvious reason behind them, then I don't see why those soft and not easily quantified effects shouldn't include global outreach. Fostering connections between community members even if they work in different areas or are across the globe from each other, taking advantage of word-of-mouth spread in local contexts, or the benefits of having soft ties on each continent (such as a friendly base for an EA travelling for work to crash at, or a friend of a friend who works in the right government agency for your policy proposal) seem like valid types of "soft and hard to quantify" effects.
Like right now, you can throw a dart at the map and probably find one EA in that country to stay with, and if you throw a few more darts you can probably find an EA in a government, and so on. In a handwavey sense, most people would say that this is a generally beneficial effect of doing inclusive global outreach for any policy or NGO goal.

Whereas if you don't have much faith in that soft and hard to quantify narrative, if you're pursuing hard, quantified impact maximizing, then why do community outreach at all? Why not instead work on something more direct like headhunting or fund some more direct work?

I'm sympathetic to "this theory of change isn't clear enough"; it just seems weird to me that, having accepted all the other unclear things about the community building theory of change, you would worry about inclusion efforts specifically. If you were sending out malaria nets, I would understand if you made the choice that gave out the most nets even if it was less inclusive, because in that scenario you would at least have some chance of accurately predicting when inclusion reduced your nets. But in community building that doesn't make as much sense: if inclusion is hurting your bottom line, how would you even know it? I feel like maybe you have to have a harder model of what your theory of change is before you can go around saying "regrettably, inclusion efforts funge against our bottom line in our theory of change", because it seems to me that on soft, fuzzy, not-very-quantified models of impact, inclusion efforts and global reach make about as much sense as any other community building impact model, and when one is in that scenario, why not do the common-sensically positive thing of being inclusive, at least when it's not very expensive to do so?

Answer by Yellow, Aug 22, 2023

The trouble isn't that AI can't have complex or vague goals, it's that there's no reason why having more complicated and vague goals makes something less dangerous. 

Think of it this way: A lion has complicated and vague goals. It is messy, organic, and not "programmed".  Does that mean that a lion is safe? Would you be afraid to be locked in a cage with a lion? I would be.

Humans and lions both have complicated and sometimes vague goals, but because their goals are not the same goals, both beings pose a severe danger to each other all the same. The lion is dangerous to the human because the lion is stronger than the human. The human is dangerous to the lion because the human is smarter than the lion. 

Where most people go wrong is that they think that smart means nice, so they think that if only the lion in this analogy was smart too, then it would magically also be safe. They don't imagine that a smart lion might want to eat you just the same as a regular lion. 

In order to make a lion safe, you need to either control its values, so that it doesn't want to harm you, or you need to make it more predictable.

I would agree that impact calculations are only improved by considering concepts like counterfactual impact, additionality, marginal value, and so on.

I would caveat that when you're not doing calculations based on data, but rather more informal reasoning, it's important not to overweight this idea and assume that less competitive positions are probably better - it might easily be that the role with higher impact when your calculation is naïve to margins and counterfactuals, remains the role with higher impact after adding those calculations in, even if it is indeed the more competitive role. 

I think for most people when it comes to big cause area differences like CEA vs AMF, what they think about the big picture regarding cause areas will likely dominate their considerations. Your estimate would have to fall within a very specific range before adjustments for counterfactual additionality on the margin would be a consideration that tips the scale, wouldn't it?

Answer by Yellow, Jul 18, 2023

The funny thing is (I don't have any inside info here; this is all pure speculation), I wouldn't be shocked if the AMF position ended up being the more competitive of the two, despite the lower salary.

Normally, non-profit salaries tend to be lower than for-profit salaries even when skill requirements are equivalent, because people are altruistic and therefore willing to work for less if the work is prosocial, meaningful, and fits their interests. For example, the position of professor is more competitive and has a higher skill requirement than industrial R&D jobs in the same field, but the latter pays more.

I believe that in the EA community this effect is much more pronounced. Some (not all) EA orgs are likely in a somewhat unique position where even if you offer 28-35k or less, you may still get applications from people with degrees from top schools who could be making double, triple, or quadruple that on the open market, and at some point, when you notice that your top applicants are mostly motivated by impact rather than money, you might become uncertain as to whether offering more money actually improves the applicant pool further.

In such an environment, salaries aren't exactly set by market forces the way they are in a normal job. Instead, they are set by organizational culture and decision making. This is likely all the more true for remote roles, where the lack of geographic constraints makes expected pay among equally skilled candidates even more variable.

Some people see the situation and think "so let's spend less money and be more cost effective, leaving more money for the beneficiaries and the impact and attracting more donations from our reduced overhead". They aren't in it for money, and they figure all the truly committed candidates who make the best hires aren't either.

Other people see the situation and think "nevertheless, let's pay people competitive rates, even if we could get away with less", either out of a sense of fair play (e.g. let's not underpay people doing such important work just because some people are altruistic enough to accept that; that's not nice), or the golden rule (e.g. they themselves want a competitive salary and would feel weird offering an uncompetitive one), or because they figure they will get better candidates, more long-term career sustainability, and better personnel retention that way.

One of these perspectives is probably the correct one for a given use-case, but I'm not sure which one and reasonable people seem to diverge.

(Of course, it's not just personal preference - some organizations, and some positions inside organizations, have more star power than others and so have this option more. And it's also a cause area thing - some cause areas perceive themselves as more funding-bottlenecked, where every $2 in salary is one less mosquito net, while others aren't implicitly making that comparison as much; pouring extra money into their project wouldn't necessarily improve it, and the true bottleneck is something else.)

Even if one believes they can make more impact at AMF, they would have to give up 20k pounds in salary to pass on the content specialist role. We learned recently to consider earning less, but this may still be quite the conundrum. What do you think?

As far as the personal conundrum goes, I guess you have to ask yourself how much you value earning more, and consider if you'd be willing to pay the difference to buy the greater impact that you'd achieve by taking the position you believe is higher impact to take.

This is a conflation of technical criticism (e.g. you critique a methodology or offer scientific evidence to the contrary) and office politics criticism (e.g. you point out a conflict of interest or question a power dynamic)

Plant made a technical criticism, whereas office politics disagreement is the one that potentially carries social repercussions.

Besides, EA orgs aren't the only party that matters - the media reads this forum too. I can see how someone might not want a workplace conflict to become their top Google result.

Answer by Yellow, Jan 16, 2023

How should we navigate this divide?

I generally think we should almost always prioritize honesty where honesty and tact genuinely trade off against each other. That said, I suspect the cases where the trade-off is genuine (as opposed to people using tact as a bad justification for a lack of honesty, or honesty as a bad justification for a lack of tact) are not that common.

Do you disagree with this framing? For example, do you think that the core divide is something else?

I think that a divide exists, but I disagree that it pertains to recent events. Is it possible that you're doing a typical-mind-fallacy thing, where because you don't find something very objectionable, you assume others probably don't find it very objectionable either and are only objecting for social signaling reasons? Are you underestimating the degree to which people genuinely agree with what you're framing as the socially acceptable consensus views, rather than agreeing with those views only for the sake of social capital?

To be clear, I think there is always a degree to which some people are just doing things for social reasons, and that applies no less to recent events than it does to everywhere else. But I don't think recent events are particularly more illustrative of these camps. 

it appears to me, that those who prioritise AI Safety tend to fall into the first camp more often and those who prioritise global poverty tend to fall into the second camp.

I think this is false. If you look at every instance of an organization seemingly failing at full transparency for optics reasons, you won't find much of a trend towards global health organizations.

On the other hand, if you look at more positive instances (people who advocate concern for branding, marketing, and PR with transparent and good intentions), you still don't see any particular trend towards global health. (Some examples: [1][2][3], just random stuff pulled up by doing a keyword search for words like "media", "marketing", etc.) Alternatively, you could consider the cause area leanings of most "EA meta/outreach" type orgs in general, w.r.t. which cause area puts their energy where.

It's possible that people who prioritize global poverty are more strongly opposed to systemic injustices such as racism, in the same sense that people who prioritize animal welfare are more likely to be vegan. It does seem natural, doesn't it, that the type of person sufficiently motivated by that to make a career out of it might also be more strongly motivated to oppose racism? But that, again, is not a case of "prioritizing social capital over epistemics", any more than an animal activist's veganism is mere virtue signaling. It's a case of genuine difference in worldviews.

Basically, I think you've only arrived at this conclusion that global health people are more concerned with social capital because you implicitly have the framing that being against the racist-sounding stuff specifically is a bid for social capital, while ignoring the big picture outside of that one specific area. 

Also I think that if you think people are wrong about that stuff, and you'd like them to change their mind, you have to convince them of your viewpoint, rather than deciding that they only hold their viewpoint because they're seeking social capital rather than holding it for honest reasons.

I think the central point is that animals carry moral weight and that we should act accordingly, not that there are no trade-offs to the health and pleasure of humans from abstaining from animal products. It's not as if, given a scientific consensus that the optimal diet at our current tech level includes meat, animal advocates would cease advocating for abstaining from animal products. Assigning animals significant moral weight means that such very minor drawbacks to humans become a rounding error next to the major harms to animals.

Animal advocates who say that cutting out meat will not harm your health or will improve it, aren't presenting an unbiased argument about nutrition literature and human health. The conclusion is motivated by not wanting to hurt animals. Research that validates or debunks this motivated conclusion may be useful to animal advocates insofar as which vitamins and protein powders they might recommend, but it wouldn't sway the central point.
