Yellow

116 karma · Joined Oct 2021

I guess I'm not fully sure I understand why you're thinking that. Is it possible that you're feeling confused because it's a dog, and so it's easier to think of its welfare in terms of which number is smaller rather than engaging with the question emotionally?

Imagine it was a human child. Wouldn't it be "very good" to give one human child a caring family and a home? Why does the fact that it would arguably be more good to prevent ten human children from premature malaria deaths take away from the fact that it's good to help one child?

If everybody with the capability gave serious support to even one other person, almost all of these problems would be solved several times over.

Or let's imagine a more down to earth scenario. Your friend wants you to help them move, so you help them move. But you could instead have worked extra hours and made more money, hired movers for your friend, and on top of that paid for a week of your friend's meals. Haven't you still done a good turn by helping your friend, even if an even more efficient way to help them exists? (Especially when in reality you were never going to put in all those extra work hours, and you would burn out if you lived like that.)

A small good deserves a small reward, not a punishment, but it sounds like you are punishing yourself. Guilt and shame are not meant to punish you for not doing enough. Those emotions are intended to stop you from doing bad things, not to punish you for being insufficiently efficient about good. If you emotionally punish yourself for doing good in less than maximally efficient ways, you'll only train yourself to flinch away from doing good things. Don't do that to yourself.

We want you to work hard at doing good for others for many years. We don't want you to feel guilt and shame about not doing enough until your motivation fizzles out. Small good things count.

It's not about catering to your emotions at the expense of rationality or anything. When you find a $1 bill on the street, you are happy to have gotten a little money, not sad that the bill was not $100, right? It doesn't do to not appreciate the small things just because larger things exist. You talked earlier about your emotions versus your friends being "more calculating" than you, but what your friends said was actually not rational: it is not calculating correctly to refuse to count small good things as good just because an even bigger good is placed next to them.

If you want to change your behavior to do even more good, that's great, but there's no sense in which doing a small good should count against you.

I’m curious how others think about this, how you cope with this tension, and if you can imagine spending so much time on one human or animal (who is a stranger to you).

Personally no, not really - I wouldn't spend 1000+ dollars or a day's work on an animal who is a stranger to me, or even an animal that I know. For a human stranger I might, but there are limits there too. I think to some degree I can viscerally feel opportunity costs above a certain magnitude in my feelings as well as my thoughts, so it doesn't really feel like cold calculations over feelings to me. It also helps to personally know more people in need than I can realistically help, and so to be accustomed to feeling in triage even before taking abstract strangers into account.

But I sometimes do smaller versions of this. For example, last week I tried to catch a mouse and release it somewhere it might survive rather than killing it, even though that cost an extra hour which could have been spent doing much more good. I don't think this is a problem at all; it's a more impactful use of my time than scrolling on the internet, which I also do sometimes. Why should only good actions be subject to scrutiny? So you helped a dog instead of buying a fancier car - why should anyone have a problem with that?

If your friend isn't criticizing you for buying a more expensive apartment or laptop than is strictly needed to maximize effectiveness, why should they decide to criticize your act of kindness to a dog? There are institutions with billions in resources that spend them on nothing useful, or on war; who wants to worry that one little dog has gotten some good fortune?

I think it is a big mistake to bring those who are doing a small bit of good under more negative scrutiny than if they had done nothing.

I think it's good you helped the dog and you should get positive reinforcement for it and feel good about yourself for it.

I think it would have been, idk, maybe 20 thousand or more times as good, if you had used the same amount of money on highly cost effective global health interventions, but that doesn't mean it isn't also good that you helped the dog.

I thought I'd work through how my reasoning goes for the provided examples. 

Many of these examples are grants that have later been funded by other grantmakers or private donors.

In my judgement, most of these (very helpful) concrete examples fall under either a) this deserves a public statement, or b) this represents a subjective judgement call that other funders should make independently. It's not that I think private communication is never the right way to handle it, it's just that it seems to me like it usually isn't, even in the examples that are picked out.

The first three examples all involve subjective judgement calls, by a scientist, by yourself, and by an acquaintance, and it would be bad if these judgement calls (especially the one made by just an acquaintance!) propagated via a whisper network instead of other people making an independent decision.

The next two examples involve grantees not delivering on promises. If they involve sufficiently large grants... well, I think a grantmaker ought to state what impact their grants had, and if a grant didn't have impact then that should be noted publicly. This is not an attack on the grantee; it is transparency by the grantmaker about what impact their grant had, and bad grants should be acknowledged. However, in the scenario where the grantee is intended to remain anonymous, I guess it is fair to propagate that info via whisper network but not public statement, though I would question the practice of giving large grants to anonymous grantees. For small grants to individuals, if someone failed to deliver once, isn't it best to let it go and let them try again elsewhere with someone else, the way it would be in any other professional realm? If they failed to deliver multiple times, a whisper network is justified. If they seem to be running a scam, then it's time for a public statement.

The rest of the examples, save the last, involve concerns about character... I mean, outright plagiarism and faking data absolutely should be called out publicly. When it's about vaguer, less substantial reputational concerns, I can see the case for private comms a bit more, although it's a goldilocks scenario even then, because if the concerns aren't substantially verified then shouldn't others make their judgement calls independently?

(The final example is valid for a private check, but tautologically so - yes, of course, if the rationale for a grantmaker is "LTFF would probably fund it" they ought to check whether LTFF did in fact evaluate and reject it.)

In summary, I think that for the majority of these examples either the public statement should be made or the issue should be dropped, and it's only in the very rare borderline case that private communications, such that all grantmakers are actually secretly talking to each other and deferring to each other, are the way to go.

it is frequently imprudent, impractical, or straightforwardly unethical to directly make public our reasons for rejection.

I think these are all sensible reasons; the trouble is that all of these considerations also apply to the private communication networks proposed as solutions, not in the body of the post but in the comment section (such as a common Slack channel which only funders are on, a norm of checking in with LTFF, etc.).

It seems like a rare scenario that something is, by professional standards, too "private" or too "punching down" for a public statement, but sufficiently public and free of power disparities to be fair game for spreading around the rumor network. And concerns about reifying your subjective choices, and fears by applicants that you would share negative information about them, arguably become worse when the reification and spread occur in private rather than in public.

I think an expectation that longtermist nonprofit grantmakers talk to each other by default would be an improvement over the status quo.

This sounds obviously good in general - of course we should talk to each other - but I get the impression that in this context we're talking not about the latest research and best practices, but specifically about the communication of sensitive applicant info which would ordinarily be a bit private... If negative evaluations of people that are too time-consuming to bother with and are not made public, not even to the applicant themselves, tend to just disappear or remain privately held, maybe that's basically fine as a status quo? Does spreading negative-tinged information that is too trivial for public statements and formal channels through private informal channels that only the inner grantmaker circle can access really constitute an improvement?

I’ve started feeling super guilty and sad about how much I, and the EA community, have wasted on supporting my participation i

I think that in saying this, you're technically putting a rather low upper bound on the marginal value of a community building staff member, namely lower than your recruitment and moving costs, which has implications for what you ought to think about community building (vs earning to give an amount greater than your moving costs, or working in a different cause area).

To expand on this in more detail: I think there is something incoherent in saying that it wasn't justified for you in particular to move to the US with a 100% intention to work (which is a much stronger case than flying people out to conferences who might one day work), but that it is justified for you to work on the project now that you have moved. Why not discount the value of working on the project at all, even if you are local, if there's truly such a big supply of other locals ready to do it who could have done just as well that the marginal impact is ultimately less than a moving and recruiting cost? You can probably find a more neglected project to work on instead, one in which no one equally talented would replace you, one which is important enough that the flight isn't a material consideration, right?

In fact, why do community building and work so hard to recruit anyone at all? If it's not even worth the cost of flying yourself and your suitcases on site to get one new recruit, then why is it worth spending many far more expensive hours of labor on recruitment in general, regardless of whether the recruits are local to an EA hub or not? I wouldn't be surprised if the cost of staff time spent to vet, recruit, and hire one local is greater than your moving costs.

Is the scale really so precisely balanced that your flight is what tips it? Probably not. Probably either you're working on entirely the wrong thing, or you're working on the right thing and the flight and moving costs are a rounding error. If money is at such a premium that moving tips the balance, why not instead move to the highest-earning city and earn to give? I bet Boston/NYC jobs pay better than South African ones; you could make back the cost of the move manyfold.

So I think, logically speaking, either you're placing the value of community building as net positive but inferior to earning to give an amount more than your move and recruitment cost, or you should think the move was net-positive relative to the next best person being hired. There isn't really a coherent narrative where the move wasn't worth it but locals who don't need to move should, on the margin, continue to work on community building rather than ETG (or a better direct work alternative).
 
(Edit: moving this to its own comment as it is a separate point)


The paragraphs below are partly responding to your framing of the issue here. If you frame it as "we can either have 4x attendees and not be inclusive by sponsoring flights and visas, or 3x attendees and be inclusive", that's persuasive. If you're saying "we can cut the costs of this conference by a large amount by not sponsoring any flights or visas, which means more malaria nets or more AI grants, and I think that's worth it", that's potentially persuasive. But when you frame it as about the project of inclusion in general, then I do feel like you're making a mistake of unevenly placed skepticism here.

I think community builders and those funding/steering community building efforts should be more explicit and open about what their theory of change for global community building is

I do think meta orgs could be clearer about their theory of change, but to get there via questioning the value of diversity seems like an odd reasoning path; the lack of clarity is so much deeper than that! I feel like there is some selective skepticism going on here. If you apply this skepticism to the bigger picture, then I don't see why one ought to zero in on diversity initiatives in particular as the problem.

Firstly, I think it would be illustrative if you said what, in your view, the point of community building is. Community building is inherently pretty vague and diffuse as a path to impact, and why you do it changes what you do.

For instance, suppose you think the point of community building is to recruit new staff. Then I'd say maybe you ought to focus specifically on targeted headhunting rather than on community building? Or, failing that, on training people for roles? As far as non-technical roles, it doesn't seem like there's a huge shortage of 95th+ percentile generally-high-talent people who want an EA job but don't have one, but there's lots of work to be done in vetting them or training them. As far as technical roles, you can try to figure out who the leaders of relevant technical fields are and recruit them directly. If I wanted to just maximize staff hires I wouldn't do community building; I'd do headhunting, training, vetting, recruitment, matchmaking, etc. in tight conjunction with the high impact orgs I was trying to serve.

Or, if you think the point of community building is to have meetings between key players, then why not just invite existing staff members within your specific cause area to a small room? From a networking perspective community building is too diffuse; there's not much in the way of real professional reasons why the AI safety people and the animal rights people need to meet. You don't need a huge conference or local groups for that.
 
I think when someone focuses on community building, when someone thinks that's the best way to make change, then (assuming they are thinking from an impact maximizing perspective at all; I suspect at least some of the resources people direct towards meta have more in common with the psychology of donating to your university or volunteering with your church than with cold utilitarian calculus, and I think that's okay) they're probably thinking of effects which are quite indirect and harder to quantify, like the value of having people from very different segments of the community who would ordinarily have no reason to meet encounter each other, or the value of providing some local way to connect to EA for everyone who is part of it. For these purposes, being geographically inclusive makes sense. Questions like whether people could sponsor their own flights depend on how valuable you think that type of community building is; I agree that there's a difference between thinking it's valuable and thinking it's valuable enough to fly everyone in even if they don't have a clear intent to work on something that requires flying in, like you did.

If community building is intended to capture soft and not easily quantified effects that don't have an obvious reason behind them, then I don't see why those soft and not easily quantified effects shouldn't include global outreach. Fostering connections between community members even if they work in different areas or are across the globe from each other, taking advantage of word of mouth spread in local contexts, or the benefits of having soft ties on each continent, such as a friendly base for an EA travelling for work to crash at, or a friend of a friend who works in the right government agency for your policy proposal, seem like a valid type of "soft and hard to quantify" effect. Right now, you can throw a dart at the map and probably find one EA in that country to stay with, and if you throw a few more darts you can probably find an EA in a government, and so on; in a handwavey sense, most people would say this is a generally beneficial effect of doing inclusive global outreach for any policy or NGO goal.

Whereas if you don't have much faith in that soft, hard-to-quantify narrative, if you're pursuing hard, quantified impact maximization, then why do community outreach at all? Why not instead work on something more direct like headhunting, or fund some more direct work?

I'm sympathetic to "this theory of change isn't clear enough"; it just seems weird to me that, having accepted all the other unclear things about the community building theory of change, you would worry about inclusion efforts specifically. If you were sending out malaria nets I would understand if you made the choice that gave out the most nets even if it was less inclusive, because in that scenario you would at least have some chance of accurately predicting when inclusion reduced your nets. But in community building that doesn't make as much sense: if inclusion is hurting your bottom line, how would you even know it? I feel like maybe you have to have a harder model of what your theory of change is before you can go around saying "regrettably, inclusion efforts funge against our bottom line in our theory of change", because it seems to me like on soft, fuzzy, not-very-quantified models of impact, inclusion efforts and global reach make about as much sense as any other community building impact model, and when one is in that scenario why not do the common-sensically positive thing of being inclusive, at least when it's not very expensive to do so?

Answer by Yellow · Aug 22, 2023

The trouble isn't that AI can't have complex or vague goals, it's that there's no reason why having more complicated and vague goals makes something less dangerous. 

Think of it this way: A lion has complicated and vague goals. It is messy, organic, and not "programmed".  Does that mean that a lion is safe? Would you be afraid to be locked in a cage with a lion? I would be.

Humans and lions both have complicated and sometimes vague goals, but because their goals are not the same goals, both beings pose a severe danger to each other all the same. The lion is dangerous to the human because the lion is stronger than the human. The human is dangerous to the lion because the human is smarter than the lion. 

Where most people go wrong is that they think that smart means nice, so they think that if only the lion in this analogy was smart too, then it would magically also be safe. They don't imagine that a smart lion might want to eat you just the same as a regular lion. 

In order to make a lion safe, you need to either control its values, so that it doesn't want to harm you, or make it more predictable.

I would agree that impact calculations are only improved by considering concepts like counterfactual impact, additionality, marginal value, and so on.

I would caveat that when you're not doing calculations based on data, but rather more informal reasoning, it's important not to overweight this idea and assume that less competitive positions are probably better - it might easily be that the role with higher impact when your calculation is naïve to margins and counterfactuals remains the role with higher impact after adding those calculations in, even if it is indeed the more competitive role.

I think for most people, when it comes to big cause area differences like CEA vs AMF, what they think about the big picture regarding cause areas will likely dominate their considerations. Your estimate would have to fall within a very specific range before adjustments for counterfactual additionality on the margin would be a consideration that tips the scale, wouldn't it?
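To make that concrete, here is a minimal sketch with entirely made-up numbers (the impact figures, replacement probabilities, and quality discounts below are illustrative assumptions, not estimates for CEA or AMF). The point is that a counterfactual adjustment only changes which option wins when the naive estimates were already fairly close:

```python
# Illustrative only: all numbers are made up, not real cost-effectiveness estimates.

def counterfactual_impact(naive_impact, p_replaced, replacement_quality):
    """Discount a naive impact estimate by the chance someone else fills the role.

    naive_impact: impact assuming the role would otherwise go unfilled
    p_replaced: probability the org hires someone else if you decline
    replacement_quality: how good that replacement would be relative to you (0 to 1)
    """
    return naive_impact * (1 - p_replaced * replacement_quality)

# Suppose you naively rate role A at 100 "impact units" and role B at 30,
# but role A is very competitive (80% chance of a 90%-as-good replacement)
# while role B would likely go unfilled (20% chance of a 50%-as-good one).
role_a = counterfactual_impact(100, p_replaced=0.8, replacement_quality=0.9)  # 28.0
role_b = counterfactual_impact(30, p_replaced=0.2, replacement_quality=0.5)   # 27.0

print(role_a, role_b)  # the ~3x naive gap is barely enough for the adjustment to matter
```

With a naive gap of 10x or more between cause areas, even an aggressive displacement discount on the more competitive role usually leaves the ranking unchanged, which is why the big-picture view tends to dominate.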

Answer by Yellow · Jul 18, 2023

The funny thing is (I don't have any inside info here, this is all pure speculation) I wouldn't be shocked if the AMF position ended up being the more competitive of the two, despite the lower salary.

Normally non-profit salaries tend to be lower than for-profit salaries, even when skills are equivalent, because people are altruistic and therefore willing to work for less if the work is prosocial, meaningful, and fits their interests. For example, the position of a professor is more competitive and has a higher skill requirement than industrial R&D jobs in the same field, but the latter pays more.

I believe that in the EA community this effect is much more pronounced. Some (not all) EA orgs are likely in a somewhat unique position where even if you offer 28-35k or less, you may still be getting applications from people with degrees from top schools who could be making double or triple or quadruple that on the open market, and at some point, when you notice that your top applicants are mostly motivated by impact and not money, you might become uncertain as to whether offering more money actually further improves the applicant pool.

In such an environment, salaries aren't exactly set by market forces in the same way as for a normal job. Instead, they are set by organizational culture and decision making. This is likely all the more true for remote roles, where the lack of geographic constraints makes expected pay among equally skilled candidates even more variable.

Some people see the situation and think "so let's spend less money and be more cost effective, leaving more money for the beneficiaries and the impact, and attracting more donations thanks to our reduced overhead". They aren't in it for the money, and they figure all the truly committed candidates who make the best hires aren't either.

Other people see the situation and think "nevertheless, let's pay people competitive rates, even if we could get away with less", whether out of a sense of fair play (e.g. let's not underpay people doing such important work just because some people are altruistic enough to accept that; that's not nice), or the golden rule (e.g. they themselves want a competitive salary and would feel weird offering an uncompetitive one), or because they figure they will get better candidates, more long-term career sustainability, and better personnel retention that way.

One of these perspectives is probably the correct one for a given use-case, but I'm not sure which one and reasonable people seem to diverge.

(Of course it's not just personal preference - some organizations, and some positions inside organizations, have more star power than others and so have this option to a greater degree. And it's also a cause area thing - some cause areas perceive themselves as more funding-bottlenecked, where every $2 in salary is one less mosquito net, while others aren't implicitly making that comparison as much, because pouring extra money into their project wouldn't necessarily improve it and the true bottleneck is something else.)

Even if one believes they can make more impact at AMF, they would have to give up 20k pounds in salary to pass on the content specialist role. We learned recently to consider earning less, but this may still be quite the conundrum. What do you think?

As far as the personal conundrum goes, I guess you have to ask yourself how much you value earning more, and consider whether you'd be willing to pay the difference to buy the greater impact you believe you'd achieve by taking the higher impact position.
