All of Rebecca's Comments + Replies

I think a 2x2 rather than 1x3 seating arrangement would be more natural. Currently it feels like you and Arden are too far away to make it a cosy chat vibe. I agree with Jamie that the topics should be impact-relevant, rather than just friends chatting about random things.

The work tests that don’t require a single sitting still do have a max number of hours

I find it’s very rare to have to do the work test in 1 sitting, and I at least usually do better if I can split it up a bit

5
Joseph Lemien
2d
From my own experience as an applicant for EA organizations, I'd estimate that maybe 50% to 60% of the work sample tests or tasks that I've been assigned have either requested or required that I complete them in one sitting. And I do think that there is a lot of benefit in limiting the time candidates can spend on it; otherwise we might end up assessing Candidate A's ten hours of work against Candidate B's three hours of work. We want to make sure it is a fair evaluation of what each of them can do when we control for as many variables as possible.
1
Ávila Carmesí
3d
Thank you for your advice! I will say that my part-time job was research, which is crucial if I want to get research positions or into PhD programs in the near future. The clubs I lead are also very relevant to the jobs I'm applying to, and I think they may be quite impactful (so I'm willing to do them even if they harm my own odds).  Regardless of my specific situation, I think EA orgs should conduct hiring under the assumption that a significant portion of their applicants don't have the time for multiple multi-hour work tests in early stages of the application process (where most will be weeded out).

I don’t think it requires years of learning to write a thoughtful op-ed-level critique of EA. I’d be surprised if that’s true for an academic paper-level one either

That's fair! But I also think most op-eds on any topic are pretty bad. As for academic papers, I have to say it took me at least a year to write anything good about EA, and that was on a research-only postdoc with 50% of my research time devoted to longtermism. 

There's an awful lot that has been written on these topics, and catching up on the state of the art can't be rushed without bad results. 

Mainly advice on intermediate steps to get more domain-relevant experience.

1
Ávila Carmesí
3d
Yes, I've had two calls with them. Maybe it wasn't very clear from my background but I've been pretty deeply involved with EA for about 2 years (also went to multiple EAGs).  How do you think 80k career advice would help in my situation?

The point I’m trying to make is that there are many ways you can be influential (including towards people that matter) and only some of them increase prestige. People can talk about your ideas without ever mentioning or knowing your name, you can be a polarising figure who a lot of influential people like but who it’s taboo to mention, and so on.

I also do think you originally meant (or conveyed) a broader meaning of influential - as you mention economic output and the dustbins of history, which I would consider to be about broad influence.

Andrew Tate is very influential, but entirely lacking in prestige.

0
Linch
4d
Interesting example! I don't know much about Tate, but I understand him as a) only "influential" in a very ephemeral way, in the way that e.g. pro wrestlers are, and b) only influential among people who themselves aren't influential. It's possible we aren't using the word "influential" in the same way. E.g. implicit in my understanding of "influential" is something like "having influence on people who matter" whereas maybe you're just defining it as "having influence on (many) people, period?"

This is interesting, thanks. Though I wanted to flag that the volume of copyediting errors means I’m unlikely to share it with others.

4
Linch
4d
I might not be tracking all the exact nuances, but I'd have thought that prestige is ~just legible influence aged a bit, in the same way that old money is just new money aged a bit. I model institutions like Oxford as trying to play the "long game" here.

You’re answering a somewhat different question to the one I’m bringing up

I’m very confused why you think that FHI brought prestige to Oxford University rather than the other way around

6
Linch
5d
The vast majority of academic philosophy at prestigious universities will be relegated to the dustbins of history; FHI's work is quite plausibly an exception. To be clear, this is not a knock on philosophy; I'd guess that total funding for academic philosophy in the world is on the order of $1B. Most things that are 0.001% of the world economy won't be remembered much 100 years from now. I'd guess philosophy in general punches well above its weight here, but base rates are brutal.
8
Hamish McDoodles
5d
My thinking was that they were doing influential research and bringing in funding. FHI's work seems significantly better than most academic philosophy, even by prestigious-university standards. But on reflection, yes, obviously Oxford University will bring more prestige to anything it touches.

In the examples you give, the arguments for and against are fairly cached so there’s less of a need to bring them up. That doesn’t apply here. I also think your argument is often false even in your examples - in my experience, the bigger the gap between the belief the person is expressing and that of the ~average of everyone else in the audience, the more likely there is to be pushback (though not always by putting someone on the spot to justify their beliefs, e.g. awkwardly changing the conversation or straight out ridiculing the person for the belief)

8
Habryka
5d
Pushback (in the form of arguments) is totally reasonable! It seems very normal that if someone is arguing for some collective path of action, using non-shared assumptions, that there is pushback.

The thing that feels weirder is to invoke social censure, or to insist on pushback when someone is talking about their own beliefs and not clearly advocating for some collective path of action. I really don't think it's common for people to push back when someone is expressing some personal belief of theirs that only affects their own actions.

In this case, I think it's somewhat ambiguous whether I was arguing for a collective path of action or just explaining my private beliefs. By making a public comment I at least asserted some claim to relevance for others, but I also didn't explicitly say that I was trying to get anyone else to really change behavior. And in either case, invoking social censure on the basis of someone expressing a belief of theirs without also giving a comprehensive argument for that belief seems rare (not unheard of, since there are many places in the world where uniform ideologies are enforced, though I don't think EA has historically been such a place, nor wants to be such a place).
Rebecca
5d

In my experience people update less from positive comments and more from negative comments intuitively to correct for this asymmetry (that it's more socially acceptable to give unsupported praise than unsupported criticism). Your preferred approach to correcting the asymmetry, while I agree is in the abstract better, doesn't work in the context of these existing corrections.

8
Habryka
5d
Yeah, I agree this is a real dynamic. It doesn't sound unreasonable for me to have a standard link that I link to when I criticize people on here, making it salient that I am aspiring to be less asymmetric in the information I share (I do think the norms are already pretty different over on LW, where if anything criticism is a bit less scrutinized than praise, so it's not like this is a totally alien set of norms).

I took that second quote to mean ‘even if Sam is dodgy it’s still good to publicly back him’

2
David Mathers
15d
I meant something in between "is" and "has a non-zero chance of being", like assigning significant probability to it (obviously I didn't have an exact number in mind), and not just for base rate reasons about believing all rich people to be dodgy. 

Re your footnote 4, CE/AIM are starting an earning-to-give incubation program, so that is likely to change pretty soon

4
Ben_West
16d
Oh good point! That does seem to increase the urgency of this. I'd be interested to hear if CE/AIM had any thoughts on the subject.

Factual note: Rory Stewart isn’t a co-founder of GD, he is/was a later stage employee

1
Deborah W.A. Foulkes
20d
Correct. Thank you. Was mixing it up with the other charity he founded with his wife - Turquoise Mountain. He's now an advisor for GiveDirectly: https://www.givedirectly.org/team/

His bio there: Rory is an advisor at GiveDirectly. Previously, he was the UK Secretary of State for International Development, Minister of State for Justice, Minister of State in the Foreign Office and DFID (covering Africa, the Middle East, and Asia), Minister for the Environment, and Chair of the House of Commons Defence Select Committee. After a brief period as an infantry officer he joined the UK Diplomatic Service, serving overseas in Jakarta, as British representative to Montenegro in the wake of the Kosovo crisis, and as the coalition Deputy-Governor of two provinces of Southern Iraq following the intervention of 2003. He left the diplomatic service to undertake a two-year walk across Afghanistan, Iran, Pakistan, India and Nepal. In 2005, he established the Turquoise Mountain Foundation in Kabul, working to restore a section of the old city, establish a clinic, primary school, and Arts Institute, and bring Afghan crafts to international markets. In 2008, he became the Ryan Professor of Human Rights at the Harvard Kennedy School and Director of the Carr Centre for Human Rights Policy. He is a Visiting Fellow at The Jackson Institute at Yale University. Speaking & Press Requests: If you are interested in Rory speaking at an event or making a press appearance please email press@givedirectly.org. He's on Twitter at @RoryStewartUK.

Are you sure it's not the other possible candidate? I have only heard negative things about one of their personalities.

2
Hauke Hillebrandt
22d
He did mention the head of the FTX Foundation, which was Nick Beckstead - not sure about the others, but it would still seem weird for them to say it like that. Maybe one of the younger staff members said something like 'I care more about the far future' or something along the lines of 'GiveDirectly is too risk averse'. But I would still think he's painting quite the stereotype of EA here.

Was that lying or misremembering though? Lying is a fairly big accusation to make.

6
Hauke Hillebrandt
22d
It's just my inside view that he carelessly, and to some extent intentionally, plays fast and loose with the truth to the point of libel by saying that Beckstead said 'To be honest I don't care that much about poverty' and then ended the call and went off to have lunch. Stewart then framed it as if Beckstead just, in a very unreflective way, cares about 'asteroid strikes and robot overlords' - you could also call it hyperbole. I think he just couldn't bear that someone younger - 'sitting in California in his hoodie' - didn't want to give him, Rory Stewart OBE, a grant for a charity whose effectiveness he probably understands less well than Beckstead does. I have a strong prior that he misrepresented Beckstead's view on this (Beckstead used to work for GiveWell), and also because of the Sam Harris incident (which I only came across incidentally, because I sometimes hate-read Sam Harris on this topic). I thought it was worth it to come out strong with my inside view, and on the spectrum from misremembering to lying I'm more inclined to call it lying.

The Wired article says that there’s been a bunch more research in recent years about the effects of bed nets on fish stocks, so I would consider the GiveWell response out of date

I don’t think it can be separated neatly. If the person who has died as a result of the charity’s existence is a recipient of a disease reduction intervention, then they may well have died from the disease instead if not for the intervention.

Answer by Rebecca
Mar 28, 2024
  1. What do you see as the importance of GiveWell specifically pulling out a “deaths caused” number, vs factoring that number in by lowering the “lives saved” number?

  2. Are you saying that no competent philosopher would use their own definition for altruism when what it "really" means is somewhat different? My experience of studying philosophy has been the reverse - defining terms uniquely is very common.

  3. Is the implication of this paragraph, that all the events described happened after SBF started donating FTX money, intentional?

WHILE SBF’S MONEY was st

... (read more)

I don’t think you incorporate the number at face value, but plausibly you do factor it in in some capacity, given the level of detail GiveWell goes into for other factors

7
Ben Millwood
1mo
I think if there's no credible reason to assign responsibility to the intervention, there's no need to include it in the model. I think assigning the charity responsibility for the consequences of a crime they were the victim of is just not (by default) a reasonable thing to do. It is included in the detailed write-up (the article even links to it). But without any reason to believe this level of crime is atypical for the context or specifically motivated by e.g. anger against the charity, I don't think anything else needs to be made of it.

I am very surprised to read that GiveWell doesn't at all try to factor in deaths caused by the charities when calculating lives saved. I don't agree that you need a separate number for lives lost as for lives saved, but I had always implicitly assumed that 'lives saved' was a net calculation.

The rest of the post is moderately misleading though (e.g. saying that Holden didn't start working at Open Phil, and the EA-aligned OpenAI board members didn't take their positions, until after FTXFF had launched).

2
Arden Wiese
1mo
Interesting! I think the question of whether 1 QALY saved (in expectation) is canceled out by the loss of 1 QALY (in expectation) is a complicated question. I tend to think there's an asymmetry between how good well-being is & how bad suffering is, though my views on this have oscillated a lot over the years. I'd like GiveWell to keep the tallies separate because I'd prefer to make the moral judgement depending on my current take on this asymmetry, rather than have them default to saying it's 1:1.

The "deaths caused" example picked was pretty tendentious. I don't think it's reasonable to consider an attack at a facility by a violent criminal in a region with high baseline violent crime "deaths caused by the charity" or to extrapolate that into the assumption that two more people will be shot dead for every $100,000 donated. (For the record, if you did factor that into their spreadsheet estimate, it would mean saving a life via that program now cost $4776 rather than $4559)

I would expect the lives saved from the vaccines to be netted out against deat... (read more)

We don't know from this announcement that they are planning to prioritise rapidity of sale over time-adjusted return - it could still make sense to not continue e.g. paying as many salaries, and to have declared it shut down as a project.

9
Habryka
1mo
Yes, totally possible. I am just specifically claiming that given that the cost of capital is one of the major expenses for this project, it would be surprising to me if it wasn't worth the marginal cost of operating it on financial grounds, at least until some kind of buyer was found.  I am trying to make a pretty concrete claim about how I expect a benefit calculation to come out if done well, and definitely could be wrong (the thing that I have higher confidence in is that this decision wasn't very sensitive to such a cost-benefit calculation and seems more driven by other factors).

That wasn’t my interpretation of this section. I took “be smart” to mean like ‘make smart career decisions’, not ‘be Smart^TM’

Regarding your last paragraph, I see the Profile 1 vs Profile 2 axis as basically distinct from the Doer vs Thinker axis. People can spend years in large companies without ever needing or developing a get sh*t done mentality, and otoh starting an EA org and rapidly iterating can be a great way to develop or exercise that skill (see e.g. BlueDot Impact, AI-Plans.com). Maybe it's that you're leaving out a Profile 3 - people who start their career in (or very quickly switch into) EA but by starting a new thing rather than working their way up the ladder of an EA org. (Though the starting of a new thing could technically happen within an existing org as well).

I'd be quite interested in reading a more fleshed-out version of this, if you were considering whether that was worth your time. What dimensions of advice about a given career path are you seeing people given that should be discounted without domain success?

All CE charities to date have focused on global development or animal welfare

CE incubated Training for Good, which runs two AI-related fellowships. They didn’t start out with an AI focus, but they also didn’t start out with a GHD or animal welfare focus.

I didn’t vote, but I’d guess that people are trying to discourage politicisation on the forum?

2
more better
1mo
Interesting, thanks for sharing! I can see how that may be the case and I appreciate your feedback. It made me think. I believe there can be value in keeping a space politically neutral, but that there are circumstances that warrant exceptions and that this is one such case. If Trump wins, I believe that moral progress will unravel and several cause areas will be rendered hopeless. If there had been a forum in existence before WW2, I wonder if posts expressing concerns about Hitler or inquiring about efforts to counter actions of Nazis would have been downvoted. I certainly hope not.

This feels like it could just be a genre of Quick Takes that people may choose to post?

Saying it isn't an EA project seems too strong - another co-founder of SMA is Jan-Willem van Putten, who also co-founded Training for Good which does the EU tech policy and Tarbell journalism fellowships, and at one point piloted grantmaker training and 'coaching for EA leaders' programs. TfG was incubated by Charity Entrepreneurship.

You missed the most impressive part of Jan-Willem’s EA CV - he used to co-direct EA Netherlands, and I hear that's a real signal of talent ;)

But yes, I guess it depends on how you define ‘EA project’. They're intentionally trying to do something different, so that's why I don't describe them as one, but the line is very blurred when you take into account the personal and philosophical ties. 

If EA was a broad and decentralised movement, similar to e.g., environmentalism, I'd classify SMA as an EA project. But right now EA isn't quite that. Personally, I hope we one day get there.  

How are people just letting him get away with a victim narrative?

I agree that starting with some non-EA experience is good (and this is the approach I took), though 5 years seems too long.

3
yanni kyriacos
1mo
You might be right. It's long enough for values to drift and to lose touch with EA. Maybe 2 as a minimum and 4 as a maximum? Very person dependent.

I think it’s reasonable to focus on expressing an experienced sentiment, but I think it’s also fair for people to push back on the sentiment. There are after all people who have felt alienated from and pushed out of EA as a result of the active shaping of forum content to be more agreeable.

implicitly endorsed by CEA by virtue of not being removed or something like that

I think it would be quite bad if forum mods began to remove posts on the basis that something existing on the forum constitutes an endorsement by CEA. I’m not even sure it’s a coherent im... (read more)

3
Ulrik Horn
1mo
Hi Rebecca, and thanks for taking the time to patiently engage with this topic - I think that is important. I agree 100% that people should push back if they feel like it. And I absolutely see the perspective of those who feel like they have to censor themselves in EA settings and that this also causes alienation. I kind of feel EA has 3 choices here:

1. Continue trying to find a middle ground, alienating people on "both sides", with leadership/prominent figures awkwardly silent on the topics
2. Embrace "all discussion is good" and do little in the way of DEI, alienating people who feel discomfort from certain topics like eugenics
3. Go all in on "Deloitte NYC" and strongly discourage certain discussions, do lots of DEI interventions, and have leadership speak loudly about DEI

I am, as is probably obvious by now, pushing hard for option number 3, and I also think this is more likely to lead to us achieving our goals. I kind of feel like the first, currently pursued option is the worst - there is a reason few organizations/companies do this. Take Nike, X, Deloitte etc.: they have all taken a strong stance.

I apologize for going partway down the rabbit hole of identity politics. I only meant to say how I feel about the term, to emphasize the points made in the OP. I respectfully decline to go further down that rabbit hole. And I know this can come off as a bit arrogant, but I am sure others have written on this topic.

I think the assumption is that most people already knew about the facts disclosed

It's often done to make sure the reader tries to weigh the merits of the content by itself.

My understanding is that it's usually meant to serve the opposite purpose: to alert readers to the possibility of bias so they can evaluate the content with that in mind and decide for themselves whether they think bias has crept in. The alternative is people being alerted to the CoI in the comments and being angry that quite relevant information was kept from them, not that they would otherwise still know about the bias and be unable to evaluate the article well because of it.

I think the key actual difference (vs perceived as you point out), is whether you think those constraints are good or not.

6
Kyle Smith
1mo
It's pretty clear to me that these constraints are bad (and to me core EA is partially about breaking the self-imposed constraints of giving) but the simple reality is that private foundations are legally required to follow their charter. If the board wanted to radically change their charter, in most instances they could (my understanding), but boards tend to be extremely deferential to the founder's original intent. They begin with a fundamental assumption: "We will focus our giving on X cause area or Y geographic area" and then they have the power to make decisions beyond that. The concern I have is that EA has basically written off all private foundations that are not already EA-aligned as a lost cause.

Unfortunately, I think World Vision being a Christian charity dominates these other effects.

2
Jason
1mo
Definitely not recommending World Vision itself. But if you could get more American evangelical Christians to support bednet distribution by creating a new AMF-esque organization with (e.g.) Bible verses featured in its promotional materials and sewn in tags on its bednets, then I would probably be in favor of that. The Bible verses would not make the bednets less effective.

CE/AIM just launched something like a founding-to-give incubation program, will be interesting to see how that goes, who their participants end up being etc

Hmm, so I currently think the default should be that withdrawals without a decision aren't included in the time-till-_decision_ metric, as otherwise you're reporting a time-till-closure metric. (I weakly think that if the withdrawal is due to the decision taking too long and that time is above the average (as an attempt to exclude cases where the applicant is just unusually impatient), then it should be incorporated in some capacity, though this has obvious issues.)

Perhaps I am overestimating how worried a source might be that their organisation traces a leak back to them if it's known that someone from within the organisation provided it.

I tick 2.5 of the DEI boxes you’ve identified, and I found this post quite off-putting. It’s hard for me to evaluate the examples, since I don’t tick the box you’ve reasonably chosen to focus on, but I found the anecdote about your experience on the plane quite alarming. You say “I get it”, but I don’t get it. Airport security is overly stringent, and I’d be very surprised if I’d react that way in similar circumstances. Should I be offended that you think it’s representative of the average white person’s feelings? So I wonder if you might be projecting your own biases onto other white people/men/etc.

Hi Rebecca, I am realizing after posting and after your insightful comment that perhaps my feelings about DEI may be, at least to some degree, some sort of male/white guilt and that I am overcompensating. And it is a good point that I might be projecting my biases too strongly onto others who share my privileges - I did spend the first 18 years of my life in a very white environment, for example, so am probably wired quite differently from someone who grew up somewhere more diverse. Your comment is definitely well taken and makes me update towards being e... (read more)

Is the repetition of “applied in the last 30 days” possibly a typo?

2
calebp
2mo
oops, fixed - thank you

My point was that if someone withdraws their application because you were taking so long to get back to them, and you count that as the date you gave them your decision, you’re artificially lowering the average time-till-decision metric.

Actually, the reason I asked whether you'd factored in withdrawn applications (not how) was to make sure my criticism was relevant before bringing it up, but that probably made the criticism less clear.

2
Jeff Kaufman
2mo
What would you consider the non-artificial "average time-till-decision metric" in this case?

My point is more around the fact that if a person withdraws their application, then they never received a decision and so the time till decision is unknown/infinite, it’s not the time until they withdrew.

4
calebp
2mo
Oh, right - I was counting "never receiving a decision but letting us know" as a decision. In this case, the number we'd give is days until the application was withdrawn. We don't track the reason for withdrawals in our KPIs, but I am pretty sure that process length is a reason for a withdrawal 0-5% of the time. I might be missing why this is important, I would have thought that if we were making an error it would overestimate those times - not underestimate them.

The question relating to website timelines would be hard to answer, as the website was changed a few times, I believe.

Do you know what proportion of applicants fill out the feedback form?

1
calebp
2mo
I'm not sure sorry, I don't have that stat in front of me. I may be able to find it in a few days.