All of Eli Rose's Comments + Replies

(meta musing) The conjunction of the negations of a bunch of statements seems a bit doomed to get a lot of disagreement karma, sadly. Esp. if the statements being negated are "common beliefs" of people like the ones on this forum.

I agreed with some of these and disagreed with others, so I felt unable to agreevote. But I strongly appreciated the post overall so I strong-upvoted.

  1. Similar to that of our other roles, plus experience running a university group as an obvious one — I also think that extroversion and proactive communication are somewhat more important for these roles than for others.
  2. Going to punt on this one as I'm not quite sure what is meant by "systems."
  3. This is too big to summarize here, unfortunately.
  1. Check out "what kinds of qualities are you looking for in a hire" here. My sense is we index less on previous experience than many other organizations (though it's still important). Experience juggling many tasks, prioritizing, and syncing up with stakeholders jumps to mind. I have a hypothesis that consulting experience would be helpful for this role, but that's a bit conjectural.
  2. This is a bit TBD — happy to chat more further down the pipeline with any interested candidates.
  3. We look for this in work tests and in previous experience.
  1. The CB team continuously evaluates the track record of grants we've made when they're up for renewal, and this feeds into our sense of how good programs are overall. We also spend a lot of time keeping up with what's happening in CB and in x-risk generally, and this feeds into our picture of how well CB projects are working.
  2. Check out "what kinds of qualities are you looking for in a hire" here.
  3. Same answer as 2.

Empirically, in hiring rounds I've previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn't make a hire. I've also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).

I'm sympathetic to the take "that seems pretty weird." It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best-guess would be that our bar h... (read more)

1
JoshuaBlake
6mo
Thank you - this is a very useful answer

Thanks for the reply.

I think "don't work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need" is a really bizarre/undesirable conclusion for a moral theory to come to, since if widely adopted it seems like this would lead to no one being left to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.

  1. ^

    Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.

2
Linch
6mo
I think this argument doesn't quite go through as stated, because AMF doesn't have an infinite funding gap. If everybody on Earth (or even, say, 10% of the richest 10% of people) acted on the version of contractualism that mandated donating significantly to AMF as a way to discharge their moral obligations, we'd be well past the point where anybody who wants and needs a bednet can have one. That said, I think a slightly revised version of your argument can still work. In a contractualist world, people should be willing to give almost unlimited resources to a single identifiable victim rather than work on large-scale moral issues or have fun.

Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.
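(A minimal sketch of the "multiplying through" step above. Only the one-basis-point figure comes from the passage; the AMF-side probability is a purely illustrative assumption.)

```python
# Per-individual chance of benefit, which is the quantity contractualism
# scores on. Only the one-basis-point figure comes from the passage above;
# the AMF-side number is a hypothetical placeholder.

# X-risk intervention: reduces extinction risk by one basis point, so any
# given individual's chance of being saved by it is at most:
p_xrisk_per_person = 1 / 10_000  # 1e-4

# AMF-like intervention: suppose (hypothetically) a marginal bednet averts
# serious malaria harm for its identifiable recipient with 5% probability.
p_amf_per_recipient = 0.05

# Ratio of per-person claim strengths under these assumptions:
print(p_amf_per_recipient / p_xrisk_per_person)  # 500.0
```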

 

It seems like a society where everyone took contractualism to heart mi... (read more)

5
Bob Fischer
6mo
Good question, Eli. I think a lot here depends on keeping the relevant alternatives in view. The question is not whether it's permissible to coordinate climate change mitigation efforts (or what have you). Instead, the question is whether we owe it to anyone to address climate change relative to the alternatives. And when you compare the needs of starving children, or those suffering from serious preventable diseases, etc., to the needs of those who might be negatively affected by climate change, it becomes a lot more plausible that we don't owe it to anyone to address those things over more pressing needs (assuming we have a good chance of doing something about those needs / moving the needle significantly / etc.).

So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual's claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, as you should discount it by the expected size of the benefit and you don’t aggregate benefits across persons. Since any giv

... (read more)
5
Bob Fischer
6mo
Thanks for your question, Eli. The contractualist can say that it would be callous, uncaring, indecent, or invoke any number of other virtue-theoretic notions to explain why you shouldn't leave broken glass bottles in the woods. What they can't say is that, in some situation where (a) there's a tradeoff between some present person's weighty interests and the 20-years-from-now young child's interests and (b) addressing the present person's weighty interests requires leaving the broken glass bottles, the 20-years-from-now young child could reasonably reject a principle that exposed them to risk instead of the present person. Upshot: they can condemn the action in any realistic scenario.
Answer by Eli Rose · Sep 04, 2023 · 16

(I'm a trustee on the EV US board.)

Thanks for checking in. As Linch pointed out, we added Lincoln Quirk to the EV UK board in July (though he didn't come through the open call). We also have several other candidates at various points in the recruitment pipeline, but we've put this a bit on the backburner, both because we wanted to resolve some strategic questions before adding people to the board and because we've had less capacity than we expected.

Having said that, we were grateful for all the applications and nominations which we received in that in... (read more)

I think we should keep "neglectedness" referring to the amount of resources invested in the problem, not P(success). This seems a better fit for the "tractability" bucket.

(+1 to this approach for estimating neglectedness; I think dollars spent is a pretty reasonable place to start, even though quality adjustments might change the picture a lot. I also think it's reasonable to look at number of people.)

Looks like the estimate in the 80k article is from 2020, though the callout in the biorisk article doesn't mention it — and yeah, AIS spending has really taken off since then.

I think the OP amount should be higher because I think one should count X% of the spending on longtermist community-building as being AIS spending, for s... (read more)
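(A toy version of that accounting move; every figure below is hypothetical, chosen only to show how counting X% of community-building spending changes the total.)

```python
# Toy tally illustrating "count X% of longtermist community-building
# spending as AIS spending." All figures are hypothetical placeholders,
# not actual estimates.

direct_ais = 250e6      # hypothetical direct AI-safety spending
longtermist_cb = 100e6  # hypothetical longtermist community-building spending
x = 0.5                 # hypothetical fraction of CB work that serves AIS

total_ais = direct_ais + x * longtermist_cb
print(f"~${total_ais / 1e6:.0f}m")  # ~$300m
```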

2
Linch
10mo
Makes sense, so on the order of $300m total?

Made the front page of Hacker News. Here are the comments.

The most common pushback (and the first two comments, as of now) comes from people who think this is an attempt at regulatory capture by the AI labs, though that view itself gets a good deal of pushback, and (I thought) there's some surprisingly high-quality discussion.

It seems relevant that most of the signatories are academics, where this criticism wouldn't make sense. @HaydnBelfield created a nice graphic here demonstrating this point.

2
ClimateDoc
11mo
This is also the case in the comments on this FT article (paywalled, I think), which I guess indicates how less techy people may be seeing it.

Off topic: There's a line in the movie A Cinderella Story: Christmas Wish that might be applicable to you: "was also credited with helping shift the Animal Rights movement to a more utilitarian focus including a focus on chicken."

This is an amazing thing to learn.

FWIW, several people I spoke to during the pilot period just weren't aware subforums existed.

  1. This refers to the amount you were promised from FTXF.
  2. This refers to the amount that was promised, but hasn't been paid out.

1
Falk Lieder
1y
Thank you very much for answering my questions. :)

(I work at Open Phil assisting with this effort.)

Thanks for pointing this out; it looks like there was a technical error which excluded these from the email receipt, which we've now fixed. The information was still received on our end, so you don't need to take any extra actions.

(I work at Open Phil assisting with this effort.)

  1. Any grantee who is affected by the collapse of FTXFF and whose work falls within our focus areas (biosecurity, AI risk, and community-building) should feel free to apply, even if they have significant runway.

  2. For various reasons, we don’t anticipate offering any kind of program like this, and are taking the approach laid out in the post instead. Edit: We’re still working out a number of the details, and as the comment below states, people who are worried about this should still apply.

(I work at Open Phil assisting with this effort.)

We think that people in this situation should apply. The language was intended to include this case, but it may not have been clear.

1
Max Nadeau
1y
From the post: "We plan to have some researchers arrive early, with some people starting as soon as possible. The majority of researchers will likely participate during the months of December and/or January."

If you haven't already, I'd recommend reading Richard Ngo's AGI Safety From First Principles, which I think is an unusually rigorous treatment of the issue.

1
aelwood
2y
I had it bookmarked, but not looked at it yet. Thanks for the recommendation!

We've been paying people based on time spent, rather than by word. The amounts are based on our assessment of market rates for high-quality freelance translators for the language in question online, though my guess is this will be more attractive than typical freelance translation because it's a source of steady work for a long period of time (e.g. 6 months).

Have you considered writing a letter to the editor? I think actual worked examples of naive consequentialism failing are kind of rare, and cool for people to see.

Hmm yeah, I went East Coast --> Bay and I somewhat miss the irony.

1
Henry_Sleight
2y
Ah this irony point is interesting! Do you think that this irony is in some way antithetical to the statusy self-importance of west coast culture?

  1. We're interested in increasing the diversity of the longtermist community along many different axes. It's hard to give a unified 'strategy' at this abstract level, but one thing we've been particularly excited about recently is outreach in non-Western and non-English-speaking countries.

  2. Yes, you can apply for a grant under these circumstances. It's possible that we'll ask you to come back once more aspects of the plan are figured out, but we have no hard rules about that. And yes, it's possible to apply for funding conditional on some event and later return the money/adjust the amount you want downwards if the event doesn't happen.

I'll stand by the title here. I think a bilingual person without specific training in translation can have good taste in determining whether or not a given translation is high-quality. These seem like distinct skills, e.g. in English I'm able to recognize a work badly translated from French even if I don't speak French and couldn't produce a better one. And having good taste seems like the most important skill for someone who is vetting and contracting with professional translators.

Separately, I also think that many (but not all) bilingual people without s... (read more)

Hi Zakariyau. This seems like it definitely meets the criterion of a language with >5m speakers — I don't have the context, but I don't think English being the official language would be a barrier of any kind.

Unfortunately I think this kind of experimental approach is a bad fit here; opportunity costs seem really high, there's a small number of data points, and there's a ton of noise from other factors that language communities vary along.

Fortunately I think we'll have additional context that will help us assess the impacts of these grants beyond a black-box "did this input lead to this output" analysis.

Hi Nathan — I think that probably wouldn't make sense in this case, as I think it's important for the person leading a given translation project to understand EA and related ideas well, even if translators they hire do not.

Yep, this list isn't intended to rule anything out. We'd certainly be interested in getting applications from people who want to get content translated into Hindi or other Indian languages.

3
Erich_Grunewald
2y
Maybe translations into Mandarin could be useful too, not only because there are >1B speakers, but also because influential Chinese EAs may end up being very impactful in reducing AI risk (e.g. wrt AI race dynamics).

Thanks, really appreciate the concrete suggestion! This seems like a good lead for anyone who wants to supervise Polish translation.

I think this Wikipedia claim is from Reagan's autobiography. But according to The Dead Hand, written by a third-party historian, Reagan was already very concerned about nuclear war by this time, and had been at least since his campaign in 1980. It's pretty interesting — apparently this concern led both to his interest in nuclear weapon abolition (which he mostly didn't talk about) and to his unrealistic and harmful missile defense plans.

So according to this book, The Day After wasn't actually any kind of turning point.

1
Phil Tanny
2y
Some people argue that Reagan's "Star Wars" missile defense plan did succeed at its real goal: convincing the Soviets that they would never be able to keep up with America's R&D, so it was better to make peace. From this point of view, whether SDI was realistic or not misses the point; Reagan succeeded in creating the impression that it was realistic. That is, he succeeded in bluffing the Soviets. Other factors contributed, like the collapsing Soviet economy, the Soviet defeat in Afghanistan, etc. SDI was a kind of economic war: we can outspend you, etc.

The answer is yes, I can think of some projects in this general area that sound good to me. I'd encourage you to email me or sign up to talk to me about your ideas and we can go from there. As is always the case, a lot rides on further specifics about the project — i.e. just the bare fact that something is focused on mid-career professionals in tech doesn't give me a lot of info about whether it's something we'd want to fund or not.

Thanks, I only meant to ask if focusing on mid-career tech is already a deal-breaker for things you'd be interested in, and I understand that it isn't.

 

Here are some ideas:

  • EA tech newsletter
    • Aim to keep people somewhat engaged in EA and hear about opportunities that fit them when these people are looking for work.
    • Draft
  • Running local groups that publicly and transparently look for impactful tech companies together
    • This is good because:
      • If a tech person goes to a local EA group and asks how to have impact without working remotely, the group will have some answer b
... (read more)

(I work at Open Phil on community-building grantmaking.)

This role seems quite high-impact to me and I'd encourage anyone on the fence to apply. Our 2020 survey leads me to believe that 80k has been very impactful in terms of contributing to the trajectories of people who are now doing important longtermist work. I think good marketing work could significantly increase the number of people that 80k reaches, and the impact of doing this quickly and well seems competitive with a lot of other community-building work to me — one reason for this is that I think ... (read more)

Is there an equally high level of expert consensus on the existential risks posed by AI?

There isn't. I think a strange but true and important fact about the problem is that it just isn't a field of study in the same way e.g. climate science is — as argued in this Cold Takes post. So it's unclear who the relevant "experts" should be. Technical AI researchers are maybe the best choice, but they're still not a good one; they're in the business of making progress locally, not forecasting what progress will be globally and what effects that will have.

7
Marshall
2y
Thanks! I agree - AI risk is at a much earlier stage of development as a field. Even as the field develops and experts can be identified, I would not expect a very high degree of consensus. Expert consensus is more achievable for existential risks such as climate change and asteroid impacts, which can be mathematically modeled with high historical accuracy - there's less to dispute on empirical / logical grounds. A campaign to educate skeptics seems appropriate for a mature field with high consensus, whereas constructively engaging skeptics supports the advancement of a nascent field with low consensus.

I think this is a good question and there are a few answers to it.

One is that many of these jobs only look like they check the "improving the world" box if you have fairly unusual views. There aren't many people in the world for whom e.g. "doing research to prevent future AI systems from killing us all" tracks as an altruistic activity. It's interesting to look at this (somewhat old) estimate of how many EAs even exist.

Another is that many of the roles discussed here aren't research-y roles (e.g. the biosecurity projects require entrepreneurship, not resea... (read more)

4
Ben Snodin
2y
Some quick thoughts on this from me: Honestly, for me it's probably at the "almost too good to be true" level of surprisingness (but to be clear, it actually is true!). I think it's a brilliant community / ecosystem (though of course there's always room for improvement).

I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it's a big factor here.

I also agree that not all roles are research roles, although I don't know how much this weakens the surprisingness, because some people probably don't find research roles appealing but do find e.g. project management appealing. (Also, I do feel like most research is pretty tough one way or another, whether or not it's "EA" research.)

I guess there's also the "downsides" I mentioned in the post. One that particularly comes to mind is that there still aren't a ton of great EA jobs to just slot into, and the ones that exist often seem to be very over-subscribed. Partly depends on your existing profile of skills of course :).

What I'm talking about tends to be more of an informal thing which I'm using "EMH" as a handle for. I'm talking about a mindset where, when you think of something that could be an impactful project, your next thought is "but why hasn't EA done this already?" I think this is pretty common and it's reasonably well-adapted to the larger world, but not very well-adapted to EA.

1
Chris Lonsberry
2y
Still seems like a fair question. I think the underlying problem you're pointing to might be that people will then give up on their projects or ideas without having come up with a good answer. An "EMH-style" mindset seems to point to an analytical shortcut: if it hasn't already been done, it probably isn't worth doing. Which, I agree, is wrong. I still think EMH has no relevance in this context, and that should be the main argument against applying it to EA projects.

EMH says that we shouldn't expect great opportunities to make money to just be "lying around" ready for anyone to take. EMH says that, if you have an amazing startup idea, you have to answer "why didn't anyone do this before?" (ofc. this is a simplification, EMH isn't really one coherent view)

One might also think that there aren't great EA projects just "lying around" ready for anyone to do. This would be an "EMH for EA." But I think it's not true.

1
Chris Lonsberry
2y
I had to use Wikipedia to get a concise definition of EMH, rather than rely on my memory. This appears to me to apply exclusively to financial (securities) markets, and I think we would be taking it (too) far out of its original context in trying to use it to answer questions about whether great EA projects exist. In that sense, I completely agree with you. In the real (non-financial) world, there are plenty of opportunities to make money, which is one reason entrepreneurs exist and are valuable. Are you aware of people using EMH to suggest we should not expect to find good philanthropic opportunities? 1. ^ https://en.wikipedia.org/wiki/Efficient-market_hypothesis

I changed my display name as a result of this post, thanks!

1
Tristan Williams
1y
Just throwing another comment here for support, read and changed.
2
Austin
2y
Me too!

There is/was a debate on LessWrong about how valid the efficient market hypothesis is. I think this is super interesting stuff, but I want to claim (with only some brief sketches of arguments here) that, regarding EA projects, the efficient market hypothesis is not at all valid (that is, I think it's a poor way to model the situation that will lead you to make systematically wrong judgments). I think the main reasons for this are:

  1. EA and the availability of lots of funding for it are relatively new — there's just not that much time for "market ineffi
... (read more)
1
Chris Lonsberry
2y
I don't see the connection between EMH and EA projects. Can you elaborate on how those two intersect?

If you're an EA who's just about to graduate, you're very involved in the community, and most of the people you think are really cool are EAs, I think there's a decent chance you're overrating jobs at EA orgs in your job search. Per the common advice, I think most people in this position should be looking primarily at the "career capital" their first role can give them (skills, connections, resume-building, etc.) rather than the direct impact it will let them have.

At first blush it seems like this recommends you should almost never take an EA job early in ... (read more)

1
billzito
2y
I agree with this take (and also happen to be sitting next to Eli right now talking to him about it :). I think working at a fast-growing startup in an emerging technology is one of the best opportunities for career capital: https://forum.effectivealtruism.org/posts/ejaC35E5qyKEkAWn2/early-career-ea-s-should-consider-joining-fast-growing 

Thanks for posting this. I found a lot of it resonant — particularly the stuff about inventing reasons to discount positive feedback, and having to pile on more and more unlikely beliefs to avoid updating to "I'm good at this."

I remember, fairly recently, taking seriously some version of "I'm not actually good at this stuff, I'm just absurdly skilled at fooling others into thinking that I am." I don't know man, it seemed like a pretty good hypothesis at the time.

1
calebp
2y
I think this might be one of my current hypotheses for why I am doing what I am doing. Or maybe I think it's ~60% likely I'm ok at my job, and 40% likely I have fooled other people into thinking I'm ok at my job.

One can't stack the farmed animal welfare multiplier on top of the ones about giving malaria nets or the one about focusing on developing countries, right? E.g. you can't give chickens malaria nets.

It seems like that one requires 'starting from scratch' in some sense. There might be analogies to the human case (e.g. don't focus on your pampered pets), but they still need to be argued.

So I think the final number should be lower. (It's still quite high, of course!)

The way I framed it was unclear, but the final number is correct because I was comparing the QALYs/$ of farmed animal interventions to that of malaria nets. See the footnote:

Assumes 40 chicken QALYs/$, 1 human QALY/$100, and that 400 chicken QALY = 1 human QALY due to neuron differences. Ana's moral circle includes all beings weighted by neuron count, but she hadn't thought about this enough.

I was directly comparing the following rough estimates 

  • 40 chicken QALY/$ generated by broiler and cage-free interventions (Rethink Priorities has a mean of 41)
  • 0.0
... (read more)
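(For concreteness, here's the arithmetic implied by the footnote's assumptions; the 0.01 human QALY/$ figure just restates "1 human QALY/$100".)

```python
# Arithmetic implied by the footnote's assumptions (not settled estimates).

chicken_qaly_per_dollar = 40         # broiler/cage-free interventions
chickens_per_human_qaly = 400        # neuron-count discount: 400 chicken QALY = 1 human QALY
human_qaly_per_dollar_nets = 1 / 100 # malaria nets: 1 human QALY per $100

# Convert the chicken figure into human-equivalent QALYs per dollar.
human_equiv_per_dollar = chicken_qaly_per_dollar / chickens_per_human_qaly  # 0.1

# Multiplier of farmed-animal interventions over malaria nets under
# these assumptions:
print(human_equiv_per_dollar / human_qaly_per_dollar_nets)  # 10.0
```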

Just a reminder that the deadline for applications is this Friday, March 25th.

Hmm. Thanks for the example of the "pure time" mapping of t --> mental states. It's an interesting one. It reminds me of Max Tegmark's mathematical universe hypothesis at "level 4," where, as far as I understand, all possible mathematical structures are taken to "exist" equally. This isn't my current view, in part because I'm not sure what it would mean to believe this.

I think the physical dust mapping is meaningfully different from the "pure time" mapping. The dust mapping could be defined by the relationships between dust specks. E.g. at each tim... (read more)

I downvoted the OP because it doesn't seem to be suited to this forum. The author's experiences are interesting, but I don't think the post contains an attempt to explore the potential cause area impartially.

Hmm, so from my point of view this post is written with the intention to bring a class of interventions to the attention of people who care about mental health, which seems good to me.

I agree this post is not an impartial exploration, but I tentatively would like this forum to also be welcoming to new altruistically minded people who have a new idea about helping people, even ideas that are in their infancy with respect to the question of whether they're among the most effective ways to do good.

So instead of feedback that their initial thoughts don’t belong here, I’... (read more)
