All of Eli Rose🔸's Comments + Replies

It is popular to hate on Swapcard, and yet Swapcard seems like the best available solution despite its flaws. Claude Code or other AI coding assistants are very good nowadays, and conceivably someone could just Claude Code a better Swapcard that maintained feature parity without its flaws.

Overall I'm guessing this would be too hard right now, but we do live in an age of mysteries and wonders. It gets easier every month. One reason for optimism is it seems like the Swapcard team is probably not focused on the somewhat odd use case of EAGs in general (... (read more)

4
Patrick Gruban 🔸
@Yonatan Cale posted a demo in the EAG Bay Area Slack last week of an app he's building.
  1. Anecdotally, the authors of this post have now persuaded nearly 10% of the students at their university to take a GWWC pledge (trial or full), while the pledge rate at most universities is well under 1%. After getting to know these authors, I believe this incredible success is due to their kindness, charisma, passion and integrity - not because the audience at their university is fundamentally different than at other universities. 

Wow, that boggles my mind, especially as someone who attended a similar school for undergrad. Anywhere we can read ab... (read more)

5
Sam Anschell
Yes that's right! I'll message them to see if they're able to share more info on this thread

Unsurprising that Buggs approve of this topic.

I like the core point and think it's very important — though I don't really vibe with statements about calibration being actively dangerous.

I think EA culture can make it seem like being calibrated is the most important thing ever. But I think on the topic of "will my ambitious projects succeed?" it seems very difficult to be calibrated and fairly cursed overall, and it may be unhelpful to try super hard at this vs. just executing.

For example, I'm guessing that Norman Borlaug didn't feed a billion people primarily by being extremely well-calib... (read more)

1
meugen
These are great thoughts, thank you so much for the comment Eli!

Yeah, it's totally a contextual call how to make this point in any given conversation; it can be easy to get bogged down with irrelevant context.

I do think it's true that utilitarian thought tends to push one towards centralization and central planning, despite the bad track record here. It's worth engaging with thoughtful critiques of EA vibes on this front.

Salaries are the most basic way our economy does allocation, and one possible "EA government utopia" scenario is one where the government corrects market inefficiencies such that salaries perfectly tr... (read more)

I like the main point you're making.

However, I think "the government's version of 80,000 Hours" is a very command-economy vision. Command economies have a terrible track record, and if there were such a thing as an "EA world government" (which I would have many questions about regardless) I would strongly think it shouldn't try to plan and direct everyone's individual careers, and should instead leverage market forces like ~all successful large economies.

6
Toby Tremlett🔹
Lol yep, that's fair. Surprisingly, this is never the direction the conversation has gone after I've shared this thought experiment. Maybe it should be more like: in a world where resources are allocated according to EA priorities (allocation method left unspecified), 80,000 Hours would be likelier to tell someone to be a postal worker than an AI safety researcher... Bit less catchy though.

+1 on wanting a more model-based version of this.

And +1 to you vibe coding it!

Upon seeing this, I had the same thought about vibe coding a more model-based version ... so, race you to see who gets around to it first?

1
Clara Torres Latorre 🔸
I started, and then realised how complicated it is to choose a set of variables and weights that make sense of "how privileged am I" or "how lucky am I". I have an MVP (but ran out of free LLM assistance), and right now the biggest downside is that if I include several variables, the results tend to be far from the top, and I don't know what to do about this. For instance, say that in "healthcare access", having good public coverage puts you in the top 10% bracket (number made up). Then, if you pick 95% as the reference point for that variable, any weighted average that includes it will fall some distance short of the top. So just a weighted average of different questions is not good enough, I guess. We can discuss and workshop it if you want.
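A minimal toy sketch of this issue (the categories, weights, and caps below are all made up for illustration): if any variable's best achievable score sits below 100%, a plain weighted average is capped below the top too. One possible workaround is to rescale each answer by the best score its question can actually award.

```python
# Toy illustration (made-up categories, weights, and caps): a plain weighted
# average of percentile answers can never reach the "top" if any category
# maxes out below 100%.

categories = {
    # name: (your percentile, weight, best percentile the question can award)
    "income":            (0.99, 0.4, 1.00),
    "healthcare_access": (0.90, 0.3, 0.90),  # e.g. best public coverage ~ 90th percentile
    "education":         (0.97, 0.3, 1.00),
}

total_weight = sum(w for _, w, _ in categories.values())

plain_avg = sum(p * w for p, w, _ in categories.values()) / total_weight
ceiling = sum(cap * w for _, w, cap in categories.values()) / total_weight  # < 1.0

# Possible fix: normalize each answer by the best score its question can give,
# so someone at the cap in every category scores 1.0 overall.
rescaled_avg = sum((p / cap) * w for p, w, cap in categories.values()) / total_weight

print(f"plain weighted average:       {plain_avg:.3f}")
print(f"ceiling of the plain average: {ceiling:.3f}")
print(f"rescaled weighted average:    {rescaled_avg:.3f}")
```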

I mostly donated to democracy preservation work and did some political giving. And a little to the shrimp.

Wow awesome thanks for letting me know!

Thanks for writing this!!

This risk seems equal to or greater than AI takeover risk to me. Historically the EA & AIS communities focused more on misalignment, but I'm not sure that choice has held up.

Come 2027, I'd love for it to be the case that an order of magnitude more people are usefully working on this risk. I think it will be rough going for the first 50 people in this area; I expect there's a bunch more clarificatory and scoping work to do; this is virgin territory. We need some pioneers.

People with plans in this area should feel free to apply f... (read more)

I'm quite excited about EAs making videos about EA principles and their applications, and I think this is an impactful thing for people to explore. It seems quite possible to do in a way that doesn't compromise on idea fidelity; I think sincerity counts for quite a lot. In many cases I think videos and other content can be lighthearted / fun / unserious and still transmit the ideas well.

I think the vast majority of people making decisions about public policy or who to vote for either aren't ethically impartial, or they're "spotlighting", as you put it. I expect the kind of bracketing I'd endorse upon reflection to look pretty different from such decision-making.

But suppose I want to know who of two candidates to vote for, and I'd like to incorporate impartial ethics into that decision. What do I do then?

That said, maybe you're thinking of this point I mentioned to you on a call

Hmm, I don't recall this; another Eli perhaps? : )

3
mal_graham🔸
@Eli Rose🔸 I think Anthony is referring to a call he and I had :)

@Anthony DiGiovanni I think I meant more like there was a justification of the basic intuition bracketing is trying to capture as being similar to how someone might make decisions in their life, where we may also be clueless about many of the effects of moving home or taking a new job, but still move forward. But I could be misremembering!

Just read your comment more carefully and I think you're right that this conversation is what I was thinking of.

(vibesy post)

People often want to be part of something bigger than themselves. At least for a lot of people this is pre-theoretic. Personally, I've felt this since I was little: to spend my whole life satisfying the particular desires of the particular person I happened to be born into the body of, seemed pointless and uninteresting.

I knew I wanted "something bigger" even when I was young (e.g. 13 years old). Around this age my dream was to be a novelist. This isn't a kind of desire people would generally call "altruistic," nor would my younger self have c... (read more)

Like if you're contemplating running a fellowship program for AI-interested people, and you have animals in your moral circle, you're going to have to build this botec that includes the probability that X% of the people you bring into the fellowship are not going to care about animals and are likely, if they get a policy role, to pass policies that are really bad for them...

...I sort of suspect that only a handful of people are trying to do this, and I get why! I made a reasonably straightforward botec for calculating the benefits to birds of bird-safe glass, th

... (read more)
3
mal_graham🔸
All very interesting, and yes let's talk more later!

One quick thing: sorry my comment was unclear -- when I said "precise probabilities" I meant the overall approach, which amounts to trying to quantify everything about an intervention when deciding its cost effectiveness (perhaps the post was also unclear).

I think most people in EA/AW spaces use the general term "precise probabilities" the same way you're describing, but perhaps there is on average a tendency toward the more scientific style of needing more specific evidence for those numbers. That wasn't necessarily true of early actors in the WAW space and I think it had some mildly unfortunate consequences.

But this makes me realize I should not have named the approach that way in the original post, and should have called it something like the "quantify as much as possible" approach. I think that approach requires using precise probabilities -- since if you allow imprecise ones you end up with a lot of things being indeterminate -- but there's more to it than just endorsing precise probabilities over imprecise ones (at least as I've seen it appear in WAW).

I just remembered Matthew Barnett's 2022 post My Current Thoughts on the risks from SETI which is a serious investigation into how to mitigate this exact scenario.

That does seem right, thanks. I intended to include dictator-ish human takeover there (which seems to me to be at least as likely as misaligned AI takeover) as well, but didn't say that clearly.

Edited to "relatively amoral forces" which still isn't great but maybe a little clearer.

Enjoyed this post.

Maybe I'll speak from an AI safety perspective. The usual argument among EAs working on AI safety is:

  1. the future is large and plausibly contains much goodness
  2. today, we can plausibly do things to steer (in expectation) towards achieving this goodness and away from catastrophically losing it
  3. the invention of powerful AI is a super important leverage point for such steering

This is also the main argument motivating me — though I retain meaningful meta-uncertainty and am also interested in more commonsense motivations for AI safety work.

A lot of... (read more)

7
mal_graham🔸
Thanks Eli! I sort of wonder if some people in the AI community -- and maybe you, from what you've said here? -- are using precise probabilities to get to the conclusion that you want to work primarily on AI stuff, and then spotlighting to that cause area when you're analyzing at the level of interventions.

I think someone using precise probabilities all the way down is building a lot more explicit models every time they consider a specific intervention. Like if you're contemplating running a fellowship program for AI-interested people, and you have animals in your moral circle, you're going to have to build this botec that includes the probability that X% of the people you bring into the fellowship are not going to care about animals and are likely, if they get a policy role, to pass policies that are really bad for them. And all sorts of things like that. So your output would be a bunch of hypotheses about exactly how these fellows are going to benefit AI policy, and some precise probabilities about how those policy benefits are going to help people, and possibly animals, to what degree, etc.

I sort of suspect that only a handful of people are trying to do this, and I get why! I made a reasonably straightforward botec for calculating the benefits to birds of bird-safe glass, that accounted for backfire to birds, and it took a lot of research effort. If you asked me how bird-safe glass policy is going to affect AI risk after all that, I might throw my computer at you. But I think the precise probabilities approach would imply that I should.

Re robustness comparisons: I'm definitely interested, but not always sure how they would work, especially given uncertainty about what robustness means. I suspect some of these things will hinge on how optimistic you are about the value of life. I think the animal community attracts a lot more folks who are skeptical about humans being good stewards of the world, and so are less convinced that a rogue AI would be worse in e...

But lots of the interventions in 2. seem to also be helpful for getting things to go better for current farmed and wild animals, e.g. because they are aimed at avoiding a takeover of society by forces which don't care at all about morals

Presumably misaligned AIs are much less likely than humans to want to keep factory farming around, no? (I'd agree the case of wild animals is more complicated, if you're very uncertain or clueless whether their lives are good or bad.)

No one is dying of not reading Proust, but many people are leading hollower and shallower lives because the arts are so inaccessible.

Tangential to your main point, and preaching to the choir, but... why are "the arts" "inaccessible"? The Internet is a huge revolution in the democratization of art relative to most of human history; TV dramas are now much more complex and interesting than they were in the past; A24 is pumping out tons of weird/interesting movies; and way more people are making and distributing interesting music than before.

I think (and t... (read more)

Vince Gilligan (the Breaking Bad guy) has a new show Pluribus which is many things, but also illustrates an important principle, that being (not a spoiler I think since it happens in the first 10 minutes)...

If you are SETI and you get an extraterrestrial signal which seems to code for a DNA sequence...

DO NOT SYNTHESIZE THE DNA AND THEN INFECT A BUNCH OF RATS WITH IT JUST TO FIND OUT WHAT HAPPENS. 

Just don't. Not a complicated decision. All you have to do is go from "I am going to synthesize the space sequence" to "nope" and look at that, x-risk averted. You're a hero. Incredible work.

6
Eli Rose🔸
I just remembered Matthew Barnett's 2022 post My Current Thoughts on the risks from SETI which is a serious investigation into how to mitigate this exact scenario.
7
Yarrow Bouchard 🔸
I work at SETI and this simply isn't realistic because we have to use our large supply of experimental rats every quarter. What else would you propose we do with them?

One note: I think it would be easy for this post to be read as "EA should be all about AGI" or "EA is only for people who are focused on AGI."

I don't think that is or should be true. I think EA should be for people who care deeply about doing good, and who embrace the principles as a way of getting there. The empirics should be up for discussion.

2
William_MacAskill
Thanks! I agree strongly with that.

Appreciated this a lot, agree with much of it.

I think EAs and aspiring EAs should try their hardest to incorporate every available piece of evidence about the world when deciding what to do and where to focus their efforts. For better or worse, this includes evidence about AI progress.

The list of important things to do under the "taking AI seriously" umbrella is very large, and the landscape is underexplored so there will likely be more things for the list in due time. So EAs who are already working "in AI safety" shouldn't feel like their cause prioritiza... (read more)

8
Eli Rose🔸
One note: I think it would be easy for this post to be read as "EA should be all about AGI" or "EA is only for people who are focused on AGI." I don't think that is or should be true. I think EA should be for people who care deeply about doing good, and who embrace the principles as a way of getting there. The empirics should be up for discussion.

Thanks for writing this Arden! I strong upvoted.

I do my work at Open Phil — funding both AIS and EA capacity-building — because I'm motivated by EA. I started working on this in 2020, a time when there were way fewer concrete proposals for what to do about averting catastrophic AI risks & way fewer active workstreams. It felt like EA was necessary just to get people thinking about these issues. Now the catastrophic AI risks field is much larger and somewhat more developed, as you point out. And so much the better for the world!

But it seems so far ... (read more)

Am I right that a bunch of the content of this response itself was written by an AI?

I enjoyed this, in particular:

the inner critic is actually a kind of ego trip

which resonates for me.

I personally experience my inner critic as something which often prevents me from "seeing the world clearly" including seeing good things I've done clearly, and seeing my action space clearly. It's odd that this is true, because you'd think the point of criticism is to check optimism and help us see things more clearly. And I find this to be very true of other people's criticism, and for some mental modes of critiquing my own plans.

But the distinct flavor of... (read more)

And placing some weight on the prediction that the curve will simply continue[1] seems like a useful heuristic / counterbalance (and has performed well).

"and has performed well" seems like a good crux to zoom in on; for which reference class of empirical trends is this true, and how true is it?

It's hard to disagree with "place some weight"; imo it always makes sense to have some prior that past trends will continue. The question is how much weight to place on this heuristic vs. more gears-level reasoning.

For a random example, observers in 2009 might h... (read more)

I'm skeptical of an "exponentials generally continue" prior which is supposed to apply super-generally. For example, here's a graph of world population since 1000 AD; it's an exponential, but actually there are good mechanistic reasons to think it won't continue along this trajectory. Do you think it's very likely to?

[Graph: world population since 1000 AD]

2
Lizka
I tried to clarify things a bit in this reply to titotal: https://forum.effectivealtruism.org/posts/iJSYZJJrLMigJsBeK/lizka-s-shortform?commentId=uewYatQz4dxJPXPiv  In particular, I'm not trying to make a strong claim about exponentials specifically, or that things will line up perfectly, etc.  (Fwiw, though, it does seem possible that if we zoom out, recent/near-term population growth slow-downs might be functionally a ~blip if humanity or something like it leaves the Earth. Although at some point you'd still hit physical limits.)
8
Ben_West🔸
I don't personally have well-developed thoughts on population growth, but note that "population growth won't continue to be exponential" is a prediction with a notoriously bad track record.

It turns out, these managed hives, they're just un-bee-leave-able.

2
NickLaing
Winning comment strong upvote

I think there's really something to this, as a critique of both EA intellectual culture & practice. Deep in our culture is a type of conservatism and a sense that if something is worth doing, one ought to be able to legibly "win a debate" against all critiques of it. I worry this chokes off innovative approaches, and misses out on the virtues of hits-based giving.

However, there are really a wide variety of activities that EAs get up to, and I think this post could be improved by deeper engagement with the many EA activities that don't fit the bednet mo... (read more)

I'm not an axiological realist, but it seems really helpful to have a term for that position, upvoted.

Broadly, and off-topic-ally, I'm confused why moral philosophers don't always distinguish between axiology (valuations of states of the world) and morality (how one ought to behave). People seem to frequently talk past each other for lack of this distinction. For example, they object to valuing a really large number of moral patients (an axiological claim) on the grounds that doing so would be too demanding (a moral claim). I first learned these terms from https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ which I recommend.

However, some of the public stances he has taken make it difficult for grantmakers to associate themselves with him. Even if OP were otherwise very excited to fund AISC, it would be political suicide for them to do so. They can’t even get away with funding university clubs.

(I lead the GCR Capacity Building team at Open Phil and have evaluated AI Safety Camp for funding in the past.)

AISC leadership's involvement in Stop AI protests was not a factor in our no-fund decision (which was made before the post you link to).

For AI safety talent programs, I think it... (read more)

1
gergo
Hey! Thanks, this is really useful input! :) I will update this bit of the text and link to your comment

I edited this post on January 21, 2025, to reflect that we are continuing funding stipends for graduate student organizers for non-EA groups, while stopping funding stipends for undergraduate student organizers. I think that paying grad students for their time is less unconventional than for undergraduates, and also that their opportunity cost is higher on average. Ignoring this distinction was an oversight in the original post.

Hey! I lead the GCRCB team at Open Philanthropy, which as part of our portfolio funds "meta EA" stuff (e.g. CEA).

I like the high-level idea here (haven't thought through the details).

We're happy to receive proposals like this for media communicating EA ideas and practices. Feel free to apply here, or if you have a more early-stage idea, feel free to DM me on here with a short description — no need for polish — and I'll get back to you with a quick take about whether it's something we might be interested in. : )

8
akash 🔸
Related Q: is there a list of EA media projects that you would like to see more of but that don't currently exist?

What is the base rate for Chinese citizens saying on polls that the Chinese government should regulate X, for any X?

4
Nick Corvino
Great question! I wish I knew the answer. Of all the Chinese surveys we looked through, to my knowledge, that question was never asked. (I think it might be a bit of a taboo.) 

I thought this was interesting & forceful, and am very happy to see it in public writing.

The full letter is available here — it was recently posted online as part of this tweet thread.

(meta musing) The conjunction of the negations of a bunch of statements seems a bit doomed to get a lot of disagreement karma, sadly. Esp. if the statements being negated are "common beliefs" of people like the ones on this forum.

I agreed with some of these and disagreed with others, so I felt unable to agreevote. But I strongly appreciated the post overall so I strong-upvoted.

  1. Similar to that of our other roles, plus experience running a university group as an obvious one — I also think that extroversion and proactive communication are somewhat more important for these roles than for others.
  2. Going to punt on this one as I'm not quite sure what is meant by "systems."
  3. This is too big to summarize here, unfortunately.
  1. Check out "what kinds of qualities are you looking for in a hire" here. My sense is we index less on previous experience than many other organizations do (though it's still important). Experience juggling many tasks, prioritizing, and syncing up with stakeholders jumps to mind. I have a hypothesis that consultant experience would be helpful for this role, but that's a bit conjectural.
  2. This is a bit TBD — happy to chat more further down the pipeline with any interested candidates.
  3. We look for this in work tests and in previous experience.
  1. The CB team continuously evaluates the track record of grants we've made when they're up for renewal, and this feeds into our sense of how good programs are overall. We also spend a lot of time keeping up with what's happening in CB and in x-risk generally, and this feeds into our picture of how well CB projects are working.
  2. Check out "what kinds of qualities are you looking for in a hire" here.
  3. Same answer as 2.

Empirically, in hiring rounds I've previously been involved in for my team at Open Phil, it has often seemed to be the case that if the top 1-3 candidates just vanished, we wouldn't make a hire. I've also observed hiring rounds that concluded with zero hires. So, basically I dispute the premise that the top applicants will be similar in terms of quality (as judged by OP).

I'm sympathetic to the take "that seems pretty weird." It might be that Open Phil is making a mistake here, e.g. by having too high a bar. My unconfident best-guess would be that our bar h... (read more)

1
JoshuaBlake
Thank you - this is a very useful answer

Thanks for the reply.

I think "don't work on climate change[1] if it would trade off against helping one currently identifiable person with a strong need" is a really bizarre/undesirable conclusion for a moral theory to come to, since if widely adopted it seems like this would lead to no one being left to work on climate change. The prospective climate change scientists would instead earn-to-give for AMF.

  1. ^

    Or bettering relations between countries to prevent war, or preventing the rise of a totalitarian regime, etc.

2
Linch
I think this argument doesn't quite go through as stated, because AMF doesn't have an infinite funding gap. If everybody on Earth (or even, say, 10% of the richest 10% of people) acted on the version of contractualism that mandated donating significantly to AMF as a way to discharge their moral obligations, we'd be well past the point where anybody who wants and needs a bednet can have one. That said, I think a slightly revised version of your argument can still work: in a contractualist world, people should be willing to give almost unlimited resources to a single identifiable victim rather than work on large-scale moral issues or have fun.

Moreover, it’s common to assume that efforts to reduce the risk of extinction might reduce it by one basis point—i.e., 1/10,000. So, multiplying through, we are talking about quite low probabilities. Of course, the probability that any particular poor child will die due to malaria may be very low as well, but the probability of making a difference is quite high. So, on a per-individual basis, which is what matters given contractualism, donating to AMF-like interventions looks good.

 

It seems like a society where everyone took contractualism to heart mi... (read more)

5
Bob Fischer
Good question, Eli. I think a lot here depends on keeping the relevant alternatives in view. The question is not whether it's permissible to coordinate climate change mitigation efforts (or what have you). Instead, the question is whether we owe it to anyone to address climate change relative to the alternatives. And when you compare the needs of starving children or those suffering from serious preventable diseases, etc., to those who might be negatively affected by climate change, it becomes a lot more plausible that we don't owe it to anyone to address those things over more pressing needs (assuming we have a good chance of doing something about those needs / moving the needle significantly / etc.).

So, it may be true that some x-risk-oriented interventions can help us all avoid a premature death due to a global catastrophe; maybe they can help ensure that many future people come into existence. But how strong is any individual's claim to your help to avoid an x-risk or to come into existence? Even if future people matter as much as present people (i.e., even if we assume that totalism is true), the answer is: Not strong at all, as you should discount it by the expected size of the benefit and you don’t aggregate benefits across persons. Since any giv

... (read more)
5
Bob Fischer
Thanks for your question, Eli. The contractualist can say that it would be callous, uncaring, indecent, or invoke any number of other virtue theoretic notions to explain why you shouldn't leave broken glass bottles in the woods. What they can't say is that, in some situation where (a) there's a tradeoff between some present person's weighty interests and the 20-years-from-now young child's interests and (b) addressing the present person's weighty interests requires leaving the broken glass bottles, the 20-years-from-now young child could reasonably reject a principle that exposed them to risk instead of the present person's. Upshot: they can condemn the action in any realistic scenario.   

(I'm a trustee on the EV US board.)

Thanks for checking in. As Linch pointed out, we added Lincoln Quirk to the EV UK board in July (though he didn't come through the open call). We also have several other candidates at various points in the recruitment pipeline, but we've put this a bit on the back burner, both because we wanted to resolve some strategic questions before adding people to the board and because we've had less capacity than we expected.

Having said that, we were grateful for all the applications and nominations which we received in that in... (read more)
