Quick takes


Does anybody know if the Trump EO on instituting "most favored nation" guarantees on drugs sold in the US will affect prices in developing countries or just rich industrialized ones? 

The text of the EO implies that it's to address imbalances between the US and other developed countries (AI summary).

The Executive Order states that "Americans should not be forced to subsidize low-cost prescription drugs and biologics in other developed countries, and face overcharges for the same products in the United States."


When describing potential importation of dr

... (read more)

Drug prices in the US are often absurdly high and not super relevant to other developed countries, let alone low-income countries. New Zealand, for example, buys medications through a different system, usually far, far cheaper than the US does.

And it's almost an unrelated, parallel drug market in places like Uganda compared with the US, with competing Indian companies selling drugs here; it's amazing how cheap they really are. Some examples:

1. Amoxicillin, 100 tablets, 250 mg: $1.50
2. Doxycycline, 100 tablets, 100 mg: $2.20
3. Diclofenac gel (Voltaren Gel): $0... (read more)

7
Francis
As far as I can tell, the direct effects of the order are only about drug pricing in developed countries, despite the phrasing. The text of the executive order states (bolding added):

The reporting I've seen on the EO that has said anything explicitly in either direction has also suggested that it would only apply to drug pricing in developed countries -- e.g. here's the AP:

Here's a post expressing concern about the potential effects on low-income countries, which still asserts that this particular order is only about drug pricing in developed countries:

Question: how to reconcile the fact that expected value is linear with preferences being possibly nonlinear?

Example: people are typically willing to pay more than expected value for a small chance of a big benefit (lottery), or to remove a small chance of a big loss (insurance).

This example could be rejected as a "mental bias" or "irrational". However, it is not obvious to me that linearity is a virtue, and even if it is, we are human and our subjective experience is not linear.

2
NunoSempere
1. Look into logarithmic utility of money; there is some rich literature here.
2. For an altruistic actor, money becomes more linear again, but I don't have a quick reference here.
  1. Thank you for pointing out log utility, I am aware of this model (and also other utility functions). Any reasonable utility function is concave (diminishing returns), which can explain insurance to some extent but not lotteries.
  2. I could imagine that, for an altruistic actor, altruistic utility becomes "more linear" if it's a linear combination of the utility functions of the recipients of help. This might be defensible, but it is not obvious to me unless that actor is utilitarian, at least in their altruistic actions.
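
A minimal sketch of the asymmetry discussed in this thread (plain Python, log utility, and made-up numbers; illustrative assumptions only, not anyone's endorsed model): a concave utility makes an agent willing to pay more than the expected loss to remove a small risk, while valuing a lottery ticket at less than its expected value.

```python
import math

def expected_utility(outcomes, wealth, u=math.log):
    """Expected utility of a gamble; `outcomes` is a list of (probability, payoff) pairs."""
    return sum(p * u(wealth + x) for p, x in outcomes)

def certainty_equivalent(outcomes, wealth, u=math.log, u_inv=math.exp):
    """Sure payment the agent values exactly as much as the gamble (u_inv must invert u)."""
    return u_inv(expected_utility(outcomes, wealth, u)) - wealth

wealth = 10_000.0  # illustrative starting wealth

# Insurance: 1% chance of losing 9,000, otherwise nothing happens.
loss = [(0.01, -9_000.0), (0.99, 0.0)]
fair_premium = -sum(p * x for p, x in loss)         # 90, the expected loss
max_premium = -certainty_equivalent(loss, wealth)   # ~228 for a log-utility agent
print(f"insurance: fair premium {fair_premium:.0f}, agent pays up to {max_premium:.0f}")

# Lottery: 1-in-100,000 chance of winning 1,000,000, otherwise nothing.
win = [(0.00001, 1_000_000.0), (0.99999, 0.0)]
expected_value = sum(p * x for p, x in win)         # 10
ticket_value = certainty_equivalent(win, wealth)    # ~0.46 for the same agent
print(f"lottery: expected value {expected_value:.2f}, agent pays up to {ticket_value:.2f}")
```

With these toy numbers the agent pays roughly 2.5 times the actuarially fair premium for insurance but values the lottery ticket at a small fraction of its expected value, which is the sense in which concavity can explain insurance but not lottery play.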

Potential Megaproject: 'The Cooperation Project' (or the like)

This is a very loose idea, based on observations like these:

  • We have ongoing geopolitical tensions (e.g. China-US, China-Taiwan, Russia-Ukraine) and a lot of resources and attention spent on those.
  • We have (increasing?) risks from emerging technology that potentially threaten everyone. It's difficult to estimate the risk levels, but there seems to be an emerging consensus that we are on a reckless path, even from perspectives concerned purely with individual or national self-interest.

The project w... (read more)

I'm in favor of exploring interesting areas, and broadly sympathetic to there being more work in this area. 

I'd quickly note that the framing of "megaproject" seems distracting to me. I think the phrase really made sense in a very narrow window of time when EAs were flush with cash, and/or for very specific projects that really need it. But generally "megaproject" is an anti-pattern.

Showing 3 of 5 replies

Excited to read your work, Seth. Thanks for sharing!

4
Angelina Li
Neat! Consider link-posting this as a top-level post to make it easier to engage with?
2
Seth Ariel Green 🔸
I think if I end up writing something that's particularly EA-aligned, e.g. a cost-benefit analysis of some intervention, I'd do that. As is, I'm happy to err on the side of not annoying people when promoting my stuff 😃

In response to Caviola, L., Schubert, S., & Greene, J. D. (2021). The psychology of (in)effective altruism.

I have issues with EA in general in fundamental ways, so much so that reading this paper made me dig in more and write this 2,000-word post out of sheer frustration with the pride in it. One thing that really stands out reading this paper is how much EA positions itself as offering an almost irrefutable logic: maximize your positive impact by supporting only the most “effective” causes, and anything less is, at best, an error and, at worst, a... (read more)

I sometimes say, in a provocative/hyperbolic sense, that the concept of "neglectedness" has been a disaster for EA. I do think the concept is significantly over-used (ironically, it's not neglected!), and people should just look directly at the importance and tractability of a cause at current margins.

Maybe neglectedness is useful as a heuristic for scanning thousands of potential cause areas. But ultimately, it's just a heuristic for tractability: how many resources are going towards something is evidence about whether additional resources are likely to be i... (read more)

Showing 3 of 14 replies

I have a post about this sitting in my drafts. I think I'll just delete it and tell people to read this quick take instead. Strong upvote. 

1
Jordan Arel
Hey Trevor, it’s been a while, I just read Kuhan’s quick take which referred to this quick take, great to see you’re still active!

This is very interesting. I’ve been evaluating a cause area I think is very important and potentially urgent—something like the broader class of interventions of which “the long reflection” and “coherent extrapolated volition” are examples, essentially how do we make sure the future is as good as possible conditional on aligned advanced AI.

Anyways, I found it much easier to combine tractability and neglectedness into what I called “marginal tractability,” meaning how easy is it to increase success of a given cause area by, say, 1% at the current margin. I feel like trying to abstractly estimate tractability independent of neglectedness was very awkward, and not scalable; i.e. tractability can change quite unpredictably over time, so it isn’t really a constant factor, but something you need to keep reevaluating as conditions change.

Asking the tractability question “If we doubled the resources dedicated to solving this problem, what fraction of the problem would we expect to solve?” isn’t a bad trick, but in a cause area that is extremely neglected this is really hard to do because there are so few existing interventions, especially measurable ones. In this case investigating some of the best potential interventions is really helpful.

I think you’re right that the same applies when investigating specific interventions. Neglectedness is still a factor, but it’s not separable from tractability; marginal tractability is what matters, and that’s easiest to investigate by actually looking at the interventions to see how effective they are at the current margin.

I feel like there’s a huge amount of nuance here, and some of the above comments were good critiques… But for now gotta continue on the research. The investigation is at about 30,000 words, need to finish, lightly edit, and write some shorter explainer versions, woul
6
David_Moss
That's interesting, but seems to be addressing a somewhat separate claim to mine. My claim was that broad heuristics are more often necessary and appropriate when engaged in abstract evaluation of broad cause areas, where you can't directly assess how promising concrete opportunities/interventions are, and less so when you can directly assess concrete interventions.

If I understand your claims correctly, they are that:

  • Neglectedness is more likely to be misleading when applied to broad cause areas
  • When considering individual solutions, it's useful to consider whether the intervention has already been tried

I generally agree that applying broad heuristics to broad cause areas is more likely to be misleading than when you can assess specific opportunities directly. Implicit in my claim is that where you don't have to rely on broad heuristics, but can assess specific opportunities directly, then this is preferable.

I agree that considering whether a specific intervention has been tried before is useful and relevant information, but don't consider that an application of the Neglectedness/Crowdedness heuristic.
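
A minimal sketch of the "marginal tractability" framing discussed above (plain Python, toy numbers, and a logarithmic-returns model; the model and numbers are illustrative assumptions, not taken from the commenters): under diminishing returns, how crowded a cause already is enters only through the marginal term, so neglectedness folds into marginal tractability rather than acting as a separate factor.

```python
import math

def fraction_solved(resources: float, k: float) -> float:
    """Toy log-returns model: fraction of the problem solved after `resources` are spent."""
    return k * math.log(1 + resources)

def marginal_impact(importance: float, k: float, resources: float) -> float:
    """Extra good done by one more unit of resources at the current margin
    (derivative of importance * fraction_solved with respect to resources)."""
    return importance * k / (1 + resources)

# Two hypothetical causes: B is 10x less important but ~100x more neglected.
cause_a = dict(importance=100.0, k=0.05, resources=1_000.0)  # crowded, important
cause_b = dict(importance=10.0,  k=0.05, resources=10.0)     # neglected, less important

for name, c in [("cause A", cause_a), ("cause B", cause_b)]:
    solved = fraction_solved(c["resources"], c["k"])
    margin = marginal_impact(c["importance"], c["k"], c["resources"])
    print(f"{name}: {solved:.0%} solved so far, marginal impact {margin:.4f} per extra unit")
# cause A ~0.005 vs cause B ~0.045: the neglected cause wins at the margin here,
# but only because the diminishing-returns assumption is doing the work.
```

A side effect of the log-returns assumption is that doubling resources solves roughly an extra k·ln(2) of the problem (for anything but tiny budgets) regardless of current spending, which is one way of seeing what the "if we doubled the resources, what fraction of the problem would we solve?" question is trying to measure once neglectedness has been factored out.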

Would a safety-focused breakdown of the EU AI Act be useful to you?

The Future of Life Institute published a great high-level summary of the EU AI Act here: https://artificialintelligenceact.eu/high-level-summary/

What I’m proposing is a complementary, safety-oriented summary that extracts the parts of the AI Act that are most relevant to AI alignment researchers, interpretability work, and long-term governance thinkers. 

It would include:

  • Provisions related to transparency, human oversight, and systemic risks
  • Notes on how technical safety tools (e.g. inte
... (read more)

As a community builder, I've started donating directly to my local EA group—and I encourage you to consider doing the same.

Managing budgets and navigating inflexible grant applications consume valuable time and energy that could otherwise be spent directly fostering impactful community engagement. As someone deeply involved, I possess unique insights into what our group specifically needs, how to effectively meet those needs, and what actions are most conducive to achieving genuine impact.

Of course, seeking funding from organizations like OpenPhil remains ... (read more)

Thank you!

I also do the same - small amounts really do go a long way. Grant applications are a separate skill from community engagement, often not that scope-sensitive (i.e. too much work for the small sums involved), and getting any funding awards is difficult right now/being turned down for funding can be really off-putting. The empowerment of an invested volunteer is generally a pretty good use of materials money.

EA community building relies heavily on a few large donors. This creates risk.

One way to reduce that risk is to broaden the funding base. Membership models might help.[1]

Many people assume EA will only ever appeal to a small slice of the population, and so this funding would never amount to anything significant. However, I think people often underestimate how large a “small slice” can be.

Take the Dutch mountaineering association. A mountaineering club in one of the flattest countries on Earth doesn’t exactly scream mass appeal.

So, how many members do you t... (read more)

Showing 3 of 6 replies
4
James Herbert
I think you might be overestimating how much the NKBV offers as part of the basic membership. Most of their trips and courses, etc., are paid add-ons. What the €50 fee actually gets you is fairly lightweight: a magazine, eligibility to join trips (not free), discounted access to mountain huts (because the NKBV helps fund them), inclusion in their group insurance policy, and a 10% discount with a Dutch outdoor brand. That’s not nothing, but it’s modest, and it shows that people will pay for affiliation, identity, and access to community infrastructure, even if the tangible perks are limited.

The EA equivalent could be things like discounted or early access to EAG(x) events, member-only discussion groups, or eligibility to complete advanced courses offered by national EA associations. If multiple countries coordinated, pooled membership fees could help subsidise international EA public goods such as the Forum, EAG(x) events, group support infrastructure, etc.

I think the key point is this: the NKBV shows that people are willing to pay for affiliation, even if the direct perks are modest, as long as the organisation feels valuable to their identity and goals. EA can plausibly do the same.
7
Jason
Maybe, but this sounds to me a lot like erecting new pay gates for engagement with the community (both the membership fee and any extra fee for the advanced courses, etc.). Maybe that's unavoidable, but it does carry some significant downsides that aren't present with a mountaineering club (where the benefits of participation are intended to flow mainly to the participant rather than to third parties like animals or future people).

It also seems in tension with the current recruitment strategy by increasing barriers/friction to deeper engagement. And it seems that people most commonly become interested in EA in their 20s, an age at which imposing financial barriers to deeper engagement may be particularly negative.

While I think people would be okay lowering pay gates based on certain objectively applied markers of merit or need, I am not confident that this could be done in a way that both didn't impede "core" recruitment and that "supporter" members experienced as fair and acceptable. Most people don't want to pay for something others are getting for free / near-free without a sufficiently compelling reason.

You’re right to flag the risks of introducing pay gates. I agree it would be a mistake to charge for things that are currently core to how people first engage, especially given how many people first get involved in their 20s when finances are tight.

I think the case for a supporter membership model rests on keeping those core engagement paths free (intro courses, certain events, 1-1 advice, etc.), while offering membership as an optional way for people to express support, get modest perks, and help fund infrastructure.

I also think the contrast you draw betw... (read more)

 

I've now spoken to ~1,400 people as an advisor with 80,000 Hours, and if there's one quick thing I think is worth more people doing, it's a short reflection exercise on one's current situation.

Below are some (cluster of) questions I often ask in an advising call to facilitate this. I'm often surprised by how much purchase one can get simply from this -- noticing one's own motivations, weighing one's personal needs against a yearning for impact, identifying blind spots in current plans that could be triaged and easily addressed, etc... (read more)

https://economics.mit.edu/news/assuring-accurate-research-record

A really important paper on how AI speeds up R&D discovery was withdrawn, and the PhD student who wrote it is no longer at MIT.

I have $20 in unused RunPod.io credit (cloud GPU service) that I’m not using and can’t refund. 😢 I’d love to donate it to someone working on anything useful — whether it's for running models, processing data, or prototyping.

Feel free to message me if you want it.

I know that folks in EA often favor donating to more effective things rather than less effective things. With that in mind, I have mixed feelings knowing that many Harvard faculty are donating 10%, and that they are donating to the best-funded and most prestigious university in the world.

On the one hand, it is really nice to know that they are willing to put their money where their mouth is when their institution is under attack. I get some warm fuzzy feelings from the idea of defending an education institution against political attacks. On the other hand,... (read more)

Some notes about the graphs:

  • These are from a project I did several months ago using data from the Common Data Set, from College Scorecard, from their Form 990 tax filings, and some data from the college's websites.
  • The selection of the non-Harvard schools is fairly arbitrary. For that particular project I just wanted to select a few different types of schools (small liberal arts, more technical focused, etc.) rather than comparing Harvard to other 'hyper elite' schools.
  • I left the endowment graph non-logarithmic just to illustrate the ludicrous difference. Yes, I know it is bad design practice and that it obscures the numbers for the non-Harvard schools.

As a group organiser I was wildly miscalibrated about the acceptance rate for EAGs! I spoke to the EAG team, and here are the actual figures:
 

  • The overall acceptance rate for undergraduate students is about ¾! (2024)
  • For undergraduate first timers, it’s about ½ (Bay Area 2025)

If that’s piqued your interest, EAG London 2025 applications close soon - apply here!
Jemima

3
James Herbert
Ah that's great info! Would be useful to get similar numbers for EAGx events. I know the overall acceptance rate is quite high, but don't know how it is for students who are applying for their regional EAGx. 
  • EAGx undergraduate acceptance rate across 2024 and 2025 = ~82%
  • EAGx first-timer undergraduate acceptance rate across 2024 and 2025 = ~76%

Obvious caveat that if we tell lots of people that the acceptance rate is high, we might attract more people without any context on EA and the rate would go down.

(I've not closely checked the data)

why do i find myself less involved in EA?

epistemic status: i timeboxed the below to 30 minutes. it's been bubbling for a while, but i haven't spent that much time explicitly thinking about this. i figured it'd be a lot better to share half-baked thoughts than to keep it all in my head — but accordingly, i don't expect to reflectively endorse all of these points later down the line. i think it's probably most useful & accurate to view the below as a slice of my emotions, rather than a developed point of view. i'm not very keen on arguing about any of th... (read more)

"why do i find myself less involved in EA?"

You go over more details later and answer other questions, like what caused some reactions to some EA-related things, but an interesting thing here is that you are looking for the cause of something that is not there.

> it feels like looking at the world through an EA frame blinds myself to things that i actually do care about, and blinds myself to the fact that i'm blinding myself.

I can strongly relate; I had the same experience. I think it's due to a Christian upbringing or some kind of need for external validation. I think many people don't experience that, so I wouldn't say it's an inherently EA thing; it's more about the attitude.

 

5
Owen Cotton-Barratt
I appreciated you expressing this. Riffing out loud ... I feel that there are different dynamics going on here (not necessarily in your case; more in general):

1. The tensions where people don't act with as much integrity as is signalled
  • This is not a new issue for EA (it arises structurally despite a lot of good intentions, because of the encouragement to be strategic), and I think it just needs active cultural resistance
  • In terms of writing, I like Holden's and Toby's pushes on this; my own attempts here and here
  • But for this to go well, I think it's not enough to have some essays on reading lists; instead I hope that people try to practice good orientation here at lots of different scales, and socially encourage others to
2. The meta-blinding
  • I feel like I haven't read much on this, but it rings true as a dynamic to be wary of! Where I take the heart of the issue to be that EA presents a strong frame about what "good" means, and then encourages people to engage in ways that make aspects of their thinking subservient to that frame
3. As someone put it to me, "EA has lost the mandate of heaven"
  • I think EA used to be (in some circles) the obvious default place for the thoughtful people who cared a lot to gather and collaborate
  • I think that some good fraction of its value came from performing this role?
  • Partially as a result of 1 and 2, people are disassociating with EA; and this further reduces the pull to associate
  • I can't speak to how strong this effect is overall, but I think the directionality is clear

I don't know if it's accessible (and I don't think I'm well positioned to try), but I still feel a lot of love for the core of EA, and would be excited if people could navigate it to a place where it regained the mandate of heaven.
9
Ozzie Gooen
Thanks for clarifying your take! I'm sorry to hear about those experiences.  Most of the problems you mention seem to be about the specific current EA community, as opposed to the main values of "doing a lot of good" and "being smart about doing so." Personally, I'm excited for certain altruistic and smart people to leave the EA community, as it suits them, and do good work elsewhere. I'm sure that being part of the community is limiting to certain people, especially if they can find other great communities.  That said, I of course hope you can find ways for the key values of "doing good in the world" and similar to work for you. 

I feel like EAs might be sleeping a bit on digital meetups/conferences.

My impression is that many people prefer in-person events to online ones. But at the same time, a lot of people hate needing to be in the Bay Area / London or having to travel to events.

There was one EAG online during the pandemic (I believe the others were EAGxs), and I had a pretty good experience there. Some downsides, but some strong upsides. It seemed very promising to me.

I'm particularly excited about VR. I have a Quest3, and have been impressed by the experience of chatting to pe... (read more)

4
Arepo
Have you checked out the EA Gather? It's been languishing a bit for want of more input from me, but I still find it a really pleasant place for coworking, and it's had several events run or part-run on there - though you'd have to check in with the organisers to see how successful they were.
4
Ozzie Gooen
I assumed it's been mostly dead for a while (haven't heard about it for a few months). I'm very supportive of it, would like to see it (and more) do well. 

It's still in use, but it has the basic problem of EA services: unless there's something to announce, there's not really any socially acceptable way of advertising it.

Similar to "Greenwashing" and "Safetywashing", I've been thinking about "Intellectual Washing."

The pattern works like this: "Find someone who seems like an intellectual and who somewhat aligns with your position. Then claim you have strong intellectual (and by extension, logical) support for your views."


This is easiest to see on sides that you disagree with.

For example, MAGA gets intellectual cred from "The dark enlightenment" / Curtis Yarvin / Peter Thiel / etc. But I'm sure Trump never listened to any of these people, and was likely barely influenced by them. [1]

Hi... (read more)

What can ordinary people do to reduce AI risk? People who don't have expertise in AI research / decision theory / policy / etc.

Some ideas:

  • Donate to orgs that are working to reduce AI risk (which ones, though?)
  • Write letters to policy-makers expressing your concerns
  • Be public about your concerns. Normalize caring about x-risk

I have a bunch of disagreements with Good Ventures and how they are allocating their funds, but also Dustin and Cari are plausibly the best people who ever lived. 

Showing 3 of 8 replies
3
Saul Munn
fwiw i instinctively read it as the 2nd, which i think is caleb's intended reading
2
calebp
I was going for the second, adding some quotes to make it clearer.

Yeah, sorry: it was obvious to me that this was the intended meaning, after I realized it could be interpreted this way. I noted it because I found the syntactic ambiguity mildly interesting/amusing.

The UK offers better access as a conference location for international participants compared to the US or the EU.

I'm being invited to conferences in different parts of the world as a Turkish citizen, and visa processes for the US and the EU have gotten a lot more difficult lately. I'm unable to even get a visa appointment for several European countries, and my appointment for the US visa was scheduled 16 months out. I believe the situation is similar for visa applicants from other countries. The UK currently offers the smoothest process with timelines of only a few weeks. Conference organizers that seek applications from all over the world could choose the UK over other options.
