All of Aaron Bergman's Comments + Replies

I made a tool to play around with how alternatives to the 10% GWWC Pledge default norm might change:

  1. How much individuals are "expected" to pay
    1. The idea being that there are functions of income that people would prefer to the flat 10% pledge behind some relevant veil of ignorance, along the lines of "I don't want to commit 10% of my $30k salary, but I'd gladly commit 20% of my $200k salary" (see the sketch after this list)
  2. How much total donation revenue gets collected
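To make the "functions of income" idea concrete, here's a minimal sketch of the kind of alternative schedule the tool lets you compare against the flat 10% default. The bracket thresholds and marginal rates below are made-up illustrative numbers, not the tool's actual defaults:

```python
def pledge_amount(income: float) -> float:
    """Toy progressive pledge schedule: nothing below $30k, then marginal
    rates that rise with income. Illustrative numbers only."""
    brackets = [          # (threshold, marginal rate applied above it)
        (30_000, 0.05),
        (100_000, 0.10),
        (200_000, 0.20),
    ]
    pledge = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if income > lower:
            pledge += (min(income, upper) - lower) * rate
    return pledge

def flat_10_percent(income: float) -> float:
    """The current default norm, for comparison."""
    return 0.10 * income

for inc in (30_000, 60_000, 200_000, 500_000):
    print(f"${inc:,}: progressive ${pledge_amount(inc):,.0f} vs flat ${flat_10_percent(inc):,.0f}")
```

Summing a schedule like this over an assumed income distribution is what drives the second question, i.e. how much total donation revenue gets collected.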

     

There's some discussion at this Tweet of mine

Some folks pushed back a bit, citing the following:

  • The pledge isn't supposed to
... (read more)

New interview with Will MacAskill by @MHR🔸

Almost a year after the 2024 holiday season Twitter fundraiser, we managed to score a very exciting "Mystery EA Guest" to interview: Will MacAskill himself.

  • @MHR🔸 was the very talented interviewer and shrimptastic fashion icon
  • Thanks to @AbsurdlyMax🔹 for help behind the scenes
  • And of course huge thanks to Will for agreeing to do this

Summary, highlights, and transcript below video!

[Embedded video of the interview]

Summary and Highlights

(summary AI-generated) 

Effective Altruism has changed significantly since its inception. With the ... (read more)

4
Vasco Grilo🔸
Thanks for sharing, Aaron! Nitpick. The flyer says 2024 instead of 2025.
5
Toby Tremlett🔹
This slaps! Also kudos for raising so much! V cool.

I strongly endorse this and think that there are some common norms that stand in the way of actually-productive AI assistance.

  1. People don't like AI writing aesthetically
  2. AI reduces the signal value of text purportedly written by a human (i.e. because it might have been trivial to create and the "author" needn't even endorse each claim in the writing)

Both of these are reasonable but we could really use some sort of social technology for saying "yes, this was AI-assisted, you can tell, I'm not trying to trick anyone, but also I stand by all the claims made in the text as though I had done the token generation myself."

I think I'm more bullish on digital storage than you.

Most alignment work today exists as digital bits: arXiv papers, lab notes, GitHub repos, model checkpoints. Digital storage is surprisingly fragile without continuous power and maintenance.

SSDs store bits as charges in floating-gate cells; when unpowered, charge leaks, and consumer SSDs may start losing data after a few years. Hard drives retain magnetic data longer, but their mechanical parts degrade; after decades of disuse they often need clean-room work to spin up safely. Data centres depend on air-c

... (read more)

Wanted to bring this comment thread out to ask if there's a good list of AI safety papers/blog posts/urls anywhere for this?

(I think local digital storage in many locations probably makes more sense than paper but also why not both)

Lightcone and Alex Bores (so far)

Edit: to say a tiny bit more, LessWrong seems instrumentally good and important and rationality is a positive influence on EA. Lightcone doesn't have the vibes of "best charity" to me, but when I imagine my ideal funding distribution it is the immediate example of "most underfunded org" that comes to mind. Obviously related to Coefficient not supporting rationality community building anymore. Remember, we are donating on the margin, and approximately the margin created by Coefficient Giving!

Edit: And Animal Welfare Fund

Super cool! A bit hectic, and I substantively disagree with one of the "fallacies" the fallacy evaluator flagged on this post, but I'll definitely be using this going forward.

2
Ozzie Gooen
Thanks! I wouldn't take its takes too seriously, as it has limited context and seems to make a bunch of mistakes. It's more a thing to use to help flag potential issues (at this stage), knowing there's a false positive rate. 

Thanks for the highlight! Yeah I would love better infrastructure for trying to really figure out what the best uses of money are. I don't think it has to be as formal/quantitative as GiveWell. To quote myself from a recent comment (bolding added)

At some level, implicitly ranking charities [eg by donating to one and not another] is kind of an insane thing for an individual to do - not in an anti-EA way (you can do way better than vibes/guessing randomly) but in a "there must be better mechanisms/institutions for outsourcing donation advice than GiveWell an

... (read more)

I did something related but haven't updated it in a couple years! If there's a good collection of AI safety papers/other resources/anything anywhere it would be very easy for me to add it to the archive for people to download locally, or else I could try to collect stuff myself

List [not necessarily final!]

1. ClusterFree
2. Center for Reducing Suffering
3. Arthropoda Foundation
4. Shrimp Welfare Project
5. Effective Altruism Infrastructure Fund
6. Forethought Foundation
7. Wild Animal Initiative
8. Center for Wild Animal Welfare
9. Animal Welfare Fund
10. Aquatic Life Institute
11. Longview Philanthropy's Emerging Challenges Fund
12. Legal Impact for Chickens
13. The Humane League
14. Rethink Priorities
15. Centre for Enabling EA Learning & Research
16. MATS Research

Methodology

I used AI for advice (unlike last year) with Claude-Opus-4.5 and... (read more)

I do not accept premise 2:

For some small amount of intense suffering, there is always some sufficiently large amount of moderate suffering such that the intense suffering is preferable.

To be clear, I think this premise is one way of distilling and clarifying the (or 'a') crux of my argument and if I wind up convinced that the whole argument is wrong, it will probably be because I am convinced of premise 2 or something very similar 

2
MichaelDickens
I see, I took the chart under "The compensation schedule's structure" to imply that the Axiom of Continuity held for suffering, based on the fact that the X axis shows suffering measured on a cardinal scale. If you reject Continuity for suffering then I don't think your assumptions are self-contradictory.

Wow, this is super exciting and thanks so much to the judges! ☺️

An interesting dynamic around this competition was that the promise of the extremely cracked + influential judging team reading (and implicitly seriously considering) my essay was a much stronger incentive for me to write/improve it than the money (which is very nice don’t get me wrong).[1]

I’m not sure what the implications of this are, if any, but it feels useful to note this explicitly as a type of incentive that could be used to elicit writing/research in the future

  1. ^

    Insofar as I’m not total

... (read more)
4
Ben_West🔸
Congrats Aaron!

Interesting, thanks! I might actually sign up for the Arctic Archive thing! I don't see you mention m-discs like this - any reason for that?

Also, do you have any takes on how many physical locations a typical X is stored in, for various X?

X could be:

  • A wikipedia page
  • An EA Forum post
  • A YouTube video
  • A book that's sold 100/1k/10k/100k/1M copies
  • Etc
3
Yarrow Bouchard 🔸
M-Discs are certainly interesting. What's complicated is that the company that invented M-Discs, Millenniata, went bankrupt, and that has sort of introduced a cloud of uncertainty over the technology.

There is a manufacturer, Verbatim, with the license to manufacture discs using the M-Disc standard and the M-Disc branding. Some customers have accused Verbatim of selling regular discs with the M-Disc branding at a huge markup and this accusation could be completely wrong and baseless — Verbatim has denied it — but it's sort of hard to verify what's going on anymore.

If Millenniata were still around, they would be able to tell us for sure whether Verbatim is still complying properly with the M-Disc standard and whether we can rely on their discs. I don't understand the nuances of optical disc storage well enough to really know what's going on. I would love to see some independent third-party who has expertise in this area and who is reputable and trustworthy tell us whether the accusations against Verbatim are really just a big misunderstanding.

Millenniata's bankruptcy is an example of the unfortunate economics of archival storage media. Rather than pay more for special long-lasting media, it's far more cost-effective to use regular, short-term storage media — today, almost entirely hard drives — and periodically copy over the data to new media. This means the market for archival media is small.

As for how many physical locations digital data is kept in, that depends on what it is. The CLOCKSS academic archive keeps digital copies of 61.4 million academic papers and 550,000 books in 12 distinct physical locations. I don't know how Wikipedia does its backups, mirroring, or archiving internally, but every month an updated copy of the English Wikipedia is released that anyone can download. Given Wikipedia's openness, it is unusually well-replicated across physical locations, just considering the number of people who download copies.

I also don't know how the E

After thinking about this post ("Utilitarians Should Accept that Some Suffering Cannot be “Offset”") some more, there's an additional, weaker claim I want to emphasize, which is: You should be very skeptical that it’s morally good to bring about worlds you wouldn’t personally want to experience all of

We can imagine a society of committed utilitarians all working to bring about a very large universe full of lots of happiness and, in an absolute sense, lots of extreme suffering. The catch is that these very utilitarians are the ones that are going to be expe... (read more)

I disagree, but I think I know what you're getting at and am sympathetic. I made the following to try to illustrate, and I might add it to the post if it seems clarifying.

I made it on a whim just now without thinking too hard, so don't necessarily consider the graphical representation to be on as solid a footing as the stuff in the post.

"Diagram showing three overlapping circles representing different meanings of 'utilitarianism.' On the left, arguments/reasons 1 through n point via solid arrows to properties 1 through n (contained in a blue circle labeled 'Thing I'm talking about in the post and am calling "utilitarianism"'). From these properties, dashed arrows point to implications 1 through n on the right. A red circle labeled 'What utilitarians really care about' encompasses the arguments and properties. A yellow/orange circle labeled 'How "utilitarianism" gets used' encompasses the properties and implications. The solid arrows indicate 'Assume correct inference for the model,' while dashed arrows indicate 'Alleged inference but not necessarily ground truth.' The diagram illustrates that accepting core utilitarian properties doesn't logically require accepting all implications commonly attributed to utilitarianism."Retry
1
River
I take it "any bad can be offset by a sufficient good" is what you are thinking of as being in the yellow circle implications. And my view is that it is actually red circle. It might actually be how I would define utilitarianism, rather than your UC. What I am still really curious about is your motivation. Why do you even want to call yourself a utilitarian or an effective altruist or something? If you are so committed to the idea that some bads cannot be offset, then why don't you just want to call yourself a deontologist? I come to EA precisely to find a place where I can do moral reasoning and have moral conversations with other spreadsheet people, without running into this "some bads cannot be offset" stuff.

Thanks!

(Ideal): annihilation is ideally desirable in the sense that it's better (in expectation) than any other remotely realistic alternative, including <detail broadly utopian vision here>. (After all, continued existence always has some chance of resulting in some uncompensable suffering at some point.)

Yeah I mean on the first one, I acknowledge that this seems pretty counterintuitive to me but again just don't think it is overwhelming evidence against the truth of the view.

Perhaps a reframing is "would this still seem like a ~reductio conditional... (read more)

6
Richard Y Chappell🔸
Thanks for your reply! Working backwards...

On your last point, I'm fully on board with strictly decoupling intrinsic vs instrumental questions (see, e.g., my post distinguishing telic vs decision-theoretic questions). Rather, it seems we just have very different views about what telic ends or priorities are plausible. I give ~zero credence to pro-annihilationist views on which it's preferable for the world to end than for any (even broadly utopian) future to obtain that includes severe suffering as a component. Such pro-annihilationist lexicality strikes me as a non-starter, at the most intrinsic/fundamental/principled levels. By contrast, I could imagine some more complex variable-value/threshold approach to lexicality turning out to have at least some credibility (even if I'm overall more inclined to think that the sorts of intuitions you're drawing upon are better captured at the "instrumental heuristic" level).

On moral uncertainty: I agree that bargaining-style approaches seem better than "maximizing expected choiceworthiness" approaches. But then if you have over 50% credence in a pro-annihilationist view, it seems like the majority rule is going to straightforwardly win out when it comes to determining your all-things-considered preference regarding the prospect of annihilation.

Re: uncompensable monster: It isn't true that "orthodox utilitarianism also endorses this in principle", because a key part of the case description was "no matter what else happens to anyone else". Orthodox consequentialism allows that any good or bad can be outweighed by what happens to others (assuming strictly finite values). No one person or interest can ever claim to settle what should be done no matter what happens to others. It's strictly anti-absolutist in this sense, and I think that's a theoretically plausible and desirable property that your view is missing. I don't think it's helpful to focus on external agents imposing their will on others, because that's going to tr

Thanks, yeah I may have gotten slightly confused when writing.

1) VNM

[Wikipedia screenshot of the VNM Continuity axiom]

Let P be the thing I said in the post:

If A ≻ B ≻ C, there's some probability p ∈ (0, 1) where a guaranteed state of the world B is ex ante morally equivalent to "lottery p·A + (1-p)·C”

or, symbolically: there is some p ∈ (0, 1) such that B ∼ p·A + (1−p)·C,

and let Continuity be the VNM Continuity axiom as stated in the screenshot above.

I think  but not  in general.

So my writing was sloppy. Super good catch (not caught by any of the various LLMs iirc!)
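For reference, here is one way to write the two statements side by side; this is a paraphrase for comparison, not a quote from the post or from Wikipedia:

```latex
% P, the statement from the post (strict preferences, open interval):
A \succ B \succ C \;\Longrightarrow\; \exists\, p \in (0,1) \text{ such that } B \sim p\,A + (1-p)\,C

% VNM Continuity as standardly stated (weak preferences, closed interval):
A \succeq B \succeq C \;\Longrightarrow\; \exists\, p \in [0,1] \text{ such that } p\,A + (1-p)\,C \sim B
```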

But for the purposes of the... (read more)

I am getting really excellent + thoughtful comments on this (not just saying that) - I will mention here that I have highly variable and overall reduced capacity for personal reasons at the moment, so please forgive me if it takes a little while for me to respond (and in the meantime note that I have read the comments, so they're not being ignored) 🙂

Great points both and I agree that the kind of tradeoff/scenario described by @EJT and @bruce in his comment are the strongest/best/most important objections to my view (and the thing most likely to make me change my mind)

Let me just quote Bruce to get the relevant info in one place and so this comment can serve as a dual response/update. I think the fundamentals are pretty similar (between EJT and Bruce's examples) even though the exact wording/implementation is not:

A) 70 years of non-offsettable suffering, followed by 1 trillion happy huma

... (read more)

don’t bite the bullet in the most natural reading of this, where very small changes in i_s do only result in very small changes in subjective suffering from a subjective qualitative POV. Insofar as that is conceptually and empirically correct, I (tentatively) think it’s a counterexample that more or less disproves my metaphysical claim (if true/legit).

Going along with 'subjective suffering', which I think is subject to the risks you mention here, to make the claim that the compensation schedule is asymptotic (which is pretty important to your toplin... (read more)

8
Elliott Thornley (EJT)
Oops yes, fundamentals between my and Bruce's cases are very similar. Should have read Bruce's comment! The claim we're discussing - about the possibility of small steps of various kinds - sounds kinda like a claim that gets called 'Finite Fine-Grainedness'/'Small Steps' in the population axiology literature. It seems hard to convincingly argue for, so in this paper I present a problem for lexical views that doesn't depend on it. I sort of gestured at it above with the point about risk without making it super precise. The one-line summary is that expected welfare levels are finitely fine-grained.

Mostly for fun, I vibecoded an API to easily parse EA Forum posts as markdown, with full comment details, based on the post URL (I think it's helpful mostly for complex/nested comment sections where basic copy-and-paste doesn't work great)

I have tested it on about three posts and every possible disclaimer applies
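For anyone curious, a request would look roughly like the sketch below; the endpoint URL, parameter name, and response fields are placeholders I'm using for illustration, not the API's actual interface:

```python
# Hypothetical client call; the endpoint URL, parameter name, and response
# fields below are placeholders, not the real interface.
import requests

POST_URL = "https://forum.effectivealtruism.org/posts/<post-id>/<slug>"

resp = requests.get(
    "https://example-ea-forum-parser.example.com/parse",  # placeholder endpoint
    params={"url": POST_URL},
    timeout=30,
)
resp.raise_for_status()
data = resp.json()  # assumed shape: {"markdown": "...", "comments": [...]}

with open("post.md", "w", encoding="utf-8") as f:
    f.write(data["markdown"])
```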

Again I appreciate your serious engagement!

The positive argument for the metaphysical claim and the title of this piece relies (IMO) too heavily on a single thought experiment, that I don't think supports the topline claim as written.

Not sure what you mean by the last clause, and to quote myself from above: 

I don't expect to convince all readers, but I'd be largely satisfied if someone reads this and says: "You're right about the logic, right about the hidden premise, right about the bridge from IHE preferences to moral facts, but I would personally,

... (read more)
  1. I'm much more confident about the (positive wellbeing + suffering) vs neither trade than intra-suffering trades. It sounds right that something like the tradeoff you describe follows from the most intuitive version of my model, but I'm not actually certain of this; like maybe there is a system that fits within the bounds of the thing I'm arguing for that chooses A instead of B (with no money pumps/very implausible conclusions following)

Ok interesting! I'd be interested in seeing this mapped out a bit more, because it does sound weird to have BOS be offsett... (read more)

This is coming in handy, thanks!!

Who would win, a $3T company or one guy on the EA Forum? (answer: the latter)

I genuinely respect your attempts to find out the true answer here, but from my (relatively naive - I certainly haven't read all your stuff) POV, shouldn't the conclusion be much closer to "nobody knows a goddamn thing, don’t spend any money till we become more confident"

4
Vasco Grilo🔸
Thanks, Aaron. I think decreasing the uncertainty about the effects on soil animals, in particular, about whether soil nematodes have positive or negative lives, would be more cost-effective than funding HIPF. However, OP does not fund interventions targeting wild animals or invertebrates, so that is not a live option. In addition, my sense is that OP has historically found it difficult to spend as much as desired by its major funders, Dustin Moskovitz and Cari Tuna, so I believe they would not want to decrease their spending.

Assuming we're not radically mistaken about our own subjective experience, it really seems like pleasure is good for the being experiencing it (aside from any function or causal effects it may have).

In fact, pleasure without goodness in some sense seems like an incoherent concept. If a person were to insist that they felt pleasure but in no sense was this a good thing, I would say that they are mistaken about something, whether it be the nature of their own experience or the usual meaning of words.

Some people, I think, concede the above but want to object t... (read more)

2
LanceSBush
When you say “Assuming we’re not radically mistaken”...you’re using the term “we” as though you’re assuming I and others agree with you. But I don’t know if I agree with you, and there’s a good chance I don’t. What do you mean when you say that pleasure is good for the being experiencing it? For that matter, what do you mean by “pleasure”? If “pleasure” refers to any experience that an agent prefers, and for something to be good for something is for them to prefer it, then you’d be saying something I’d agree with: that any experiences an agent prefers are experiences that agent prefers. But if you’re not saying that, then I am not sure what you are saying.

I think there are facts about what is good according to different people’s stances. So my pleasure can be good according to my stance. But I do not think pleasure is stance-independently good.

What do you mean by “goodness”? I’m perfectly fine with saying that there are facts about what individuals prefer and consider good, but the fact that something is good relative to someone’s preferences does not entail that it is good simpliciter, good relative to my preferences, intrinsically good, or anything like that. The fact that this person is a “valid element of the world/universe” doesn’t change that fact.

What you’re saying doesn’t strike me so much as metaphysically spooky but as conceptually underdeveloped. I don’t think it’s clear (at least, not to me) what you mean when you refer to goodness. For instance, I cannot tell if you are arguing for some kind of moral realism or normative realism.

What would it mean for there to be “more badness” in world A? Again, it’s just not clear to me what you mean by the terms you are using.
1
Manuel Del Río Rodríguez 🔹
I think I concede that 'pleasure is good for the being experiencing it'. I don't think this leads to where you take it, though. It is good for me to eat meat, but probably it isn't good for the animal. But in the thought experiment you make, I prefer world A, where I'm eating bacon and the pig is dead, to world B, where the pig is feeling fine and I'm eating broccoli. You can't jump from what's good for one to what's good for many. But besides, granting that something is good for the one who experiences it feels a bit broad: the good for him doesn't make it into some law that must be obeyed, even for him/her. There are trade-offs with other desires, you might also want to consider (or not) long-term effects, etc... It also has no ontological status as 'the good', just as there is no Platonic form of 'the good' floating in Platonic heaven.

I'm continually unsure how best to label or characterize my beliefs. I recently switched from calling myself a moral realist (usually with some "but it's complicated" pasted on) to an "axiological realist."

I think some states of the world are objectively better than others, pleasure is inherently good and suffering is inherently bad, and that we can say things like "objectively it would be better to promote happiness over suffering"

But I'm not sure I see the basis for making some additional leap to genuine normativity; I don't think things like objective or... (read more)

1
Charlie_Guthmann
  I know lots of people who think some amount of suffering is good (and not just instrumentally for having more pleasure later). Is your claim here just that you somehow know that pleasure is inherently good? I think the belief you are describing is more accurately "I'm confident my subjective view won't change" or something like that. 
1
Daniel_Friedrich
I think objective ordering does imply "one should" so I subscribe to moral realism. However, recently I've been highly appreciating the importance of your insistence that the "should" part is kind of fake - i.e. it means something like "action X is objectively the best way to create most value from the point of view of all moral patients" but it doesn't imply that an ASI that figures out what is morally valuable will be motivated to act on it. (Naively, it seems like if morality is objective, there's basically a physical law formulated as "you should do actions with characteristics X". Then, it seems like a superintelligence that figures out all the physical laws internalizes "I should do X". I think this is wrong mainly because in human brains, that sentence deceptively seems to imply "I want to do x" (or perhaps "I want to want x") whereas it actually means "Provided I want to create maximum value from an impartial perspective, I want to do x". In my own case, the kind of argument for optimism around AI doom in the style that @Bentham's Bulldog advocated in Doom Debates seemed a bit more attractive before I truly spelled this out in my head.)

Alastair Norcross is a famous philosopher with similar views. Here's the argument I once gave him that seemed to convert him (at least on that day) to realism about normative reasons:

First, we can ask whether you'd like to give up your Value Realism in favour of a relativistic view on which there's "hedonistic value", "desire-fulfilment value", and "Nazi value", all metaphysically on a par.  If not -- if there's really just one correct view of value, regardless of what subjective standards anyone might arbitrarily endorse -- then we can raise the

... (read more)
5
Pablo
This is basically my view, and I think ‘axiological realism’ is a great name for it.
3
LanceSBush
Why do you think some states of the world are objectively better than others, or that pleasure is inherently good? I suppose I can go check out the podcast, but I'd be happy to have a discussion with you here.

I'm not an axiological realist, but it seems really helpful to have a term for that position, upvoted.

Broadly, and off-topic-ally, I'm confused why moral philosophers don't always distinguish between axiology (valuations of states of the world) and morality (how one ought to behave). People seem to frequently talk past each other for lack of this distinction. For example, they object to valuing a really large number of moral patients (an axiological claim) on the grounds that doing so would be too demanding (a moral claim). I first learned these terms from https://slatestarcodex.com/2017/08/28/contra-askell-on-moral-offsets/ which I recommend.

This is incredibly good and generous of you, but also I suspect that even on purely altruistic grounds it makes more sense to save the money for yourself and become slightly less risk averse as a result?

I don’t have a good model or rigorous justification for this, just an intuition 

6
Ebenezer Dukakis
Hypothesis: A big reason why organizations like Givewell exist is because developed currencies go further in developing countries -- but, it's hard for people in developed countries to know the best foreign orgs to give to. Givewell fills that gap by doing research and publicizing it. Insofar as that hypothesis is true, we should encourage EAs in developing countries to look for giving opportunities in their personal network, if good opportunities seem to exist there.

Here's another way of making the same argument:
* GiveDirectly does blanket cash transfers for entire communities.
* A hypothetical version of GiveDirectly which targets only the very neediest individuals, or only the most inspired entrepreneurs who will do the most to stimulate the local economy and reduce poverty, could be even more cost-effective. (IIRC, Givewell thinks most of the impact from their top charities comes from indirect "flow-through effects".)
* Sadly, targeting individual recipients isn't possible at the scale GiveDirectly operates at. But, targeting individual recipients does seem feasible for an individual African donor who has a strong local network.

Note also that GiveDirectly has lost many thousands of dollars to fraud? Presumably, fraud would be less of an issue for a savvy local donor.

---

I think this argument is weakest in areas where local knowledge doesn't help a lot for knowing what works. Even though Givewell is based in the US, for a while they were ranking US educational charities. Having a strong local network in the US doesn't necessarily help a ton for knowing which educational interventions work. However, I still think a "randomized" giving algorithm such as "if your friends say this school really helped their kid, donate to that school" might work quite well for a lot of small donors at scale.

I disagree: giving habits are important to cultivate early from a habit-formation perspective, even if you may be right from a dollar-utility perspective.

Important to consider though!

Was sent a resource in response to this quick take on effectively opposing Trump that at a glance seems promising enough to share on its own: 

From A short to-do list by the Substack Make Trump Lose Again:

  1. Friends in CA, AZ, or NM: Ask your governor to activate the national guard (...)
  2. Friends in NC: Check to see if your vote in the NC Supreme Court race is being challenged (...)
  3. Friends everywhere: Call your senators and tell them to vote no on HR 22 (...)
  4. Friends everywhere: If you’d like to receive personalized guidance on what opportunities are best su
... (read more)

Is there a good list of the highest leverage things a random US citizen (probably in a blue state) can do to cause Trump to either be removed from office or seriously constrained in some way? Anyone care to brainstorm?

Like the safe state/swing state vote swapping thing during the election was brilliant - what analogues are there for the current moment, if any?

7
Sanjay
This post (especially this section) explores this. There are also some ideas on this website. I've copied and pasted the ideas from that site below. I think it's written with a more international perspective, but likely has some overlap with actions which could be taken by Americans.
* Promoting free and fair elections, especially at the midterms
  * Several NGOs are well established as working on this, eg Common Cause works to reduce needless barriers to voting and stop gerrymandering, etc. Verified voting advocates for secure voting systems, and the Brennan Center for Justice researches and advocates for relevant policies.
* Enabling bravery of key individuals.
  * Example: Mike Pence was very brave in standing up to Trump and enabling a transition of power, and he has been vilified for this by Trump and his supporters.
  * Today, members of Congress don't always seem to stand up for what they believe in (eg not opposing controversial appointments such as RFK and Hegseth). Presumably they are concerned about threats made by Trump.
  * Unclear exactly what this intervention looks like (provide financial support? Or something else?)
* Consumer power and investor power
  * The boycott of Tesla is an obvious example of this, and Musk is clearly feeling the pain.
  * Further work could identify and assess the extent to which other large corporates are kowtowing to the Trump administration, so that consumers can make informed choices.
  * People who are members of pension schemes could write to the trustees asking them to divest from relevant corporates (Tesla being the obvious choice at this stage, this large scheme has already divested from Tesla). Furthermore, people could coordinate this activity.
* Support grassroots protests
  * MoveOn, Democracy forward, etc
  * Bail project
* Supporting free and balanced media
  * We need media sources which are critical of government.
  * Such media sources don't seem naturally set up to accept moderate sized d
4
Chakravarthy Chunduri
tl;dr: Getting Trump removed from office is not high-leverage and can be actively dangerous, since that only means JD Vance becomes President, and he and the Republicans will be politically obligated to dig in their heels and implement a Trump agenda. Instead, I think somehow getting millions of Americans to recognize Trump's gameplan (which I've summarized below), and getting them to panic-vote Democrats in the 2026 midterms is the highest leverage thing you can do.

Here's my $0.02, even though I don't know how to achieve a Dem victory. I think Trump is doing all of this with an eye on the 2026 midterms. I'll try to publish a more detailed write-up later if I'm able to, but all these arguably inane policy decisions that even well connected Trump supporters were blindsided by- are all things that the Trump administration is hoping will play well with the voters in the mid-term elections. Then more Trumpists get elected into the House, then that empowers Trump even more, considering he's already installed his vassal Mike Johnson as the leader of the House of Representatives.

I think that's the overall game plan. That's why he's had no compunction in walking decisions back as soon as they've generated news headlines. As plans go, it certainly is a cogent plan. It looks and feels like the Southern Strategy 2.0. Frankly I was expecting this sort of populist grandstanding from Bernie Sanders, Elizabeth Warren and the Democratic Socialists; and not this generation of Republicans.

It looks like Trump saw Bernie Sanders attempting Ralph Nader's political strategy and thought, "Huh. I could do that!" and so here we are today. That's why I personally dislike politicians who are not centrists. But I think there's a flaw in Trump's plan.

I'm not an American, but I'm guessing the reason previous presidents haven't done this kind of populism is because American voters (a) aren't that stupid, like, most American citizens have at least a high-school level education; and
4
MaxRa
(Just quick random thoughts.) The more that Trump is perceived as a liability for the party, the more likely they would go along with an impeachment after a scandal.
1. Reach out to Republicans in your state about your unhappiness about the recent behavior of the Trump administration.
2. Financially support investigative reporting on the Trump administration.
3. Go to protests?
4. Comment on Twitter? On Truth Social?
   1. It's possibly underrated to write concise and common sense pushback in the Republican Twitter sphere?

~30 second ask: Please help @80000_Hours figure out who to partner with by sharing your list of Youtube subscriptions via this survey

Unfortunately this only works well on desktop, so if you're on a phone, consider sending this to yourself for later. Thanks!

Sharing https://earec.net, semantic search for the EA + rationality ecosystem. Not fully up to date, sadly (doesn't have the last month or so of content). The current version is basically a minimum viable product!

On the results page there is also an option to see EA Forum-only results, which allows you to sort by a weighted combination of karma and semantic similarity, thanks to the API!

Final feature to note is that there's an option to have gpt-4o-mini "manually" read through the summary of each article on the current screen of results, which will give... (read more)
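Conceptually, the weighted karma + similarity sort is something like the sketch below; this is my paraphrase of the idea, and the function name, weights, and karma scaling are illustrative rather than the site's actual formula:

```python
import math

def combined_score(similarity: float, karma: int, weight: float = 0.7) -> float:
    """Blend semantic similarity (0-1, e.g. cosine similarity) with karma.
    Karma is log-scaled and capped at 1.0 so one huge post can't dominate.
    The 0.7/0.3 split is an arbitrary illustrative weight."""
    karma_component = math.log1p(max(karma, 0)) / math.log1p(1000)  # ~1.0 at 1000 karma
    karma_component = min(karma_component, 1.0)
    return weight * similarity + (1 - weight) * karma_component

# Example: rerank one page of search hits (made-up values)
hits = [
    {"title": "Post A", "similarity": 0.82, "karma": 15},
    {"title": "Post B", "similarity": 0.74, "karma": 400},
    {"title": "Post C", "similarity": 0.79, "karma": 60},
]
hits.sort(key=lambda h: combined_score(h["similarity"], h["karma"]), reverse=True)
for h in hits:
    print(h["title"], round(combined_score(h["similarity"], h["karma"]), 3))
```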

3
lroberts
I found this recently, just wanted to comment that it's been super helpful, thanks!

Christ, why isn’t OpenPhil taking any action, even making a comment or filing an amicus curiae?

I certainly hope there’s some legitimate process going on behind the scenes; this seems like an awfully good time to spend whatever social/political/economic/human capital OP leadership wants to say is the binding constraint.

And OP is an independent entity. If the main constraint is “our main funder doesn’t want to pick a fight,” well so be it—I guess Good Ventures won’t sue as a proper donor the way Musk is; OP can still submit some sort of non-litigant comment. Naively, at least, that could weigh non trivially on a judge/AG

[warning: speculative]

As potential plaintiff: I get the sense that OP & GV are more professionally run than Elon Musk's charitable efforts. When handing out this kind of money for this kind of project, I'd normally expect them to have negotiated terms with the grantee and memorialized them in a grant agreement. There's a good chance that agreement would have a merger clause, which confirms that (e.g.) there are no oral agreements or side agreements. Attorneys regularly use these clauses to prevent either side from getting out of or going beyond the nego... (read more)

9
NickLaing
I agree this is absurd; this is probably the most obvious action Open Phil has not taken. What do they have to lose at this stage by filing a lawsuit or, at the very least, like you say, making an official comment? Perhaps EAs and EA orgs are just by nature largely allergic to open public conflict, even if it has decent potential to do good?

Reranking universities by representation in the EA Survey per undergraduate student, which seems relevant to figuring out which community-building (CB) strategies are working (obviously plenty of confounders). Data is from one minute of googling + LLMs, so take it with a grain of salt.
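The reranking itself is just a per-capita division and sort, roughly like this (the column names and numbers below are made up for illustration, not the actual data):

```python
import pandas as pd

# Illustrative numbers only; the real inputs were EA Survey respondent counts
# plus googled undergraduate enrollment figures.
df = pd.DataFrame({
    "university": ["University A", "University B", "University C"],
    "ea_survey_respondents": [60, 25, 5],
    "undergrad_enrollment": [12_000, 40_000, 1_800],
})

df["respondents_per_1k_undergrads"] = (
    df["ea_survey_respondents"] / df["undergrad_enrollment"] * 1_000
)
print(df.sort_values("respondents_per_1k_undergrads", ascending=False))
```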

There does seem to be a moderate positive correlation here so nothing shocking IMO.

Same chart as above but by original order 

4
David_Moss
Thanks for putting this together! It's good to confirm the overall pattern. I would note that (as we observed in the case of per capita figures for small countries), per capita figures for small colleges risk being very noisy (e.g. a very small number of respondents can make a very big difference), so I would be very cautious drawing inferences about any individual college.

Offer subject to me arbitrarily stopping at some point (not sure exactly how many I'm willing to do).

Give me chatGPT Deep Research queries and I'll run them. My asks are that:

  1. You write out exactly what you want the prompt to be so I can just copy and paste something in
  2. Feel free to request a specific model (I think the options are o1, o1-pro, o3-mini, and o3-mini-high) but be ok with me downgrading to o3-mini
  3. Be cool with me very hastily answering the inevitable set of follow-up questions that always get asked (seems unavoidable for whatever reason). I might say something like "all details are specified above; please use your best judgement"
5
Neel Nanda
Note that the UI is atrocious. You're not using o1/o3-mini/o1-pro etc. It's all the same model, a variant of o3, and the model in the bar at the top is completely irrelevant once you click the deep research button. I am very confused why they did it like this https://openai.com/index/introducing-deep-research/

I have a weird amount of experience with the second thing (dealing with formatted PDFs) and may be able to help - feel free to DM!

I’ll just highlight that it seems particularly cruxy whether to view such NDAs as covenants or contracts that are not intrinsically immoral to break

It’s not obvious to me that it should be the former, especially when the NDA comes with basically a monetary incentive for not breaking

5
Holly Elmore ⏸️ 🔸
If the supposed justification for taking these jobs is so that they can be close to what's going on, and then they never tell and (I predict) get no influence on what the company does, how could this possibly be the right altruistic move?

Here is a kinda naive LLM prompt you may wish to use for inspiration and iterate on:

“List positions of power in the world with the highest ratio of power : difficulty to obtain. Focus only on positions that are basically obtainable by normal arbitrary US citizens and are not illegal or generally considered immoral

I’m interested in positions of unusually high leverage over national or international systems”
 

It’s personal taste, but for me the high (if implicit) standards - not only in reasoning quality but also, as you say, formality (and I’d add comprehensiveness/covering all your bases) - are a much bigger disincentive to posting than a dry/serious tone (which maybe I just don’t mind a ton).

I’m not even sure this is bad; possibly lower standards would be worse all things considered. But still, it’s a major disincentive to publishing.

@MHR🔸 @Laura Duffy, @AbsurdlyMax and I have been raising money for the EA Animal Welfare Fund on Twitter and Bluesky, and today is the last day to donate!

If we raise $3k more today I will transform my room into an EA paradise complete with OWID charts across the walls, a literal bednet, a shrine, and more (and of course post all this online)! Consider donating if and only if you wouldn't use the money for a better purpose! 

 

See some more fun discussion and such by following the replies and quote-tweets here

3
KarolinaSarek🔸
That's amazing; thank you for this initiative and fundraising for the EA AWF! 

I was hoping he’d say so himself, but @MathiasKB (https://forum.effectivealtruism.org/users/mathiaskb) is our lead!

But I think you’re basically spot-on; we’re like a dozen people in a Slack, all with relatively low capacity for various reasons, trying to bootstrap a legit organization.

The “bootstrap” analogy is apt here because we are basically trying to hire the leadership/managerial and operational capacity that is generally required to do things like “run a hiring round,” if that makes any sense.

So yeah, the idea is volunteers run a hiring round, and my sen... (read more)

To expand a bit on the funding point (and speaking for myself only):

I’d consider the $15k-$100k range what makes sense as a preliminary funding round, taking into account the high opportunity cost of EA animal welfare funding dollars. This is to say that I think SFF could in fact use much more than that, but the merits and cost effectiveness of the project will be a lot clearer after spending this first $100k; it is in large part paying for value of information.

Again speaking for myself only, my inside view is that the $100k figure is too low of an upper bound for preliminary funding; maybe I’d double it.

Speaking for myself (not other coauthors), I agree that $15k is low and would describe that as the minimum plausible amount to hire for the roles described (in part because of the willingness of at least one prospective researcher to work for quite cheap compared to what I perceive as standard among EA orgs, even in animal welfare).

IIRC I threw the $100k amount out as a reasonable amount we could ~promise to deploy usefully in the short term. It was a very hasty BOTEC-type take by me: something like $30k for the roles described + $70k for a full-time project lead.

Thanks Aaron! I think I'm now a bit confused what a prospective funder would be funding.

Is it something like, the volunteer group would run a hiring round (managed by anyone in particular?) for a part-time leader (maybe someone in the group?), but no one specifically has raised their hand for this? And then perhaps that person could deploy some of the $15k to hire a research associate if they'd like?

I respect that this is an early stage idea y'all are just trying to get started / don't have all the details figured out yet, just trying to understand (mostly for the sake of any prospective funders) who they would be betting on etc. :)

~All of the EV from the donation election probably comes from nudging OpenPhil toward the realization that they're pretty dramatically out of line with "consensus EA" in continuing to give most marginal dollars to global health. If this was explicitly thought through, brilliant.

[Table: my categorization of Open Philanthropy grants by cause area]

(See this comment for sourcing and context on the table, which was my attempt to categorize all OP grants not too long ago)

3
Jason
I'd update significantly more in that direction if the final outcomes for the subset of voters with over X karma (1000? 2000? I dunno) were similar to the current all-voter data. I say that not because I think only medium-plus karma voters have value, but because it's the cleanest way I can think of to mitigate the risk that the results have been affected by off-Forum advocacy and organizing. Those efforts have been blessed by the mods within certain bounds, but the effects of superior get-out-the-vote efforts are noise insofar as determining what the "consensus EA" is, and the resulting electorate may be rather unrepresentative. In contrast, the set of medium-plus karma voters seems more likely to be representative of the broader community's thinking regarding cause areas. (If there are other voter characteristics that could be analyzed and would be expected to be broadly representative, those would be worth looking at too.)

For example, it seemed fairly clear to me that animal-advocacy folks were significantly more on the ball in claiming funds during Manifund's EA Community Choice event than other folks. This makes sense given how funding constrained animal advocacy is. So the possibility that something similar could be going on here caps how much I'd be willing to update on the current data.
8
ethai
hmm not sure it's fair to make claims about what "consensus EA" believes based on the donation election honestly
* "consensus EA" seems like it is likely to be something other than "people who are on the Forum between Nov 18 and Dec 3"
* I only pay attention to climate, but as a cause area it tends to be more prominent in "EA-wide" surveys/giving than it is among the most highly-engaged EAs (Forum readers)[1]
* people are literally voting based on what OP is not funding

1. ^ I didn't even vote for GG bc I know it won't win, but it does warm my cold dead heart that four whole people did

Re: a recent quick take in which I called on OpenPhil to sue OpenAI: a new document in Musk's lawsuit mentions this explicitly (page 91)

6
Jason
Interesting lawsuit; thanks for sharing! A few hot (unresearched, and very tentative) takes, mostly on the Musk contract/fraud type claims rather than the unfair-competition type claims related to x.ai:
1. One of the overarching questions to consider when reading any lawsuit is that of remedy. For instance, the classic remedy for breach of contract is money damages . . . and the potential money damages here don't look that extensive relative to OpenAI's money burn.
2. Broader "equitable" remedies are sometimes available, but they are more discretionary and there may be some significant barriers to them here. Specifically, a court would need to consider the effects of any equitable relief on third parties who haven't done anything wrongful (like the bulk of OpenAI employees, or investors who weren't part of an alleged conspiracy, etc.), and consider whether Musk unreasonably delayed bringing this lawsuit (especially in light of those third-party interests). On hot take, I am inclined to think these factors would weigh powerfully against certain types of equitable remedies.
   1. Stated more colloquially, the adverse effects on third parties and the delay ("laches") would favor a conclusion that Musk will have to be content with money damages, even if they fall short of giving him full relief.
   2. Third-party interests and delay may be less of a barrier to equitable relief against Altman himself.
3. Musk is an extremely sophisticated party capable of bargaining for what he wanted out of his grants (e.g., a board seat), and he's unlikely to get the same sort of solicitude on an implied contract theory that an ordinary individual might. For example, I think it was likely foreseeable in 2015 to January 2017 -- when he gave the bulk of the funds in question -- that pursuing AGI could be crazy expensive and might require more commercial relationships than your average non-profit would ever consider. So I'd be hesitant to infer much in the way of implied-contractual

A little while ago I posted this quick take: 

I didn't have a good response to @DanielFilan, and I'm pretty inclined to defer to orgs like CEA to make decisions about how to use their own scarce resources. 

At least for EA Global Boston 2024 (which ended yesterday), there was the option to pay a "cost covering" ticket fee (of what I'm told is $1000).[1]

All this is to say that I am now more confident (although still <80%) that marginal rejected applicants who are willing to pay their cost-covering fee would be good to admit.[2]

In part this stems ... (read more)

[This comment is no longer endorsed by its author]
4
NickLaing
I agree. One minor issue with your "low bar" is the giving-10-percent part. Giving this much is extremely uncommon for any cause, so for me it might be more of a "medium bar" ;)
2
Jason
Would this (generally) be a one-time deal? The idea that some people would benefit from a bolus of EA as a "break-in point" or "on-ramp" seems plausible, and willingness to pay a hefty admission fee / other expenses would certainly have a signaling value.[1] However, the argument probably gets weaker after the first marginal admission (unless the marginal applicant is a lot closer to the line on the second time around).

Maybe allowing only one marginal admission per person absent special circumstances would mitigate concerns about "diluting" the event.

1. ^ I recognize the downsides of a pay-extra-to-attend approach as far as perceived fairness, equity, accessibility to people from diverse backgrounds, and so on. That would be a tradeoff to consider.
5
harfe
I am under the impression that EAGx can be such a break-in point, and has lower admission standards than EAG. In particular, there is EAGxVirtual (Applications are open!). Has the rejected person you are thinking of applied to any EAGx conference?