All of Aaron Bergman's Comments + Replies

Automated interface between Twitter and the Forum (eg a bot that, when tagged on twitter, posts the text and image of a tweet on Quick Takes and vice versa)

2
Ben Millwood
4d
on its own quick takes? controllable by anyone? or do you authorise it to post on your own quick takes? (full disclosure, I don't personally use twitter so I doubt I'll do this, but maybe it's useful to you to clarify)

I’d be very surprised if you can’t get a job that pays much more than the sub teacher role - the gap between that and ~any EA org job is massive, and inability to get the latter is only very weak evidence of inability to earn more.

Sorry if I missed this, but this does depend a lot on location/willingness to move. The above assumes you’re in the US and willing to move cities.

Also, living frugally to donate more is of course very virtuous if you take your salary to be a given, but from an altruistic perspective, insofar as they trade off, it’s probably much ... (read more)

3
Elijah Persson-Gordon
5d
I agree! I think if I moved I'd have better luck.

Random sorta gimmicky AI safety community building idea: tabling at universities but with a couple of laptops signed into Claude Pro with different accounts. Encourage students (and profs) to try giving it some hard question from eg a problem set and see how it performs. Ideally have a big monitor for onlookers to easily see.

Most college students are probably still using ChatGPT-3.5, if they use LLMs at all. There’s a big delta now between that and the frontier.

I have a vague fear that this doesn't do well on the 'try not to have the main net effect be AI hypebuilding' heuristic.

I made a custom GPT that is just normal, fully functional ChatGPT-4, but I will donate any revenue this generates[1] to effective charities. 

Presenting: Donation Printer 

  1. ^

    OpenAI is rolling out monetization for custom GPTs:

    Builders can earn based on GPT usage

    In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

Yeah you're right, not sure what I missed on the first read

This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.

[This comment is no longer endorsed by its author]
4
Charles Dillon
4mo
I don't understand why you think this is the case. If you think of the "distribution of grants given" as a sum of multiple different distributions (e.g. upskilling, events, and funding programmes) of significantly varying importance across cause areas, then more or less dropping the first two would give your overall distribution a very different shape.

Yeah but my (implicit, should have made explicit lol) question is “why is this the case?”

Like at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (eg older field -> organizations are better developed) but they’d all be post hoc.

6
Charles Dillon
4mo
I think getting enough people interested in working on animal welfare has not usually been the bottleneck, relative to money to directly deploy on projects, which tend to be larger.

Interesting that the Animal Welfare Fund gives out so few small grants relative to the Infrastructure and Long Term Future funds (Global Health and Development has only given out 20 grants, all very large, so seems to be a more fundamentally different type of thing(?)). Data here.

A few stats:

  • The 25th percentile AWF grant was $24,250, compared to $5,802 for Infrastructure and $7,700 for LTFF (and the medians show basically the same pattern).
  • AWF has made just nine grants of less than $10k, compared to 163 (Infrastructure) and 132 (LTFF).

Proportions under $threshold... (read more)
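In case it's useful, here's a minimal sketch of how stats like these could be computed from a CSV export of the grants data (the filename and the `fund`/`amount_usd` column names are hypothetical placeholders, not the actual export schema):

```python
# Sketch: per-fund grant-size stats from a hypothetical CSV export.
import pandas as pd

grants = pd.read_csv("ea_funds_grants.csv")  # hypothetical filename

for fund, group in grants.groupby("fund"):
    amounts = group["amount_usd"]
    print(
        f"{fund}: 25th pct = ${amounts.quantile(0.25):,.0f}, "
        f"median = ${amounts.median():,.0f}, "
        f"grants under $10k = {(amounts < 10_000).sum()}"
    )
```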

4
MHR
4mo
Very interesting, thanks for pulling this data!
7
Jason
4mo
This is not surprising to me given the different historical funding situations in the relevant cause areas, the sense that animal-welfare and global-health work is not talent-constrained as much as funding-constrained, and the clearer presence of strong orgs in those areas with funding gaps. For instance:

  • There are 15 references to "upskill" (or variants) in the list of microgrants, and it's often hard to justify an upskilling grant in animal welfare given the funding gaps in good, shovel-ready animal-welfare projects.
  • Likewise, 10 references to "study," 12 to "development," 87 to "research" (although this can have many meanings), 17 for variants of "fellow," etc.
  • There are 21 references to "part-time," and relatively small, short blocks of time may align better with community building and small research projects than with (e.g.) running a corporate campaign.
5
Charles Dillon
4mo
Seems pretty unsurprising - the animal welfare fund is mostly giving to orgs, while the others give to small groups or individuals for upskilling/outreach frequently.

In their most straightforward form (“foundation models”), language models are a technology which naturally scales to something in the vicinity of human-level (because it’s about emulating human outputs), not one that naturally shoots way past human-level performance

  • i.e. it is a mistake-in-principle to imagine projecting out the GPT-2—GPT-3—GPT-4 capability trend into the far-superhuman range

Surprised to see no pushback on this yet. I do not think this is true; I've come around to thinking that Eliezer is basically right that the limit of next token predict... (read more)

Sorry, I think you're reading me as saying something like "language models scaled naively up don't do anything superhuman"? Whereas I'm trying to say something more like "language models scaled naively up break the trend line in the vicinity of human level, because the basic mechanism for improved capabilities that they had been using stops working, so they need to use other mechanisms (which probably move a bit slower)".

If you disagree with that unpacking, I'm interested to hear it. If you agree with the unpacking and think that I've done a bad job summar... (read more)

0
[comment deleted]
4mo

For others considering whether/where to donate: RP is my current best guess of "single best charity to donate to all things considered (on the margin - say up to $1M)."

FWIW I have a manifold market for this (which is just one source of evidence - not something I purely defer to. Also I bet in the market so grain of salt etc). 

Strongly, strongly, strongly agree. I was in the process of writing essentially this exact post, but am very glad someone else got to it first. The more I thought about it and researched, the more it seemed like convincingly making this case would probably be the most important thing I would ever have done. Kudos to you.

A few points to add

  1. Under standard EA "on the margin" reasoning, this shouldn't really matter, but I analyzed OP's grants data and found that human GCR has been consistently funded 6-7x more than animal welfare (here's my tweet thread this i
... (read more)

I analyzed OP's grants data

FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.

I also made these interactive plots which summarise all EA funding:
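If anyone wants to replicate this locally, a rough sketch of the kind of summary/pivot the spreadsheet constructs, assuming a CSV download of OP's grants database (the filename and column names here are guesses, not the real schema):

```python
# Sketch: total OP grant dollars by focus area and year, from a hypothetical CSV.
import pandas as pd

grants = pd.read_csv("op_grants.csv")  # hypothetical filename
grants["year"] = pd.to_datetime(grants["date"]).dt.year

pivot = grants.pivot_table(
    values="amount", index="focus_area", columns="year",
    aggfunc="sum", fill_value=0,
)
print(pivot)
```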

[On mobile; sorry for the formatting]

Given my quick read and especially the bit below, it seems like the title is at least a bit misleading.

Quote: “To be clear: this document is not a detailed vindication of any particular class of philanthropic interventions. For example, although we think that contractualism supports a sunnier view of helping the global poor than funding x-risk projects, contractualism does not, for all our argument implies, entail that many EA-funded global poverty interventions are morally preferable to all other options (some of which... (read more)

5
Bob Fischer
6mo
Thanks for this, Aaron. Fair point. A more accurate title would be something like: "If Scanlonian contractualism is true, then between Emma Curran's work on the ex post version of the view and this post's focus on the ex ante version, it's probably true that when we have duties to aid distant strangers, we ought to discharge them by investing in high impact, high confidence interventions like AMF." 

LessWrong has a new feature/type of post called "Dialogues". I'm pretty excited to use it, and hope that if it seems usable, reader friendly, and generally good the EA Forum will eventually adopt it as well.

I'm interested in supporting this financially (that sounds like something a rich person would say so I should clarify this would not be a ton of money lol) and possibly in other ways as well (e.g., helping set up a website)

At least some chance of a less terrible death later, no? I'm really not sure what the distribution of causes of death looks like for different types of wild animal hosts

New fish data with estimated individuals killed per country/year/species  (super unreliable, read below if you're gonna use!) 

That^ is too big for Google Sheets, so here's the same thing just without a breakdown by country that you should be able to open easily if you want to take a look.

Basically the UN data generally used for tracking/analyzing the amount of fish and other marine life captured/farmed and killed only tracks the total weight captured for a given country-year-species (or group of species). 

I had ChatGPT-4 provide estimated lo... (read more)
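For anyone curious, the core conversion is just total capture weight divided by an estimated mean individual weight. A minimal sketch (the species weights below are made-up placeholders, not the ChatGPT-4 estimates I actually used):

```python
# Sketch: convert UN capture tonnage to estimated individuals killed.
TONNE_IN_GRAMS = 1_000_000

mean_weight_g = {  # placeholder per-species mean individual weights (grams)
    "Peruvian anchoveta": 25,
    "Atlantic herring": 230,
}

def estimate_individuals(species: str, capture_tonnes: float) -> float:
    """Estimated individuals = total capture weight / mean individual weight."""
    return capture_tonnes * TONNE_IN_GRAMS / mean_weight_g[species]

print(f"{estimate_individuals('Peruvian anchoveta', 7_000_000):.2e}")  # ~2.8e11
```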

Good point, and I'll throw out The Humane League as one specific recipient of money. 

Farmed animal welfare is politically controversial in a way that GiveWell is not. This is potentially bad:

Is OpenPhil's current support of farmed animal welfare politically controversial? I don't get that sense but, if so, among who?

Maybe people who don't care about farmed animals are correct

Sure but same goes for literally everything, including eg AMF being net positive. Happy to discuss object level though.

Farmed animal advocacy is so cost-effective because, if succ

... (read more)
1
Jonathan Paulson
6mo
Is there a cost-effectiveness analysis that takes these costs into account? I don't think I've seen one.

I’ve argued this largely on Twitter, but it seems pretty clear to me that no marginal dollars at all, at least up to say $1B, should in fact be going to the GiveWell portfolio (or similar charities for that matter). I don’t think it’s obvious what the alternative should be, but do think that (virtually) no well informed person trying to allocate a marginal dollar most ethically would conclude that GiveWell is the best option.

I feel like this/adjacent debates often gets framed as “normal poverty stuff vs weird longtermist stuff” but a lot of my confidence i... (read more)

Thanks for pointing that out, Aaron!

I feel like this/adjacent debates often gets framed as “normal poverty stuff vs weird longtermist stuff” but a lot of my confidence in the above comes from farmed animal welfare strictly dominating GiveWell in terms of any plausibly relevant criteria save for maybe PR.

I do not agree with the "any plausibly relevant criteria" part. However, I do think the best interventions to help farmed animals increase welfare way more cost-effectively than GiveWell's top charities. Some examples illustrating this:

... (read more)

What specifically in farmed animal welfare do you think beats GiveWell? (GiveWell is a specific thing you can actually donate money to; "farmed animal welfare" is not)

Farmed animal welfare is politically controversial in a way that GiveWell is not. This is potentially bad:
- Maybe people who don't care about farmed animals are correct
- Farmed animal advocacy is so cost-effective because, if successful, it forces other people (meat consumers? meat producers?) to bear the costs of treating animals better. I'm less comfortable spending other people's money to ... (read more)

a lot of my confidence in the above comes from farmed animal welfare strictly dominating GiveWell in terms of any plausibly relevant criteria save for maybe PR

Well some people might have ethical views or moral weights that are extremely favourable to people-focused interventions.

Or people could really value certainty of impact, and the evidence base could lead them to be much more confident that marginal donations to GiveWell charities have a counterfactual impact than marginal donations to animal welfare advocacy orgs.

FWIW I'm more likely to donate to ani... (read more)

[Epistemic status: unsure how much I believe each response but more pushing back against that "no well informed person trying to allocate a marginal dollar most ethically would conclude that GiveWell is the best option."]

  1. I think worldview diversification can diversify to a worldview that is more anthropocentric and less scope sensitive across species/not purely utilitarian. This would directly change the split with farmed animal welfare.
  2. There's institutional and signalling value in showing that OpenPhil is willing to stand behind long commitments. This can
... (read more)

According to Kevin Esvelt on the recent 80,000 Hours podcast (excellent btw, mostly on biosecurity), eliminating the New World screwworm could be an important farmed animal welfare (it infects livestock), global health (it infects humans), development (it hurts economies), and science/innovation intervention, and most notably a quasi-longtermist wild animal suffering intervention. 

More, if you think there’s a non-trivial chance of human disempowerment, societal collapse, or human extinction in the next 10 years, this would be important to do ASAP because we may... (read more)

EAG(x)s should have a lower acceptance bar. I find it very hard to believe that accepting the marginal rejectee would be bad on net.

Are you factoring in that CEA pays a few hundred bucks per attendee? I'd have a high-ish bar to pay that much for someone to go to a conference myself. Altho I don't have a good sense of what the marginal attendee/rejectee looks like.

3
Chris Leong
8mo
What is the acceptance bar?

How right now is "right now"? Like would giving $100 literally this moment be worth $105 given in a week? A month? 

Just looking for something super approximate, especially a rough time horizon where $1 now ≈ $1 then

6
Linch
8mo
It's very hard/confusing for me to think of an exact number, in part because the very existence of this public announcement and public comments probably changes the relevant numbers.

Suppose the counterfactual for this post is that we wait for November to make a "normal" end-of-year fundraising post, and during that time we make do with an income stream similar to donations to us in the past few months (~100k/month). If we are honest about our funding needs in November (likely still very high), I expect say ~1-2m of donations to us from people's end-of-year donation budgets (3-5.5m including Open Phil matching). In that world, because of sharply diminishing returns, I'd likely prefer 10k additional now (30k including Open Phil matching) to 20k additional in December (60k including OP matching).

But the very existence of this post means we aren't living in that world, as (hopefully) donors with far lower opportunity cost of money will generously donate to us now to ameliorate such gaps. So the whole thing leaves me pretty confused.

Anyway, I will not encourage giving money to us now if the urgency imposes significant hardship on your end (beyond the level you reflectively endorse for donations in general). If you are a large (>50k?) donor faced very concretely with the option of giving us $X now vs $X * Y later (I gave the example of tax reasons below), feel free to ping Caleb or me. We can discuss together what makes the most sense, and also (if necessary; I also need to check with ops) EA Funds can borrow against such promises and/or make conditional grants to grantees.
2
calebp
8mo
I’m not confident and would encourage other fund managers to weigh in here. I’d guess that $100 now is similarly useful to us as $140 in 3 months and something like $350 in six months time after the OP matching runs out. These numbers aren’t very resilient and are mostly my gut impression.

Somewhere in languagespace, there should be a combination of ~50-200 words that 1) successfully convinces >30% of people that Wild Animal Welfare is really important, and then 2) makes them realize that the society they grew up in is confused, ill, and deranged. A superintelligence could generate this.

I don't think this is true, at least taking "convinces" to mean something more substantial than, say, marking the box for "yeah WAS is important" on a survey given immediately after reading. 

It's not at all obvious to me that marginal carbon actually cashes out as bad even in expectation.

Eh I'm not actually sure how bad this would be. Of course it could be overdone, but a post's author is its obvious best advocate, and a simple "I think this deserves more attention" vote doesn't seem necessarily illegitimate to me

I think the proxy question is “after what period of time is it reasonable to assume that any work building or expanding on the post would have been published?” and my intuition here is about 1 year, but I would be interested in hearing others’ thoughts

I went ahead and made an "Evergreen" tag as proposed in my quick take from a while back: 

Meant to highlight that a relatively old post (perhaps 1 year or older?) still provides object-level value to read, i.e., above and beyond:

  1. Its value as a cultural or historical artifact
  2. The value of more recent work it influenced or inspired
4
Larks
9mo
Hopefully people will be sparing in applying it to their own recent posts!
6
quinn
9mo
cool, but I don't think a year is right. I would have said 3 years. 

What are some questions you hope someone’s gonna ask that seem relatively unlikely to get asked organically?

Bonus: what are the answers to those questions?

8
Peter Wildeford
9mo
Honestly I love this question but I got asked a lot of real questions that I think were varied and challenging, so right now I don't currently feel like I need even more!

Aside from RP, what is your best guess for the org that is morally best to give money to?

I feel a lot of cluelessness right now about how to work out cross-cause comparisons and what decision procedures to use. Luckily we hired a Worldview Investigations Team to work a lot more on this, so hopefully we will have some answers soon.

In the meantime, I currently am pretty focused on mitigating AI risk due to what I perceive as both an urgent and large threat, even among other existential risks. And contrary to last year, I think AI risk work is actually surprisingly underfunded and could grow. So I would be keen to donate to any credible AI r... (read more)

Idea/suggestion: an "Evergreen" tag, for old (6 months? 1 year? 3 years?) posts (comments?), to indicate that they're still worth reading (to me, ideally for their intended value/arguments rather than as instructive historical/cultural artifacts)

As an example, I'd highlight Log Scales of Pleasure and Pain, which is just about 4 years old now.

I know I could just create a tag, and maybe I will, but want to hear reactions and maybe generate common knowledge.

6
Nathan Young
9mo
I think we want someone to push them back into the discussion.  Or you know, have editable wiki versions of them.

Thanks! Let me write them as a loss function in python (ha)

For real though:

  • Some flavor of hedonic utilitarianism
    • I guess I should say I have moral uncertainty (which I endorse as a thing) but eh I'm pretty convinced
  • Longtermism as explicitly defined is true
    • Don't necessarily endorse the cluster of beliefs that tend to come along for the ride though
  • "Suffering focused total utilitarian" is the annoying phrase I made up for myself
    • I think many (most?) self-described total utilitarians give too little consideration/weight to suffering, and I don't think it really
... (read more)
4
BrownHairedEevee
9mo
I was inspired to create this market! I would appreciate it if you weighed in. :)

Some shrinsight (shrimpsight?) from the comments:

I'm pretty happy with how this "Where should I donate, under my values?" Manifold market has been turning out. Of course all the usual caveats pertaining to basically-fake "prediction" markets apply, but given the selection effects of who spends mana on an esoteric market like this, I put non-trivial weight on the (live) outcomes.

I guess I'd encourage people with a bit more money to donate to do something similar (or I guess defer, if you think I'm right about ethics!), if just as one addition to your portfolio of donation-informing considerations.

4
BrownHairedEevee
10mo
This is a really interesting idea! What are your values, so I can make an informed decision?
4
Aaron Bergman
10mo
Some shrinsight (shrimpsight?) from the comments:

Even given no electricity, copies stored physically on e.g. a flash drive or hard drive would persist until electricity could be supplied, I'm almost certain

Just chiming in to say I have a similar situation, although less extreme. Was vegan for 4 years and eventually concluded it wasn’t sustainable or realistic for me. Main animal products I buy are grass fed beef, grass fed whey protein, eggs from brands that at least go to decent lengths to make themselves seem non-horrible (3rd party humane certified, outdoor access) and a bit of conventional dairy (cheese, butter). I’d be lying if I said I’ve never bought anything “worse” than those, though.

I’ve definitely thought about this and short answer: depends on who “we” is.

A sort of made up particular case I was imagining is “New Zealand is fine, everywhere else totally destroyed” because I think it targets the general class of situation most in need of action (I can justify this on its own terms but I’ll leave it for now)

In that world, there’s a lot of information that doesn't get lost: everything stored in the laptops and servers/datacenters of New Zealand (although one big caveat and the reason I abandoned the website is that I lost confidence tha... (read more)

I have only a vague idea what this means but yeah, whatever facilitates access/storage. Is there anything I should do?

6
RomanHauksson
10mo
I can look into how to set up a torrent link tomorrow and let you know how it goes!

It’s actually been a little while since I made it, but places most likely to both (1) not be direct targets of a nuclear attack and (2) be uncorrelated with the fates of major datacenters plausibly holding the information currently

I tried making a shortform -> Twitter bot (ie tweet each new top level ~quick take~) and long story short it stopped working and wasn't great to begin with.

I feel like this is the kind of thing someone else might be able to do relatively easily. If so, I and I think much of EA Twitter would appreciate it very much! In case it's helpful for this, a quick takes RSS feed is at https://ea.greaterwrong.com/shortform?format=rss
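To give a sense of scale, the whole thing can be pretty minimal. A rough sketch using real libraries (feedparser, tweepy) but with placeholder credentials and no error handling or persistence, so treat it as a starting point rather than the bot I actually ran:

```python
# Sketch: poll the quick takes RSS feed and tweet new entries.
import time

import feedparser
import tweepy

FEED_URL = "https://ea.greaterwrong.com/shortform?format=rss"

client = tweepy.Client(  # placeholder credentials
    consumer_key="...", consumer_secret="...",
    access_token="...", access_token_secret="...",
)

# Seed with existing entries so we don't tweet the whole backlog on startup.
seen = {entry.id for entry in feedparser.parse(FEED_URL).entries}

while True:
    for entry in feedparser.parse(FEED_URL).entries:
        if entry.id not in seen:
            seen.add(entry.id)
            text = f"{entry.title[:240]} {entry.link}"  # stay under 280 chars
            client.create_tweet(text=text)
    time.sleep(600)  # poll every 10 minutes
```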

2
rime
10mo
I would be interested in following this bot if it were made. Thanks for trying!
5
Sjlver
10mo
Prediction markets haven't moved all that much yet: https://manifold.markets/bcongdon/will-a-cell-cultured-meat-product-b But I share your hopeful attitude :)
1
Jonathan Ng
10mo
Link is broken?

Seems like the forces that turn people crazy are the same ones that lead people to do anything good and interesting at all. At least for EA, a core function of orgs/elites/high status community members is to make the kind of signaling you describe highly correlated with actually doing good. Of course it seems impossible to make them correlate perfectly, and that’s why settings with super high social optimization pressure (like FTX) are gonna be bad regardless.

But (again for EA specifically) I suspect the forces you describe would actually be good to increas... (read more)

2
Jobst Heitzig (vodle.it)
7mo
The "impossible to correlate perfectly" piece is like in AI alignment, where one could also argue that perfect alignment of a reward function to the "true" utility function is impossible. Indeed, one might even argue that the joint cognition implemented by the EA/rationality/x-risk community as a whole is a form of "artificial" intelligence, let's call it "EI" and thus we face an "EI alignment" problem. As EA becomes more powerful in the world, we get "ESI" (effective altruism superhuman intelligence) and related risks from misaligned ESI. The obvious solution in my opinion is the same for AI and EI: don't maximize, since the metric you might aim to maximize is most likely imperfectly aligned with true utility. Rather satisfice: be ambitious, but not infinitely so. After reaching an ambitious goal, check if your reward function still makes sense before setting the next, more ambitious goal. And have some human users constantly verify your reward function :-)

Hypothesis: from the perspective of currently living humans and those who will be born in the current <4% growth regime only (i.e. pre-AGI takeoff or, I guess, stagnation), donations currently earmarked for large-scale GHW, GiveWell-type interventions should be invested (maybe in tech/AI-correlated securities) instead, with the intent of being deployed for the same general category of beneficiaries in <25 (maybe even <1) years.

The arguments are similar to those for old school "patient philanthropy" except now in particular seems exceptionally uncerta... (read more)

I'm skeptical of this take. If you think sufficiently transformative + aligned AI is likely in the next <25 years, then from the perspective of currently living humans and those who will be born in the current <4% growth regime, surviving until transformative AI arrives would be a huge priority. Under that view, you should aim to deploy resources as fast as possible to lifesaving interventions rather than sitting on them.

Made a podcast feed with EAG talks. Now has both the recent Bay Area and London ones:

Full vids on the CEA YouTube page

Not OP but here are some "user problems" either I have or am pretty sure a bunch of people have:

  • Lots of latent, locked up insight/value in drafts
    • Implicitly high standards discourage posting these as normal posts, which is good for avg post quality and bad for total quality
  • Would want to collaborate on either an explicit idea or something tbd, but making this happen as is takes a bunch of effort
  • Reduces costs of getting and giving feedback
    • Currently afaik there's no market where feedback buyers and sellers can meet - just ad hoc Google doc links 
    • In princi
... (read more)
2
Vaidehi Agarwalla
10mo
+1 to all of this also. 

Definitely part of the explanation, but my strong impression from interaction irl and on Twitter is that many (most?) AI-safety-pilled EAs donate to GiveWell and much fewer to anything animal related.

I think that, ~literally excepting Eliezer (who doesn’t think other animals are sentient), this isn’t what you’d expect from the implied weirdness model.

Assuming I’m not badly mistaken about others’ beliefs and the gestalt (sorry) of their donations, I just don’t think they’re trying to do the most good with their money. Tbc this isn’t some damning indictment - it’s how almost all self-identified EAs’ money is spent and I’m not at all talking about ‘normal person in rich country consumption.’

Note: this sounds like it was written by chatGPT because it basically was (from a recorded ramble)🤷‍
 

I believe the Forum could benefit from a Shorterform page, as the current Shortform forum, intended to be a more casual and relaxed alternative to main posts, still seems to maintain high standards. This is likely due to the impressive competence of contributors who often submit detailed and well-thought-out content. While some entries are just a few well-written sentences, others resemble blog posts in length and depth.

As such, I find myself hesitant... (read more)

Thanks for bringing our convo here! As context for others, Nathan and I had a great discussion about this which was supposed to be recorded... but I managed to mess up and didn't capture the incoming audio (i.e. everything Nathan said) 😢

Guess I'll share a note I made about this (sounds AI written because it mostly was, generated from a separate rambly recording). A few lines are a little spicier than I'd ideally like but 🤷

Donations and Consistency in Effective Altruism

I believe that effective altruists should genuinely strive to practice effective altruism

... (read more)
4
RedStateBlueState
11mo
I think most of the animal welfare neglect comes from the fact that if people are deep enough into EA to accept all of its "weird" premises they will donate to AI safety instead. Animal welfare is really this weird midway spot between "doesn't rest on controversial claims" and "maximal impact".
7
Jason
11mo
Thanks for posting this. I had branching out my giving strategy to include some animal-welfare organizations on the to-do list, but this motivated me to actually pull the trigger on that.