Aaron Bergman

1707 karma · Joined Nov 2017 · Working (0-5 years) · Maryland, USA
aaronbergman.neocities.org/

Bio

I graduated from Georgetown University in December 2021 with majors in economics and mathematics and a minor in philosophy. There, I founded and helped lead Georgetown Effective Altruism. In recent years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.

I'm now doing research thanks to an EA Funds grant, trying to answer hard, important EA-relevant questions. My first big project (in addition to everything listed here) was helping to produce this team Red Teaming post.

Blog: aaronbergman.net

How others can help me

  • Suggest action-relevant, tractable research ideas for me to pursue
  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Convince me of a better marginal use of small-dollar donations than giving to the Fish Welfare Initiative, from the perspective of a suffering-focused hedonic utilitarian.
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's like a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule-of-thumb standard: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments (140)

Topic contributions (1)

Automated interface between Twitter and the Forum (eg a bot that, when tagged on twitter, posts the text and image of a tweet on Quick Takes and vice versa)

I’d be very surprised if you can’t get a job that pays much more than the substitute teacher role: the gap between that and ~any EA org job is massive, and inability to get the latter is only very weak evidence of inability to earn more.

Sorry if I missed this, but this does depend a lot on location/willingness to move. The above assumes you’re in the US and willing to move cities.

Also, living frugally to donate more is of course very virtuous if you take your salary to be a given, but from an altruistic perspective, insofar as they trade off, it’s probably much better to spend effort on finding a way to earn more.

This is a bit of a hobbyhorse of mine, but this could look like “found a startup with a 5% chance of earning $10M” in addition to, or instead of, searching for higher-salaried roles.

Random sorta gimmicky AI safety community building idea: tabling at universities, but with a couple of laptops signed into Claude Pro under different accounts. Encourage students (and profs) to try giving it a hard question from, eg, a problem set and see how it performs. Ideally have a big monitor for onlookers to easily see.

Most college students are probably still using ChatGPT-3.5, if they use LLMs at all. There’s a big delta now between that and the frontier.

I made a custom GPT that is just normal, fully functional ChatGPT-4, but I will donate any revenue this generates[1] to effective charities. 

Presenting: Donation Printer 

  1. ^

    OpenAI is rolling out monetization for custom GPTs:

    Builders can earn based on GPT usage

    In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

Yeah you're right, not sure what I missed on the first read

This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like naively it would shrink and/or shift the distribution to the left - not reshape it.

[This comment is no longer endorsed by its author]

Yeah, but my (implicit, should have made explicit lol) question is “why is this the case?”

Like, at a high level it’s not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (eg, older field -> organizations are better developed), but they’d all be post hoc.

Interesting that the Animal Welfare Fund gives out so few small grants relative to the Infrastructure and Long-Term Future funds (Global Health and Development has given out only 20 grants, all very large, so it seems to be a more fundamentally different type of thing(?)). Data here.

A few stats:

  • The 25th percentile AWF grant was $24,250, compared to $5,802 for Infrastructure and $7,700 for LTFF (the medians show the same pattern).
  • AWF has made just nine grants of less than $10k, compared to 163 (Infrastructure) and 132 (LTFF).

Proportions under $threshold

| fund | prop_under_1k | prop_under_2500 | prop_under_5k | prop_under_10k |
|---|---|---|---|---|
| Animal Welfare Fund | 0.000 | 0.004 | 0.012 | 0.036 |
| EA Infrastructure Fund | 0.020 | 0.086 | 0.194 | 0.359 |
| Global Health and Development Fund | 0.000 | 0.000 | 0.000 | 0.000 |
| Long-Term Future Fund | 0.007 | 0.068 | 0.163 | 0.308 |

Grants under $threshold

| fund | n | under_2500 | under_5k | under_10k | under_250k | under_500k |
|---|---|---|---|---|---|---|
| Animal Welfare Fund | 250 | 1 | 3 | 9 | 243 | 248 |
| EA Infrastructure Fund | 454 | 39 | 88 | 163 | 440 | 453 |
| Global Health and Development Fund | 20 | 0 | 0 | 0 | 5 | 7 |
| Long-Term Future Fund | 429 | 29 | 70 | 132 | 419 | 429 |

Summary stats (rounded)

| fund | n | median | mean | q1 | q3 | total |
|---|---|---|---|---|---|---|
| Animal Welfare Fund | 250 | $50,000 | $62,188 | $24,250 | $76,000 | $15,546,957 |
| EA Infrastructure Fund | 454 | $15,319 | $41,331 | $5,802 | $45,000 | $18,764,097 |
| Global Health and Development Fund | 20 | $900,000 | $1,257,005 | $297,925 | $1,481,630 | $25,140,099 |
| Long-Term Future Fund | 429 | $23,544 | $44,624 | $7,700 | $52,000 | $19,143,527 |
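For what it's worth, the quartile and under-$threshold stats above can be recomputed from any list of grant amounts in a few lines of Python. This is a minimal sketch using made-up grant amounts (not the actual EA Funds data), with the stdlib `statistics.quantiles` standing in for whatever the original analysis used:

```python
from statistics import quantiles

# Hypothetical grant amounts in USD -- NOT the real EA Funds data.
grants = [2_000, 4_500, 8_000, 12_000, 20_000, 30_000, 45_000, 90_000]

def prop_under(amounts, threshold):
    """Share of grants strictly below a dollar threshold."""
    return sum(a < threshold for a in amounts) / len(amounts)

# 25th, 50th, and 75th percentiles (q1, median, q3).
q1, med, q3 = quantiles(grants, n=4)

summary = {
    "n": len(grants),
    "median": med,
    "q1": q1,
    "q3": q3,
    "total": sum(grants),
    "prop_under_10k": prop_under(grants, 10_000),
}
```

The per-fund tables then fall out of grouping the public grants database by fund and applying these two computations to each group.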