Aaron Bergman

1651 karma · Joined Nov 2017 · Working (0-5 years) · Maryland, USA
aaronbergman.neocities.org/

Bio


I graduated from Georgetown University in December 2021 with degrees in economics and mathematics and a minor in philosophy. There, I founded and helped to lead Georgetown Effective Altruism. Over the last few years, I've interned at the Department of the Interior, the Federal Deposit Insurance Corporation, and Nonlinear, a newish longtermist EA org.

I'm now doing research thanks to an EA Funds grant, trying to answer hard, important, EA-relevant questions. My first big project (in addition to everything listed here) was helping to produce this team Red Teaming post.

Blog: aaronbergman.net

How others can help me

  • Suggest action-relevant, tractable research ideas for me to pursue
  • Give me honest, constructive feedback on any of my work
  • Introduce me to someone I might like to know :)
  • Convince me of a better marginal use of small-dollar donations than giving to the Fish Welfare Initiative, from the perspective of a suffering-focused hedonic utilitarian.
  • Offer me a job if you think I'd be a good fit
  • Send me recommended books, podcasts, or blog posts that there's a >25% chance a pretty-online-and-into-EA-since-2017 person like me hasn't consumed
    • Rule-of-thumb standard: maybe "at least as good/interesting/useful as a random 80k podcast episode"

How I can help others

  • Open to research/writing collaboration :)
  • Would be excited to work on impactful data science/analysis/visualization projects
  • Can help with writing and/or editing
  • Discuss topics I might have some knowledge of
    • like: math, economics, philosophy (esp. philosophy of mind and ethics), psychopharmacology (hobby interest), helping to run a university EA group, data science, interning at government agencies

Comments

Random, sorta gimmicky AI safety community-building idea: tabling at universities, but with a couple of laptops signed into Claude Pro under different accounts. Encourage students (and profs) to try giving it a hard question from e.g. a problem set and see how it performs. Ideally have a big monitor so onlookers can easily see.

Most college students are probably still using GPT-3.5 (free ChatGPT), if they use LLMs at all. There's a big delta now between that and the frontier.

I made a custom GPT that is just normal, fully functional GPT-4, but I will donate any revenue it generates[1] to effective charities.

Presenting: Donation Printer 

  1. ^

    OpenAI is rolling out monetization for custom GPTs:

    Builders can earn based on GPT usage

    In Q1 we will launch a GPT builder revenue program. As a first step, US builders will be paid based on user engagement with their GPTs. We'll provide details on the criteria for payments as we get closer.

Yeah, you're right; not sure what I missed on the first read.

This doesn't obviously point in the direction of relatively and absolutely fewer small grants, though. Like, naively it would shrink and/or shift the distribution to the left, not reshape it.

[This comment is no longer endorsed by its author]

Yeah, but my (implicit, should have made explicit lol) question is "why is this the case?"

Like, at a high level it's not obvious that animal welfare as a cause/field should make less use of smaller projects than the others. I can imagine structural explanations (e.g. older field → organizations are better developed), but they'd all be post hoc.

Interesting that the Animal Welfare Fund gives out so few small grants relative to the EA Infrastructure and Long-Term Future Funds (Global Health and Development has only given out 20 grants, all very large, so it seems to be a more fundamentally different type of thing(?)). Data here.

A few stats:

  • The 25th percentile AWF grant was $24,250, compared to $5,802 for Infrastructure and $7,700 for LTFF (and the medians tell basically the same story).
  • AWF has made just nine grants of less than $10k, compared to 163 (Infrastructure) and 132 (LTFF).

Proportions under $threshold

| Fund | Under $1k | Under $2.5k | Under $5k | Under $10k |
|---|---|---|---|---|
| Animal Welfare Fund | 0.000 | 0.004 | 0.012 | 0.036 |
| EA Infrastructure Fund | 0.020 | 0.086 | 0.194 | 0.359 |
| Global Health and Development Fund | 0.000 | 0.000 | 0.000 | 0.000 |
| Long-Term Future Fund | 0.007 | 0.068 | 0.163 | 0.308 |

Grants under $threshold

| Fund | n | Under $2.5k | Under $5k | Under $10k | Under $250k | Under $500k |
|---|---|---|---|---|---|---|
| Animal Welfare Fund | 250 | 1 | 3 | 9 | 243 | 248 |
| EA Infrastructure Fund | 454 | 39 | 88 | 163 | 440 | 453 |
| Global Health and Development Fund | 20 | 0 | 0 | 0 | 5 | 7 |
| Long-Term Future Fund | 429 | 29 | 70 | 132 | 419 | 429 |

Summary stats (rounded)

| Fund | n | Median | Mean | Q1 | Q3 | Total |
|---|---|---|---|---|---|---|
| Animal Welfare Fund | 250 | $50,000 | $62,188 | $24,250 | $76,000 | $15,546,957 |
| EA Infrastructure Fund | 454 | $15,319 | $41,331 | $5,802 | $45,000 | $18,764,097 |
| Global Health and Development Fund | 20 | $900,000 | $1,257,005 | $297,925 | $1,481,630 | $25,140,099 |
| Long-Term Future Fund | 429 | $23,544 | $44,624 | $7,700 | $52,000 | $19,143,527 |
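For anyone who wants to poke at the numbers themselves, here's a minimal pandas sketch of how tables like these can be generated. The grants.csv file name and its fund/amount columns are hypothetical stand-ins for the public grants data linked above, not the actual schema.

```python
# Minimal sketch: per-fund threshold proportions, counts, and summary stats.
# Assumes a hypothetical grants.csv with one row per grant and columns
# "fund" and "amount" (USD); adjust names to match the real export.
import pandas as pd

grants = pd.read_csv("grants.csv")  # hypothetical file name
by_fund = grants.groupby("fund")["amount"]

# Proportion of each fund's grants under various dollar thresholds
props = pd.DataFrame({
    f"prop_under_{t}": by_fund.apply(lambda s, t=t: (s < t).mean())
    for t in [1_000, 2_500, 5_000, 10_000]
}).round(3)

# Raw counts of grants under (larger) thresholds
counts = pd.DataFrame({
    f"n_under_{t}": by_fund.apply(lambda s, t=t: int((s < t).sum()))
    for t in [2_500, 5_000, 10_000, 250_000, 500_000]
})

# Per-fund summary statistics
summary = by_fund.agg(
    n="count",
    median="median",
    mean="mean",
    q1=lambda s: s.quantile(0.25),
    q3=lambda s: s.quantile(0.75),
    total="sum",
).round(0)

print(props, counts, summary, sep="\n\n")
```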

> In their most straightforward form ("foundation models"), language models are a technology which naturally scales to something in the vicinity of human-level (because it's about emulating human outputs), not one that naturally shoots way past human-level performance
>
>   • i.e. it is a mistake-in-principle to imagine projecting out the GPT-2 → GPT-3 → GPT-4 capability trend into the far-superhuman range

Surprised to see no pushback on this yet. I don't think it's true; I've come around to thinking that Eliezer is basically right that the limit of next-token prediction on human-generated text is superintelligence. Now, how this latent ability manifests is a hard question, but it's there to be used by the model for its own ends, or elicited by humans for ours, or both.

Also worth adding (guessing this point has been made before) that non-human-generated text (e.g. regression outputs from a program) is in the training data, so merely predicting it gets you superhuman performance in some domains.

For others considering whether/where to donate: RP is my current best guess for the single best charity to donate to, all things considered (on the margin, say up to $1M).

FWIW, I have a Manifold market for this (which is just one source of evidence, not something I purely defer to; also, I bet in the market, so grain of salt etc.).
