Gavin

Founder @ Arb
5566 karma · Working (6-15 years) · Pursuing a doctoral degree (e.g. PhD)
www.gleech.org/

Bio


Co-founder of Arb, an AI / forecasting / etc consultancy. Doing a technical AI PhD.

Conflicts of interest: ESPR, EPSRC, Emergent Ventures, OpenPhil, Infrastructure Fund, Alvea.

Posts: 36, sorted by new

Comments: 433

Topic contributions: 4

Nah, we were only 5 people plus a list of contacts right to the end. The main blocker was trying to solve executive search and funding at the same time, when these are coupled problems. And the cause of that is maybe me not having enough pull.

Appreciate this. 

The second metric is aid per employee, I think, so salaries don't come into it(?) Distributing food is labour-intensive, but so is UNICEF's work and parts of WHO's.

The rest of my evidence is informal (various development economists I've spoken to with horror stories) and I'd be pleased to be wrong.

Answer by Gavin

Arb is a research consultancy led by Misha Yagudin and Gavin Leech. Here's our review of our first and second years. We worked on forecasting, vaccine strategy, AI risk, economic policy, grantmaking, large-scale data collection, a little software engineering, explaining highly technical concepts, and intellectual history. Lately we've been on a biotech jaunt and also events.

We're looking for researchers with some background in ML, forecasting, technical writing, blogging, or some other hard thing. Current staff include a philosophy PhD, two college dropouts, a superforecaster, a machine learning PhD, etc. We pay US wages.

Fully remote with optional long retreats. We spent a full half of 2022 colocated.

We only take work we think is important.

hi@arbresearch.com

When producing the main estimates, Sam already uses just the virtual camps, for this reason. Could emphasise more that this probably doesn't generalise.

The key thing about AISC for me was probably the "hero licence" (social encouragement, uncertainty reduction) the camp gave me. I imagine this specific impact works 20x better in person. I don't know how many attendees need any such thing (in my cohort, maybe 25%) or what impact adjustment to give this type of attendee (probably a discount, since independence and conviction is so valuable in a lot of research).

Another wrinkle is the huge difference in acceptance rates between programmes. IIRC, the admission rate for AISC 2018 was 80% (only possible because of the era's heavy self-selection for serious people, as Sam notes). IIRC, 2023 MATS is down around 3%. Rejections have some cost for applicants, mostly borne by the highly uncertain ones who feel they need licencing. So this is another way AISC and MATS aren't doing the same thing, and I wouldn't directly compare them without noting this. Someone should be there to catch ~80% of seriously interested people. So, despite appearances, AGISF is a better comparison for AISC on this axis.

Well, there are a lot of different ways to design an NN.

That sounds related to OAA (minus the vast verifier they also want to build), so depending on the ambition it could be "End to end solution" or "getting it to learn what we want" or "task decomp". See also this cool paper from authors including Stuart Russell.

It's not a separate approach; the non-theory agendas, and even some of the theory agendas, have their own answers to these questions. I can tell you that almost everyone besides CoEms and OAA is targeting NNs, though.
