All of Arb's Comments + Replies

Hey! Sorry, it went to spam; I've replied now.

No form, just email for now.

Thanks!
Gavin

Strong upvote from us.

Two natural places to ask are Bountied Rationality and the EA Twitter group.

Answer by Arb · May 27, 2022

Arb is a new research consultancy led by Misha Yagudin and Gavin Leech.

In our first 6 months we've worked on forecasting, vaccine strategy, AI risk, economics, cause prioritisation, grantmaking, and large-scale data collection. We're also working with Emergent Ventures and Schmidt Futures on their AI talent programme.

Consulting is reactive, but we have lots of ideas of our own which you can help shape.

We're looking for researchers with some background in ML, forecasting, technical writing, blogging, or some other hard thing. We only take work we think is i... (read more)

ak08 · 2y
Hi, I see openings for Arb mentioned on the 80,000 Hours job board, but I don't see an application form anywhere. I have sent an email but have received no response. Are you guys still hiring?
Arb · 2y

Language models for detecting bad scholarship 

Epistemic Institutions

Anyone who has done desk research carefully knows that many citations don't support the claim they're cited for - usually in a subtle way, but sometimes as a total non sequitur. Here's a fun list of 13 failure modes we need to protect ourselves against.

This seems to be a side effect of academia scaling so much in recent decades - it's not that scientists are more dishonest than other groups, it's that they don't have time to carefully read everything in their sub-sub-field (... while maintaining... (read more)
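
A minimal sketch of the core check, assuming an OpenAI-style chat API; the model name, prompt, and labels are illustrative placeholders, not a tested pipeline:

```python
# Hypothetical sketch: ask a language model whether a cited source supports
# a claim. Assumes the `openai` client library; model and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = """Claim: {claim}

Excerpt from the cited source: {excerpt}

Does the excerpt support the claim? Answer with one word:
SUPPORTS, PARTIAL, UNRELATED, or CONTRADICTS."""

def check_citation(claim: str, excerpt: str) -> str:
    """Return a coarse label for whether the excerpt backs the claim."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable model would do
        messages=[{"role": "user",
                   "content": PROMPT.format(claim=claim, excerpt=excerpt)}],
    )
    return response.choices[0].message.content.strip()

# Example: the subtle kind of mismatch that desk researchers miss.
print(check_citation(
    claim="Meditation cures depression.",
    excerpt="In our n=30 pilot, mindfulness training modestly reduced "
            "self-reported stress over eight weeks.",
))
```

The real project would add retrieval of the cited text and batching over whole bibliographies; the interesting cases are the PARTIAL ones.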

Just emailed Good Judgment Inc about it.

Arb · 2y

On malevolence: How exactly does power corrupt?

Artificial Intelligence / Values and Reflective Processes

How does it happen, if it happens? Some plausible stories:

  • Backwards causation: people who are “corrupted” by power always had a lust for power, but deluded others and maybe even themselves about their integrity;

  • Being a good ruler (of any sort) is hard and at times very unpleasant; even the nicest people will try to cover up their faults, and covering up causes more problems... and at some point it is very hard to admit that you were an incompetent ruler al
... (read more)
MaxRa · 2y
Yes, that's interesting and plausibly very useful to understand better. Might also affect some EAs at some point.

The hedonic treadmill might be part of it. You get used to the personal perks quickly, so you still feel motivated & justified to put ~90% of your energy into problems that affect you personally -> removing threats to your rule, marginal status improvements, getting along with people close to you.

And some discussion of the backwards causation idea is here, in an oldie from Yudkowsky: Why Does Power Corrupt?
Arb · 2y

Evaluating large foundations

Effective Altruism

GiveWell looks at actors: object-level charities, people who do stuff. But logically, it's even more worth scrutinising megadonors (assuming they care about impact or about public opinion of their operations, and thus that our analysis could actually have some effect on them).

For instance, we've seen claims that the Global Fund, who spend $4B per year, meet a 2x GiveDirectly bar but not a GiveWell Top Charity bar.

This matters because most charity - and even most good charity - is still not by EAs or run on EA... (read more)
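
To make the bar arithmetic concrete, here's a toy sketch, normalising value-per-dollar to GiveDirectly = 1x and assuming ~10x for the Top Charity bar (an illustrative figure, not GiveWell's official threshold):

```python
# Toy sketch of cost-effectiveness "bars" in GiveDirectly units.
# The 10x Top Charity threshold is an assumption for illustration.
GIVEDIRECTLY = 1.0       # value per dollar, normalised to 1x
TOP_CHARITY_BAR = 10.0   # assumed Top Charity threshold

def classify(multiplier: float) -> str:
    """Place an actor's estimated value-per-dollar against the two bars."""
    if multiplier >= TOP_CHARITY_BAR:
        return "clears the Top Charity bar"
    if multiplier >= 2 * GIVEDIRECTLY:
        return "clears 2x GiveDirectly but not the Top Charity bar"
    return "below 2x GiveDirectly"

print(classify(2.5))  # e.g. a Global Fund-style estimate
```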

Arb · 2y

More Insight Timelines

In 2018, the Median Group produced an impressive timeline of all of the insights required for current AI, stretching back to China's Han Dynasty(!)

The obvious extension is to alignment insights. Along with some judgment calls about their relative importance, this would help any effort to estimate / forecast progress, and to gauge things like the importance of academia and of non-EAs to AI alignment. (See our past work for an example of something in dire need of an exhaustive weighted insight list.)
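
A sketch of the schema such a weighted list could use; the entries are real insights, but the weights are invented placeholders for the judgment calls above:

```python
# Hypothetical schema for a weighted insight timeline; the weights are
# invented placeholders for the judgment calls described above.
from dataclasses import dataclass

@dataclass
class Insight:
    year: int
    name: str
    weight: float  # judged relative importance, arbitrary units

TIMELINE = [
    Insight(1943, "McCulloch-Pitts artificial neuron", 0.8),
    Insight(1986, "Backpropagation popularised", 1.0),
    Insight(2017, "Transformer architecture", 0.9),
]

def cumulative_progress(timeline: list[Insight], year: int) -> float:
    """Share of total weighted insight accumulated by a given year."""
    total = sum(i.weight for i in timeline)
    return sum(i.weight for i in timeline if i.year <= year) / total

print(f"{cumulative_progress(TIMELINE, 2000):.0%} of weighted insight by 2000")
```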

Another set in need of collection are more genera... (read more)

Arb · 2y

Our World in Base Rates

Epistemic Institutions

Our World in Data are excellent; they provide world-class data and analysis on a bunch of subjects. Their COVID coverage made it obvious that this is a great public good.

So far, they haven't included data on base rates; but from Tetlock we know that base rates are the king of judgmental forecasting (EAs generally agree). Making them easily available can thus help people think better about the future. Here's a cool corporate example. 

e.g.

"85% of big data projects fail";
"10% of people r... (read more)
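
A sketch of what one entry in such a dataset might look like; the schema is a guess, and only the figure quoted above is filled in:

```python
# Hypothetical schema for an "Our World in Base Rates" table; only the
# figure quoted above is real, and its sourcing here is a placeholder.
base_rates = [
    {"event": "big data project fails", "rate": 0.85,
     "source": "industry survey", "period": "2010s"},
    # ... one row per reference class, each with a citation
]

def base_rate(event: str) -> float:
    """Look up the outside-view prior for a reference class."""
    return next(r["rate"] for r in base_rates if r["event"] == event)

print(base_rate("big data project fails"))  # 0.85
```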

I think this is neat. 

Perhaps-minor note: if you did this at scale, I imagine you'd want something more sophisticated than coarse base rates. More like, "For a project with these parameters, our model estimates an 85% chance of failure."

I of course see this as basically a bunch of estimation functions, but you get the idea.
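
A minimal sketch of one such estimation function, starting from the coarse base rate and adjusting it with a logistic model; the features and coefficients are invented for illustration:

```python
# Hypothetical sketch: move from a coarse base rate to a conditional
# estimate. Features and coefficients are invented for illustration.
import math

BASE_RATE = 0.85  # "85% of big data projects fail" as the outside-view prior

def p_failure(team_experience_years: float, scope_months: float) -> float:
    """Adjust the prior with a logistic model over project parameters."""
    logit = math.log(BASE_RATE / (1 - BASE_RATE))  # start from the prior
    logit += -0.1 * team_experience_years          # invented coefficient
    logit += 0.05 * scope_months                   # invented coefficient
    return 1 / (1 + math.exp(-logit))

print(f"{p_failure(team_experience_years=5, scope_months=12):.0%} chance of failure")
```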