
Buhl

518 karma · Joined Mar 2021

Sequences (1): Scalable Longtermist Projects: Speedrun Series

Comments: 14

Topic contributions: 4

Thank you! 

Worth noting that our input was also very unevenly distributed – our original idea list included ~40% AI-related ideas, ~15% bio, ~25% movement building / community infrastructure, and only ~20% other. (This was mainly because we had better access to AI-related project ideas via our networks.) If you’re interested in pursuing biosecurity- or movement-building-related projects, feel free to get in touch and I can share some of our additional ideas – for the other areas I don’t think we necessarily have great ideas.

Thanks, appreciate your comment and the compliment!

On your questions:

2. The research process does consider cost-effectiveness as a key factor – e.g., the weighted factor model we used included both an “impact potential” and a “cost” item, so projects were favoured if they had high estimated impact potential and/or a low estimated cost. “Impact potential” here means “impact with really successful (~90th percentile) execution” – we focus on the extreme rather than the average case because we expect most of our expected impact to come from tail outcomes (but a separate item in the model accounts for downside risk). The “cost” score was usually based on a rough proxy, while the “impact potential” score was basically just a guess – so this is quite different from how CE (presumably) uses cost-effectiveness: we don’t make an explicit cost-effectiveness estimate, and we don’t consult evidence from empirical studies (which typically don’t exist for the kinds of projects we consider). 
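To make the mechanics concrete, here’s a minimal sketch of how this kind of weighted factor model combines scores – the factor names, weights, and 1–5 scale below are purely illustrative, not our actual rubric:

```python
# Minimal sketch of a weighted factor model (illustrative weights and scale only).
WEIGHTS = {
    "impact_potential": 0.5,   # impact with ~90th-percentile execution (a guess)
    "cost": -0.3,              # rough proxy; higher estimated cost lowers the score
    "downside_risk": -0.2,     # separate item penalising potential downsides
}

def project_score(scores: dict) -> float:
    """Combine per-factor scores (e.g. each on a 1-5 scale) into one weighted total."""
    return sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)

# Example: high impact potential, moderate cost, low downside risk.
print(project_score({"impact_potential": 4, "cost": 2, "downside_risk": 1}))  # 1.2
```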

Re: “For each of the ideas do you feel like you have a sense of why this thing has not happened already?” – we didn’t consider this explicitly in the process (though it featured somewhat indirectly as part of considering tractability and impact potential). I feel like I have a rough sense for each of the projects listed – and we wouldn’t have included projects where we didn’t think it was plausible that the project would be feasible, that there’d be a good founder out there, etc. – but I could easily be missing important reasons. Definitely an important question – I’d be curious to hear how CE takes it into account. 

3. Appreciate the input! The idea here wouldn’t be to just shove people into government jobs, but also to make sure that they have the right context, knowledge, skills and opportunities to have a positive impact once there. I agree that policy is an ecosystem and that people are needed in many kinds of roles. I think it could make sense for an individual project to focus just/primarily on one or a few types of role (analogously to how the Horizon Institute focuses primarily on technocratic staffer and executive branch roles + think tank roles), but I’m generally in favour of high-quality projects in multiple policy-related areas (including advocacy/lobbying and developing think tank pipelines). 

The quick explanation is that I don't want people to over-anchor on it: the inputs are extremely uncertain, and a ranked list produced by a relatively well-respected research organisation is exactly the kind of thing people could very easily over-anchor on, even if it's caveated heavily.

(I'm in a similar position to Amber: Limited background (technical or otherwise) in AI safety and just trying to make sense of things by discussing them.)

Re: "I think you need to say more about what the system is being trained for (and how we train it for that). Just saying "facts about humans are in the data" doesn't provide a causal mechanism by which the AI acts in human-like ways, any more than "facts about clouds are in the data" provides a mechanism by which the AI role-plays being a cloud."

The (main) training process for LLMs is exactly to predict human text, which seems like it could reasonably be described as being trained to impersonate humans. If so, it seems natural to me to think that LLMs will by default acquire goals that are similar to human goals. (So it's not just that "facts about humans are in the data", but rather that state-of-the-art models are (in some sense) being trained to act like humans.)
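To spell out what I mean by "trained to predict human text": the standard pretraining objective is next-token prediction over a (mostly human-written) corpus, i.e., minimising the cross-entropy

$$\mathcal{L}(\theta) = -\sum_{t} \log p_\theta(x_t \mid x_{<t})$$

where $x_t$ are the tokens of the training text and $\theta$ the model parameters. (Fine-tuning stages like RLHF add further objectives on top, which is part of why I flag fine-tuning as a possible complication below.)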

I can see some ways this could go wrong – e.g., maybe "predicting what a human would do" is importantly different from "acting like a human would" in terms of the goals internalised; maybe fine-tuning changes the picture; or maybe we'll soon move to a different training paradigm where this doesn't apply. And of course, even if there's some chance this doesn't happen (even if it isn't the default), it warrants concern. But, naively, this argument still feels pretty compelling to me.

Buhl · 1y · 14

Thank you for the important post!
 

“we might question how well neuron counts predict overall information-processing capacity”

My naive prediction would be that many other factors predicting information-processing capacity (e.g., number of connections, conduction velocity, and refractory period) are positively correlated with neuron count, such that neuron count is fairly strongly correlated with information-processing capacity even if it plays only a minor causal role in producing it. 

You cite one paper (Chittka 2009) that provides some evidence against my prediction – based on skimming the abstract, it seems to argue roughly that insect brains are not necessarily worse at information processing than vertebrate brains. Curious whether you think this is the general trend of the literature on this topic?

Curious what you're referring to here and whether there's any publicly available information about it? I couldn't find anything in ALLFED's 2020 and 2021 updates. (I'm trying to estimate the cost-effectiveness of this kind of project as part of my work at Rethink Priorities.)

Another failure mode I couldn’t easily fit into the taxonomy that might warrant a new category:

Competency failures – EAs are just ineffective at achieving things in the world due to a lack of skills (e.g., comms, politics, org running) or bad judgement. Maybe this could be classed as a resource failure (for failing to attract people with certain skills) or a rigor failure (for failing to develop them / learn from others). Will try to think of a title beginning with R…

Minor points:

  • I was also considering something like value failures (EAs have the wrong moral theories/values), but that could probably be classified as a failure of rigor.
  • +1 to separating internal strife and reputation risks.

Curious what people think of the argument that, given that people in the EA community have different rankings of the top causes, a close-to-optimal community outcome could be reached if individuals argmax using their own ranking?

(At least assuming that the number of people who rank a certain cause as the top one is proportional to how likely it is to be the top one.)
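As a toy illustration of what I have in mind (the numbers are entirely made up, not claims about actual community credences): if people's top picks are distributed in proportion to each cause's probability of really being the top one, then everyone argmaxing individually reproduces the probability-weighted split at the community level.

```python
# Toy illustration of the assumption above (all numbers are made up).
import random

probabilities = {"AI": 0.5, "bio": 0.3, "animals": 0.2}  # illustrative credences
n_people = 10_000

# Each person "argmaxes" on their own ranking; by assumption, top picks are
# distributed according to the probabilities above.
picks = random.choices(list(probabilities), weights=probabilities.values(), k=n_people)

allocation = {c: picks.count(c) / n_people for c in probabilities}
print(allocation)  # ≈ {'AI': 0.5, 'bio': 0.3, 'animals': 0.2}
# The community-level split roughly matches the probability-weighted portfolio,
# even though no individual diversifies.
```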

Buhl · 2y · 22

[Shortform version of this comment here.]

Update: I helped Linch collect data on the undergrad degrees of exceptionally successful people (using some of the ex post metrics Linch mentioned).

Of the 32 Turing Award winners in the last 20 years, 6 attended a top 10 US university, 16 attended another US university, 3 attended Oxbridge, and 7 attended other non-US universities. (full data)

Of the 97 Decacorn company founders I could find education data for, 19 attended a top 10 US university, 32 attended another US university, and 46 attended non-US universities (no Oxbridge). (full data)
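For a quick sense of the implied shares, here's the arithmetic on the counts above:

```python
# Shares implied by the counts above (top-10 US / other US / Oxbridge / other non-US).
turing = {"top10_us": 6, "other_us": 16, "oxbridge": 3, "other_non_us": 7}      # n = 32
decacorn = {"top10_us": 19, "other_us": 32, "oxbridge": 0, "other_non_us": 46}  # n = 97

for name, counts in [("Turing Award winners", turing), ("Decacorn founders", decacorn)]:
    total = sum(counts.values())
    shares = {k: f"{v / total:.0%}" for k, v in counts.items()}
    print(name, shares)
# Top-10 US share: ~19% of Turing Award winners, ~20% of Decacorn founders.
```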

So it seems like people who are successful on these metrics are pretty spread out across both US/elsewhere and elite/non-elite unis, but concentrated enough that having considerable focus on top US universities makes sense (assuming a key aim is to target people with the potential to be extremely successful). 

The concentration gets a bit higher for PhDs for the Turing Award winners (28% at top 10 US universities). It’s also higher for younger Decacorn company founders (e.g., 50% of under-35s in the US at MIT or Stanford) – so that gives some (relatively weak) evidence that concentration at top US universities has increased in the last few decades. 

There’s a doc with more details here for anyone interested. 

Buhl · 2y · 15

Tl;dr: Most Turing Award winners and Decacorn company founders (i.e., exceptionally successful people) don’t attend top US universities, but there’s a fair amount of concentration.

In response to the post Most Ivy-smart students aren't at Ivy-tier schools and as a follow-up to Linch’s comment tallying the educational backgrounds of Fields Medalists, I collected some data on the undergrad degrees of exceptionally successful people (using some of the (imperfect) ex post metrics suggested by Linch).

Of the 32 Turing Award winners in the last 20 years, 6 attended a top 10 US university, 16 attended another US university, 3 attended Oxbridge, and 7 attended other non-US universities. (full data)

Of the 97 Decacorn company founders I could find education data for, 19 attended a top 10 US university, 32 attended another US university, and 46 attended non-US universities (no Oxbridge). (full data)

So it seems like people who are successful on these metrics are pretty spread out across both US/elsewhere and elite/non-elite unis, but concentrated enough that having considerable focus on top US universities makes sense (assuming a key aim is to target people with the potential to be extremely successful). 

The concentration gets a bit higher for PhDs for the Turing Award winners (28% at top 10 US universities). It’s also higher for younger Decacorn company founders (e.g., 50% of under-35s in the US at MIT or Stanford) – so that gives some (relatively weak) evidence that concentration at top US universities has increased in the last few decades. 

There’s a doc with more details here for anyone interested. 

[Also, for full disclosure: I collected this data as part of my job, not just as a fun after-hours project.]
