
CSET report on AI and Compute does not acknowledge Alignment

It was disappointing to see that in this recent report by CSET, the default (mainstream) assumption that continued progress in AI capabilities is important was never questioned. Indeed, AI alignment/safety/x-risk is not mentioned once, and all the policy recommendations are to do with accelerating/maintaining the growth of AI capabilities! This, coming from an org that OpenPhil has given over $50M to set up.

Shorthand for hedging statements?

A lot of EA writing contains many hedging statements (along the lines of "I'm uncertain about this, but", "my best guess is", "this is potentially a good idea", "it might be good", "I'm tentatively going to say", "I'm not confident in this assertion, but", "I'm unsure of the level of support/evidence base for this", etc).

To make things more concise, perhaps [ha!] a shorthand could be developed, where (rough) probabilities are given for statements. Maybe [haha] it could take the form of a subscript with a number, with the statements bounded by apostrophes ('), except the apostrophes are also subscript. To be as minimal as possible, the numbers could be [lol] written as 9 for 0.9 or 90%, 75 for 0.75 or 75%, 05 for 0.05 or 5%, 001 for 0.001 or 0.1% etc (basically just taking the decimal probability and omitting the decimal point). Footnotes could be added for explanations where appropriate.
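As a minimal sketch of the rule just described (the function names here are my own, purely illustrative):

```python
def encode(p: float) -> str:
    """Encode a probability as the proposed shorthand: write the decimal
    form and drop the leading '0.', so 0.9 -> '9', 0.75 -> '75',
    0.05 -> '05', 0.001 -> '001'."""
    if not 0 < p < 1:
        raise ValueError("shorthand only covers probabilities strictly between 0 and 1")
    return f"{p:.10f}".split(".")[1].rstrip("0")

def decode(s: str) -> float:
    """Decode the shorthand back to a probability by re-prepending '0.'."""
    return float("0." + s)

assert encode(0.9) == "9" and encode(0.05) == "05" and encode(0.001) == "001"
assert decode("75") == 0.75
```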

Maybe the statements (or numbers) could be colour-coded for ease of spotting whether something is regarded as highly likely, highly unlikely, or somewhere in the middle. Although maybe all of this would disrupt the flow of reading too much?
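The colour coding could be as simple as banding the probability (the thresholds below are arbitrary, just to illustrate):

```python
def colour_band(p: float) -> str:
    """Map a probability to a display colour: green = regarded as highly
    likely, red = highly unlikely, amber = somewhere in the middle."""
    if p >= 0.75:
        return "green"
    if p <= 0.25:
        return "red"
    return "amber"
```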

Words of estimative probability, from the intelligence world, are a related concept.

The problem with these is getting everyone on the same page about what the words mean. I recall Toby Ord not liking the IPCC's use of them in The Precipice for this reason.

I agree that words are quite imprecise and that numbers are usually superior.

AGI x-risk timelines: 10% chance (by year X) estimates should be the headline, not 50%.

Given the stakes involved (the whole world/future light cone), we should regard timelines of ≥10% probability of AGI in ≤10 years as crunch time, and, since there is already an increasingly broad consensus around this[1], treat AGI x-risk as an urgent, immediate priority (not something to mull over leisurely as part of a longtermist agenda).

Of course it's not just time to AGI that is important. It's also P(doom|AGI & alignment progress). I think most people in AI Alignment would regard this as >50% given our current state of alignment knowledge and implementation[2].
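To make the combination concrete (the numbers here are purely illustrative, pairing the ≥10%-in-10-years timeline above with a 50% conditional risk):

P(doom in ≤10 years) ≈ P(AGI in ≤10 years) × P(doom|AGI) = 0.1 × 0.5 = 0.05

i.e. roughly a 5% chance of existential catastrophe this decade on these numbers alone, before even counting scenarios where AGI arrives later.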

To borrow from Stuart Russell's analogy: if there were a 10% chance of aliens landing in the next 10-15 years[3], we would be doing a lot more than we are currently doing[4]. AGI is akin to an alien species more intelligent than us that is unlikely to share our values.

  1. ^

    Note that Holden Karnofsky's all-things-considered (and IMO conservative) estimate for the advent of AGI is a >10% chance in (now) 14 years. Anecdotally, the majority of people I've spoken to on the current AGISF course estimate a 10% chance of AGI within 10 years or less.

  2. ^

    Correct me if you think this is wrong; it would be interesting to see a recent survey on this. Maybe there is more optimism when factoring in expected progress before the advent of AGI.

  3. ^

    This is different to the original analogy, which was an email saying: "People of Earth: We will arrive on your planet in 50 years. Get ready." Say astronomers spotted something that looked like a spacecraft heading in approximately our direction, and estimated there was a 10% chance that it was indeed a spacecraft heading to Earth.

  4. ^

    Although perhaps we wouldn't. Maybe people would endlessly argue about whether the evidence is strong enough to declare a >10% probability. Or flatly deny it.

I agree with this, and think maybe this should just be a top-level post.

[Half-baked global health idea based on a conversation with my doctor: earlier cholesterol checks and prescription of statins]

I've recently found out that I've got high (bad) cholesterol, and have been prescribed statins. What surprised me was that my doctor said that they normally wait until the patient has a 10% chance of heart attack or stroke in the next 10 years before they do anything(!) This seems crazy in light of the amount of resources put into preventing things with similar (or lower) risk profiles, such as Covid or road traffic accidents. Would reducing that threshold to, say, 5%* across the board (i.e. worldwide) be a low-hanging fruit? Say by adjusting guidelines that are set at a high level. Or have I just got this totally wrong? (I've done ~zero research, apart from searching givewell.org for "statins", which didn't turn up anything relevant.)

*My risk is currently at 5%, and I was proactive about getting my blood tested.

Romeo Stevens writes about cholesterol here.

Companies like thriva.co offer cheap at home lipid tests.

Here are a few recent papers on new drugs:

https://academic.oup.com/eurjpc/article/28/11/1279/5898664

https://www.sciencedirect.com/science/article/pii/S0735109721061131?via%3Dihub 

Cardiovascular disease is on the rise in emerging economies, so maybe it'd be a competitive cause area in the future.

Saturated fat seems to be a main culprit: 

https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD011737.pub3/full

One public health intervention might be a fat tax:

https://en.wikipedia.org/wiki/Fat_tax

https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD012415.pub2/full

Or donating to the Good Food Institute on human health grounds.

Surprisingly, high cholesterol might kill 4m people per year globally, with 50% of those deaths in emerging economies. I think OPP is looking into air pollution, which kills 7m per year, so maybe this is indeed something to look into.

Thanks for sharing. I'm adding this to my potential research agenda, kept here: https://airtable.com/shrGF5lAwSZpQ7uhP/tblJR9TaKLT41AoSL and https://airtable.com/shrQdonZuU20cpGR4

CEEALAR is hiring for a full-time Operations Manager (again); please share with anyone you think may be interested: https://ceealar.org/job-operations-manager/

Blackpool, UK. To start mid-late September. £31,286 – £35,457 per year (40 hours a week). Applications due by 31st August.

CEEALAR is hiring for a full-time Community Manager; please share with anyone you think may be interested: https://ceealar.org/job-community-manager/

To start mid-late September. £31,286 – £35,457 per year (full time, 40 hours a week).

CEEALAR is hiring for a full-time Operations Manager; please share with anyone you think may be interested: https://ceealar.org/job-operations-manager/ Blackpool, UK.

To start mid-late July. £31,286 – £35,457 per year (40 hours a week).