Founder of CEEALAR (née the EA Hotel)

Wiki Contributions


Announcing, powered by EA Funds

Some related news: Peter Singer has released a (very limited) NFT series! They're up for auction on OpenSea, with proceeds going to TLYCS.

Mapping of EA

Not exactly what you are looking for, but here is an actual [metaphorical] map (although it could do with updating; it's from Feb 2020):

I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related)

I don't think mathematics should be a crux. As I say below, it could be generalised to being offered to anyone whom a panel of top people in AGI Safety would have on their dream team (and who otherwise would be unlikely to work on the problem). Or perhaps "Fields Medalists, Nobel Prize winners in Physics, other equivalent prize recipients in Computer Science, or Philosophy[?], or Economics[?]". And we could include additional criteria, such as being able to intuit what is being alluded to here. Basically, the idea is to headhunt the very best people for the job, using extreme financial incentives. We don't need to artificially narrow our search to one domain, but maths ability is a good heuristic as a starting point.

Greg_Colbourn's Shortform

[Half-baked global health idea based on a conversation with my doctor: earlier cholesterol checks and prescription of statins]

I've recently found out that I've got high (bad) cholesterol, and have been prescribed statins. What surprised me was that my doctor said that they normally wait until the patient has a 10% chance of heart attack or stroke in the next 10 years before they do anything(!) This seems crazy in light of the amount of resources put into preventing things with a similar (or lower) risk profile, such as Covid, or road traffic accidents. Would reducing that threshold to, say, 5%* across the board (i.e. worldwide), be a low-hanging fruit? Say by adjusting things set at a high level. Or have I just got this totally wrong? (I've done ~zero research, apart from searching for "statins", from which I didn't find anything relevant.)

*My risk is currently at 5%, and I was proactive about getting my blood tested.

What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?

I think the main problem is that you don't know for sure that they're close to AGI, or that it is misaligned, beyond saying that all AGIs are misaligned by default and that what they have looks close to one. If they don't buy this argument -- which I'm assuming they won't, given that they're otherwise proceeding -- then you probably won't get very far.

As for using force (let's assume this is legal/governmental force), we might then find ourselves in a "whack-a-mole" situation; and how do we get global enforcement (/cooperation)?

What would you do if you had a lot of money/power/influence and you thought that AI timelines were very short?

Imagine it's just the standard AGI scenario where the world ends "by accident", i.e. the people making the AI don't heed the standard risks, or solve the Control Problem, as outlined in books like Human Compatible and Superintelligence, in a bid to be first to make AGI (perhaps for economic incentives, or perhaps for your ** scenario). I imagine it will also be hard to know who exactly the actors are, but you could have some ideas (e.g. the leading AI companies, certain governments, etc.).

I’ll pay you a $1,000 bounty for coming up with a good bounty (x-risk related)

Good idea about the fellowship. I've been thinking that it would need to come from somewhere prestigious. Perhaps CHAI, FLI or CSER, or a combination of such academic institutions? If it were from, say, a lone crypto millionaire, they might risk being dismissed as a crackpot, and by extension risk damaging the reputation of AGI Safety. Then again, perhaps the amounts of money just make it too outrageous to fly in academic circles? Maybe we should be looking to something like sports or entertainment instead? Compare the salary to that of e.g. top footballers or musicians. (Are there people high up in these fields who are concerned about AI x-risk?)

Discussion with Eliezer Yudkowsky on AGI interventions

Yes, the concern is optimisation during training. My intuition is along the lines of "sufficiently large pile of linear algebra with reward function → basic AI drives maximise reward → reverse-engineers [human behaviour / protein folding / etc.] and manipulates the world so as to maximise its reward → [foom / doom]".

I wouldn't say "personality" comes into it. In the above scenario the giant pile of linear algebra is completely unconscious and lacks self-awareness; it's more akin to a force of nature: a blind optimisation process.