All of Bogdan Ionut Cirstea's Comments + Replies

[$20K In Prizes] AI Safety Arguments Competition

'One metaphor for my headspace is that it feels as though the world is a set of people on a plane blasting down the runway:

And every time I read commentary on what's going on in the world, people are discussing how to arrange your seatbelt as comfortably as possible given that wearing one is part of life, or saying how the best moments in life are sitting with your family and watching the white lines whooshing by, or arguing about whose fault it is that there's a background roar making it hard to hear each other.

I don't know where we're actually heading, o... (read more)

[$20K In Prizes] AI Safety Arguments Competition

'If you know the aliens are landing in thirty years, it’s still a big deal now.' (Stuart Russell)

[$20K In Prizes] AI Safety Arguments Competition

'Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound. For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room,... (read more)

[$20K In Prizes] AI Safety Arguments Competition

'Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make provided that the machine is docile enough to tell us how to keep it under control.' (I. J. Good)

[$20K In Prizes] AI Safety Arguments Competition

'The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.' (Eliezer Yudkowsky)

[$20K In Prizes] AI Safety Arguments Competition

'You can't fetch the coffee if you're dead.' (Stuart Russell)

Career Advice: Philosophy + Programming -> AI Safety

Consider applying for

Thanks, I'm now on their mailing list!

Nines of safety: Terence Tao’s proposed unit of measurement of risk

If I remember correctly (from 'The Precipice'), the row 'Unaligned AI | ~1 in 50 | 1.7' should actually be 'Unaligned AI | ~1 in 10 | 1'.
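(For context, the trailing numbers in those rows are presumably the corresponding nines of safety, i.e. -log10(risk), so a ~1 in 50 risk gives ~1.7 nines and a ~1 in 10 risk gives 1 nine. A minimal sketch of the conversion, assuming that convention:)

```python
import math

def nines_of_safety(risk: float) -> float:
    # Tao's "nines of safety": k nines means the probability
    # of disaster is at most 10^-k, so nines = -log10(risk).
    return -math.log10(risk)

print(round(nines_of_safety(1 / 50), 1))  # 1.7 -- the row as printed
print(round(nines_of_safety(1 / 10), 1))  # 1.0 -- the corrected row
```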

Thanks for pointing this out! Should be fixed now
Why AI alignment could be hard with modern deep learning

From : 'A model about as powerful as a human brain seems like it would be ~100-10,000 times larger than the largest neural networks trained today, and I think could be trained using an amount of data and computation that -- while probably prohibitive as of August 2021 -- would come within reach after 15-30 years of hardware and algorithmic improvements.' Is it safe to assume that this is an updated, shorter timeline compared to (read more)

Not intended to be expressing a significantly shorter timeline; 15-30 years was supposed to be a range of "plausible/significant probability," which the previous model also gave (probability on 15 years was >10% and probability on 30 years was 50%). Sorry that wasn't clear! (JTBC, I think you could train a brain-sized model sooner than my median estimate for TAI, because you could train it on shorter-horizon tasks.)

What kind of event, targeted to undergraduate CS majors, would be most effective at getting people to work on AI safety?

Encouraging them to apply to the next round of the AGI Safety Fundamentals program might be another idea. The curriculum there can also provide inspiration for reading group materials.

Forecasting Newsletter: August 2021

'CSET-Foretell forecasts were quoted by Quanta Magazine (a) on whether VC funding for tech startups will dry up' - the linked article seems to come from Quartz, not Quanta Magazine.

Thanks, fixed.

Forecasting transformative AI: the "biological anchors" method in a nutshell

I was very surprised by the paragraph: 'However, I also have an intuitive preference (which is related to the "burden of proof" analyses given previously) to err on the conservative side when making estimates like this. Overall, my best guesses about transformative AI timelines are similar to those of Bio Anchors.', especially in context, and especially because of the use of the term 'conservative'. I would have thought that the conservative assumption would be shorter timelines (since they leave less time to prepare). If I remember correctly, Toby Ord discusse... (read more)

Holden Karnofsky:
There are contexts in which I'd want to use the terms as you do, but I think it is often reasonable to associate "conservatism" with being more hesitant to depart from conventional wisdom, the status quo, etc. In general, I have always been sympathetic to the idea that the burden of proof/argumentation is on those who are trying to raise the priority of some particular issue or problem. I think there are good reasons to think this works better (and is more realistic and conducive to clear communication) than putting the burden of proof on those who want to ignore some novel issue / continue what they were doing.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

I think aligning narrow superhuman models could be one very valuable megaproject, and it seems scalable to >= $100 million per year, especially if it also involves training large models (not just fine-tuning them for safety). Training their own large models for alignment research seems to be what Anthropic plans to do. This is also touched upon in Chris Olah's recent 80k interview.