Bogdan Ionut Cirstea

Why AI alignment could be hard with modern deep learning

From https://www.cold-takes.com/supplement-to-why-ai-alignment-could-be-hard/ : 'A model about as powerful as a human brain seems like it would be ~100-10,000 times larger than the largest neural networks trained today, and I think could be trained using an amount of data and computation that -- while probably prohibitive as of August 2021 -- would come within reach after 15-30 years of hardware and algorithmic improvements.' Is it safe to assume that this is an updated, shorter timeline compared to https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines ?

What kind of event, targeted to undergraduate CS majors, would be most effective at getting people to work on AI safety?

Encouraging them to apply to the next round of the AGI Safety Fundamentals program https://www.eacambridge.org/agi-safety-fundamentals might be another idea. The curriculum there can also provide inspiration for reading-group materials.

Forecasting Newsletter: August 2021

'CSET-Foretell forecasts were quoted by Quanta Magazine (a) on whether VC funding for tech startups will dry up' - the linked article seems to come from Quartz, not Quanta Magazine.

Forecasting transformative AI: the "biological anchors" method in a nutshell

I was very surprised by the paragraph: 'However, I also have an intuitive preference (which is related to the "burden of proof" analyses given previously) to err on the conservative side when making estimates like this. Overall, my best guesses about transformative AI timelines are similar to those of Bio Anchors.', especially in context, and especially because of the use of the term 'conservative'. I would have thought that the conservative assumption would be shorter timelines, since those leave less time to prepare. If I remember correctly, Toby Ord discusses something similar in the chapter on AI risk in 'The Precipice': at one of the AI safety conferences (FLI Puerto Rico 2015?), some AI researchers used 'conservative' to mean 'we shouldn't make wild predictions about AI', while others used it to mean 'we should be really risk-averse, so we should assume that it could happen soon'. I would have expected the second usage here.

What EA projects could grow to become megaprojects, eventually spending $100m per year?

I think aligning narrow superhuman models could be one very valuable megaproject, and it seems scalable to >= $100 million per year, especially if it also involves training large models (not just fine-tuning them for safety). Training its own large models for alignment research seems to be what Anthropic plans to do. This is also touched on in Chris Olah's recent 80,000 Hours interview.