Hi, I'm Steve Byrnes, an AGI safety researcher in Boston, MA, USA, with a particular focus on brain algorithms—see https://sjbyrnes.com/agi.html
Yup! Alternatively: we’re working with silicon chips that are 10,000,000× faster than the brain, so we can get a 100× speedup even if we’re a whopping 100,000× less skillful at parallelizing brain algorithms than the brain itself.
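For concreteness, here is the back-of-envelope arithmetic behind that claim, spelled out as a worked equation (the numbers are the same illustrative ones from the comment above, not precise estimates):

```latex
% Worked version of the back-of-envelope estimate above:
% a 10,000,000x chip-speed advantage divided by a 100,000x
% parallelization penalty still leaves a 100x overall speedup.
\[
  \frac{10{,}000{,}000\times \ \text{(chip speed)}}{100{,}000\times \ \text{(parallelization penalty)}}
  = 100\times \ \text{overall speedup}
\]
```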
Hi, I’m an AGI safety researcher who studies and talks about neuroscience a whole lot. I don’t have a neuroscience degree—I’m self-taught in neuroscience, and my actual background is physics. So I can’t really speak to what happens in neuroscience PhD programs. Nevertheless, my vague impression is that the kinds of things that people learn and do and talk about in neuroscience PhD programs have very little overlap with the kinds of things that would be relevant to AI safety. Not zero, but probably very little. But I dunno, I guess it depends on what classes you take and what research group you join. ¯\_(ツ)_/¯
AGI is possible, but putting a date on when we will have an AGI is just fooling ourselves.
So if someone says to you “I’m absolutely sure that there will NOT be AGI before 2035”, you would disagree, and respond that they’re being unreasonable and overconfident, correct?
I find the article odd in that it seems to be going on and on about how it's impossible to predict the date when people will invent AGI, yet the article title is "AGI isn't close", which is, umm, a prediction about when people will invent AGI, right?
If the article had said "technological forecasting is extremely hard, therefore we should just say we don't know when we'll get AGI, and we should make contingency plans for AGI arriving tomorrow or in 10 years or in 100 years or in 1000 years, etc.", I would have been somewhat more sympathetic.
(Although I still think numerical forecasts are a valuable way to communicate beliefs even in fraught domains where we have very little to go on -- I strongly recommend the book "Superforecasting".)
(Relatedly, the title of this post uses the word "close" without defining it, I think. Is 500 years "close"? 50 years? 5 years? If you're absolutely confident that "AGI isn't close", in the sense that we won't have AGI in the next 30 years (or whatever), then which part of the article explains why you believe that 30 years (or whatever) is insufficient time?)
As written, the article actually strikes me as doing the crazy thing where people sometimes say "we don't know 100% for sure that we'll definitely have AGI in the next 30 years, therefore we should act as if we know 100% for sure that we definitely won't have AGI in the next 30 years". If that's not your argument, good.
I had a very bad time with RSI from 2006-7, followed by a crazy-practically-overnight-miracle-cure-happy-ending. See my recent blog post The “mind-body vicious cycle” model of RSI & back pain for details & discussion. :)
The implications for "brand value" would depend on whether people learn about "EA" as the perpetrator vs. victim. For example, I think there were charitable foundations that got screwed over by Bernie Madoff, and I imagine that their wiki articles would have also had a spike in views when that went down, but not in a bad way.
Related:
I’m in no position to judge how you should spend your time, all things considered, but for what it’s worth, I think your blog posts on AI safety have been very clear and thoughtful, and I frequently recommend them to people (example). For example, I’ve started using the phrase “The King Lear Problem” from time to time (example).
Anyway, good luck! And let me know if there’s anything I can do to help you. 🙂