steve2152

Hi I'm Steve Byrnes, an AGI safety researcher in Boston, MA, USA, with a particular focus on brain algorithms—see https://sjbyrnes.com/agi.html

A mesa-optimization perspective on AI valence and moral patienthood

Oh, you said "evolution-type optimization", so I figured you were thinking of the case where the inner/outer distinction is clear cut. If you don't think the inner/outer distinction will be clear cut, then I'd question whether you actually disagree with the post :) See the section defining what I'm arguing against, in particular the "inner as AGI" discussion.

A mesa-optimization perspective on AI valence and moral patienthood

Nah, I'm pretty sure the difference there is "Steve thinks that Jacob is way overestimating the difficulty of humans building AGI-capable learning algorithms by writing source code", rather than "Steve thinks that Jacob is way underestimating the difficulty of computationally recapitulating the process of human brain evolution".

For example, for the situation that you're talking about (I called it "Case 2" in my post) I wrote "It seems highly implausible that the programmers would just sit around for months and years and decades on end, waiting patiently for the outer algorithm to edit the inner algorithm, one excruciatingly-slow step at a time. I think the programmers would inspect the results of each episode, generate hypotheses for how to improve the algorithm, run small tests, etc." If the programmers did just sit around for years not looking at the intermediate training results, yes I expect the project would still succeed sooner or later. I just very strongly expect that they wouldn't sit around doing nothing.

A mesa-optimization perspective on AI valence and moral patienthood

AlphaGo has a human-created optimizer, namely MCTS. Normally people don't use the term "mesa-optimizer" for human-created optimizers.

Then maybe you'll say "OK there's a human-created search-based consequentialist planner, but the inner loop of that planner is a trained ResNet, and how do you know that there isn't also a search-based consequentialist planner inside each single run through the ResNet?"

Admittedly, I can't prove that there isn't. I suspect that there isn't, because there seems to be no incentive for that (there's already a search-based consequentialist planner!), and also because I don't think ResNets are up to such a complicated task.
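To make the distinction concrete, here's a toy sketch (purely illustrative; AlphaGo actually uses MCTS and a real trained network, whereas here the search is a trivial depth-limited lookahead over a made-up game, and every name is something I just invented). The point is that the planner is ordinary hand-written source code, and the learned network is just a subroutine it calls.

```python
# Toy sketch of "a human-created optimizer whose inner loop is a learned
# network" (illustration only, not AlphaGo's actual code or architecture).
import random

def toy_value_net(state):
    """Stand-in for a trained value network: maps a state to a scalar score."""
    return sum(state) + 0.01 * random.random()

def legal_moves(state):
    """Toy 'game': a move increments one coordinate of the state."""
    return list(range(len(state)))

def apply_move(state, move):
    new_state = list(state)
    new_state[move] += 1
    return new_state

def human_written_planner(state, depth=2):
    """Hand-coded lookahead search. The 'optimizer' is this ordinary loop
    that a programmer wrote; the network is just a function it calls."""
    if depth == 0:
        return None, toy_value_net(state)
    best_move, best_value = None, float("-inf")
    for move in legal_moves(state):
        _, value = human_written_planner(apply_move(state, move), depth - 1)
        if value > best_value:
            best_move, best_value = move, value
    return best_move, best_value

move, value = human_written_planner([0, 0, 0])
print(f"planner chose move {move} with estimated value {value:.2f}")
```

The "optimizer" here is the hand-written search loop; whether there's additionally an optimizer hiding inside the network's forward pass is the separate question I'm expressing skepticism about above.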

AI timelines and theoretical understanding of deep learning

I find most justifications and arguments made in favor of a timeline of less than 50 years to be rather unconvincing. 

If we don't have convincing evidence in favor of a timeline <50 years, and we also don't have convincing evidence in favor of a timeline ≥50 years, then we just have to say that this is a question on which we don't have convincing evidence of anything in particular. But we still have to take whatever evidence we have and make the best decisions we can. ¯\_(ツ)_/¯ 

(You don't say this explicitly but your wording kinda implies that ≥50 years is the default, and we need convincing evidence to change our mind away from that default. If so, I would ask why we should take ≥50 years to be the default. Or sorry if I'm putting words in your mouth.)

I am simply not able to understand why we are significantly closer to AGI today than we were in 1950s

Lots of ingredients go into AGI, including (1) algorithms, (2) lots of inexpensive chips that can do lots of calculations per second, (3) technology for fast communication between these chips, (4) infrastructure for managing large jobs on compute clusters, (5) frameworks and expertise in parallelizing algorithms, (6) general willingness to spend millions of dollars and roll custom ASICs to run a learning algorithm, (7) coding and debugging tools and optimizing compilers, etc. Even if you believe that we've made no progress whatsoever on algorithms since the 1950s, we've made massive progress in the other categories. I think that alone puts us "significantly closer to AGI today than we were in the 1950s": once we get the algorithms, at least everything else will be ready to go, and that wasn't true in the 1950s, right?

But I would also strongly disagree with the idea that we've made no progress whatsoever on algorithms since the 1950s. Even if you think that GPT-3 and AlphaGo have absolutely nothing whatsoever to do with AGI algorithms (which strikes me as an implausibly strong statement, although I would endorse much weaker versions of that statement), that's far from the only strand of research in AI, let alone neuroscience. For example, there's an (IMO plausible) argument that PGMs and causal diagrams will be more important to AGI than deep neural networks are. But that would still imply that we've learned AGI-relevant things about algorithms since the 1950s. Or as another example, there's an (IMO misleading) argument that the brain is horrifically complicated and we still have centuries of work ahead of us in understanding how it works. But even people who strongly endorse that claim wouldn't also say that we've made "no progress whatsoever" in understanding brain algorithms since the 1950s.

Sorry if I'm misunderstanding.

isn't there an infinite degree of freedom associated with a continuous function?

I'm a bit confused by this; are you saying that the only possible AGI algorithm is "the exact algorithm that the human brain runs"? The brain is wired up by a finite number of genes, right?

A mesa-optimization perspective on AI valence and moral patienthood

most contemporary progress on AI happens by running base-optimizers which could support mesa-optimization

GPT-3 is of that form, but AlphaGo/MuZero isn't (I would argue).

I'm not sure how to settle whether your statement about "most contemporary progress" is right or wrong. I guess we could count how many papers use model-free RL vs model-based RL, or something? Well anyway, given that I haven't done anything like that, I wouldn't feel comfortable making any confident statement here. Of course you may know more than me! :-)

If we forget about "contemporary progress" and focus on "path to AGI", I have a post arguing against what (I think) you're implying at Against evolution as an analogy for how humans will create AGI, for what it's worth.

Ideally we'd want a method for identifying valence which is more mechanistic than mine. In the sense that it lets you identify valence in a system just by looking inside the system without looking at how it was made.

Yeah I dunno, I have some general thoughts about what valence looks like in the vertebrate brain (e.g. this is related, and this) but I'm still fuzzy in places and am not ready to offer any nice buttoned-up theory. "Valence in arbitrary algorithms" is obviously even harder by far.  :-)

AI timelines and theoretical understanding of deep learning

Have you read https://www.cold-takes.com/where-ai-forecasting-stands-today/ ?

I do agree that there are many good reasons to think that AI practitioners are not AI forecasting experts, such as the fact that they're, um, obviously not—they generally have no training in it and have spent almost no time on it, and indeed they give very different answers to seemingly-equivalent timelines questions phrased differently. This is a reason to discount the timelines that come from AI practitioner surveys, in favor of whatever other forecasting methods / heuristics you can come up with. It's not per se a reason to think "definitely no AGI in the next 50 years".

Well, maybe I should just ask: What probability would you assign to the statement "50 years from today, we will have AGI"? A couple examples:

  • If you think the probability is <90%, and your intention here is to argue against people who think it should be >90%, well I would join you in arguing against those people too. This kind of technological forecasting is very hard and we should all be pretty humble & uncertain here. (Incidentally, if this is who you're arguing against, I bet that you're arguing against fewer people than you imagine.)
  • If you think the probability is <10%, and your intention here is to argue against people who think it should be >10%, then that's quite a different matter, and I would strongly disagree with you, and I would be very curious how you came to be so confident. I mean, a lot can happen in 50 years, right? What's the argument?

A mesa-optimization perspective on AI valence and moral patienthood

Let's say a human writes code more-or-less equivalent to the evolved "code" in the human genome. Presumably the resulting human-brain-like algorithm would have valence, right? But it's not a mesa-optimizer, it's just an optimizer. Unless you want to say that the human programmers are the base optimizer? But if you say that, well, every optimization algorithm known to humanity would become a "mesa-optimizer", since they tend to be implemented by human programmers, right? So that would entail the term "mesa-optimizer" kinda losing all meaning, I think. Sorry if I'm misunderstanding.

It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

Addendum: In the other direction, one could point out that the authors were searching for "an approximation of an approximation of a neuron", not "an approximation of a neuron". (insight stolen from here.) Their ground truth was a fancier neuron model, not a real neuron. Even the fancier model is a simplification of real life. For example, if I recall correctly, neurons have been observed to do funny things like store state variables via changes in gene expression. Even the fancier model wouldn't capture that. As in my parent comment, I think these kinds of things are highly relevant to simulating worms, and not terribly relevant to reverse-engineering the algorithms underlying human intelligence.

It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

It's possible much of that supposed additional complexity isn't useful

Yup! That's where I'd put my money.

It's a foregone conclusion that a real-world system has tons of complexity that is not related to the useful functions that the system performs. Consider, for example, the silicon transistors that comprise digital chips: "the useful function that they perform" is a little story involving words like "ON" and "OFF", but "the real-world transistor" needs three equations involving 22 parameters, to a first approximation!

By the same token, my favorite paper on the algorithmic role of dendritic computation has them basically implementing a simple set of ANDs and ORs on incoming signals. It's quite likely that dendrites do other things too besides what's in that one paper, but I think that example is suggestive.
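To spell out the kind of thing I mean, here's my own toy sketch of a "dendrites as ANDs and ORs" picture (this is just an illustration I'm making up, not the actual model from that paper).

```python
# Toy sketch (my illustration, not the paper's model): each dendritic branch
# "fires" only if all of its inputs are active (AND), and the soma fires if
# any branch fires (OR).

def dendritic_branch_fires(branch_inputs):
    """AND over one branch's incoming signals."""
    return all(branch_inputs)

def neuron_fires(branches):
    """OR over the branch outputs at the soma."""
    return any(dendritic_branch_fires(b) for b in branches)

# Example: two branches; only the first has all of its inputs active.
print(neuron_fires([[True, True], [True, False]]))  # True
```

The point being that the useful computation can be vastly simpler than the biophysics implementing it.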

Caveat: I'm mainly thinking of the complexity of understanding the neuronal algorithms involved in "human intelligence" (e.g. common sense, science, language, etc.), which (I claim) are mainly in the cortex and thalamus. I think those algorithms need to be built out of really specific and legible operations, and such operations are unlikely to line up with the full complexity of the input-output behavior of neurons. I think the claim "the useful function that a neuron performs is simpler than the neuron itself" is always true, but it's very strongly true for "human intelligence"-related algorithms, whereas it's less true in other contexts, including probably some brainstem circuits, and the neurons in microscopic worms. It seems to me that microscopic worms have so few neurons that they have to squeeze useful functionality out of every squiggle in their neurons' input-output relations. And moreover, here we're not talking about massive intricate beautifully-orchestrated learning algorithms, but rather things like "do this behavior a bit less often when the temperature is low" etc. See my post Building brain-inspired AGI is infinitely easier than understanding the brain for more discussion kinda related to this.

How to get more academics enthusiastic about doing AI Safety research?

See here: the first post is a video of a research meeting where he talks dismissively about Stuart Russell's argument, and then the ensuing forum discussion features a lot of posts by me trying to sell everyone on AI risk :-P

(Other context here.)
