
Quintin Pope

271 karma · Joined Nov 2021

Comments (9)

I think it's potentially misleading to talk about public opinion on AI exclusively in terms of US polling data, when we know the US is one of the most pessimistic countries in the world regarding AI, according to Ipsos polling. The figure below shows agreement with the statement "Products and services using artificial intelligence have more benefits than drawbacks", across different countries:

[Ipsos chart: agreement with the statement, by country]

This is especially true given the relatively small fraction of the world's population that the US and similarly pessimistic countries represent.

Re: "If we don't, someone will"

They (Meta) literally did do it. They open-sourced a GPT-3 clone called OPT. Its 175B-parameter version is the most powerful LM whose weights are publicly available. I have no idea why they released a system as bad as BlenderBot, but don't let their worst projects distort your impression of their best projects. They're 6 months behind DeepMind, not 6 years.

Nate / Eliezer / others I've seen arguing for a sharp left turn appeal to an evolution -> human capabilities analogy: evolution's outer optimization process built a much faster human inner optimization process whose capability gains vastly outstripped those evolution built into humans directly. They seem to expect a similar thing to happen with SGD creating some inner thing which is not SGD and which gains capabilities much faster than SGD can "insert" them into the AI. Then, just as human civilization exploded in capabilities over a tiny evolutionary timeframe, so too will AIs explode in capabilities over a tiny "SGD timeframe".

I think this is very wrong, and that "evolution -> human capabilities" is a very bad reference class for making predictions about "AI training -> AI capabilities". We don't train our AIs via an outer optimizer over possible inner learning processes, where each inner learning process is initialized from scratch, takes billions of inner learning steps before the outer optimization process takes a single step, and is then deleted after that single outer step. Obviously, such a "two layer" training process would experience a "sharp left turn" once each inner learner became capable of building on the progress made by previous inner learners (which happened in humans via culture / technological progress from one generation to the next).

However, this "sharp left turn" does not occur because the inner learning processes are inherently better / more foomy / etc. than the outer optimizer. It happens because you devoted billions of times more resources to the inner learning processes, but then deleted each inner learner after a short amount of time. Once the inner learning processes become capable enough to pass their knowledge along to their successors, you get what looks like a sharp left turn. But that sharp left turn only happens because the inner learners have found a kludgy workaround for the crippling flaw that they all get deleted shortly after initialization.
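To make the "two layer" picture concrete, here's a toy sketch (my own illustration with made-up numbers, not anyone's actual training setup) of an outer optimizer that takes one crude step per generation while each inner learner is trained from scratch and then thrown away:

```python
# Toy illustration of the "two layer" setup described above: the outer optimizer
# takes one tiny step per generation, while each inner learner is trained from
# scratch and then discarded, so its accumulated progress is wasted.
import random

def inner_learning(learning_rate: float, inner_steps: int = 1_000) -> float:
    """Train a fresh inner learner from scratch and return its final capability."""
    capability = 0.0
    for _ in range(inner_steps):
        capability += learning_rate * random.uniform(0.9, 1.1)  # inner progress
    return capability  # the learner itself is discarded after this returns

def outer_optimization(outer_steps: int = 100) -> float:
    """Evolution-like loop: one crude outer update per full inner lifetime."""
    learning_rate = 0.01
    for _ in range(outer_steps):
        score = inner_learning(learning_rate)          # ~1,000 inner steps buy...
        learning_rate *= 1.05 if score > 10 else 0.95  # ...one tiny outer step
    return learning_rate

print(outer_optimization())
```

Almost all the optimization power here is spent inside `inner_learning`, and almost all of it is wasted, which is the point.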

In my frame, we've already figured out and applied the "sharp left turn" to our AI systems, in that we don't waste our compute on massive amounts of incredibly inefficient neural architecture search or hyperparameter tuning[1]. We know that, for a given compute budget, the best way to spend it on capabilities is to train a single big model in accordance with the empirical scaling laws discovered in the Chinchilla paper, not to split the compute budget across millions of different training runs for vastly tinier models with slightly different architectures / training processes. The marginal return on architecture tweaking is much lower than the return to direct scaling.

(Also, we don't delete our AIs well before they're fully trained and start again from scratch using the same number of parameters. I feel a little silly to be emphasizing this point so often, but I think it really does get to the crux of the matter. Evolution's sharp left turn happened because evolution spent compute in a shockingly inefficient manner for increasing capabilities. Once you condition on this specific failure mode of evolution, there really is nothing else to be explained here, and no reason to suppose some general tendency towards "sharpness" in inner capability gains.)
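To make the compute-allocation point concrete, here's a rough sketch of the Chinchilla rule of thumb. It's my own toy calculation using the common approximations that training FLOPs ≈ 6·N·D and that the compute-optimal data budget is roughly 20 tokens per parameter; the paper's exact fitted coefficients differ slightly.

```python
# Rough sketch of the Chinchilla compute-optimal rule of thumb. Assumptions
# (approximations, not the paper's exact fitted coefficients): training FLOPs
# C ~= 6 * N * D, and the compute-optimal data budget is ~20 tokens/parameter.

def compute_optimal_split(flops_budget: float) -> tuple[float, float]:
    """Return (parameters N, training tokens D) for a given FLOPs budget."""
    # With C = 6 * N * D and D = 20 * N, we get C = 120 * N^2.
    n_params = (flops_budget / 120) ** 0.5
    n_tokens = 20 * n_params
    return n_params, n_tokens

# Example: a budget near Chinchilla's (~5.8e23 FLOPs) gives roughly the paper's
# 70B-parameter, 1.4T-token configuration.
n, d = compute_optimal_split(5.8e23)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```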

  1. ^

    It can be useful to do hyperparameter tuning on smaller versions of the model you're training. My point is that relatively little of your compute budget should go into such tweaking.

SensorLog is an app that lets you continuously record iPhone sensor data and stream it to a file or web server. You might use it as a convenient form of life logging. Presumably, resurrection is easier if the intelligence doing it has lots of information about your location, movements, environment, etc.
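If you wanted to capture such a stream on your own machine, the receiving side can be very simple. The sketch below assumes you've configured SensorLog to push newline-delimited records to a host and port you control; the exact configuration options and record format are assumptions on my part, so check the app's documentation.

```python
# Minimal sketch of a receiver for a streamed sensor log. Assumes the app is
# configured to push newline-delimited CSV/JSON records to this host and port
# (an assumption about the app's streaming setup, not a documented API).
# The server simply appends whatever arrives to a local log file.
import socketserver

LOG_PATH = "sensorlog_stream.txt"  # hypothetical output path

class AppendHandler(socketserver.StreamRequestHandler):
    def handle(self) -> None:
        with open(LOG_PATH, "ab") as f:
            for line in self.rfile:  # one sensor record per line
                f.write(line)

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 8080), AppendHandler) as server:
        server.serve_forever()
```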

I don’t know if the title was changed after this comment, but “Any feedback on my EAIF application?” initially made me think that you had an active EAIF application and wanted the EAIF grant makers to publicly comment on the ongoing application.

Maybe something like “Request: critique my EAIF application”?

People are currently working on doing exactly this. E.g., Adept is training language models to use external software. They’re aiming to build a “natural language interface” to various pieces of software.

See also: https://mobile.twitter.com/jluan/status/1519035169537093632
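For a sense of what a "natural language interface" to software looks like in its most stripped-down form, here's a toy sketch. It's my own illustration, not Adept's actual system; `parse_request` is a hard-coded stand-in for a language model that maps free-form text to a structured tool call.

```python
# Toy sketch of a natural-language interface to software: a model (stubbed out
# here) translates a request into a structured action, which is dispatched to
# an ordinary function. Not Adept's actual system.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Action:
    tool: str
    args: dict

def parse_request(text: str) -> Action:
    """Stand-in for a language model: map a request to a tool call."""
    if "spreadsheet" in text and "sum" in text:
        return Action(tool="spreadsheet_sum", args={"column": "revenue"})
    return Action(tool="noop", args={})

TOOLS: dict[str, Callable[..., str]] = {
    "spreadsheet_sum": lambda column: f"SUM({column}) inserted",
    "noop": lambda: "no action taken",
}

def run(text: str) -> str:
    action = parse_request(text)
    return TOOLS[action.tool](**action.args)

print(run("In my spreadsheet, sum the revenue column"))
```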

What if the man you’re talking to then shows that praying to the approaching god actually works?

What if the man shows that anyone who knows the appropriate rituals can pray to the god and get real benefits from doing so?

What if people have already made billions of dollars in profit from industrialized prayer?

What if the man conclusively shows that prayers to the god have been getting easier and more effective over time?

In such a case, you should treat the god’s approach very seriously. Without such proof, you should dismiss his claims. The difference is that gods are imaginary, and AI is real.

This is insufficiently meta. Consider that this very simple and vague payout scheme is probably not optimal for encouraging good bounty suggestions. I suggest going one level up and putting out a bounty for the optimal incentive structure of bounty bounties. A bounty bounty bounty, if you will.

(This is mostly a joke, but I'm not averse to getting paid if you actually decide to do it.)

Edit: now that I've thought about it more, something in this space is probably worthwhile. A "bounty bounty bounty" is, funnily enough, both too specific and too abstract. However, a general "bounty on optimal bounty schemes" may be very valuable. The optimal bounty payouts for different goals, how best to split bounties among multiple participants, how best to score proposals, etc. are all important questions for bounty construction, and a detailed investigation into them would be useful. A bounty to answer such questions makes sense.