
Lukas Finnveden

1413 karma · Joined Aug 2018

Bio

Research analyst at Open Philanthropy. All opinions are my own.

Sequences (1): Project ideas for making transformative AI go well, other than by working on alignment

Comments: 151

Topic contributions: 1
compared to MIRI people, or even someone like Christiano, you, or Joe Carlsmith probably have "low" estimates

Christiano says ~22% ("but you should treat these numbers as having 0.5 significant figures") without a time-bound; and Carlsmith says ">10%" (see bottom of abstract) by 2070. So no big difference there.

I'll hopefully soon make a follow-up post with somewhat more concrete projects that I think could be good. That might be helpful.

Are you more concerned that research won't have any important implications for anyone's actions, or that the people whose decisions ought to change as a result won't care about the research?

Similarly, 'Politics is the Mind-Killer' might be the rationalist idea that has aged worst - especially for its influences on EA.

What influence are you thinking about? The position argued in the essay seems pretty measured.

Politics is an important domain to which we should individually apply our rationality—but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational. [...]

I’m not saying that I think we should be apolitical, or even that we should adopt Wikipedia’s ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party.

I liked this recent interview with Mark Dybul who worked on PEPFAR from the start: https://www.statecraft.pub/p/saving-twenty-million-lives

One interesting contrast with the conclusion in this post is that Dybul thinks PEPFAR's success was a direct consequence of not involving too many people and departments early on: the negotiations would have been too drawn out, and too many parties would have tried to get pieces of control. So maybe a transparent process that embraced complexity wouldn't have achieved much in practice.

(At other points in the process he leaned further towards transparency than was standard, sharing a ton of information with Congress.)

FWIW, you can see more information, including some of the reasoning, on page 655 (the number printed in the PDF) / 659 (the number according to the PDF viewer's page search) of the report. (H/t Isabel.) See also page 214 for the definition of the question.

Some tidbits:

Experts started out much higher than superforecasters, but updated downwards after discussion. Superforecasters updated a bit upward, but less:

(The numbers on the chart's y-axis are in billions of dollars.)

This was surprising to me. I think the experts' predictions look too low even before updating, and look much worse after updating!

Here's the part of the report that discusses "arguments given for lower forecasts". (The footnotes contain quotes from people expressing those views.)

Arguments given for lower forecasts (2024: <$40m, 2030: <$110m, 2050: ⩽$200m)

● Training costs have been stable around $10m for the last few years.1326

● Current trend increases are not sustainable for many more years.1327 One team cited this AI Impacts blog post.

● Major companies are cutting costs.1328

● Increases in model size and complexity will be offset by a combination of falling compute costs, pre-training, and algorithmic improvements.1329

● Large language models will probably see most attention in the near future, and these are bottlenecked by availability of data, which will lead to smaller models and less compute.1330

● Not all experiments will be public, and it is possible that the most expensive experiments will not be public.1331

(This last bullet point seems irrelevant to me. The question doesn't specify that the experiments have to be public, and "In the absence of an authoritative source, the question will be resolved by a panel of experts.")

It's the crux between you and Ajeya, because you're relatively more in agreement on the other numbers. But I think that adopting the XPT numbers on these other variables would notably slow down your own timelines, because of the almost complete lack of increase in spending.

That said, if the forecasters agreed with your compute requirements, they would probably also forecast higher spending.

in terms of saving “disability-adjusted life years” or DALYs, "a case of HIV/AIDS can be prevented for $11, and a DALY gained for $1” by improving the safety of blood transfusions and distributing condoms

These numbers are wild compared to e.g. current GiveWell numbers. My guess would be that they're wrong, and if so, that this was a big part of why PEPFAR did comparatively better than expected. Or maybe they were significantly less scalable (measured in cost of marginal life saved as a function of lives saved so far) than PEPFAR.

If the numbers were right, and you could save more lives than PEPFAR for 100x less money (or 30x (?) less after taking into account some falls in cost), I'm not sure I buy that the political feasibility of PEPFAR was greater than the much cheaper ask (a priori). At least I get very sympathetic to the then-economists.

(But again, I'd guess those numbers were probably wrong or unscalable?)

Nice, gotcha.

Incidentally, as its central estimate for algorithmic improvement, the takeoff speeds model uses AI and Efficiency's ~1.7x per year, and then halves it to ~1.3x per year (because today's algorithmic progress might not generalize to TAI). If you're at 2x per year, then you should maybe increase the "returns to software" from 1.25 to ~3.5, which would cut the model's timelines by something like 3 years. (More on longer timelines, less on shorter timelines.)
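For what it's worth, here is a minimal sketch of the arithmetic (my own, not taken from the model's code), assuming the "halving" is done in log space, which is the reading that reproduces the ~1.3x figure:

```python
import math

# "AI and Efficiency" estimate: algorithmic efficiency improves ~1.7x per year.
measured_rate = 1.7

# Halving the rate of progress in log space means taking the square root of
# the annual multiplier; this reproduces the model's ~1.3x-per-year figure.
halved = math.sqrt(measured_rate)
print(f"{halved:.2f}x per year")  # ~1.30x

# For comparison, a 2x-per-year estimate is roughly 2.6 times as many
# doublings per year as the halved ~1.3x figure.
print(math.log(2.0) / math.log(halved))  # ~2.6
```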

Yeah sorry, I didn't mean to say this directly contradicted anything you said. It just felt like a good reference that might be helpful to you or other people reading the thread. (In retrospect, I should have said that and/or linked it in response to the mention in your top-level comment instead.)

(Also, personally, I do care about how much effort and selection is required to find good retrodictions like this, so in my book "I didn't look up the data on Google beforehand" is relevant info. But it would have been way more impressive if someone had been able to pull that off in 1890, and I agree this shouldn't be confused for that.)

Re "it was incorrect by an order of magnitude": that seems fine to me. If we could get that sort of precision for predicting TAI, that would be awesome and outperform any other prediction method I know about.
