David Mathers🔸
Because you are so strongly pushing a particular political perspective on Twitter (roughly, "tech right = good"), I worry that your bounties are mostly just you paying people to say things you already believe about those topics. Insofar as you mean to persuade people on the left/centre of the community to change their views on these topics, maybe it would be better to do something like making the bounties conditional on people who disagree with your takes finding that the investigations move their views in your direction.

I also find the use of the phrase "such controversial criminal justice policies" a bit rhetorically dark-artsy, and mildly incompatible with your calls for high intellectual integrity. It implies that a strong reason to be suspicious of Open Phil's actions has been given. But you don't really think the mere fact that a political intervention on an emotive, polarized topic is controversial is particularly informative about it. Everything on that sort of topic is controversial, including the negation of the Open Phil view on the US incarceration rate. The phrase would be ok if you were taking the very general view that we should be agnostic about all political issues where smart, informed people disagree. But you're not doing that; you take lots of political stances in the piece: de-regulatory libertarianism, the claim that environmentalism has been net negative, and Dominic Cummings himself can all accurately be described as "highly controversial".

 

Maybe I am making a mountain out of a molehill here. But I feel like rationalists themselves often catastrophise fairly minor slips into dark arts like this, treating them as strong evidence that someone lacks integrity. (I wouldn't say anything as strong as that myself; everyone does this kind of thing sometimes.) And I feel like if the NYT referred to AI safety as "tied to the controversial rationalist community", or to "highly controversial blogger Scott Alexander", you and other rationalists would be fairly unimpressed.

More substantively (maybe I should have started with this, as it is a more important point), I think it is extremely easy to imagine the left/Democrat wing of AI safety becoming concerned with AI concentrating power, if it hasn't already. The entire techlash, anti-"surveillance capitalism", "the algorithms push extremism" strand of left-leaning tech criticism is, ostensibly at least, about the fact that a very small number of very big companies have acquired massive amounts of unaccountable power to shape political and economic outcomes. More generally, the American left has, I keep reading, been on a big anti-trust kick recently. The explicit point of anti-trust is to break up concentrations of power. (Regardless of whether you think it actually does that, that is how its proponents perceive it. They also tend to see it as "pro-market"; remember that Warren used to be a libertarian Republican before she was on the left.) In fact, Lina Khan's desire to do anti-trust stuff to big tech firms was probably one cause of Silicon Valley's rightward shift.

It is true that most people with these sorts of views are currently very hostile to even the left wing of AI safety, but lack of concern about X-risk from AI isn't the same thing as lack of concern about AI concentrating power. And eventually the power of AI will be so obvious that even these people will have to concede that it is not just fancy autocorrect.

It is not true that all people with these sorts of concerns care only about private power and not the state, either. Dislike of Palantir's nat sec ties is a big theme for a lot of these people, and many of them don't like the nat sec-y bits of the state very much either. Also, a relatively prominent part of the left-wing critique of DOGE is the idea that it is the beginning of an attempt by Elon to seize personal, effective control of large parts of the US federal bureaucracy, by taking over the boring bits of the bureaucracy that actually move money around. In my view people are correct to be skeptical that Musk will ultimately choose decentralising power over accumulating it for himself.

Now, strictly speaking, none of this is inconsistent with your claim that the left wing of AI safety lacks concern about concentration of power, since virtually none of these anti-tech people are safetyists. But I think it still matters for predicting how much the left wing of safety will actually end up concentrating power, because future co-operation between them and the safetyists against the tech right and the big AI companies is a distinct possibility.

Section 4 is completely over my head, I have to confess.

Edit: But the abstract gives me what I wanted to know :) : "To quantify the capabilities of AI systems in terms of human capabilities, we propose a new metric: 50%-task-completion time horizon. This is the time humans typically take to complete tasks that AI models can complete with 50% success rate"

It's actually the majority view amongst academics who directly study the issue. (I'm probably an anti-realist, though.) https://survey2020.philpeople.org/survey/results/486

I don't quite get what that means. Do they really take exactly the same amount of time on all tasks for which they have the same success rate? Sorry, maybe I am being annoying here and this is all well explained in the linked post. But I am trying to figure out how much this creates the illusion that progress on the metric means a model will be able to handle all tasks that it takes normal human workers about that amount of time to do, when it really means something quite different.
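My (possibly wrong) guess is that the horizon isn't defined by tasks literally taking identical times, but is estimated from a fitted curve: record the model's successes and failures on tasks of varying human completion time, fit a success-probability curve, and read off the task length where that curve crosses 50%. To be clear, this is my own reconstruction rather than anything from the linked post, and the numbers and the scikit-learn fit below are purely illustrative.

```python
# Sketch of how I imagine a "50%-task-completion time horizon" could be estimated
# (my own reconstruction, not taken from the paper): fit success probability
# against log human completion time, then find where the curve crosses 50%.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: how long each task takes humans (minutes), and whether the model succeeded.
human_minutes = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240, 480])
model_succeeded = np.array([1, 1, 1, 1, 1, 0, 1, 0, 0, 0])

# Logistic fit of success probability against log task length.
X = np.log(human_minutes).reshape(-1, 1)
fit = LogisticRegression().fit(X, model_succeeded)

# The 50% horizon is the task length where the fitted log-odds hit zero:
# intercept + coef * log(t) = 0  =>  t = exp(-intercept / coef).
horizon_minutes = np.exp(-fit.intercept_[0] / fit.coef_[0][0])
print(f"Estimated 50% time horizon: ~{horizon_minutes:.0f} human-minutes")
```

If that is roughly right, then hitting a given horizon means something like "about 50% success on the measured distribution of tasks around that length", not "can handle every task of that length", which is exactly the ambiguity I am trying to pin down.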

"I don't think that, for a given person, existing can be better or worse than not existing. " 

Presumably, even given this, you wouldn't create a person who would spend their entire life in terrible agony, begging for death. If that can be a bad thing to do even though existing can't be worse than not existing, then why can't it be a good thing to create happy people, even though existing can't be better than not existing?

Is the point at which models hit a given length of time on the x-axis of the graph meant to represent the point where they can do all tasks of that length that a normal knowledge worker could perform on a computer? The vast majority of knowledge-worker tasks of that length? At least one task of that length? Some particular important subset of tasks of that length?

Morally, I am impressed that you are doing something that is in many ways socially awkward and uncomfortable because you think it is right.

BUT

I strongly object to you citing the Metaculus AGI question as significant evidence of AGI by 2030. I do not think that when people forecast that question, they are necessarily forecasting when AGI, as commonly understood or in the sense that's directly relevant to X-risk, will arrive. Yes, the title of the question mentions AGI. But if you look at the resolution criteria, all an AI model has to do in order to resolve the question 'yes' is pass a couple of benchmarks involving coding and general knowledge, put together a complicated model car, and imitate a human. None of that constitutes being AGI in the sense of "can replace any human knowledge worker in any job". For one thing, it doesn't involve any task that is carried out over a time span of days or weeks, and yet we know that memory and coherence over long time scales are things current models seem to be relatively bad at, compared to passing exam-style benchmarks. It also doesn't include any component that tests the ability of models to learn new tasks at human-like speed, which, again, seems to be an issue with current models. Now, maybe despite all this, it's actually the case that any model that can pass the benchmark will in fact be AGI in the sense of "can permanently replace almost any human knowledge worker", or at least will obviously be only 1-2 years of normal research progress away from that. But that is a highly substantive assumption in my view.

I know this is only one piece of evidence you cite, and maybe it isn't actually a significant driver of your timelines, but I still think it should have been left out. 

Yes. (Though I'm not saying this will happen, just that it could, and that is more significant than a short delay.) 
