Matthew_Barnett

1348 karma · Joined Nov 2017

Comments (102)

Algorithmic progress on ImageNet seems to effectively halve compute requirements every 4 to 25 months (Erdil and Besiroglu 2022); assume that the doubling time is 50% longer for transformers.

I think it's important not to take the trend in algorithmic progress too literally. At the moment, we only really know the rate for computer vision, which might be very different from the rate for other tasks. The confidence interval is also quite wide, as you mentioned (the 5th percentile is 4 months and the 95th percentile is 25 months). And algorithmic progress is plausibly driven by increasing algorithmic experimentation over time, which might become bottlenecked once either the relevant pool of research talent is exhausted or we reach hardware constraints. For these reasons, I have wide uncertainty regarding the rate of general algorithmic progress in the future.

In my experience, fast algorithmic progress is often the component that yields short timelines in compute-centric models. And yet, both the rate and the mechanism behind algorithmic progress are very poorly understood. Extrapolating this rate naively gives a false impression of high confidence about the future, in my opinion. Assuming that the rate is exogenous gives the arguably false impression that we can't do much to change it. I would be very careful before interpreting these results.
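To make that concrete, here's a minimal sketch of how sensitive naive extrapolation is to the halving time. The 4- and 25-month figures are the percentiles quoted above; the 10-year horizon and the assumption that the rate stays constant are purely illustrative choices of mine:

```python
# Minimal sketch: how much the implied compute reduction varies across the
# quoted 4-to-25-month halving-time range. The 10-year horizon and the
# constant-rate assumption are illustrative, not claims from the post above.

def compute_reduction(halving_time_months: float, horizon_months: float) -> float:
    """Factor by which compute requirements fall if they halve every
    `halving_time_months`, with the rate held constant over `horizon_months`."""
    return 2 ** (horizon_months / halving_time_months)

horizon = 120  # 10 years, an arbitrary illustrative window
for halving_time in (4, 25):  # 5th and 95th percentile halving times
    factor = compute_reduction(halving_time, horizon)
    print(f"halving time {halving_time:>2} months -> ~{factor:,.0f}x less compute needed")
```

Under those assumptions, the implied reduction in compute requirements ranges from roughly 28x to roughly a billion-fold over the same decade, which is part of why I'm reluctant to treat the point estimate as settled.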

You're right that "forecasting" might not be the right word. Informed speculation might be more accurate, but that might confuse people, since there's already plenty of work people call "AI forecasting" that looks similar to what I'm talking about.

I also think that there are a lot of ways in which AI forecasting can be done in the sense you described, by "generating fairly precise numerical estimates through some combination of finding a reference class, establishing a base rate, and applying a beautiful sparkle of intuition". For example, if you look at Epoch's website, you can find work that follows that methodology, e.g. here.

I also agree that climate change researchers have much more access to historical data and, in some ways, the problem they're working on is easier than the problem I'm trying to work on. I still think that AI forecasting and climate forecasting are conceptually similar, however. And in fact, to the extent that AI plays a large role in shaping the future of life on Earth, climate forecasts should probably take AI into account. So, these problems are interrelated.

I don't personally have a strong sense as to what opportunities are available. I think Epoch and AI Impacts are both great organizations in this space. However, I think this type of work can plausibly also come from anywhere that thinks rigorously about the future of AI.

Average differences in traits based on age are sometimes quite large, enough that the value of using the prejudice for making predictions can exceed the unfairness downsides of stereotyping people.

Even if there were no average differences in maturity between age groups, it still might be rational to prefer older people for important roles like president or CEO for pure credentialing reasons. The reason is simple: 23-year-olds have had less time to prove their maturity. Even if they were highly mature, their track record would be brief, and thus not conclusive.

Most young people today (even most nerds) know that what Bostrom said (even though it was in the context of giving an example of what you shouldn't say) would elicit strong negative reactions, given how much media attention these things receive. I assume this was less obvious to nerds in the 1990s (though it was probably fairly predictable even back then).

It is perhaps important to note that in the original email, Bostrom quite directly says that he is aware of the social norm about not saying what he said. In fact, that was one of the main points of the email: that saying something true in a blunt manner about a controversial topic is likely to be viewed as offensive. If Bostrom learned anything -- and indeed, he apologized within 24 hours -- it was that saying something like that can be inadvisable even among friends.

In general, I don't think old people have a stronger understanding of social norms than younger people. Old people will of course have more experience to draw from, and their mannerisms will have gone through more trial and error. In that sense, I agree: old people are often wiser. But the frontier of cultural norms is generally driven by young people, and old people are often left out of that conversation.

It is not uncommon to hear young people say they're shocked by their older relatives who are ignorant or only superficially aware of social norms that became widespread in the last ten years, e.g. stating one's pronouns while introducing oneself. To the extent that we are judging people on their understanding of current social norms, we should probably hold young adults to the strictest standards of any group.

Bostrom was essentially still a kid (age ~23) when he wrote the 1996 email.

I agree with Jason. I don't think being 23 years old means that you're "essentially still a kid". 

If we want to judge young adults for their positive achievements, it makes sense to hold a symmetric attitude and judge them for their mistakes as well (though one could take the perspective that we shouldn't judge anyone for making mistakes, but that's a separate argument).

Fluid intelligence is generally considered to peak between one's late teens and mid-20s, and the majority of measured cognitive abilities either decline or rise only very slightly after the age of 23. If we use cognitive ability as the marker of adulthood, rather than life experience, one could even make the case that 23-year-olds are more "adult" than any other age group.

(Though of course I might be biased, because I'm 23 years old right now.)

I mean, it plausibly did cause net harm to the Bahamas in this case, even if that wasn't what people expected.

In comparison, tech stocks might be valued at like ~$6T?

According to companiesmarketcap.com, Apple, Microsoft, Google and Amazon alone are worth $6 trillion. I am not familiar with an estimate of the total market size, but I suspect it is substantially higher than $6 trillion?

At the time Cinera's post was published, the most upvoted post on the EA Forum about the controversy was this post, which explicitly said that Bostrom's apology was insufficient:

His apology fails badly to fully take responsibility or display an understanding of the harm the views expressed represent.

Answer by Matthew_Barnett · Dec 30, 2022

I disagree with the other commenters here who say that tax evasion is theft. I think theft requires taking someone else's property, and I don't think the government generally has a strong moral claim to your property.

That said, the penalty for deliberate tax evasion can be severe. You can easily find people convicted of tax evasion who face years of prison and large fines. I believe this downside outweighs the potential benefit of donating your money instead.
