postlibertarian

I don't follow your point about infosec. The summary at the RAND link seems to argue that the case against crypto is that it may not be widespread enough to be a good money-laundering channel, not that laundering won't work. Or maybe that's what you're saying? So sure, I agree the NSA will continue to target terrorist groups... but they can't do it through Monero or Zcash! There just isn't enough information leakage. But maybe your point is just that there are enough other attack surfaces for intelligence services to keep targeting them?

(Edit: just saw you are the author, ha! So I took a few more minutes to read, and honestly I'm not sure we disagree. I understand Bitcoin can be somewhat deanonymized, but there are lots of easy ways to make that harder, and Monero and Zcash seem very private. My point is more that this contributes additional risk overall, and we already see examples of this even without, e.g., Monero being widely accepted.)
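For concreteness on the "somewhat deanonymized" point: transparent chains like Bitcoin's leak structure that chain-analysis firms exploit, the classic example being the common-input-ownership heuristic. Here's a minimal sketch of that heuristic -- my own illustration with made-up transactions, not anything from your post -- while Monero and shielded Zcash simply don't expose the inputs this relies on:

```python
# Sketch of the common-input-ownership heuristic: addresses that
# co-sign inputs of the same transaction are assumed to share an
# owner. Transactions below are hypothetical toy data; real analysis
# would parse actual blockchain transactions.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def cluster_addresses(transactions):
    """Group input addresses that ever co-spend in one transaction."""
    uf = UnionFind()
    for tx in transactions:
        inputs = tx["inputs"]
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)
    clusters = {}
    for addr in uf.parent:
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

# Toy example: addr_b co-spends with both addr_a and addr_d,
# so all three collapse into one presumed owner.
txs = [
    {"inputs": ["addr_a", "addr_b"], "outputs": ["addr_c"]},
    {"inputs": ["addr_b", "addr_d"], "outputs": ["addr_e"]},
]
print(cluster_addresses(txs))  # [{'addr_a', 'addr_b', 'addr_d'}]
```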

I suppose that is something of a rebuttal to my thesis as written, which was perhaps too provocative. What I mean to say is that *some funding will leak through*: stopping all illegal funding through cryptocurrencies is an impossible task (though perhaps that's a weak point, since you could say similar things about cash). As evidence, Hamas already uses cryptocurrencies to route around sanctions and regulations, as does North Korea. My concern is that you only need one crazy terrorist group to make a superflu, and I think cryptocurrencies could make that more likely.

That means they will continue to run to embrace KYC/AML regulation

Hmm, this doesn't make sense to me. Bitcoin miners don't have any KYC requirements and don't track that information. That's why people were annoyed with the Biden infrastructure bill: its broad "broker" definition appeared to change the rules by imposing reporting requirements miners can't realistically meet, effectively making mining illegal in the US (obviously lots of regulatory guidance is still to be issued, etc.). Coin Center covered a lot of that; here and here are probably good places to start. But yeah, a quick five-minute check turned up many Bitcoin mining pools, none of which require KYC. Maybe you could elaborate on what you mean here?

Also, this isn't particularly relevant, but I find it interesting (and controversial!) so I'll dive in a bit: I maintain that nodes (that is, users) determine consensus in Bitcoin, not hashpower. If you take 90% of Bitcoin's hashpower and start breaking consensus rules, you can mine blocks, but no one will accept them. It would be chaotic for sure, and of course it would result in a hard fork, but the main chain would continue at 10% of the hashpower; all the exchanges and users would simply ignore the other chain because its blocks break consensus rules. Blocks would be slow for a while, but it wouldn't be a huge problem, and the 90% of hashpower would lose all the resources they sank into a forked chain no one uses. We even saw this play out in 2017 when Bitcoin Cash hard forked with a bunch of hashpower: the main Bitcoin chain kept going, because most nodes stayed with it regardless of where hashpower went.
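To make the nodes-vs-hashpower claim concrete, here's a toy model -- my own drastic simplification with a stand-in consensus rule, nothing like real Bitcoin Core code -- of a node that checks validity first and only then follows the most-work chain:

```python
# Toy model of why hashpower can't override consensus rules: a node
# discards rule-breaking blocks no matter how much work they carry,
# and only compares accumulated work among fully valid chains.
# The single rule and block fields here are hypothetical stand-ins.

MAX_SUBSIDY = 6.25  # stand-in consensus rule: max new coins per block

def is_valid(block):
    """A node's consensus check; real nodes check far more than this."""
    return block["subsidy"] <= MAX_SUBSIDY

def accept_chain(chains):
    """Among fully valid chains, follow the one with the most work."""
    valid = [c for c in chains if all(is_valid(b) for b in c)]
    return max(valid, key=lambda c: sum(b["work"] for b in c))

honest_chain = [{"subsidy": 6.25, "work": 10}]     # 10% of hashpower
cheating_chain = [{"subsidy": 100.0, "work": 90}]  # 90%, but invalid
print(accept_chain([honest_chain, cheating_chain]))  # honest chain wins
```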

I have a lot of thoughts but not a lot of time, so apologies if this is a bit scatterbrained. 

I've read your blog, Roodman's blog post from last year, and a lot of Roodman's report. I see this line of thinking the following way:

Some EAs/rationalists/AI alignment groups believe that AI could be transformative because AI itself is unlike anything that has come before (I mostly share this view). Your and Roodman's line of inquiry is a check on this view from an "outside view" perspective, using long-run economic growth to make sure there is more supporting the transformative-AI idea than just "AI is totally different from anything before" -- the inside view.

This could be particularly useful for AI timelines, and perhaps also for convincing skeptics of transformative AI that the idea is worth considering.

The big problem, then, is that economic growth over the last century-plus has proceeded at a fairly constant rate; at the very least, it's certainly not accelerating.

So I completely agree with your assessment of Roodman's model. 

I actually wrote about this in a blog post earlier this year. I'm interested in the question of whether we should spend any time thinking about economic growth as a major policy outcome; if transformative AI is very close, then getting US GDP growth from 1.5% to 3% is fairly unimportant.

If I'm an AI skeptic, I don't think Roodman's model convinces me of much. It can't really rely on GWP data from the last century, because that data doesn't fit the model, so the entirety of the argument rests on GWP estimates going back 10,000 years. And the best guesses of late-20th-century economists about how to measure "gross production" in 5000 BC just seem really shaky.

So, yes, it seems really unconvincing to derive AI timelines from this data.
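For intuition on what that model family actually implies, here's a minimal sketch of deterministic superexponential growth, dY/dt = a*Y^(1+b) with b > 0, which blows up at a finite time t* = Y0^(-b)/(a*b). Roodman's model is a stochastic generalization of this; all parameter values below are invented purely for illustration:

```python
# Deterministic caricature of hyperbolic growth: dY/dt = a * Y**(1+b)
# with b > 0 reaches infinity in finite time. Parameters are made up.

def hyperbolic_path(y0, a, b, dt=0.01, t_max=200.0):
    """Euler-integrate dY/dt = a*Y**(1+b); stop once Y explodes."""
    t, y, path = 0.0, y0, []
    while t < t_max and y < 1e12:
        path.append((t, y))
        y += a * y ** (1 + b) * dt
        t += dt
    return path

y0, a, b = 1.0, 0.02, 0.5
t_star = y0 ** (-b) / (a * b)  # closed-form singularity time = 100.0
path = hyperbolic_path(y0, a, b)
print(f"analytic singularity near t* = {t_star:.1f}")
print(f"simulation exploded by t = {path[-1][0]:.1f}")
```

The point of the sketch: any fit in this family forces a finite-time singularity, so the timeline you read off is extremely sensitive to exactly those shaky ancient data points.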

 

Also, on endogenous growth models -- you don't exactly mention this in your post, but what really jumped out at me is the point that around ~1880, technological progress started making people richer rather than just producing population growth. But then the next point seems very clear: there has been tons of population growth since 1880, and yet growth rates are nowhere near 4x the 1880 rate despite there being roughly 4-5x the population. The "more people -> more ideas" story may or may not be true, but it hasn't translated into more growth.

So if AI is exciting because AIs could start expanding the number of "people" or agents coming up with ideas, why aren't we seeing huge growth spurts now?
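A quick back-of-the-envelope version of that tension, with rounded population figures and stylized growth rates of my own choosing:

```python
# Naive check on "more people -> more ideas -> more growth": if
# frontier growth scaled linearly with the number of potential
# idea-generators, today's growth should be ~5x the 1880 rate.
# Population figures are rounded; growth rates are stylized guesses.

pop_1880, pop_2020 = 1.5e9, 7.8e9   # world population, roughly
growth_1880 = 0.015                  # ~1.5%/yr, stylized 1880 rate
predicted_2020 = growth_1880 * (pop_2020 / pop_1880)
observed_2020 = 0.02                 # ~2%/yr, stylized modern rate

print(f"naive 'ideas scale with people' prediction: {predicted_2020:.1%}/yr")
print(f"observed frontier growth:                   {observed_2020:.1%}/yr")
# ~7.8%/yr predicted vs ~2%/yr observed: the naive scaling fails badly
```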

OK, final thing: I think the question of GDP measurement is a big deal here. GDP deflators determine what counts as real "economic growth" as opposed to nominal price changes, but deflators don't really know what to do with new products that previously didn't exist. What was the "price" of an iPhone in 2000? Infinity? Could this help recover Roodman's model? If the ideas being produced end up as new products that never existed before, should GDP deflators be "pricing" those replacements as massively cheaper, thus increasing the resulting "real" growth rate?
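Here's a toy two-period example of those mechanics. All numbers are invented, and the "reservation price" adjustment is one textbook approach to new goods (a Hicksian virtual price at which demand would be zero), not a claim about how statistical agencies actually compute deflators:

```python
# Toy two-period economy showing how the treatment of a brand-new
# good changes measured real growth. All numbers are invented.

# Period 0: only good A exists. Period 1: A plus a new good B.
p_a0, q_a0 = 10.0, 100   # price/quantity of A in period 0
p_a1, q_a1 = 10.0, 100   # A unchanged in period 1
p_b1, q_b1 = 20.0, 50    # new good B appears in period 1

nominal_0 = p_a0 * q_a0                # 1000
nominal_1 = p_a1 * q_a1 + p_b1 * q_b1  # 2000

# Matched-model deflator: only goods present in both periods count,
# so measured inflation is 0% and all nominal growth counts as real.
deflator_matched = p_a1 / p_a0  # 1.0
real_growth_matched = nominal_1 / deflator_matched / nominal_0 - 1

# New-goods adjustment: treat B as having fallen from a hypothetical
# reservation price (where demand would be zero) down to p_b1.
reservation_price_b = 60.0  # invented "period-0 price" of B
share_b = (p_b1 * q_b1) / nominal_1
price_relative_b = p_b1 / reservation_price_b  # B got ~3x cheaper
deflator_adjusted = (1 - share_b) * (p_a1 / p_a0) + share_b * price_relative_b
real_growth_adjusted = nominal_1 / deflator_adjusted / nominal_0 - 1

print(f"matched-model real growth:  {real_growth_matched:.0%}")   # 100%
print(f"new-goods-adjusted growth:  {real_growth_adjusted:.0%}")  # 200%
```

Under the invented numbers, treating the new good as a huge effective price decline doubles measured real growth, which is the direction the question above is pointing at.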

It certainly seems plausible to me, but I'm not sure what the practical relevance is. Would this convince people that transformative AI is a possibility? Would it give us better timelines? It seems like we're just inventing our own growth model again and then declaring that it shows an "outside view" on which transformative AI is a possibility. That seems unlikely to convince skeptics, but perhaps the critique of GDP calculation alone is worth articulating broadly before making any claims about AI.