Iyngkarran Kumar

69 karma · Joined

Comments (14)

Across all of this my impression is that, just like with Torres, there was little to no direct pushback

Strongly agree. I think the TESCREAL/e-acc movements badly mischaracterise the EA community with extremely poor, unsubstantiated arguments, but there doesn’t seem to be much response to this from the EA side. 

I think this is very much linked to playing a strong 'inside game' to access the halls of power and no 'outside game' to gain legitimacy for that use of power

What does this refer to? I'm not familiar. 

Other thoughts on this:

Publicly, the quietness from the EA side in response to TESCREAL/e-acc/etc. allegations is harming the community's image and what it stands for. 'Winning' the memetic war matters: if we don't, the world outside EA - which has many smart, influential people - ends up seeing the community as a doomer cult (in the case of AI safety), or assigns some equally damaging label that lets it quickly dismiss many of the arguments being made.

I think this is a case where the epistemic standards of the EA community work against it. Rigorous analysis, expressing second/third-order considerations, etc. are the norm for most writing on the forum. However, in places such as Twitter, these sorts of analyses aren't 'memetically fit'.[1]

So, I think we're in need of more pieces like the Time essay on Pausing AI - a no-punches-pulled sort of piece that gets across the seriousness of what we're claiming. I'd like to see more Twitter threads and op-eds that dismantle claims like "advancements in AI have solved its black-box nature" - ones that don't let clearly false claims like this see the light of day in serious public discourse.

  1. ^

    Don't get me wrong - epistemically rigorous work is great. But when responding to TESCREAL/e-acc 'critiques' that continuously hit below the belt, other tactics may be better. 

A core part of the longtermist project is making it very clear to people today that 21st century humanity is far from the peak of complex civilization. Imagine an inhabitant of a 16th-century medieval city looking at their civilization and thinking “This is it; this is civilization close to its epitome. Sure, we may build a few more castles over there, expand our army and conquer those nearby kingdoms, and develop a new way to breed ultra-fast horses, but I think the future will be like this, just bigger”. As citizens of the 21st century we’re in the position to see how wrong this would be, yet I think we’re prone to making a very similar type of error. 

To get past this error, a fun exercise is to try to explain the scale of 21st century civilization in terms of concepts that would be familiar to our 16th century friends. Then we can extrapolate this into the future to better intuit the scale of future civilisations. Here are two ways to do so:   

Military power: The United States military is the strongest armed force in the world today. How do we convey the power of such a force to citizens of the distant past? One way would be to ask them to consider their own military - foot soldiers, bowmen, cavalry, and all - and then ask how many such armies would be needed to rival the power of the modern-day US military. I'd guess that the combined armies of 100 medieval kingdoms would struggle to pose a challenge to the US military. The same goes for the 21st century: I expect the combined strength of 100 US militaries[1] to struggle to make a dent in the military power of future civilizations.

Infrastructure and engineering capability: Men and women of the distant past would view the humans of today as god-like engineers. Today, we build continent-spanning electric grids to power our homes and construct entire cities in a handful of years. How do we communicate this engineering prowess to our 16th-century counterparts? I'm no civil engineer, but I estimate that the largest state governments of today could rebuild the entire infrastructure of a medieval city in a handful of months if they tried. The same goes for the 21st century: I expect that the civilisations of the future will be able to rebuild the entirety of Earth's infrastructure - cities, power grids, factories, etc. - within a few months. To put that into context, imagine a civilisation that, starting in January, could rebuild London, Shanghai, New York, and every highway, airport, bridge, port, and dam by the time summer rolled around. That would certainly qualify it for the title of supercivilisation!

  1. ^

    Again, 100 is a rough guess - it could be more or less, potentially by orders of magnitude.

Thanks, this was an interesting post. I was shocked to learn about the ag-gag laws. The arguments in support of these laws just don't seem to stand up against the animal rights abuses that they allow to be swept under the rug. I'm surprised these managed to get passed!

What are your thoughts on the types of media coverage that should be directed at countries at various stages of economic development? I imagine that the type of media coverage we want to direct at well-off citizens in Western Europe is different from that we want to direct at those in semi-rural parts of China.

Appreciate the concreteness in the predictions!

poor at general reasoning (compared to humans)

What examples do you have in mind when you say this? (Not necessarily disagreeing - I'm just interested in the different interpretations of 'LLMs are poor at general reasoning'.)

I also think that LLM reasoning can be significantly boosted with scaffolding - i.e. most hard reasoning problems can be split up into a handful of easier reasoning problems; this can be done recursively until the LLM can solve a subproblem, and the full solution can then be built back up from the sub-solutions. So whilst scale might not get us to a level of general reasoning that qualifies as AGI, perhaps GPT-5 (or 6) plus scaffolding can.
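To make "scaffolding" concrete, here's a minimal sketch of the kind of recursive decomposition I have in mind. The `ask_llm` callable and the prompts are purely illustrative placeholders, not any particular framework's API:

```python
# A minimal sketch of recursive scaffolding: split a hard problem into easier
# subproblems, recurse until a subproblem is directly solvable, then build the
# full solution back up. `ask_llm` stands in for any chat-completion call.
from typing import Callable

def solve(problem: str, ask_llm: Callable[[str], str], depth: int = 0, max_depth: int = 3) -> str:
    # Base case: the problem looks small (or we've recursed deep enough),
    # so ask the model to solve it directly. The word-count heuristic is a toy.
    if len(problem.split()) < 50 or depth >= max_depth:
        return ask_llm(f"Solve this problem step by step:\n{problem}")

    # Otherwise, ask the model to split the problem into easier subproblems...
    plan = ask_llm(
        "Break the following problem into 2-4 smaller, independent subproblems, "
        f"one per line:\n{problem}"
    )
    subproblems = [line.strip() for line in plan.splitlines() if line.strip()]

    # ...solve each subproblem recursively...
    sub_solutions = [solve(sub, ask_llm, depth + 1, max_depth) for sub in subproblems]

    # ...then build the full answer back up from the sub-solutions.
    worked_parts = "\n\n".join(
        f"Subproblem: {s}\nSolution: {a}" for s, a in zip(subproblems, sub_solutions)
    )
    return ask_llm(
        "Using these solved subproblems, write a complete answer to the original "
        f"problem:\n{problem}\n\n{worked_parts}"
    )
```

Whether this actually buys much depends on how reliably the model can decompose problems and recombine partial answers - that's the empirical question.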

Some recent thoughts on two parameters that measure how technologically advanced a civilisation (civ) is. Feedback appreciated, especially pointers to related work.

Main ideas/questions: A civilisation's stock of knowledge (A) is related to, yet distinct from, its capital stock/Kardashev level (K). How can we model a civilization's stock of knowledge? And how intertwined are A and K? Is it possible to unlock all the truths of the universe whilst remaining a Kardashev Type I civilisation?

 

*************

How can we measure the level of advancement of a civilisation (civ)? There are two parameters that I want to outline here. 

The first is a measure of the stock of capital resources that the civ has. Let’s call this K. A civ that maximizes K is one that has colonized multiple galaxies/galactic clusters, whilst one on the very low end of the spectrum is a medieval village; the majority of the capital stock of this ‘civ’ consists of a few flimsy wooden huts. The Kardashev scale captures what I’m gesturing at here - it measures how advanced a civ is by considering the civ’s energy consumption. On the Kardashev scale the medieval village might be a Type 0.1 civilisation, whereas the civilisation that ‘maximizes humanity’s potential’ is Kardashev Type III or greater.

The second parameter is a measure of a civ's knowledge, and I want to pull this out as being somewhat independent of a civ's capital stock (Kardashev level). Let's call this second parameter A. Concretely, let's model the task of figuring out everything there is to know about the universe as a task of collecting 100 books. Books 1, 2, and 3 are the simplest and easiest bits of knowledge to find - perhaps Book 1 is discovering fire, and Book 2 is developing writing. Every time a civilization advances its state of knowledge, it adds a book to its shelf. So Einstein and co. developing general relativity last century could be thought of as adding, say, Book 24 to humanity's shelf of knowledge[1], and Niels Bohr and co. formulating quantum mechanics might be like adding Book 27 to the shelf. A civilisation that has maximized A knows everything there is to know about the universe - it has collected all 100 books[2].

An interesting question results from this: How intertwined is progression along the K and A scales? To what extent does being fixed at a given Kardashev level (say, 1.1) constrain a civ's attempt to move up the A scale (which we're measuring by the number of books the civ has on its 'bookshelf of knowledge')?

My intuition has always been that K and A are tightly intertwined - when I think of a civilization that knows all the truths of the universe, I imagine a huge galaxy-spanning empire that transforms entire planets into particle colliders to probe nature's deepest secrets. But recently I have become less convinced of this. The advent of advanced AI could lead to an intellectual explosion in which, metaphorically, tens of books are added to humanity's shelf of knowledge, all whilst we remain below Kardashev Type I.
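One way to make the coupling between K and A concrete is a toy model in which each book has a minimum Kardashev level required to collect it (the LHC-style constraint discussed below). Here's a rough Python sketch - the threshold curve and all the numbers are made up purely for illustration:

```python
# Toy bookshelf model: each "book" has a minimum Kardashev level needed to
# collect it (think: the Higgs boson needed the LHC). All numbers are
# illustrative assumptions, not estimates.

TOTAL_BOOKS = 100

def min_kardashev_for_book(n: int) -> float:
    # Assumption: early books need almost no infrastructure, later books need
    # progressively larger instruments. The cubic shape is arbitrary.
    return 0.5 + 1.5 * (n / TOTAL_BOOKS) ** 3

def max_books_collectable(k: float) -> int:
    # The highest A a civ stuck at Kardashev level k could ever reach,
    # no matter how much thinking (e.g. advanced AI) it throws at the problem.
    return sum(1 for n in range(1, TOTAL_BOOKS + 1) if min_kardashev_for_book(n) <= k)

for k in (0.7, 1.0, 1.1, 2.0):
    print(f"Kardashev {k:.1f}: at most {max_books_collectable(k)} / {TOTAL_BOOKS} books")
```

Under this (arbitrary) curve a Type I civ tops out at around 70 books, so the interesting question becomes what the real threshold curve looks like - whether it saturates early, or whether the last few books genuinely require galaxy-scale engineering.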

Some questions that result from this:

  • How can we conceptualise and model a civilization's stock of knowledge? I've quickly framed it here as adding books to a bookshelf; are there other framings that could be grounded in empirical data?
  • Given a suitable model of a civilisation’s stock of knowledge (A), how intertwined is A with a civilisation’s capital stock K?
    • There are some clear constraints. For example, the LHC was required to discover the Higgs Boson. But the LHC was also the result of a huge international collaboration and is a mammoth piece of engineering; saying the LHC is out of reach of a medieval village would be an understatement. 
  • Given a suitable conception of a civilization's knowledge, what's the smallest Kardashev level required in order for a civilization to 'solve everything'? In the framing here, what's the smallest Kardashev level required for a civilization to collect all 100 books?

 

 

  1. ^

    General relativity might be Book 24, Book 5, or Book 99. It clearly can't be Book 100, as we don't have a theory of everything.

  2. ^

    A quick tangent on the contents of Book 100: might it be the elusive Theory of Everything, or something that biological intelligences like us cannot fathom, much as chimps cannot grasp quantum mechanics? An interesting question, but one for another time.

Great resource, thanks for putting this together!

Just stumbled upon this sequence and happy to have found it! There seems to be lots of analysis ripe for the picking here.
 

Some thoughts on the strength of the grabbiness selection effect below. I'll likely come back to this to add further thoughts in the future.

One factor that seems to be relevant here is the number of pathways to technological completion. If we assume that the only civilisations that dominate the universe in the far future are the ones that have reached technological completion (seems pretty true to me), then tautologically, the dominating civilisations must be those who have walked the path to technological completion. Now imagine that in order to reach technological completion, you must tile 50% of the planets under your control with computer chips, but your value system means that you assign huge disvalue to tiling planets with computer chips*. As a result, you’ll refuse to walk the path to technological completion, and be subjugated or wiped out by the civilisations that did go forward with this action.

The more realistic example here is a future in which suffering subroutines are a necessary step towards technological completion, and so civilisations that disvalue suffering enough to not take this step will be dominated by civilisations that either (1) don't care about suffering or (2) are willing to bite the bullet of creating suffering subroutines in order to pre-emptively colonise their available resources.

So the question here is: how many paths are there to technological completion? Technological completion could be like a mountain summit that is accessible from many directions - in that case, if your value system doesn't allow you to follow one path, you can change course and reach the summit from another direction. But if there's just a single path with some steps that are necessary to take, then this will constrain the set of value systems that dominate the far future. Sketching out the prerequisites for technological completion would be a first step to gaining clarity here.
 

*This value system is just for the thought experiment; I'm not claiming that it's a likely one.

Yep, the variance of human worker teams should definitely be stressed. It’s plausible that a super team of hackers might have attack workloads on the scale of 100s to 1000s of hours [1], whereas for lower quality teams, this may be more like 100,000s of hours.

Thinking about it, I can probably see significant variance amongst AI systems due to various degrees of finetuning on cyber capabilities[2] (though, as you said, not as much variance as among human teams). E.g. a capable foundation model may map to something like a 60th-percentile hacker and so have attack workloads on the order of 10,000s of hours (as in this piece), while a finetuned model might map to a 95th-percentile hacker, and so a team of these may have workloads on the scale of 1000s of hours.

  1. ^

    Though 100s of hours seems more on the implausible side - I'm guessing this would require a very large team (100s) of very skilled hackers.

  2. ^

    And other relevant skills, like management.
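As a rough back-of-the-envelope illustration of those orders of magnitude (the percentile-to-hours mapping below is just my guess for illustration, not numbers from the post):

```python
# Rough orders of magnitude for per-team attack workloads, as discussed above.
# The mapping from team profile to hours is an illustrative assumption only.

attack_workload_hours = {
    "lower-quality human team": 100_000,                    # ~100,000s of hours
    "60th-percentile (capable foundation model)": 10_000,   # ~10,000s of hours
    "95th-percentile (cyber-finetuned model)": 1_000,       # ~1,000s of hours
    "elite human super-team": 100,                          # ~100s of hours (optimistic end)
}

baseline = attack_workload_hours["60th-percentile (capable foundation model)"]
for team, hours in attack_workload_hours.items():
    print(f"{team}: ~{hours:,} hours (~{hours / baseline:.2g}x the 60th-percentile workload)")
```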

Kurzgesagt script + Melody Sheep music and visuals = great video about the long-term future. Someone should get a collab between the two going.

Intuitively I feel that this process does generalise, and I would personally be really keen to read case studies of an idea/strategy that moved from left to right in the diagram above - i.e. a thinker initially identifies a problem, and over the following years or decades the idea moves to tactics research, then policy development, then advocacy, and is finally implemented. I doubt any idea in AI governance has gone through the full strategy-to-implementation lifecycle, but maybe one in climate change, nuclear risk management, or something else has? I'd appreciate it if anyone could link case studies of this sort!
