Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3516 karma · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (340)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.
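
(Side note: since quadratic funding only gets name-dropped above, here is a minimal sketch of the standard matching rule from Buterin, Hitzig & Weyl's "liberal radicalism" proposal -- a project's ideal funding is the square of the sum of the square roots of its individual contributions, which rewards broad bases of small donors. The Python below uses made-up contribution numbers and one common way of scaling a fixed matching pool; it's an illustration, not any particular platform's implementation.)

```python
import math

def quadratic_match(contributions, matching_pool):
    """Toy quadratic funding allocation: each project's 'ideal' funding is
    (sum of sqrt(individual contributions))^2; the fixed matching pool is
    then split in proportion to each project's deficit (ideal funding
    minus what it already raised directly)."""
    ideal = {p: sum(math.sqrt(c) for c in cs) ** 2 for p, cs in contributions.items()}
    raised = {p: sum(cs) for p, cs in contributions.items()}
    deficit = {p: max(ideal[p] - raised[p], 0) for p in contributions}
    total_deficit = sum(deficit.values()) or 1
    return {p: raised[p] + matching_pool * deficit[p] / total_deficit
            for p in contributions}

# Made-up example: project A has many small donors, project B one large donor.
projects = {
    "A": [10] * 100,  # 100 donors giving $10 each  -> $1,000 raised directly
    "B": [1000],      # 1 donor giving $1,000       -> $1,000 raised directly
}
print(quadratic_match(projects, matching_pool=5000))
# roughly {'A': 6000, 'B': 1000}: A captures essentially the whole matching
# pool despite equal direct funding, because its support is broad-based.
```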

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

(Not really an argument, although I do disagree with stuff like RP's moral weights.  Just kind of an impression / thought, that I am addressing to Vasco but also to invertebrate-suffering folks more broadly.)

Reading through this interesting and provocative (though also IMO incorrect) post, and some of your helpfully linked resources and further analysis, it's hard to wrap my mind around the worldview that must follow once you accept the key premise: that each random 1 m^2 patch of boreal taiga, temperate grassland, or other assorted forest biome (as you tabulate here; screenshot below), despite appearing to be an inert patch of dirt topped by a few shrubs or a tree, actually contains the moral equivalent of DOZENS of suffering humans (something like 20-40 humans suffering 24/7 per cube of dirt)??
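
(To make that image concrete, here is the kind of back-of-envelope multiplication I have in mind. Every number below is my own illustrative assumption -- not a figure from this post or from Rethink Priorities -- but it shows how an estimate in the "20-40 humans per square meter" range could arise.)

```python
# Toy back-of-envelope: how a "dozens of suffering humans per m^2" figure
# could arise.  All numbers are illustrative assumptions, NOT values taken
# from the post or from Rethink Priorities' moral-weight work.

nematodes_per_m2 = 3e6              # assumed soil nematode density (order-of-magnitude guess)
welfare_weight_per_nematode = 1e-5  # assumed welfare weight relative to one human
fraction_of_life_suffering = 1.0    # assume suffering is roughly constant

human_equivalents = (nematodes_per_m2
                     * welfare_weight_per_nematode
                     * fraction_of_life_suffering)
print(f"~{human_equivalents:.0f} suffering-human-equivalents per square meter")
# ~30, i.e. in the "20-40 humans per cube of dirt" range described above
```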

In this Brian Tomasik-style world, humans (and indeed, essentially every visible thing) are just a tiny, thin crust of intelligence and complexity existing atop a vast hellish ocean of immense (albeit simple/repetitive) suffering.  (Or, if the people complaining that nematode lives might be net-positive are correct, but all the other views on the importance of invertebrates are kept the same, then everything we see is the same irrelevant crust, but now sitting atop a vast incomprehensible bulk of primordial pleasure.)

What is the best way to imagine this?  I am guessing that insect-welfare advocates would object to my image of each cube of dirt containing dozens of suffering humans, saying stuff like:

  • "you can't actually use RP-style moral weights to compare things in that way" (but they seem to make exactly these comparisons all the time?)
  • "it's an equivalent amount of suffering, yes, but it's such a different TYPE of suffering that you shouldn't picture suffering humans, instead it would be more accurate to picture X"  (what should X be?  maybe something simpler than an adult human but still relatable, like crying newborns or a writhing, injured insect?)
  • "negative QALYs aren't actually very bad; it's more like having a stubbed toe 24/7 than being tortured 24/7" (I am very confused about the idea of negative QALYs, neutral points, etc, and it seems everyone else is too)

Here is a picture of some square meters of boreal tundra that I googled, if it helps:
[Image: "The Taiga Biome" photo, via Geodiode]

I'd also be very curious to know what people make of the fact that at least the most famous nematode species has only 302 neurons that are always wired up in the exact same way.  Philosophically, I tend to be of the opinion that if you made a computer simulation of a human brain experiencing torture, it would be very bad to run that simulation.  But if you then ran the EXACT same simulation again, this would not be 2x as bad -- it might not be even any worse at all than running it once.  (Ditto for running 2 copies of the simulation on 2 identical computers sitting next to each other.  Or running the simulation on a single computer with double-width wires.)  How many of those 302 neurons can possibly be involved in nematode suffering?  Maybe, idk, 10 of them?  How many states can those ten neurons have?  How many of those states are negative vs positive?  You see what I'm getting at -- how long before adding more nematodes doesn't carry any additional moral weight (under the view I outlined above), because it starts just being "literally the exact same nematode experience" simply duplicated many times?
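
(Here is a toy sketch of that combinatorial point. This is purely my own illustration of how a "duplicate experiences don't add" view could cap the total moral weight -- it is not a real model of C. elegans cognition, and the "negative state" criterion below is arbitrary.)

```python
from itertools import product

# Toy model: treat each of N "suffering-relevant" neurons as simply on or off,
# so there are only 2^N distinct possible experience-states.
N = 10
all_states = list(product([0, 1], repeat=N))
print(f"{len(all_states)} distinct states from {N} binary neurons")  # 1024

# Arbitrary toy criterion for which states count as "negative":
negative_states = [s for s in all_states if sum(s) > N // 2]
print(f"{len(negative_states)} states flagged negative under this toy rule")

# Under a "duplicates don't add moral weight" view, the total badness saturates
# once each negative state is instantiated at least once -- adding trillions
# more nematodes in identical states wouldn't add anything further.
```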

Anyways, perhaps this perspective -- wherein human civilization is essentially irrelevant except insofar as we can take action that affects the infinite ocean of primitive-but-vast nematode experience -- would seem more normal to me if I came from a more Buddhist / Hindu / Jain culture instead of a mostly Christian / Western one; Mahayana Buddhism is always on about innumerable worlds filled with countless beings, things persisting for endless repetitions of lifetimes, and so forth, in contrast to Christianity, which places a lot of emphasis on individual human agency and the drama of historical events (like the Roman Empire, etc).  Or one could view it as a kind of moral equivalent of the Copernican / broader scientific revolution, when people were shocked to realize that the Earth is actually a tiny part of an incomprehensibly vast galaxy.  The galaxy is physically large, but it is mostly just rocks and gas, so (we console ourselves) it is not "morally large"; we are still at the center of the "moral universe".  But for many strong believers in animal welfare as a cause area, and doubly or triply so for believers in insect welfare, this is not the case.

Agreed with Marcus Abramovitch that, if nematode lives are indeed net-negative, and if one agrees with RP-style weights on the importance of very simple animals, this WOULD strongly suggest (both emotionally and logically) pursuing "charities that just start wildfires" (which IMO would be cost-effective -- seems pretty cheap to set stuff on fire...), or charities that increase various kinds of existential risk.  Vasco comments that nuclear war or bioweapons would likely result in even more insect suffering by diminishing the scope of human civilization, which makes a lot of sense to me.  But there are other existential risks where this defense wouldn't work.  Deliberately hastening global warming (perhaps by building a CFC-emissions factory on the sly) might shift biomes in a way favorable to the nematodes.  Steering an asteroid into the Earth, or hastening the arrival of a catastrophically misaligned AI superintelligence, might effectively sterilize the planet where nukes can't.  And so on.  All the standard longtermist arguments would then apply -- even raising the chance of sterilizing the Earth by a little bit would be worth a lot.  From my perspective (as someone who disagrees with the premises of this insect-welfare stuff), these implications do seem socially dangerous.

[Image: Capt. Willard in Apocalypse Now]
(Pictured: how I imagine it must feel to be an insect-welfare advocate who believes that every couple meters of boreal taiga contains lifetimes of suffering??)

I actually wrote the above comment in response to a very similar "Chinese AI vs US AI" post that's currently being discussed on LessWrong.  There, commenter Michael Porter had a very helpful reply to my comment.  He references a May 2024 report from Concordia AI on "The State of AI Safety in China", whose executive summary states:

The relevance and quality of Chinese technical research for frontier AI safety has increased substantially, with growing work on frontier issues such as LLM unlearning, misuse risks of AI in biology and chemistry, and evaluating "power-seeking" and "self-awareness" risks of LLMs. 

There have been nearly 15 Chinese technical papers on frontier AI safety per month on average over the past 6 months. The report identifies 11 key research groups who have written a substantial portion of these papers. 

China’s decision to sign the Bletchley Declaration, issue a joint statement on AI governance with France, and pursue an intergovernmental AI dialogue with the US indicates a growing convergence of views on AI safety among major powers compared to early 2023. 

Since 2022, 8 Track 1.5 or 2 dialogues focused on AI have taken place between China and Western countries, with 2 focused on frontier AI safety and governance. 

Chinese national policy and leadership show growing interest in developing large models while balancing risk prevention. 

Unofficial expert drafts of China’s forthcoming national AI law contain provisions on AI safety, such as specialized oversight for foundation models and stipulating value alignment of AGI. 

Local governments in China’s 3 biggest AI hubs have issued policies on AGI or large models, primarily aimed at accelerating development while also including provisions on topics such as international cooperation, ethics, and testing and evaluation. 

Several influential industry associations established projects or committees to research AI safety and security problems, but their focus is primarily on content and data security rather than frontier AI safety. 

In recent months, Chinese experts have discussed several focused AI safety topics, including “red lines” that AI must not cross to avoid “existential risks,” minimum funding levels for AI safety research, and AI’s impact on biosecurity.

Michael then says, "So clearly there is a discourse about AI safety there, that does sometimes extend even as far as the risk of extinction. It's nowhere near as prominent or dramatic as it has been in the USA, but it's there."

I agree that it's not like everyone in China is 100% asleep at the wheel -- China is a big place with lots of smart people, they can read the news and discuss ideas just like we can, and so naturally there are some folks there who share EA-style concerns about AI alignment.  But it does seem like the small amount of activity happening there is mostly following / echoing / agreeing with western ideas about AI safety, and seems more concentrated among academics, local governments, etc, rather than also coming from the leaders of top labs like in the USA.

As for trying to promote more AI safety thinking in China, I think it's very tricky -- if somebody like OpenPhil just naively started sending millions of dollars to fund Chinese AI safety university groups and create Chinese AI safety think tanks / evals organizations / etc, I think this would be (correctly?) perceived by China's government as a massive foreign influence operation designed to subvert their national goals in a critical high-priority area.  Which might cause them to massively crack down on the whole concept of western-style "AI safety", making the situation infinitely worse than before.  So it's very important that AI safety ideas in China arise authentically / independently -- but of course, we paradoxically want to "help them" independently come up with the ideas!  Some approaches that seem less likely to backfire here might be:

  • The mentioned "track 2 diplomacy", where mid-level government officials, scientists, and industry researchers host informal / unofficial discussions about the future of AI with their counterparts in China.
  • Since China already somewhat follows Western thinking about AI, we should try to use that influence for good, rather than accidentally egging them into an even more desperate arms race.  E.g., if the USA announces a giant "Manhattan Project for AI" with great fanfare, and talks all about how this massive national investment is a top priority for outracing China on military capabilities, that would probably just goad China's national leaders into thinking about AI in the exact same way.  So, trying to influence US discourse and policy has a knock-on effect in China.
  • Even just in a US context, I think it would be extremely valuable to have more objective demonstrations of dangers like alignment faking, instrumental convergence, AI ability to provide advice to would-be bioterrorists, etc.  But especially if you are trying to convince Chinese labs and national leaders in addition to western ones, then you are going to be trying to reach across a much bigger gap in terms of cultural context / political mistrust / etc.  For crossing that bigger gap, objective demonstrations of misalignment (and other dangers like gradual disempowerment, etc) become relatively even more valuable compared to mere discourse like translating LessWrong articles into Chinese.

@ScienceMon🔸  There is vastly less of an "AI safety community" in China -- probably much less AI safety research in general, and much less of it, in percentage terms, is aimed at thinking ahead about superintelligent AI.  (ie, more of China's "AI safety research" is probably focused on things like reducing LLM hallucinations, making sure it doesn't make politically incorrect statements, etc.)

  • Where are the Chinese equivalents of the American and British government AI Safety Institutes (AISIs)?  Or of organizations like METR, Epoch, Forethought, MIRI, et cetera?
  • Who are some notable Chinese intellectuals / academics / scientists (along the lines of Yoshua Bengio or Geoffrey Hinton) who have made any public statements about the danger of potential AI x-risks?
  • Have any Chinese labs published "responsible scaling plans" or tiers of "AI Safety Levels" as detailed as those from OpenAI, DeepMind, or Anthropic?  Or discussed how they're planning to approach the challenge of aligning superintelligence?
  • Have workers at any Chinese AI lab resigned in protest of poor AI safety policies (like the various people who've left OpenAI over the years), or resisted the militarization of AI technology (like Googlers protesting Project Maven, or Microsoft employees protesting the IVAS HMD program)?

When people ask this question about the relative value of "US" vs "Chinese" AI, they often go straight for big-picture political questions about whether the leadership of China or the US is more morally righteous, less likely to abuse human rights, et cetera.  Personally, in these debates, I do tend to favor the USA, although certainly both the US and China have many deep and extremely troubling flaws -- both seem very far from the kind of responsible, competent, benevolent entity to whom I would like to entrust humanity's future.

But before we even get to that question of "What would national leaders do with an aligned superintelligence, if they had one," we must answer the question "Do this nation's AI labs seem likely to produce an aligned superintelligence?"  Again, the USA leaves a lot to be desired here.  But oftentimes China seems to not even be thinking about the problem.  This is a huge issue from both a technical perspective (if you don't have any kind of plan for how you're going to align superintelligence, perhaps you are less likely to align superintelligence), AND from a governance perspective (if policymakers just think of AI as a tool for boosting economic / military progress and haven't thought about the many unique implications of superintelligence, then they will probably make worse decisions during an extremely important period in history).

Now, indeed -- has Trump thought about superintelligence?  Obviously not -- just trying to understand intelligent humans must be difficult for him.  But the USA in general seems much more full of people who "take AI seriously" in one way or another -- Silicon Valley CEOs, Pentagon advisers, billionaire philanthropists, et cetera.  Even in today's embarrassing administration, there are very high-ranking people (like Elon Musk and J. D. Vance) who seem at least aware of the transformative potential of AI.  China's government is more opaque, so maybe they're thinking about this stuff too.  But all public evidence suggests to me that they're kinda just blindly racing forward, trying to match and surpass the West on capabilities, without giving much thought as to where this technology might ultimately go.

 Pretty much all company owners (or the respective investors) believe that they are most knowledgeable about what's the best way to reinvest income.
Unfortunately, mostly they overestimate their own knowledge in this regard.

The idea that random customers would be better at corporate budgeting than the people who work in those companies and think about corporate strategy every day, is a really strong claim, and you should try to offer evidence for this claim if you want people to take your fintech idea seriously.

Suppose I buy a new car from Toyota, and now I get to decide how Toyota invests the $10K of profit they made by selling me the car.  There are immediately so many problems:

  • How on earth am I supposed to make this decision??  Should they spend the money on ramping up production of this exact car model?  Or should they spend the money on R&D to make better car engines in the future?  Or should they save up money to buy an electric-vehicle battery manufacturing startup?  Maybe they should just spend more on advertising?  I don't know anything about running a car company.  I don't even know what their current budget is -- maybe advertising was the best use of new funds last year, but this year they're already spending a ton on advertising, and it would be better to simply return additional profits to shareholders rather than over-expand?
    • Would it be Toyota's job to give me tons of material that I could read, to become informed and make the decision properly?  But then wouldn't Toyota just end up making all the decisions anyway, in the form of "recommendations", that customers would usually agree with?
      • Wouldn't a lot of this information be secret / internal data, such that giving it away would unduly help rival companies?
    • Maybe an idea is popular and sounds good, but is actually a terrible idea for some subtle reason.  For example, "Toyota should pivot to making self-driving cars powered by AI" sounds like a good idea to me, but I'm guessing that the reason Toyota isn't doing it is that it would be pretty difficult for them to become a leader in self-driving technology.  If ill-informed customers were making decisions, wouldn't we expect follies like this to happen all the time?
  • How is everyone supposed to find the time to be constantly researching different corporations?  Last month I bought a car and had to become a Toyota expert, this month I bought a new TV from Samsung, next month I'll upgrade my Apple iPhone, or maybe buy a Nintendo Switch.  And let's not forget all the grocery shopping I do, restaurant meals, and innumerable other small purchases.
    • What happens to all the votes of the people who never bother to engage with this system?  What's the incentive for customers to spend time making corporate decisions?
    • It seems like you'd need some kind of liquid-democracy-style delegation system for this to work properly, and not take up everyone's time.  Like, maybe you'd delegate most corporate decision-making power to a single expert who we think knows the most about the company (we could call this person a "CEO"), and then have a wider circle of people that oversee the CEO's behavior and fire them if necessary (this could be a "board of directors"), and then a wider circle of people who are generally interested in that company (these might be called "shareholders") could determine who's on the board of directors...

Thanks for this detailed overview; I've been interested to learn about AI for materials science (after hearing about stuff like AlphaFold in biology), and this is the most detailed exploration I've yet seen.

 Hello!

I'm glad you found my comment useful!  I'm sorry if it came across as scolding; I interpreted Tristan's original post to be aimed at advising giant mega-donors like Open Philanthropy, more so than individual donors.  In my book, anybody donating to effective global health charities is doing a very admirable thing -- especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.

As for my own two cents on how to navigate this situation (especially now that artificial intelligence feels much more real and pressing to me than it did a few years ago), here are a bunch of scattered thoughts (FYI these bullets have kind of a vibe of "sorry, i didn't have enough time to write you a short letter, so I wrote you a long one"):

  • My scold-y comment on Tristan's post might suggest a pretty sharp dichotomy, where your choice is to either donate to proven global health interventions, or else to fully convert to longtermism and donate everything to some weird AI safety org doing hard-to-evaluate-from-the-outside technical work.
  • That's a frustrating choice for a lot of reasons -- it implies totally pivoting your giving to a new field, where it might no longer feel like you have a special advantage in picking the best opportunities within the space. It also means going all-in on a very specific and uncertain theory of impact (cue the whole neartermist-vs-longtermist debate about the importance of RCTs, feedback loops, and tangible impact, versus ideas like "moral uncertainty").
    • You could try to split your giving 50/50, which seems a little better (in a kind of hedging-your-bets way), but still pretty frustrating for various reasons...
    • I might rather seek to construct a kind of "spectrum" of giving opportunities, where Givewell-style global health interventions and longtermist AI existential-risk mitigation define the two ends of the spectrum. This might be a dumb idea -- what kinds of things could possibly be in the middle of such a bizarre spectrum? And even if we did find some things to put in the middle, what are the chances that any of them would pass muster as a highly-effective, EA-style opportunity? But I think possibly there could actually be some worthwhile ideas here. I will come back to this thought in a moment.
  • Meanwhile, I agree with Tristan's comment here that it seems like eventually money will probably cease to be useful -- maybe we go extinct, maybe we build some kind of coherent-extrapolated-volition utopia, maybe some other similarly-weird scenario happens.
    • (In a big-picture philosophical sense, this seems true even without AGI? Since humanity would likely eventually get around to building a utopia and/or going extinct via other means. But AGI means that the transition might happen within our own lifetimes.)

 

However, unless we very soon get a nightmare-scenario "fast takeoff" where AI recursively self-improves and seizes control of the future over the course of hours-to-weeks, it seems like there will probably be a transition period, where approximately human-level AI is rapidly transforming the economy and society, but where ordinary people like us can still substantially influence the future.  There are a couple ways we could hope to influence the long-term future:

  • We could simply try to avoid going extinct at the hands of misaligned ASI (most technical AI safety work is focused on this)
    • If you are a MIRI-style doomer who believes that there is a 99%+ chance that AI development leads to egregious misalignment and therefore human extinction, then indeed it kinda seems like your charitable options are "donate to technical alignment research", "donate to attempts to implement a global moratorium on AI development", and "accept death and donate to near-term global welfare charities (which now look pretty good, since the purported benefits of longtermism are an illusion if there is effectively a 100% chance that civilization ends in just a few years/decades)".  But if you are more optimistic than MIRI, then IMO there are some other promising cause areas that open up...
  • There are other AI catastrophic risks aside from misalignment -- gradual disempowerment is a good example, as are various categories of "misuse" (including things like "countries get into a nuclear war as they fight over who gets to deploy ASI")
    • Interventions focused on minimizing the risk of these kinds of catastrophes will look different -- finding ways to ease international tensions and cooperate around AI to avoid war?  Advocating for georgism and UBI and designing new democratic mechanisms to avoid gradual disempowerment?  Some of these things might also have tangible present-day benefits even aside from AI (like reducing the risks of ordinary wars, or reducing inequality, or making democracy work better), which might help them exist midway on the spectrum I mentioned earlier, from tangible givewell-style interventions to speculative and hard-to-evaluate direct AI safety work.
  • Even among scenarios that don't involve catastrophes or human extinction, I feel like there is a HUGE variance between the best possible worlds and the median outcome.  So there is still tons of value in pushing for a marginally better future -- CalebMaresca's answer mentions the idea that it's not clear whether animals would be invited along for the ride in any future utopia.  This indeed seems like an important thing to fight for.  I think there are lots of things like this -- there are just so many different possible futures.
    • (For example, if we get aligned ASI, this doesn't answer the question of whether ordinary people will have any kind of say in crafting the future direction of civilization; maybe people like Sam Altman would ideally like to have all the power for themselves, benevolently orchestrating a nice transhumanist future wherein ordinary people get to enjoy plenty of technological advancements, but have no real influence over the direction of which kind of utopia we create.  This seems worse to me than having a wider process of debate & deliberation about what kind of far future we want.)
    • CalebMaresca's answer seems to imply that we should be saving all our money now, to spend during a post-AGI era that they assume will look kind of neo-feudal.  This strikes me as unwise, since a neo-feudal AGI semi-utopia is a pretty specific and maybe not especially likely vision of the future!  Per Tristan's comment that money will eventually cease to be useful, it seems like it probably makes the most sense to deploy cash earlier, when the future is still very malleable:
      • In the post-ASI far future, we might be dead and/or money might no longer have much meaning and/or the future might already be effectively locked in / out of our control.
      • In the AGI transition period, the future will still be very malleable, we will probably have more money than we do now (although so will everyone else), and it'll be clearer what the most important / neglected / tractable things are to focus on.  The downside is that by this point, everyone else will have realized that AGI is a big deal, lots of crazy stuff will be happening, and it might be harder to have an impact because things are less neglected.
      • Today, lots of AI-related stuff is neglected, but it's also harder to tell what's important / tractable.

 

For a couple of examples of interventions that could exist midway along a spectrum from givewell-style interventions to AI safety research, which are also focused on influencing the transitional period of AGI, consider Dario Amodei's vision of what an aspirational AGI transition period might look like, and what it would take to bring it about:

  • Dario talks about how AI-enhanced biological research could lead to amazing medical breakthroughs.  To allow this to happen more quickly, it might make sense to lobby to reform the FDA or the clinical trial system.  It also seems like a good idea to lobby for the most impactful breakthroughs to be quickly rolled out, even to people in poor countries who might not be able to afford them on their own.  Getting AI-driven medical advances to more people, more quickly would of course benefit the people for whom the treatments arrive just in time.  But it might also have important path-dependent effects on the long-run future, by setting precedents, influencing culture, and so on.
  • In the section on "neuroscience and mind", Dario talks about the potential for an "AI coach who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective".  Maybe there is some way to support / accelerate the development of such tools?
    • Dario is thinking of psychology and mental health here. (Imagine a kind of supercharged, AI-powered version of Happier-Lives-Institute-style wellbeing interventions like StrongMinds?)  But there could be similarly wide potential for disseminating AI technology to promote economic growth in the third world (even today's LLMs can probably offer useful medical advice, engineering skills, entrepreneurial business tips, agricultural productivity best practices, etc).
    • Maybe there's no angle for philanthropy in promoting the adoption of "AI coach" tools, since people are properly incentivized to use such tools and the market will presumably race to provide them (just as charitable initiatives like One Laptop Per Child ended up much less impactful than ordinary capitalism manufacturing bajillions of incredibly cheap smartphones).  But who knows; maybe there's a clever angle somewhere.
  • He mentions a similar idea that "AI finance ministers and central bankers" could offer good economic advice, helping entire countries develop more quickly.  It's not exactly clear to me why he expects nations to listen to AI finance ministers more than ordinary finance ministers.  (Maybe the AIs will be more credibly neutral, or eventually have a better track record of success?)  But the general theme of trying to find ways to improve policy and thereby boost economic growth in LMIC (as described by OpenPhil here) is obviously an important goal for both the tangible benefits, and potentially for its path-dependent effects on the long-run future.  So, trying to find some way of making poor countries more open to taking pro-growth economic advice, or encouraging governments to adopt efficiency-boosting AI tools, or convincing them to be more willing to roll out new AI advancements, seem like they could be promising directions.
  • Finally he talks about the importance of maintaining some form of egalitarian / democratic control over humanity's future, and the idea of potentially figuring out ways to improve democracy and make it work better than it does today.  I mentioned these things earlier; both seem like promising cause areas.

"However, the likely mass extinction of K-strategists and the concomitant increase in r-selection might last for millions of years."

I like learning about ecology and evolution, so personally I enjoy these kinds of thought experiments.  But in the real world, isn't it pretty unlikely that natural ecosystems will just keep humming along for another million years?  I would guess that within just the next few hundred years, human civilization will have grown in power to the point where it can do what it likes with natural ecosystems:
 

  • perhaps we bulldoze the earth's surface in order to cover it with solar panels, fusion power plants, and computronium?
  • perhaps we rip apart the entire earth for raw material to be used for the construction of a Dyson swarm?
  • more prosaically, maybe human civilization doesn't expand to the stars, but still expands enough (and in a chaotic, unsustainable way) such that most natural habitats are destroyed
  • perhaps there will have been a nuclear war (or some other similarly devastating event, like the creation of mirror life that devastates the biosphere)
  • perhaps we create unaligned superintelligent AI which turns the universe into paperclips
  • perhaps humanity grows in power but also becomes more responsible and sustainable, and we reverse global warming using abundant clean energy powering technologies like carbon air capture, assorted geoengineering techniques, etc
  • perhaps humanity attains a semi-utopian civilization, and we decide to extensively intervene in the natural world for the benefit of nonhuman animals
  • etc

Some of those scenarios might be dismissible as the kind of "silly sci-fi speculation" mentioned by the longtermist-style meme below.  But others seem pretty mundane, indeed "to be expected" even by the most conservative visions of the future.  To me, the million-year impact of things like climate change only seems relevant in scenarios where human civilization collapses pretty soon, but in a way that leaves Earth's biosphere largely intact (maybe if humans all died to a pandemic?).
 
