
Jackson Wagner

Space Systems Engineer @ https://www.xonaspace.com/pulsar
2768 karma · Joined Apr 2021 · Working (6-15 years) · Fort Collins, CO, USA

Bio

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about videogames like Braid and The Witness at jacksonw.xyz.

Comments (267)

Answer by Jackson Wagner · Dec 23, 2021

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("yimby") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.
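(Tangent: since I keep name-dropping quadratic funding, here is a minimal sketch of the textbook matching formula -- purely my own illustration with made-up numbers, not something taken from any particular organization. Each project's total funding is the square of the sum of the square roots of its individual contributions, with the gap above the raw donations paid out of a matching pool, so broad support counts for more than one deep-pocketed donor.)

```python
from math import sqrt

def quadratic_funding_total(contributions):
    """Idealized quadratic-funding payout for one project:
    (sum of square roots of individual contributions) squared.
    The difference from the raw sum is drawn from a matching pool."""
    raw_total = sum(contributions)
    matched_total = sum(sqrt(c) for c in contributions) ** 2
    return raw_total, matched_total

# Breadth of support is rewarded more than one big donor:
print(quadratic_funding_total([1] * 100))  # (100, 10000.0): 100 donors giving $1 each
print(quadratic_funding_total([100]))      # (100, 100.0): one donor giving $100
```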

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 100,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

Community notes seem like a genuinely helpful improvement on the margin -- but coming back to this post a year later, I would say that on net I am disappointed.  (Disclaimer -- I don't use Twitter much myself, so I can't evaluate people's claims about whether Twitter's culture has noticeably changed in a more free-speech direction.  From my point of view, just occasionally reading others' tweets, I don't notice any change.)

During the lead-up to the purchase, people were speculating about all kinds of ways that Twitter could try to change its structure & business model, like this big idea that it could split apart the database from the user-interface, then allow multiple user-interfaces (vanilla Twitter plus third-party alternatives) to compete and use the database in different ways, including doing the federated censorship that Larks mentioned in his comment.  The database would almost become the social version of what blockchains are for financial transactions -- a kind of central repository of everything that everyone's saying, which is then used and filtered and presented in many different ways.
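(To make that idea concrete, here is a toy sketch -- purely my own illustration with invented example posts, not anything Twitter actually proposed: one shared store of posts, with moderation pushed out to competing front-ends that each present a different view of the same underlying data.)

```python
# Toy sketch of "one shared post database, many competing front-ends" (invented data).
posts = [
    {"id": 1, "author": "alice", "text": "hello world", "labels": set()},
    {"id": 2, "author": "bob", "text": "buy my coin!!", "labels": {"spam"}},
]

def strict_frontend(post):
    """A front-end that hides anything its moderators have labeled spam."""
    return "spam" not in post["labels"]

def anything_goes_frontend(post):
    """A front-end that shows everything in the shared database."""
    return True

# Each front-end filters the same shared data differently:
for frontend in (strict_frontend, anything_goes_frontend):
    visible = [p["text"] for p in posts if frontend(p)]
    print(frontend.__name__, visible)
```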

But instead, the biggest change so far has been the introduction of a subscription model.  Maybe this is just Step 1 of a larger process (gotta start by stabilizing the company and making it profitable)... but it seems like there is no larger vision for big changes/experiments like this.  With a year of hindsight, it seems like Elon's biggest concerns were just the sometimes aggressively left-wing moderation/norms of the site, and the way that the bluecheck system favored certain groups like journalists.  It seems like now he's fixed those perceived problems, but it hasn't resulted in a transformative improvement to the platform, and there are simply no more steps in the plan.

So, that's unfortunate.  But I am still optimistic that Twitter is interested in experimenting and trying new things -- even if there isn't a concrete vision, I guess I am still optimistic that Twitter will eventually find its way to some of these interesting ideas via small-scale experimentation and iteration.

For totally selfish, non-historical reasons, I feel like May 8 is a better date:

  • December 9 is too close to other rationalist/EA holidays, like Solstice, Giving Tuesday, and Petrov Day.

  • December 9 is right at the START of the typical cold/flu season, when infectious diseases are the worst. (Although I don't know whether smallpox, plague, typhus, etc., were also seasonal in this way.) Maybe this makes it thematically resonant? But personally, like how Christians celebrate Easter at the end of winter, I feel like smallpox eradication is a good seasonal match as a spring holiday, when flu season has receded and we are enjoying the bountiful outdoors.

  • Not sure if it is a plus or a minus to have the holiday located in December during the "giving season" of charity fundraisers. Probably still a big plus, despite the competition from other fundraisers. And probably this factor is worth hundreds of thousands of dollars of money moved, which would vastly overwhelm my mere desire for a nice springtime EA holiday. (But I can still feel grumpy about it.)

I agree with the idea that nuclear wars, whether small or large, would probably push human civilization in a bad, slower-growth, more zero-sum and hateful, more-warlike direction.  And thus, the idea of civilizational recovery is not as bright a silver lining as it seems (although it is still worth something).

I disagree that this means that we should "try to develop AGI as soon as possible", which connotes to me "tech companies racing to deploy more and more powerful systems without much attention paid to alignment concerns, and spurred on by a sense of economic competition rather than cooperating for the good of humanity, or being subject to any kind of democratic oversight".

I don't think we should pause AI development indefinitely -- because like you say, eventually something would go wrong, whether a nuclear war or someone skirting the ban to train a dangerous superintelligent AI themselves.  But I would be very happy to "pause" for a few years while the USA / western world figures out a regulatory scheme to restrain the sense of an arms race between tech companies, and puts together some sort of "Manhattan/Apollo project for alignment".  Then we could spend a decade working hard on alignment, while also developing AI capabilities in a more deliberate, responsible, centralized way.  At the end of that decade I think we would still be ahead of China and everyone else, and I think we would have put humanity in a much better position than if we tried to rush to get AGI "as soon as possible".

I agree that hoping for ideal societies is a bit of a pipe dream.  But there is some reason for hope.  China and Russia, for instance, were both essentially forced to abandon centrally-planned economies and adopt some form of capitalism in order to stay competitive with a faster-growing western world.  Unfortunately, the advantages of democracy vs authoritarianism (although there are many) don't seem quite as overwhelming as the advantages of capitalism vs central planning.  (Also, if you are the authoritarian in charge, maybe you don't mind switching economic systems, but you probably really want to avoid switching political systems!)

But maybe if the West developed even better governing institutions (like "futarchy", a form of governance based partly on democratic voting and partly on prediction markets), or otherwise did a great job of solving our own problems (like doing a good job generating lots of cheap, clean energy, or adopting Georgist and Yimby policies to lower the cost of housing and thereby boost the growth of western economies), once again we might pressure our geopolitical competitors to reform their own institutions in order to keep up.

Alternatively, if Russia/China/etc didn't reform, I would expect them to eventually fall further and further behind (like North Korea vs South Korea); they'd still have nukes of course, but after many decades of falling behind, eventually I'd expect the US could field some technology -- maybe aligned AI like you say, or maybe just really effective missile defense -- that would help end the era of nuclear risk (or at least nuclear risk from backwards, fallen-behind countries).

Yes, it is definitely a little confusing how EA and AI safety often organize themselves via online blog posts instead of papers / books / etc like other fields!  Here are two papers that seek to give a comprehensive overview of the problem:

Hi!  Some assorted thoughts on this post:

  • You say that "my opinion is that Nuclear War is among the most likely causes of death for a person my age in the Northern Hemisphere".  I think I agree with this in a literal sense, but most of your post strikes me as more pessimistic than this statistic alone.  Based on actuarial tables, the risk of dying at 45 years old (from accidents, disease, etc) is about 0.5% for the average person.  So, in order to be the biggest single risk of death, the odds of dying in a nuclear war probably need to be at least 0.2% per year.
  • 0.2% chance of nuclear-war-death per year actually lines up pretty well with this detailed post by a forecasting group?  They estimated in October that the situation in Ukraine maybe has around a 0.5% chance of escalating to a full-scale nuclear war in which major NATO cities like London are getting hit.  Obviously there is big uncertainty and plenty of room for debate on many steps of a forecast like this, but my point is that something like a 0.2% yearly risk of experiencing full-scale nuclear war sounds believable.  (Of course, most years will be less dangerous than the current Ukraine war, but a handful of years, like a potential future showdown over Taiwan, will obviously contain most of the risk.)
  • But, wait a second -- this argument cuts both ways!!  What if My Most Likely Reason to Die Young is AI X-Risk?!  AI systems aren't very powerful right now, so nuclear war definitely has a better chance of killing me right now, in 2023.  But over the next, say, thirty years, it's not clear to me if nuclear risk, moseying along at perhaps a 0.5% chance per year and adding up to a 15% chance of war by 2053, is greater than the total risk of AI catastrophe by 2053.  (See the quick back-of-the-envelope calculation sketched just after this list.)
  • So far, we've been talking about personal probability-of-death.  But many EAs are concerned both with the lives of currently living people like ourselves, and with the survival of human civilization as a whole so that humanity's overall potential is not lost.  (Your mention of greatest risk of death for people "in the northern hemisphere" hints at this.)  Obviously a full-scale nuclear war would have a devastating impact on civilization.  But it nevertheless seems unlikely to literally extinguish all humanity, thus giving civilization a chance to bounce back and try again.  (Of course, opinions differ about how severe the effects of nuclear war / nuclear winter would be, and how easy it would be for civilization to bounce back.  See Luisa's excellent series of posts about this for much more detail!)  By contrast, scenarios involving superintelligent AI seem more likely to eventually lead to completely extinguishing human life.  So, that's one reason we might not want to trade ambient nuclear risk for superintelligent AI risk, even if they both gave a 15% chance of personal death by 2053.
  • Totally unrelated side-note, but IMO the Fermi paradox doesn't argue against the idea that alien civilizations are getting taken over by superintelligent AIs that rapidly expand to colonize the universe.  That's because if the AI civilizations are expanding at a reasonable fraction of the speed of light, we wouldn't see them coming!  So, we'd logically expect to observe a "vast galactic silence" even if the universe is actually chock full of rapidly-expanding civilizations which are about to overtake the earth and destroy us.  For more info on this, read about Robin Hanson's "grabby aliens" model -- full website here, or entertaining video explanation here.
  • See also: "If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare": a review.
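As a quick sanity check on those cumulative figures (my own back-of-the-envelope arithmetic, assuming a constant, independent yearly probability, which is obviously a simplification):

```python
# Cumulative chance of at least one full-scale nuclear war over 30 years (2023-2053),
# assuming a constant, independent yearly probability. The two rates are the rough
# figures discussed in the bullets above, not precise forecasts.
for yearly_risk in (0.002, 0.005):
    cumulative = 1 - (1 - yearly_risk) ** 30
    print(f"{yearly_risk:.1%}/year  ->  ~{cumulative:.0%} by 2053")
# 0.2%/year  ->  ~6% by 2053
# 0.5%/year  ->  ~14% by 2053
```

So the "15% by 2053" figure above corresponds to the ~0.5%-per-year assumption; the lower 0.2%-per-year figure would give roughly 6%.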

Alright, that is a lot of bullet points!  Forgive me if this post comes across as harsh criticism -- that is not at all how I intend it; it's just a rapid-fire list of responses and thoughts to this thought-provoking post.  Also forgive me for not trying to make the case for the plausibility of AI risk, since I'm guessing you're already familiar with some of the arguments.  (If not, there are many great explainers out there, including waitbutwhy, Cold Takes, and some long FAQs by Eliezer Yudkowsky and Scott Alexander.)

Ultimately I agree with you that one of the aspirational goals of AI technology (if we can solve the seemingly impossibly difficult challenge of understanding and being able to control something vastly smarter than ourselves) is to use superintelligent AI to finally end all forms of existential risk and achieve a position of "existential security", from which humanity can go on to build a thriving and diverse super-civilization.  But I personally feel like AI is probably more dangerous than nuclear war (both to my individual odds of dying a natural death in old age, and to humanity's chances of surviving to achieve its long-term potential), so I would be happy to trade an extra decade of nuclear risk for the precious opportunity for humanity to do more alignment research during an FLI-style pause on new AI capabilities deployments.

As for my proposed "alternative exit strategy", I agree with you that civilization as it stands today seems woefully inadequate to safely handle either nuclear weapons or advanced AI technology for very long.  Personally I am optimistic about trying to create new, experimental institutions (like better forms of voting, or governments run in part by prediction markets) that could level-up civilization's adequacy/competence and create a wiser civilization better equipped to handle these dangerous technologies.  But I recognize that this strategy, too, would be very difficult to accomplish and any benefits might arrive too late to help with situations where AI shows up soon.  But at least it is another strategy in the portfolio of efforts that are trying to mitigate existential risk.

I appreciate that the new initiative is also a stealth reference to Star Trek -- "The Next Generation" being the sequel series to original Star Trek just like this is the sequel to Operation Warp Speed. Makes it seem like there are some live players in the White House fighting for what would really do the most for pandemic preparedness, instead of just optimizing for whatever "looks good" in an HHS funding package.

Sad, of course, to see the usual total lack of recognition that in addition to subsidizing production, it would also help advance medical progress if we made production/innovation easier by reforming onerous and way-too-slow FDA processes.

In addition to the pan-coronavirus vaccine effort, I also appreciate the funding for mucosal vaccines that would be easier/more pleasant to administer (no needles that scare people and create anti-vax sentiment, just a friendly nasal spray), and would do more to stop transmission of the virus between people, as opposed to just preventing severe illness and death.

Credit to folks like Alex Tabarrok at Marginal Revolution (and many, many others throughout government, pharma, EA, etc.) who throughout the pandemic advocated for neglected good ideas like these. Maybe someday we will even see important FDA reforms, funding for metagenomic sequencing to identify new pandemics early, bans on dangerous gain-of-function-style research, and more! (https://marginalrevolution.com/marginalrevolution/2022/04/an-operation-warp-speed-for-nasal-vaccines.html)

Maybe not the most cost-effective thing in the whole world, but possibly still a great project for EAs who already happen to be lawyers and want to contribute their expertise (see organizations like Legal Priorities Project or Legal Impact for Chickens).

This also feels like the kind of thing where EA wouldn't necessarily have to foot the entire bill for an eventual mega-showdown with Microsoft or another tech giant...  we could just fund some seminal early cases and figure out what a general "playbook" should look like for creating possibly-winnable lawsuits that would encourage companies to pay more attention to alignment / safety / assessment of their AI systems.  Then, other people, profit-motivated by seeking a big payout from a giant tech company, would surely be happy to launch their own lawsuits once we'd established enough of a "playbook" for how such cases work.

One important aspect of this project, perhaps, should be trying to craft legal arguments that encourage companies to take useful, potentially-x-risk-mitigating actions in response to lawsuit risk, rather than just coming up with whatever legal arguments will most likely result in a payout.  This could set the tone for the field in an especially helpful direction.
