Jackson Wagner

Space Systems Engineer @ https://www.xonaspace.com/pulsar
Working (6-15 years of experience)
2694 karma · Fort Collins, CO, USA · Joined Apr 2021

Bio

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about effective-altruist and longtermist topics at nukazaria.substack.com, or you can read about videogames like Braid and The Witness at jacksonw.xyz

Comments (251)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), and beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many of them I've never heard particular standout charities recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.
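(For reference, since "quadratic funding" may be unfamiliar -- this gloss is mine, following Buterin, Hitzig & Weyl's "liberal radicalism" mechanism: a public good that receives individual contributions c_1, ..., c_n gets total funding

    F = \Bigl(\sum_{i=1}^{n} \sqrt{c_i}\Bigr)^{2}

with a central matching pool covering the gap F - \sum_i c_i. Many small donors thus attract far more matching than one large donor giving the same total, which is what makes the mechanism promising as a more democratic way to fund public goods.)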

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: eight years later you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knock on your door and ask you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

It seems like the paper's dispute with their bank had been going on for a while before the recent drama, perhaps long enough to make the timelines match up. But yes, it's confusing to me why they couldn't just switch to another bank. Definitely possible that they are basically just out of money and their bank is trying to cut them off, but the paper is hyping this up as political persecution in order to buy time / gain some negotiating advantage. (Of course, regardless of what the actual story turns out to be, there is seemingly zero reason for FLI money to be involved in this BS.)

Just perusing the front page of Nya Dagbladet, it looks like their business's main bank account has been cut off (perhaps similar to how Visa or Paypal will routinely freeze the accounts of grey-area or politically unpalatable businesses here in the US), and now they are scrambling to try and get funds where they can.

It's possible that this is the context in which Tegmark made the (very poor) decision to attempt to rush a $100K grant to a "foundation" set up in equal haste by Nya Dagbladet.  That would come off less as "funding a neo-Nazi foundation to pursue shadowy neo-Nazi projects" and more as a nepotistic misuse of FLI's funds to keep the newspaper Nya Dagbladet afloat, perhaps as a way of helping out Tegmark's brother?

I would also note that, as Erich_Grunewald describes in his comments, the paper clearly does come across as populist / right-wing, but seems only a bit more sensationalized and extreme than something like the Washington Examiner or NY Post, and less so than things like Breitbart, the Drudge Report, Infowars, etc.  It definitely does not come across as the homepage of a neo-Nazi organization.

Still seems like an extremely dubious use of FLI's funds to make a sketchy grant to a random populist newspaper with bad moral values and bad epistemics!  But "pro-Nazi" seems like it might be an exaggeration on the part of Expo.

[This comment is no longer endorsed by its author]

I am confused by some of the logic in this post.

  • If TAI arrives soon, either I'll be dead (so I should borrow and spend all my money now -- this part makes sense to me) or I'll be fantastically rich in a post-TAI utopia -- so either way, you say, I should borrow and spend all my money now to smooth out my consumption.  Apparently this is the consensus of mainstream economics and also the verdict of common sense.
  • But you also say: if TAI arrives soon, real interest rates should be higher, so I should engage in a risky investment strategy of shorting the bond market.  This strategy will make me poorer in the short term, but will pay off by making me richer later, once markets realize the consequences of TAI.

That second idea seems like the opposite of consumption smoothing??  Maybe it's worthwhile because I would become rich enough that the extra volatility is worth it to me?  But what's the point of being rich for just a short time before I die, or of being rich for just a short time before TAI-induced utopia makes everyone fantastically rich anyways?
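(To spell out the mechanism I assume the post is relying on -- this gloss is mine: the standard link between growth and rates is the Ramsey rule,

    r = \rho + \eta g

where r is the real interest rate, \rho is pure time preference, \eta is how fast marginal utility declines as consumption rises, and g is expected consumption growth.  If TAI sends expected growth g way up, r should rise too -- hence "short bonds".  My confusion is whether that logic and the consumption-smoothing logic can both be action-guiding at once.)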


I also do not find it plausible that a vision of impending TAI-induced utopia, an acceleration of technological progress and human flourishing even more significant than the Industrial Revolution, would... send stock prices plummeting?  I am not an economist, but if someone told me that humanity's future was totally assured, I feel like my discount rate would go down rather than up, and I would care about the future much more rather than less, and I would consequently want to invest as much as possible now, so that I could have more influence over the long and wondrous future ahead of me.  You could make an analogy here between stable utopia and assured personal longevity (perhaps to hundreds of years of age), similar to the one you make between human extinction and personal death.  The promise of a stable utopian future (or personal longevity) seems like it should lead to the opposite of the short-term behavior seen in the near-term-extinction (or personal death) scenario.  But your post says that these two futures end up in the same place as far as the discount rate is concerned?

To quote a Peter Thiel joke that didn't make it into my post about investing under anthropic shadow, "Certainly if we could just live to all be 100, that would be quite a transformation. There is good news and bad news. The bad news is: If you don’t believe in the good news, you’re not saving enough for retirement."

Definitely agree with this.  Consider for instance how markets seemed to react strangely / too slowly to the emergence of the Covid-19 pandemic, and then consider how much more familiar and predictable the idea of a viral pandemic is compared to the idea of unaligned AI:

The coronavirus was x-risk on easy mode: a risk (global influenza pandemic) warned of for many decades in advance, in highly specific detail, by respected & high-status people like Bill Gates, which was easy to understand with well-known historical precedents, fitting into standard human conceptions of risk, which could be planned & prepared for effectively at small expense, and whose absolute progress human by human could be recorded in real-time...   If the worst-case AI x-risk happened, it would be hard for every reason that corona was easy. When we speak of “fast takeoffs”, I increasingly think we should clarify that apparently, a “fast takeoff” in terms of human coordination means any takeoff faster than ‘several decades’ will get inside our decision loops. 
-- Gwern

Peter Thiel (in his "Optimistic Thought Experiment" essay about investing under anthropic shadow, which I analyzed in a Forum post) also thinks that there is a "failure of imagination" going on here, similar to what Gwern describes:

Those investors who limit themselves to what seems normal and reasonable in light of human history are unprepared for the age of miracle and wonder in which they now find themselves. The twentieth century was great and terrible, and the twenty-first century promises to be far greater and more terrible. ...The limits of a George Soros or a Julian Robertson, much less of an LTCM, can be attributed to a failure of the imagination about the possible trajectories for our world, especially regarding the radically divergent alternatives of total collapse and good globalization.

Yes, if such a farfetched situation ever occurred (a productive EA suddenly confronted with the need for expensive medical treatment, but for some reason they couldn't raise enough cash from their personal friends/family/etc), it could make sense for an EA organization to give them a grant covering some of the cost of the treatment, just as people sometimes receive grants to boost their productivity in other ways -- allowing them to hire a research assistant, or financing a move so they can work in-person rather than remotely, or buying an external monitor to pair with their laptop, or etc.

But I don't think this would realistically extend to ever crazier and crazier levels, like spending millions of dollars on lifesaving medical treatment.  First of all, there are few real-life medical procedures that cost so much, and fewer still which are actually effective at prolonging healthy life by decades.  (A disproportionate share of US healthcare spending is incurred in the final months of patients' lives -- people are understandably very motivated to spend a lot of money for even a small increase in their own lifespan while fighting diseases like cancer, but it wouldn't make sense altruistically to spend exorbitant sums on this kind of late-stage care for random EAs.)

Furthermore, even if you believe that longtermist work is super-duper-effective, such that each year of a longtermist's research/effort is saving thousands or millions of lives in expectation...  (Personally, I am a pretty committed longtermist, but nevertheless I doubt that longtermism is bajillions of times more effective than everything else -- its big advantages on paper are eroded somewhat by issues like moral cluelessness, lack of good feedback loops, the fact that I am not just a total hedonic utilitarian that linearly values the creation of more and more future lives, etc.)

...even if you believe that longtermism is super-duper-effective, it still wouldn't be worth paying exorbitant sums to fund the medical care of individual researchers, because it would be cheaper to simply pay the salaries of newly hired, replacement researchers of equivalent quality!  So, in practice, in a maximally dramatic scenario where a two-million-dollar medical treatment could restore a dying EA to another 30 years of perfect health and productivity, a grantmaking organization could weigh it against the alternative option to spend the same money on funding 30 years of research by other people at $70,000/year.
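(Making that arithmetic explicit:

    \$70{,}000/\text{year} \times 30\ \text{years} = \$2.1\text{M},

which is roughly the cost of the hypothetical $2M treatment -- so even in this maximally dramatic scenario, the two options are nearly a wash.)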

Of course if someone was especially effective and hard-to-replace (perhaps we elect an EA-sympathetic politician to the US Senate, and they are poised to introduce a bunch of awesome legislation about AI safety, pandemic preparedness, farmed animal welfare, systemic reform of various institutions, etc, but then they fall ill in dramatic fashion), the numbers would go up.  But they wouldn't go up infinitely -- there will always be the question of opportunity cost, and what other interventions could have been funded.  (Eg, I doubt there's any single human on Earth who we could identify as having such a positive impact on the world that it would be worth diverting 100% of OpenPhil's longtermist grantmaking for an entire year, to save them from an untimely death.)

I would assume it's most impactful to focus on the marginal future where we survive, rather than the median?  ie, the futures where humanity barely solves alignment in time, or has a dramatic close-call with AI disaster, or almost fails to build the international agreement needed to suppress certain dangerous technologies, or etc.

IMO, the marginal futures where humanity survives are the scenarios where our actions have the most impact -- in futures that are totally doomed, it's worthless to try anything, and in futures that go absurdly well, it's similarly unimportant to contribute our own efforts.  Just in the same way that our votes are more impactful when we vote in a very close election, our actions to advance AI alignment are most impactful in the scenarios balanced on a knife's edge between survival and disaster.
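To make the close-election analogy concrete, here is a toy expected-value calculation (all the numbers are made-up illustrative assumptions, not estimates):

    # Toy model: expected impact of extra alignment effort across scenario types.
    # Probabilities and effect sizes are illustrative assumptions only.
    scenarios = {
        # name: (P(scenario), gain in P(survival) from extra effort, given scenario)
        "doomed either way": (0.20, 0.00),  # effort can't help
        "knife-edge":        (0.10, 0.05),  # effort might tip the outcome
        "fine either way":   (0.70, 0.00),  # effort isn't needed
    }

    expected_gain = sum(p * delta for p, delta in scenarios.values())
    print(f"Expected gain in P(survival): {expected_gain:.3f}")  # -> 0.005

All of the expected impact comes from the knife-edge bucket, just as all the expected impact of a vote comes from the close-election scenarios.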

(I think that is the right logic for your altruistic, AI safety research efforts anyways.  If you are making personal plans, like deciding whether to have children or how much to save for retirement, that's a different case with different logic to it.)

Feels almost like a joke to offer advice on minor financial-planning tweaks while discussing AI timelines... but for what it's worth, if you are saving up to purchase a house in the next few years, know that first-time homebuyers can withdraw up to $10,000 from a traditional IRA without paying the usual 10% early-withdrawal penalty (401k money generally needs to be rolled into an IRA first to use this exception).  (Some related discussion here: https://www.madfientist.com/how-to-access-retirement-funds-early/)

And it seems to me like a Roth account should be strictly better than a taxable savings account even if you 100% expect the world to end by 2030?  You pay the same tax up front either way, you can still withdraw your contributions at any time (though early withdrawal of earnings is more restricted), and savings in a Roth account don't face any capital gains taxes.  (Unless the investment options you get through your workplace plan are restrictive enough that you'd rather pay the capital gains taxes in exchange for greater investment choice.)
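A quick back-of-the-envelope version of that comparison (the return and tax rates are made-up assumptions, and I'm ignoring annual dividend taxes in the taxable account, which would only widen the gap):

    # Sketch: same after-tax dollars in a Roth vs. a taxable account.
    initial = 10_000          # after-tax dollars invested either way
    annual_return = 0.07      # assumed nominal return
    years = 7                 # e.g. now through 2030
    cap_gains_rate = 0.15     # assumed long-term capital gains rate

    final = initial * (1 + annual_return) ** years
    gains = final - initial

    roth_value = final                              # gains never taxed
    taxable_value = final - gains * cap_gains_rate  # gains taxed on sale

    print(f"Roth:    ${roth_value:,.0f}")    # ~ $16,058
    print(f"Taxable: ${taxable_value:,.0f}") # ~ $15,149

The Roth comes out ahead by exactly the capital-gains tax on the growth, whatever the horizon -- which is the "strictly better" claim.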

Of course it makes sense to save less for the future overall, and spend more in the present day, when the prospect of living a long and healthy life seems less likely.  If that is what you meant, then that makes sense.  (Personally, I also feel torn about having children just like you described.)

I think the tone of this post (rather than the helpful info about end-of-year matching campaigns) is why you are getting some downvotes. I understand that you probably dashed off this post quickly -- perfect being the enemy of the good and all that -- but consequently it's a bit hard to understand what you're saying. You jump very quickly between:

  • GiveDirectly's main matching campaign has ended
  • GiveDirectly is a very effective charity with really positive effects
  • People are foolish for continuing to donate on the main website, since they don't know there are smaller ongoing matching drives elsewhere (this comes off almost like you are insulting those people, even though your purpose is of course to helpfully provide them with relevant info!)
  • More about GiveDirectly being a good charity
  • Kind of a wild non-sequitur at the end, claiming that SNAP is the most effective US government program without explaining any reasoning or linking to any analysis?? (Is SNAP truly more effective than, idk, the long-term geopolitical effects of the NATO alliance, assuming these effects are net-positive? Or what about policies that don't cost anything, like immigrant entrepreneur visas to stimulate economic growth -- are these infinitely effective because the divisor is zero? SNAP is probably the most effective program in a neartermist sense, among large, highly-quantifiable, and highly scalable government programs... but that is a very different claim!)