Jackson Wagner

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about rationalist and longtermist topics at jacksonw.xyz

Comments

FLI launches Worldbuilding Contest with $100,000 in prizes

See my response to kokotajlod to maybe get a better picture of where I am coming from and how I am thinking about the contest.

"Directly conflicts with the geopolitical requirements." -- How would asking the AGI to take it slow conflict with the geopolitical requirements?  Imagine that I invent a perfectly aligned superintelligence tomorrow in my spare time, and I say to it, "Okay AGI, I don't want things to feel too crazy, so for starters, how about you give humanity 15% GDP growth for the next 30 years?   (Perhaps by leaking designs for new technologies discreetly online.)  And make sure to use your super-persuasion to manipulate public sentiment a bit so that nobody gets into any big wars."  That would be 5x the current rate of worldwide economic growth, which would probably feel like "transforming the economy sector by sector" to most normal people.  I think that world would perfectly satisfy the contest rules.  The only problems I can see are:

  • The key part of my story is not very realistic or detailed.  (How do I end up with a world-dominating AGI perfectly under my control by tomorrow?)
  • I asked my AGI to do something that you would consider unambitious, and maybe immoral.  You'd rather I command my genie to make changes somewhere on the spectrum from "merely flipping the figurative table" to dissolving the entire physical world and reconfiguring it into computronium.  But that's just a personal preference of yours -- just because I've invented an extremely powerful AGI doesn't mean I can't ask it to do boring ordinary things like merely curing cancer instead of solving immortality.

I agree with you that there's a spectrum of different things that can be meant by "honesty", sliding from "technically accurate statements which fail to convey the general impression" to "correctly conveying the general impression via vague or technically misleading statements", and that in some cases the thing we're trying to describe is so strange that no matter where we land on that continuum it'll feel like lying, because the description will be misleading in one way or the other.  That's a problem with full Eutopia, but I don't think it's a problem with the 2045 story, where we're being challenged not to describe the indescribable but to find the most plausible path towards a goal (a familiar but peaceful and prosperous world) which, although very desirable to many people, doesn't seem very likely if AGI is involved.

I think the biggest risk of dishonesty for this contest is that the TRUE most-plausible path to a peaceful & prosperous 2045 (even one that satisfies all the geopolitical conditions) still lies outside the Overton Window of what FLI is willing to publish, so people instead choose to write about less plausible paths that probably won't work.  (See my "cabal of secret geniuses runs the show from behind the scenes" versus "USA/China/EU come together to govern AI for all mankind" example in my comment to kokotajlod -- if the cabal-of-secret-geniuses path is what we should objectively be aiming for but FLI will only publish the latter story, that would be unfortunate.)

Maybe you think the FLI contest is immoral for exactly this reason -- because the TRUE most-plausible path to a good future doesn't/couldn't go through anything inside the Overton Window.  Yudkowsky has said a few things to this effect, about how no truly Pivotal Action (something your aligned AGI could do to prevent future unaligned AGIs from destroying the world) fits inside the Overton Window, and he just uses "have the AGI create a nanosystem to melt all the world's GPUs" (which I guess he sees as an incomplete solution) as a politically palatable illustrative example.  I'm not sure about this question and I'm open to being won over.

FLI launches Worldbuilding Contest with $100,000 in prizes

To clarify, I'm not affiliated with FLI, so I'm not the one imposing the constraints -- they are.  I'm just defending them, because the contest rules seem reasonable enough to me.  Here are a couple of thoughts:

  • Remember that my comment was drawing a distinction between "describing total Eutopia, a full and final state of human existence that might be strange beyond imagining" versus "describing a 2045 AGI scenario where things are looking positive and under-control and not too crazy".  I certainly agree with you that describing a totally transformed Eutopia where the USA and China still exist in exactly their current form is bizarre and contradictory.  My point about Eutopia was just that an honest description of something indescribably strange should err towards trying to get across the general feeling (ie, it will be nice) rather than trying to scare people with the weirdness.  (Imagine going back in time and horrifying the Founding Fathers by describing how in the present day "everyone sits in front of machines all day!! people eat packaged food from factories!!!".  Shocking the Founders like this seems misleading if the overall progress of science and technology is something they would ultimately be happy about.)  Do you agree with that, or do you at least see what I'm saying?

Anyways, on to the more important issue of this actual contest, the 2045 AGI story, and its oddly-specific political requirements:

  • I agree with you that a positive AGI outcome that fits all these specific details is unlikely.
  • But I also think that the idea of AGI having a positive outcome at all seems unlikely -- right now, if AGI happens, I'm mostly expecting paperclips!
  • Suppose I think AGI has a 70% chance of going paperclips, and a 30% chance of giving us any kind of positive outcome.  Would it be unrealistic for me to write a story about the underdog 30% scenario in which we don't all die horribly?  No, I think that would be a perfectly fine thing to write about.
  • What if I was writing about a crazy-unlikely, 0.001% scenario?  Then I'd be worried that my story might mislead people, by making it seem more likely than it really is.  That's definitely a fair criticism -- for example, I might think it was immoral for someone to write a story about "The USA has a communist revolution, but against all odds and despite the many examples of history, few people are hurt and the new government works perfectly and never gets taken over by a bloodthirsty dictator and central planning finally works better than capitalism and it ushers in a new age of peace and prosperity for mankind!".
  • But on the other hand, writing a very specific story is a good way to describe a goal that we are trying to hit, even if it's unlikely.  The business plan of every moonshot tech startup was once an unlikely and overly-specific sci-fi story.  ("First we're going to build the world's first privately-made orbital rocket.  Then we're going to scale it up by 9x, and we're going to fund it with NASA contracts for ISS cargo delivery.  Once we've figured out reusability and dominated the world launch market, we'll make an even bigger rocket, launch a money-printing satellite internet constellation, and use the profits to colonize Mars!").  Similarly, I would look much more kindly on a communist-revolution story if, instead of just fantasizing, it tried to plot out the most peaceful possible path to a new type of government that would really work -- trying to tell the most realistic possible story under a set of unrealistic constraints that define our goal.  ("..After the constitution had been fully reinterpreted by our revisionist supreme court justices -- yes, I know that'll be tough, but it seems to be the only way, please bear with me -- we'll use a Georgist land tax to fund public services, and citizens will contribute directly to decisionmaking via a cryptographically secured system of liquid democracy...")
  • FLI is doing precisely this: choosing a set of unrealistic constraints that define a positive near-term path for civilization that most normal people (not just wild transhumanist LessWrongers) would be happy about.  Chinese people wouldn't be happy about a sci-fi future that incidentally involved a nuclear war in which their entire country was wiped off the map.  Most people wouldn't be happy if they heard that the world was going to be transformed beyond all recognition, with the economy doubling every two months as the world's mountains and valleys are ripped up and converted to nanomachine supercomputers.  Et cetera.  FLI isn't trying to choose something plausible -- they're just trying to choose a goal that everybody can agree on (peace, very fast but not bewilderingly fast economic growth, exciting new technologies to extend lifespan and make life better).  Figuring out if there's any plausible way to get there is our job.  That's the whole point of the contest.

You say: "[This scenario seems so unrealistic that I can only imagine it happening if we first align AGI and then request that it give us a slow ride even though it's capable of going faster.] ...Would you accept interpretations such as this?"

I'm not FLI so it's not my job to say which interpretations are acceptable, but I'd say you're already doing exactly the work FLI was looking for!  I agree that this scenario is one of the most plausible ways that civilization might end up fulfilling the contest conditions.  Here are some other possibilities:

  • Our AGI paradigm turns out to be really limited for some reason and it doesn't scale well, so we get near-human AGI that does a lot to boost growth, but nothing really transformative.  (It seems very unlikely to me that AGI capabilities would top out at such a convenient level, but who knows.)
  • Civilization is totally out of control and the alignment problem isn't solved at all; we're in the middle of "slow takeoff" towards paperclips by 2050, but the contest timeline ends in 2045 so all we see is things getting nicer and nicer as cool new technologies are invented, and not the horrifying treacherous turn where it all goes wrong.  (This seems quite likely to me, but also seems to go strongly against the spirit of the question and would probably be judged harshly for lacking in the "aspirational" department.)
  • Who is in control of the AGI?  Maybe it's USA/China/EU all cooperating in a spirit of brotherhood to limit the pace of progress to something palatable and non-disorienting (the scenario you described).  Or maybe it's some kind of cabal of secret geniuses controlling things behind the scenes from their headquarters at DeepMind.  If you think AGI is more likely to be developed all at once via fast takeoff (thus giving huge disproportionate power to the first inventors of aligned AGI), you might see the "cabal of secret geniuses" story as more plausible than the version where governments all come together to competently manage AI for the sake of humanity.

See my response to Czynski for more assorted thoughts, although I've written so much at this point that perhaps I could have entered the contest myself if I had been writing stories instead of comments!  :P

Edited to add: alas, I only just now saw your other comment about "So in order to describe a good future, people will fiddle with the knobs of those important variables so that they are on their conducive-to-good settings rather than their most probable settings."  This strikes me as a fair criticism of the contest.  (For one, it will bias people towards handwaving over the alignment problem by saying "it turned out to be surprisingly easy".)  I don't think that's devastating for the contest, since I think there's a lot of value in just trying to envision what an agreeable good outcome for humanity looks like.  But it's definitely a fair critique that lines up with the stuff I was saying above -- basically, there are both pros and cons to putting $100K of optimization pressure behind getting people to figure out the most plausible optimistic outcome under a set of constraints.  (Maybe FLI should run another contest encouraging people to do more Yudkowsky-style brainstorming of how everything could go horribly wrong before we even realize what we were dealing with, just to even things out!)

Free money from New York gambling websites

Hi!  Separately from this current arbitrage opportunity, does the new NY law represent any kind of meaningful advance for the legality of prediction markets?

Also, if there are any rationalist sports fans out there with hot takes (or perhaps cold ones) on how I should bet, I'd be open to taking your advice.  After all, if I sign up for this, I will have to end up placing some random sports bets one way or another.  Might as well try to grab some of that mythical rationalist alpha while I'm at it.

FLI launches Worldbuilding Contest with $100,000 in prizes

The contest is only about describing 2045, not necessarily a radically alien far-future "Eutopia" end state of human civilization.  If humans totally solve alignment, we'd probably ask our AGI to take us to Eutopia slowly, allowing us to savor the improvement and adjust to the changes along the way, rather than leaping all the way to the destination in one terrifying lurch.  So I'm thinking there are probably some good ways to answer this prompt.

But let's engage with the harder question of describing a full Eutopia.  If Eutopia is truly good, then surely there must be honest ways of describing it that express why it is good and desirable, even if Eutopia is also scary.  Otherwise you'd be left with three options that all seem immoral:

  1. Silent elitism -- the rabble will never understand Eutopia, so we simply won't tell them where we're taking humanity.  They'll thank us later, when we get there and they realize it's good.
  2. Pure propaganda -- instead of trying to make a description that's an honest attempt at translating a strange future into something that ordinary people can understand, we give up all attempts at honesty and just make up a nice-sounding future with no resemblance to the Eutopia which is secretly our true destination.
  3. Doomed self-defeating attempts at honesty -- if you tell such a scary story about "Eutopia" that nobody would want to live there, then people will react badly to it and they'll demand to be steered somewhere else.   Because of your dedication to always emphasizing the full horror and incomprehensibility, your attempts to persuade people of Eutopia will only serve to move us farther away from it.

It's impossible to imagine infinity, but if you're trying to explain how big infinity is, surely it's better to say "it's like the number of stars in the night sky", or "it's like the number of drops of water in the ocean", than to say "it's like the number of apples you can fit in a bucket".  Similarly, the closest possible description of the indescribable Eutopia must be something that sounds basically good (even if it is clearly also a little unfamiliar), because the fundamental idea of Eutopia is that it's desirable.  I don't think that's lying, any more than trying to describe other indescribable things as well as you can is lying.

Yudkowsky's own essay "Eutopia is Scary" was part of a larger "Fun Theory" sequence about attempting to describe utopias.  He mostly described them in a positive light, with the "Eutopia is Scary" article serving as an important, but secondary, honesty-enhancing caveat: "these worlds will be a lot of fun, but keep in mind they'll also be a little strange".

Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation

This post was helpful to me in understanding what I should aim to accomplish with my own personal donations.  I expect that many other EAs feel similarly -- donating is an important part of being an EA for many people, but the question of how to maximize impact as a small-scale individual donor is a complex puzzle when you consider the actions of other donors and the community as a whole.  This post is a clear, early articulation of key themes that show up in the continual debate and discussion that surround real-world individual donation decisions.  I think it has stood the test of time and could easily go on to help more EAs figure out their donations in the future, justifying its inclusion in the decadal review.

There's No Fire Alarm for Artificial General Intelligence

This is going to be a quick review, since there has been plenty of discussion of this post and people understand it well.  But this post was very influential for me personally, and helped communicate yet another aspect of the key problem with AI risk -- the fact that it's so unprecedented, which makes it hard to test and iterate solutions, hard to raise awareness and get agreement about the nature of the problem, and hard to know how much time we have left to prepare.

AI is simply one of the biggest worries among longtermist EAs, and this essay does a good job describing a social dynamic unique to the space of AI risk that makes dealing with the risk harder.  For this reason it would be a fine inclusion in the decadal review.

Existential Risk and Economic Growth

I think this research into x-risk & economic growth is a good contribution to patient longtermism.  I also think that integrating thoughts on economic growth more deeply into EA holds a lot of promise -- maybe models like this one could someday form a kind of "medium-termist" bridge between different cause areas, creating a common prioritization framework.  For both of these reasons I think this post is worthy of inclusion in the decadal review.

The question of whether to be for or against economic growth in general is perhaps not the number-one most pressing dilemma in EA (since everyone agrees that differential technology development into x-risk-reducing areas is very important), but it is surely up there, since it's such a big-picture question that affects so many decisions.  Setting aside x-risk concerns, economic growth obviously looks attractive -- both in the developing world, where it's a great way to make the world's poorest people more prosperous, and in the first world, where the causes championed by "progress studies" promise to create a more prosperous and dynamic society where people can live better lives.  But of course, by longtermist lights, how fast we get to the future is less important than making sure we get there at all.  So, in the end, what to do about influencing economic growth?  Leopold's work is probably just a starting point for this huge and perhaps unanswerable set of questions.  But it's a good start -- I'm not sure whether I want more economic growth or not, but I definitely want more posts like this one tackling the question.

For the decadal review, rather than the literal text of this post (which merely refers to the pdf) or the comprehensive 100-page pdf itself, I'd suggest including Leopold's "Works in Progress" article summarizing his research.

Big List of Cause Candidates

There are many reasons why I think this post is good:

  • This post has been personally helpful to me in exploring EA and becoming familiar with the arguments for different areas.
  • Having resources like this also contributes to the "neutrality" and "big-tent-ness" of the Effective Altruism movement (which I think are some of the most promising elements of EA), and helps fight against the natural forces of inertia that help entrench a few cause areas as dominant simply because they were identified early.
  • Honestly, having a "Big List" that just neutrally presents other people's claims, rather than a curated, prioritized selection of causes, is helpful in part because it encourages people to form their own opinions rather than deferring to others.  When I look at this list of cause candidates, I see plenty of what I'd consider to be obvious duds, and others that seem sorely underrated.  You'd probably disagree with me on the details, and that's a good thing!
  • Finally, this post helped me realize that simply listing and organizing all the intellectual work that happens in EA can be an effective way to contribute.  As a highly distributed, intellectually complex, and extremely big-tent social/academic/intellectual/philanthropic movement, there is a lot going on in EA and there is a lot of value in helping organize and explain all the different threads of thought that make up the movement.  For this reason especially, it would be good to include this post in the Decadal Review, since the Decadal Review is also an attempt to wrangle and organize the recent intellectual progress of the EA movement -- it's a reasonably up-to-date map of the EA cause area landscape, all in one post!  (On the downside, it might not work well in a printed book since it's so heavy on hyperlinks.)

The Drowning Child and the Expanding Circle

As the Creative Writing Contest noted, Singer's drowning-child thought experiment "probably did more to launch the EA movement than any other piece of writing".  The key elements of the story have spread far and wide -- when I was in high school in 2009, an English teacher of mine related the story to my class as part of a group discussion, years before I had ever heard of Effective Altruism or anything related to it.

Should this post be included in the decadal review?  Certainly, its importance is undisputed.  If anything, Singer's essay might be too well-known to merit inclusion in a decadal review, since its basic logic (that suffering far away still matters, and that suffering far away can sometimes be averted very cheaply) is essentially the starting-point through which almost all new EAs are introduced to the movement.  It also might fail due to the technicality that the story was originally written in 1997, despite being reposted on our dear Forum in 2014.

Certainly there have been criticisms of the story, such as Yudkowsky's here.  It seems a bit of a bait-and-switch to have the story be about diving into a pool (which only takes a few minutes and at most ruins a nice set of clothes), when Singer then says that we are all in the situation described due to the existence of charities like the Against Malaria Foundation, which can save a life for several thousand dollars (which even for most rich-world citizens is more like the hard-earned savings from several months' labor).  But that is just nitpicking plot details -- the fundamentals of the story are sound.

The Narrowing Circle (Gwern)

I see Gwern's/Aaron's post about The Narrowing Circle as part of an important thread in EA devoted to understanding the causes of moral change.  By probing the limits of the "expanding circle" idea for counterexamples, perhaps we can understand it better.

Effective altruism is popular among moral philosophers, and EAs are often seeking to expand people's "moral circle of concern" towards currently neglected classes of beings, like nonhuman animals or potential future generations of mankind.  This is a laudable goal (and one which I share), but it's important that the movement does not get carried away with the assumption that such a cultural shift is inevitable.  The phenomenon of the expanding circle must be caused by something, and those causes are probably driven by material conditions that could change or reverse in the future.

As I see it, the strongest part of the argument for a "narrowing circle" is the "Ancestors" and "Descendants" sections.  It seems plausible to me that preindustrial "farmer" culture placed nigh-obsessive emphasis on honoring the wishes of your ancestors and securing a promising future for your descendants.  (I suspect this is probably because, in a world where income came from farming the land rather than hunting/gathering or performing skilled industrial-age work for wages, inheritance of farmland from one generation to the next became crucially important.)  Much of the modern world seems to have essentially abandoned the idea that we should place much weight on the values of our ancestors, which should be concerning to longtermists, since valuing the lives of ancestors seems very close to valuing the lives of unborn generations (see for instance Chesterton's quote about how "tradition is the democracy of the dead").

The idea that concern for descendants has also decreased is certainly a worry worth investigating -- perhaps a logical place to start would be by investigating  how much the recent worldwide decline in fertility rates really reflects a decreased desire for children.  A drop in respect for ancestors might also directly cause a drop in concern for descendants -- it might be logical to disregard the lives of future generations if we assume that they (just like us) will ignore the wishes of their ancestors!

Anyways, here are some other pieces that seem relevant to the thread of "investigating what drives moral change":
- AppliedDivinityStudies arguing that moral philosophy is not what actually drives moral progress.
- A lot of Slate Star Codex / Astral Codex Ten is about understanding cultural changes.  Here for instance is a dialogue about shifting moral foundations, expanding circles, and what that might tell us about how things will continue to shift in the future.

Finally, I think that investigating the "expanding circle" is doubly important because it's not just an assumption of a couple of people within the nascent EA movement... it's very similar to one of the core legitimizing stories held up to justify mainstream liberal democracy!  I am talking about the whole civil-rights story that "the moral arc of the universe bends towards justice", that democracy is good because it has led an expanding-circle-style transition towards increasingly recognizing the civil rights of women, minorities, etc.  I think this story is true, but I don't know exactly why, and I don't think the trend is guaranteed to continue.  (Was democracy itself the cause, or merely another symptom of a larger force like the Industrial Revolution?)

For all these reasons, I think this is a good post worthy of inclusion in the decadal review.
