
Jackson Wagner

Scriptwriter for RationalAnimations @ https://youtube.com/@RationalAnimations
3079 karma · Joined Apr 2021 · Working (6-15 years) · Fort Collins, CO, USA

Bio

Scriptwriter for RationalAnimations!  Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, Georgism, etc.  Also a big fan of EA / rationalist fiction!

Comments (311)

To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):

There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.

Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (both tax-deductible nonprofit think-tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".

Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).

One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.

I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.
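(Side note for readers who haven't seen the mechanism: below is a minimal sketch of quadratic funding, using the standard "sum of square roots, squared" scoring. This is a simplified textbook version that splits the matching pool proportionally -- not any particular organization's actual implementation, which typically involves caps, subtracting raw contributions, and sybil defenses.)

```python
import math

def quadratic_funding_matches(contributions: list[list[float]], pool: float) -> list[float]:
    """Split a matching pool across projects via (simplified) quadratic funding.

    contributions[i] is the list of individual donations to project i.
    Each project's QF score is (sum of sqrt(donation))^2, which rewards
    broad support from many small donors over a few large ones; the
    pool is then divided in proportion to those scores.
    """
    scores = [sum(math.sqrt(c) for c in donors) ** 2 for donors in contributions]
    total = sum(scores)
    return [pool * score / total for score in scores]

# Example: project A has 100 donors giving $1 each; project B has a
# single donor giving $100.  Same total raised, very different match:
print(quadratic_funding_matches([[1.0] * 100, [100.0]], pool=1000.0))
# -> [~990.10, ~9.90]: broad support earns ~100x the matching funds.
```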

The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had huge amounts of impact.

Here is my attempt at thinking up other historical examples of transformative change that went the other way:

  • Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.

  • Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like joining a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing bay-area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc).

  • You're a physics professor in 1940s America. One day, a team of G-men knocks on your door and asks you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...

  • You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...

People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.

(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living for themselves to pay for retirement, even if all their predictions come up empty.)

Yeah, I wondered what threshold to set things at -- $10m is a pretty easy bar for some of these areas, although of course some of my listed cause areas are more niche / fringe than others. I figure that for the highest-probability markets, where $10m is considered all but certain, maybe I can follow up with a market asking about a $50m or $100m threshold.

I agree that $10m isn't "mainstream" in the sense of joining the pantheon alongside biosecurity, AI safety, farmed animal welfare, etc. But it would still be a big deal to me if, say, OpenPhil doubled their grantmaking to "land use" and split the money equally between YIMBYism and Georgism. Or if mitigating stable totalitarianism risk got as much support as "progress studies"-type stuff. $10m of grants towards studying grabby aliens or the simulation hypothesis or etc would definitely be surprising!

There are definitely a lot of examples of places where some rich people wanted to try to create a kinda dumb, socially-useless tax haven, and then they accomplished that goal, and then the resulting entity had either negative impact or close-to-zero impact on the surrounding area. (I don't know much about Monaco or the Cayman Islands, but these seem like potentially good examples?)  But there have also been times when political leaders have set out to create sustained, long-term, positive-sum economic growth, and this has also occasionally been achieved!  (Dubai, South Korea, Guangzhou... I'm not as familiar with the stories of places like Rwanda or Botswana or Bangladesh, but there are a lot of countries which are trying pretty hard to follow a kind of best-practices economic development playbook, and often seeing decent results.)

Both these phenomena predate the "charter cities" concept... as I understand it, the goal of orgs like the Charter Cities Institute is not to blindly cheerlead the creation of new cities of all kinds (as we mention in the video, lots of new cities are being built already across the rapidly-urbanizing global South), but rather to encourage a specific model of development that looks more like the Dubai / South Korea story, instead of more cities built as relatively useless tax havens, as small, limited special economic zones (SEZs) that can't generate their own economic momentum, or as mere infrastructure projects with no economic/legal reform component.

I could definitely see myself agreeing with a criticism like "Sure, charter cities advocates do a LITTLE bit of work to avoid accidentally letting their ideas get used as an excuse to actually create useless tax havens, but actually they need to do a LOT MORE work to guard against this failure mode".  Right now I guess I feel like I don't know enough about the status of specific projects to confidently identify what exact mistakes various charter-city groups are making.  But we did try to allude to this failure mode in the video when we talked about Paul Romer's complaints about the Honduras charter cities law.


Re: the idea that creating more competition can lead to more good things, but also makes it harder to coordinate to prevent negative externalities -- yup, this is definitely something that I think about. I tend to think that since there are already almost 200 countries in the world, coordination on the most important topics -- stuff like nuclear nonproliferation, the ongoing global moratorium on slavery, and international agreements about climate or (potentially soon) about AI -- already has to deal with lots of competing stakeholders, and hopefully won't be impeded too much by adding some charter cities to the mix. (This is one area where it definitely helps that, at the end of the day, charter cities lack top-level national sovereignty!) I also think charter cities have a lot of potential benefits that could help with these risks, namely by pioneering new styles of governance / regulation / institutions that could find better ways of dealing with some of these problems.

Nevertheless, I agree it's a real trade-off. We're actually working on a draft script about "risks of stable totalitarianism" at RationalAnimations, and in that video we're planning to spend a lot more time talking about a similar tradeoff space. It's obviously extremely helpful to have global coordination / relatively unified world governance for solving important problems, so the best ways of reducing stable-totalitarianism risk are things like differential technological development, or maybe influencing cultural norms -- not just decentralizing stuff, since blindly decentralizing makes coordination harder!

Hyperbolic discounting, despite its reputation for being super-short-term and irrational, is actually better in this context, and doesn't run into the same absurd "value an extra meal in 10,000 years more than a thriving civilization in 20,000 years" problems of exponential discounting.

Here is a nice blog post arguing that hyperbolic discounting is actually more rational than exponential: hyperbolic discounting is what you get when you have uncertainty over what the correct discount rate should be.
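To spell out that argument (my own toy derivation, assuming an exponential prior over the unknown rate -- not necessarily the linked post's exact setup): if you average the exponential discount factor over your uncertainty about the rate r, you get a hyperbola, not an exponential.

```latex
% Unknown discount rate r, with prior r ~ Exp(\lambda):
%   p(r) = \lambda e^{-\lambda r}
% Expected discount factor at time t:
\mathbb{E}\!\left[e^{-rt}\right]
  = \int_0^\infty \lambda e^{-\lambda r}\, e^{-rt}\, dr
  = \frac{\lambda}{\lambda + t}
  = \frac{1}{1 + t/\lambda}
% This is the hyperbolic form 1/(1 + kt) with k = 1/\lambda.
```

Intuitively, at t = 20,000 years the average is dominated by the scenarios where the true discount rate happened to be tiny, so the far future never gets discounted at the brutal fixed rate that produces the absurd meal-vs-civilization comparison above.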

Nice!  I like this a lot more than the chaotic multi-choice markets trying to figure out exactly why he was fired.

Very interested to find out some of the details here:

  • Why now?  Was there some specific act of wrongdoing that the board discovered (if so, what was it?), or was now an opportune time to make a move that the board members had secretly been considering for a while, or etc?
  • Was this a pro-AI-safety move that EAs should ultimately be happy about (ie, initiated by the most EA-sympathetic board members, with the intent of bringing in more x-risk-conscious leadership)?  Or is this a disaster that will end up installing someone much more focused on making money than on talking to governments and figuring out how to align superintelligence?  Or is it relatively neutral from an EA / x-risk perspective?  (Update: first speculation I've seen is this cautiously optimistic tweet from Eliezer Yudkowsky)
  • Greg Brockman, chairman of the board, is also stepping down.  How might this be related, and what might this tell us about the politics of the board members and who supported/opposed this decision?

Definitely agree that empathy and other social feelings provide indirect evidence for self-awareness (ie, "modeling stuff about yourself" in your brain) in a way that optimism/pessimism or pain-avoidance doesn't.  (Although wouldn't a sophisticated-enough RL circuit, interacting with other RL circuits in some kind of virtual evolutionary landscape, also develop social emotions like loyalty, empathy, etc?  Even tiny mammals like mice/rats display sophisticated social behaviors...)

I tend to assume that some kind of panpsychism is true, so you don't need extra "circuitry for experience" in order to turn visual-information-processing into an experience of vision.  What would such extra circuitry even do, if not the visual information processing itself?  (Seems like maybe you are a believer in what Daniel Dennett calls the "fallacy of the second transduction"?)
Consequently, I think it's likely that even simple "RL algorithms" might have very limited, very shallow, non-self-aware kinds of experience: an image-classifier is doing visual-information-processing, so it probably also produces isolated "experiences of vision"!  But of course it would not have any awareness of itself as being a thing-that-sees, nor would those isolated experiences of vision be necessarily tied together into a coherent visual field, etc.

So, I tend to think that fish and other primitive creatures probably have "qualia", including something like a subjective experience of suffering, but that they probably lack any sophisticated self-awareness / self-model, so it's kind of just "suffering happening nowhere" or "an experience of suffering not connected to anything else" -- the fish doesn't know it's a fish, doesn't know that it's suffering, etc, the fish is just generating some simple qualia that don't really refer to anything or tie into a larger system.  Whether you call such a disconnected & shallow experience "real qualia" or "real suffering" is a question of definitions.

I think this personal view of mine is fairly similar to Eliezer's from the Sequences: there are no "zombies" (among humans or animals), there is no "second transduction" from neuron activity into a mythical medium-of-consciousness (no "extra circuitry for experience" needed), rather the information-processing itself somehow directly produces (or is equivalent to, or etc) the qualia.  So, animals and even simpler systems probably have qualia in some sense.  But since animals aren't self-aware (and/or have less self-awareness than humans), their qualia don't matter (and/or matter less than humans' qualia).

...Anyways, I think our core disagreement is that you seem to be equating "has a self-model" with "has qualia", versus I think maybe qualia can and do exist even in very simple systems that lack a self-model.  But I still think that having a self-model is morally important (atomic units of "suffering" that are just floating in some kind of void, unconnected to a complex experience of selfhood, seem of questionable moral relevance to me), so we end up having similar opinions about how it's probably fine to eat fish.

I guess what I am objecting to is that you are acting like these philosophical problems of qualia / consciousness / etc are solved and other people are making an obvious mistake.  I agree that I see a lot of people being confused and making mistakes, but I don't think the problems are solved!

Why would showing that fish "feel empathy" prove that they have inner subjective experience?  It seems perfectly possible to build a totally mechanical, non-conscious system that nevertheless displays signs of empathy.  Couldn't fish just have some kind of built-in, not-necessarily-conscious instinct to protect other fish (for instance, by swimming together in a large school) in order to obtain some evolutionary benefit?

Conversely, isn't it possible for fish to have inner subjective experience but not feel empathy?  Fish are very simple creatures, while "empathy" is a complicated social emotion.  Especially in a solitary creature (like a shark, or an octopus), it seems plausible that you might have a rich inner world of qualia alongside a wide variety of problem-solving / world-modeling skills, but no social instincts like jealousy, empathy, loyalty, etc.  Fish-welfare advocates often cite studies that seem to show fish having an internal sense of pain vs pleasure (eg, preferring water that contains numbing medication), or that bees can have an internal sense of being optimistic/risky vs pessimistic/cautious -- if you think that empathy proves the existence of qualia, why are these similar studies not good enough for you?  What's special about the social emotion of empathy?

Personally, I am more sympathetic to the David Chalmers "hard problem of consciousness" perspective, so I don't think these studies about behaviors (whether social emotions like jealousy or more basic emotions like optimism/pessimism) can really tell us that much about qualia / inner subjective experience.  I do think that fish / bees / etc probably have some kind of inner subjective experience, but I'm not sure how "strong", or vivid, or complex, or self-aware, that experience is, so I am very uncertain about the moral status of animals.

(Personally, I also happily eat fish & shrimp all the time -- this is due to a combination of me wanting to eat a healthy diet without expending too much effort, and me figuring that the negative qualia experienced by creatures like fish is probably very small, so I should spend my efforts trying to improve the lives of current & future humans (or finding more-leveraged interventions to reduce animal farming) instead of on trying to make my diet slightly more morally clean.)

In general, I think this post is talking about consciousness / qualia / etc in a very confused way -- if you think that empathy-behaviors are ironclad proof of empathy-qualia, you should also think that other (pain-related, etc) behaviors are ironclad proof of other qualia.

April Fools' Day request:

I was reading the OpenAI blog post "Learning to Summarize with Human Feedback" from the AI Safety Fundamentals course (https://openai.com/research/learning-to-summarize-with-human-feedback), especially the intriguing bit at the end about how, when they fully optimize the model against their learned reward model, it actually overfits and produces lower-quality summaries.
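(Tangent: that over-optimization result is a nice concrete instance of Goodhart's law. Here's a toy best-of-N simulation -- entirely my own illustration, not the paper's setup. Each candidate's "proxy" reward is its true quality plus reward-model error; the harder you optimize against the proxy, the further the proxy score pulls away from the true one.)

```python
import random
import statistics

random.seed(0)

def best_of_n(n: int, trials: int = 2000) -> tuple[float, float]:
    """Select the highest-PROXY-reward candidate out of n, and return
    the average proxy and true ("gold") reward of the selections."""
    proxy_picks, gold_picks = [], []
    for _ in range(trials):
        # Each candidate: (true quality, reward-model error), both N(0, 1).
        candidates = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
        gold, err = max(candidates, key=lambda c: c[0] + c[1])  # argmax of proxy
        proxy_picks.append(gold + err)
        gold_picks.append(gold)
    return statistics.mean(proxy_picks), statistics.mean(gold_picks)

for n in (1, 4, 16, 64, 256):
    proxy, gold = best_of_n(n)
    print(f"n={n:3d}  proxy reward={proxy:5.2f}  gold reward={gold:5.2f}")
# As n grows, proxy reward keeps climbing but gold reward climbs only
# half as fast -- the rest of the "improvement" is overfitting to the
# reward model's errors.  (In the paper, quality eventually *declines*
# under heavy optimization, which this simple Gaussian toy doesn't capture.)
```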

My ill-advised request is that I would just LOVE to see the EA Forum's "summaryBot" go similarly haywire for a day and start summarizing every post in the same repetitive / aggressive tone as the paper:

"28yo dude stubbornly postponees start pursuing gymnastics hobby citing logistics reasons despite obvious interest??? negatively effecting long term fitness progress both personally and academically thoght wise? want change this dumbass shitty ass policy pls"
