Scriptwriter for RationalAnimations! Interested in lots of EA topics, but especially ideas for new institutions like prediction markets, charter cities, georgism, etc. Also a big fan of EA / rationalist fiction!
The Christians in this story who lived relatively normal lives ended up looking wiser than the ones who went all-in on the imminent-return-of-Christ idea. But of course, if Christianity had been true and Christ had in fact returned, maybe the crazy-seeming, all-in Christians would have had a huge amount of impact.
Here is my attempt at thinking up other historical examples of transformative change that went the other way:
Muhammad's early followers must have been a bit uncertain whether this guy was really the Final Prophet. Do you quit your day job in Mecca so that you can flee to Medina with a bunch of your fellow cultists? In this case, it probably would've been a good idea: seven years later you'd be helping lead an army of 10,000 holy warriors to capture the city of Mecca. And over the next thirty years, you'd help convert/conquer all the civilizations of the Middle East and North Africa.
Less dramatic versions of the above story could probably be told about joining many fast-growing charismatic social movements (like a political movement or revolution). Or, more relevantly to AI, about joining a fast-growing Bay Area startup whose technology might change the world (like early Microsoft, Google, Facebook, etc.).
You're a physics professor in 1940s America. One day, a team of G-men knocks on your door and asks you to join a top-secret project to design an impossible superweapon capable of ending the Nazi regime and stopping the war. Do you quit your day job and move to New Mexico?...
You're a "cypherpunk" hanging out on online forums in the mid-2000s. Despite the demoralizing collapse of the dot-com boom and the failure of many of the most promising projects, some of your forum buddies are still excited about the possibilities of creating an "anonymous, distributed electronic cash system", such as the proposal called B-money. Do you quit your day job to work on weird libertarian math problems?...
People who bet everything on transformative change will always look silly in retrospect if the change never comes. But the thing about transformative change is that it does sometimes occur.
(Also, fortunately our world today is quite wealthy -- AI safety researchers are pretty smart folks and will probably be able to earn a living and save for retirement, even if all their predictions come up empty.)
Pretty much all company owners (or their investors) believe that they know best how to reinvest income.
Unfortunately, they mostly overestimate their own knowledge in this regard.
The idea that random customers would be better at corporate budgeting than the people who work at those companies and think about corporate strategy every day is a really strong claim, and you should offer evidence for it if you want people to take your fintech idea seriously.
Suppose I buy a new car from Toyota, and now I get to decide how Toyota invests the $10K of profit they made by selling me the car. There are immediately so many problems:
Hello!
I'm glad you found my comment useful! I'm sorry if it came across as scolding; I interpreted Tristan's original post as aimed at advising giant mega-donors like Open Philanthropy, more so than individual donors. In my book, anybody donating to effective global health charities is doing a very admirable thing -- especially in these dark days when the US government seems to be trying to dismantle much of its foreign aid infrastructure.
As for my own two cents on how to navigate this situation (especially now that artificial intelligence feels much more real and pressing to me than it did a few years ago), here are a bunch of scattered thoughts (FYI these bullets have kind of a vibe of "sorry, I didn't have enough time to write you a short letter, so I wrote you a long one"):
However, unless we very soon get a nightmare-scenario "fast takeoff" where AI recursively self-improves and seizes control of the future over the course of hours to weeks, it seems like there will probably be a transition period where approximately human-level AI is rapidly transforming the economy and society, but where ordinary people like us can still substantially influence the future. There are a couple of ways we could hope to influence the long-term future:
For a couple of examples of interventions that could sit midway along a spectrum from GiveWell-style interventions to AI safety research, and that are also focused on influencing the AGI transition period, consider Dario Amodei's vision of what an aspirational AGI transition might look like, and what it would take to bring it about:
"However, the likely mass extinction of K-strategists and the concomitant increase in r-selection might last for millions of years."
I like learning about ecology and evolution, so personally I enjoy these kinds of thought experiments. But in the real world, isn't it pretty unlikely that natural ecosystems will just keep humming along for another million years? I would guess that within just the next few hundred years, human civilization will have grown in power to the point where it can do what it likes with natural ecosystems:
Some of those scenarios might be dismissible as the kind of "silly sci-fi speculation" mentioned by the longtermist-style meme below. But others seem pretty mundane, indeed "to be expected" even by the most conservative visions of the future. To me, the million-year impact of things like climate change only seems relevant in scenarios where human civilization collapses pretty soon, but in a way that leaves Earth's biosphere largely intact (maybe if humans all died of a pandemic?).
Infohazards are indeed a pretty big worry of lots of the EAs working on biosecurity: https://forum.effectivealtruism.org/posts/PTtZWBAKgrrnZj73n/biosecurity-culture-computer-security-culture
IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public is that it gives us at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail. Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge. This situation, although unfortunate, is probably justified by the nature of the risks! But still, it makes it hard for anyone on the outside to tell how serious the risks are, or to understand the problems in detail, or to feel sufficiently motivated about the urgency of creating solutions.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are more worried about, say, infohazardous-bioengineering-technique-#5 than about mirror life. Just being able to use mirror life as an example seems much healthier than having zero concrete examples and everything shrouded in secrecy.
Some of the cross-cutting things I am thinking about:
So, I think it might be a kind of epistemic boon for all of biosecurity to have this public example, which will help clarify debates / advocacy / etc. about the need for various proposed policies or investments.
Thinking about my point #3 some more (how do you launch a satellite after a nuclear war?), I realized that if you put me in charge of making a plan to DIY this (instead of lobbying the US military to do it for me, which would be my first choice), and if SpaceX also wasn't answering my calls to see if I could buy any surplus Starlinks...
You could do worse than partnering with Rocket Lab, a satellite and rocket company based in New Zealand, and developing the emergency satellite based on their "Photon" platform (the design has flown before, small enough to still be kinda cheap, big enough to generate much more power than a cubesat). Then Rocket Lab could launch their Electron rocket from New Zealand in the event of a nuclear war, and in a real crisis like that, the whole company would help make sure the mission happened -- the idea of partnering with someone rather than just buying a satellite is key, IMO, because then it's mostly THEIR end-of-the-world plan, and in a crisis you'd benefit from their expertise / workforce.
I'd try to talk to the CEO and get him on board. Seems like the kind of flashy, Elon-esque, altruistic-in-a-sexy-way mission that could help make Rocket Lab seem "cool" and recruit eager mission-driven employees. (Rocket Lab's CEO currently has ambitions to do similar flashy missions, like sending their own probe to Venus.)
But this would definitely be more like a $30M project than a $300K project.
Kind of a funny selection effect going on here where, if you pick sufficiently promising / legible / successful orgs (like Against Malaria Foundation), isn't that just funging against OpenPhil funding? This leads me to want to upweight new and not-yet-proven orgs (like the several new AIM-incubated charities), plus things like PauseAI and Wild Animal Initiative that OpenPhil feels it can't fund for political reasons. (The same argument would apply for invertebrate welfare, but I personally don't really believe in invertebrate welfare. Sorry!)
I'm also somewhat saddened by the inevitable popularity-contest nature of the vote; I feel like people are picking orgs they've heard of and picking orgs that match their personal cause-prioritization "team" (global health vs x-risk vs animals). I like the idea that EA should be experimental and exploratory, so (although I am a longtermist myself), I tried to further upweight some really interesting new cause areas that I just learned about while reading these various posts:
- Accion Transformadora's crime-reduction stuff seems like a promising new space to explore for potentially effective interventions in middle-income countries.
- One Acre Fund is potentially neat, I'm into the idea of economic-growth-boosting interventions and this might be a good one.
- It's neat that Observatorio de Riesgos Catastroficos is doing a bunch of cool x-risk-related projects throughout Latin America; their nuclear-winter-resilience-planning stuff in Argentina and Brazil seems like a particularly well-placed bit of local lobbying/activism.
But alas, there can only be three top-three winners, so I ultimately spent my top votes on Team Popular Longtermist Stuff (Nucleic Acid Observatory, PauseAI, MATS) in the hopes that one of them, probably PauseAI, would become a winner.
(longtermist stuff)
1. Nucleic Acid Observatory
2. Observatorio de Riesgos Catastroficos
3. PauseAI
4. MATS
(interesting stuff in more niche cause areas, which I sadly doubt can actually win)
5. Accion Transformadora
6. One Acre Fund
7. Unjournal
(if longtermism loses across the board, I prefer wild animal welfare to invertebrate welfare)
8. Wild Animal Initiative
9. Faunalytics
To answer with a sequence of increasingly "systemic" ideas (naturally the following will be tinged by my own political beliefs about what's tractable or desirable):
There are lots of object-level lobbying groups that have strong EA endorsement. This includes organizations advocating for better pandemic preparedness (Guarding Against Pandemics), better climate policy (like CATF and others recommended by Giving Green), or beneficial policies in third-world countries like salt iodization or lead paint elimination.
Some EAs are also sympathetic to the "progress studies" movement and to the modern neoliberal movement connected to the Progressive Policy Institute and the Niskanen Center (which are both tax-deductible nonprofit think tanks). This often includes enthusiasm for denser ("YIMBY") housing construction, reforming how science funding and academia work in order to speed up scientific progress (as advocated by New Science), increasing high-skill immigration, and having good monetary policy. All of those cause areas appear on Open Philanthropy's list of "U.S. Policy Focus Areas".
Naturally, there are many ways to advocate for the above causes -- some are more object-level (like fighting to get an individual city to improve its zoning policy), while others are more systemic (like exploring the feasibility of "Georgism", a totally different way of valuing and taxing land which might do a lot to promote efficient land use and encourage fairer, faster economic development).
One big point of hesitancy is that, while some EAs have a general affinity for these cause areas, in many areas I've never heard any particular standout charities being recommended as super-effective in the EA sense... for example, some EAs might feel that we should do monetary policy via "nominal GDP targeting" rather than inflation-rate targeting, but I've never heard anyone recommend that I donate to some specific NGDP-targeting advocacy organization.
I wish there were more places like Center for Election Science, living purely on the meta level and trying to experiment with different ways of organizing people and designing democratic institutions to produce better outcomes. Personally, I'm excited about Charter Cities Institute and the potential for new cities to experiment with new policies and institutions, ideally putting competitive pressure on existing countries to better serve their citizens. As far as I know, there aren't any big organizations devoted to advocating for adopting prediction markets in more places, or adopting quadratic public goods funding, but I think those are some of the most promising areas for really big systemic change.