Jackson Wagner

Engineer working on next-gen satellite navigation at Xona Space Systems. I write about rationalist and longtermist topics at jacksonw.xyz

Comments

Ideas to improve the Effective Altruism Movement

I think the biggest problem with this idea is that it would take an incredible amount of effort to analyze the altruistic impact of minor personal decisions. (If I buy a GPU to play videogames, is that shortening or lengthening AI timelines? Helping or hurting the development of Taiwan, China, etc.? Is this a luxurious waste of resources, or is gaming actually a relatively cheap hobby that lets me donate more money overall than if I pursued more expensive forms of leisure?)

Right now, it takes the analytical bandwidth of the entire EA movement just to build a rough consensus about the relative effectiveness of a handful of high-level interventions in specific areas (global development, biosecurity, factory farming, etc). Even then, the difficulty of analysis has pushed EA towards the idea that we should be exploring more "megaprojects" -- scalable interventions that absorb a large amount of money relative to the analytical effort needed to identify and implement them.

Analyzing the ethical impact of everyday decisions (like about where to live, how to commute, what to eat, who to vote for, etc) is essentially a pitch for "microprojects", and would be more suited to a world where there were very many more people interested in EA but much less funding available.

(All that said, I personally would love to peruse someone's analysis of some everyday lifestyle decisions by altruistic impact -- this has already been done well by farmed-animal-welfare people looking at the impact of eating different kinds of food/meat, and it also has some overlap with Ben Williamson's effort to research Effective Self-Help.  For instance, I would really be curious to read someone's take on things like my GPU-related questions above.)

Digital people could make AI safer

Digital people seem a long way off technologically, whereas AI could be either a long way off or right around the corner. This argues against focusing on digital people, since there's only a small chance that we'd get digital people first.

But on the other hand, the neuroscience research needed for digital people might start paying dividends far before we get to whole-brain emulation.  High-bandwidth "brain-computer interfaces" might be possible long before digital people, and BCIs might also help with AI alignment in various ways.  (See this LessWrong tag.)  Some have also argued that neuroscience research might help us create more human-like AI systems, although I am skeptical on this point.

Things usually end slowly

Other commenters are arguing that next time things might be different, due to the nature of technological risks like AI.  I agree, but I think there's an even simpler reason to focus attention on rapid-extinction scenarios: we don't have as much time to prevent them!

If we were equally worried about extinction due to AI versus extinction due to slow economic stagnation / declining birthrates / political decay / etc, we might still want to put most of our effort into solving AI. As they say, "there's a lot of ruin in an empire" -- if human civilization were on track to dwindle away over centuries, we'd also have centuries to try to turn things around.

Here are the finalists from FLI’s $100K Worldbuilding Contest

Ah, I am so sorry! I must have conflated your entry with entry 281 -- fixed in the post!

Breaking Up Elite Colleges

I am familiar with this line of thinking, and I am pretty sympathetic to it. (I don't think that literally breaking up universities, antitrust style, would lead to more research happening, but it might perhaps lead to research on more useful topics, or something like that. It might also help reduce cost of living for ordinary folks by limiting/taxing the amounts people spend on education-related signaling, which would be great.) I see "encouraging more competition in education", which includes both taxing incumbent top schools like Harvard and also encouraging the formation of many new types of schools, as something that could be helpful to humanity from a progress-studies perspective of encouraging general economic growth and human thriving.

For better or worse, Effective Altruism often prefers to prioritize extremely heavily, focusing only on the most effective cause areas, which can leave a lot of progress-studies-ish causes without a good place in EA even when their effects are pretty huge. Things like YIMBY, metascience, prediction markets, anti-aging research, charter cities, increased high-skill immigration, etc, might be huge boons for humanity, but these general interventions can sometimes feel like they've been orphaned by the EA movement -- "middle-term" cause areas lost between longtermism (which dominates on effectiveness) and neartermism (which prefers things to be empirically provable and relatively non-political).

I say all this to explain that usually I am fighting on behalf of the middle-termist causes, arguing that prediction markets are a great general intervention for civilization, whereas many EAs would prefer to just use some prediction techniques for understanding AI timelines, and not bother trying to scale up markets and improve society's epistemics overall.

But in this situation, the tables have turned!! Now I find myself in the opposite role -- I agree with you that encouraging competition in higher education would be good and I hope it happens, but I am like "Meh, is this really such a big problem that it should become an important EA cause area?" Instead of this general intervention, why not do something more focused, like deliberately exploiting the broken higher-education signaling game by purchasing influence at an elite university and then using that platform to focus more energy on core cause areas like AI safety: https://forum.effectivealtruism.org/posts/CkEsn3gjaiWJfwHHr/what-brand-should-ea-buy-if-we-had-to-buy-one?commentId=GKp8cwXSpXp6Jfb8H

FLI launches Worldbuilding Contest with $100,000 in prizes

Returning to this thread to note that I eventually did enter the contest, and was selected as a finalist! I tried to describe a world where improved governance / decisionmaking technology puts humanity in a much better position to wisely and capably manage the safe development of aligned AI. https://worldbuild.ai/W-0000000088/

The biggest sense in which I'm "playing on easy mode" is that in my story I make it sound like the adoption of prediction markets and other new institutions was effortless and inevitable, versus in the real world I think improved governance is achievable but is a bit of a longshot to actually happen; if it does, it will be because a lot of people really worked hard on it. But that effort and drive is the very thing I'm hoping to help inspire/motivate with my story, which I feel somehow mitigates the sin of unrealism.

Overall, I am actually surprised at how dystopian and pessimistic many of the stories are. (Unfortunately they are mostly not pessimistic about alignment; rather there are just a lot of doomer vibes about megacorps and climate crisis.) So I don't think people went overboard in the direction of telling unrealistic tales about longshot utopias -- except to the extent that many contestants don't even realize that alignment is a scary and difficult challenge, so the stories are in that sense overly optimistic by default.

Change your "Amazon Smile" charity to something effective

It gets even better! You can use the unobtrusive, single-purpose "Smile Always" browser extension and you'll never need to remember to specifically visit smile.amazon.com ever again: your browser will do it for you! https://chrome.google.com/webstore/detail/smile-always/jgpmhnmjbhgkhpbgelalfpplebgfjmbf?hl=en

The Amazon feature really does support a huge number of charities -- I have mine set to the Berkeley Existential Risk Initiative.

Against “longtermist” as an identity

Also, "Effective Altruism" and neartermist causes like global health are usually more accessible / easier for ordinary people first learning about EA to understand.   As Effective Altruism attracts more attention from media and mainstream culture, we should probably try to stick to the friendly, approachable "Effective Altruism" branding in order to build good impressions with the public, rather than the sometimes alien-seeming and technocratic "longtermism".

Could economic growth substantially slow down in the next decade?

The original "Limits to Growth" report was produced during the 1970s amid an oil-price crisis and widespread fears of overpopulation and catastrophic environmental decline.  (See also books like "The Population Bomb" from 1968.)  These fears have mostly gone away over time, as population growth has slowed in many countries and the worst environmental problems (like choking smog, acid rain, etc) have been mitigated.

This new paper takes a 1972 computer model of the world economy and checks how well it matches current trends. The authors claim the match is pretty good, but they don't actually plot the real-world data anywhere; they merely claim that the predicted values are within 20% of the real-world ones. I suspect they avoided plotting the real-world data because doing so would make it more obvious that the real world is actually doing significantly better on every measure. Look at the model errors ("∆ value") in their Table 2.

So, compared to every World3-generated scenario (BAU, BAU2, etc), the real world has:
- higher population, higher fertility, lower mortality (no catastrophic die-offs)
- more food and higher industrial output (yay!)
- higher overall human welfare and a lower ecological footprint (woohoo!)

The only areas where humanity ends up looking bad are pollution and "services per capita": the real world has more pollution and fewer services than the World3 model predicted. But on pollution, the goalposts have been moved: instead of tracking the kinds of pollution people were worried about in the 1970s (since those problems have mostly been fixed), this measure has been changed to be about carbon dioxide driving climate change. Is climate change (which other economists and scientists predict will cut a mere 10% off GDP by 2100) really going to cause a total population collapse in the next couple of decades, just because some ad-hoc 1970s dynamical model says so? I doubt it. Meanwhile, the "services per capita" metric represents the fraction of global GDP spent on education and health -- perhaps it's bad that we're not spending more on education and health, or perhaps it's good that we're saving money on those things, but either way this doesn't seem like a harbinger of imminent collapse.
 
Furthermore, the World3 model predicted that things like industrial output would rise steadily until they one day experienced a sudden unexpected collapse.  This paper is trying to say "see, industrial output has risen steadily just as predicted... this confirms the model, so the collapse must be just around the corner!"  This strikes me as ridiculous: so far the model has probably underperformed simple trend-extrapolation, which in my view means its predictions about dramatic unprompted changes in the near future should be treated as close to worthless.
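To make that baseline comparison concrete, here's a minimal sketch of the kind of scoring I have in mind (the numbers are made-up placeholders, not the paper's data): measure the model's historical errors against a naive straight-line extrapolation, and only give the model extra credence if it beats the trend line.

```python
import numpy as np

# Hypothetical illustration only -- placeholder numbers, not data from the paper.
years    = np.array([1980, 1990, 2000, 2010, 2020])
observed = np.array([100, 113, 127, 142, 158])   # placeholder "real world" series
model    = np.array([100, 116, 133, 151, 170])   # placeholder model predictions

# Naive baseline: extend the first decade's linear trend forward.
slope = (observed[1] - observed[0]) / (years[1] - years[0])
baseline = observed[0] + slope * (years - years[0])

mae_model    = np.mean(np.abs(model - observed))
mae_baseline = np.mean(np.abs(baseline - observed))
print(f"model MAE: {mae_model:.1f}, trend-extrapolation MAE: {mae_baseline:.1f}")

# If the trend line's error is comparable or smaller, then "the model matched
# the data so far" is no evidence for its out-of-sample collapse prediction.
```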

Why Helping the Flynn Campaign is especially useful right now

This really is a tight race!! Prediction markets at PredictIt and Metaculus are showing Carrick Flynn with just about a 50% chance to win. Political races don't get much more counterfactual than that! https://metaforecast.org/?query=flynn

(In addition to giving him a 47% chance in the primary, Metaculus gives him 40% odds to ultimately win both the primary and the general and become a Representative. This implies that if he can make it through the primary, he has an 85% chance (40/47) of winning in November. So, most of the battle is happening this week.)
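For clarity, the 85% figure is just conditional probability applied to the Metaculus numbers above, since winning both races entails winning the primary:

$$
P(\text{win general} \mid \text{win primary})
  = \frac{P(\text{win both})}{P(\text{win primary})}
  = \frac{0.40}{0.47} \approx 0.85
$$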
