Here's an annual thread for spending the day collecting ideas for Cause X, with me stealing the idea from John Maxwell because I'm in Australia and raring to get started.
Remember--serious suggestions only!!!
This isn't exactly a proposal for a new cause area, but I've felt that many EA organizations are confusingly named. So I'm proposing some name-swaps:
I estimate that having better names only has a small or medium impact, but that tractability is sky-high. No comment on neglectedness.
What do you blokes think?
Working title: Reversetermism
Longtermists have pointed out that we've often failed to consider the interests or wellbeing of future beings. But an even more neglected space is the past.
If we think that existential risk is sufficiently high in the near future, there is a good chance that the vast majority of moral value is in the past. Just considering humans, there are at least 300,000 years of experiences, all of which we ought to consider just as important as present day ones. If we consider non-humans' interests, there are billions of years and countless individuals who we ought to expand our moral circle to include.
The scale here is obvious, as is the neglectedness - as far as I am aware, there are no groups focused on ensuring that the past is as good as possible. So, how tractable is it?
Immediately, a handful of interventions come to mind:
One immediate advantage of reversetermism is that cost-effectiveness can actually be estimated relatively accurately. Here's a simple test:
"On May 5th (Gregorian calendar), 10,560 BC, at 2:00pm Eastern, everything was chill for an hour for everybody."
This expert backcasting took around 12 seconds to produce. Assuming a human population of 2 million, and that you pay expert backcasters $30 USD / hour, this cost $0.10 and created around 228 years of good experiences. With an average lifespan of, say, 30 years, it costs around $0.013 to save a life. And even more expert backcasters might achieve more efficient results through further work in the field, driving the cost down even further.
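For the spreadsheet-averse, here is a minimal sketch of that cost-effectiveness arithmetic. Every input is just an assumption stated above (12 seconds of backcasting at $30/hour, a population of 2 million, a 30-year lifespan), not measured data:

```python
# Back-of-the-envelope cost-effectiveness of expert backcasting.
# All inputs are the assumptions stated above, not measured data.

HOURLY_RATE_USD = 30          # pay for an expert backcaster
BACKCAST_SECONDS = 12         # time to produce the statement
POPULATION = 2_000_000        # assumed human population in 10,560 BC
GOOD_HOURS_PER_PERSON = 1     # "everything was chill for an hour"
AVG_LIFESPAN_YEARS = 30       # assumed average lifespan
HOURS_PER_YEAR = 24 * 365.25

cost = HOURLY_RATE_USD * BACKCAST_SECONDS / 3600
good_years = POPULATION * GOOD_HOURS_PER_PERSON / HOURS_PER_YEAR
lives_equivalent = good_years / AVG_LIFESPAN_YEARS
cost_per_life = cost / lives_equivalent

print(f"Cost of backcast:    ${cost:.2f}")           # ~$0.10
print(f"Good years created:  {good_years:.0f}")      # ~228
print(f"Cost per life saved: ${cost_per_life:.3f}")  # ~$0.013
```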
You neglect to mention that with a high enough time-preference discount rate, the past counts disproportionately more than the future. As they say, "Tutankhamun was a billion times more important than you".
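A back-of-the-envelope check on that slogan (assuming Tutankhamun lived roughly 3,300 years ago; the billion-fold figure and the resulting rate are illustrative, not taken from the comment above): with a constant annual discount rate $r$, someone living $T$ years in the past gets weight $(1+r)^T$ relative to you, so

$$(1+r)^{T} \ge 10^{9} \iff r \ge 10^{9/T} - 1 \approx 0.63\%\ \text{per year for } T = 3300.$$

In other words, a discount rate of only about 0.6% per year is already enough to make Tutankhamun a billion times more important than you.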
Strong middletermism suggests that the best actions are exclusively contained within the set of actions that aim to influence how the next 137 years go (and not a year longer!)
We know that compromising between smart people is a good decision procedure (see "Aumann's agreement theorem"; also see how ensemble models generally outperform any individual model). Given that many smart people support near-term causes and many smart people support longtermist causes, I suggest that the highest impact causes will be found in what I call middletermism.
Another important issue is that our predictive track record gets worse as a function of time - increasing time means increasing error. Insofar as we are trying to balance expected impact and robustness of impact calculations, this suggests a time at which error will balance out impact. In my calculations, this occurs exactly 137 years from now. Thus middletermism only focuses on these 137 years.
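For concreteness, here is one purely hypothetical toy model of that impact-versus-error trade-off (the linear-impact and exponential-error assumptions, and the 137-year timescale, are made up for illustration; this is not the calculation referred to above): if cumulative impact grows linearly with the planning horizon t while predictive reliability decays like e^(-t/137), the robust impact t·e^(-t/137) peaks at exactly 137 years.

```python
import numpy as np

# Purely illustrative toy model: impact grows linearly with horizon t,
# predictive reliability decays exponentially with an assumed 137-year timescale.
TAU = 137.0  # assumed error timescale, in years

t = np.linspace(0.1, 500, 100_000)
robust_impact = t * np.exp(-t / TAU)  # impact x reliability

best_horizon = t[np.argmax(robust_impact)]
print(f"Optimal planning horizon: {best_horizon:.0f} years")  # ~137
```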
The acronym EA is so flexible and can be used to create so many puns. And yet so few puns are being used or made in the EA community. So I think more EAs, on the margin, should create and use puns with the EA acronym. These can be used as names for group events, or to show how EA is already ingrained in so many concepts or causes. Here are a bunch of ideas:
Gargantuan.
How hard can it be?
Truly outrageous.
Irrelevant. Just do it. You have your orders.
Some have proposed that the Importance, Tractability, Neglectedness framework should be complemented with a separate factor for Urgency. This would if anything strengthen the case for this new cause area, given that it is already April 1st, and that each remix would take hours to create (not to mention upwards of hundreds of hours to listen to).
Out of curiosity I stuck an episode into the Wub Machine. It's genuinely mildly listenable. Also takes no time so the cost-effectiveness here might be high. Original audio: 80,000 Hours.
At Effective Remix, we've generally focused on finding the most pressing podcasts and the best genres to remix them into.
But even if some podcast is 'the most pressing'—in the sense of being the highest impact thing for someone to remix if they could be equally successful at remixing anything—it might easily not be the highest impact thing for many people to remix, because people have various talents, experience, and temperaments.
The following are some podcasts that seem like they might be especially pressing from the perspective of improving the vibe of the thing.
More speculatively, for value of information reasons, it could even make sense for 3-50 people with especially strong personal fit to explore the possibility of making trap remixes of the book Thinking, Fast and Slow by Nobel Prize laureate Daniel Kahneman. We think such remixes are unlikely to be competitive with our current priorities, but if they are, making such remixes could potentially absorb hundreds of Oxbridge philosophy & physics double majors spe...
EA projects should be evidence based: I've done a survey of myself, and the results conclusively show that if 80,000 hours produced dubstep remixes of its podcasts, I would actually listen to them. The results were even more conclusive when the question included "what if Wiblin spliced in 'Wib-wib-wib' noises whenever crucial considerations were touched on?".
It is well established that farm animal suffering is one of the largest moral disasters of our time, because of its negative moral value and scale (stemming from low costs).
We see these as an opportunity and call to action. By the same token, we can, with reasonable cost, raise a huge number of chickens who are wire-headed (using electrodes or chicken heroin) to believe they have the most wonderful life imaginable. This positive moral value can far outweigh the positive moral value of flourishing human lives - a life is a life after all, and heroin is heroin.
I mean, they're chickens. We don't foresee them mounting an armed resistance. Besides, if they don't like it, we're doing something wrong.
In contrast to humans who show great resistance to any proposed radical change to their lives (like radical life extension), nobody resists when people put countless chickens through a very contorted experience.
I mean, are you working on it? Then I guess it's neglected.
We are currently sourcing people who have deep insights into chicken neurology and experience to help lead the UX research front. If you are one of these "bird brained" experts, we need you!
We must begrudgingly admit that there is a splinter group in our midst which is contemplating, instead of raising actual chickens, creating a simulation in which even more chickens lead the most wonderful life imaginable. They are currently working on their function applyOptimalHeroinDose(chicken).
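A minimal, entirely hypothetical sketch of what that function might look like (the Chicken class and its attributes are invented here for illustration, not taken from any actual codebase):

```python
from dataclasses import dataclass

@dataclass
class Chicken:
    """A simulated chicken. Everything here is hypothetical."""
    welfare: float = 0.0  # current hedonic level

def applyOptimalHeroinDose(chicken: Chicken) -> Chicken:
    """Wire-head the simulated chicken into the most wonderful life imaginable."""
    MOST_WONDERFUL_LIFE_IMAGINABLE = float("inf")
    chicken.welfare = MOST_WONDERFUL_LIFE_IMAGINABLE
    return chicken
```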
Imaginarytermism
I think the axis of Imaginary Time has been entirely neglected. It is time chauvinism to prefer one dimension of time over any other.
See also [New org] Canning What We Give
Reducing Existential Risk by Embracing the Absurd
As we all know, longtermists face a lot of moral cluelessness: it is impossible to predict all of the consequences of any of our actions over the very long term. This makes us especially susceptible to existential crises. As longtermists, we should reduce this existential risk by recognizing that the universe is fundamentally meaningless, and that we are the only ones who can create meaning. We should embrace the absurd.
I think that QURI should be called Probably Good
I suggest that the names be reassigned using the Top Trading Cycles and Trains algorithm.
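For anyone who wants to actually run that reassignment, here is a minimal sketch of plain Top Trading Cycles (the "and Trains" extension is left to the reader); the org names and preference lists below are made-up placeholders:

```python
def top_trading_cycles(owns, prefs):
    """Plain Top Trading Cycles (Shapley-Scarf housing market).

    owns:  dict mapping each org to the name it currently owns.
    prefs: dict mapping each org to its preference ranking over all names.
    Returns a dict mapping each org to its newly assigned name.
    """
    owner_of = {name: org for org, name in owns.items()}
    remaining_names = set(owns.values())
    assignment = {}

    while owner_of:
        # Each remaining org points at the owner of its favourite remaining name.
        points_to = {}
        for org in owner_of.values():
            favourite = next(n for n in prefs[org] if n in remaining_names)
            points_to[org] = (favourite, owner_of[favourite])

        # Find a cycle by walking the pointers from any remaining org.
        start = next(iter(points_to))
        seen, current = [], start
        while current not in seen:
            seen.append(current)
            current = points_to[current][1]
        cycle = seen[seen.index(current):]

        # Everyone in the cycle gets their favourite name and leaves the market.
        for org in cycle:
            name, _ = points_to[org]
            assignment[org] = name
            remaining_names.discard(name)
            del owner_of[owns[org]]

    return assignment

# Made-up example: each org ranks every name, most preferred first.
owns = {"OrgA": "NameA", "OrgB": "NameB", "OrgC": "NameC"}
prefs = {
    "OrgA": ["NameB", "NameA", "NameC"],
    "OrgB": ["NameA", "NameC", "NameB"],
    "OrgC": ["NameC", "NameA", "NameB"],
}
print(top_trading_cycles(owns, prefs))
# {'OrgA': 'NameB', 'OrgB': 'NameA', 'OrgC': 'NameC'}
```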
+1 makes sense.