3 · RationalityEnhancementGroup · 2d
We would like to draw your attention to a new conference on using methods from
psychology and other behavioral sciences to understand and promote effective
altruism. We invite you to submit an abstract for a talk, poster, or discussion
session on a relevant topic by April 25.
The inaugural Life Improvement Science conference
[http://www.life-improvement.science] will be held online from June 9–13, 2021.
We invite short abstracts for talks, posters, symposia, and panel discussions on
various topics relevant to understanding and promoting the effective pursuit of
prosocial values, optimal personal development (moral learning and cognitive
growth), and reducing unethical behavior.
Please submit your abstract here
[https://www.life-improvement.science/call-for-submissions]. The abstract
submission deadline is April 25.
Relevant topics include prosocial behavior and motivation, moral psychology,
improving human decision-making and rationality, effective altruism, positive
psychology, behavior change, (digital) interventions, character education,
environmental psychology, political psychology, behavioral economics and public
policy, wisdom scholarship, computational psychiatry, psychotherapy and
coaching, intentional personality change, human-centered design and positive
computing, cognitive augmentation, moral philosophy, virtues, value change, and
other topics.
Our confirmed speakers [https://www.life-improvement.science/speaker_info]
include David Reinstein, William Fleeson, Ken Sheldon, Brian Little, Igor
Grossmann, Kristján Kristjánsson, and Kendall Bronk.
If you would like to learn more about LIS and the upcoming LIS conference, you
can check out our website [https://www.life-improvement.science/]. If you would
like to stay up to date on Life Improvement Science and the LIS conference, you
can sign up for the conference newsletter here
[https://www.life-improvement.science/registration] or follow us on Twitter
[https://twitter.com/LifeImprovSci] and we will keep
2 · Nathan_Barnard · 2d
I think empirical claims can be discriminatory. I was struggling with how to
think about this for a while, but I think I've come to two conclusions. The
first way I think empirical claims can be discriminatory is when they express
discriminatory claims with no evidence and people refuse to change their
beliefs in response to evidence. The other way they can be discriminatory is
when talking about the definitions of socially constructed concepts, where we
can, in some sense and in some contexts, decide what is true.
11 · Khorton · 3d
I regularly see people write arguments like "One day, we'll colonize the galaxy
- this shows why working on the far future is so exciting!"
I know the intuition this is trying to trigger is bigger = more impact =
exciting opportunity.
The intuition it actually triggers for me is expansion and colonization = trying
to build an empire = I should be suspicious of these people and their plans.
7 · anonysaurus30k · 3d
NB: I have my own little archive of EA content and I got an alert that several
links popped up as dead - typically I would just add it to a task list and move
on… but I was surprised to see Joe Rogan's (full) 2017 interview with Will
MacAskill was no longer available on YouTube. So I investigated and found out
Rogan recently sold his entire catalog
[https://www.digitalmusicnews.com/2021/04/06/joe-rogan-spotify-removing-shows/]
and future episodes to Spotify (for $100 million!). Currently Spotify is
removing episodes from other platforms like Apple, YouTube and Vimeo. They've
also decided not to transfer certain episodes
[https://www.digitalmusicnews.com/2021/03/30/spotify-joe-rogan-episodes-removed/]
that violate their platform's rules on content (i.e. it's controversial or
offensive). I was a little alarmed that Will's interview might be on the cut
list, but it still exists on Spotify
[https://open.spotify.com/episode/7KGozS19cvAfpv80ermY5q] - you now have to
make a (free) account to access it.
5 · Harrison D · 5d
EA (forum/community) and Kialo?
TL;DR: I’m curious why there is so little mention of Kialo
[https://www.kialo.com/] as a potential tool for hashing out disagreements in
the EA forum/community, whereas I think it would be at least worth experimenting
with. I’m considering writing a post on this topic, but want to get initial
thoughts (e.g., have people already considered it and decided it wouldn’t be
effective, initial impressions/concerns, better alternatives to Kialo).
The forum and the broader EA community have lots of competing ideas and even some
direct disagreements. Will Bradshaw's recent comment
[https://forum.effectivealtruism.org/posts/5iCsbrSqLyrfP55ry/concerns-with-ace-s-recent-behavior-1?commentId=iGD3xNKSyLK7Z8RHu]
about discussing cancel culture on the EA forum is just the latest example of
this that I’ve seen. I’ve often felt that the use of a platform like Kialo
[https://www.kialo.com/] would be a much more efficient way of recording these
disagreements, since it helps to separate out individual points of contention
and allow for deep back-and-forth, among many other reasons. However, when I
search for “Kialo” in the search bar on the forum, I only find a few minor
comments mentioning it (as opposed to posts) and they are all at least 2 years
old. I think I once saw a LessWrong post downplaying the platform, but I was
wondering if people here have developed similar impressions.
More to the point, I was curious to see if anyone had any initial thoughts on
whether it would be worthwhile to write an article introducing Kialo and
highlighting how it could be used to help hash out disagreements here/in the
community. If so, do you have any initial objections/concerns that I should
address? Do you know of any other alternatives that would be better options
(keeping in mind that one of the major benefits of Kialo is its accessibility)?
3 · RogerAckroyd · 4d
Sometimes the concern is raised that caring about wild animal welfare is seen
as unintuitive and will bring conflict with the environmental movement. I do
not think large-scale efforts to help wild animals should be an EA cause at the
moment, but in the long term I don't think environmentalist concerns will be a
limiting factor. Rather, I think environmentalist concerns are taken as
seriously as they are partly because people see them as helping wild animals as
well (in some perhaps not fully thought-out way). I do not think it is a
coincidence that the extinction of animals gets more press than the extinction
of plants.
I also note that bird-feeding is common and attracts little criticism from
environmental groups. Indeed, during a cold spell this winter I saw
recommendations from environmental groups to do it.
11 · evelynciara · 6d
On the difference between x-risks and x-risk factors
I suspect there isn't much of a meaningful difference between "x-risks
[https://forum.effectivealtruism.org/tag/existential-risk]" and "x-risk factors
[https://forum.effectivealtruism.org/tag/existential-risk-factor]," for two
reasons:
1. We can treat them the same in terms of probability theory. For example, if X
   is an "x-risk" and Y is a "risk factor" for X, then Pr(X∣Y) > Pr(X). But we
   can also say that Pr(Y∣X) > Pr(Y), because both statements are equivalent to
   Pr(X,Y) > Pr(X)Pr(Y). We can similarly speak of the total probability of an
   x-risk factor because of the law of total probability
   [https://en.wikipedia.org/wiki/Law_of_total_probability]
   (e.g. Pr(Y) = Pr(Y∣X1)Pr(X1) + Pr(Y∣X2)Pr(X2) + …, where the Xi partition the
   space of possibilities), like we can with an x-risk. (A toy numerical check
   of this equivalence follows the list.)
2. Concretely, something can be both an x-risk and a risk factor. Climate
change is often cited as an example: it could cause an existential
catastrophe directly by making all of Earth unable to support complex
societies, or indirectly by increasing humanity's vulnerability to other
risks. Pandemics might also be an example, as a pandemic could either
directly cause the collapse of civilization or expose humanity to other
risks.
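As a minimal sketch of the probabilistic point in item 1, here is a toy
numerical check in Python; the joint distribution over X ("the x-risk occurs")
and Y ("the risk factor is present") is made up purely for illustration and is
not an estimate of any real risk:

    # Toy joint distribution over (X, Y); probabilities are illustrative only.
    p_xy = {(True, True): 0.04, (True, False): 0.01,
            (False, True): 0.26, (False, False): 0.69}

    p_x = sum(p for (x, _), p in p_xy.items() if x)   # Pr(X)
    p_y = sum(p for (_, y), p in p_xy.items() if y)   # Pr(Y)
    p_x_given_y = p_xy[(True, True)] / p_y            # Pr(X | Y)
    p_y_given_x = p_xy[(True, True)] / p_x            # Pr(Y | X)

    # The three inequalities hold (or fail) together:
    print(p_x_given_y > p_x)                          # True
    print(p_y_given_x > p_y)                          # True
    print(p_xy[(True, True)] > p_x * p_y)             # True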
I think the difference is that x-risks are events that directly cause an
existential catastrophe, such as extinction or civilizational collapse, whereas
x-risk factors are events that don't have a direct causal pathway to
x-catastrophe. But it's possible that pretty much all x-risks are risk factors
and vice versa. For example, suppose that humanity is already decimated by a
global pandemic, and then a war causes the permanent collapse of civilization.
We usually think of pandemics as risks and wars as risk factors, but in this
scenario, the war is the x-risk because it happened last... right?
One way to think about x-risks that avoids this problem is that x-risks can have
both direct and indirect causal pathways to x-catastrophe.
20 · Pablo · 7d
Scott Aaronson just published a post
[https://www.scottaaronson.com/blog/?p=5448] announcing that he has won the ACM
Prize in Computing and the $250k that comes with it, and is asking for donation
recommendations. He is particularly interested "in weird [charities] that I
wouldn’t have heard of otherwise. If I support their values, I’ll make a small
donation from my prize winnings. Or a larger donation, especially if you donate
yourself and challenge me to match." An extremely rough and oversimplified
back-of-the-envelope calculation [https://www.getguesstimate.com/models/18118]
suggests that a charity recommendation will cause, in expectation, ~$500 in
donations to the recommended charity (~$70–2800 90% CI).
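The linked Guesstimate model has the actual inputs; purely as an illustration
of how a back-of-the-envelope estimate like this can be set up in code, here is
a minimal Monte Carlo sketch in Python whose parameters (the chance a
recommendation leads to a donation, and the donation size if it does) are
hypothetical placeholders, not the model's real numbers:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical placeholders, NOT the linked Guesstimate model's inputs:
    # chance that a given recommendation leads to any donation at all,
    # and the donation size if it does (lognormal, heavy right tail).
    p_donate = rng.uniform(0.2, 0.8, n)      # uncertainty over the chance itself
    donates = rng.random(n) < p_donate
    size = rng.lognormal(mean=np.log(700), sigma=1.0, size=n)

    value = np.where(donates, size, 0.0)     # $ to the charity per recommendation
    print(round(value.mean()), np.percentile(value, [5, 95]).round())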
14evelynciara7d"Quality-adjusted civilization years"
We should be able to compare global catastrophic risks in terms of the amount of
time they make global civilization significantly worse and how much worse it
gets. We might call this measure "quality-adjusted civilization years" (QACYs),
or the quality-adjusted amount of civilization time that is lost.
For example, let's say that the COVID-19 pandemic reduces the quality of
civilization by 50% for 2 years. Then the QACY burden of COVID-19 is
0.5 × 2 = 1 QACY.
Another example: suppose climate change will reduce the quality of civilization
by 80% for 200 years, and then things will return to normal. Then the total QACY
burden of climate change over the long term will be 0.8 × 200 = 160 QACYs.
In the limit, an existential catastrophe would have a near-infinite QACY burden.
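As a minimal sketch of the arithmetic (qacy_burden is just an illustrative
name, and the quality reduction is assumed to be a fraction between 0 and 1):

    def qacy_burden(quality_reduction, years):
        # QACYs lost = fractional reduction in civilization quality x duration (years)
        return quality_reduction * years

    print(qacy_burden(0.5, 2))    # COVID-19 example above: 1.0 QACYs
    print(qacy_burden(0.8, 200))  # climate change example above: 160.0 QACYs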
12 · MichaelA · 7d
INDEPENDENT IMPRESSIONS
Your independent impression about X is essentially what you'd believe about X if
you weren't updating your beliefs in light of peer disagreement - i.e., if you
weren't taking into account your knowledge about what other people believe and
how trustworthy their judgement seems on this topic relative to yours. Your
independent impression can take into account the reasons those people have for
their beliefs (inasmuch as you know those reasons), but not the mere fact that
they believe what they believe.
Armed with this concept, I try to stick to the following epistemic/discussion
norms [https://forum.effectivealtruism.org/tag/discussion-norms], and think it's
good for other people to do so as well:
* Trying to keep track of my own independent impressions separately from my
  all-things-considered beliefs (which also take into account peer
  disagreement)
* Trying to be clear about whether I'm reporting my independent impression or
my all-things-considered belief
* Feeling comfortable reporting my own independent impression, even when I know
it differs from the impressions of people with more expertise in a topic
One rationale for that bundle of norms is to avoid information cascades
[https://www.lesswrong.com/tag/information-cascades].
In contrast, when I actually make decisions, I try to make them based on my
all-things-considered beliefs.
For example, my independent impression is that it's plausible that a stable,
global authoritarian regime
[https://forum.effectivealtruism.org/posts/EMKf4Gyee7BsY2RP8/michaela-s-shortform?commentId=WPsC97MBY2qu5vHWy]
, or some other unrecoverable dystopia
[https://forum.effectivealtruism.org/tag/dystopia], is more likely than
extinction, and that we should prioritise those risks more than we currently do.
But I think that this opinion is probably uncommon among people who've thought a
lot about existential risks
[https://forum.effectivealtruism.org/posts/AJbZ2hHR4bmeZKznG/venn-diagrams-of-exist
1 · Tankrede · 8d
The definition of existential risk as ‘humanity losing its long-term potential’
in Toby Ord's The Precipice [https://theprecipice.com/] could be specified
further. Assuming (perhaps without loss of generality) finite total value
[https://philpapers.org/archive/MANWIT-6.pdf] in our universe, one could divide
existential risks into two broad categories:
* Extinction risks (X-risks): Human share of total value goes to zero. Examples
could be extinction from pandemics, extreme climate change or some natural
event.
* Agential risks (A-risks): The human share of total value could be greater
  than in the X-risk scenarios but remains strictly dominated by the share of
  total value held by misaligned agents. Examples could be misaligned
  institutions, AIs, or loud aliens
  [https://arxiv.org/abs/2102.01522?source=techstories.org] controlling most of
  the value in the universe, and with whom there would be little gain from trade
  to be hoped for.
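One possible way to sketch this split more formally, where V_total, V_h, and
V_m are my own (assumed) notation for finite total value, value held by
humanity, and value held by misaligned agents, none of which appear in the
original sources:

    % Sketch under the assumption of finite total value V_{total};
    % V_h = value held/realised by humanity, V_m = value held by misaligned agents.
    \text{X-risks:}\quad \frac{V_h}{V_{total}} \to 0
    \qquad\qquad
    \text{A-risks:}\quad \frac{V_h}{V_{total}} > 0
    \;\text{ but }\; V_h < V_m \text{ indefinitely}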
6 · MichaelA · 9d
Bottom line up front: I think it'd be best for longtermists to default to using
the more inclusive term "authoritarianism" rather than "totalitarianism", except
when a person has a specific reason to focus on totalitarianism
[https://forum.effectivealtruism.org/tag/totalitarianism/] in particular.
I have the impression that EAs/longtermists have often focused more on
"totalitarianism" than on "authoritarianism", or have used the terms as if they
were somewhat interchangeable. (E.g., I think I did both of those things myself
in the past.)
But my understanding is that political scientists typically consider
totalitarianism to be a relatively extreme subtype of authoritarianism (see,
e.g., Wikipedia [https://en.wikipedia.org/wiki/Totalitarianism]). And it’s not
obvious to me that, from a longtermist perspective, totalitarianism is a bigger
issue than other types of authoritarian regime. (Essentially, I’d guess that
totalitarianism would have worse effects than other types of authoritarianism,
but that it’s less likely to arise in the first place.)
To provide a bit more of a sense of what I mean and why I say this, here's a
relevant section of a research agenda I recently drafted:
* Longtermism-relevant typology and harms of authoritarianism
  * What is the most useful way for longtermists to carve up the space of
    possible types of authoritarian political systems (or perhaps political
    systems more broadly, or political systems other than full liberal
    democracies)? What terms should we be using?
  * Which types of authoritarian political system should we be most concerned
    about?
  * What are the main ways in which each type of authoritarian political system
    could reduce (or increase) the expected value of the long-term future?
  * What are the main pathways by which each type of authoritarian political
    system could reduce (or increase) the expected value of the long-term
    future?
    * E.g., increasing
2 · ag400 · 9d
I was planning to donate some money to a climate cause a few months ago, and I
decided to give some money to Giving Green (this was after the post here
recommending GG). There were some problems with the money going through
(unrelated to GG), but now I can still decide to send the money elsewhere. I'm
thinking about giving the money elsewhere due to the big post
criticizing GG. However, I still think it's probably a good giving opportunity,
given that it's at an important stage of its growth and seems to have gotten a
lot of publicity. Should I consider giving someplace else and doing more
research, or should I keep the plan of giving it to GG? (Sorry if this is vague
-- let me know if I can fill in any details!)