Shortform Content [Beta]

Max_Daniel's Shortform

[EA's focus on marginal individual action over structure is a poor fit for dealing with info hazards.]

I tend to think that EAs are sometimes too focused on optimizing the marginal utility of individual actions as opposed to improving larger-scale structures. For example, I think it'd be good if there were as much content and cultural awareness on how to build good organizations as there is on how to improve individual cognition. Think about how often you've heard of "self-improvement" or "rationality" as opposed to things l... (read more)

MichaelA's Shortform

The old debate over "giving now vs later" is now sometimes phrased as a debate about "patient philanthropy". 80,000 Hours recently wrote a post using the term "patient longtermism", which seems intended to:

  • focus only on how the debate over patient philanthropy applies to longtermists
  • generalise the debate to also include questions about work (e.g., should I do a directly useful job now, or build career capital and do directly useful work later?)

They contrast this with the term "urgent longtermism", used to describe the view that favours doing more donations a

... (read more)
4 · MichaelDickens · 18h

I don't think "patient" and "urgent" are opposites, in the way Phil Trammell originally defined patience [https://philiptrammell.com/static/discounting_for_patient_philanthropists.pdf]. He used "patient" to mean a zero pure time preference, and "impatient" to mean a nonzero pure time preference. You can believe it is urgent that we spend resources now while still having zero pure time preference. Trammell's paper argued that patient actors should give later, irrespective of how much urgency you believe there is. (Although he carved out some exceptions to this.)
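For concreteness, here is a minimal sketch of what zero pure time preference means, in generic discounted-utility notation (illustrative notation only, not necessarily Trammell's own model):

```latex
% Generic sketch, assuming standard discounted-utility notation (not Trammell's exact model):
% a philanthropist with pure rate of time preference \rho values welfare u_t
% created at times t = 0, 1, 2, \dots as
\[
  V = \sum_{t=0}^{\infty} \frac{u_t}{(1+\rho)^{t}},
  \qquad \text{patient} \iff \rho \approx 0 .
\]
% With \rho = 0, welfare created later counts just as much as welfare created now;
% one can still think spending now is urgent for empirical reasons (e.g. hinginess).
```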

Yes, Trammell writes:

We will call someone “patient” if he has low (including zero) pure time preference with respect to the welfare he creates by providing a good.

And I agree that a person with a low or zero pure time preference may still want to use a large portion of their resources now, for example due to thinking now is a much "hingier"/"higher leverage" time than average, or thinking value drift will be high.

You highlighting this makes me doubt whether 80,000 Hours should've used "patient longtermism" as they did, whether they should've used "patient

... (read more)
evelynciara's Shortform

How pressing is countering anti-science?

Intuitively, anti-science attitudes seem like a major barrier to solving many of the world's most pressing problems: for example, climate change denial has greatly derailed the American response to climate change, and distrust of public health authorities may be stymying the COVID-19 response. (For instance, a candidate running in my district for State Senate is campaigning on opposition to contact tracing as well as vaccines.) I'm particularly concerned about anti-economics attitudes because they lead to bad economi

... (read more)

I think a more general, and less antagonizing, way to frame this is "increasing scientific literacy among the general public," where scientific literacy is seen as a spectrum. For example, increasing scientific literacy among climate activists might make them more likely to advocate for policies that more effectively reduce CO2 emissions.

4 · Aaron Gertler · 2mo

Epistemic status: Almost entirely opinion, I'd love to hear counterexamples.

When I hear proposals related to instilling certain values widely throughout a population (or preventing the instillation of certain values), I'm always inherently skeptical. I'm not aware of many cases where something like this worked well, at least in a region as large, sophisticated, and polarized as the United States. You could point to civil rights campaigns, which have generally been successful over long periods of time, but those had the advantage of being run mostly by people who were personally affected (= lots of energy for activism, lots of people "inherently" supporting the movement in a deep and personal way). If you look at other movements that transformed some part of the U.S. (e.g. bioethics or the conservative legal movement, as seen in Open Phil's case studies of early field growth [https://www.openphilanthropy.org/research/history-of-philanthropy/some-case-studies-early-field-growth]), you see narrow targeting of influential people rather than public advocacy.

Rather than thinking about "countering anti-science" more generally, why not focus on specific policies with scientific support? Fighting generically for "science" seems less compelling than pushing for one specific scientific idea ("masks work," "housing deregulation will lower rents"), and I can think of a lot of cases where scientific ideas won the day in some democratic context.

This isn't to say that public science advocacy is pointless; you can reach a lot of people by doing that. But I don't think the people you reach are likely to "matter" much unless they actually campaign for some specific outcome (e.g. I wouldn't expect a scientist to swing many votes in a national election, but maybe they could push some funding toward an advocacy group for a beneficial policy).

One other note: I ran a quick search to look for polls on public trust in science, but all I found was a piece from Gallup on public
avacyn's Shortform

Wayne Hsiung, the co-founder of Direct Action Everywhere (DxE), is running for mayor of Berkeley: https://www.wayneformayor.com/

He's running on a left-leaning platform that doesn't explicitly discuss animals, but he will likely focus on animal-friendly policies. For example, he wants to create a "solar powered, pedestrian-only, and plant-based Green District."

DxE has been fairly controversial in the animal advocacy world, but setting aside questions of their particular tactics, having someone so animal-friendly in government could be ve... (read more)

Showing 3 of 7 replies.
4 · Aaron Gertler · 22d

If you see unsourced positive factual claims about people (e.g. things they said or did), you are welcome to ask for sources the same way I did! I think that the average unsourced negative claim is more likely to hurt the Forum's culture than the average unsourced positive claim, but we would ideally have few of either.
3 · Dale · 6d

I think Gregory_Lewis is referencing the same poor behavior here [https://forum.effectivealtruism.org/posts/Rcys5RkBzZ5vacBYY/what-is-the-increase-in-expected-value-of-effective-altruist?commentId=dWWLKZvTSKAZLtE6Y] if you are looking for more sources. Please let me know what the organizers say if you ended up asking them.

That's the kind of source I was looking for; thanks for letting me know when it came up.

timunderwood's Shortform

Does anyone know about research on the influence of fiction on changing elite/public behaviors and opinions?

The context of the question is that I'm a self-published novelist, and I've decided that I want to spend the half of my time that I devote to less commercial projects on writing books that might be directly useful in EA terms, probably by making certain ideas about AI more widely known. At some point I decided it might be a good idea to learn more about examples of literature actually making an important difference beyond the examp... (read more)

Max_Daniel's Shortform

[Would some moral/virtue intuitions prefer small but non-zero x-risk?]

[Me trying to lean into a kind of philosophical reasoning I don't find plausible. Not important, except perhaps as a cautionary tale about what kind of things could happen if you wanted to base the case for reducing x-risk on purely non-consequentialist reasons.]

(Inspired by a conversation with Toby Newberry about something else.)

The basic observation: we sometimes think that a person achieving some success is particularly praiseworthy, remarkable, virtuous, or similar if they could h... (read more)

Showing 3 of 5 replies.

Nice post. I’m reminded of this Bertrand Russell passage:

“all the labours of the ages, all the devotion, all the inspiration, all the noonday brightness of human genius, are destined to extinction in the vast death of the solar system, and that the whole temple of Man's achievement must inevitably be buried beneath the debris of a universe in ruins ... Only within the scaffolding of these truths, only on the firm foundation of unyielding despair, can the soul's habitation henceforth be safely built.” —A Free Man’s Worship, 1903

I take Russell as arguing

... (read more)
4 · Larks · 7d

Hey, yes - I would count that nuclear disarmament breakthrough as being equal to the sum of those annual world-saving instances. So you're right that the number of events isn't fixed, but their measure (as in the % of the future of humanity saved) is bounded.
2 · Lukas_Gloor · 7d

Related: Relationships in a post-singularity future can also be set up to work well, so that the setup overdetermines any efforts by the individuals in them. To me, that takes away the whole point. I don't think this would feel less problematic if somehow future people decided to add some noise to the setup, such that relationships occasionally fail. The reason I find any degree of "setup" problematic is that it seems to emphasize the self-oriented benefits one gets out of relationships, and de-emphasize the from-you-independent identity of the other person.

It's romantic to think that there's a soulmate out there who would be just as happy to find you as you are about finding them. It's not that romantic to think about creating your soulmate with the power of future technology (or society doing this for you).

This is the "person-affecting intuition for thinking about soulmates." If the other person exists already, I'd be excited to meet them, and would be motivated to put in a lot of effort to make things work, as opposed to just giving up on myself in the face of difficulties. By contrast, if the person doesn't exist yet or won't exist in a way independent of my actions, I feel like there's less of a point/appeal to it.
vaidehi_agarwalla's Shortform

Some thoughts on stage-wise development of the moral circle

Status: Very rough; I mainly want to know whether there's already some research/thinking on this.

  • Jean Piaget, an early childhood psychologist from the 1960s, suggested a stage-sequential model of childhood development. He proposed that we progress through different levels of development, and that each stage is necessary for developing to the next.
  • Perhaps we can make a similar argument for moral circle expansion. In other words: you cannot run when you don't know how to walk. If you ask someone to believ
... (read more)
2 · David_Moss · 6d

My sense is that the idea of sequential stages for moral development is exceedingly likely to be false and, in the case of the most prominent theory of this kind, Kohlberg's, completely debunked in the sense that there was never any good evidence for it (I find the social intuitionist model much more plausible), so I don't see much appeal to trying to understand cause selection in these terms.

That said, I'm sure there's a rough sense in which people tend to adopt less weird beliefs before they adopt more weird ones, and I think that thinking about this in terms of more/less weird beliefs is likely more informative than thinking about this in terms of more/less distant areas in a "moral circle". I don't think there's a clear non-subjective sense in which causes are more or less weird, though. For example, there are many EAs who value the wellbeing of non-actual people in the distant future and not suffering wild animals, and vice versa, so which is weirder or more distant from the centre of this posited circle? I hear people assume conflicting answers to this question from time to time (people tend to assume their area is less weird).

I would also agree that getting people to agree to beliefs which are less far from what they currently believe can make them more positively inclined to subsequently adopt beliefs related to that belief which are further from their current beliefs. It seems like there are a bunch of non-competing reasons why this could be the case, though. For example:

  • Sometimes belief x1 itself gives a person epistemic reason to believe x2
  • Sometimes believing x1 increases your self-identity as a person who believes weird things, making you more likely to believe weird things
  • Sometimes believing x2 increases your affiliation with a group associated with x1 (e.g. EA), making you more likely to believe x3 which is also associated with that group

Notably, none of these require that we assume anything about moral circles or general sequenc

Yeah, I think you're right. I didn't need to actually reference Piaget (it just prompted the thought). To be clear, I wasn't trying to imply that Piaget's/Kohlberg's theories were correct or sound, but rather applying the model to another issue. I didn't make that very clear. I don't think my argument really requires the empirical implications of the model (especially because I wasn't trying to imply a moral judgement that one moral circle is necessarily better/worse). However, I didn't flag this. [meta note: I also posted it pretty quickly, didn't think it t

... (read more)
1 · Misha_Yagudin · 6d

If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.
wjaynay's Shortform

<medium-term lurker, first time poster; also, Epistemic Status = spitballing>

Has anyone encountered the idea of 'direct payment' philanthropy in the form of investment portfolios?

Some background:

  • this idea is acknowledged to be out of scope in terms of the central question of maximizing altruistic impact, in the traditional EA sense;
  • also, while inspired in part by the 'Direct Payment' model of poverty reduction, the notion is on a fundamentally different tack and would clearly not address profound poverty directly. It might hope
... (read more)
vaidehi_agarwalla's Shortform

Is anyone aware of, or planning on doing, any research related to the expected spike in interest in pandemic research due to COVID?

It would be interesting to see how much new interest is generated, and for which types of roles (e.g. doctors vs researchers). This could be useful to a) identify potential skilled biosecurity recruits, b) find out what motivated them about COVID-19, and c) figure out how neglected this will be in 5-10 years.

I'd imagine doing a survey after the pandemic starts to die down might be more valuable than right now (maybe after the

... (read more)
0 · hapless · 8d

Having done some research on post-graduate education in the past: it's surprisingly difficult to access application rates for classes of programs. Some individual schools publish their application/admission rates, but usually as advertising, so there's a fair bit of cherry-picking. It's somewhat more straightforward to access completion rates (at least in the US, universities report these to the government).

However, that MVP would still be interesting with just a few data points: if any EAs have relationships with a couple of relevant programs (in, say, biosecurity or epidemiology), it may be worth reaching out directly in 6-12 months!

A more general point, which I've seen some discussion of here, is how near-miss catastrophes prepare society for a more severe version of the same catastrophe. This would be interesting to explore both theoretically (what's the sweet spot for a near-miss to encourage further work, but not dissuade prevention policies?) and empirically. One historical example: does a civilization which experienced a bad famine suffer fewer famines in the period following that bad famine? How long is that period? In particular, that makes me think of MichaelA's recent and excellent Some history topics it might be very valuable to investigate [https://forum.effectivealtruism.org/posts/psKZNMzCyXybcoEZR/some-history-topics-it-might-be-very-valuable-to-investigate].

In the UK, could you access application numbers with a Freedom of Information request?

The Great Reset

This announcement of a World Economic Forum meeting on "The Great Reset" (https://www.weforum.org/great-reset/?fbclid=IwAR2RAlV4cUx6-p_uAJ-uSjliuQYxyqLPh3PFH0ykpfBWXAFWmRed9anA7HQ) sounds very intriguing, and possibly hopeful, but it was hard for me to get a clear sense of whether this is going to mitigate x-risks from e.g. climate change all that much, or whether it's mostly just about providing more robust safety nets against major disasters. It wasn't clear to me from reading this whether it's (significantly) net-positive in terms... (read more)

evelynciara's Shortform

Table test - Markdown

| Column A | Column B | Column C |
| -------- | -------- | -------- |
| Cell A1  | Cell B1  | Cell C1  |
| Cell A2  | Cell B2  | Cell C2  |
| Cell A3  | Cell B3  | Cell C3  |

Seems to work surprisingly well!

omnizoid's Shortform

I had an idea for a potential way for EA to gain more prominence among people who are likely to be productive in the movement. I do high school debate, and in high school debate, debaters go very deep into the literature around arguments that are useful in debates. For example, Baudrillard is pretty obscure, yet debaters go incredibly in depth, doing hours and hours of research about Baudrillard specifically. The current debate topic in the event with the most in-depth research is "The United States federal government should enact substanti... (read more)

I like this! I would recommend polishing it into a top-level post.

So-Low Growth's Shortform

I'd like feedback on an idea, if possible. I have a longer document with more detail that I'm working on, but here's a short summary that sketches out the core idea/motivation:

Potential idea: hosting a competition/experiment to find the most convincing argument for donating to long-termist organisations

Brief summary

Recently, Professor Eric Schwitzgebel and Dr Fiery Cushman conducted a study to find the most convincing philosophical/logical argument for short-term causes. By ‘philosophical/logical argument’ I mean an argument that ... (read more)

evelynciara's Shortform

If you're looking at where to direct funding for U.S. criminal justice reform:

List of U.S. states and territories by incarceration and correctional supervision rate

On this page, you can sort states (and U.S. territories) by total prison/jail population, incarceration rate per 100,000 adults, or incarceration rate per 100,000 people of all ages - all statistics as of year-end 2016.

As of 2016, the 10 states with the highest incarceration rates per 100,000 people were:

  1. Oklahoma (990 prisoners/100k)
  2. Louisiana (970)
  3. Mississippi (960)
  4. Georgia (880)
  5. Alabama (840
... (read more)
omnizoid's Shortform

Have there been any efforts from EAs to look into increasing the speed of space colonization? It seems potentially desirable as a bulwark against existential threats.

Here's an EA Global talk on the subject. I find it uncompelling. It's extraordinarily expensive, and does little to protect against the X-risks I'm most concerned about, namely AI risk and engineered pandemics.

1 · anon86 · 15d

https://www.centauri-dreams.org/2020/05/29/sublake-settlements-for-mars/#comment-204054
Lukas_Gloor's Shortform

[Is pleasure ‘good’?]

What do we mean by the claim “Pleasure is good”?

There’s an uncontroversial interpretation and a controversial one.

Vague and uncontroversial claim: When we say that pleasure is good, we mean that all else equal, pleasure is always unobjectionable, and often it is desired.

Specific and controversial claim: When we say that pleasure is good, what we mean is that, all else equal, pleasure is an end we should be striving for. This captures points like:

  • that pleasure is in itself desirable,
  • that no mental st
... (read more)

I agree that pleasure is not intrinsically good (i.e. I also deny the strong claim). I think it's likely that experiencing the full spectrum of human emotions (happiness, sadness, anger, etc.) and facing challenges are good for personal growth and therefore improve well-being in the long run. However, I think that suffering is inherently bad, though I'm not sure what distinguishes suffering from displeasure.

4 · EdoArad · 15d

Another argument that points to "pleasure is good" is that people and many animals are drawn to things that give them pleasure, and that people generally communicate about their own pleasurable states as good. Given a random person off the street, I'm willing to bet that after introspection they will suggest that they value pleasure in the strong sense. So while this may not be universally accepted, I still think it could hold weight.

Also, a symmetric statement can be made regarding suffering, which I don't think you'd accept. People who say "suffering is bad" claim that we can establish this by introspection about the nature of suffering. From reading Tranquilism, I think that you'd respond to these by saying that people confuse "pleasure is good" with an internal preference or craving for pleasure, while suffering is actually intrinsically bad. But taking an epistemically modest approach would require quite a bit of evidence for that, especially as part of the argument is that introspection may be flawed. I'm curious as to how strongly you hold this position.

(Personally, I'm totally confused here, but I lean toward the strong sense of "pleasure is good" while thinking that overall pleasure holds little moral weight.)
16 · MichaelStJules · 15d

It's worth pointing out that this association isn't perfect. See [1] [https://www.lesswrong.com/posts/zThWT5Zvifo5qYaca/the-neuroscience-of-pleasure] and [2] [https://longtermrisk.org/hedonistic-vs-preference-utilitarianism/] for some discussion.

Tranquilism allows that if someone is in some moment neither drawn to (craving) (more) pleasurable experiences nor experiencing pleasure (or as much as they could be), this isn't worse than if they were experiencing (more) pleasure. If more pleasure is always better, then contentment is never good enough, but to be content is to be satisfied, to feel that it is good enough or not feel that it isn't good enough. Of course, this is in the moment, and not necessarily a reflective judgement.

I also approach pleasure vs suffering in a kind of conditional way, like an asymmetric person-affecting view, or "preference-affecting view": I would say that something only matters if it matters (or will matter) to someone, and an absence of pleasure doesn't necessarily matter to someone who isn't experiencing pleasure, and certainly doesn't matter to someone who does not and will not exist, and so we have no inherent reason to promote pleasure. On the other hand, there's no suffering unless someone is experiencing it, and according to some definitions of suffering, it necessarily matters to the sufferer. (A bit more on this argument here [https://forum.effectivealtruism.org/posts/GK7Qq4kww5D8ndckR/michaelstjules-s-shortform?commentId=Y33ZYtA45MeBNznzX], but applied to good and bad lives.)
CarlyKay's Shortform

Graphic Design / Communications Specialist: Hi everyone! I'm an EA and graphic designer who would like to start doing work for EA orgs. I am flexible and open to volunteer or paid work. I have extensive experience in Adobe Creative Suite as well as photography, videography, basic web design, infographics, newsletters, etc. My portfolio is www.CarlyKemp.com. If anyone has a need or knows of an organization that has a need please let me know! Thank you!

Lukas_Gloor's Shortform

[Takeaways from Covid forecasting on Metaculus]

I’m probably going to win the first round of the Li Wenliang forecasting tournament on Metaculus, or maybe get second. (My screen name currently shows up in second place on the leaderboard, but that’s due to a glitch that hasn’t been resolved yet because one of the resolutions depends on a strongly delayed source.)

With around 52 questions, this was the largest forecasting tournament on the virus. It ran from late February until early June.

I learned a lot during the tournament. Besides claiming credit, I want to share so... (read more)

I know it might not be what you're looking for, but congratulations!

Lukas_Gloor's Shortform

[I’m an anti-realist because I think morality is underdetermined]

I often find myself explaining why anti-realism is different from nihilism / “anything goes.” I wrote lengthy posts in my sequence on moral anti-realism (2 and 3) partly about this point. However, maybe the framing “anti-realism” is needlessly confusing because some people do associate it with nihilism / “anything goes.” Perhaps the best short explanation of my perspective goes as follows:

I’m happy to concede that some moral facts exist ... (read more)

I think if you concede that some moral facts exist, it might be more accurate to call yourself a moral realist. The indeterminacy of morality could be a fundamental feature, allowing for many more acts to be ethically permissible (or no worse than other acts) than with a linear (complete) ranking. I think consequentialists are unusually prone to try to rank outcomes linearly.

I read this recently, which describes how moral indeterminacy can be accommodated within moral realism, although it was kind of long for what it had to say. I think expert agreement (o... (read more)

Lukas_Gloor's Shortform

[Are underdetermined moral values problematic?]

If I think my goals are merely uncertain, but in reality they are underdetermined and the contributions I make to shaping the future will be driven, to a large degree, by social influences, ordering effects, lock-in effects, and so on, is that a problem?

I can’t speak for others, but I’d find it weird. I want to know what I’m getting up for in the morning.

On the other hand, because it makes it easier for the community to coordinate and pull things in the same direction, there's a sense in which underdetermined values are beneficial.
