Shortform Content [Beta]

James Montavon's Shortform

National Year of Service for Free College as an EA Idea

This is mostly anecdotal and n of 1; interested to hear the community's thoughts. 

  1. During high school, I went on mission trips through my church. One of these I credit with starting me thinking about EA-type trade-offs and values: we went to Guatemala, and spent the first week with La Casa del Alfarero, working with the absolute poorest of the poor, people living and working inside the Guatemala City garbage dump. This organization did real research into what interventions they could do to most he
... (read more)
DanielFilan's Shortform

Sounds like if you could cheaply get rid of anti-money-laundering laws, this would be pretty effective altruism:
> Necessarily applying a broad brush, the current anti-money laundering policy prescription helps authorities intercept about $3 billion of an estimated $3 trillion in criminal funds generated annually (0.1 percent success rate), and costs banks and other businesses more than $300 billion in compliance costs, more than a hundred times the amounts recovered from criminals.
Found at this Marginal Revolution post.
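The quoted ratios are easy to sanity-check. A quick back-of-the-envelope calculation, using the rounded figures from the excerpt (which are estimates, not precise data):

```python
# Rough check of the anti-money-laundering figures quoted above
# (all numbers are the excerpt's estimates, not authoritative data).
intercepted = 3e9        # ~$3 billion in criminal funds intercepted annually
criminal_funds = 3e12    # ~$3 trillion in criminal funds generated annually
compliance_costs = 3e11  # ~$300 billion in annual compliance costs

success_rate = intercepted / criminal_funds        # fraction of criminal funds caught
cost_ratio = compliance_costs / intercepted        # compliance dollars per dollar recovered

print(f"success rate: {success_rate:.1%}")
print(f"compliance cost per dollar recovered: {cost_ratio:.0f}x")
```

This reproduces the 0.1 percent success rate and the "more than a hundred times" cost multiple claimed in the quote.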

5Larks1dSeems plausible. Presumably some crime is deterred by these rules, which would leave the $3bn an under-estimate of the benefit. On the other hand, without the rules we might see more innovation in financial services, which would suggest the $300bn is an under-estimate of the costs. Unfortunately, I think it is very unlikely we could make any progress in this regard, as governments do not like giving up power, and the proximate victims are not viewed sympathetically, even if the true incidence of the costs is broad. There have been attempts to reform these rules in the past, as they particularly harm poor immigrants trying to send cash home, but as far as I am aware these attempts have been almost entirely unsuccessful.
1DanielFilan1dI'd imagine that the crime deterred can't be worth too much more than $3bn - although perhaps if you steal $x, the social cost is much larger than $x.

Poaching, murder, terrorism, and sex trafficking all cause more than just financial harm, although I don't know what portion of the crime prevented by AML laws is these things. Authoritarian states like the PRC, which has been systematically oppressing Muslims and Tibetans, participate in money laundering, too. Decriminalization of drugs and sex work would reduce the amount of illicit drug and sex trafficking, since legal producers would outcompete the criminal organizations, while growing the economy.

Prabhat Soni's Shortform

Socrates' case against democracy

https://bigthink.com/scotty-hendricks/why-socrates-hated-democracy-and-what-we-can-do-about-it

Socrates makes the following argument:

  1. Just like we only allow skilled pilots to fly airplanes, licensed doctors to operate on patients, or trained firefighters to use fire engines, we should only allow informed voters to vote in elections.
  2. "The best argument against democracy is a five minute conversation with the average voter". Half of American adults don’t know that each state gets two senators and two thirds don’t know w
... (read more)

What's the proposed policy change? Making understanding of elections a requirement to vote?

Chi's Shortform

Observation about EA culture and my journey to develop self-confidence:

Today I noticed an eerie similarity between things I'm trying to work on to become more confident and effective altruism culture. For example, I am trying to reduce my excessive use of qualifiers. At the same time, qualifiers are very popular in effective altruism. It was very enlightening when a book asked me to guess whether the following piece of dialogue was from a man or woman:

'I just had a thought, I don't know if it's worth mentioning...I just had a thought about [X] on this one,... (read more)

3Linch1dWhy is this comment downvoted? :)
3EmeryCooper1dI didn't downvote your comment, but I did feel a bit like it wasn't really addressing the points Chi was making, so if I had to guess, I'd say that might be why.

If my comment didn't seem pertinent, I think I most likely misunderstood the original points then. Will reread and try to understand better.

finm's Shortform

I think it can be useful to motivate longtermism by drawing an analogy to the prudential case — swapping out the entire future for your future, and only considering what would make your life go best.

Suppose that one day you learned that your ageing process had stopped. Maybe scientists identified the gene for ageing, and found that your ageing gene was missing. This amounts to learning that you now have much more control over how long you live than previously, because there's no longer a process imposed on you from outside that puts a guaranteed ceiling on... (read more)

vaidehi_agarwalla's Shortform

Reasons for/against Facebook & plans to migrate the community out of there

Epistemic Status: My very rough thoughts. I am confident of the reasons for/against, but the last section is mostly speculation, so I won't attempt to clarify my certainty levels.

Reasons for moving away from Facebook

  • Facebook promotes bad discussion norms (see Point 4 here)
  • Poor movement knowledge retention
  • Irritating to navigate: It's easy to not be aware that certain groups exist (since there are dozens) and it's annoying to filter through all the other stuff in Facebook to get
... (read more)

Another possible reason against might be:
In some countries there is a growing number of people who intentionally don't use Facebook. Even if their reasons may be flawed, this could make recruiting more difficult. While I perceive this as quite common among German academics, Germany might also just be an outlier.

Moving certain services found on Facebook to other sites: [...], making it easier for people to reach out to each other (e.g. EA Hub Community directory). Then it may be easier to move whatever is left (e.g. discussions) to a new pl

... (read more)
2Aaron Gertler3dI don't think the Forum is likely to serve as a good "group discussion platform" at any point in the near future. This isn't about culture so much as form; we don't have Slack's "infinite continuous thread about one topic" feature, which is also present on Facebook and Discord, and that seems like the natural form for an ongoing discussion to take. You can configure many bits of the Forum to feel more discussion-like (e.g. setting all the comment threads you see to be "newest first"), but it feels like a round peg/square hole situation. On the other hand, Slack seems reasonable for this!
1Tsunayoshi2dThere is also a quite active EA Discord server, which serves the function of "endless group discussions" fairly well, so another Slack workspace might have negligible benefits.
antimonyanthony's Shortform

Crosspost: "Tranquilism Respects Individual Desires"

I wrote a defense of an axiology on which an experience is perfectly good to the extent that it is absent of craving for change. This defense follows in part from a reductionist view of personal identity, which is usually considered in EA circles to be in support of total symmetric utilitarianism, but I argue that this view lends support to a form of negative utilitarianism.

ag4000's Shortform

I can also highly recommend Deep Work by Cal Newport. His main thesis is that 'real' work only happens, and productivity is only high, when you work for a few hours at a time rather than in 15-minute blocks with constant interruptions. Edit: I should have read the linked post first, haha; so see this as another vote for Cal Newport.

richard_ngo's Shortform

What is the strongest argument, or the best existing analysis, that Givewell top charities actually do more good per dollar than good mainstream charities focusing on big-picture issues (e.g. a typical climate change charity, or the US Democratic party)?

If the answer is "no compelling case has been made", then does the typical person who hears about and donates to Givewell top charities via EA understand that?

If the case hasn't been made [edit: by which I mean, if the arguments that have been made are not compelling enough to justify the claims being made], and most donors don't understand that, then the way EAs talk about those charities is actively misleading, and we should apologise and try hard to fix that.

2AGB3dI think we’re still talking past each other here. You seem to be implicitly focusing on the question ‘how certain are we these will turn out to be best’. I’m focusing on the question ‘Denise and I are likely to make a donation to near-term human-centric causes in the next few months; is there something I should be donating to above Givewell charities’. Listing unaccounted-for second order effects is relevant for the first, but not decision-relevant until the effects are predictable-in-direction and large; it needs to actually impact my EV meaningfully. Currently, I’m not seeing a clear argument for that. ‘Might have wildly large impacts’, ‘very rough estimates’, ‘policy can have enormous effects’...these are all phrases that increase uncertainty rather than concretely change EVs and so are decision-irrelevant. (That’s not quite true; we should penalise rough things’ calculated EV more in high-uncertainty environments due to winners’ curse effects, but that’s secondary to my main point here). Another way of putting it is that this is the difference between one’s confidence level that what you currently think is best will still be what you think is best 20 years from now, versus trying to identify the best all-things-considered donation opportunity right now with one’s limited information. So concretely, I think it’s very likely that in 20 years I’ll think one of the >20 alternatives I’ve briefly considered will look like it was a better use of my money that Givewell charities, due to the uncertainty you’re highlighting. But I don’t know which one, and I don’t expect it to outperform 20x, so picking one essentially at random still looks pretty bad. A non-random way to pick would be if Open Phil, or someone else I respect, shifted their equivalent donation bucket to some alternative. AFAIK, this hasn’t happened. That’s the relevance of those decisions to me, rather than any belief that they’ve done a secret Uber-Analysis.
2richard_ngo3dHmm, I agree that we're talking past each other. I don't intend to focus on ex post evaluations over ex ante evaluations. What I intend to focus on is the question: "when an EA make the claim that GiveWell charities are the charities with the strongest case for impact in near-term human-centric terms, how justified are they?" Or, relatedly, "How likely is it that somebody who is motivated to find the best near-term human-centric charities possible, but takes a very different approach than EA does (in particular by focusing much more on hard-to-measure political effects) will do better than EA?" In my previous comment, I used a lot of phrases which you took to indicate the high uncertainty of political interventions. My main point was that it's plausible that a bunch of them exist which will wildly outperform GiveWell charities. I agree I don't know which one, and you don't know which one, and GiveWell doesn't know which one. But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments publicly, in a way that we could learn from if we were more open to less quantitative analysis? (Alternatively, could someone know if they tried? But let's go with the former for now.) In other words, consider two possible worlds. In one world GiveWell charities are in fact the most cost-effective, and all the people doing political advocacy are less cost-effective than GiveWell ex ante (given publicly available information). In the other world there's a bunch of people doing political advocacy work which EA hasn't supported even though they have strong, well-justified arguments that their work is very impactful (more impactful than GiveWell's top charities), because that impact is hard to quantitatively estimate. What evidence do we have that we're not in the second world? In both worlds GiveWell would be saying roughly the same thing (because they have a high bar for rigour). 
Would OpenPhi

But for the purposes of my questions above, that's not the relevant factor; the relevant factor is: does someone know, and have they made those arguments [that specific intervention X will wildly outperform] publicly, in a way that we could learn from if we were more open to less quantitative analysis?


I agree with this. I think the best way to settle this question is to link to actual examples of someone making such arguments. Personally, my observation from engaging with non-EA advocates of political advocacy is that they don't actually make a case; when ... (read more)

WilliamKiely's Shortform

#DonationRegret #Mistakes

Something it occurred to me it might be useful to tell others about that I haven't yet said anywhere:

The only donation I've really regretted making was one of the first significant donations I made: On May 23, 2017, I donated $3,181.00 to Against Malaria Foundation. It was my largest donation to date and my first donation after taking the GWWC pledge (in December 2016).

I primarily regretted and regret making this donation not because I later updated my view toward realizing/believing that I could have done more good by donating th... (read more)

Chi's Shortform

I just wondered whether there is systematic bias in how much advice there is in EA for people who tend to be underconfident versus people who tend to be appropriately confident or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, it's mostly because they seem harmful for underconfident people to hear.

Way in which this could be true and bad: people tend to post advice that would be helpful to themselves, and underconfident people tend to not post advice/things in general.

Way in which this could b... (read more)

Chi's Shortform

Should we interview people with high status in the effective altruism community (or make other content) featuring their (personal) story, how they have overcome challenges, and live into their values?

Background: I think it's no secret that effective altruism has some problems with community health. (This is not to belittle the great work that is done in this space.) Posts that talk about personal struggles, for example related to self-esteem and impact, usually get highly upvoted. While many people agree that we should reward dedication and that the thing ... (read more)

SiebeRozendal's Shortform

This is a small write-up of when I applied for a PhD in Risk Analysis 1.5 years ago. I can elaborate in the comments!

I believed doing a PhD in risk analysis would teach me a lot of useful skills to apply to existential risks, and it might allow me to directly work on important topics. I worked as a Research Associate on the qualitative side of systemic risk for half a year. I ended up not doing the PhD because I could not find a suitable place, nor do I think pure research is the best fit for me. However, I still believe more EAs should study somethi... (read more)

Aww yes, people writing about their life and career experiences! Posts of this type seem to have some of the best ratio of "how useful people find this" to "how hard it is to write" -- you share things you know better than anyone else, and other people can frequently draw lessons from them.

Aidan O'Gara's Shortform

Three Scenarios for AI Progress

How will AI develop over the next few centuries? Three scenarios seem particularly likely to me: 

  • "Solving Intelligence": Within the next 50 years, a top AI lab like DeepMind or OpenAI builds a superintelligent AI system, by using massive compute within our current ML paradigm.
  • "Comprehensive AI Systems": Over the next century or few, computers keep getting better at a bunch of different domains. No one AI system is incredible at everything, each new job requires fine-tuning and domain knowledge and human-in-the-loop super
... (read more)
3EdoArad5dI like the intuitive analysis of the no-takeoff scenario, and find that I also haven't really imagined this as a concrete possibility. Generally, I like that you have presented clearly distinct scenarios and that the logic is explicit and coherent. Two thoughts that came to mind: Somehow, in the CAIS scenario, I also expect the rapid growth and the delegation of some economic and organizational work to AI to carry some weird risks that involve something like humanity getting pushed out of the economic ecosystem while many autonomous systems are self-sustaining and stuck in a stupid and lifeless revenue-maximizing loop. I couldn't really pinpoint an x-risk scenario here. Recursive self-improvement can also happen over long periods of time, not necessarily leading to a fast takeoff, especially if the early gains are much easier than later gains (which might make more sense if we think of AI capability development as resulting mostly from computational improvements rather than algorithmic ones).

Ah! Richard Ngo had just written something related to the CAIS scenario :)

Awah Eric's Shortform

Updated question (slight update in wording):

I am a Christian pastor for several rural communities of the West Region of Cameroon. I regularly come across opportunities to counterfactually save lives with a few hundred USD (people are dying because I do not have the funds to help). I can select these opportunities.

I am looking for Christian donations: funds that are either not otherwise pledged, or that would otherwise be used for lower-utility purposes. Please let me know if you know of a local Christian church that could be interested in cost-effective global health interventions.

I will be very happy t... (read more)

ag4000's Shortform

Sorry if this isn't directly related to EA.  What is a good way to measure one's own productivity?  I tend to measure the amount of time that I spend doing productive activities, but the discussion here seems to make a convincing case that measuring hours worked isn't the best method to do so.  

7Aaron Gertler10dThis is a really deep topic, but certainly worth asking about; if you're working on an impactful plan, raising your productivity raises your impact. My favorite starting points for thinking about productivity:
  • Productivity: A summary of what we know (https://www.lesswrong.com/posts/P3zrurj5hHKFKDL3M/productivity-working-towards-a-summary-of-what-we-know) (LessWrong)
  • The book "Getting Things Done" (which is referenced in the above post, but is quite powerful on its own -- the best book I've read about productivity, out of many)
  • The Complice blog and app (https://blog.complice.co/) (a bit different from "standard" productivity systems, but as a self-contained system, it works well for many people I know)

Thanks so much! I've been doing some stuff related to GTD, but haven't read the whole book -- will do so.

MichaelA's Shortform

Have any EAs involved in GCR-, x-risk-, or longtermism-related work considered submitting writing for the Bulletin? Should more EAs consider that?

I imagine many such EAs would have valuable things to say on topics the Bulletin's readers care about, and that they could say those things well and in a way that suits the Bulletin. It also seems plausible that this could be a good way of: 

  • disseminating important ideas to key decision-makers and thereby improving their decisions
    • either through the Bulletin articles themselves or through them allowing one to
... (read more)
6RyanCarey10d
  • https://thebulletin.org/biography/andrew-snyder-beattie/
  • https://thebulletin.org/biography/gregory-lewis/
  • https://thebulletin.org/biography/max-tegmark/

Thanks for those links!

(I also realise now that I'd already seen and found useful Gregory Lewis's piece for the Bulletin, and had just forgotten that that's the publication it was in.)

4MichaelA10dHere's the Bulletin's page on writing for them (https://thebulletin.org/write-for-the-bulletin/). Some key excerpts: And here's the page on the Voices of Tomorrow feature (https://thebulletin.org/2015/02/voices-of-tomorrow-and-the-leonard-m-rieser-award/):
RogerAckroyd's Shortform

On the 80,000 Hours website they have a profile on factory farming, where they estimate that ending factory farming would increase the expected value of the future of humanity by between 0.01% and 0.1%. I realize one cannot hope for precision in these things, but I am still curious whether anyone knows more about the reasoning process that went into making that estimate.

Note: I don't work for 80,000 Hours, and I don't know how closely the people who wrote that article/produced their "scale" table would agree with me.

For that particular number, I don't think there was an especially rigorous reasoning process. As they say when explaining the table in their scale metric, "the tradeoffs across the columns are extremely uncertain". 

That is, I don't think that there's an obvious chain of logic from "factory farming ends" to "the future is 0.01% better". Figuring out what constitutes "the value of the future" is too big a p... (read more)

Savva_Kerdemelidis's Shortform

Looks interesting! I think you might have some interest in MichaelA's shortform about impact certificates. I saw you mentioned some orgs that are in this space. You may also want to check out Dr. Aidan Hollis' paper, An Efficient Reward System for Pharmaceutical Innovation, and his organization which tries to pay for success, the Health Impact Fund.

Buck's Shortform

I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.

I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.

But I don’t think that this prediction is true: I... (read more)

I tried searching the literature a bit, as I'm sure that there are studies on the relation between rationality and altruistic behavior. The most relevant paper I found (from about 20 minutes of search and reading) is The cognitive basis of social behavior (2015). It seems to agree with your hypothesis. From the abstract:

Applying a dual-process framework to the study of social preferences, we show in two studies that individuals with a more reflective/deliberative cognitive style, as measured by scores on the Cognitive Reflection Test (CRT), are more likely

... (read more)