All of elifland's Comments + Replies

How likely is World War III?

5 forecasters from Samotsvety Forecasting discussed the forecasts in this post.

 

First, I estimate that the chance of direct Great Power conflict this century is around 45%. 

Our aggregated forecast was 23.5%. Considerations discussed were the changed incentives in the nuclear era, possible causes (climate change, AI, etc.) and the likelihood of specific wars (e.g. US-China fighting over Taiwan).

 

Second, I think the chance of a huge war as bad or worse than WWII is on the order of 10%.


Our aggregated forecast was 25%, though we were unsure if t... (read more)

Impactful Forecasting Prize for forecast writeups on curated Metaculus questions

Hey, thanks for sharing these other options. I agree that one of these choices makes more sense than forecasting in many cases, and likely (90%) in the majority. But I still think forecasting is a solid contender and plausibly (25%) the best in a plurality of cases. Some reasons:

  1. Which activity is best likely depends a lot on which is easiest to actually start doing, because I think the primary barrier to doing most of these usefully is "just" actually getting started and completing something. Forecasting may (40%)[1] be the most fun and least intimidat
... (read more)
I feel anxious that there is all this money around. Let's talk about it

It also strikes against recent work on patient philanthropy, which is supported by Will MacAskill's argument that we are not living in the most influential time in human history.

 

Note that patient philanthropy includes investing in resources besides money that will allow us to do more good later; e.g. the linked article lists "global priorities research" and "Building a long-lasting and steadily growing movement" as promising opportunities from a patient longtermist view.

Looking at the Future Fund's Areas of Interest, at least 5 of the 10 strike me as... (read more)

Impactful Forecasting Prize Results and Reflections

At first I thought the scenarios were separate and would be combined with an OR to get an overall probability, so I was confused when you looked at only Scenario 1 to determine your probability of technological feasibility.

I was also confused about why you assigned 30% to polygenic scores reaching 80% predictive power in Scenario 2 while assigning 80% to reaching saturation at 40% predictive power in Scenario 1, because when I read 80% to reach saturation at 40% predictive power I read this as "capping out at around 40%" which would o... (read more)

2Ryan Beck2mo
These are good points and helpful, thanks! I agree I wasn't clear about viewing the scenarios exclusively in the initial comment; I think I made that a little clearer in the follow-up.

Ah, I think I see how that's confusing. My use of the term saturation probably confuses things too much. My understanding is that saturation is the likely maximum that could be explained with current approaches, so my forecast was an 80% chance we get to the 40% "saturation" level, but I think there's a decent chance our technology/understanding advances so that more than the saturation can be explained, and I gave a 30% chance that we reach 80% predictive power.

That's a good point about iterated embryo selection, I totally neglected that. My initial thought is it would probably overlap a lot with the scenarios I used, but I should have given that more thought and discussed it in my comment.
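To make the two readings of the scenario structure concrete, here is a minimal sketch. The probabilities are the ones from this exchange; the independence assumption in the OR case is purely illustrative, not something either commenter asserted.

```python
# Hypothetical sketch of two ways to combine the scenario probabilities
# discussed above (numbers from the thread; structure assumed).

p_s1 = 0.80  # P(reach ~40% predictive power, the "saturation" level)
p_s2 = 0.30  # P(technology advances enough to reach ~80% predictive power)

# If the scenarios were independent routes to feasibility, you'd OR them:
p_or = 1 - (1 - p_s1) * (1 - p_s2)  # = 0.86

# If instead Scenario 2 is nested inside Scenario 1 (reaching 80% implies
# having passed 40%), overall feasibility is just the weaker threshold:
p_nested = p_s1  # = 0.80

print(f"OR of independent scenarios: {p_or:.2f}")
print(f"Nested interpretation:       {p_nested:.2f}")
```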
Impactful Forecasting Prize Results and Reflections

Thanks for sharing Ryan, and that makes sense in terms of another unintended consequence of our judging criteria; good to know for future contests.

1Ryan Beck2mo
No problem! Also if you're interested in elaborating about why my scenarios were unintuitive I'd appreciate the feedback, but if not no worries!
Samotsvety Nuclear Risk Forecasts — March 2022

Great point. Perhaps ideally we should have reported the mean of this type of distribution, rather than our best-guess percentages. I'm curious if you think I'm underconfident here?

Edit: Yeah I think I was underconfident, would now be at ~10% and ~0.5% for being 1 and 2 orders of magnitude too low respectively, based primarily on considerations Misha describes in another comment placing soft bounds on how much one should update from the base rate. So my estimate should still increase but not by as much (probably by about 2x, taking into account possibility... (read more)

Samotsvety Nuclear Risk Forecasts — March 2022

The estimate being too low by 1-2 orders of magnitude seems plausible to me independently (e.g. see the wide distribution in my Squiggle model [1]), but my confidence in the estimate is increased by its being the aggregate of several excellent forecasters, who were reasoning independently to some extent. Given that, my all-things-considered view is that 1 order of magnitude off[2] feels plausible but not likely (~25%?), and 2 orders of magnitude seems very unlikely (~5%?).

  1. ^

    EDIT: actually looking closer at my Squiggle model I think it should be mo

... (read more)
6kokotajlod2mo
That makes sense. 2 OOMs is clearly too high now that you mention it. But I stand by my 1 OOM claim, until people convince me that this really is much more like an ordinary business-as-usual month than I currently think it is. Which could totally happen! I am not by any means an expert on this stuff; this is just my hot take!
8Misha_Yagudin2mo
I will just note that 10x with 20% (= 25% - 5%) and 100x with 5% would/should dominate the EV of your estimate: EV = 0.75 * X + 0.20 * 10X + 0.05 * 100X = 0.75X + 2X + 5X = 7.75X.
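A quick check of Misha's arithmetic, with the values taken directly from the comment above:

```python
# Reproducing the expected-value point: if an estimate X has a 20% chance
# of being 10x too low and a 5% chance of being 100x too low, those tails
# dominate the expectation.

p_1oom, p_2oom = 0.25 - 0.05, 0.05   # 20% and 5%, as in the comment
p_ok = 1 - p_1oom - p_2oom           # 75%

multiplier = p_ok * 1 + p_1oom * 10 + p_2oom * 100
print(multiplier)  # 7.75 -> the EV is ~7.75x the point estimate
```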
Samotsvety Nuclear Risk Forecasts — March 2022

I agree the risk should be substantially higher than for an average month and I think most Samotsvety forecasters agree. I think a large part of the disagreement may be on how risky the average month is.

From the post:

(a) may be due to having a lower level of baseline risk before adjusting up based on the current situation. For example, Luisa Rodríguez's analysis puts the chance of a US/Russia nuclear exchange at .38%/year. We think this seems too high for the post-Cold War era after new de-escalation methods have been implemented and lessons have bee

... (read more)
2kokotajlod2mo
OK, thanks!
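As a rough illustration of how an annual baseline like the Rodríguez figure quoted above translates to a monthly risk before any situational adjustment, here is a minimal sketch. The crisis multiplier is purely hypothetical, not a Samotsvety number.

```python
# Hedged sketch: converting an annual risk estimate to a monthly baseline.
# The 0.38%/year figure is from Luisa Rodríguez's analysis quoted above;
# the situational multiplier is illustrative only.

annual_risk = 0.0038

# Assuming risk is roughly uniform across months:
monthly_baseline = 1 - (1 - annual_risk) ** (1 / 12)  # ~0.032% per month

crisis_multiplier = 10  # hypothetical adjustment for the current situation
monthly_now = min(monthly_baseline * crisis_multiplier, 1.0)

print(f"baseline: {monthly_baseline:.5%}, adjusted: {monthly_now:.4%}")
```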
The Future Fund’s Project Ideas Competition

Adversarial collaborations on important topics

Epistemic Institutions

There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, on which reasonable people have very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working to clarify their disagreement and either resolve the disagreement or identify an experiment/observation that would resolve it. We are especially excited about comb... (read more)

1brb2433mo
What topics? Which are not yet covered? (E.g., militaries already talk about peace.) What adversaries? Are they rather collaborators (such as considering mergers and acquisitions and industry-interest benefits for private actors, and trade and alliance advantages for public actors)? Do you mean decisionmaker-nondecisionmaker collaborations? The issue is that systems are internalized, so from the nondecisionmakers you can get "I want to be as powerful over others as the decisionmakers", or an inability to express or know their preferences (a chicken is in the cage, so what can it say; a cricket is on the farm, what does it know about its preferences). Probably, adversaries would prefer to talk about 'how can we get the other to give us profit' rather than 'how can we make impact', since the agreement is 'not impact, profit?'
Theses on Sleep

I found this thought-provoking. I'd be curious to hear more about your recommendations for readers. I'm wondering:

  1. Would you recommend ~all readers try decreasing their sleep to ~6 hours a night and observe the effects? Or should they slowly decrease until the effects are negative?
    1. If not, how should they decide whether it makes sense for them?
  2. What percentage of readers do you estimate would be overall more productive with ~6 hours of sleep than their unrestricted amount?
    1. How much individual variability do you think there is here?

 

Some background is that... (read more)

4Misha_Yagudin3mo
Thanks to Nuño Sempere, the questions are now also viewable as a Metaforecast dashboard [https://metaforecast.org/dashboards?dashboardId=6fa27dad1e].
We are giving $10k as forecasting micro-grants

They generally only accept applications from registered charities, but speculation grants (a) might be a good fit for smaller projects (40%).

 

My read is that speculation grants are a way for projects applying to SFF to get funding more quickly, rather than a way for projects that aren't eligible for SFF to get funding (I believe SFP serves this purpose).

2NunoSempere4mo
I agree that this is what it says on the page, but I think that for a promising enough project, rules could be twisted.
Impactful Forecasting Prize for forecast writeups on curated Metaculus questions

These results are pretty interesting! I'm surprised at how much optimism there is about 25 unique people/groups compared to 100 total entries; my intuition for expecting an average of about 4 entries per person/group was that most would only submit 1-2, but it only takes a few to submit on many questions to drive the average up substantially.
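For concreteness, here is a toy split (entirely made up) showing how a few prolific participants can push the average to ~4 even when most submit only 1-2 entries:

```python
# Toy illustration (hypothetical split): most entrants submit on 1-2
# questions, but a few prolific ones still push the average to ~4.

entries_per_person = [1] * 12 + [2] * 8 + [12] * 3 + [18] * 2  # 25 people
total = sum(entries_per_person)
print(total, total / len(entries_per_person))  # 100 entries, mean 4.0
```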

What's your prior probability that "good things are good" (for the long-term future)?

My answer to your question depends on how you define "good for the long-term future". When I think about evaluating the chance an action is good, including its long-run effects, specifying a few more dimensions matters to me. It feels like several combinations of these could be reasonable and would often lead to fairly different probabilities.

Expected value vs. realized value

Does "good for the long-term future" mean: good in expectation, or actually having good observed effects?

What is the ground truth evaluation?

Is the ground truth evaluation one that would... (read more)

2WilliamKiely2mo
I came here to say this--in particular that I think my prior probability for "good things are good for the long-term future" might be very different than my prior for "good things are good for the long-term future in expectation", so it matters a lot which is being asked. I think the former is probably much closer to 50% than the latter. These aren't my actual estimates, but for illustrative purposes I think the numbers might be something like 55% and 90%. I agree with Eli that my actual estimates would also depend on the other questions Eli raises.

Another factor that might affect my prior a lot is what the reference class of "good things" looks like. In particular, are we weighting good things based on how often these good things are done / how much money is spent on them, or weighting them once per unique thing as if someone were generating a list of good things? E.g. Does "donation to a GiveWell top charity" count a lot, or once? (Linch's wording at the end of the post makes it seem like he means the latter.)

Perhaps it would be helpful to Linch's question to generate a list of 10-20 "good things" and then actually think about each one carefully and estimate the probability that it is good for the future, and good for the future in expectation, and use these 10-20 data points to estimate what one's prior should be. (Any thoughts on whether this would be a worthwhile research activity, Linch or others reading this?)
Forecasting Newsletter: Looking back at 2021.

Seconding Nuño's assessment that this comment is awesome. While waiting for his response I'll butt in with some quick off-the-cuff takes of my own.

 

On why no countries use prediction markets / forecasting to make crucial decisions:

My first reaction is "idk, but your comment already provides a really great breakdown of options that I would be excited to see turned into a top-level post."

If I had to guess I think it's some combination of universal human biases and fundamental issues with the value of prediction markets at present. On human biases, it see... (read more)

elifland's Shortform

Really appreciate hearing your perspective!

On causal evidence of RCTs vs. observational data: I'm intuitively skeptical of this but the sources you linked seem interesting and worthwhile to think about more before setting an org up for this. (Edited to add:) Hearing your view already substantially updates mine, but I'd be really curious to hear more perspectives from others with lots of experience working on this type of stuff, to see if they'd agree, then I'd update more. If you have impressions of how much consensus there is on this question that w... (read more)

2mnoetel4mo
I should clarify: RCTs are obviously generally >> even a very well controlled propensity score matched quasi-experiment, but I just don't think the former is 'bulletproof' anymore. The former should update your priors more but if you look at the variability among studies in meta-analyses, even among low-risk-of-bias RCTs, I'm now much less easily swayed by any single one.
elifland's Shortform

This all makes sense to me overall. I'm still excited about this idea (slightly less so than before) but I think/agree there should be careful consideration of which interventions make the most sense to test.

I think it's really telling that Google and Amazon don't have internal testing teams to study productivity/management techniques in isolation. In practice, I just don't think you learn that much, for the cost of it.

What these companies do do, is to allow different managers to try things out, survey them, and promote the seemingly best practices throug

... (read more)
elifland's Shortform

A variant I'd also be excited about (could imagine even more so; could go either way after more reflection) that could be contained within the same org or separate: the same thing but for companies (particularly startups). Edit to clarify: test policies/strategies across companies, not on people within companies.

elifland's Shortform

I think the obvious answer is that doing controlled trials in these areas is a whole lot of work/expense for the benefit.

Some things like health effects can take a long time to play out; maybe 10-50 years. And I wouldn't expect the difference to be particularly amazing. (I'd be surprised if the average person could increase their productivity by more than ~20% with any of those)

 

I think our main disagreement is around the likely effect sizes; e.g. I think blocking out focused work could easily have an effect size of >50% (but am pretty uncertain wh... (read more)

3Ozzie Gooen4mo
The health interventions seem very different to me than the productivity interventions. The health interventions have issues with long time-scales, which productivity interventions don't have as much. However, productivity interventions have major challenges with generality. When I've looked into studies around productivity interventions, often they're done in highly constrained environments, or environments very different from mine, and I have very little clue what to really make of them. If the results are highly promising, I'm particularly skeptical, so it would take multiple strong studies to make the case.

I think it's really telling that Google and Amazon don't have internal testing teams to study productivity/management techniques in isolation. In practice, I just don't think you learn that much, for the cost of it. What these companies do do, is to allow different managers to try things out, survey them, and promote the seemingly best practices throughout. This happens very quickly. I'm sure we could make tools to make this process go much faster. (Better elicitation, better data collection of what already happens, lots of small estimates of impact to see what to focus more on, etc.)

In general, I think traditional scientific experimentation on humans is very inefficient, and we should be aiming for much more efficient setups. (But we should be working on these!)
elifland's Shortform

Votes/considerations on why this is a good or bad idea are also appreciated!

elifland's Shortform

Reflecting a little on my shortform from a few years ago, I think I wasn't ambitious enough in trying to actually move this forward.

I want there to be an org that does "human challenge"-style RCTs across lots of important questions that are extremely hard to get at otherwise, e.g. (top 2 are repeated from my previous shortform; edited to clarify: these are some quick examples off the top of my head, and there should be more consideration of which are the best for this org):

  1. Health effects of veganism
  2. Health effects of restricting sleep
  3. Productivity of remote vs. in-pers
... (read more)
6mnoetel4mo
Yeah these are interesting questions Eli. I've worked on a few big RCTs and they're really hard and expensive to do. It's also really hard to adequately power experiments for small effect sizes in noisy environments (e.g., productivity of remote/in-person work). Your suggestions to massively scale up those interventions and to do things online would make things easier. As Ozzie mentioned, the health ones require such long and slow feedback loops that I think they might not be better than well (statistically) controlled alternatives.

I used to think RCTs were the only way to get definitive causal data. The problem is, because of biases that can be almost impossible to eliminate (https://sites.google.com/site/riskofbiastool/welcome/rob-2-0-tool), RCTs are seldom perfect causal data. Conversely, with good adjustment for confounding, observational data can provide very strong causal evidence (think smoking; I recommend my PhD students do this course for this reason: https://www.coursera.org/learn/crash-course-in-causality).

For the ones with fast feedback loops, I think some combination of "priors + best available evidence + lightweight tests in my own life" works pretty well to see if I should adopt something.

At a meta-level, in an ideal world, the NSF and NIH (and global equivalents) are probably designed to fund people to address questions that are most important and with the highest potential. There are probably dietetics/sleep/organisational psychology experts who have dedicated their careers to questions #1-4 above, and you'd hope that those people are getting funded if those questions are indeed critical to answer. In reality, science funding probably does not get distributed based on criteria that maximises impartial welfare, so maybe that's why #1-4 would get missed. As mentioned in a recent forum post, I think the mega-org could be be
1elifland4mo
A variant I'd also be excited about (could imagine even more so; could go either way after more reflection) that could be contained within the same org or separate: the same thing but for companies (particularly startups). Edit to clarify: test policies/strategies across companies, not on people within companies.
3Ozzie Gooen4mo
I think the obvious answer is that doing controlled trials in these areas is a whole lot of work/expense for the benefit.

Some things like health effects can take a long time to play out; maybe 10-50 years. And I wouldn't expect the difference to be particularly amazing. (I'd be surprised if the average person could increase their productivity by more than ~20% with any of those)

On "challenge trials": I imagine the big question is how difficult it would be to convince people to accept a very different lifestyle for a long time. I'm not sure if it's called "challenge trial" in this case.
1elifland4mo
Votes/considerations on why this is a good or bad idea are also appreciated!
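To put rough numbers on the powering problem mnoetel mentions above, here is a sketch using Lehr's approximation (n per arm ≈ 16/d² for 80% power at a two-sided α of 0.05, two-sample comparison); the effect sizes are illustrative:

```python
# Rough sketch of why small effects in noisy settings make RCTs hard,
# using Lehr's approximation for a two-sample comparison:
# n per arm ~= 16 / d^2 for 80% power at alpha = 0.05 (two-sided).

for d in (0.8, 0.5, 0.2, 0.1):
    n_per_arm = 16 / d ** 2
    print(f"d = {d}: ~{n_per_arm:.0f} participants per arm")

# Small effects (d ~ 0.1-0.2) need hundreds to thousands of participants
# per arm, which is much of what makes these trials expensive.
```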
Prediction Markets in The Corporate Setting

I really appreciate that you break down explanatory factors in the way you do.

 

I'm happy that this was useful for you!

I have a hard time making a mental model of their relative importance compared to each other. Do you think that such an exercise is feasible, and if so, do any of you have a conception of the relative explanatory strength of any factor when considered against the others?

Good question. We also had some trouble with this, as it's difficult to observe the reasons many corporate prediction markets have failed to catch on. That being said, ... (read more)

4Paal Fredrik Skjørten Kvarberg4mo
Thank you for this. This is all very helpful, and I think your explanations of giving differential weights to factors for average orgs and EA orgs seems very sensible. The 25% for unknown unknowns is probably right too. It doesn't seem unlikely to me that most folks at average orgs would fail to understand the value of prediction markets even if they turned out to be valuable (since it would require work to prove it). It would really surprise me if the 'main reason' why there is a lack of prediction markets had nothing to do with anything mentioned in the post. I think all unknown unknowns might conjunctly explain 25% of why prediction markets aren't adopted, but the chance of any single unknown factor being the primary reason is, I think, quite slim.
elifland's Shortform

Appreciate the compliment. I am interested in making it a Forum post, but might want to do some more editing/cleanup or writing over the next few weeks/months (it got more interest than I was expecting so seems more likely to be worth it now). Might also post as is, will think about it more soon.

elifland's Shortform

Hi Lizka, thanks for your feedback; I think it touched on some of the sections that I'm most unsure about / could most use some revision, which is great!

  1. [Bottlenecks] You suggest "Organizations and individuals (stakeholders) making important decisions are willing to use crowd forecasting to help inform decision making" as a crucial step in the "story" of crowd forecasting’s success (the "pathway to impact"?) --- this seems very true to me. But then you write "I doubt this is the main bottleneck right now but it may be in the future" (and don't really
... (read more)
3MichaelA9mo
Fwiw, I expect to very often see forecasts as an input into important decisions, but also usually see them as a somewhat/very crappy input. I just also think that, for many questions that are key to my decisions or to the decisions of stakeholders I seek to influence, most or all of the available inputs are (by themselves) somewhat/very crappy, and so often the best I can do is:

  1. try to gather up a bunch of disparate crappy inputs with different weaknesses
  2. try to figure out how much weight to give each
  3. see how much that converges on a single coherent picture and if so what picture

(See also consilience [https://en.wikipedia.org/wiki/Consilience].)

(I really appreciated your draft outline and left a bunch of comments there. Just jumping in here with one small point.)
elifland's Shortform

I wrote a draft outline on bottlenecks to more impactful crowd forecasting that I decided to share in its current form rather than clean up into a post [edited to add: I ended up revising into a post here].

Link

Summary:

  1. I have some intuition that crowd forecasting could be a useful tool for important decisions like cause prioritization but feel uncertain
  2. I’m not aware of many example success stories of crowd forecasts impacting important decisions, so I define a simple framework for how crowd forecasts could be impactful:
    1. Organizations and individuals (stakeho
... (read more)
2Aaron Gertler9mo
I liked this document quite a bit, and I think it would be a reasonable Forum post even without further cleanup — you could basically copy over this Shortform, minus the bit about not cleaning it up. This lets the post be tagged, be visible to more people, etc. (Though I understand if you'd rather leave it in a less-trafficked area.)

I really enjoyed your outline, thank you! I have a few questions/notes: 

  1. [Bottlenecks] You suggest "Organizations and individuals (stakeholders) making important decisions are willing to use crowd forecasting to help inform decision making" as a crucial step in the "story" of crowd forecasting’s success (the "pathway to impact"?) --- this seems very true to me. But then you write "I doubt this is the main bottleneck right now but it may be in the future" (and don't really return to this). 
    1.  Could you explain your reasoning here? My intuit
... (read more)
Towards a Weaker Longtermism

A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future.

 

I feel that EA shouldn't spend all or nearly all of its resources on the far future, but I'm uncomfortable with incorporating a moral discount rate for future humans as part of "regular longtermism" since it's very intuitive to me that future lives should matter the same amount as present ones.

I prefer objections from the epistemic c... (read more)

3evelynciara10mo
Yeah. I have this idea that the EA movement should start with short-term interventions and work our way to interventions that operate over longer and longer timescales, as we get more comfortable understanding their long-term effects.
Incentivizing forecasting via social media

Overall I like this idea, appreciate the expansiveness of the considerations discussed in the post, and would be excited to hear takes from people working at social media companies.

Thoughts on the post directly

Broadly, we envision i) automatically suggesting questions of likely interest to the user—e.g., questions related to the user’s current post or trending topics—and ii) rewarding users with higher than average forecasting accuracy with increased visibility

I think some version of boosting visibility based on forecasting accuracy seems promisi... (read more)

2David_Althaus1y
Thanks, great points! Yeah, me too. For what it's worth, Forecast mentions our post here [https://twitter.com/ForecastByNPE/status/1339261655113297925].

Yeah, as we discuss in this section [https://forum.effectivealtruism.org/posts/842uRXWoS76wxYG9C/incentivizing-forecasting-via-social-media#Why_focus_on_forecasting_and_not_on_other_factors_], forecasting accuracy is surely not the most important thing. If it were up to me, I'd focus on spreading (sophisticated) content on, say, effective altruism, AI safety, and so on. Of course, most people would never agree with this. In contrast, forecasting is perhaps something almost everyone can get behind and is also objectively measurable.

I agree that the concerns you list under (b) need to be addressed.
Incentivizing forecasting via social media

The forecasting accuracy of Forecast’s users was also fairly good: “Forecast's midpoint brier score [...] across all closed Forecasts over the past few months is 0.204, compared to Good Judgement's published result of 0.227 for prediction markets.”

For what it's worth, as noted in Nuño's comment this comparison holds little weight when the questions aren't the same or on the same time scales; I'd take it as fairly weak evidence from my prior that real-money prediction markets are much more accurate.

... (read more)
2David_Althaus1y
Right, definitely, I forgot to add this. I wasn't trying to say that Forecast is more accurate than real-money prediction markets (or other forecasting platforms for that matter) but rather that Forecast's forecasting accuracy is at least clearly above the this-is-silly level.
Delegate a forecast

My forecast is pretty heavily based on the GoodJudgment article How to Become a Superforecaster. According to it, they identify Superforecasters each autumn and require forecasters to have made 100 forecasts (I assume 100 resolved), so now might actually be the worst time to start forecasting. It looks like if you started predicting now the 100th question wouldn't close until the end of 2020. Therefore it seems very unlikely you'd be able to become a Superforecaster in this autumn's batch.

[Note: alexrjl clarified over PM that I should treat t... (read more)

Delegate a forecast

Here's my forecast. The past is the best predictor of the future, so I looked at past monthly data as the base rate.

I first tried to tease out whether there was a correlation in which months had more activity between 2020 and 2019. It seemed there was a weak negative correlation, so I figured my base rate should be just based on the past few months of data.

In addition to the past few months of data, I considered that part of the catalyst for record-setting July activity might be Aaron's "Why you should post on the EA Forum" EAGx talk. Du... (read more)
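For readers curious what this procedure looks like mechanically, here is a sketch with made-up activity numbers standing in for the real monthly data (statistics.correlation requires Python 3.10+):

```python
# Sketch of the procedure described above: check whether month-to-month
# patterns repeat across years; if the correlation is weak, lean on the
# most recent months as the base rate. All numbers are hypothetical.

import statistics

activity_2019 = [310, 295, 330, 320, 305, 340, 350]  # hypothetical
activity_2020 = [400, 390, 420, 380, 410, 430, 470]  # hypothetical

corr = statistics.correlation(activity_2019, activity_2020)

# Weak year-over-year correlation -> base rate from the last few months:
base_rate = statistics.mean(activity_2020[-3:])
print(f"year-over-year correlation: {corr:.2f}, base rate: {base_rate:.0f}")
```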

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

I've recently gotten into forecasting and have also been a strategy game addict enthusiast at several points in my life. I'm curious about your thoughts on the links between the two:

  • How correlated is skill at forecasting and strategy games?
  • Does playing strategy games make you better at forecasting?
3Linch2y
I’m not very good at strategy games, so hopefully not much! The less quippy answer is that strategy games are probably good training grounds for deliberate practice and quick optimization loops, so that likely counts for something (see my answer to Nuno about games [https://forum.effectivealtruism.org/posts/83rHdGWy52AJpqtZw/i-m-linch-zhang-an-amateur-covid-19-forecaster-and?commentId=nqoZCk2jsJ7Ek5Epi#comments] ). There are also more prosaic channels, like general cognitive ability and willingness to spend time in front of a computer. I’m guessing that knowing how to do deliberate practice and getting good at a specific type of optimization is somewhat generalizable, and it's good to do that in something you like (though getting good at things you dislike is also plausibly quite useful). I think specific training usually trumps general training, so I very much doubt playing strategy games is the most efficient way to get better at forecasting, unless maybe you’re trying to forecast results of strategy games [https://twitter.com/ptetlock/status/1117163957096189963].
Problem areas beyond 80,000 Hours' current priorities

A relevant Metaculus question about whether the impact of the Effective Altruism movement will still be picked up by Google Trends in 2030 (specifically, whether it will have at least .2 times the total interest from 2017) has a community prediction of 70%.

7Stefan_Schubert2y
Yes, though it's possible that some or all of the ideas and values of effective altruism could live on under other names or in other forms even if the name "effective altruism" ceased to be used much.
elifland's Shortform

The efforts by https://1daysooner.org/ to use human challenge trials to speed up vaccine development make me think about the potential of advocacy for "human challenge" type experiments in other domains where consequentialists might conclude there hasn't been enough "ethically questionable" randomized experimentation on humans. 2 examples come to mind:

My impression of the nutrition field is that it's very hard to get causal evidence because people won't change their diet at random for an experiment.

Why We Sleep has been ... (read more)

2Khorton2y
Challenge trials face resistance for very valid historical reasons - this podcast has a good summary: https://80000hours.org/podcast/episodes/marc-lipsitch-winning-or-losing-against-covid19-and-epidemiology/
How should longtermists think about eating meat?

I think we have good reason to believe veg*ns will underestimate the cost of not eating meat for others due to selection effects: people for whom it's easier are more likely to both go veg*n and stick with it. Veg*ns generally underestimating the cost and non-veg*ns generally overestimating the cost can both be true.

The cost has been low for me, but the cost varies significantly based on factors such as culture, age, and food preferences. I think that in the vast majority of cases the benefits will still outweigh the costs and most would agree with a n... (read more)

Why not give 90%?
If I was donating 90% every year, I think my probability of giving up permanently would be even higher than 50% each year. If I had zero time and money left to enjoy myself, my future self would almost certainly get demotivated and give up on this whole thing. Maybe I’d come back and donate a bit less but, for simplicity, let’s just assume that if Agape gives up, she stays given up.

The assumption that if she gives up, she most likely gives up on donating completely is not obvious to me. I would think that it's more likely she s... (read more)

3HaydenW2y
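To illustrate the trade-off being discussed here, a toy model (all numbers illustrative) of expected lifetime donations when a higher donation rate comes with a higher annual chance of giving up permanently:

```python
# Toy model of the trade-off discussed above: donating a higher fraction
# raises the annual amount but (per the post's assumption) also the chance
# of burning out and stopping for good. All numbers are illustrative.

def expected_lifetime_donations(annual_amount, p_quit, years=40):
    total = 0.0
    p_still_going = 1.0
    for _ in range(years):
        total += p_still_going * annual_amount
        p_still_going *= 1 - p_quit  # survive another year without quitting
    return total

print(expected_lifetime_donations(annual_amount=10, p_quit=0.01))  # ~331
print(expected_lifetime_donations(annual_amount=90, p_quit=0.50))  # ~180
```

Under these made-up numbers, the modest-but-sustainable pledge dominates the ambitious-but-fragile one over a career, which is the shape of the post's argument.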
Yep, I agree that that's probably more likely. I focused on giving up completely to keep things simple. But if it's even somewhat likely (say, 1% p.a.), that may make a far bigger dent in your expected lifelong donations than do risks of giving up partially. That certainly sounds sensible to me!