All of MichaelStJules's Comments + Replies

Clarifying the Petrov Day Exercise

Thanks for explaining.

I agree that the standards can be too high, especially when participants are both fully informed and give consent (e.g. COVID vaccine trials). I think in this case, participants were not properly informed of the potential (community, social, and career) risks ahead of time, nor was it ensured that they understood before participating.

When I wrote my comment, I actually had in mind the Stanford prison experiment, the Milgram experiment and Vsauce convincing people they were in a real trolley problem (this one had debriefing but not informed consen... (read more)

2Linch17hI agree that the Stanford Prison Experiment and actually convincing people they are in a trolley problem are not reasonable scientific ethics and can reasonably be expected to traumatize people. I think both Milgram and the SPE additionally have significant validity [https://en.wikipedia.org/wiki/Milgram_experiment#Validity] issues [https://www.newyorker.com/science/maria-konnikova/the-real-lesson-of-the-stanford-prison-experiment] which call into question their scientific usefulness.
Clarifying the Petrov Day Exercise

I think it would read something closer to "We won! Everyone who opted in decided to cooperate!"

Clarifying the Petrov Day Exercise

Based on how others have been warning you, it feels like the kind of psychological/social experiment you would need to have a psychological debriefing after to get ethics approval, and even then, it still might not get approval.

(I downvoted this comment because I think the degree of ethics approvals needed for certain classes of science experiments is immorally high under some reasonable assumptions, and the EAF should not endorse arguments coming out of status quo bias. It's also reasonably possible that I would not have downvoted if Michael wasn't a coworker)

7Khorton2dYes, I agree
Clarifying the Petrov Day Exercise

Thanks for writing this, and I agree with your take that it's toxic when people find out after starting to engage that they may face serious consequences for not taking it seriously enough (and indeed whether or not they actually would, since it's still unsettling to believe it). I'm sorry that this has been your experience.

Clarifying the Petrov Day Exercise

I agree with you, although someone might still opt in while treating it like a game and not initially taking it as seriously as others in the community do, and then take the site down. Last year, a user was manipulated into taking down LW by someone claiming the user had to enter their code to save LW.

Clarifying the Petrov Day Exercise

I thought it was explicit in the announcement post that we should take this seriously, but not in the e-mail I got:

If LessWrong chose any launch code recipients they couldn't trust, the EA Forum will go down, and vice versa. One of the sites going down means that people are blocked from accessing important resources: the destruction of significant real value. What's more, it will damage trust between the two sites ("I guess your most trusted users couldn't be trusted to not take down our site") and also for each site itself ("I guess the admins couldn't fi

... (read more)
Honoring Petrov Day on the EA Forum: 2021

I don't think anything happens unless you enter the code, too.

Honoring Petrov Day on the EA Forum: 2021

They should have left it up longer if they wanted to test us with it, since it was gone when I reloaded the pages and the timer was never updated while it was up, even though each side would have an hour to retaliate (or maybe it was supposed to give the impression that the hour was already over and it was too late).

Honoring Petrov Day on the EA Forum: 2021

How could we be convinced that the donations were counterfactual?

Also, do you mean you're (considering) taking bribes (to EA charities) to push the button?

4Nathan Young2dI think I'd ask for the community here to agree first, but if someone suggested an amount that got half the upvotes of the total of this page I'd probably push it. That seems like the ethical choice.
Honoring Petrov Day on the EA Forum: 2021

Since the timer wasn't updating on either site, I assume they weren't testing us (yet).

Honoring Petrov Day on the EA Forum: 2021

I briefly saw a "Missile Incoming" message with a 60:00 timer (that wasn't updating) on the buttons on the front pages of both LW and the EA Forum, at around 12pm EST, on mobile. Both messages were gone when I refreshed. Was this a bug or were they testing the functionality, testing us or preparing to test us?

3Jsevillamol2dI think it was an intentional false alarm, to better simulate Petrov's situation
2MichaelStJules2dSince the timer wasn't updating on either site, I assume they weren't testing us (yet).
Why I am probably not a longtermist

is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence?

I think this would be pretty much only antinatalists who hold stronger forms of the asymmetry, and this kind of antinatalism (and indeed all antinatalism) is relatively rare, so I'd guess not.

Honoring Petrov Day on the EA Forum: 2021

I'm also interested in people's predictions had the codes been anonymous (not been personalized). In this case, individual reputational risk would be low, so it would mostly be a matter of community reputational risk, and we'd learn more about whether EAs or LWers would stab each other in the back (well, inconvenience each other) if they could get away with it.

2Linch3dI mean, having a website shut down is also annoying.
Why I am probably not a longtermist

That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?

Under the asymmetry, any life is at most as valuable as nonexistence, and depending on the particular view of the asymmetry, may be as good only when faced with particular sets of options.

  1. If you can bring a good life into existence or none, it is at least permissible to choose none, and under basically any asymmetry tha
... (read more)
3Mauricio3dThanks! I can see that for people who accept (relatively strong versions of) the asymmetry. But (I think) we're talking about what a wide range of ethical views say--is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence? (I imagine, if they thought it was that bad, they wouldn't call it "the good life"?)
Why I am probably not a longtermist

Curious why you think this first part? Seems plausible but not obvious to me.

I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). I wouldn't be surprised if it were fairly common for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only matterin... (read more)

3Mauricio3dFair points. Your first paragraph seems like a good reason for me to take back the example of freedom/autonomy, although I think the other examples remain relevant, at least for nontrivial minority views. (I imagine, for example, that many people wouldn't be too concerned about adding more people to a loving future, but they would be sad about a future having no love at all, e.g. due to extinction.) (Maybe there's some asymmetry in people's views toward autonomy? I share your intuition that most people would see it as silly to create people so they can have autonomy. But I also imagine that many people would see extinction as a bad affront to the autonomy that future people otherwise would have had, since extinction would be choosing for them that their lives aren't worthwhile.) This seems like more than enough to support the claim that a wide variety of groups disvalue extinction, on (some) reflection. I think you're generally right that a significant fraction of non-utilitarian views wouldn't be extremely concerned by extinction, especially under pessimistic empirical assumptions about the future. (I'd be more hesitant to say that many would see it as an actively good thing, at least since many common views seem like they'd strongly disapprove of the harm that would be involved in many plausible extinction scenarios.) So I'd weaken my original claim to something like: a significant fraction of non-utilitarian views would see extinction as very bad, especially under somewhat optimistic assumptions about the future (much weaker assumptions than e.g. "humanity is inherently super awesome").
Is working on AI safety as dangerous as ignoring it?

If some technical AI safety work accelerates AI, we could miss opportunities for AI safety governance/policy work as a result. OTOH, AI safety governance/policy work, if not done carefully, could give an edge to those unconcerned with safety by impeding everyone else, and that could be bad.

Why I am probably not a longtermist

In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled: freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.

I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others, and you should also consider the other side: in an empty future, everyone has full freedom/autonomy and gets everything they wan... (read more)

3Mauricio4dThanks! Curious why you think this first part? Seems plausible but not obvious to me. I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So non-empty flourishing futures remain the only futures in which there is flourishing without misery.) Yup, agreed that empty futures are better than some alternatives under many value systems. My claim is just that many value systems leave substantial room for the world to be better than empty. Yeah, agreed that something probably won't get astronomical weight if we're doing (non-fanatical forms of) moral pluralism. The paper you cite seems to suggest that, although people initially see the badness of extinction as primarily the deaths, that's less true when they reflect:
Why I am probably not a longtermist

For similar moral views (asymmetric, but not negative utilitarian), this paper might be of interest:

Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term" (also on the EA Forum). See especially section 6 (maybe after watching the talk, instead of reading the paper, since the paper gets pretty technical).

Why I am probably not a longtermist

(My views are suffering-focused and I'm not committed to longtermism, although I'm exploring s-risks slowly, mostly passively.)

I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.

Do you mean you expect all of our descendants to be wiped out, with none left? What range would you give for your probability of extinction (or unrecoverable collapse) each year?

If we colonize space and continue to expand (whi... (read more)

Why I am probably not a longtermist

To be clear, by "x-risk" here, you mean extinction risks specifically, and not existential risks generally (which is what "x-risk" was coined to refer to, from my understanding)? There are existential risks that don't involve extinction, and some s-risks (or all, depending on how we define s-risk) are existential risks because of the expected scale of their suffering.

1AppliedDivinityStudies4dAh, yes, extinction risk, thanks for clarifying.
Invertebrate pain and suffering: What do analgesic studies tell us?

Do analgesics also reduce reflexive responses to noxious stimuli in humans? If so, this might be an argument against treating the mere reduction of responses to noxious stimuli as good evidence for conscious pain (effects on learning strengthen the argument somewhat, though). We'd want something that selectively targets the (consciously experienced) negative affect of pain in humans, but as far as I know, reflexive responses may be possible without negative affect (in humans and nonhumans).

2Timothy Chan5dThat's an excellent point. If analgesics also reduce reflex responses towards noxious stimuli, then in some cases analgesics could be diminishing nociceptive responses while not inhibiting conscious (reportable) pain. I don't know much about how analgesics affect nociceptive reflexive responses in humans. According to the abstract of this study [https://pubmed.ncbi.nlm.nih.gov/7743193/] on non-human primates (haven't looked into the study in detail), "depending on the dose, nociceptive reflexes [are] facilitated or inhibited" by morphine. So this possibility might prevent us from updating too much to "analgesics are preventing pain when they inhibit nociception" to the extent that the analgesics are inhibiting reflexive nociceptive responses. One way this might not be an issue is if someone thinks consciousness is " smeared spatially and temporally [https://reducing-suffering.org/consciousness-is-a-process-not-a-moment/]" or if they think nested minds [https://reducing-suffering.org/fuzzy-nested-minds-problematize-utilitarian-aggregation/] are possible. For them, through analogies in function, they might think the reflexive responses themselves could be in pain. But then again, there are probably fewer people who think like this than people who think invertebrates feel pain.
1Jpmos5dA related thought: Some humans are much less sensitive to physical pain. 1. Could an observer correctly differentiate between those with normal and abnormally low sensitivity to pain? 2. For humans who're relatively insensitive to pain, but still exhibit the appropriate response to harm signals (assuming they exist), would analgesics diminish the "appropriateness" of their response to a harm signal?
Can you control the past?

I think we should be able to find lots of examples in the real world like Smoking Lesion, and I think CDT looks better than EDT in more typical choice scenarios because of it. The ones where CDT goes wrong and EDT is right (as discussed in your post) seem pretty atypical to me, although they could still matter a lot. I think both theories are probably wrong.

What matters in Smoking Lesion are:

  • Two variables, an action and an outcome, which are correlated but with no causal path from the action to the outcome (they share a common cause).
    • In Smoking Lesion, the action is smoking and the outcome is cancer, with the lesion as the common cause.
  • The action variable is what you're
... (read more)
It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

Ya, that's fair. If this is the case, I might say that the biological neurons don't have additional useful degrees of freedom for the same number of inputs. The paper didn't explicitly test for this either way, although, imo, what they did test is weak Bayesian evidence for biological neurons having more useful degrees of freedom: if they could be simulated with few artificial neurons, we could pretty much rule out that hypothesis. Maybe this evidence is too weak to update much on, though, especially if you had a prior that simulating biological neurons would be pretty hard even if they had no additional useful degrees of freedom.
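To make the "weak Bayesian evidence" point concrete, here's a minimal sketch with made-up numbers (the prior and likelihoods are purely illustrative assumptions, not estimates from the paper):

```python
# Hypothesis H: biological neurons have extra useful degrees of freedom.
# Evidence E: many artificial neurons were needed to simulate one biological neuron.

prior_H = 0.5          # hypothetical prior on H
p_E_given_H = 0.9      # E is very likely if H is true
p_E_given_not_H = 0.6  # E is still fairly likely even if H is false
                       # (simulation could be hard for other reasons),
                       # which is what makes the evidence weak

# Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
posterior_H = (p_E_given_H * prior_H) / (
    p_E_given_H * prior_H + p_E_given_not_H * (1 - prior_H)
)
print(round(posterior_H, 2))  # 0.6: only a modest update from 0.5

# Had the paper instead found that one artificial neuron suffices (~E),
# the update against H would have been much stronger:
posterior_H_given_not_E = ((1 - p_E_given_H) * prior_H) / (
    (1 - p_E_given_H) * prior_H + (1 - p_E_given_not_H) * (1 - prior_H)
)
print(round(posterior_H_given_not_E, 2))  # 0.2
```

The asymmetry is the point: the actual finding only nudges the hypothesis up a bit, whereas the opposite finding would have pushed it down a lot.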

2kokotajlod11dNow I think we are on the same page. Nice! I agree that this is weak bayesian evidence for the reason you mention; if the experiment had discovered that one artificial neuron could adequately simulate one biological neuron, that would basically put an upper bound on things for purposes of the bio anchors framework (cutting off approximately the top half of Ajeya's distribution over required size of artificial neural net). Instead they found that you need thousands. But (I would say) this is only weak evidence because prior to hearing about this experiment I would have predicted that it would be difficult to accurately simulate a neuron, just as it's difficult to accurately simulate a falling leaf. Pretty much everything that happens in biology is complicated and hard to simulate.
Cultured meat predictions were overly optimistic

Woops, ya, I got my dates mixed up for COVID and JUST.

However, that would presumably also be true for whatever other tools or sources we might alternatively rely on for cultured meat timelines and so I don't think it changes the overall conclusion on how much stock to put into the types of predictions/predictors represented in this dataset.

I'm not sure what you mean by this. My point is that COVID might have made some of these predictions false, when they would have otherwise ended up true without COVID, so these groups just got very unlucky, and we should... (read more)

The motivated reasoning critique of effective altruism

Just defer to Mike Huemer. He gets from common sense morality to veganism and anarcho-capitalism. :P

Cultured meat predictions were overly optimistic

EDIT: Woops, got my COVID dates mixed up; I was thinking March 2020.

March 2019 "JUST, the San Francisco-based company racing to be the first to bring cell-based meat to market, announced in a CBS San Francisco interview last month that they would debut their first product — a cultured chicken nugget — in Asia sometime this year"

I think it's reasonably likely this was delayed by COVID-19, given they made this prediction when it wasn't clear how bad things would be, they debuted in a restaurant in Singapore at the end of 2020, and restaurants where they were... (read more)

I don't think it's reasonably likely this particular prediction was delayed by COVID-19, given they made this prediction in early 2019 about a product being on offer *in 2019*. I don't think there is much to suggest any impediments to a product roll-out in 2019 from the pandemic, since it only started having major impacts/reactions in 2020.

For other predictions in this dataset made by companies, research institutes, and reported in the media it seems likely the pandemic threw up an unexpected obstacle and delay. However, that would presuma... (read more)

Cultured meat predictions were overly optimistic

This could be in part because GFI got more financial support from the EA community, both from Open Phil and due to ACE.

  • 2012: ACE was founded.
  • 2014: ACE did an exploratory review of New Harvest.
  • 2015: Lewis Bollard joined Open Phil in September to start its grantmaking in animal welfare. New Harvest was named a standout charity by ACE at the end of 2015.
  • 2016: GFI was founded. Open Phil made its first animal welfare grants. GFI received its first grant from Open Phil, of $1M. GFI became an ACE top charity at the end of the year.
  • 2017: Open Phil made another gran
... (read more)
It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

Ya, this is what I'm thinking, although "have to" is also a matter of scaling, e.g. a larger brain could accomplish the same with less powerful neurons. There's also probably a lot of waste in the human brain, even just among the structures most important for reasoning (although the same could end up being true for an AGI/TAI we try to build; we might need a lot of waste before we can prune or make smaller student networks, etc.).

On falling leaves, the authors were just simulating the input and output behaviour of the neurons, not the physics/chemistry/biolog... (read more)

6kokotajlod11dWhat I meant by the falling leaf thing: If we wanted to accurately simulate where a leaf would land when dropped from a certain height and angle, it would require a ton of complex computation. But (one can imagine) it's not necessary for us to do this; for any practical purpose we can just simplify it to a random distribution centered directly below the leaf with variance v. Similarly (perhaps) if we want to accurately simulate the input-output behavior of a neuron, maybe we need 8 layers of artificial neurons. But maybe in practice if we just simplified it to "It sums up the strength of all the neurons that fired at it in the last period, and then fires with probability p, where p is an s-curve function of the strength sum..." maybe that would work fine for practical purposes -- NOT for purpose of accurately reproducing the human brain's behavior, but for purposes of building an approximately brain-sized artificial neural net that is able to learn and excel at the same tasks. My original point no. 1 was basically that I don't see how the experiment conducted in this paper is much evidence against the "simplified model would work fine for practical purposes" hypothesis.
The motivated reasoning critique of effective altruism

Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it's nearly as likely to be net-negative as net-positive given our great uncertainty

Is this for both technical AI work and AI governance work? For both, what are the main ways these interventions are likely to backfire?

The motivated reasoning critique of effective altruism

I guess no one is really publishing these CEAs, then?

Do you also have CEAs of the meta work you fund, in terms of AI risk reduction/increase?

It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

I would say a biological neuron can compute more complex functions, or a wider variety of functions, of its inputs than standard artificial neurons in deep learning (a linear combination of inputs followed by a nonlinear real-valued function of one argument), and you could approximate functions of interest with fewer biological neurons than artificial ones. Maybe biological neurons have more (useable) degrees of freedom for the same number of input connections.
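As a rough sketch of the contrast (a standard artificial neuron vs. approximating a richer per-neuron input-output function with a small multi-layer network of such neurons; the layer sizes and nonlinearity here are illustrative assumptions, not the paper's exact architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def artificial_neuron(x, w, b):
    """Standard deep-learning neuron: a linear combination of the inputs,
    followed by a one-argument nonlinearity (here ReLU)."""
    return np.maximum(0.0, w @ x + b)

def tiny_mlp(x, params):
    """A small multi-layer stack of such neurons, of the kind one would use
    to approximate a single biological neuron's input-output behaviour."""
    h = x
    for W, b in params[:-1]:
        h = np.maximum(0.0, W @ h + b)
    W, b = params[-1]
    return W @ h + b

# Illustrative sizes: 100 synaptic inputs, a few hidden layers, one output.
n_inputs = 100
layer_sizes = [n_inputs, 128, 128, 128, 1]
params = [
    (rng.normal(size=(m, n)) / np.sqrt(n), np.zeros(m))
    for n, m in zip(layer_sizes[:-1], layer_sizes[1:])
]

x = rng.normal(size=n_inputs)
print(artificial_neuron(x, rng.normal(size=n_inputs), 0.0))  # one scalar from one unit
print(tiny_mlp(x, params))  # one scalar from hundreds of units
```

The contrast is the claim in miniature: the single biological neuron's behaviour sits on the right-hand side, needing many standard units to match.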

4kokotajlod12dI think I get it, thanks! (What follows is my understanding, please correct if wrong!) The idea is something like: A falling leaf is not a computer, it can't be repurposed to perform many different useful computations. But a neuron is; depending on the weights of its synapses it can be an and gate, an or gate, or various more complicated things. And this paper in the OP is evidence that the range of more complicated useful computations it can do is quite large, which is reason to think that in maybe in the relevant sense a lot of the brain's skills have to involve fancy calculations within neurons. (Just because they do doesn't mean they have to, but if neurons are general-purpose computers capable of doing lots of computations, that seems like evidence compared to if neurons were more like falling leaves) I still haven't read the paper -- does the experiment distinguish between the "it's a tiny computer" hypothesis vs. the "it's like a falling leaf -- hard to simulate, but not in an interesting way" hypothesis?
The motivated reasoning critique of effective altruism

You could move me by building an explicit quantitative model for a popular question of interest in longtermism that (a) didn't previously have models (so e.g. patient philanthropy or AI racing doesn't count), (b) has an upshot that we didn't previously know via verbal arguments, (c) doesn't involve subjective personal guesses or averages thereof for important parameters, and (d) I couldn't immediately tear a ton of holes in that would call the upshot into question.

 

I feel that (b) identifying a new upshot shouldn't be necessary; I think it should be e... (read more)

2rohinmshah13dYeah, I agree that would also count (and as you might expect I also agree that it seems quite hard to do). Basically with (b) I want to get at "the model does something above and beyond what we already had with verbal arguments"; if it substantially affects the beliefs of people most familiar with the field that seems like it meets that criterion.
The motivated reasoning critique of effective altruism

Fair. I should revise my claim to being about the likelihood of a catastrophe and the risk reduction from working on these problems (especially or only in AI; I haven't looked as much at what's going on in other x-risks work). AI Impacts looks like they were focused on timelines.

It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

On point 1, my claim is that the paper is evidence for the claim that biological neurons are more computationally powerful than artificial ones, not that we'd achieve AGI/TAI by simulating biological brains. I agree that for those who already expected this, this paper wouldn't be much of an update (well, maybe the actual numbers matter; 1000x seemed pretty high, but is also probably an overestimate).

I also didn't claim that the timelines based on biological anchors that I linked to would actually be affected by this (since I didn't know either way whether ... (read more)

2kokotajlod13dWhat does it mean to say a biological neuron is more computationally powerful than an artificial one? If all it means is that it takes more computation to fully simulate its behavior, then by that standard a leaf falling from a tree is more computationally powerful than my laptop. (This is a genuine question, not a rhetorical one. I do have some sense of what you are saying but it's fuzzy in my head and I'm wondering if you have a more precise definition that isn't just "computation required to simulate." I suspect that the Carlsmith report I linked may have already answered this question and I forgot what it said.)
The motivated reasoning critique of effective altruism

Hmm, I guess I hadn't read that post in full detail (or I did and forgot about the details), even though I was aware of it. I think the argument there that mortality will roughly match some time after transition is pretty solid (based on two datasets and expert opinion). I think there was still a question of whether or not the "short-term" increase in mortality outweighs the reduction in behavioural deprivation, especially since it wasn't clear how long the transition period would be. This is a weaker claim than my original one, though, so I'll retract my ... (read more)

The motivated reasoning critique of effective altruism

Some other concerns that seem to me to be consistent with motivated reasoning in animal welfare have been:

  1. Our treatment of diet change effects (including from alternative proteins) on wild animals, especially wild aquatic animals, but also generally through land use change. Mostly this has been to ignore these effects, or with wild aquatic animals, sometimes count the direct short-run deaths averted, but ignore additional deaths (including from future fishing!) from larger populations than otherwise due to reduced fishing pressure, and effects on non-targe
... (read more)
The motivated reasoning critique of effective altruism

I'm not defending what you think is a bailey, but as a practical matter, I would say until recently (with Open Phil publishing a few models for AI), longtermists have not been using numbers or models much, or when they do, some of the most important parameters are extremely subjective personal guesses or averages of people's guesses, not based on reference classes, and risks of backfire were not included.

9NunoSempere14dThis seems to me to not be the case. For a very specific counterexample, AI Impacts [https://aiimpacts.org] has existed since 2015.
2rohinmshah14dReplied to Linch -- TL;DR: I agree this is true compared to global poverty or animal welfare, and I would defend this as simply the correct way to respond to actual differences in the questions asked in longtermism vs. those asked in global poverty or animal welfare. You could move me by building an explicit quantitative model for a popular question of interest in longtermism that (a) didn't previously have models (so e.g. patient philanthropy or AI racing doesn't count), (b) has an upshot that we didn't previously know via verbal arguments, (c) doesn't involve subjective personal guesses or averages thereof for important parameters, and (d) I couldn't immediately tear a ton of holes in that would call the upshot into question.
It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

Here's some supporting evidence for it being hard to map:

In 2016, the Intelligence Advanced Research Projects Activity of the United States government launched MICrONS, a five-year, multi-institute project to map one cubic millimeter of rodent visual cortex, as part of the BRAIN Initiative.[33][34] Though only a small volume of biological tissue, this project will yield one of the largest micro-scale connectomics datasets currently in existence.

A mouse brain is about 500x that.

 

On the other hand, progress with OpenWorm has been kind of slow, despi... (read more)

3Will Aldred13dif interested, here's some further evidence that it's just really hard to map: Learning from connectomics on the fly - ScienceDirect [https://www.sciencedirect.com/science/article/abs/pii/S2214574517301578]
It takes 5 layers and 1000 artificial neurons to simulate a single biological neuron [Link]

Thanks, these are both excellent points. I did hint to the first one, and I specifically came back to this post to mention the second, but you beat me to it. ;)

I've edited my post.

EDIT: Also edited again to emphasize the weaknesses.

Can you control the past?

That is, suppose that before you read Wikipedia, you were 50% on the Egyptians were at 0 welfare, and 50% they were at 10 welfare, so 5 in expectation, but reading is 0 EV. After reading, you find out that their welfare was 10. OK, should we count this action, in retrospect, as worth 5 welfare for the Egyptians? I'd say no, because the ex post evaluation should go: "Granted that the Egyptians were at 10 welfare, was it good to learn that they were at 10 welfare?". And the answer is no: the learning was a 0-welfare change.

This sounds like CDT, though, by co... (read more)

Gifted $1 million. What to do? (Not hypothetical)

I think this should be the default option for your donations (setting aside how much and when you want to donate), and you should either defer to them or make a serious effort to beat them (possibly with their help). You can talk to the fund managers for advice. These fund managers have a good idea of how money is being spent within their causes and how their grants/donations might affect where other funding goes (e.g. where Open Phil grants).

It could be worth asking them for estimates of the cost-effectiveness of their marginal grants, although I don't know if they will actually keep track of this, and it could have extremely high uncertainty, depending on the cause.

How much should we still worry about catching COVID? [Links and Discussion Thread]

That's fair, but also it seems kind of implicit and would make a good chunk of the title get cut off on the front page.

Extrapolated Age Distributions after We Solve Aging

You're thinking they'd be lower, right? Presumably people would have better quality of life and mental health, and so would be less inclined to commit suicide each year.

2Peter Wildeford1moThat's what I was thinking.
6Linch1moAlso presumably selection effects matter quite a lot, eventually.
Can you control the past?

I think the thought experiments you give are pretty decisive in favour of the EDT answers over the CDT answers, and I guess I would agree that we have some kind of subtle control over the past, but I would also add:

Acting and conditioning on our actions doesn't change what happened in the past; it only tells us more about it. Finding out that Ancient Egyptians were happier than you thought before doesn't make it so that they were happier than you thought before; they already observed their own welfare, and you were just ignorant of it. While EDT would not ... (read more)

2Joe_Carlsmith1moI think this is an interesting objection. E.g., "if you're into EDT ex ante, shouldn't you be into EDT ex post, and say that it was a 'good action' to learn about the Egyptians, because you learned that they were better off than you thought in expectation?" I think it depends, though, on how you are doing the ex post evaluation: and the objection doesn't work if the ex post evaluation conditions on the information you learn. That is, suppose that before you read Wikipedia, you were 50% on the Egyptians were at 0 welfare, and 50% they were at 10 welfare, so 5 in expectation, but reading is 0 EV. After reading, you find out that their welfare was 10. OK, should we count this action, in retrospect, as worth 5 welfare for the Egyptians? I'd say no, because the ex post evaluation should go: "Granted that the Egyptians were at 10 welfare, was it good to learn that they were at 10 welfare?". And the answer is no: the learning was a 0-welfare change.
Even Allocation Strategy under High Model Ambiguity

Added an extra bit:

Furthermore, "doing nothing"/"not investing" is only one option among our multiple options, and if it's equally ambiguous, then it will only make up 1/Nth of the optimal portfolio. This is an argument against paralysis, i.e. doing nothing, when faced with complex cluelessness.

Changes in conditions are a priori bad for average animal welfare

Similarly, I would guess random changes are more likely to reduce population sizes than to increase them (in the short term), because animals are somewhat finely tuned for their specific conditions. If animal welfare is on average bad in the wild, then the expected decrease in average welfare would be made up for by a large enough reduction in the number of animals; if average welfare is positive or 0, then a random change seems bad in expectation.

In the long term, we need to compare equilibria, and I don't have any reason to believe a r... (read more)

Incentivizing Donations through Mutual Matching

Would you need to choose the leverage schedule so that you're unlikely to fully fund the project? Otherwise, the leverage guarantee could be misleading: once it is (nearly?) fully funded, leverage must decrease with the number of donors, since some could have dropped out without reducing overall funding to the project.
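A minimal sketch of the worry, under the simplifying assumptions of a flat per-donor pledge and a hard funding cap (not the post's actual mutual-matching schedule):

```python
# Once total commitments approach what the project can absorb, each donor's
# counterfactual contribution (and hence any "leverage" claim) shrinks.
funding_needed = 100.0   # value the project can actually absorb (assumed)
pledge_per_donor = 10.0  # assume each participant ends up giving this much

def useful_funding(n_donors: int) -> float:
    return min(n_donors * pledge_per_donor, funding_needed)

for n in [5, 10, 12, 15]:
    marginal = useful_funding(n) - useful_funding(n - 1)
    print(f"{n} donors: total useful funding {useful_funding(n):6.1f}, "
          f"marginal value of the n-th donor {marginal:4.1f}")
# Beyond 10 donors the marginal (counterfactual) value of each extra donor is 0,
# even though the advertised per-donor leverage stayed the same.
```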

6Florian Habermacher1moYou're right. I see two situations here: (i) the project has a strict upper limit on funding required. In this case, if you must (a) limit the pool of participants, and/or (b) their allowed contribution scales, and/or (c) maybe indeed the leverage progression, meaning you might incentivize people less strongly. (ii) the project has strongly decreasing 'utility'-returns for additional money (at some point). In this case, (a), (b), (c) from above may be used, or in theory you as organizer could simply not care: your funding collection leverage still applies, but you let donors judge whether they find they discount the leverage for large contributions, as they judge the value of the money being less valuable on the upper tail; they may then accordingly decide to not contribute, or to contribute with less. Finally, there is simply the possibility to use a cutoff point, above which the scheme simply must be cancelled, to address the issue that you raise, or the one I discuss in the text: to prevent individual donors to have to contribute excessive amounts, when more than expected commitments are received. If that cutoff point is high enough so that it is unlikely enough to be reached, you as organizer may be happy to accept it. Of course one could then think about dynamics, e.g. cooling-off period before you can re-run the cancelled collection, without indirectly (too strongly) undermining the true marginal effect in a far-sighted assessment of the entire situation. In reality: I fear even with this scheme, if in some cases it hopefully turns to be practical, many public goods problems remain underfunded (hopefully simply a bit less strongly) rather than overfunded, so, I'm so far not too worried about that one.