Thanks for explaining.
I agree that the standards can be too high, especially when participants are both fully informed and give consent (e.g. COVID vaccine trials). In this case, though, I think participants were not properly informed of the potential (community, social and career) risks ahead of time, and no one made sure they understood those risks before participating.
When I wrote my comment, I actually had in mind the Stanford prison experiment, the Milgram experiment and Vsauce convincing people they were in a real trolley problem (this one had debriefing but not informed consen... (read more)
I think it would read something closer to "We won! Everyone who opted in decided to cooperate!"
Based on how others have been warning you, it feels like the kind of psychological/social experiment you would need to have a psychological debriefing after to get ethics approval, and even then, still might not get approval.
(I downvoted this comment because I think the degree of ethics approvals needed for certain classes of science experiments is immorally high under some reasonable assumptions, and the EAF should not endorse arguments coming out of status quo bias. It's also reasonably possible that I would not have downvoted if Michael wasn't a coworker)
Thanks for writing this, and I agree with your take that it's toxic when people find out after starting to engage that they may face serious consequences for not taking it seriously enough (and indeed whether or not they actually would, since it's still unsettling to believe it). I'm sorry that this has been your experience.
I agree with you, although someone might still opt in while treating it like a game, not initially taking it as seriously as others in the community do, and then take the site down. Last year, a user was manipulated into taking down LW by someone claiming the user had to enter their code to save LW.
I thought it was explicit in the announcement post that we should take this seriously, but not in the e-mail I got:
If LessWrong chose any launch code recipients they couldn't trust, the EA Forum will go down, and vice versa. One of the sites going down means that people are blocked from accessing important resources: the destruction of significant real value. What's more, it will damage trust between the two sites ("I guess your most trusted users couldn't be trusted to not take down our site") and also for each site itself ("I guess the admins couldn't fi
I don't think anything happens unless you enter the code, too.
They should have left it up longer if they wanted to test us with it: it was gone when I reloaded the pages, and the timer never updated while it was up, even though each side was supposed to have an hour to retaliate (or maybe it was meant to give the impression that the hour was already over and it was too late).
How could we be convinced that the donations were counterfactual?
Also, do you mean you're (considering) taking bribes (to EA charities) to push the button?
Since the timer wasn't updating on either site, I assume they weren't testing us (yet).
I briefly saw a "Missile Incoming" message with a 60:00 timer (that wasn't updating) on the buttons on the front pages of both LW and the EA Forum, at around 12pm EST, on mobile. Both messages were gone when I refreshed. Was this a bug or were they testing the functionality, testing us or preparing to test us?
is it at all common for proponents of objective list theories of well-being to hold that the good life is worse than nonexistence?
I think this would be pretty much only antinatalists who hold stronger forms of the asymmetry, and this kind of antinatalism (and indeed all antinatalism) is relatively rare, so I'd guess not.
I'm also interested in people's predictions had the codes been anonymous (not personalized). In that case, individual reputational risk would be low, so it would mostly be a matter of community reputational risk, and we'd learn more about whether EAs or LWers would stab each other in the back (well, inconvenience each other) if they could get away with it.
That's very unintuitive to me. If "the good life" is significantly more valuable than a meh life, and a meh life is just as valuable as nonexistence, doesn't it follow that a flourishing life is significantly more valuable than nonexistence?
Under the asymmetry, any life is at most as valuable as nonexistence, and depending on the particular view of the asymmetry, may be as good only when faced with particular sets of options.
Curious why you think this first part? Seems plausible but not obvious to me.
I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). I wouldn't be surprised if it were common for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only matterin... (read more)
If some technical AI safety work accelerates AI, we could miss opportunities for AI safety governance/policy work as a result. OTOH, AI safety governance/policy work, if not done carefully, could give an edge to those unconcerned with safety by impeding everyone else, and that could be bad.
In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled: freedom/autonomy, relationships (friendship/family/love), art/beauty/expression, truth/discovery, the continuation of tradition/ancestors' efforts, etc.
I think many (but not all) of these values are mostly conditional on future people existing or directed at their own lives, not the lives of others, and you should also consider the other side: in an empty future, everyone has full freedom/autonomy and gets everything they wan... (read more)
For similar moral views (asymmetric, but not negative utilitarian), this paper might be of interest:
Teruji Thomas, "The Asymmetry, Uncertainty, and the Long Term" (also on the EA Forum). See especially section 6 (maybe after watching the talk, instead of reading the paper, since the paper gets pretty technical).
(My views are suffering-focused and I'm not committed to longtermism, although I'm exploring s-risks slowly, mostly passively.)
I do not feel compelled by arguments that the future could be very long. I do not see how this is possible without at least soft totalitarianism, which brings its own risks of reducing the value of the future.
Do you mean you expect all of our descendants to be wiped out, with none left? What range would you give for your probability of extinction (or unrecoverable collapse) each year?
If we colonize space and continue to expand (whi... (read more)
To be clear, by "x-risk" here, you mean extinction risks specifically, and not existential risks generally (which is what "x-risk" was coined to refer to, from my understanding)? There are existential risks that don't involve extinction, and some s-risks (or all, depending on how we define s-risk) are existential risks because of the expected scale of their suffering.
Do analgesics also reduce reflexive responses to noxious stimuli in humans? If so, this might be an argument against taking reduced responses to noxious stimuli, on its own, as good evidence for conscious pain (effects on learning strengthen the argument somewhat, though). We'd want something that selectively targets the (consciously experienced) negative affect of pain in humans, but as far as I know, reflexive responses may be possible without negative affect (in humans and nonhumans).
I think we should be able to find lots of examples in the real world like Smoking Lesion, and I think CDT looks better than EDT in more typical choice scenarios because of it. The ones where CDT goes wrong and EDT is right (as discussed in your post) seem pretty atypical to me, although they could still matter a lot. I think both theories are probably wrong.
What matters in Smoking Lesion are:
Ya, that's fair. If this is the case, I might say that the biological neurons don't have additional useful degrees of freedom for the same number of inputs, and the paper didn't explicitly test for this either way, although, imo, what they did test is weak Bayesian evidence for biological neurons having more useful degrees of freedom, since if they could be simulated with few artificial neurons, we could pretty much rule out that hypothesis. Maybe this evidence is too weak to update much on, though, especially if you had a prior that simulating biological neurons would be pretty hard even if they had no additional useful degrees of freedom.
Woops, ya, I got my dates mixed up for COVID and JUST.
However, that would presumably also be true for whatever other tools or sources we might alternatively rely on for cultured meat timelines and so I don't think it changes the overall conclusion on how much stock to put into the types of predictions/predictors represented in this dataset.
I'm not sure what you mean by this. My point is that COVID might have made some of these predictions false, when they would have otherwise ended up true without COVID, so these groups just got very unlucky, and we should... (read more)
Just defer to Mike Huemer. He gets from common sense morality to veganism and anarcho-capitalism. :P
EDIT: Woops, got my COVID dates mixed up; I was thinking March 2020.
March 2019 "JUST, the San Francisco-based company racing to be the first to bring cell-based meat to market, announced in a CBS San Francisco interview last month that they would debut their first product — a cultured chicken nugget — in Asia sometime this year"
I think it's reasonably likely this was delayed by COVID-19, given they made this prediction when it wasn't clear how bad things would be, they debuted in a restaurant in Singapore at the end of 2020, and restaurants where they were... (read more)
I don't think it's reasonably likely this particular prediction was delayed by COVID-19, given they made this prediction in early 2019 about a product being on offer *in 2019*. I don't think there is much to suggest any impediments to a product roll-out in 2019 from the pandemic since it only started having major impacts/reactions in 2020. For other predictions in this dataset made by companies, research institutes, and reported in the media it seems likely the pandemic threw up an unexpected obstacle and delay. However, that would presuma... (read more)
This could be in part because GFI got more financial support from the EA community, both from Open Phil and due to ACE.
Ya, this is what I'm thinking, although "have to" is also a matter of scaling, e.g. a larger brain could accomplish the same with less powerful neurons. There's also probably a lot of waste in the human brain, even just among the structures most important for reasoning (although the same could end up being true of an AGI/TAI we try to build; we might need a lot of waste before we can prune or make smaller student networks, etc.).
On falling leaves, the authors were just simulating the input and output behaviour of the neurons, not the physics/chemistry/biolog... (read more)
Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it's nearly as likely to be net-negative as net-positive given our great uncertainty
Is this for both technical AI work and AI governance work? For both, what are the main ways these interventions are likely to backfire?
I guess no one is really publishing these CEAs, then?
Do you also have CEAs of the meta work you fund, in terms of AI risk reduction/increase?
I would say a biological neuron can compute more complex functions or a wider variety of functions of its inputs than standard artificial neurons in deep learning (linear combination of inputs followed by a nonlinear real-valued function with one argument), and you could approximate functions of interest with fewer biological neurons than artificial ones. Maybe biological neurons have more (useable) degrees of freedom for the same number of input connections.
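A minimal sketch of the "standard artificial neuron" I mean here (the particular weights and the ReLU nonlinearity are just illustrative):

```python
def artificial_neuron(inputs, weights, bias):
    """A standard deep-learning neuron: a linear combination of the inputs
    followed by a single-argument real-valued nonlinearity (here, ReLU)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, z)

# With n inputs, this has only n + 1 parameters (weights plus bias), and the
# output depends on the inputs only through the single scalar z; the claim is
# that a biological neuron with the same n inputs can compute a richer family
# of functions than this.
print(artificial_neuron([1.0, 2.0], [0.5, -0.25], 0.6))
```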
You could move me by building an explicit quantitative model for a popular question of interest in longtermism that (a) didn't previously have models (so e.g. patient philanthropy or AI racing doesn't count), (b) has an upshot that we didn't previously know via verbal arguments, (c) doesn't involve subjective personal guesses or averages thereof for important parameters, and (d) I couldn't immediately tear a ton of holes in that would call the upshot into question.
I feel that (b) identifying a new upshot shouldn't be necessary; I think it should be e... (read more)
Fair. I should revise my claim to being about the likelihood of a catastrophe and the risk reduction from working on these problems (especially or only in AI; I haven't looked as much at what's going on in other x-risks work). AI Impacts looks like they were focused on timelines.
On point 1, my claim is that the paper is evidence for the claim that biological neurons are more computationally powerful than artificial ones, not that we'd achieve AGI/TAI by simulating biological brains. I agree that for those who already expected this, this paper wouldn't be much of an update (well, maybe the actual numbers matter; 1000x seemed pretty high, but is also probably an overestimate).
I also didn't claim that the timelines based on biological anchors that I linked to would actually be affected by this (since I didn't know either way whether ... (read more)
Hmm, I guess I hadn't read that post in full detail (or I did and forgot about the details), even though I was aware of it. I think the argument there that mortality will roughly match some time after transition is pretty solid (based on two datasets and expert opinion). I think there was still a question of whether or not the "short-term" increase in mortality outweighs the reduction in behavioural deprivation, especially since it wasn't clear how long the transition period would be. This is a weaker claim than my original one, though, so I'll retract my ... (read more)
Some other concerns that seem to me to be consistent with motivated reasoning in animal welfare have been:
I'm not defending what you think is a bailey, but as a practical matter, I would say until recently (with Open Phil publishing a few models for AI), longtermists have not been using numbers or models much, or when they do, some of the most important parameters are extremely subjective personal guesses or averages of people's guesses, not based on reference classes, and risks of backfire were not included.
Here's some supporting evidence for it being hard to map:
In 2016, the Intelligence Advanced Research Projects Activity of the United States government launched MICrONS, a five-year, multi-institute project to map one cubic millimeter of rodent visual cortex, as part of the BRAIN Initiative. Though only a small volume of biological tissue, this project will yield one of the largest micro-scale connectomics datasets currently in existence.
A mouse brain is about 500x that.
On the other hand, progress with OpenWorm has been kind of slow, despi... (read more)
Thanks, these are both excellent points. I did hint at the first one, and I specifically came back to this post to mention the second, but you beat me to it. ;)
I've edited my post.
EDIT: Also edited again to emphasize the weaknesses.
Check the COVID-19 tag on LessWrong.
That is, suppose that before you read Wikipedia, you were 50% that the Egyptians were at 0 welfare and 50% that they were at 10 welfare, so 5 in expectation, but reading has 0 EV. After reading, you find out that their welfare was 10. OK, should we count this action, in retrospect, as worth 5 welfare for the Egyptians? I'd say no, because the ex post evaluation should go: "Granted that the Egyptians were at 10 welfare, was it good to learn that they were at 10 welfare?". And the answer is no: the learning was a 0-welfare change.
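The arithmetic in the example, as a quick sketch:

```python
# Before reading: 50% that the Egyptians were at 0 welfare, 50% at 10.
prior = {0: 0.5, 10: 0.5}
ev_before = sum(w * p for w, p in prior.items())  # 5.0 in expectation

# Ex post: granted that their welfare was 10, it was 10 whether or not you
# read the page, so the act of learning is a 0-welfare change.
welfare_if_read = 10
welfare_if_not_read = 10
value_of_reading_ex_post = welfare_if_read - welfare_if_not_read

print(ev_before, value_of_reading_ex_post)
```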
This sounds like CDT, though, by co... (read more)
New links added:
I think this should be the default option for your donations (setting aside how much and when you want to donate), and you should either defer to them or make a serious effort to beat them (possibly with their help). You can talk to the fund managers for advice. These fund managers have a good idea of how money is being spent within their causes and how their grants/donations might affect where other funding goes (e.g. where Open Phil grants).
It could be worth asking them for estimates of the cost-effectiveness of their marginal grants, although I don't know if they will actually keep track of this, and it could have extremely high uncertainty, depending on the cause.
That's fair, but also it seems kind of implicit and would make a good chunk of the title get cut off on the front page.
You're thinking they'd be lower, right? Presumably people would have better quality of life and mental health, and so be less inclined to commit suicide each year.
I think the thought experiments you give are pretty decisive in favour of the EDT answers over the CDT answers, and I guess I would agree that we have some kind of subtle control over the past, but I would also add:
Acting and conditioning on our actions doesn't change what happened in the past; it only tells us more about it. Finding out that Ancient Egyptians were happier than you thought before doesn't make it so that they were happier than you thought before; they already observed their own welfare, and you were just ignorant of it. While EDT would not ... (read more)
Added an extra bit:
Furthermore, "doing nothing"/"not investing" is only one option among our multiple options, and if it's equally ambiguous, then it will only make up 1/Nth of the optimal portfolio. This is an argument against paralysis, i.e. doing nothing, when faced with complex cluelessness.
Similarly, I would guess random changes are more likely to reduce population sizes than increase them (in the short term) because animals are somewhat finely tuned for their specific conditions, and if it's the case that animal welfare is on average bad in the wild, then the expected decrease in average welfare would be made up for by a large enough reduction in the number of animals. If average welfare is positive or 0, then a random change seems bad in expectation.
In the long term, we need to compare equilibria, and I don't have any reason to believe a r... (read more)
Would you need to choose the leverage schedule so that you're unlikely to fully fund the project? Otherwise, the leverage guarantee could be misleading: once it is (nearly?) fully funded, leverage must decrease with the number of donors, since some could have dropped out without reducing overall funding to the project.
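To illustrate the worry with a toy example (the numbers are made up): suppose a project needs $100k and each donor gives $10k. The counterfactual funding unlocked by the marginal donor drops to zero once the project is overfunded, even if every donor was promised the same leverage:

```python
def marginal_funding_unlocked(need, n_donors, gift):
    """Project funding that would be lost if one donor dropped out,
    capping total funding at what the project actually needs."""
    funded_with = min(n_donors * gift, need)
    funded_without = min((n_donors - 1) * gift, need)
    return funded_with - funded_without

# Before the project is fully funded, each gift is fully counterfactual:
print(marginal_funding_unlocked(100_000, 5, 10_000))   # 10000
# Once it's overfunded, a donor could have dropped out with no effect:
print(marginal_funding_unlocked(100_000, 12, 10_000))  # 0
```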