To clarify, the context of the quoted remark was that, just as we can care for those we love in the face of cluelessness, we can likewise care for and benefit strangers.
Specifically in relation to this:
we still have reason to respect other values we hold dear — those that were never grounded purely in the impartial good in the first place. Integrity, care for those we love, and generally not being a jerk, for starters. Beyond that, my honest answer is: I don’t know.
I think the "other values we hold dear" can and should also include a strong focus on helpin...
I was simply erring on the side of being generous to the non-clueless view
Right, I suspected that — hence the remark about infinite ethics considerations counting as an additional problem to what's addressed here. My point was that the non-clueless view addressed here (finite case) already implicitly entails scope limitations, so if one embraces that view, the question seems to be what the limitation (or discounting) in scope is, not whether there is one.
To clarify, what I object to here is not a claim like "very strong consequence-focused impartiality is most plausible all things considered", or "alternative views also have serious problems". What I push back against is what I see as an implied brittleness of the general project of effective altruism (broadly construed), along the lines of "it's either very strong consequence-focused impartiality or total bust" when it comes to working on EA causes/pursuing impartial altruism in some form.
On the first point, you're right, I should have phrased this differently: it's not that those passages imply that impartiality entails consequentialism ("an act is right iff it brings about the best consequences"). What I should have said is that they seem to imply that impartiality at a minimum entails strong forms of consequence-focused impartiality, i.e. the impartiality component of (certain forms of) consequentialism ("impartiality entails that we account for all moral patients, and all the most significant impacts"). My point was ...
At a conceptual level, I think it's worth clarifying that "impartiality" and "impartial altruism" do not imply consequentialism. For example, the following passages seem to use these terms as though they must imply consequentialism. [Edit: Rather, these passages seem to use the terms as though "impartiality" and the like must be focused on consequences.]
...impartiality entails that we account for all moral patients, and all the most significant impacts we could have on them. ...
Perhaps it’s simply indeterminate whether any act ha
"reliably" doesn't mean "perfectly"
Right, I guess within my intuitive conceptions and associations, it's more like a spectrum, with "perfectly" being the very strongest, "reliably" being somewhere in between, and something like "the tiniest bit better than chance" being the weakest. I suspect many would endorse ~the latter formulation without endorsing anything quite as strong as "reliably".
To be clear, I don't think this is a matter of outright misrepresenting others' views; I just suspect that many, maybe most, of those who hold a contrary view would say...
Yeah, my basic point was that just as I don't think we need to ground a value like "caring for those we love" in whether it has the best consequences across all time and space, I think the same applies to many other instances of caring for and helping individuals — not just those we love.
For example, if we walk past a complete stranger who is enduring torment and is in need of urgent help, we would rightly take action to help this person, even if we cannot say whether this action reduces total suffering or otherwise improves the world overall. I think that...
Why would sentient beings' interests matter less intrinsically when those beings are more distant or harder to precisely foresee?
I agree with that sentiment :) But I don't think one would be committed to saying that distant beings' interests matter less intrinsically if one "practically cares/focuses" disproportionately on beings who are in some sense closer to us (e.g. as a kind of mid-level normative principle or stance). The latter view might simply reflect the fact that we inhabit a particular place in time and space, and that we can plausibly better he...
I use "impartiality" loosely, in the sense in the first sentence of the intro: "gives moral weight to all consequences, no matter how distant".
Thanks for clarifying. :)
How about views that gradually discount at the normative level based on temporal distance? They would give weight to consequences no matter how distant, and still give non-trivial weight to fairly distant consequences (by ordinary standards), yet the weight would go to zero as the distance grows. If normative neartermism is largely immune to your arguments, migh...
I meant "the reason to work on such causes that my target audience actually endorses."
I suspect there are many people in your target audience who don't exclusively endorse, or strictly need to rely on, the views you critique as their reason to work on EA causes (I guess I'm among them).
Toward the very end, you write:
“But what should we do, then?” Well, we still have reason to respect other values we hold dear — those that were never grounded purely in the impartial good in the first place. Integrity, care for those we love, and generally not being a jerk, for starters. Beyond that, my honest answer is: I don’t know.
You obviously don't exclude the following, but I would strongly hope that — beyond just integrity, care for those we love, and not being a jerk — we can also at a minimum endorse a commitment to reducing overt and gratuitous s...
I should probably have made it more clear that this isn't an objection, and maybe not even much of a substantive point, but more just a remark on something that stood out to me while reading, namely that the views critiqued often seemed phrased in much stronger terms than what people with competing views would necessarily agree with.
Some of the examples that stood out were those I included in quotes above.
You write the following in the first post in the sequence (I comment on it here because it relates closely to similar remarks in this post):
if my arguments hold up, our reason to work on EA causes is undermined.
This claim seems to implicitly assume that perfect impartiality [edit: or very strong forms of impartiality] across all space and time is the only reason or grounding we could have for working on EA causes. But that's hardly the case — there are countless alternative reasons or moral stances that could ground support for EA (or work on EA ...
The first thought I have is mostly an impression or something that stood out to me: it seems to me like the word choices here sometimes don't quite reflect the point being made or the full range of views being critiqued, arguably including the strongest competing views.
For example, when talking about heuristics that are supposed to be "robust", or strategies we can "reliably intervene on", or whether we can "reliably weigh up" relevant effects, etc, it seems to me that these word choices convey something much stronger than what would necessarily be endorse...
This is a very loose idea, based on observations like these:
The project w...
The Codeforces Elo progression from o1-mini to o3-mini was around 400 points (with compute costs held constant). Similarly, the Elo jumps from 4o (~800) to o1-preview (~1250) to o1-mini (~1650) were also each around 400 points (the compute costs of 4o appear similar to those of o1-mini, while they're higher for o1-preview).
People from OpenAI report that o4 is now being trained and that training runs take around three months in the current "reasoning paradigm". So if we were to engage in naive projection, we might project...
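The kind of naive projection gestured at above could be sketched as follows. This is a minimal illustration using the rough Elo estimates quoted earlier (~400 points per generation); the averaged jump and the projected values are purely illustrative extrapolations, not reported figures.

```python
# Naive linear extrapolation of Codeforces Elo across model generations,
# using the rough estimates quoted above. The projected values beyond
# o3-mini are illustrative extrapolations, not reported numbers.

elo_estimates = {"4o": 800, "o1-preview": 1250, "o1-mini": 1650}

# Average jump per generation from the quoted estimates (~400 points each).
jumps = [1250 - 800, 1650 - 1250]
avg_jump = sum(jumps) / len(jumps)  # 425.0

# o3-mini was reported as roughly o1-mini + 400; project a few generations.
o3_mini_elo = 1650 + 400
projection = {f"gen+{i}": round(o3_mini_elo + i * avg_jump) for i in range(1, 4)}
print(avg_jump, projection)
```

Of course, the whole point of calling this "naive" is that there is no guarantee such linear trends continue; the sketch only makes the extrapolation explicit.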
This is what I meant:
it seems to me like a striking ... kind of coincidence to end at exactly — or indistinguishably close to — ... any position of complete agnosticism
That is, I think it tends to apply to complete and perfect agnosticism in general, even if one doesn't frame or formulate things in terms of 50/50 or the like. (Edit: But to clarify, I think it's less striking the less one has thought about a given choice and the less the options under consideration differ in character; so I think there are many situations in which practically complete agnosticism is reasonable.)
Thanks for your comment :)
fwiw, I think I'm more skeptical than you that we'll ever find evidence robust enough to warrant updating away from radical agnosticism on whether our influence on cosmic actors makes the future better or worse
I guess there are various aspects that are worth teasing apart there, such as: humanity's overall influence on other cosmic actors, a given altruistic community's influence on cosmic actors, individual actions taken (at least partly) with an eye to having a beneficial influence on (or together with) other cosmic actors, and ...
On "cold computing": to clarify, the piece I linked to was not about aestivation / waiting. It was about using "cold computing" right away.
The comment from gwern lists some reasons that may speak against "cold computing" (in general) as playing a significant role in answering the Fermi question, but again, a question is how decisive those reasons are. Even if such reasons should lead us to think that "cold computing" plays no significant role with 95 percent confidence, it still seems worth avoiding the mistake of belief digitization: simply collapsing the...
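The point about belief digitization can be made concrete with a minimal sketch: collapsing a residual 5 percent credence to zero discards the scenario entirely, whereas retaining it lets the scenario keep some expected weight. The stake value below is an arbitrary illustrative unit, not a figure from the discussion.

```python
# Belief digitization vs. retaining a small probability:
# collapsing a 5% credence to "false" discards the scenario entirely,
# while keeping the credence preserves non-trivial expected weight.
# The stake of 100 units is an arbitrary illustrative choice.

p_scenario = 0.05  # residual credence after the counterarguments
stake = 100        # hypothetical importance if the scenario holds

digitized_weight = 0 * stake           # credence collapsed to zero
retained_weight = p_scenario * stake   # small but non-trivial
print(digitized_weight, retained_weight)
```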
If one includes sims, grabby civs would possibly but not necessarily have more observers (like us) than quiet expansionist civs. For example, the expected number of sims may be roughly the same, or even larger, in quiet expansionist scenarios that involve a deadline/shift (cf. sec. 4).[1] There's also the possibility that computation could be more efficient in quiet regimes (some have argued along these lines, though I'm by no means saying it's correct; I'm not sure if we currently understand physics well enough to make confident pronouncements either...
The dark matter thought has crossed my mind too (and others have also speculated along those lines). Yet the fact that dark matter appears to have been present in the very early universe speaks strongly against it — at least when it comes to the stronger "be" conjecture, less so the weaker "contain" conjecture, which seems more plausible.
I see, thanks for clarifying.
In terms of potential tradeoffs between expansion speeds vs. spending resources on other things, it seems to me that one could argue in both directions regarding what the tradeoffs would ultimately favor. For example, spending resources on the creation of Dyson swarms/other clearly visible activity could presumably also divert resources away from maximally fast expansion. (There is also the complication of transmitting the resulting energy/resources to frontier scouts, who might be difficult to catch up with if they are at ~max...
Thanks for your comment. :) One reason I didn't use the term "zoo hypothesis" is that I've seen it defined in rather different ways. Relatedly, I'm unsure what you mean by zoo vs. natural reserve hypotheses/scenarios. How are these different, as you use these terms? Another question is whether proportions of zoos vs. natural reserves on Earth can necessarily tell us much about "zoos" vs. "natural reserves" in a cosmic context.
Thanks for your comment, Jim. :)
Why would you expect grabby aliens to expand faster than quiet expansionist ones? I didn't readily find a reason in your linked piece, and I don't see why loud vs. quiet per se should influence expansion speeds; both could presumably approach the ultimate limit of what is physically possible?
Thanks for your comment and for the links :)
I don't think we have compelling video evidence at all
I'd agree that there's no compelling video evidence in the sense of it being remotely conclusive; it's possible that it's all mundane. But it seems to me that some of the footage is sufficiently puzzling/sufficiently unclear so as to be worthy of investigation, and that it provides some (further) reason to take this issue seriously. I agree that the reports, including reports involving radar evidence, are more noteworthy in terms of existing evidence.
Regarding...
I feel like this post gets it backwards and tries to find reasons why it’s reasonable to take UFOs seriously instead of arriving at that conclusion after careful deliberation.
The starting point of the post is that there are sufficient grounds for curiosity in light of existing reports/evidence. So to be clear, the initial and main motivation I present for taking UFOs seriously is that evidence, which I claim crosses the bar for "worthy of taking a closer look".
...I admit that the connection UFO <-> extraterrestrial intelligence is more immediate than in
For one, eye-witness reports of UFOs have in many cases been corroborated by radar evidence, which to my knowledge has not happened in the case of any claimed miracles. (See e.g. this playlist, the 1952 Washington DC incident, the 1986 Brazil incident, and Coumbe, 2022.)
Second, the eye-witnesses are in many cases trained pilots who describe going through a fairly rational process of hypothesis testing, like "first I thought it might be a balloon, then that was ruled out by x maneuver", "then I thought of y conventional hypothesis, but that was ruled out by...
Thanks for asking :)
Some background notes that may be relevant: When I first heard about the UFO topic in a more serious way (I think when Sam Harris first talked about it, ~2017-2018?), I searched for debunkings and came upon Mick West's debunking videos. I found them convincing and in effect I dismissed the topic for years, feeling vindicated in my pre-existing position of total dismissal toward the topic (until I read Hanson's post "My awkward inference" in late April 2023 and decided to take a deeper look, as described at the outset of this post).
Secon...
FWIW, I don't see that piece as making a case against panpsychism, but rather against something like "pansufferingism" or "pansentienceism". In my view, these arguments against the ontological prevalence of suffering are compatible with the panpsychist view that (extremely simple) consciousness / "phenomenality" is ontologically prevalent (cf. this old post on "Thinking of consciousness as waves").
The following list of reports may or may not be helpful to include in the 'Further reading' section, but I don't think that's for me to decide since it's collected by me and published on my blog: https://magnusvinding.com/2023/06/11/what-credible-ufo-evidence/
A similar critique has been made in Friederich & Wenmackers' article "The future of intelligence in the Universe: A call for humility", specifically in the section "Why FAST and UNDYING civilizations may not be LOUD".
Thus it is not at all true that we ignore the possibility of many quiet civs.
But that's not the claim of the quoted text, which is explicitly about quiet expansionist aliens (e.g. expanding as far and wide as loud expansionist ones). The model does seem to ignore those (and such quiet expansionists might have no borders detectable by us).
[Edit: For more on quiet expansionist civilizations, see the more recent post "Silent cosmic rulers".]
Thanks, and thanks for the question! :)
It's indeed not obvious what I mean when I write "a smoothed-out line between the estimated growth rate at the respective years listed along the x-axis". It's neither the annual growth rate in that particular year in isolation (which is subject to significant fluctuations), nor the annual average growth rate from the previously listed year to the next listed year (which would generally not be a good estimate for the latter year).
Instead, it's an estimated underlying growth rate at that year based on the growth rates i...
I think this is an important point. In general terms, it seems worth keeping in mind that option value also entails option disvalue (e.g. the option of losing control and giving rise to a worst-case future).
Regarding long reflection in particular, I notice that the quotes above seem to mostly mention it in a positive light, yet its feasibility and desirability can also be separately criticized, as I've tried to do elsewhere:
...First, there are reasons to doubt that a condition of long reflection is feasible or even desirable, given that it woul
Thanks for your question, Péter :)
There's not a specific plan, though there is a vague plan to create an audio version at some point. One challenge is that the book is full of in-text citations, which in some places makes the book difficult to narrate (and it also means that it's not easy to create a listenable version with software). You're welcome to give it a try if you want, though I should note that narration can be more difficult than one might expect (e.g. even professional narrators often make a lot of mistakes that then need to be corrected).
Thanks for your comment, Michael :)
I should reiterate that my note above is rather speculative, and I really haven't thought much about this stuff.
1: Yes, I believe that's what inflation theories generally entail.
2: I agree, it doesn't follow that they're short-lived.
In each pocket universe, couldn't targeting its far future be best (assuming risk neutral expected value-maximizing utilitarianism)? And then the same would hold across pocket universes.
I guess it could be; I suppose it depends both on the empirical "details" and one's decision theory.
Regardin...
These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.
I don't agree that these points are properly disclaimed in the post. I think the post gives an imbalanced impression of the discussion and potential biases around these issues, and I think that impression is worth balancing out, even if presenting a balanced impression wasn't the point of the post.
...The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the rec
I agree that vegan advocacy is often biased and insufficiently informed. That being said, I think similar points apply with comparable, if not greater, strength in the "opposite" direction, and I think we end up with an unduly incomplete perspective on the broader discussion around this issue if we only (or almost only) focus on the biases of vegan advocacy alone.
For example, in terms of identifying reasonable moral views (which, depending on one's meta-ethical view, isn't necessarily a matter of truth-seeking, but perhaps at least a matter of being "plaus...
The view obviously does have "implausible" implications, if that means "implications that conflict with what seems obvious to most people at first glance".
I don't think what Knutsson means by "plausible" is "what seems obvious to most people at first glance". I also don't think that's a particularly common or plausible use of the term "plausible". (Some examples of where "plausible" and "what seems obvious to most people at first glance" plausibly come apart include what most people in the past might at first glance have considered obvious about the moral ...
The reason this matters is that EA frequently decides to make decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article.
I think that misrepresents what I write and "advocate" in the essay. Among various other qualifications, I write the following (emphases added):
...I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to m
Thanks! :)
Assigning a single number to such a prior, as if it means anything, seems utterly absurd.
I don't agree that it's meaningless or absurd. A straightforward meaning of the number is "my subjective probability estimate if I had to put a number on it" — and I'd agree that one shouldn't take it for more than that.
I also don't think it's useless, since numbers like these can at least help give a very rough quantitative representation of beliefs (as imperfectly estimated from the inside), which can in turn allow subjective ballpark updates based on expli...
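The kind of rough, explicit updating gestured at here can be sketched with a simple odds-form Bayes update. The 1-in-100 prior is the figure discussed in this thread; the likelihood ratio of 5 is a purely hypothetical placeholder for some piece of evidence.

```python
# Odds-form Bayes update: a rough quantitative representation of a
# subjective prior and how explicit evidence could shift it.
# Prior of 0.01 is the figure discussed in this thread; the
# likelihood ratio of 5 is a purely hypothetical placeholder.

def bayes_update(prior, likelihood_ratio):
    """Return the posterior probability after an odds-form update."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

posterior = bayes_update(0.01, 5)
print(round(posterior, 4))  # roughly 0.0481
```

The output is not meant to be taken for more than a ballpark representation of beliefs, but it does make the direction and rough magnitude of an update explicit.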
You give a prior of 1 in a hundred that aliens have a presence on earth. Where did this number come from?
It was in large part based on the considerations reviewed in the section "I. An extremely low prior in near aliens". The following sub-section provides a summary with some attempted sanity checks and qualifications (in addition to the general qualifications made at the outset):
...All-things-considered probability estimates: Priors on near aliens
Where do all these considerations leave us? In my view, they overall suggest a fairly ignorant prior. Specificall
Thanks for your comment. I basically agree, but I would stress two points.
First, I'd reiterate that the main conclusions of the post I shared do not rest on the claim that extraordinary UFOs are real. Even assuming that our observed evidence involves no truly remarkable UFOs whatsoever, a probability of >1 in 1,000 in near aliens still looks reasonable (e.g. in light of the info gain motive), and thus the possibility still seems (at least weakly) decision-relevant. Or so my line of argumentation suggests.
Second, while I agree that the wild abilities are...
I think it would have been more fair if you hadn't removed all the links (to supporting evidence) that were included in the quote below, since it just comes across as a string of unsupported claims without them:
...Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectivel
I didn't claim that there isn't plenty more data. But a relevant question is: plenty more data for what? He says that the data situation looks pretty good, which I trust is true in many domains (e.g. video data), and that data would probably in turn improve performance in those domains. But I don't see him claiming that the data situation looks good in terms of ensuring significant performance gains across all domains, which would be a more specific and stronger claim.
Moreover, the deference question could be posed in the other direction as well, e.g. do y...
To clarify, the general approach outlined here doesn't rest on the use of discount rates — that's just a simple and illustrative example of scope-restriction.
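As a minimal sketch of the kind of scope-restriction via discounting mentioned above: the weight given to consequences decays with temporal distance, approaching zero without ever becoming exactly zero. The 5 percent annual rate and the chosen horizons are arbitrary illustrative choices, not figures from the discussion.

```python
# A simple illustration of scope-restriction via temporal discounting:
# weight on consequences decays with temporal distance, approaching
# zero yet never becoming exactly zero.
# The 5% annual rate is an arbitrary illustrative choice.

def discount_weight(years_ahead, annual_rate=0.05):
    """Weight assigned to a consequence occurring `years_ahead` from now."""
    return (1 - annual_rate) ** years_ahead

weights = {t: round(discount_weight(t), 4) for t in (0, 10, 50, 100)}
print(weights)  # weights shrink toward zero but remain positive
```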