The following list of reports may or may not be helpful to include in the 'Further reading' section, but I don't think that's for me to decide since it's collected by me and published on my blog: https://magnusvinding.com/2023/06/11/what-credible-ufo-evidence/
A similar critique has been made in Friederich & Wenmackers' article "The future of intelligence in the Universe: A call for humility", specifically in the section "Why FAST and UNDYING civilizations may not be LOUD".
Yeah, it would make sense to include it. :) As I wrote "Robin Hanson has many big ideas", and since the previous section was already about signaling and status, I just mentioned some other examples here instead. Prediction markets could have been another one (though it's included in futarchy).
Thus it is not at all true that we ignore the possibility of many quiet civs.
But that's not the claim of the quoted text, which is explicitly about quiet expansionist aliens (e.g. expanding as far and wide as loud expansionist ones). The model does seem to ignore those (and such quiet expansionists might have no borders detectable by us).
Thanks, and thanks for the question! :)
It's indeed not obvious what I mean when I write "a smoothed-out line between the estimated growth rate at the respective years listed along the x-axis". It's neither the annual growth rate in that particular year in isolation (which is subject to significant fluctuations), nor the annual average growth rate from the previously listed year to the next listed year (which would generally not be a good estimate for the latter year).
Instead, it's an estimated underlying growth rate at that year based on the growth rates i...
I think this is an important point. In general terms, it seems worth keeping in mind that option value also entails option disvalue (e.g. the option of losing control and giving rise to a worst-case future).
Regarding long reflection in particular, I notice that the quotes above seem to mostly mention it in a positive light, yet its feasibility and desirability can also be separately criticized, as I've tried to do elsewhere:
...First, there are reasons to doubt that a condition of long reflection is feasible or even desirable, given that it woul
Thanks for your question, Péter :)
There's not a specific plan, though there is a vague plan to create an audio version at some point. One challenge is that the book is full of in-text citations, which in some places makes the book difficult to narrate (and it also means that it's not easy to create a listenable version with software). You're welcome to give it a try if you want, though I should note that narration can be more difficult than one might expect (e.g. even professional narrators often make a lot of mistakes that then need to be corrected).
Thanks for your comment, Michael :)
I should reiterate that my note above is rather speculative, and I really haven't thought much about this stuff.
1: Yes, I believe that's what inflation theories generally entail.
2: I agree, it doesn't follow that they're short-lived.
In each pocket universe, couldn't targeting its far future be best (assuming risk neutral expected value-maximizing utilitarianism)? And then the same would hold across pocket universes.
I guess it could be; I suppose it depends both on the empirical "details" and one's decision theory.
Regardin...
These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.
I don't agree that these points are properly disclaimed in the post. I think the post gives an imbalanced impression of the discussion and potential biases around these issues, and I think that impression is worth balancing out, even if presenting a balanced impression wasn't the point of the post.
...The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the rec
I agree that vegan advocacy is often biased and insufficiently informed. That being said, I think similar points apply with comparable, if not greater, strength in the "opposite" direction, and I think we end up with an unduly incomplete perspective on the broader discussion around this issue if we focus only (or almost only) on the biases of vegan advocacy.
For example, in terms of identifying reasonable moral views (which, depending on one's meta-ethical view, isn't necessarily a matter of truth-seeking, but perhaps at least a matter of being "plaus...
These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.
The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the recommendation to take animal welfare more seriously.
The view obviously does have "implausible" implications, if that means "implications that conflict with what seems obvious to most people at first glance".
I don't think what Knutsson means by "plausible" is "what seems obvious to most people at first glance". I also don't think that's a particularly common or plausible use of the term "plausible". (Some examples of where "plausible" and "what seems obvious to most people at first glance" plausibly come apart include what most people in the past might at first glance have considered obvious about the moral ...
The reason this matters is that EA frequently makes decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article.
I think that misrepresents what I write and "advocate" in the essay. Among various other qualifications, I write the following (emphases added):
...I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to m
Thanks! :)
Assigning a single number to such a prior, as if it means anything, seems utterly absurd.
I don't agree that it's meaningless or absurd. A straightforward meaning of the number is "my subjective probability estimate if I had to put a number on it" — and I'd agree that one shouldn't take it for more than that.
I also don't think it's useless, since numbers like these can at least help give a very rough quantitative representation of beliefs (as imperfectly estimated from the inside), which can in turn allow subjective ballpark updates based on expli...
You give a prior of 1 in a hundred that aliens have a presence on earth. Where did this number come from?
It was in large part based on the considerations reviewed in the section "I. An extremely low prior in near aliens". The following sub-section provides a summary with some attempted sanity checks and qualifications (in addition to the general qualifications made at the outset):
...All-things-considered probability estimates: Priors on near aliens
Where do all these considerations leave us? In my view, they overall suggest a fairly ignorant prior. Specificall
Thanks for your comment. I basically agree, but I would stress two points.
First, I'd reiterate that the main conclusions of the post I shared do not rest on the claim that extraordinary UFOs are real. Even assuming that our observed evidence involves no truly remarkable UFOs whatsoever, a probability of >1 in 1,000 in near aliens still looks reasonable (e.g. in light of the info gain motive), and thus the possibility still seems (at least weakly) decision-relevant. Or so my line of argumentation suggests.
Second, while I agree that the wild abilities are...
I think it would have been more fair if you hadn't removed all the links (to supporting evidence) that were included in the quote below, since it just comes across as a string of unsupported claims without them:
...Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectivel
I didn't claim that there isn't plenty more data. But a relevant question is: plenty more data for what? He says that the data situation looks pretty good, which I trust is true in many domains (e.g. video data), and that data would probably in turn improve performance in those domains. But I don't see him claiming that the data situation looks good in terms of ensuring significant performance gains across all domains, which would be a more specific and stronger claim.
Moreover, the deference question could be posed in the other direction as well, e.g. do y...
I think it's a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing "fates worse than death".
I'm not talking about people sacrificing themselves or the whole world. Even if we were to adopt a purely survivalist perspective, I think it's still far from obvious that trying to slow things down is more effective than is focusing on other aims. After all, the space of alternative aims that one could focus on is vast, and trying to slow things down comes with non-trivial risks of its own (e.g. risks of bac...
What are the downsides from slowing down?
I'd again prefer to frame the issue as "what are the downsides from spending marginal resources on efforts to slow down?" I think the main downside, from this marginal perspective, is opportunity costs in terms of other efforts to reduce future risks, e.g. trying to implement "fail-safe measures"/"separation from hyperexistential risk" in case a slowdown is insufficiently likely to be successful. There are various ideas that one could try to implement.
In other words, a serious downside of betting chiefly on efforts ...
I'm not sure what you are saying here? Do you think there is a risk of AI companies deliberately causing s-risks (e.g. releasing a basilisk) if we don't play nice!?
No, I didn't mean anything like that (although such crazy unlikely risks might also be marginally better reduced through cooperation with these actors). I was simply suggesting that cooperation could be a more effective way to reduce risks of worst-case outcomes that might occur in the absence of cooperative work to prevent them, i.e. work of the directional kind gestured at in my other comment ...
Thanks for your reply, Greg :)
I don't think this matters, as per the next point about there already being enough compute for doom
That is what I did not find adequately justified or argued for in the post.
I think the burden of proof here needs to shift to those willing to gamble on the safety of 100x larger systems.
I suspect that a different framing might be more realistic and more apt from our perspective. In terms of helpful actions we can take, I more see the choice before us as one between trying to slow down development vs. trying to steer future devel...
To push back a bit on the fast software-driven takeoff (i.e. a fast takeoff driven primarily by innovations in software):
Common objections to this narrative [of a fast software-driven takeoff] are that there won’t be enough compute, or data, for this to happen. These don’t hold water after a cursory examination of our situation. We are nowhere near close to the physical limits to computation ...
While we're nowhere near the physical limits to computation, it's still true that hardware progress has slowed down considerably on various measures. I t...
Current scaling "laws" are not laws of nature. And there are already worrying signs that things like dataset optimization/pruning, curriculum learning and synthetic data might well break them. It seems likely to me that LLMs will be useful in all three. I would still be worried even if LLMs prove useless in enhancing architecture search.
I agree that the reduction of s-risks is underprioritized, but it's unclear whether the aim of reducing s-risks would render research into the nature of sentience a high priority; and there are even reasons to think that it could be harmful.
I've tried to outline what I see as some relevant considerations here.
By "I am confused by your argument against scaling", I thought you meant the argument I made here, since that was the main argument I made regarding scaling; the example with robots wasn't really central.
I'm also a bit confused, because I read your arguments above as being arguments in favor of explosive economic growth rates from hardware scaling and increasing software efficiency. So I'm not sure whether you believe that the factors mentioned in your comment above are sufficient for causing explosive economic growth. Moreover, I don't yet understand why ...
To be clear, I don't mean to claim that we should give special importance to current growth rates in robotics in particular. I just picked that as an example. But I do think it's a relevant example, primarily due to the gradual nature of the abilities that robots are surpassing, and the consequent gradual nature of their employment.
Unlike fusion, which is singular in its relevant output (energy), robots produce a diversity of things, and robots cover a wide range of growth-relevant skills that are gradually getting surpassed already. It is this gradual nat...
I agree with premise 3. Where I disagree more comes down to the scope of premise 1.
This relates to the diverse class of contributors and bottlenecks to growth under Model 2. So even though it's true to say that humans are currently "the state-of-the-art at various tasks relevant to growth", it's also true to say that computers and robots are currently "the state-of-the-art at various tasks relevant to growth". Indeed, machines/external tools have been (part of) the state-of-the-art at some tasks for millennia (e.g. in harvesting), and computers and robots ...
Regarding explosive growth in the amount of hardware: I meant to include the scale aspect as well when speaking of a hardware explosion. I tried to outline one of the main reasons I'm skeptical of such an 'explosion via scaling' here. In short, in the absence of massive efficiency gains, it seems even less likely that we will see a scale-up explosion in the future.
Incidentally, the graphs you show for the decline in innovations per capita start dropping around 1900 ... which is pretty different from the 1960s.
That's right, but that's consistent with the pe...
I wrote earlier that I might write a more elaborate comment, which I'll attempt now. The following are some comments on the pieces that you linked to.
I disagree with this series in a number of places. For example, in the post "This Can't Go On", it says the following in the context of an airplane metaphor for our condition:
We're going much faster than normal, and there isn't enough runway to do this much longer ... and we're accelerating.
As argued above, in terms of economic growth rates, we're in fact not accelerating, ...
Thanks for this, it's helpful. I do agree that declining growth rates is significant evidence for your view.
I disagree with your other arguments:
For one, an AI-driven explosion of this kind would most likely involve a corresponding explosion in hardware (e.g. for reasons gestured at here and here), and there are both theoretical and empirical reasons to doubt that we will see such an explosion.
I don't have a strong take on whether we'll see an explosion in hardware efficiency; it's plausible to me that there won't be much change there (and also plausible t...
I do not claim otherwise in the post :) My claim is rather that proponents of Model 1 tend to see a much smaller distance between these respective definitions of intelligence, almost seeing Intelligence 1 as equivalent to Intelligence 2. In contrast, proponents of Model 2 see Intelligence 1 as an important yet still, in the bigger picture, relatively modest subset of Intelligence 2, alongside a vast set of other tools.
At any given point in time, I expect that progress looks like "taking the low-hanging fruit"; the reason growth goes up over time anyway is because there's a lot more effort looking for fruit as time goes on, and it turns out that effect dominates.
I think the empirical data suggests that that effect generally doesn't dominate anymore, and that it hasn't dominated in the economy as a whole for the last ~3 doublings. For example, US Total Factor Productivity growth has been weakly declining for several decades despite superlinear growth in the effective numb...
You're trying to argue for "there are no / very few important technologies with massive room for growth" by giving examples of specific things without massive room for growth.
I should clarify that I’m not trying to argue for that claim, which is not a claim that I endorse.
My view on this is rather that there seem to be several key technologies and measures of progress that have very limited room for further growth, and the ~zero-to-one growth that occurred along many of these key dimensions seems to have been low-hanging fruit that coincided with the high ...
Thanks for highlighting that. :)
I agree that this is relevant and I probably should have included it in the post (I've now made an edit). It was part of the reason that I wrote "it is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs". But I still find it somewhat striking that such actions seemed to be considered as bad as, or even slightly worse than, intentional harm. But I guess subjects could also understand "intentional harm" in a variety of ways. In any case, I think it's important to reiterate that this study is in itself just suggestive evidence that value differences may be psychologically fraught.
It's not the case that there are N technologies and progress consists solely of improving those technologies; progress usually happens by developing new technologies.
Yeah, I agree with that. :)
But I think we can still point to some important underlying measures — say, "the speed at which we transmit signals around Earth" or "the efficiency with which we can harvest solar energy" — where there isn't much room for further progress. On the first of those two measures, there basically isn't any room for further progress. On the second, we can at the very most ...
Thanks :)
I recently asked the question whether anyone had quantified the percent of tasks that computers are superhuman at as a function of time - has anyone?
I'm not aware of any. Though I suppose it would depend a lot on how such a measure is operationalized (in terms of which tasks are included).
This is seriously cherry picked.
I quoted that line of Murphy's as one that provides examples of key technologies that are close to hitting ultimate limits; I didn't mean to say that they were representative of all technologies. :)
But it's worth noting that ...
Thanks for your question :)
I might write a more elaborate comment later, but to give a brief reply:
It’s true that Model 2 (defined in terms of those three assumptions) does not rule out significantly higher growth rates, but it does, I believe, make explosive growth quite a lot less likely compared to Model 1, since it does not imply that there’s a single bottleneck that will give rise to explosive growth.
...I think most of your arguments for Model 2 also apply to this perspective. The one exception is the observation that growth rates are declining, though t
Asserting (as Epicurean views do) that death is not bad (in itself) for the being that dies is one thing.
But Epicureans tend to defend a stronger claim, namely that there is nothing suboptimal about death — or rather, about being dead — for the being who dies (which is consistent with Epicurean views of wellbeing). I believe this is the view defended in Hol, 2019.
Asserting (as the views under discussion do) that death (in itself) is good
But death is not good in itself on any of the views under discussion. First, death in itself has no value or disvalu...
Varieties of experientialist minimalist views that are overlooked in this piece
I think the definition of experientialist minimalism employed in the post needs elaboration, since there are in fact minimalist experientialist views that would not necessarily have the implications you inquire about, yet that differ from the experientialist minimalist views considered in the post.
To give an example, one could think that what matters is only the reduction of experiential disvalue (and thereby be an experientialist minimal...
>The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)
I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”, “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.
The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones...
[the view that intrinsically positive lives do not exist] implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.
This is not true. The view that killing is bad and morally wrong can be, and has been, grounded in many ways besides reference to positive value.[1]
First, there are preference-based views according to which it would be bad and wrong to thwart preferences against being killed, even as the creation and satisfacti...
Thanks for your question, Michael :)
I should note that the main thing I take issue with in that quote of MacAskill's is the general (and AFAICT unargued) statement that "any argument for the first claim would also be a good argument for the second". I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).
As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are ...
I'm not sure how I feel about relying on intuitions in thought experiments such as those. I don't necessarily trust my intuitions.
If you'd asked me 5-10 years ago whose life is more valuable, an average pig's life or a severely mentally challenged human's life, I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before I was going off pure intuition. Now I am going off developed philosophical arguments such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.
My point is w...
It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.'s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.
I understand that you feel that the asymmetry is true
Just to clarify, I wouldn't say that. :)
and as such it feels ok not to have addressed it in a popular book.
But the book does briefly take up the Asymmetry, and makes a couple of arguments against it. The point I was trying to make in the first section is that these arguments don't seem convincing.
The questions that aren't addressed are those regarding interpersonal outweighing — e.g. can purported goods morally outweigh extreme suffering? Can happy lives morally outweigh very bad lives? (As I hint in the...
It's unfortunate that the quote I selected implies "all minimalist axiologies" but I really was trying to talk about this post.
Perhaps it would be good to add an edit on that as well? E.g. "The author agrees that the answers to these questions are 'yes' (for the restricted class of minimalist axiologies he explores here)." :)
(The restriction is relevant, not least since a number of EAs do seem to hold non-experientialist minimalist views.)
The author agrees that the answers to these questions are "yes".
Not quite. The author assumes a certain class of minimalist axiologies (experientialist ones), according to which the answers to those questions are:
Thanks for summarizing it.
The worries I respond to are complex and the essay has many main points. Like any author, I hope that people would consider the points in their proper context (and not take them out of context). One main point is the contextualization of the worries itself, which is highlighted by the overviews (1.1–1.2) focusing a lot on the relevant assumptions and on minding the gap between theory and practice.
To complex questions, I don't think it's useful to reduce answers to either "yes" or "no", especially when the answers rest on unrealistic assumptions and look very different in theory versus practice. Between theory and practice, I also tend to consider the practical implications more important.
This analysis seems to neglect all "net negative outcomes", including scenarios in which s-risks are realized (as Mjeard noted), the badness of which can go all the way to the opposite extreme (see e.g. "Astronomical suffering from slightly misaligned artificial intelligence").
Including that consideration may support a more general focus on ensuring a better quality of the future, which may also be supported by considerations related to grabby aliens.
I think it's important to stress that it's not just that some people with an extremely high IQ fail to change their minds on certain issues, and more generally fail to overcome confirmation bias (which I think is fairly unsurprising). A key point is that there actually doesn't appear to be much of a correlation at all between IQ and resistance to confirmation bias.
So to slightly paraphrase what you wrote above, I didn't just write the post because a correlation across a population is of limited relevance when you’re dealing with a smart individual wh...
You argue that EA overrates IQ
As noted above, my main claim is not that "EA overrates IQ" at a purely descriptive level, but rather that other important traits deserve more focus in practice (because those other important traits seem neglected relative to smarts, and also because — at the level of what we seek to develop and incentivize — those other traits seem more elastic and improvable).
I noted in the comment above that:
...one line of evidence I have for this is how often I see references to smarts, including in internal discussions related to career and
“Science advances one funeral at a time.” If that’s true,
If that were literally true, then science wouldn't ever advance much. :)
It seems that most scientists are in fact willing to change their minds when strong evidence has been provided for a hypothesis that goes against the previously accepted view. The "Planck principle" seems more applicable to scientists who are strongly invested in a given hypothesis, but even in that reference class, I suspect that most scientists do actually change their minds during their lifetime when the evidence is strong. An...
Thanks for your comment and for listing those traits and skills; I strongly agree that those are all useful qualities. :)
One might argue that willingness to do grunt work, taking initiative, and mental stamina all belong in a broader "drive/conscientiousness" category, but I think they are in any case important and meaningfully distinct traits worth highlighting in their own right.
Likewise, one could perhaps argue that "ability to network well" falls under a broader category of "social skills", in which interpersonal kindness and respect might also be said...
FWIW, I don't see that piece as making a case against panpsychism, but rather against something like "pansufferingism" or "pansentienceism". In my view, these arguments against the ontological prevalence of suffering are compatible with the panpsychist view that (extremely simple) consciousness / "phenomenality" is ontologically prevalent (cf. this old post on "Thinking of consciousness as waves").