All of Magnus Vinding's Comments + Replies

FWIW, I don't see that piece as making a case against panpsychism, but rather against something like "pansufferingism" or "pansentienceism". In my view, these arguments against the ontological prevalence of suffering are compatible with the panpsychist view that (extremely simple) consciousness / "phenomenality" is ontologically prevalent (cf. this old post on "Thinking of consciousness as waves").

2
MichaelStJules
1mo
Good point. I think we can extend your argument to one against pan-experience-of-X-ism, for (almost?) any given X, no matter how specific or broad, with your other example for X being "wanting to go to a Taylor Swift concert so as to share the event with your Instagram followers". This is distinct from panpsychism, which only (?) requires that mental contents or experiences of something in general be widespread, not that any given (specific or kind of) mental content X be widespread.

The following list of reports may or may not be helpful to include in the 'Further reading' section, but I don't think that's for me to decide since it's collected by me and published on my blog: https://magnusvinding.com/2023/06/11/what-credible-ufo-evidence/

A similar critique has been made in Friederich & Wenmackers' article "The future of intelligence in the Universe: A call for humility", specifically in the section "Why FAST and UNDYING civilizations may not be LOUD".

Yeah, it would make sense to include it. :) As I wrote "Robin Hanson has many big ideas", and since the previous section was already about signaling and status, I just mentioned some other examples here instead. Prediction markets could have been another one (though it's included in futarchy).

Thus it is not at all true that we ignore the possibility of many quiet civs.

But that's not the claim of the quoted text, which is explicitly about quiet expansionist aliens (e.g. expanding as far and wide as loud expansionist ones). The model does seem to ignore those (and such quiet expansionists might have no borders detectable by us).

Thanks, and thanks for the question! :)

It's indeed not obvious what I mean when I write "a smoothed-out line between the estimated growth rate at the respective years listed along the x-axis". It's neither the annual growth rate in that particular year in isolation (which is subject to significant fluctuations), nor the annual average growth rate from the previously listed year to the next listed year (which would generally not be a good estimate for the latter year).

Instead, it's an estimated underlying growth rate at that year based on the growth rates i... (read more)
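Since the full explanation is cut off above, the following is only a guess at one concrete way such a smoothed-out estimate could be computed (a hypothetical sketch of mine, not the method actually used in the post): fit a log-linear trend to GDP over a window around each listed year and read off the slope.

```python
# Hypothetical sketch: estimate an underlying growth rate at a given year by
# fitting log(GDP) against year over a surrounding window, rather than using
# the (noisy) single-year growth rate. gdp_by_year is a placeholder input.
import numpy as np

def smoothed_growth_rate(gdp_by_year: dict, year: int, window: int = 10) -> float:
    years = np.array(sorted(y for y in gdp_by_year if abs(y - year) <= window))
    log_gdp = np.log([gdp_by_year[y] for y in years])
    slope, _intercept = np.polyfit(years, log_gdp, 1)  # slope of log GDP ~ continuous growth rate
    return float(np.expm1(slope))                      # convert to an annual percentage rate
```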

4
Vasco Grilo
6mo
Thanks for clarifying!

I think this is an important point. In general terms, it seems worth keeping in mind that option value also entails option disvalue (e.g. the option of losing control and giving rise to a worst-case future).

Regarding long reflection in particular, I notice that the quotes above seem to mostly mention it in a positive light, yet its feasibility and desirability can also be separately criticized, as I've tried to do elsewhere:

First, there are reasons to doubt that a condition of long reflection is feasible or even desirable, given that it woul

... (read more)

Thanks for your question, Péter :)

There's not a specific plan, though there is a vague plan to create an audio version at some point. One challenge is that the book is full of in-text citations, which in some places makes the book difficult to narrate (and it also means that it's not easy to create a listenable version with software). You're welcome to give it a try if you want, though I should note that narration can be more difficult than one might expect (e.g. even professional narrators often make a lot of mistakes that then need to be corrected).

3
Péter Drótos
6mo
Great to hear that there's something in mind. Also thanks for highlighting the difficulties. I definitely wanted to use software for it but if it's not as easy as it first sounds, then I'll just save myself from the struggle and leave it to an experienced person.

Thanks for your comment, Michael :)

I should reiterate that my note above is rather speculative, and I really haven't thought much about this stuff.

1: Yes, I believe that's what inflation theories generally entail.

2: I agree, it doesn't follow that they're short-lived.

In each pocket universe, couldn't targeting its far future be best (assuming risk neutral expected value-maximizing utilitarianism)? And then the same would hold across pocket universes.

I guess it could be; I suppose it depends both on the empirical "details" and one's decision theory.

Regardin... (read more)

These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.

I don't agree that these points are properly disclaimed in the post. I think the post gives an imbalanced impression of the discussion and potential biases around these issues, and I think that impression is worth balancing out, even if presenting a balanced impression wasn't the point of the post.

The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the rec

... (read more)

I agree that vegan advocacy is often biased and insufficiently informed. That being said, I think similar points apply with comparable, if not greater, strength in the "opposite" direction, and I think we end up with an unduly incomplete perspective on the broader discussion around this issue if we only (or almost only) focus on the biases of vegan advocacy alone.

For example, in terms of identifying reasonable moral views (which, depending on one's meta-ethical view, isn't necessarily a matter of truth-seeking, but perhaps at least a matter of being "plaus... (read more)

These are cached arguments that are irrelevant to this particular post and/or properly disclaimed within the post.

The asks from this post aren't already in the water supply of this community; everyone reading EA Forum has, by contrast, already encountered the recommendation to take animal welfare more seriously.

The view obviously does have "implausible" implications, if that means "implications that conflict with what seems obvious to most people at first glance".

I don't think what Knutsson means by "plausible" is "what seems obvious to most people at first glance". I also don't think that's a particularly common or plausible use of the term "plausible". (Some examples of where "plausible" and "what seems obvious to most people at first glance" plausibly come apart include what most people in the past might at first glance have considered obvious about the moral ... (read more)

The reason this matters is that EA frequently makes decisions, including funding decisions, based on these ridiculously uncertain estimates. You yourself are advocating for this in your article.

I think that misrepresents what I write and "advocate" in the essay. Among various other qualifications, I write the following (emphases added):

I should also clarify that the decision-related implications that I here speculate on are not meant as anything like decisive or overriding considerations. Rather, I think they would mostly count as weak to m

... (read more)
1
titotal
9mo
To be clear, I think you included all the necessary disclaimers, your article was well written, well argued, and the use of probability was well within the standard for how probability is used in EA. My issue is that I think the way probability is presented in EA is bad, misleading, and likely to lead to errors. I think this is the exact type of problem (speculative, unbounded estimates) where the EA method fails.

My specific issue here is how uncertainty is taken out of the equation and placed into preambles, and how a highly complex belief is reduced to a single number. This is typical on this forum and in EA (see P(doom)). When Bayes is used for science, on the other hand, the prior will be a distribution (see the pdf of the first result here).

My concern is that EA is making decisions based on these point estimates, rather than on people's true distributions, which is likely to lead people astray.

I'm curious: when you say that your prior for alien presence is 1%, what is your distribution? Is 1% your median estimate? How shocked would you be if the "true value" was 0.001%? If probabilities of probabilities are confusing, do the same thing for "how many civilisations are there in the galaxy".
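To illustrate the distinction titotal is drawing here, the sketch below (mine, not anyone's actual credences; the Beta parameters are purely illustrative) contrasts a single point prior of 1% with a distribution over the underlying probability, for which the mean, median, and credible interval can all come apart:

```python
# Illustrative only: a point prior vs. a distribution over the probability itself.
# Beta(0.5, 49.5) is an arbitrary choice whose mean happens to equal 1%.
from scipy.stats import beta

point_prior = 0.01                  # "1%" expressed as a single number

dist = beta(a=0.5, b=49.5)          # a wide, skewed prior with the same mean
print("mean:        ", dist.mean())          # 0.01, matches the point estimate
print("median:      ", dist.median())        # noticeably below the mean for this skewed shape
print("90% interval:", dist.interval(0.90))  # how surprised should a "true value" of 0.001% make us?
```

On a skewed prior like this, the median falls noticeably below the mean, which is the kind of information a single "1%" cannot convey.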

Thanks! :)

Assigning a single number to such a prior, as if it means anything, seems utterly absurd.

I don't agree that it's meaningless or absurd. A straightforward meaning of the number is "my subjective probability estimate if I had to put a number on it" — and I'd agree that one shouldn't take it for more than that.

I also don't think it's useless, since numbers like these can at least help give a very rough quantitative representation of beliefs (as imperfectly estimated from the inside), which can in turn allow subjective ballpark updates based on expli... (read more)

-7
titotal
9mo

You give a prior of 1 in a hundred that aliens have a presence on earth. Where did this number come from?

It was in large part based on the considerations reviewed in the section "I. An extremely low prior in near aliens". The following sub-section provides a summary with some attempted sanity checks and qualifications (in addition to the general qualifications made at the outset):

All-things-considered probability estimates: Priors on near aliens

Where do all these considerations leave us? In my view, they overall suggest a fairly ignorant prior. Specificall

... (read more)

Thanks for your comment. I basically agree, but I would stress two points.

First, I'd reiterate that the main conclusions of the post I shared do not rest on the claim that extraordinary UFOs are real. Even assuming that our observed evidence involves no truly remarkable UFOs whatsoever, a probability of >1 in 1,000 in near aliens still looks reasonable (e.g. in light of the info gain motive), and thus the possibility still seems (at least weakly) decision-relevant. Or so my line of argumentation suggests.

Second, while I agree that the wild abilities are... (read more)

I think it would have been more fair if you hadn't removed all the links (to supporting evidence) that were included in the quote below, since it just comes across as a string of unsupported claims without them:

Beyond the environmental effects, there are also significant health risks associated with the direct consumption of animal products, including red meat, chicken meat, fish meat, eggs and dairy. Conversely, significant health benefits are associated with alternative sources of protein, such as beans, nuts, and seeds. This is relevant both collectivel

... (read more)
-2
Elizabeth
11mo
I've fixed this on my blog, but LW's editor is being difficult (and because this is a cross-post I can only fix it there); I've pinged the team about getting access to the right editor. I wish I'd included your links, because it's always good to quote people more accurately. I'm not sure it matters materially, because I preemptively agreed that overconsumption of animal products has its own risks.

I think most of the nutritional harm would be mitigated if vegan advocates said "all large dietary changes have challenges, here's an easy guide to starting" in a way people believed and followed up on (and no one loudly argued to the contrary, which at one point was very widespread within EA). I would still think there was something important in acknowledging that it can't work for everyone, and thus the strongest forms of vegan advocacy will leave those people malnourished. I can respect arguments that this is a regrettable necessity, but not blindness to it.

I would also still see value in arguing about the ideal diet, or more properly how to discover an individual's ideal diet, which is so complicated and has so much variation between people. But I wouldn't have put nearly this level of work in if I didn't see people being harmed.

I didn't claim that there isn't plenty more data. But a relevant question is: plenty more data for what? He says that the data situation looks pretty good, which I trust is true in many domains (e.g. video data), and that data would probably in turn improve performance in those domains. But I don't see him claiming that the data situation looks good in terms of ensuring significant performance gains across all domains, which would be a more specific and stronger claim.

Moreover, the deference question could be posed in the other direction as well, e.g. do y... (read more)

4
Greg_Colbourn
1y
Let's hope that OpenAI is forced to pull GPT-4 over the illegal data harvesting used to create it.

I think it's a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing "fates worse than death".

I'm not talking about people sacrificing themselves or the whole world. Even if we were to adopt a purely survivalist perspective, I think it's still far from obvious that trying to slow things down is more effective than is focusing on other aims. After all, the space of alternative aims that one could focus on is vast, and trying to slow things down comes with non-trivial risks of its own (e.g. risks of bac... (read more)

0
Greg_Colbourn
1y
I feel like I'm one of the main characters in the film Don't Look Up here. Please can you name 10?

The way I see it, either alignment is solved in time with business as usual[1], or we Pause to allow time for alignment to be solved (or to establish its impossibility). It is not a complicated situation. No need to be worrying about "fates worse than death" at this juncture.

1. ^ Seems highly unlikely, but please say if you think there are promising solutions here.

What are the downsides from slowing down?

I'd again prefer to frame the issue as "what are the downsides from spending marginal resources on efforts to slow down?" I think the main downside, from this marginal perspective, is opportunity costs in terms of other efforts to reduce future risks, e.g. trying to implement "fail-safe measures"/"separation from hyperexistential risk" in case a slowdown is insufficiently likely to be successful. There are various ideas that one could try to implement.

In other words, a serious downside of betting chiefly on efforts ... (read more)

5
Greg_Colbourn
1y
I think it's a very hard sell to try and get people to sacrifice themselves (and the whole world) for the sake of preventing "fates worse than death". At that point most people would probably just be pretty nihilistic. It also feels like it's not far off basically just giving up hope: the future is, at best, non-existence for sentient life; but we should still focus our efforts on avoiding hell. Nope. We should be doing all we can now to avoid having to face such a predicament! Global moratorium on AGI, now.

I'm not sure what you are saying here? Do you think there is a risk of AI companies deliberately causing s-risks (e.g. releasing a basilisk) if we don't play nice!?

No, I didn't mean anything like that (although such crazy unlikely risks might also be marginally better reduced through cooperation with these actors). I was simply suggesting that cooperation could be a more effective way to reduce risks of worst-case outcomes that might occur in the absence of cooperative work to prevent them, i.e. work of the directional kind gestured at in my other comment ... (read more)

5
Greg_Colbourn
1y
Ok. I don't put much weight on s-risks being a likely outcome. Far more likely seems to be just that the solar system (and beyond) will be arranged in some (to us) arbitrary way, and all carbon-based life will be lost as collateral damage.

Although I guess if you are looking a bit nearer term, then s-risk from misuse could be quite high. But I don't think any of the major players (OpenAI, DeepMind, Anthropic) are even really working on trying to prevent misuse at all as part of their strategy (their core AI Alignment work is on aligning the AIs, rather than the humans using them!). So actually, this is just another reason to shut it all down.

Thanks for your reply, Greg :)

I don't think this matters, as per the next point about there already being enough compute for doom

That is what I did not find adequately justified or argued for in the post.

I think the burden of proof here needs to shift to those willing to gamble on the safety of 100x larger systems.

I suspect that a different framing might be more realistic and more apt from our perspective. In terms of helpful actions we can take, I more see the choice before us as one between trying to slow down development vs. trying to steer future devel... (read more)

5
Greg_Colbourn
1y
What are the downsides from slowing down? Things like not curing diseases and ageing? Eliminating wild animal suffering? I address that here:

"it’s a rather depressing thought. We may be far closer to the Dune universe than the Culture one (the worry driving a future Butlerian Jihad will be the advancement of AGI algorithms to the point of individual laptops and phones being able to end the world). For those who may worry about the loss of the “glorious transhumanist future”, and in particular, radical life extension and cryonic reanimation (I’m in favour of these things), I think there is some consolation in thinking that if a really strong taboo emerges around AGI, to the point of stopping all algorithm advancement, we can still achieve these ends using standard supercomputers, bioinformatics and human scientists. I hope so."

To be clear, I'll also say that it's far too late to only steer future development better. For that, Alignment needs to be 10 years ahead of where it is now! I don't think you need to believe this to want to be slamming on the brakes now. As mentioned in the OP, is the prospect of mere imminent global catastrophe not enough?

To push back a bit on the fast software-driven takeoff (i.e. a fast takeoff driven primarily by innovations in software): 

Common objections to this narrative [of a fast software-driven takeoff] are that there won’t be enough compute, or data, for this to happen. These don’t hold water after a cursory examination of our situation. We are nowhere near close to the physical limits to computation ...

While we're nowhere near the physical limits to computation, it's still true that hardware progress has slowed down considerably on various measures. I t... (read more)

2
Greg_Colbourn
11mo
Coming back to the point about data: whilst Epoch gathered some data showing that the stock of high-quality text data might soon be exhausted, their overall conclusion is that there is only a "20% chance that the scaling (as measured in training compute) of ML models will significantly slow down by 2040 due to a lack of training data."

Regarding Jacob Buckman's point about chess, he actually outlines a way around that (training data provided by narrow AI). As a counter to the wider point about the need for active learning, see DeepMind's Adaptive Agent and the Voyager "lifelong learning" Minecraft agent, both of which seem like impressive steps in this direction.
3
Matt Brooks
1y
Do you not trust Ilya when he says they have plenty more data? https://youtu.be/Yf1o0TQzry8?t=656
4
Greg_Colbourn
1y
I don't think this matters, as per the next point about there already being enough compute for doom. [Edit: I've relegated the "nowhere near close to the physical limits to computation" sentence to a footnote and added Magnus' reference on slowdown to it.]

I think the burden of proof here needs to shift to those willing to gamble on the safety of 100x larger systems. All I'm really saying here is that the risk is way too high for comfort (given the jumps in capabilities we've seen so far going from GPT-3 -> GPT-3.5 -> GPT-4). [Meta: would appreciate separate points being made in separate comments.] Will look into your links re data and respond later.

I'm not sure what you are saying here? Do you think there is a risk of AI companies deliberately causing s-risks (e.g. releasing a basilisk) if we don't play nice!? They may be crazy in the sense of being reckless with the fate of billions of people's lives, but I don't think they are that crazy (in the sense of being sadistically malicious and spiteful toward their opponents)!
5
PeterSlattery
1y
Thanks for writing this - it was useful to read the pushbacks!

As I said below, I want more synthesis of these sorts of arguments. I know that some academic groups are preparing literature reviews of the key arguments for and against AGI risk. I really think that we should be doing that for ourselves as a community, and make sure that we are able to present busy smart people with more compelling content than a range of arguments spread across many different forum posts. I don't think that is going to cut it for many people in the policy space.

Current scaling "laws" are not laws of nature. And there are already worrying signs that things like dataset optimization/pruning, curriculum learning and synthetic data might well break them. It seems likely to me that LLMs will be useful in all three. I would still be worried even if LLMs prove useless in enhancing architecture search.
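For concreteness, a scaling "law" of the kind at issue here is just a fitted power law. A minimal sketch is below; the functional form follows the Chinchilla-style fit, and the coefficients should be treated as rough illustrative values rather than anything load-bearing.

```python
# Sketch of a Chinchilla-style parametric scaling fit: predicted loss as a
# power law in parameter count N and training tokens D. Coefficients are rough
# published values, used here only to illustrate the shape of such a "law".
def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

print(predicted_loss(70e9, 1.4e12))  # roughly a Chinchilla-scale training run
# Dataset pruning, curriculum learning, or synthetic data would change the
# effective coefficients (or the data term entirely), which is the sense in
# which such fits are empirical regularities rather than laws of nature.
```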

I agree that the reduction of s-risks is underprioritized, but it's unclear whether the aim of reducing s-risks would render research into the nature of sentience a high priority; and there are even reasons to think that it could be harmful.

I've tried to outline what I see as some relevant considerations here.

By "I am confused by your argument against scaling", I thought you meant the argument I made here, since that was the main argument I made regarding scaling; the example with robots wasn't really central.

I'm also a bit confused, because I read your arguments above as being arguments in favor of explosive economic growth rates from hardware scaling and increasing software efficiency. So I'm not sure whether you believe that the factors mentioned in your comment above are sufficient for causing explosive economic growth. Moreover, I don't yet understand why ... (read more)

4
Rohin Shah
1y
If we assume innovations decline, then it is primarily because future AI and robots will be able to automate far more tasks than current AI and robots (and we will get them quickly, not slowly).

Imagine that currently technology A that automates area X gains capabilities at a rate of 5% per year, which ends up leading to a growth rate of 10% per year. Imagine technology B that also aims to automate area X gains capabilities at a rate of 20% per year, but is currently behind technology A. Generally, at the point when B exceeds A, I'd expect growth rates of X-automating technologies to grow from 10% to >20% (though not necessarily immediately, it can take time to build the capacity for that growth).

For AI, the area X is "cognitive labor", technology A is "the current suite of productivity tools", and technology B is "AI". For robots, the area X is "physical labor", technology A is "classical robotics", and technology B is "robotics based on foundation models".

That was just assuming hardware scaling, and it justifies a growth in some particular growth rates, but not a growth explosion. If you add in the software efficiency, then I think you are just straightforwardly generating lots of innovations (what else is leading to the improved software efficiency?) and that's how you get the growth explosion, at least until you run out of software efficiency improvements to make.
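A toy numerical sketch of this crossover dynamic (my own illustration; only the 5% and 20% rates are taken from the comment, and the starting levels are arbitrary): the growth rate of the "frontier" capability in area X jumps abruptly once B overtakes A.

```python
# Toy model of two technologies automating the same area: A leads today but
# improves slowly; B lags but improves fast. Track the frontier (the better of
# the two) and watch its growth rate jump at the crossover point.
cap_a, cap_b = 1.0, 0.2          # arbitrary starting capability levels
frontier = []
for _ in range(41):
    frontier.append(max(cap_a, cap_b))
    cap_a *= 1.05                # A improves 5% per year (rate from the comment)
    cap_b *= 1.20                # B improves 20% per year (rate from the comment)

for t in range(5, 41, 5):
    growth = frontier[t] / frontier[t - 1] - 1
    print(f"year {t:2d}: frontier capability grew {growth:.0%} over the past year")
# Prints ~5% growth until B overtakes A (around year 13 here), then ~20% after.
```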

To be clear, I don't mean to claim that we should give special importance to current growth rates in robotics in particular. I just picked that as an example. But I do think it's a relevant example, primarily due to the gradual nature of the abilities that robots are surpassing, and the consequent gradual nature of their employment.

Unlike fusion, which is singular in its relevant output (energy), robots produce a diversity of things, and robots cover a wide range of growth-relevant skills that are gradually getting surpassed already. It is this gradual nat... (read more)

I agree with premise 3. Where I disagree more comes down to the scope of premise 1.

This relates to the diverse class of contributors and bottlenecks to growth under Model 2. So even though it's true to say that humans are currently "the state-of-the-art at various tasks relevant to growth", it's also true to say that computers and robots are currently "the state-of-the-art at various tasks relevant to growth". Indeed, machines/external tools have been (part of) the state-of-the-art at some tasks for millennia (e.g. in harvesting), and computers and robots ... (read more)

6
Rohin Shah
1y
I don't disagree with any of the above (which is why I emphasized that I don't think the scaling argument is sufficient to justify a growth explosion). I'm confused why you think the rate of growth of robots is at all relevant, when (general-purpose) robotics seem mostly like a research technology right now. It feels kind of like looking at the current rate of growth of fusion plants as a prediction of the rate of growth of fusion plants after the point where fusion is cheaper than other sources of energy. (If you were talking about the rate of growth of machines in general I'd find that more relevant.)

Regarding explosive growth in the amount of hardware: I meant to include the scale aspect as well when speaking of a hardware explosion. I tried to outline one of the main reasons I'm skeptical of such an 'explosion via scaling' here. In short, in the absence of massive efficiency gains, it seems even less likely that we will see a scale-up explosion in the future.

Incidentally, the graphs you show for the decline in innovations per capita start dropping around 1900 ... which is pretty different from the 1960s.

That's right, but that's consistent with the pe... (read more)

9
Rohin Shah
1y
I am confused by your argument against scaling. My understanding of the scale-up argument is:

1. Currently humans are state-of-the-art at various tasks relevant to growth.
2. We are bottlenecked on scaling up humans by a variety of things (e.g. it takes ~20 years to train up a new human, you can't invest money into the creation of new humans with the hope of getting a return on it, humans only work ~8 hours a day).
3. At some point AI / robots will be able to match human performance at these tasks.
4. AI / robots will not be bottlenecked on those things.

In some sense I agree with you that you have to see efficiency improvements, but the efficiency improvements are things like "you can create new skilled robots in days, compared to the previous SOTA of 20 years". So I think if you accept (3) then you are already accepting massive efficiency improvements.

I don't see why current robot growth rates are relevant. When you have two different technologies A and B where A works better now, but B is getting better faster than A, then there will predictably be a big jump in the use of B once it exceeds A, and extrapolating the growth rates of B before it exceeds A is going to predictably mislead you. (For example, I'd guess that in 1975, you would have done better thinking about how / when the personal computer would overtake other office productivity technologies, perhaps based on Moore's law, rather than trying to extrapolate the growth rate of personal computers. Indeed, according to a random website I just found, it looks like the growth rate accelerated till the 1980s, though it's hard to tell from the graph.)

(To be clear, this argument doesn't necessarily get you to "transformative impact on growth comparable to the industrial revolution"; I'd guess you do need to talk about innovations to get that conclusion. But I'm just not seeing why you don't expect a ton of scaling even if innovations are rarer, unless you deny (3), but it mostly seems like

I wrote earlier that I might write a more elaborate comment, which I'll attempt now. The following are some comments on the pieces that you linked to.

1. The Most Important Century series

I disagree with this series in a number of places. For example, in the post "This Can't Go On", it says the following in the context of an airplane metaphor for our condition:

We're going much faster than normal, and there isn't enough runway to do this much longer ... and we're accelerating.

As argued above, in terms of economic growth rates, we're in fact not accelerating, ... (read more)

Thanks for this, it's helpful. I do agree that declining growth rates is significant evidence for your view.

I disagree with your other arguments:

For one, an AI-driven explosion of this kind would most likely involve a corresponding explosion in hardware (e.g. for reasons gestured at here and here), and there are both theoretical and empirical reasons to doubt that we will see such an explosion.

I don't have a strong take on whether we'll see an explosion in hardware efficiency; it's plausible to me that there won't be much change there (and also plausible t... (read more)

I do not claim otherwise in the post :) My claim is rather that proponents of Model 1 tend to see a much smaller distance between these respective definitions of intelligence, almost seeing Intelligence 1 as equivalent to Intelligence 2. In contrast, proponents of Model 2 see Intelligence 1 as an important yet still, in the bigger picture, relatively modest subset of Intelligence 2, alongside a vast set of other tools.

At any given point in time, I expect that progress looks like "taking the low-hanging fruit"; the reason growth goes up over time anyway is because there's a lot more effort looking for fruit as time goes on, and it turns out that effect dominates.

I think the empirical data suggests that that effect generally doesn't dominate anymore, and that it hasn't dominated in the economy as a whole for the last ~3 doublings. For example, US Total Factor Productivity growth has been weakly declining for several decades despite superlinear growth in the effective numb... (read more)

You're trying to argue for "there are no / very few important technologies with massive room for growth" by giving examples of specific things without massive room for growth.

I should clarify that I’m not trying to argue for that claim, which is not a claim that I endorse.

My view on this is rather that there seem to be several key technologies and measures of progress that have very limited room for further growth, and the ~zero-to-one growth that occurred along many of these key dimensions seems to have been low-hanging fruit that coincided with the high ... (read more)

2
Rohin Shah
1y
Hmm, it seems to me like these observations are all predicted by the model I'm advocating, so I don't see why they're evidence against that model. (Which is why I incorrectly thought you were instead saying that there wasn't room for much growth; sorry for the misunderstanding.) (I do agree that declining growth rates are evidence against the model.)

At any given point in time, I expect that progress looks like "taking the low-hanging fruit"; the reason growth goes up over time anyway is because there's a lot more effort looking for fruit as time goes on, and it turns out that effect dominates.

For example, around 0 AD you might have said "recent millennia have had much higher growth rates because of the innovations of agriculture, cities and trade, which allowed for more efficient food production and thus specialization of labor. The zero-to-one growth on these key dimensions was low-hanging fruit, so this is modest evidence against further increases in growth in the future"; that would have been an update in the wrong direction.

Thanks for highlighting that. :)

I agree that this is relevant and I probably should have included it in the post (I've now made an edit). It was part of the reason that I wrote "it is unclear whether this pattern in moral judgment necessarily applies to all or even most kinds of acts inspired by different moral beliefs". But I still find it somewhat striking that such actions seemed to be considered as bad as, or even slightly worse than, intentional harm. But I guess subjects could also understand "intentional harm" in a variety of ways. In any case, I think it's important to reiterate that this study is in itself just suggestive evidence that value differences may be psychologically fraught.

It's not the case that there are N technologies and progress consists solely of improving those technologies; progress usually happens by developing new technologies.

Yeah, I agree with that. :)

But I think we can still point to some important underlying measures — say, "the speed at which we transmit signals around Earth" or "the efficiency with which we can harvest solar energy" — where there isn't much room for further progress. On the first of those two measures, there basically isn't any room for further progress. On the second, we can at the very most ... (read more)
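A quick back-of-the-envelope check on the first of those measures (my own arithmetic, using standard constants): the hard floor is set by the speed of light, and signals in optical fiber already travel at roughly two-thirds of it.

```python
# Light-speed limit on sending a signal once around Earth (vacuum, great-circle path).
C_KM_PER_S = 299_792               # speed of light
EARTH_CIRCUMFERENCE_KM = 40_075    # equatorial circumference

limit_ms = EARTH_CIRCUMFERENCE_KM / C_KM_PER_S * 1_000
print(f"~{limit_ms:.0f} ms")       # ~134 ms; optical fiber runs at roughly 2/3 of c
```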

4
Rohin Shah
1y
You're trying to argue for "there are no / very few important technologies with massive room for growth" by giving examples of specific things without massive room for growth. In general, arguing for "there is no X that satisfies Y" by giving examples of individual Xs that don't satisfy Y is going to be pretty rough and not very persuasive to me, unless there's some reason that can be abstracted out of the individual examples that is likely to apply to all Xs, which I don't see in this case. I don't care much whether the examples are technologies or measures (though I do agree measures are better). (I'm also less convinced because I can immediately think of related measures where it seems like we have lots of room to grow, like "the speed at which we can cost-effectively transmit matter around Earth" or "the efficiency with which we can harvest fusion energy".)

For similar reasons I don't update much on empirical trends in hardware progress (there's still tons of progress to be made in software, and still tons of progress to be made in areas other than computing).

I agree that explosive growth looks unlikely without efficiency gains; "no efficiency gains" means that the positive feedback loop that drives hyperbolic growth isn't happening. (But for this to move me I need to believe "no/limited efficiency gains".) I think the decline in innovations per capita is the strongest challenge to this view; I just don't really see the others as significant evidence one way or the other.

Thanks :)

I recently asked the question whether anyone had quantified the percent of tasks that computers are superhuman at as a function of time - has anyone?

I'm not aware of any. Though I suppose it would depend a lot on how such a measure is operationalized (in terms of which tasks are included).

This is seriously cherry picked.

I quoted that line of Murphy's as one that provides examples of key technologies that are close to hitting ultimate limits; I didn't mean to say that they were representative of all technologies.  :)

But it's worth noting that ... (read more)

Thanks for your question :)

I might write a more elaborate comment later, but to give a brief reply:

It’s true that Model 2 (defined in terms of those three assumptions) does not rule out significantly higher growth rates, but it does, I believe, make explosive growth quite a lot less likely compared to Model 1, since it does not imply that there’s a single bottleneck that will give rise to explosive growth.

I think most of your arguments for Model 2 also apply to this perspective. The one exception is the observation that growth rates are declining, though t

... (read more)
6
Rohin Shah
1y
Yes, sorry, I shouldn't have said "most". Yeah, I mostly don't buy the argument (sorry for not noting that earlier). It's not the case that there are N technologies and progress consists solely of improving those technologies; progress usually happens by developing new technologies. So I don't see the fact that some technologies are near-perfect as all that relevant. For example: Even if we get literally no improvement in any of these technologies, we could still see huge growth in this sector by developing new technologies for energy generation that generate much more power than we can currently generate.

Asserting (as epicurean views do) death is not bad (in itself) for the being that dies is one thing.

But Epicureans tend to defend a stronger claim, namely that there is nothing suboptimal about death — or rather, about being dead — for the being who dies (which is consistent with Epicurean views of wellbeing). I believe this is the view defended in Hol, 2019.

Asserting (as the views under discussion do) that death (in itself) is good

But death is not good in itself on any of the views under discussion. First, death in itself has no value or disvalu... (read more)

Varieties of experientialist minimalist views that are overlooked in this piece

I think the definition of experientialist minimalism employed in the post is in need of elaboration, as it seems that there are in fact minimalist experientialist views that would not necessarily have the implications that you inquire about, yet these views appear to differ from the experientialist minimalist views considered in the post.

To give an example, one could think that what matters is only the reduction of experiential disvalue (and thereby be an experientialist minimal... (read more)

8
Gregory Lewis
1y
Asserting (as epicurean views do) death is not bad (in itself) for the being that dies is one thing. Asserting (as the views under discussion do) that death (in itself) is good - and ongoing survival bad - for the being that dies is quite another.

Besides its divergence from virtually everyone's expressed beliefs and general behaviour, it doesn't seem to fare much better under deliberate reflection. For the sake of a less emotionally charged variant of Mathers' example, responses to Singer's shallow pond case along the lines of "I shouldn't step in, because my non-intervention is in the child's best interest: the normal life they could 'enjoy' if they survive accrues more suffering in expectation than their imminent drowning" appear deranged.

>The following is, as far as I can tell, the main argument that MacAskill makes against the Asymmetry (p. 172)

I’m confused by this sentence. The Asymmetry endorses neutrality about bringing into existence lives that have positive wellbeing, and I argue against this view for much of the population ethics chapter, in the sections “The Intuition of Neutrality”,  “Clumsy Gods: The Fragility of Identity”, and “Why the Intuition of Neutrality is Wrong”.

The arguments presented against the Asymmetry in the section “The Intuition of Neutrality” are the ones... (read more)

[the view that intrinsically positive lives do not exist] implies that there wouldn't be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn't be destroying anything positive.

This is not true. The view that killing is bad and morally wrong can be, and has been, grounded in many ways besides reference to positive value.[1]

First, there are preference-based views according to which it would be bad and wrong to thwart preferences against being killed, even as the creation and satisfacti... (read more)

9
Mau
2y
Thanks for the thoughtful reply; I've replied to many of these points here. On a few other ends:

* I agree that strong negative utilitarian views can be highly purposeful and compassionate. By "semi-nihilistic" I was referring to how some of these views also devalue much (by some counts, half) of what others value. [Edit: Admittedly, many pluralists could say the same to pure classical utilitarians.]
* I agree classical utilitarianism also has bullets to bite (though many of these look like they're appealing to our intuitions in scenarios where we should expect to have bad intuitions, due to scope insensitivity).

Thanks for your question, Michael :)

I should note that the main thing I take issue with in that quote of MacAskill's is the general (and AFAICT unargued) statement that "any argument for the first claim would also be a good argument for the second". I think there are many arguments about which that statement is not true (some of which are reviewed in Gloor, 2016; Vinding, 2020, ch. 3; Animal Ethics, 2021).

As for the particular argument of mine that you quote, I admit that a lot of work was deferred to the associated links and references. I think there are ... (read more)

I'm not sure how I feel about relying on intuitions in thought experiments such as those. I don't necessarily trust my intuitions.

If you'd asked me 5-10 years ago whose life is more valuable: an average pig's life or a severely mentally-challenged human's life I would have said the latter without a thought. Now I happen to think it is likely to be the former. Before I was going off pure intuition. Now I am going off developed philosophical arguments such as the one Singer outlines in his book Animal Liberation, as well as some empirical facts.

My point is w... (read more)

It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.'s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.

6
NunoSempere
2y
Makes sense.

I understand that you feel that the asymmetry is true

Just to clarify, I wouldn't say that. :)

and as such it feels ok not to have addressed it in a popular book.

But the book does briefly take up the Asymmetry, and makes a couple of arguments against it. The point I was trying to make in the first section is that these arguments don't seem convincing.

The questions that aren't addressed are those regarding interpersonal outweighing — e.g. can purported goods morally outweigh extreme suffering? Can happy lives morally outweigh very bad lives? (As I hint in the... (read more)

It's unfortunate that the quote I selected implies "all minimalist axiologies" but I really was trying to talk about this post.

Perhaps it would be good to add an edit on that as well? E.g. "The author agrees that the answers to these questions are 'yes' (for the restricted class of minimalist axiologies he explores here)." :)

(The restriction is relevant, not least since a number of EAs do seem to hold non-experientialist minimalist views.)

4
Rohin Shah
2y
Sure, done.

The author agrees that the answers to these questions are "yes".

Not quite. The author assumes a certain class of minimalist axiologies (experientialist ones), according to which the answers to those questions are:

  1. Yes (though a world with untroubled sentient beings would be equally perfect, and there are good reasons to focus more on that ideal of minimalism in practice).
  2. If the hypothetical world contains no disvalue, then pressing the button is not strictly better, but if the hypothetical world does contain disvalue, then it would be better to press a cess
... (read more)
5
Rohin Shah
2y
I ignored the first footnote because it's not in the post's remit, according to the post itself. If you assume this limited scope, I think the answer to the second question is "yes" (and that the post agrees with this). I agree that things change if you expand the scope to other minimalist axiologies. It's unfortunate that the quote I selected implies "all minimalist axiologies" but I really was trying to talk about this post.

I shouldn't have called it "the main point"; I should have said something like "the main point made in response to the two questions I mentioned", which is what I actually meant. I agree that there is more detail about why the author thinks you shouldn't be worried about it that I did not summarize. I still think it is accurate to say that the author's main response to questions 1 and 2, as written in Section 2, is "the answers are yes, but actually that's fine and you shouldn't be worried about it", with the point about cessation implications being one argument for that view.

Thanks for summarizing it.

The worries I respond to are complex and the essay has many main points. Like any author, I hope that people would consider the points in their proper context (and not take them out of context). One main point is the contextualization of the worries itself, which is highlighted by the overviews (1.1–1.2) focusing a lot on the relevant assumptions and on minding the gap between theory and practice.

To complex questions, I don't think it's useful to reduce answers to either "yes" or "no", especially when the answers rest on unrealistic assumptions and look very different in theory versus practice. Between theory and practice, I also tend to consider the practical implications more important.

This analysis seems to neglect all "net negative outcomes", including scenarios in which s-risks are realized (as Mjeard noted), the badness of which can go all the way to the opposite extreme (see e.g. "Astronomical suffering from slightly misaligned artificial intelligence").

Including that consideration may support a more general focus on ensuring a better quality of the future, which may also be supported by considerations related to grabby aliens.

I think it's important to stress that it's not just that some people with an extremely high IQ fail to change their minds on certain issues, and more generally fail to overcome confirmation bias  (which I think is fairly unsurprising). A key point is that there actually doesn't appear to be much of a correlation at all between IQ and resistance to confirmation bias.

So to slightly paraphrase what you wrote above, I didn't just write the post because a correlation across a population is of limited relevance when you’re dealing with a smart individual wh... (read more)

2
Stefan_Schubert
2y
I think the studies you refer to may underrate the importance of IQ for good epistemics.

First, as I mentioned in my other comment, the correlation between IQ-like measures and the most comprehensive test of rationality was as high as 0.695. This is especially noteworthy considering the fact that Stanovich in particular (I haven't followed the others' work) has for a long time argued along your lines - that there are many things that IQ tests miss. So if anything one would expect him to be biased in the direction of too low a correlation.

Second, psychological studies of confirmation bias and other biases tend to study participants' reactions to short vignettes. They don't follow participants over longer periods of time. And I think this may lead them to underrate the importance of intelligence for good epistemics, in particular in communities like the effective altruism and rationalist communities. I think that people can to some extent (though certainly not fully) overcome confirmation bias and other biases through being alert to them (not least in interpersonal discussions), through forming better mental habits, through building better epistemic institutions, and so on. This work is, however, quite cognitively demanding, and I would expect more intelligent people to be substantially better at it. Less intelligent people are likely not as good at engaging in the kind of reflection on their own and others' thought-processes to get these kinds of efforts off the ground.

I think that the effective altruist and rationalist communities are unusually good at it: they are constantly on the lookout for biased reasoning, and often engage in meta-discussions about their own and each other's reasoning - whether they, e.g., show signs of confirmation bias. And I think a big reason why that works so well is that these communities comprise so many intelligent people.

In general, I think that IQ is tremendously important and not overrated by effective altruists.

You argue that EA overrates IQ

As noted above, my main claim is not that "EA overrates IQ" at a purely descriptive level, but rather that other important traits deserve more focus in practice (because those other important traits seem neglected relative to smarts, and also because — at the level of what we seek to develop and incentivize — those other traits seem more elastic and improvable).

I noted in the comment above that:

one line of evidence I have for this is how often I see references to smarts, including in internal discussions related to career and

... (read more)
3
[anonymous]
2y
"my main claim is not that "EA overrates IQ" at a purely descriptive level, but rather that other important traits deserve more focus in practice"

The claim that EA overrates IQ is the same as the claim that other traits deserve more attention.

“Science advances one funeral at a time.” If that’s true,

If that were literally true, then science wouldn't ever advance much. :)

It seems that most scientists are in fact willing to change their minds when strong evidence has been provided for a hypothesis that goes against the previously accepted view. The "Planck principle" seems more applicable to scientists who are strongly invested in a given hypothesis, but even in that reference class, I suspect that most scientists do actually change their minds during their lifetime when the evidence is strong. An... (read more)

2
FCCC
2y
Yep, that’s why I referred to your 2nd and 3rd traits: a better competing theory is only an inconvenient conclusion if you’re invested in the wrong theory (especially if you yourself created that theory). I know IQ and these traits are probably correlated (again, since some level of intelligence is a prerequisite for most of the traits). But I’m assuming the reason you wrote the post is that a correlation across a population isn’t relevant when you’re dealing with a smart individual who lacks one of these traits.

Thanks for your comment and for listing those traits and skills; I strongly agree that those are all useful qualities. :)

One might argue that willingness to do grunt work, taking initiative, and mental stamina all belong in a broader "drive/conscientiousness" category, but I think they are in any case important and meaningfully distinct traits worth highlighting in their own right.

Likewise, one could perhaps argue that "ability to network well" falls under a broader category of "social skills", in which interpersonal kindness and respect might also be said... (read more)
