I'd recommend Reflections on Intelligence (the revised edition) by Magnus Vinding. It's a short e-book/long essay that is rather critical of the notion of "superintelligent" AI specifically.
My minimum commitment for the upcoming meetup is to 1) keep researching options for potentially volunteering full-time for an EA org that cannot sponsor a visa yet, and 2) share the opportunity to propose a roundtable discussion at the UK's ARIA w/ relevant orgs (e.g. nonprofits working on animal testing alternatives and on cell-ag).
Looking forward to meeting up w/ everyone :)
[Updated w/ an expression of interest]
I'm glad to see such research is being done. I especially appreciate the section on proposed resilience measures. Thanks, Matt & Nick!
(I'm currently torn between staying in the UK and relocating to NZ long-term (I need to decide ASAP, as I have a job offer). So this post is quite timely for me ATM, FWIW.)
Thanks for linking to GFI’s posts (and for your and Linch’s article in the first place!). GFI’s concerns w/ some of the points made in the TEAs and the Counter article seem sound to me (FWIW, as I have only some basic knowledge of the field at best). I couldn’t find any responses to GFI, unfortunately.
For those who might be interested in checking GFI’s posts, below are some excerpts.
From the overview of “Preliminary review of technical assumptions within the Humbird analysis”:
... [Humbird's] analysis is valuable for identifying and prioritizing areas where
I just got a notification that the livestream starts in an hour. One can still register for it here: https://hopin.com/events/new-harvest-2022. As with past conferences, the livestream may be recorded.
Thanks for the post and the interview, Gaetan!
For anyone interested, David Pearce's own written response to EA's longtermism can be found on his website.
Does enhancing one’s mood / increasing one’s hedonic set-point and making one more resistant to suffering fall within your definition of mind enhancement? I think a case can be made that wellbeing can be hugely empowering (an intuition pump: imagine waking up in an extremely good mood, w/ a sense of things to be done…). David Pearce may be the most prominent EA writing on (e.g. one, two) and promoting (and defending) this type of mind enhancement. There is also one EA-aligned organization working in this area, called Invincible Wellbeing.
I’d be...
Pearce calls it "full-spectrum" to emphasise the difference w/ Bostrom's "Super-Watson" (using Pearce's words).
... a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals ...
Given how apparently useful cross-modal world simulations (i.e. consciousness) have been for evolution, I, again, doubt that such a dumb (in the sense of not knowing what it is doing) process can pose an immediate existential danger to humanity that we won't notice or won't be able to stop.
...Regarding feasibilit
Magnus Vinding, the author of Reflections on Intelligence, recently appeared on the Utilitarian podcast, where he shared his views on intelligence as well (this topic starts at 35:30).
Thanks for the guide, Alex!
You say from the start that most of the advice is applicable to similar tools, but I'd still note that one limitation of (the free version of) Slack is that message history is limited to 10,000 messages (incl. private messages). So one cannot search or view messages older than the most recent 10,000.
Discord (as well as Mattermost and self-hosted Zulip), in contrast, has an unlimited message history (paid versions of Slack or Zulip don't have this limitation either, but the pricing (x$ per user per month) isn't suitable for a public g...
> ... perhaps they should be deliberately aimed for?
David Pearce might argue for this if he thought that a "superintelligent" unconscious AGI (implemented on a classical digital computer) were feasible. E.g. from his The Biointelligence Explosion:
...Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.
Could there arise "evil" mirror-touch synaesthetes? In one sense, no. You can't go around wa
Hi, Greg :)
Thanks for taking the time to read that excerpt and to respond.
First of all, the author’s scepticism about a “superintelligent” AGI (as discussed by Bostrom at least) doesn’t rely on consciousness being required for an AGI: i.e. one may think that consciousness is fully orthogonal to intelligence (both in theory and in practice) but still, on the whole, update away from the AGI risk based on the author’s other arguments from the book.
Then, while I do share your scepticism about social skills requiring consciousness (once you have data from conscious ...
The book that made me significantly update away from “superintelligent” AGI being a realistic threat is Reflections on Intelligence (2016/2020) (free version) by Magnus Vinding. The book criticises several lines of argument in Nick Bostrom’s Superintelligence and the notion of an “intelligence explosion”, discusses an individual’s and a collective’s goal-achieving abilities, whether consciousness is orthogonal or crucial to intelligence, and the (un)predictability of future intelligence/goal-seeking systems. (Re. “intelligence explosion” see also the auth...
Have you considered asking Effective Thesis for advice on this thesis? They can probably connect you w/ someone w/ a background in this area.
[Below are my (totally optional) thoughts on this (and I'm only a software engineer working at a genomic research institute w/ no formal biomed background):]
I would think that (ethically approved) studies on humans are more useful in general, as they translate better to other humans. Also, the more we try to reduce and substitute non-human animal experimentation w/ alternatives, the more incentive there is to develo...
- For a new investor, I think a simple and good method is getting a Vanguard Lifestrategy ISA with 100% equities - this buys you stocks across lots of different markets.
Does anyone know if there's an ISA (Individual Savings Account) w/ a fund that doesn't invest in meat and dairy companies and companies that test on animals? (I know that I can open an ISA on something like Trading 212 and invest in individual stocks myself. But due to having more important things to work on, I'm looking for a more "invest-and-forget" type of investing.)
Thanks, Pablo. The criteria will help to avoid some future long disputes (and thus save time for more important things), although it wouldn't have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)
(I watched the karma drop from 10 to 5. Is there anything that controversial in or about the post?..)
I'm also a bit surprised; if I'm not mistaken, the post had negative karma at one point. People of course downvote for reasons other than controversy, e.g. from the forum's voting norms section:
“I didn’t find this relevant.”
“I think this contains an error.”
“This is technically fine, but annoying to read.”
But I'd be sad if people got the impression that posts like this, which reflect on altruistic motivations, are not welcome.
Imagine how it would change humanity's priorities if each day, "just" for a minute, each human adult experienced the worst suffering occurring that day on the planet (w/o going psychotic afterwards somehow). (And, for the reasons outlined in the post, we probably underestimate how much that torturous mind-"broadcasting" would change humanity's lived-out ethics.)
The slow (if not reverse) progress towards a world without intense suffering is depressing, to say the least. So thank you for writing this inspiring piece.
It also reminded me of David Pearce's essay "High-tech Jainism". It outlines a path towards a civilization that has abolished suffering, while also warning about potential pitfalls like forgetting about suffering too soon, before it's prevented for all sentient beings. (In Suffering-Focused Ethics: Defense and Implications (ch. 13), mentioned in the post, Vinding even argues that, given the irreducible uncertainty...
The more pessimistic argument is that moral progress arises as a function of economic and technological progress, and can't occur in isolation. We didn't give up slaves until it was economically convenient to do so, and likely won't give up meat until we have cost and flavor competitive alternatives.
FWIW this assessment seems true to me, at least regarding eating non-human animals, as I don't know enough about the economic drivers behind slavery. (If one is interested, there's a report by the Sentience Institute on the topic, titled "Social Movement Lessons F...
That conclusion doesn't necessarily have to be as pessimistic as you seem to imply ("we do what is most convenient to us"). An alternative hypothesis is that people to some extent do want to do the right thing, and are willing to make sacrifices for it - but not large sacrifices. So when the bar is lowered, we tend to act more on those altruistic preferences. Cf. this recent paper:
...[Subjective well-being] mediates the relationship between two objective measures of well-being (wealth and health) and altruism...results indicate that altruism increases when re
I didn't mean to sound harsh. Thanks for pointing this out: it now seems obvious to me that that part sounds uncharitable. I do apologise, belatedly :(
What I meant is that currently these new, evolving inclusion criteria are difficult to find. And if they are used in dispute resolutions (from this case onwards), perhaps they should be referenced for contributors as part of the introduction text, for example.
Perhaps voting on cases where there is disagreement could achieve wider inclusiveness, or at least less controversy? Voters would be e.g. the moderators (w/ an option to abstain) and several persons who are familiar w/ the work of a proposed person.
It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.
I think discussion will probably usually be sufficient. Using upvotes and downvotes as info seems useful, but probably not letting them be decisive.
> It may also help if inclusion criteria are more specific and are not hidden until a dispute arises.
This might just be a case where written communication on the internet makes the tone seem off, but "hidden" sounds to me unfair and harsh. That seems to imply Pablo already knew what the inclusion criteria should be, and was set on them, but deliberately withheld them. This seems extremely unlikely.
I t...
I should have been more clear about Drexler: I don't dispute that he is “connected to EA to a significant degree”. But so is Pearce, in my view, for the reasons outlined in this thread.
(I think it's weird and probably bad that this comment of nil's has negative karma. nil is just clarifying what they were saying, and what they're saying is within the realm of reason, and this was said politely.)
Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance".
First, I want to make it clear that I’m not suggesting that any of the persons I listed in my previous comment should be removed from the wiki. I just disagree that not including Pearce is justified.
Again, I honestly don’t think that it is true that Chalmers and Drexler are “connec...
Thank you for appreciating the contribution.
Since Pablo is trusted w/ deciding on the issue, I will address my questions about the decision directly to him in this thread.
I'm sorry to hear this, Pablo, as I haven't been convinced that Pearce isn't relevant enough for effective altruism.
Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce (that is not necessarily to say that their entries aren’t warranted!). Might it be correct to infer that at least some of these entries received less scrutiny than Pearce’s nomination?
And perhaps:
...After reviewing the discussion, and seeing that no new comments have b
> Also, I really don’t see how the persons below have contributed more or are more relevant to effective altruism than Pearce
I tried to outline some criteria in an earlier comment. Chalmers and Hassabis fall under the category of "people who have attained eminence in their fields and who are connected to EA to a significant degree". Drexler, and perhaps also Chalmers, fall under the category of "academics who have conducted research of clear EA relevance". Matthews doesn't fall under any of the categories listed, though he strikes me as someone wor...
For those who may want to see the deleted entry, I'm posting it below:
...
David Pearce is a philosopher and writer best known for his 1995 manifesto The Hedonistic Imperative and the associated ideas about abolishing suffering for all sentient life using biotechnology and other technologies. Pearce argues that it is "technically feasible" and ethically rational to abolish suffering on the planet by replacing Darwinian suffering-based motivational systems with minds animated by "information-sensitive gradients of intelligent bliss" (as opposed to indiscriminate
... I think the reason for The Hassenfeld Exception is that, as far as I'm aware, the vast majority of his work has been very connected with GiveWell. So it's very important and notable, but doesn't need a distinct entry. Somewhat similar with Tegmark inasmuch as he relates to EA, though he's of course notable in the physics community for non-FLI-related reasons. ...
This makes sense to me, although one who is more familiar w/ their work may find their exclusion unwarranted. Thanks for clarifying!
In this light I still think an entry for Pearce is justifi...
David Pearce (the tag will be removed if others think it’s not warranted)
Arguments against:
Michael is correct that the inclusion criteria for entries on individual people haven't been made explicit. In deciding whether a person was a fit subject for an article, I haven't followed any conscious procedure, but merely relied on my subjective sense of whether the person deserved a dedicated article. Looking at the list of people I ended up including, a few clusters emerge:
Hi Pablo,
I'll propose the tag on that page, for I do think that a tag for David Pearce is justified (and if it isn't, then I might question some existing tags for EA persons).
This is not about direct harm, but if AI risks are exaggerated to the degree that the worst scenarios are not even possible, then a lot of EA talent might be wasted.
Those who are skeptical about AI skepticism may be interested in reading Magnus Vinding's "Why Altruists Should Perhaps Not Prioritize Artificial Intelligence: A Lengthy Critique".
David Pearce, a negative utilitarian, is the founding figure for [suffering abolition].
It might be of interest to some that Pearce is/was skeptical about the possibility or probability of s-risks related to digital sentience and space colonization: see his reply to "What does David Pearce think about S-risks (suffering risks)?" on Quora (where he also mentions the moral hazard of "understanding the biological basis of unpleasant experience in order to make suffering physically impossible").
Thanks for sharing, Michael!
I think the Center for Reducing Suffering's Open Research Questions may be a helpful addition to Research ideas. (Do let me know if you think otherwise!)
Relatedly, CRS has an internship opportunity.
Also, perhaps this is intentional, but "Readings and notes on how to do high-impact research" appears twice in the list.
Has the team considered making the Forum open-source* and accepting code contributions from the community and others? What are the reasons for keeping the code repository private? Thank you!
* As far as I know, the EA Forum is not open-source, although it is based on the LessWrong platform, which is open-source.
Thanks for doing this work!
> Are we living in a simulation?
For what is IMO a cogent argument against the possibility that we live in a (digitally) simulated universe, please consider adding Gordon McCabe's paper "Universe creation on a computer".
There's a new free open-source alternative called Logseq ("inspired by Roam Research, Org Mode, Tiddlywiki, Workflowy and Cuekeeper").
For those who won't read the paper, the phenomenon is called pluralistic ignorance (Wikipedia):
... is a situation in which a majority of group members privately reject a norm, but go along with it because they assume, incorrectly, that most others accept it.
Thank you!
Thanks for the questions!
> If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
I think this depends on many factual beliefs you hold, including which groups of creatures count and what time period you are concerned about. Restricting ourselves to the present and assuming all plausibly sentient minds count (and ignoring extremes, say, less than 0.1% chance), I think farm and wild animals are plausible candidates for enduring...
> If one is only concerned w/ preventing needless suffering, prioritising the most extreme suffering, would donating to Rethink Priorities be a good investment for them, and if so, how so?
Unlike organizations such as OPIS, the Center for Reducing Suffering, and the Center on Long-Term Risk, we don't have reducing extreme suffering set as our only priority. We sometimes work on reducing suffering that may not be classified as extreme (arguably, our work on cage-free hen campaigns falls into this category). And perhaps some other work is not directly about reducin...
> What are the biggest mistakes Rethink Priorities made?
I can’t speak for the entire organization, but I can talk about what I see as my biggest mistakes since I started working at Rethink Priorities:
> What new charities do you want to be created by EAs?
For me it's a lobbying organization against baitfish farming in the U.S. I wrote about the topic two years ago here. Many people complimented me on it, but no one did anything. I talked with some funders who said they would be interested in funding someone suitable to pursue this, but I haven't found who that could be. The main argument against it used to be that the industry is declining. But the recently released aquaculture census suggests that it is no longer declining (see my more recent thoughts on...
...In the real world, maybe we're alone. The skies look empty. Cynics might point to the mess on Earth and echo C.S. Lewis: "Let's pray that the human race never escapes from Earth to spread its iniquity elsewhere." Yet our ethical responsibility is to discover whether other suffering sentients exist within our cosmological horizon; establish the theoretical upper bounds of rational agency; and assume responsible stewardship of our Hubble volume. Cosmic responsibility entails full-spectrum superintelligence: to be blissful but not "blissed out" - high-tech J
...There’s ongoing sickening cruelty: violent child pornography, chickens being boiled alive, and so on. We should help these victims and prevent such suffering, rather than focus on ensuring that many individuals come into existence in the future. When spending resources on increasing the number of beings instead of preventing extreme suffering, one is essentially saying to the victims: “I could have helped you, but I didn’t, because I think it’s more important that individuals are brought into existence. Sorry.”
-- Simon Knutsson, "The One-Paragraph Case for
If humanity is to minimize suffering in the future, it must engage with the world, not opt out of it.
-- Magnus Vinding (2015), Anti-Natalism and the Future of Suffering: Why Negative Utilitarians Should Not Aim For Extinction
Also, if there's sentient life on reachable planets, or a chance of it emerging in the future, some NUs might argue that the chance of human descendants ending/preventing suffering on such planets might be worth the risk of spreading suffering. (Cf. David Pearce's "cosmic rescue mission".)