software developer | abolitionist transhumanist | Abolishing severe suffering is my supreme goal.


13 Very Different Stances on AGI

Pearce calls it "full-spectrum" to emphasise the difference w/ Bostrom's "Super-Watson" (using Pearce's words).

> ... a blind optimisation process that has no subjective internal experience, and no inclination to gain one, given its likely initial goals ...

Given how apparently useful cross-modal world simulations (i.e. consciousness) have been for evolution, I, again, doubt that such a dumb (in the sense of not knowing what it is doing) process can pose an immediate existential danger to humanity that we won't notice or won't be able to stop.

Regarding the feasibility of conscious AGI / Pearce's full-spectrum superintelligence, maybe it would be possible with biology involved somewhere. But getting from here to there seems very fraught ethically (e.g. the already-terrifying experiments with mini-brains).

Actually, if I remember correctly, Pearce thinks that if "full-spectrum superintelligence" is going to emerge, it's most likely to be biological, and even post-human (i.e. it is human descendants who will possess such super minds, not (purely?) silicon-based machines). Pearce sometimes calls this the "biotechnological singularity", or "BioSingularity" for short, analogously to Kurzweil's "technological singularity". One can read more about this in Pearce's The Biointelligence Explosion (or in this "extended abstract").

13 Very Different Stances on AGI

Magnus Vinding, the author of Reflections on Intelligence, recently appeared on the Utilitarian podcast, where he shared his views on intelligence as well (this topic starts at 35:30).

How to make Slack workspaces welcoming and valuable

Thanks for the guide, Alex!

You say from the start that most of the advice is applicable to similar tools, but I'd still note that one limitation of (the free version of) Slack is that message history is capped at 10,000 messages (incl. private messages). So one cannot search or view messages sent more than 10,000 messages ago.

Discord (as well as Mattermost and self-hosted Zulip), in contrast, has an unlimited message history (paid versions of Slack and Zulip don't have this limitation either, but the pricing (x$ per user per month) isn't suitable for a public group). That said, these platforms must have their own downsides, which may still make one choose Slack in the end.

13 Very Different Stances on AGI

> ... perhaps they should be deliberately aimed for?

David Pearce might argue for this if he thought that a "superintelligent" unconscious AGI (implemented on a classical digital computer) were feasible. E.g. from his The Biointelligence Explosion:

> Full-spectrum superintelligence, if equipped with the posthuman cognitive generalisation of mirror-touch synaesthesia, understands your thoughts, your feelings and your egocentric perspective better than you do yourself.

> Could there arise "evil" mirror-touch synaesthetes? In one sense, no. You can't go around wantonly hurting other sentient beings if you feel their pain as your own. Full-spectrum intelligence is friendly intelligence. But in another sense yes, insofar as primitive mirror-touch synaesthetes are prey to species-specific cognitive limitations that prevent them acting rationally to maximise the well-being of all sentience. Full-spectrum superintelligences would lack those computational limitations in virtue of their full cognitive competence in understanding both the subjective and the formal properties of mind. Perhaps full-spectrum superintelligences might optimise your matter and energy into a blissful smart angel; but they couldn't wantonly hurt you, whether by neglect or design.

13 Very Different Stances on AGI

Hi, Greg :)

Thanks for taking the time to read that excerpt and to respond.

First of all, the author’s scepticism about a “superintelligent” AGI (as discussed by Bostrom, at least) doesn’t rely on consciousness being required for AGI: i.e. one may think that consciousness is fully orthogonal to intelligence (both in theory and in practice) and still, on the whole, update away from AGI risk based on the author’s other arguments in the book.

Then, while I do share your scepticism about social skills requiring consciousness (once you have data from conscious people, that is), I find the author’s points about “general wisdom” (esp. about having phenomenological knowledge) and the science of mind much more convincing (although they are probably much less relevant to AGI risk). (I won’t repeat the author’s points here: the two corresponding subsections of the piece are short enough to read directly.)

> In GPT-3 we already have a (narrow) AI that can convincingly pass the Turing Test in writing. Including writing displaying "social skills" and "general wisdom".

Correct me if I’m wrong, but these "social skills" and "general wisdom" are just generalisations (impressive and accurate as they may be) from actual people’s social skills and knowledge. GPT-3 and other ML systems are inherently probabilistic: when they are ~right, they are ~right by accident. They don’t know, esp. about the what-it-is-likeness of any sentient experience (although, once again, this may be orthogonal to the risk, at least in theory with unlimited computational power).

> What’s to say that a sufficiently large pile of linear algebra, seeded with a sufficiently large amount of data, and executed on a sufficiently fast computer, could not build an accurate world model, recursively rewrite more efficient versions of itself, reverse engineer human psychology, hide its intentions from us, create nanotech in secret, etc etc, on the way to turning the future lightcone into computronium in pursuit of the original goal programmed into it at its instantiation (making paperclips, making a better language model, making money on the stock market, or whatever), all without a single conscious subjective internal experience?

“Sufficiently” does a lot of work here IMO. Even if something is possible in theory, that doesn’t mean it’s going to happen in reality, especially by accident. Also, "... reverse engineer human psychology, hide its intentions from us ..." arguably does require a conscious mind, for I don't think (FWIW) that there could be a computationally feasible substitute (at least one implemented on a classical digital computer) for being conscious in the first place when it comes to understanding other people (or at least to being accurate enough to mislead all of us into a paperclip "hell").

(Sorry for the shorthand reply: I'm just wary of mentioning things that have been discussed to death in arguments about AGI risk, as I don’t have any enthusiasm for perpetuating similar (often unproductive, IMO) threads. (This isn’t to say, though, that it necessarily wouldn’t be useful if, for example, someone deeply engaged with the topic of “superintelligent” AGI read the book and had a recorded discussion w/ the author for everyone’s benefit…))

13 Very Different Stances on AGI

The book that made me significantly update away from “superintelligent” AGI being a realistic threat is Reflections on Intelligence (2016/2020) (free version) by Magnus Vinding. The book criticises several lines of argument in Nick Bostrom’s Superintelligence and the notion of an “intelligence explosion”, and discusses an individual’s and a collective’s goal-achieving abilities, whether consciousness is orthogonal or crucial to intelligence, and (the unpredictability of) future intelligence/goal-seeking systems. (Re. “intelligence explosion” see also the author’s “A Contra AI FOOM Reading List”.)

(I’m mentioning the book because my impression, FWIW, is that “anti-AGI” or just AGI-deprioritisation arguments are rarely even mentioned on the Forum. I recommend the book if one wants to learn more about such considerations. (The book is pretty short BTW - 53 pages in the PDF version I have - in part because it is focused on the arguments w/ few digressions.))

Biomedical Research Models: Mice vs Man?
Answer by nil · Dec 27, 2021

Have you considered asking Effective Thesis for advice on this thesis? They can probably connect you w/ someone w/ a background in this area.

[Below are my (totally optional) thoughts on this (and I'm only a software engineer working in a genomic research institute w/ no formal biomed background):]

I would think that (ethically approved) studies on humans are more useful in general, as they translate better to other humans. Also, the more we try to reduce and substitute non-human animal experimentation w/ alternatives, the more incentive there is to develop these alternatives. I must admit, though, that I'm "biased", as I consider non-human animal suffering no less urgent than equivalent suffering of humans, other things being equal.

Investing to Give Beginner Advice?
  1. For a new investor, I think a simple and good method is getting a Vanguard Lifestrategy ISA with 100% equities - this buys you stocks across lots of different markets.

Does anyone know if there's an ISA (Individual Savings Account) w/ a fund that doesn't invest in meat and dairy companies or companies that test on animals? (I know that I can open an ISA on something like Trading 212 and invest in individual stocks myself. But due to having more important things to work on, I'm looking for a more "invest-and-forget" type of investing.)

Propose and vote on potential EA Wiki entries

Thanks, Pablo. The criteria will help avoid some long disputes in future (and thus save time for more important things), although they wouldn't have prevented my creating the entry for David Pearce, for he does fit the second condition, I think. (We disagree, I know.)

The unthinkable urgency of suffering

(I watched the post's score drop from 10 to 5. Is there anything that controversial in or about the post?..)
