All of valence's Comments + Replies

You might browse Intro to Brain-Like-AGI Safety, or check back in a few weeks once it's all published. Towards the end of the sequence Steve intends to include "a list of open questions and advice for getting involved in the field."

DeepMind takes a fair amount of inspiration from neuroscience.
Diving into their related papers might be worthwhile, though the emphasis is often on capabilities rather than safety.


Your personal fit is a huge consideration when evaluating the two paths (80,000 Hours might be able to help you think through this). But if you're on t... (read more)

The AngelList link is disrupted by that trailing '.', without that it works: https://angel.co/l/2vTgdS

1 · samstowers · 2y
Fixed! (rather, replaced the link with text)
Ok, so you want to know if whales experience more suffering than ants? And you're proposing a way to do this that involves putting them into an fMRI scanner? That seems like not the best way of asking or answering the question of how much being X suffers and how that compares to being Y.

I did not propose putting whales into fMRI scanners. I would not have proposed trying to weigh distant stars with a scale either, yet somehow we've learned how to say some things about their mass and contents.

What are the consequences of the answers you get? If new
... (read more)
4. Why can't you just ask people if they're suffering? What's the value of quantifying the degree of their suffering using harmonic coherence?

Why can't you just observe that objects fall towards the ground? What's the value of quantifying the degree of their falling using laws of motion?

How much do newborns suffer? Whales? Ants?

1 · [comment deleted] · 3y

I agree that honesty is more important than weirdness. Maybe I’m being taken, but I see miscommunication and not dishonesty from QRI.

I am not sure what an appropriate standard of rigor is for a preparadigmatic area. I would welcome more qualifiers and softer claims.

3 · Holly_Elmore · 3y
At the very least, miscommunication this bad is evidence of serious incompetence at QRI. I think you are mistaken to want to excuse that. 

I'll take a shot at these questions too, perhaps being usefully only partially familiar with QRI.

1. Which question is QRI trying to answer?

Is there a universal pattern to conscious experience? Can we specify a function from the structure and state of a mind to the quality of experience it is having?

2. Why does that all matter?

If we discover a function from mind to valence, and develop the right tools of measurement and intervention (big IFs, for sure), we can steer all minds towards positive experience.

Until recently we only had intuitive physics, u... (read more)

This is a post from an organization trying to raise hundreds of thousands of dollars. 

...

If the Qualia Research Institute was a truth seeking institution, they would have either run the simple experiment I proposed themselves, or had any of the neuroscientists they claim to be collaborating with run it for them.

This reads to me as insinuating fraud, without much supporting evidence.

This is a bad post and it should be called out as such. I would have been more gentle if this was a single misguided researcher and not the head of an organization that pub

... (read more)

I appreciate that in other comments you followed up with more concrete criticisms, but this still feels against the "Keep EA Weird" spirit to me.

Keeping EA honest and rigorous is much higher priority. Making excuses for incompetence or lack of evidence base is the opposite of EA. 

3 · MikeJohnson · 3y
Thanks valence. I do think the ‘hits-based giving’ frame is important to develop, although I understand it doesn’t have universal support, as some of the implications may be difficult to navigate. And thanks for appreciating the problem; it’s sometimes hard for me to describe how important the topic feels and all the reasons for working on it.

It's not catchy, but conceptually I like Hans Rosling's classification into Levels 1, 2, 3, & 4, with breakpoints around $2, $8, and $32 per day. It's also useful to be able to say "Country X is largely at Level 2, but a significant population is still at Level 1 and would benefit from Intervention Y."
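The breakpoints above make for a trivial classification rule. A minimal sketch (thresholds as stated; the function name and example incomes are illustrative, not from Rosling):

```python
def rosling_level(income_per_day: float) -> int:
    """Return the Rosling income level (1-4) for a daily income in USD,
    using breakpoints at roughly $2, $8, and $32 per day."""
    if income_per_day < 2:
        return 1
    elif income_per_day < 8:
        return 2
    elif income_per_day < 32:
        return 3
    else:
        return 4

# Hypothetical population of daily incomes, summarized by level.
incomes = [1.50, 3.00, 6.00, 12.00, 40.00]
levels = [rosling_level(x) for x in incomes]
print(levels)  # → [1, 2, 2, 3, 4]
```

This makes statements like "largely at Level 2, with a significant population still at Level 1" easy to compute from income data.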

A short review of Factfulness: https://www.gatesnotes.com/books/factfulness

This post describes related concerns, and helpfully links to previous discussions in Appendix 1.

I am also highly uncertain of EAs' ability to intervene in cultural change, but I do want us to take a hard look at it and discuss it. It may be a cause that is tractable early on, but hopeless if ignored.

You may not think Hsu's case "actually matters", but how many turns of the wheel is it before it is someone else?

Peter Singer has taken enough controversial stances to be "cancelled" from any direction. I want the next Singer(s) to still feel free to try to figure out what really matters, and what we should do.

4 · ChichikoBendeliani · 4y
I'm glad Singer has survived through stuff (and indeed, arguably his willingness to say true and controversial things is part of his appeal). For what it's worth, there's historical precedent for selective self-censorship of true views from our predecessors, cf. Bentham's unpublished essay on homosexuality. The decline of Mohism seems like a good cautionary tale of a movement that tries to both a) get political and b) not be aware of political considerations.


We needn't take on reputational risk unnecessarily, but if it is possible for EAs to coordinate to stop a Cultural Revolution, that would seem to be a Cause X candidate. Toby Ord describes a great-power war as an existential risk factor, as it would hurt our odds on: AI, nuclear war, and climate change, all at once. I think losing free expression would also qualify as an existential risk factor.

2 · Nathan Grant · 4y
I agree that if it were possible to stop it, we should, but the EA movement is only a few thousand people. Even if we devoted all our resources to this issue, I doubt EA has enough influence over broad political trends to make much difference.

I'm extremely skeptical of EAs' ability to coordinate to stop a Cultural Revolution. "Politics is the mind killer." Better to treat it like the weather and focus on the things that actually matter, that we have a chance of affecting, and that our movement has a comparative advantage in (figuring out things about physical reality and plugging holes in places left dangerously unguarded).


It also doesn't seem that important in the grand scheme of things, relative to the much more direct existential risks.