All of Kaj_Sotala's Comments + Replies

Thanks for noticing that! These days I would always use "their" in this kind of context, but I guess I didn't yet have that habit back in 2014. Edited.

(Upvoted.)

Some of the attempts within EA to solve this seem to involve pushing even more towards just being a professional network. I think that's dangerously wrong, because it doesn't remove the informal networks and their power; it just makes access to them harder, and people more desperate to get in.

Somewhat relevant counterpoint:

For everyone to have the opportunity to be involved in a given group and to participate in its activities the structure must be explicit, not implicit. The rules of decision-making must be open and available to everyone, and this ca

... (read more)
4
Severin
1y
I partially agree. I love that definition of elites, and can definitely see how it corresponds to how money, power, and intellectual leadership in EA revolve around the ancient core orgs like CEA, OpenPhil, and 80k.

However, the sections of Doing EA Better that called for more accountability structures in EA left me a bit frightened. The current ways don't seem ideal, but I think there are innumerable ways in which formalizing power can make institutions more rather than less molochian, and only a few that actually significantly improve the way things are done. Specifically, I see two avenues for formalizing power in EA that would essentially make things worse:

1. Professional(TM) EA might turn into the outer facade of what is actually still run by the now harder-to-reach and harder-to-get-into traditional elite. That's the concern I already pointed towards in the post above.

2. The other way things could go wrong is if we built something akin to modern-day democratic nation states: giant sluggish egregores of paperwork that reliably produce bad compromises nobody would ever have agreed to from first principles, via a process that is so time-consuming and so ensnaring to our tribal instincts that nobody has energy left for the important truth-seeking debates that could actually solve the problems at hand.

Personally, the types of solutions I'm most excited about are ones that enable thousands of people to coordinate in a decentralized way around the same shared goal without having to vote or debate everything out. I think there are some organizations out there that have solved information flows and resource allocation more efficiently not only than hierarchical technocratic organizations like traditional corporations, socialist economies, or the central parts of present-day EA, but also than modern democracies. For example, in regards to collective decision-making, I'm pretty excited about some things that happen in new social move

gwern on /r/machinelearning:

There's no comparison to prior full-press Diplomacy agents, but if I'm reading the prior-work cites right, this is because basically none of them work - not only do they not beat humans, they apparently don't even always improve over themselves playing the game as if it was no-press Diplomacy (ie not using dialogue at all). That gives an idea how big a jump this is for full-press Diplomacy.

2
WilliamKiely
1y
Helpful, thanks! I watched the commentated video you and Lawrence shared, and it still wasn't clear to me from the gameplay how much the press component was actually helping the Diplomacy agents. (E.g. I wasn't sure if the bots were cooperating/backstabbing or if they were just always set on playing the moves that they did regardless of what was being said in the Press.) In a game with just one human and the rest bots, the human obviously wouldn't have an advantage if the bots all behaved like No Press bots. I think a mixed game with multiple humans and multiple bots would be more insightful.

There's a commentated video by someone who plays as the only human in an otherwise all-Cicero game, which at least makes it seem like the dialogue is doing a lot.

2
WilliamKiely
1y
A random thought I had while watching the video: the commentator pointed out that the bots predictably seem to behave in their own self-interest, whereas human players in bad/losing positions will generally throw all their forces against whoever backstabbed them rather than try to salvage a hopeless position. My personal style of play is much more bot-like than what the commentator described as typical for human professionals. If the game weren't anonymous, I'd see the incentive to retaliate in order to build a deterrent for future games; but given that players' usernames are anonymous, the bots' approach of always trying to improve their position seems best to me.

Worth noting that this was "Blitz" Diplomacy with only five-minute negotiation rounds. Still very impressive though.

Some behavioral traits such as general intelligence show very high heritability – over 0.70 – in adults, which is about as heritable as human height.

I'm very confused about what numbers such as this mean in practice, since the most natural interpretation ("70% of the trait is genetically determined") is wrong, but there aren't very many clear explanations of what the correct interpretation is. When I tried asking this on LW, the top-voted answer was that it's a number that's mostly useful if you're doing animal breeding, but probably not useful for much el... (read more)

1
Kevin_Cornbob
1y
My understanding is that the technical translation is: 70% of the variance in that trait is attributable to genes, given the time and place of the studied population.  For example, 70% of the variance in intelligence is attributable to genes, given a white American population, living in non-abusive homes, from the 1960s to the 1990s. (The specifics are just to provide a concrete example.) The farther one gets from the originally studied population, the less one can extrapolate exact findings. And vice versa. 
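A compact way to state that technical translation (a standard quantitative-genetics sketch, not part of either comment; it assumes no gene-environment covariance or interaction, and the 0.7/0.3 numbers are only illustrative):

% P = phenotype (the measured trait), G = genetic contribution, E = environment
\[
\operatorname{Var}(P) = \operatorname{Var}(G) + \operatorname{Var}(E),
\qquad
h^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)}
\]
% Example with standardized variances Var(G) = 0.7 and Var(E) = 0.3:
% h^2 = 0.7 / (0.7 + 0.3) = 0.70, i.e. 70% of the variance in the trait,
% in that population and environment, is attributable to genes -- which is
% not the same as "70% of the trait is genetically determined" for any
% individual.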
8
Ives Parr
2y
I recommend Making Sense of Heritability by Neven Sesardić.

Hi Kaj,

There's a lot of politically motivated misinformation about heritability, mostly so people feel comfortable ignoring and dismissing the results of behavior genetics.

IMHO, if one knows that the heritability of a psychological trait is fairly high, and the long-term effect of shared family environment is fairly low, this has big practical implications in a number of real-life domains:

  1. Mate choice. It's really important to pay attention to highly heritable traits when choosing a mate to have kids with, because their genes will have a big impact on their
... (read more)

I'm not sure if there is any reason that should be strongly persuasive to a disinterested third party, at the moment. I think the current evidence is more on the level of "anecdotally, it seems like a lot of rationalists and EAs get something out of things like IFS". 

But given that you can try out a few sessions of a therapy and see whether you seem to be getting anything out of it, that seems okay? Anecdotal evidence isn't enough to show conclusively that one should definitely do a particular kind of therapy. But it can be enough to elevate a therapy to the level of things that might be worth giving a shot, to see if anything comes out of it.

If they are better, why haven't they been more widely adopted by mainstream medicine?

Part of it is that the effectiveness of therapy is often hard and slow to study, so it's hard to get unambiguous evidence of one being better than another. E.g. many therapists, even if working within a particular school of therapy, will apply it in an idiosyncratic style that draws upon their entire life/job experience and also knowledge of any other therapy styles they might have. That makes it hard to verify the extent to which all the therapists in the study really are... (read more)

4
That's Confidential
2y
Why should a disinterested third party, without firsthand experience, recommend these therapies over CBT given the relative lack of evidence? E.g. Other than those who practice them and their clients, do they have any reputable backers? Has it been shown by disinterested parties to work better on certain types of people? Is there any other source of evidence that backs the notion that they are predictably superior for at least a subset of the population?

Oh wow, that is a really great paper! Thank you very much for linking it.

why anyone would choose one big ritual like 'Doom circles' instead of just purposefully inculcating a culture of openness to giving / receiving critique that is supportive and can help others?

These don't sound mutually exclusive to me; you can have a formal ritual about something and also practice doing some of the related skills on a more regular basis.

That said, for many people, it would be emotionally challenging if they needed to be ready to receive criticism all the time. A doom circle is something where you can receive feedback at such a time when you... (read more)

That's correct, most of the people in the circle (including the person with the wizard line, I think) I'd only met a couple of days before.

That's great, thank you :D 

It's my impression that in writing workshops where people bring their writing to be criticized, it's also a common rule that the writers are not allowed to respond to the feedback. I believe the rule exists exactly because of what you say: because another person's feedback may be off or biased for a variety of reasons. If there was a discussion about it, the recipient of the feedback might get defensive and want to explain why the feedback was flawed. That would risk the conversation taking an unpleasant tone and also any correct feedback not being properl... (read more)

1
Jakub Stencel
2y
Thanks for the answer. I think I understand it better now and I find the reasoning convincing, but in the end it seems to be quite dependent on the context. What you said seems optimal in a not-so-ideal psychological safety environment, but with teams high in psychological safety it's not really about the things you listed, but rather about a truth-seeking approach to make sure we are really elevating the person. For this, two-sided communication performs better.

Anecdotally, from my perspective, in public feedback rounds it's not so much defense, but more like "I think you are onto something, but consider this...". Which seems to me a bit more productive and optimal for the person than just listening. Then the two models can inform each other.

For an extreme outcome example from one such round in a team: one person criticized another person's public speaking skills and said that person should speak more. But after discussion we all agreed that it was not a good strength for that person to invest in, and that their comparative advantage lies elsewhere, so in the end it was not good feedback. The giver was missing some crucial considerations that indeed changed that person's feedback. I found it way more productive than I would find one-sided communication. I also think that if it's done with compassion and intent to help each other, then it shouldn't break the atmosphere.

But after your and Amy's answers, I get now that a Doom Circle aims to create a somewhat different environment. It seems to me that a Doom Circle requires less vulnerability thanks to these rules, which makes sense, especially for less psychologically safe teams. So this seems good for people who know each other less.

I would like to encourage environments in EA  which are mutually supportive and kind.

For what it's worth, my experience of Doom Circles is that they felt explicitly supportive and kind. It felt like other people were willing to pay a social cost to give me honest feedback in a way that they would otherwise feel hesitant to do, and I appreciated them doing that for me.

The incentives are to provide harsh critique to appear novel and insightful to the other person. 

I wouldn't say that there's an incentive to be harsh, since while you are providing f... (read more)

2
KMF
2y
I am super pleased to hear that. Seems like, despite the scary-sounding name, people have had positive experiences with this, which is great! :)

I feel like this post is lacking an explanation of what's good about this practice, so I'll share my experiences.

I think I've attended a couple of Doom Circles (weirdly, my memory claims that there was one at my 2014 CFAR workshop, but the post says they were first done in 2015, so maybe I'm mixing that up with some later event). I've usually thought of myself as pretty thin-skinned (much less these days but much more back then), but my recollection is that the experience was clearly positive. 

It's very rare to actually get critical feedbac... (read more)

2
Amy Labenz
2y
Thanks, Kaj! This is really helpful. This inspired me to make a picture of you as a cackling mad wizard using DALL·E. Let me know if you'd like to see it!

Fair. In that case this seems like a necessary prerequisite result for doing that deeper investigation, though, so valuable in that respect.

At least for myself, it wouldn't have been obvious in advance that there would be exactly two factors, as opposed to (say) one, three or four.

It's unclear to me we've really investigated deeply enough to say that. We just know these factors matter, but it still seems quite possible that lots of other factors matter or that those other factors cause these two.

Perhaps more educated people are more happy with their career and thus more reluctant to change it?

Or just more invested in it - if you've spent several years acquiring a degree in a topic, you may be quite reluctant to go do something completely different.

For future studies, might be worth rephrasing this item in a way where this doesn't act as a confounder for the results? I'd expect people in their early twenties to answer it quite differently than people in their early forties.

2
David_Althaus
2y
Good point!

> I'd expect people in their early twenties to answer it quite differently than people in their early forties.

I'd have expected this as well, but according to the data, age doesn't make a difference when it comes to answering the career item (r = -.04, p = .56). 

I was thinking that if they insist on requiring it (and I get around to actually participating), I'll just iterate on some prompts on wombo.art or similar until I get something decent.

Because it also mentions woo, so I think it’s talking about a broader class of unjustified beliefs than you think.

My earlier comment mentioned that "there are also lots of different claims that seem (or even are) irrational but are pointing to true facts about the world." That was intended to touch upon "woo"; e.g. meditation used to be, and to some extent still is, considered "woo", but there nonetheless seem to be reasonable grounds to think that there's nonetheless something of value to be found in meditation (despite there also being various crazy clai... (read more)

What makes you think it isn't? To me it seems both like a reasonable interpretation of the quote (private guts are precisely the kinds of positions you can't necessarily justify, and it's talking about having beliefs you can't justify) as well as a dynamic that feels like one that I recognize as one that has been occasionally present in the community. Fortunately posts like the one about private guts have helped push back against it.

Even if this interpretation wasn't actually the author's intent, choosing to steelman the claim in that way turns the essay into a pretty solid one, so we might as well engage with the strongest interpretation of it.

What makes you think it isn't? To me it seems both like a reasonable interpretation of the quote (private guts are precisely the kinds of positions you can't necessarily justify, and it's talking about having beliefs you can't justify) as well as a dynamic that feels like one that I recognize as one that has been occasionally present in the community.

Because it also mentions woo, so I think it’s talking about a broader class of unjustified beliefs than you think.

Even if this interpretation wasn't actually the author's intent, choosing to steelman the

... (read more)

There are a few different ways of interpreting the quote, but there's a concept of public positions and private guts. Public positions are ones that you can justify in public if pressed on, while private guts are illegible intuitions you hold which may nonetheless be correct - e.g. an expert mathematician may have a strong intuition that a particular proof or claim is correct, which they will then eventually translate to a publicly-verifiable proof. 

As far as I can tell, lizards probably don’t have public positions, but they probably do have private g

... (read more)
6
D0TheMath
2y
If this is what the line was saying, I agree. But it’s not, and having intuitions & a track record (or some reason to believe) that those intuitions correlate with reality, and useful but known-to-be-untrue models of the world, is a far cry from having unjustified beliefs & believing in woo; and the lack of these is what the post actually claims is the toxic social norm in rationality.

This is indeed a wonderful story!

This version has nicer line breaks, in my opinion.

Here's an audio version read by Leonard Nimoy.

Draft and re-draft (and re-draft). The writing should go through many iterations. You make drafts, you share them with a few people, you do something else for a week. Maybe nobody has read the draft, but you come back and you’ve rejuvenated your wonderful capacity to look at the work and know why it’s terrible.

Kind of related to this: giving a presentation about the ideas in your article is something that you can use as a form of a draft. If you can't get anyone to listen to a presentation, or don't want to give one quite yet, you can pick some people whos... (read more)

Depends on exactly which definition of s-risks you're using; one of the milder definitions is just "a future in which a lot of suffering exists", such as humanity settling most of the galaxy but each of those worlds having about as much suffering as the Earth has today. Which is arguably not a dystopian outcome or necessarily terrible in terms of how much suffering there is relative to happiness, but still an outcome in which there is an astronomically large absolute amount of suffering.

2
Jim Buhler
3y
Good point, and it is consistent with CLR's s-risks definition. :) 

Fair point. Though apparently measures of 'life satisfaction' and 'meaning' produce different outcomes:

So, how did the World Happiness Report measure happiness? The study asked people in 156 countries to “value their lives today on a 0 to 10 scale, with the worst possible life as a 0 and the best possible life as a 10.” This is a widely used measure of general life satisfaction. And we know that societal factors such as gross domestic product per capita, extensiveness of social services, freedom from oppression, and trust in government and fellow

... (read more)
8
JoelMcGuire
3y
I haven't explored this in depth, but it's worth stressing that this indicates that measures of meaning appear to lead to a much more counterintuitive ranking of countries than LS or happiness. If meaning matters more to well-being than happiness or life satisfaction, then we are probably very, very wrong about what makes a life go well. 

Interesting, especially that Togo and Senegal are top of the ranking! I'd imagine the Togolese and Senegalese are having quite a lot of children as well.

It has been suggested that people are succumbing to a focusing illusion when they think that having children will make them happy, in that they focus on the good things without giving much thought to the bad.

Worth noting that you might get increased meaningfulness in exchange for the lost happiness, which isn't necessarily an irrational trade to make. E.g. Robin Hanson:

Stats suggest that while parenting doesn’t make people happier, it does give them more meaning. And most thoughtful traditions say to focus more on meaning than happiness. Meaning is how you

... (read more)
9
AGB
3y
FWIW, I think this accidentally sent this subthread off on a tangent because of the phrasing of 'in exchange for the lost happiness'.

My read of the stats, similar to this Vox article and to what Robin actually said, is that people with children (by choice) are neither more nor less happy on average than childless people (by choice), so any substantial boost to meaning should be seen as a freebie, rather than something you had to give up happiness for.

I think there's a related error where people look at the costs of having children (time, money, etc.) and conclude that it's not worth it if the children aren't even making you happy at the end of all that. But this doesn't make sense, at least from a selfish perspective: the parents in these studies were also paying all those costs, their childless counterparts were not, and yet the bottom line was essentially no overall effect. This suggests that children are either providing something which makes up for these costs, or that the costs are not as big as people sometimes make out (my suspicion as a father of two is that it's a bit of both). And so, as Vox put it:
6
JackM
3y
Yeah, I agree that trading off happiness for meaning can make sense. I would just point out the following from the article I linked to: I'm not sure how selective the author may (or may not) be being here, and there could certainly be confounding variables that aren't controlled for in the studies (I haven't looked at them so can't really say). The reason I draw out that quote is that 'life satisfaction' may be the best overall measure of wellbeing we have, and it should incorporate 'meaning' to some extent, so that Di Tella study should be concerning.

It would be cool for someone to do an in-depth review of the evidence on how children impact wellbeing. Maybe I will, if I find the time...

Thanks. It looks to me that much of what's being described at these links is about the atmosphere among the students at American universities, which then also starts affecting the professors there. That would explain my confusion, since a large fraction of my academic friends are European, so largely unaffected by these developments.

there could be a number of explanations aside from cancel culture not being that bad in academia.

I do hear them complain about various other things though, and I also have friends privately complaining about cancel culture in non-academic contexts, so I'd generally expect this to come up if it were an issue. But I could still ask, of course.

We also discussed some possible reasons for why there might be a disappointing future in the sense of having a lot of suffering, in sections 4-5 of Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. A few excerpts:

4.1 Are suffering outcomes likely?

Bostrom (2003a) argues that given a technologically mature civilization capable of space colonization on a massive scale, this civilization "would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living",
... (read more)
1
Jim Buhler
3y
Michael's definition of risks of disappointing futures doesn't include s-risks though, right?  I guess we get something like "risks of negative (or nearly negative) future" adding up the two types.

yet academia is now the top example of cancel culture

I'm a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don't think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

I have lots of friends in academia and follow academic blogs etc., and basically don't hear any of them talking about cancel culture within that context. I did recently see a philosopher... (read more)

I’m a little surprised by this wording? Certainly cancel culture is starting to affect academia as well, but I don’t think that e.g. most researchers think about the risk of getting cancelled when figuring out the wording for their papers, unless they are working on some exceptionally controversial topic?

Professors are already overwhelmingly leftists or left-leaning (almost all conservatives have been driven away or self-selected away), and now even left-leaning professors are being canceled or fearful of being canceled. See:

... (read more)

On the positive side, a recent attempt to bring cancel culture to EA was very resoundingly rejected, with 111 downvotes and strongly upvoted rebuttals.

That cancellation attempt was clearly a bridge too far. EA Forum is comparatively a bastion of free speech (relative to some EA Facebook groups I've observed and as we've now seen, local EA events), and Scott Alexander clearly does not make a good initial target. I'm worried however that each "victory" by CC has a ratcheting effect on EA culture, whereas failed cancellations don't really matter in the long run, as CC can always find softer targets to attack instead, until the formerly hard targets have been isolated and weakened.

Honestly I'm not sure what... (read more)

I don't know, but I get the impression that SWB questions are susceptible to framing effects in general: for example, Biswas-Diener & Diener (2001) found that when people in Calcutta were asked for their life satisfaction in general, and also for their satisfaction in 12 subdomains (material resources, friendship, morality, intelligence, food, romantic relationship, family, physical appearance, self, income, housing, and social life), they gave on average a slightly negative rating for the global satisfaction, while also giving positive ... (read more)

In particular, Elon Musk claims that BCIs may allow us to integrate with AI such that AI will not need to outcompete us (Young, 2019). It is unclear at present by what exact mechanism a BCI would assist here, how it would help, whether it would actually decrease risk from AI, or if it is a valid claim at all. Such a ‘solution’ to AGI may also be entirely compatible with global totalitarianism, and may not be desirable. The mechanism by which integrating with AI would lessen AI risk is currently undiscussed; and at present, no serious academi
... (read more)

Let's look at some of your references. You say that Scott has endorsed eugenics; let's look up the exact phrasing (emphasis mine):

Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.

"I don't like this, though it would probably be better than the even worse situation that we have today" isn't exact... (read more)

-47
Effective_Altruism_Person
4y

Also the sleight of hand where the author implies that Scott is a white supremacist, and supports this not by referencing anything that Scott said, but by referencing things that unrelated people hanging out on the SSC subreddit have said and which Scott has never shown any signs of endorsing. If Scott himself had said anything that could be interpreted as an endorsement of white supremacy, surely it would have been mentioned in this post, so its absence is telling.

As Tom Chivers recently noted:

It’s part of the SSC ethos that “if you don
... (read more)
Not to be rude, but what context do you recommend would help for interpreting the statement, "I like both basic income guarantees and eugenics," or describing requiring poor people to be sterilized to receive basic income as "probably better than what we have right now?"

The part from the middle of that excerpt that you left out certainly seems like relevant context: "Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together – making the income conditional ... (read more)

Malevolent humans with access to advanced technology—such as whole brain emulation or other forms of transformative AI—could cause serious existential risks and suffering risks.

Possibly relevant: Machiavellians Approve of Mind Upload Technology Directly and Through Utilitarianism (Laakasuo et al. 2020), though it mainly tested whether machiavellians express moral condemnation of mind uploading, rather than their interest directly.

In this preregistered study, we have two novel findings: 1) Utilitarian moral preferences are strongly and psyc
... (read more)

You seem to be working under the assumption that we have either emotional or logical motivations for doing something. I think that this is mistaken: logic is a tool for achieving our motivations, and all of our motivations ultimately ground in emotional reasons. In fact, it has been my experience that focusing too much on trying to find "logical" motivations for our actions may lead to paralysis, since absent an emotional motive, logic doesn't provide any persuasive reason to do one thing over another.

You said that people act altruistically ... (read more)

My perspective here is that many forms of fairness are inconsistent, and fall apart on significant moral introspection as you try to make your moral preferences consistent. I think the skin-color thing is one of them, which is really hard to maintain as something that you shouldn't pay attention to, as you realize that it can't be causally disentangled from other factors that you feel like you definitely should pay attention to (such as the person's physical strength, or their height, or the speed at which they can run).

I think that a sens... (read more)

On the other hand, there are also arguments for why one should work to prevent extinction even if one did have the kind of suffering-focused view that you're arguing for; see e.g. this article. To briefly summarize some of its points:

If humanity doesn't go extinct, then it will eventually colonize space; if we don't colonize space, it may eventually be colonized by an alien species with even more cruelty than us.

Whether alternative civilizations would be more or less compassionate or cooperative than humans, we can only guess. We may however
... (read more)

Do you have a short summary of why he thinks that someone answering the question of "would you have preferred to die right after childbirth?" with "No" is not strong evidence that they should have been born?

I don't know what Benatar's response to this is, but - consider this comment by Eliezer in a discussion of the Repugnant Conclusion:

“Barely worth living” can mean that, if you’re already alive and don’t want to die, your life is almost but not quite horrible enough that you would rather commit suicide than endure. But if
... (read more)
In the past [EAF/FRI] have been rather negative utilitarian, which I have always viewed as an absurd and potentially dangerous doctrine. If you are interested in the subject I recommend Toby Ord’s piece on the subject. However, they have produced research on why it is good to cooperate with other value systems, making me somewhat less worried.

(I work for FRI.) EAF/FRI is generally "suffering-focused", which is an umbrella term covering a range of views; NU would be the most extreme form of that, and some of us do lean that way, but many disagree w... (read more)

The following is roughly how I think about it:

If I am in a situation where I need help, then for purely selfish reasons, I would prefer people-who-are-capable-of-helping-me to act in a way that has the highest probability of helping me, because I obviously want my probability of getting help to be as high as possible.

Let's suppose that, as in your original example, I am one of three people who need help, and someone is thinking about whether to act in a way that helps one person, or to act in a way that helps two people. Well, if they act in a way th... (read more)
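Spelling out the arithmetic behind this argument (an illustrative sketch, not text from the original comment; it assumes you are equally likely to be any of the three people who need help):

% If the helper aids k of the n = 3 people, and each is equally likely to be you:
\[
P(\text{you are helped}) = \frac{k}{3},
\qquad
\frac{2}{3} > \frac{1}{3}
\]
% so, purely selfishly, you prefer helpers who follow the policy of
% helping the greater number.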

0
Jeffhe
6y
Hi Kaj, Thanks for your response. Please refer to my conversation with brianwang712. It addresses this objection!

Hi Daniel,

you argue in section 3.3 of your paper that nanoprobes are likely to be the only viable route to WBE, because of the difficulty in capturing all of the relevant information in a brain if an approach such as destructive scanning is used.

You don't however seem to discuss the alternative path of neuroprosthesis-driven uploading:

we propose to connect to the human brain an exocortex, a prosthetic extension of the biological brain which would integrate with the mind as seamlessly as parts of the biological brain integrate with each other. [...] we ma

... (read more)
1
Daniel_Eth
6y
Neuroprosthesis-driven uploading seems vastly harder for several reasons:

• you'd still need to understand in great detail how the brain processes information (if you don't, you'll be left with an upload that, while perhaps intelligent, would not act like the person acted, and perhaps even so drastically differently that it might be better to imagine it as a form of NAGI than as WBE)

• integrating the exocortex with the brain would likely still require nanotechnology able to interface with the brain

• ethical/regulatory hurdles here seem immense

I'd actually expect that in order to understand the brain well enough for neuroprosthesis-driven uploading, we'd still likely need to run experiments with nanoprobes (for the same arguments as in the paper: lots of the information processing happens on the sub-cellular level - this doesn't mean that we have to replicate this information processing in a biologically realistic manner, but we likely will need to at least understand how the information is processed).

Also, one forthcoming paper of mine released as a preprint; and another paper that was originally published informally last year but published in somewhat revised and peer-reviewed form this year:

Both were done as part of my research for the Foundational Research Institute; maybe include us in yo... (read more)

There seem to be a lot of leads that could help us figure out the high-value interventions, though:

i) knowledge about what causes it and what has contributed to changes in it over time

ii) research directions that could help further improve our understanding of what causes it / what doesn't cause it

iii) various interventions which already seem like they work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group... (read more)

3
itaibn
7y
First, I consider our knowledge of psychology today to be roughly equivalent to that of alchemists when alchemy was popular. Like with alchemy, our main advantage over previous generations is that we're doing lots of experiments and starting to notice vague patterns, but we still don't have any systematic or reliable knowledge of what is actually going on. It is premature to seriously expect to change human nature. Improving our knowledge of psychology to the point where we can actually figure things out could have a major positive effect on society. The same could be said for other branches of science. I think basic science is a potentially high-value cause, but I don't see why psychology should be singled out.

Second, this cause is not neglected. It is one of the major issues intellectuals have been grappling with for centuries or more. Framing the issue in terms of "tribalism" may be a novelty, but I don't see it as an improvement.

Finally, I'm not saying that there's nothing the effective altruism community can do about tribalism. I'm saying I don't see how this post is helping.

edit: As an aside, I'm now wondering if I might be expressing the point too rudely, especially the last paragraph. I hope we manage to communicate effectively in spite of any mistakes on my part.

I think whether suffering is a 'natural kind' is prior to this analysis: e.g., to precisely/objectively explain the functional role and source of something, it needs to have a precise/crisp/objective existence.

I take this as meaning that you agree that accepting functionalism is orthogonal to the question of whether suffering is "real" or not?

If it is a placeholder, then I think the question becomes, "what would 'something better' look like, and what would count as evidence that something is better?"

What something better would look lik... (read more)

2
MikeJohnson
7y
An additional note on this: I'd propose that if we split the problem of building a theory of consciousness up into subproblems, the task gets a lot easier. This does depend on elegant problem decomposition. Here are the subproblems I propose: http://opentheory.net/wp-content/uploads/2016/11/Eight-Problems2-1.png

A quick-and-messy version of my framework:

* (1) figure out what sort of ontology you think can map to both phenomenology (what we're trying to explain) and physics (the world we live in);
* (2) figure out what subset of that ontology actively contributes to phenomenology;
* (3) figure out how to determine the boundary of where minds stop, in terms of that-stuff-that-contributes-to-phenomenology;
* (4) figure out how to turn the information inside that boundary into a mathematical object isomorphic to phenomenology (and what the state space of the object is);
* (5) figure out how to interpret how properties of this mathematical object map to properties of phenomenology.

The QRI approach is:

* (1) Choice of core ontology -> physics (since it maps to physical reality cleanly, or some future version like string theory will);
* (2) Choice of subset of core ontology that actively contributes to phenomenology -> Andres suspects quantum coherence; I'm more agnostic (I think Barrett 2014 makes some good points);
* (3) Identification of boundary condition -> highly dependent on (2);
* (4) Translation of information in partition into a structured mathematical object isomorphic to phenomenology -> I like how IIT does this;
* (5) Interpretation of what the mathematical output means -> Probably, following IIT, the dimensional magnitude of the object could correspond with the degree of consciousness of the system. More interestingly, I think the symmetry of this object may plausibly have an identity relationship with the valence of the experience.

Anyway, certain steps in this may be wrong, but that's what the basic QRI "full stack" approach looks l
4
Brian_Tomasik
7y
I don't. :) I see lots of free parameters for what flavor of functionalism to hold and how to rule on the Aaronson-type cases. But functionalism (perhaps combined with some other random criteria I might reserve the right to apply) perfectly captures my preferred way to think about consciousness. I think what is unsatisfactory is that we still know so little about neuroscience and, among other things, what it looks like in the brain when we feel ourselves to have qualia.
4
MikeJohnson
7y
Ah, the opposite actually - my expectation is that if 'consciousness' isn't real, 'suffering' can't be real either. Thanks, this is helpful. :)

The following is tangential, but I thought you'd enjoy this Yuval Harari quote on abstraction and suffering: