
River

350 karma · Joined Jul 2022

Posts (2)


Comments (25)

Are you familiar with any concerns about Nonlinear not raised in Ben's post? Ben seems particularly concerned that Nonlinear creates an epistemic environment where he wouldn't know if there were more. If there are, that seems pretty central to confirming Ben's concerns.

Thank you for sharing, Minh. I think this is one of the most important updates.

If our goal is (as I think it should be) only to figure out whether we want to interact with any of these people in the future, and not to exact retribution for past wrongs against third parties, then we don't need to know exactly what happened between Nonlinear and Alice and Chloe. That's good, since we probably never will. What does seem to be the case is this. (1) Everybody involved agrees that something went badly wrong in the relationships between Kat/Emerson and Alice/Chloe, though they may dramatically disagree about what. (2) Kat/Emerson have changed their behavior in a way that prevents a repeat. Your testimony is good evidence for 2. And given that, I don't think I will update much on whether I want to interact with them in the future. So thank you for your testimony.

(disclaimers: my past interactions with Kat have been positive but not extensive. I don't believe I have interacted with Emerson. And I was not asked to comment by anyone involved.)

I guess my fundamental question right now is what do we mean by intelligence? Like, with humans, we have a notion of IQ, because lots of very different cognitive abilities happen to be highly correlated in humans, and this allows us to summarize them all with one number. But different cognitive abilities aren't correlated in the same way in AI. So what do we mean when we talk about an AI being much smarter than humans? How do we know there even are significantly higher levels of intelligence to go to, since nothing much more intelligent than humans has ever existed? I'm not sure why people seem to assume that possible levels of intelligence just keep going.


My other question, related to the first, is how do we know that more intelligence, whatever we mean by that, would be particularly useful? Some things aren't computable. Some things aren't solvable within the laws of physics. Some systems are chaotic. So how do we know that more intelligence would somehow translate into massively more power in domains that we care about?

I do not like this. One of the fundamental premises of EA is to be neutral about who we are helping - people here, people there, people now, people later, all get weighted the same. Specifically setting out to help only Muslims therefore seems non-EA. If Muslims want to do it, I guess they have that right, but EA shouldn't be touching it.

Lastly and probably most significantly, there is obviously the loss of an additional individual who would likely have been economically productive over the course of their lifetime.

 

From a common-sensical point of view, it’s difficult to know exactly where to “draw the line”; it seems crazy to imagine a baby dying during labour as anything other than a rich, full potential life lost, but if we extend that logic too far backwards then we might imagine any moment that we are not reproducing to be costing one “life’s worth” of DALYs.

 

There seems to be an obvious route of inquiry to address this quandary, which is to ask what impact a stillbirth has on the number of children a woman has during her life. I imagine some nontrivial fraction of women who have stillbirths go on to become pregnant again in relatively short order, and end up having just as many children as they would have had the pregnancy succeeded. If, hypothetically, 90% of women who have stillbirths go on to have just as many children as they would have without the stillbirth, and 10% have one fewer child, then it seems straightforward to me that we should count a stillbirth as costing 0.1 lives. I don't know actual numbers about how stillbirths impact women's later reproductive choices, but presumably somebody has studied this.
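To make the arithmetic explicit, here is a minimal sketch using only the purely hypothetical 90/10 split above, and assuming each foregone birth counts as one full life:

$$\mathbb{E}[\text{lives lost per stillbirth}] = 0.9 \times 0 + 0.1 \times 1 = 0.1$$

With real data on completed family size after stillbirth, the same expected-value calculation would give whatever fraction the evidence supports.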

I agree on food. I was careless with my qualifications, sorry about that.

I think part of the difficulty here is that "wokism" refers to a genuine cluster of ideas and practices, but one without especially clear boundaries or a single easy definition.

What I do notice is that none of the ideas you listed, at least at the level of abstraction at which you listed them, are things that anyone, woke or anti-woke or anywhere in between, will disagree with. But I'll try to give some analysis of what I would understand to be woke in the general vicinity of these ideas. Note that I am not asserting any normative position myself, just trying to describe what I understand these words to mean.

I don't think veganism really has much to do with wokism. Whatever you think about EA event catering, it just seems like an orthogonal issue.

I suspect everyone would prefer that EA spaces be welcoming of trans people, but there may be disagreement on what exactly that requires on a very concrete level, or how to trade it off against other values. Should we start meetings by having everyone go around and give their pronouns? Wokism might say yes, other people (including some trans people) might say no. Should we kick people out of EA spaces for using the "wrong" pronouns? Wokism might say yes, others might say no, since that is a bad tradeoff against free speech and epistemic health.

I suspect everyone thinks reports of assault and harassment should be taken seriously. Does that mean that we believe all women? Wokism might say yes, others might say no. Does that mean that people accused should be confronted with the particular accusations against them, and allowed to present evidence in response? Wokism might say no, others might say yes, since good epistemics requires that.

I'm honestly not sure what specifically you mean by "so-called 'scientific' racism" or "scourge", and I'm not sure if that's a road worth going down.

Again, I'm not asserting any position myself here, just trying to help clarify what I think people mean by "wokism", in the hopes that the rest of you can have a productive conversation.

Synonyms might be "SJW" or "DEI".

You think that dating a coworker or whatever without sleeping with them is less likely to cause problems than the reverse? That does not ring true to me at all. It does ring of Christian purity culture, which I would not have expected to encounter in EA.

Is it true that other successful institutions generally have norms against dating within them? (I don't want to use the term "sleeping around", which feels derogatory in this particular context). My company only prohibits dating people in your chain of command, and I am certainly aware of relationships within the company that have not caused any objections or issues that I know of. Though my company is tens of thousands of people, with thousands in my building, so maybe it doesn't qualify as tight-knit. I also haven't perceived any of my friend groups as having a norm against dating. Family seems obviously different, because there is that incest norm, and that impossibility of stepping away on the off chance that things go really badly. Though again, maybe you have a family with different dynamics - to the best of my knowledge, I've never met a cousin's spouse's anything. Anyway, point is, I don't think it's actually true that the rest of society operates this way.
