I feel like it's more relevant what a person actually believes than whether they think of themselves as uncertain. Moral certainty seems directly problematic (in terms of risks of recklessness and unilateral action) only when it comes together with moral realism: If you think you know the single correct moral theory, you'll consider yourself justified in overriding other people's moral beliefs and thwarting the goals they've been working towards.
By contrast, there seems to me to be no clear link from "anti-realist moral certainty in some subjectivist axiology" ...
In general (whether realist or anti-realist), there is "no clear link" between axiological certainty and oppressive behavior, precisely because there are further practical norms (e.g. respect for rights, whether instrumentally or non-instrumentally grounded) that mediate between evaluation and action.
You suggest that it "seems only intuitive/natural" that an anti-realist should avoid being "too politically certain that what they believe is what everyone ought to believe." I'm glad to hear that you're naturally drawn to liberal tolerance. But many human bei...
This kind of reminds me of a psychological construct called the Militant Extremist Mindset. Roughly, the mindset is composed of three loosely related factors: proviolence, vile world, and utopianism. The idea is that elevated levels in all three factors are most predictive of fanaticism. I think (total) utilitarianism/strong moral realism/lack of uncertainty/visions of hedonium-filled futures fall into the utopian category. I think EA is pretty pervaded by vile world thinking, including reminders about how bad the world is/could be and cynicism abou...
Sorry, I hate it when people comment on something that has already been addressed.
FWIW, though, I had read the paper the day it was posted on the GPI fb page. At that time, I didn't feel like my point about "there is no objective axiology" fit into your discussion.
I feel like even though you discuss views that are "purely deontic" instead of "axiological," there are still some assumptions from the axiology-based framework that underlie your conclusion about how to reason about such views. Specifically, when explaining why a view says that it would be wrong ...
I feel like you're trying to conflate "wrong or heartless" (or "heartless-and-prejudiced," as I called it elsewhere) with "socially provocative" or "causes outrage to a subset of readers."
That feels like misdirection.
I see two different issues here:
(1) Are some ideas that cause social backlash still valuable?
(2) Are some ideas shitty and worth condemning?
My answer is yes to both.
When someone expresses a view that belongs in (2), pointing at the existence of (1) isn't a good defense.
You may be saying that we should be humble and can't tell the dif...
It seems to me like there's no disagreement among people familiar with Hanania that his views were worse in the past. That's a red flag. Some people say he's changed his views. I'm not per se against giving people second chances, but it seems suspicious to me that someone who admits that they've had really shitty racist views in the past now continues to focus on issues where they – even according to other discussion participants here who defend him – still seem racist.
Agreed. I think the 2008-10 postings under the Hoste pseudonym are highly relevant insofar ...
+1
If even some of the people defending this person start with "yes, he's pretty racist," that makes me think David Mathers is totally right.
Regarding cata's comment:
But I think that the modern idea that it's good policy to "shun" people who express wrong (or heartless, or whatever) views is totally wrong, and is especially inappropriate for EA in practice, the impact of which has largely been due to unusual people with unusual views.
Why move from "wrong or heartless" to "unusual people with unusual views"? None of the people who were important to EA histor...
We can't use the argument that it is better from an impartial view to focus on existing-and-sure-to-exist people/beings because of the classic 'future could be super-long' argument.
I'd say the two are tied contenders for "what's best from an impartial view."
I believe the impartial view is under-defined for cases of population ethics, and both of these views are defensible options in the sense that some morally-motivated people would continue to endorse them even after reflection in an idealized reflection procedure.
For fixed population contexts, the ...
In my post Population Ethics Without [An Objective] Axiology, I argued that person-affecting views are underappreciated among effective altruists.
Here’s my best attempt at a short version of my argument:
So is it basically saying that many people follow different types of utilitarianism (I'm assuming this means the "ambitious moralities")
Yes to this part. ("Many people" maybe not in the world at large, but especially in EA circles where people try to orient their lives around altruism.)
Also, I'm here speaking of "utilitarianism as a personal goal" rather than "utilitarianism as the single true morality that everyone has to adopt."
This distinction is important. Usually, when people speak about utilitarianism, or when they write criticisms of utilitari...
I realized the same thing and have been thinking about writing a much shorter, simplified account of this way of thinking about population ethics. Unfortunately, I haven't gotten around to writing that.
I think the flowchart in the middle of the post is not a terrible summary to start at, except that it doesn't say anything about what "minimal morality" is in the framework.
Basically, the flowchart shows how there are several defensible "ambitious moralities" (axiological frameworks such as the ones in various types of utilitarianism, which specify how someo...
I don't want to spend too much time on this so won't answer all points, but I wanted to point you to some examples of the evasiveness I mentioned – saying things like, "I don't know what this is referring to":
I'd be interested to hear examples (genuinely)
See the transcript here: the word "referring" occurs 30 times and at least a couple of those times strike me as the weasel-like suspicious behavior of someone whose approach to answering questions is "never admit to anything unless you learn that they already have the evidence." So, he always ans...
I downvoted the question.
I'd have found it okay if the question had explicitly asked for just good summaries of the trial coverage or the sentencing report.
(E.g., there's the Twitter handle Inner City Press, which was tweeting transcript summaries of every day of the trial, or the Carl Reilly YouTube channel for daily summaries of the trial. And there's the more recent sentencing report that someone here linked to.)
Instead, the question came across as though there's maybe a mystery here for which we need the collective smarts and wisdom of the EA forum.
There are...
What I meant by "he didn't take it back" is a situation as follows:
The prosecution asks him if he made certain claims in the media. SBF says "yes" or "it appears that way" or whatever. The prosecution at some other point in the trial (maybe days earlier, maybe afterwards) asks some specific details about how FTX accounts were structured and how money was moved that contradicts what SBF said in the media. At some third point in the trial, they ask him if he deliberately lied to the media/gave false accounts about how things worked, and he said no...
It wasn't evasiveness, in my view.
I agree that some of his behavior was just unproblematic "being very literal about answers."
But the thing I mean by evasiveness was more stuff like:
I think people who have followed this unusually closely should be encouraged to argue for what they think is right if they have a strong take, but I just don't think this theory is likely. An innocent person would be more likely to talk more freely about things/be less evasive, and they'd probably have a better explanation of how it is that they could have missed an $8 billion hole in the bank. It's suspicious if you need to make the same move ("he could've not seen this" or "he could have not looked closely at that") multiple times to preserve the chance of...
That's a good point. If there weren't a convincing story for why more donations weren't made or at least set up to be made soon, I'd say your point counts for quite a lot!
However, in this specific case, I feel like there are good reasons why I wouldn't expect that many donations to be made right away:
I think a conclusion that he acted from mixed motives is better supported by the evidence.
I disagree, but it obviously depends what exactly we're discussing.
Was his judgment for not coming clean when things were only starting to get bad compromised by not wanting to lose his influence, money, and reputation? Probably!
However, do I think he made some of his most consequential decisions to a significant degree because he thought he could get nice things for himself that way? I actually don't think so!
Making big decisions for reasons other than impact w...
Thank you for engaging with my post!! :)
Also I'm not sure how I would form object-level moral convictions even if I wanted to. No matter what I decide today, why wouldn't I change my mind if I later hear a persuasive argument against it? The only thing I can think of is to hard-code something to prevent my mind being changed about a specific idea, or to prevent me from hearing or thinking arguments against a specific idea, but that seems like a dangerous hack that could mess up my entire belief system.
I don't think of "convictions" as anywhere near as str...
This comment I just made on Will Aldred's Long Reflection Reading List seems relevant for this topic.
Overall, I'd say there's for sure going to be some degree of moral convergence, but it's often overstated, and whether the degree of convergence is strong enough to warrant going for the AI strategies you discuss in your subsequent posts (e.g., here) would IMO depend on a tricky weighting of risks and benefits (including the degree to which alternatives seem promising).
...Does moral realism imply the convergent morality thesis? Not strictly, although it’
Many of those posts in the list seem really relevant to me for the cluster of things you're pointing at!
On some of the philosophical background assumptions, I would consider adding my ambitiously-titled post The Moral Uncertainty Rabbit Hole, Fully Excavated. (It's the last post in my metaethics/anti-realism sequence.)
Since the post is long and, as it says, doesn't work maximally well as a standalone piece (without two other posts from earlier in my sequence), it didn't get much engagement when I published it, so I feel like I should do some advertising...
The way I envision him (obviously I don't know and might be wrong):
Related to your point 1 :
I think one concrete complexity-increasing ingredient that many (but not all) people would want in a utopia is for one's interactions with other minds to be authentic – that is, they want the right kind of "contact with reality."
So, something that would already seem significantly suboptimal (to some people at least) is lots of private experience machines where everyone is living a varied and happy life, but everyone's life in the experience machines follows pretty much the same template and other characters in one's simulatio...
Here are (finally) some thoughts:
I'm not sure why your comment was downvoted. I think it's a perfectly reasonable request since, as you say correctly in other comments, people who don't know enough to form their own opinion can't just trust that other forum commenters with direct opinions are well-calibrated/have decent people judgment about this.
I started writing down some points, but it's not easy and I don't want to do it in a half-baked fashion and then have readers go "oh, those data points and interpretations all sound pretty spurious, if that's all you have, it seems weird that you...
Here are (finally) some thoughts:
I agree that the women affected are what this is primarily about. But there's also an issue with not wanting to ascribe to anyone how we think they likely feel, without knowing much about them. Like, maybe at least some of the women who had negative experiences have nuanced feelings that aren't best described as "I feel bad/invalidated whenever I see someone say positive things about Owen, even if they take care not to thereby downplay that what he did was unacceptable." Maybe some feel things like, "this stuff was messed up and really needed to be...
I agree with those points and they seem important.
I didn't write this further above, but thinking about it now, I think there was also another dimension that fed into me thinking of this case as "atypical." (Maybe this isn't the best wording and these things are more typical than we think; what I'm trying to gesture at is "the sort of thing that has high chances of getting fixed.") In any case, when I think of cases of "harm through neglect," where someone isn't ill-intentioned but still has a pattern of making others uncomfortable, some cases that...
I view power differentials, workplace dating, etc., as something that's risky/delicate, but it can be fine if done carefully. Even if something goes poorly in one instance, it doesn't necessarily mean that a person did something immoral.
However, when there's a pattern of several people complaining, that's indicative of some kind of problem.
It likely means that either the person was particularly likely to make people really uncomfortable with their advances when they made them, or that the person made a ton of advances in professional contexts (and a small po...
Not sure if everyone does it this way, but I find agree/disagree votes more relevant to what you're saying than mere upvotes. In cases like this, I would use agree/disagree votes if I know a lot about either Owen directly, or about Jonas's judgment in situations like this.* Even though it's technically anonymous, I think of agree/disagree votes in situations like this as "staking a small part of my own reputation on the claims in the comment." I'd use upvotes more liberally and upvote things that sound potentially important or insightful even if I'm st...
For reasons I went into here, I think it often sets things up for vexed discussion dynamics when we're criticizing how others are reacting or aren't reacting, and whether they are emphasizing the right points with the appropriate degree of strength. (I do this myself occasionally, and there isn't anything wrong with doing it, per se. I'm just pointing out why we're doomed to have an unpleasant discussion experience.)
I would even add that assuming that the community will conflate Owen and Epstein's case is patronizing and far-fetched;
I feel like you're bein...
It's important to point out how this case is atypical
I want to distinguish between "he is not the kind of deliberate predator you typically think of when you hear about sexual harassment" and "he is different than most people who sexually harass others".
I think that "well-meaning person does damage through neglect rather than malice or deliberate disregard" is a fairly typical case; maybe more common than deliberate predation. You can do a lot of damage through neglect alone, especially when you underestimate your power in a situation. So while...
[T]here is a certain irony to see these two people coming to defend Owen while the community health head, Julia, admits to a certain level of bias when handling this affair since he was her friend.
Jonas's comment includes statements like "This obviously doesn’t make his past behavior any less bad and doesn’t excuse any of it" and "I think a temporary ban is important, both as an incentive against bad behavior and as a precaution so the harms don’t continue. That said, two years are a long time, [...]"
So, I don't think this would be repeating the mistakes t...
Edit to add: I edited my original comment to hopefully address these misunderstandings
Yep - indeed - I assumed it's obvious to everyone that it's a bad idea to make [things that are perceived as] unwanted romantic or sexual advances towards people, and that serious action should be taken if someone receives repeated complaints about that.
The intentions of my comment were to give information that might be helpful + informative for people deciding how to best achieve a goal of something like "make the community safe and welcoming for people in general,...
Your "most mothers" example is confounded because mothers are related to their children. They wouldn't readily accept death if it meant that someone else's infant got to live.
Still, one can argue from intuition that there must be a reason to value the lives of babies over mere sperm cells.
That speaks in favor of a gradual increase of intrinsic moral relevance as the infant becomes more aware of the world and its own point of view in it, forming life plans and so on.
I assumed that what we were talking about is whether an adult person's life is equally wort...
This is discussed under the "argument from potential" in ethics. One problem with that argument is that if potential matters when babies have it, it seems like it should also matter when other things have it. For instance, a fertilized embryo, a man and woman in a room who could start making a baby, or even a pile of organic matter that, with the help of highly advanced future technology, could be assembled correctly into a fully functioning adult (let's suppose we had such technology now: would we then think piles of organic matter are similarly important as ex...
I think the questions you're raising are important. I got kind of triggered by the issue I pointed out (and the fact that it's something that has already been discussed in the comments of the other post), so I downvoted the comment overall. (Also, just because Chloe is currently anonymous doesn't mean it's risk-free to imply misleading and damaging things about her – anonymity can be fragile.)
There were many parts of your comment that I agree with. I agree that we probably shouldn't have a norm that guarantees anonymity unconditionally. (But the anonymity ...
So, what do you all think?
I continue to think that something went wrong for people to come away with takes that lump together Alice and Chloe in these ways.
Not because I'm convinced that Alice is as bad as Nonlinear makes it sound, but because, even based on Nonlinear's portrayal, Chloe is portrayed as having had a poor reaction to the specific employment situation, and (unlike Alice) not as having a general pattern/history of making false/misleading claims. That difference matters immensely regarding whether it's appropriate to warn future potential...
I do understand where people are coming from defending Nonlinear. Even if, like me, someone thinks there's a lot about them that didn't go well or that doesn't look good in terms of their processing and reflection skills, it's still important that the "flagship accusations" [edit: this was a poor choice of words, I should have said "smoking-gun, most outrageous-sounding examples of the accusations." The original post by Ben – search for "summary of my epistemic state" here – listed four bullet points as the main concerns, and I think 3/4 of those still see...
Note that I didn't go through all the pages of the appendix looking for something particularly worthy of critique. Instead, I remembered that Chloe's comments in her own words seemed quite compelling to me three months ago, so I wanted to re-read it and compare it to what Nonlinear wrote about this incident. When I did so, I thought "wow this is worse than I thought; this warrants its own comment." Note that this is one of the only times I went back to source material and compared it directly to Nonlinear's appendix.
...I feel the same way about what happened
I find it interesting and revealing to look at how Nonlinear re-stated Chloe's initial account of an incident into a shorter version.
First, here's their shortened version (by Nonlinear):
...One of Chloe’s jobs was to organize fun day trips (which she’d join us on). In fact, one of her unofficial titles was Fun Lord of Nonlinear, First of Her Name. One day, spontaneously, we decided to go on a trip to St. Barths. Emerson asked her to do her usual job, and she said “It’s a weekend” and he said, “But you like organizing fun trips!” - she had said so many times -
This comment sounds very reasonable, but I think it really isn't. Not because anything you said is false – I agree that the summary left out relevant sections – but because the standard is unreasonably high. This is a 134-page document. I expect that you could spend hours poking one legitimate hole after another into how they were arguing or paraphrasing.
Since I expect that you can do this, I don't think it makes sense to update based on you demonstrating it.
I feel the same way about what happened itself. It seems like Chloe really wanted to have a free day, but E...
Yeah.
Let's assume Nonlinear are completely right about how they describe Chloe and Alice. I'd summarize their perspective as follows:
Alice-as-described-by-Nonlinear is likely to be destructive in other contexts as well because that is a strong pattern with her generally. :(
By contrast,
Chloe-as-described-by-Nonlinear is significantly less likely to be destructive in other contexts. While Nonlinear claim that Chloe is entitled, it's still the case that her beef with them is largely around the tensions of living together (primes her to expect equal-ness...
I don't know who Chloe is in real life (nor Alice for that matter), but based on what I've read, it seems really really off to me to say that she has the potential to be destructive to others in the community. [Edit: I guess you're not outright saying that, but I'm reading your comment as "if all that Nonlinear are saying about Chloe is true, then...," and my take on that is that apart from their statements of the sort of "Chloe is so mentally unhealthy that she makes things up" (paraphrased), none of the concrete claims are obviously red flags to me. It's...
It's a fair point that we should treat Alice and Chloe separately and that deanonymizing one need not imply that we should deanonymize the other.
Why are you saying "these orgs"? I feel like even though it's common among EAs to use money to buy time and productivity, combining world travels and living in luxury locations with impactful work is something that was unique to Nonlinear as far as I'm aware.
Also, why are you assuming it's "donated money" that was used for this, rather than them having earmarked funding for specific projects while they use Emerson's savings (he seems rich or has rich parents) for the luxury expenses? I mean, sure, earmarking is a fuzzy concept, but are you saying that people wit...
On this point, your reply seems very compelling to me. ((Though it's at least imaginable that Chloe would point out ways in which this is misleading – e.g., maybe her bf had "EA potential" or got along well with Emerson or you and some other friends of hers didn't, and maybe someone made comments about her other friends. Idk.))
I think it's important to not hold people to unreasonable standards when they try to present a lot of evidence. If this (the invites allowed list) is one of only a few instances where it's overstated how important a particular piece of...
Overall, there just feels like too little engagement with the possibility that Chloe's experience was maybe predictable and not out of the ordinary, i.e., that Chloe wasn't entitled or disgruntled to react the way she did.
To give some more context on this:
Let's take the claim that it was discouraged to talk to friends or family (this was one of the things where I thought Nonlinear's reply seemed more convincing than I would have expected, but it still leaves me with uncertainty rather than settling everything for sure).
Nonlinear links to a screenshot wit...
This on its own, maybe. But Chloe's boyfriend was invited to travel with us for 2 of the 5 months she was with us, and we were about to invite him to travel with us indefinitely, free of charge. That's a hard to fake signal that she was more than welcome to invite friends and family.
We also show text messages of us encouraging them to invite people over. We even have text messages showing me encouraging Chloe to see her boyfriend sooner and her saying no. Alice invited multiple friends to travel with us. When Chloe quit one of her friends was visitin...
I read this post and about half of the appendix.
(1) I updated significantly in the direction of "Nonlinear leadership has a better case for themselves than I initially thought" and "it seems likely to me that the initial post indeed was somewhat careless with fact-checking."
(I'm still confused about some of the fact-checking claims, especially the specific degree to which Emerson flagged early on that there were dozens of extreme falsehoods, or whether this only happened when Ben said that he was about to publish the post. Is it maybe possible that Emerso...
...I still find Chloe's broad perspective credible and concerning [...] it's begging the question to self-describe your group with "Your group has a really optimistic and warm vibe. [...]" some of the short-summary replies to Chloe seemed uncharitable to the point of being mean. [...] I thought it's simply implausible that the most Nonlinear leadership could come up with in terms of "things we could've done differently" is stuff like "Emerson shouldn't have snapped at Chloe during that one stressful day" [...] Even though many of the things in my elaboration of
I agree it can be okay/excusable to give in to the urge of taking digs at people who you think have unfairly harmed you. At the same time, I think it can make a big difference whether someone is doing this because of (1) or (2) of the following:
(1) they perceive situations like this as a social game about who manages to get the audience on their side, within which tactics like making insinuations about others' character or repeating hearsay is fair game as long as it works / if the audience will think it's okay/excusable/justified, etc.
or whether it's ...
I'm excited about this!
One question: I notice a bit of a tension between the EA justification of this project ("improving EA productivity") and the common EA mental health issues around feeling pressure to be productive. I know CBT is more about providing thinking tools rather than giving concrete advice on what to do/try, but might there be a risk that people who take part will feel like they are expected to show a productivity increase? Would you still recommend that EA clients take time off generously if someone is having burnout symptoms? I'm curious to hear your thoughts on this.
By the way, this discussion (mostly my initial comment and what it's in reaction to; not so much specifics about CEA history) reminded me of this comment about the difficulty of discussing issues around culture and desired norms. Seems like maybe we'd be better off discussing what each of us thinks would be best steps forward to improve EA culture or find a way to promote some kind of EA-relevant message (EA itself, the importance of AI alignment, etc.) and do movement building around that so it isn't at risk of backfiring.
Interesting; I didn't remember this about Tara.
Two data points in the other direction:
Yeah, I should've phrased (3) in a way that's more likely to pass someone like habryka's Ideological Turing Test.
Basically, I think if EAs were even just a little worse than typical people in positions of power (on the dimension of integrity), that would be awful news! We really want them to be significantly better.
I think EAs are markedly more likely to be fanatical naive consequentialists, which can be one form of "lacking in integrity" and is the main thing* I'd worry about in terms of me maybe being wrong. To combat that, you need to be above average i...
That's indeed shocking, and now that you mention it, I also remember the Pareto fellowship Leverage takeover attempt. Maybe I'm too relaxed about this, but it feels to me like there's no nearby possible world where this situation would have kept going? Pretty much everyone I talked to in EA always made remarks about how Leverage "is a cult" and the Leverage person became CEA's CEO not because it was the result of a long CEO search process, but because the previous CEO left abruptly and they had few immediate staff-internal options. The CEO (edit: CEA!) boa...
I think the self-correction mechanism was not very strong. If Tara (who was also strongly supportive of the Leverage faction, which is why she placed Larissa in charge) had stayed, I think it would have been the long-term equilibrium of the organization. The primary reason why the equilibrium collapsed is that Tara left to found Alameda.
I agree with what you say in the last paragraph, including the highlighting of autonomy/placing value on it (whether in a realist or anti-realist way).
I'm not convinced by what you said about the effects of belief in realism vs anti-realism.
Sure, but that feels like it's begging the question.
Let's grant that the people we're comparing already have liberal intuitions. After all, this discussion started in a ... (read more)