All of xccf's Comments + Replies

Independent impressions

Good points.

We should try to track the uncertainty in our all-things-considered beliefs, and we should take a portfolio approach.

It's not enough to just track the uncertainty, you also have to have visibility into current resource allocation. The "defer if there's an incentive to do so" idea helps here, because if there's an incentive, that suggests someone with such visibility thinks there is an under-allocation.

Independent impressions

And this all-things-considered belief is what guides my research and career decisions.

A few arguments for letting your independent impression guide your research and career decisions instead:

  • If everyone in EA follows the strategy of letting their independent impression guide their research and career decisions, our distribution of research and career decisions will look like the aggregate of everyone's independent impressions, which is a decent first approximation for what our all-things-considered belief should be as a community. By contrast, if eve
... (read more)
I agree with your second and third arguments and your two rules of thumb. (And I thought about those second and third arguments when posting this and felt tempted to note them, but ultimately decided not to, in order to keep this more concise and keep chugging with my other work. So I'm glad you raised them in your comment.) I partially disagree with your first argument, for three main reasons:

  • People have very different comparative advantages (in other words, people's labour is way less fungible than their donations).
    • Imagine Alice's independent impression is that X is super important, but she trusts Bob's judgement a fair bit and knows Bob thinks Y is super important, and Alice is way more suited to doing Y. Meanwhile, Bob trusts Alice's judgement a fair bit. And they both know all of this. In some cases, it'll be best from everyone's perspective if Alice does Y and Bob does X. (This is sort of analogous to moral trade, but here the differences in views aren't just moral.)
    • Not in all cases! Largely for the other two reasons you note. All else held constant, it's good for people to work on things they themselves really understand and buy the case for. But I think this can be outweighed by other sources of comparative advantage.
    • As another analogy, imagine how much the economy would be impeded if people decided whether they overall think plumbing or politics or physics research is the most important thing in general and then pursued that, regardless of their personal skill profiles.
  • I also think it makes sense for some people to specialise much more than others in working out what our all-things-considered beliefs should be on specific things.
    • Some people should do macrostrategy research, others should learn how US politics works a
What we learned from a year incubating longtermist entrepreneurship

I think offering funding & advice causes more people to work with you, and the closer they are working with you, the larger the influence your opinion is likely to have on the question of whether they should shut down their project.

You can now apply to EA Funds anytime! (LTFF & EAIF only)

I don't think risk goes up linearly with time. Many people quit their PhDs when they aren't a good fit.

Fair enough.

Maybe a pragmatic solution here is to emphasize to people who get a grant to do independent research that they can quit and give back the remainder of their grant at any time?

Yeah, I think we've done that a few times, but I'm not confident. I'd have to look over a bunch of records to be confident.
You can now apply to EA Funds anytime! (LTFF & EAIF only)

Sure. Well when the LTFF funds graduate students who aren't even directly focused on improving the long-term future, just to help them advance their careers, I think that sends a strong signal that the LTFF thinks grad school should be the default path. Counterfactually, if grad school is 5-10x the risk of independent research, it seems like you should be 5-10x as hesitant to fund grad students compared to independent researchers. (Assuming for the moment that paternalism is in fact the correct orientation for a grantmaker to have.)

I don't think that's an accurate estimate of the relevant risk. I don't think risk goes up linearly with time. Many people quit their PhDs when they aren't a good fit. I mean, I don't think there is currently a great "default path" for doing work on the long-term future. I feel like we've said some things to that effect. I think grad school is a fine choice for some people, but I think we are funding many fewer people for grad school than for independent research (there are some people who we are funding for an independent research project during grad school, but that really isn't the same as funding someone for grad school). I would have to make a detailed count to be totally confident of this, but I'm pretty confident it's true for my grant votes/recommendations.
You can now apply to EA Funds anytime! (LTFF & EAIF only)

My model for why there's a big discrepancy between what NIH grantmakers will fund and what Fast Grants recipients want to do is that NIH grantmakers adopt a sort of conservative, paternalistic attitude. I don't think this is unique to NIH grantmakers. For example, in your comment you wrote:

we want to avoid funding people for independent research when they might do much better in an organization

The person who applies for a grant knows a lot more about their situation than the grantmaker does: their personal psychology, the nature of their research int... (read more)

You can now apply to EA Funds anytime! (LTFF & EAIF only)

The feedback loops in grantmaking aren't great. There's a tendency for everyone to assume that because you control so much money, you must know what you're doing. (I talked to an ex-grantmaker who said that even after noticing and articulating this tendency, he continued to see it operating in himself.) And people who want to get a grant will be extra deferential:

once you become a philanthropist, you never again tell a bad joke… everyone wants to be on your good side. And I think that can be a very toxic environment…
So I think it's important ... (read more)

You can now apply to EA Funds anytime! (LTFF & EAIF only)

I think you're splitting hairs here--my point is that your "hesitation" doesn't really seem to be justified by the data.

trying to pursue an independent research path will be a really big waste of human capital, and potentially cause some pretty bad experiences

I think this is even more true for graduate school:

Independent research seems superior to graduate school for multiple reasons, but one factor is that the time commitment is much lower.... (read more)

I am indeed even more hesitant to recommend grad school to people than independent research. See my comments here: 

You can now apply to EA Funds anytime! (LTFF & EAIF only)

It’s hard to find great grants

Pardon me if this is overly pedantic, but I think you might be missing a map/territory distinction here. "It's hard to find great grants" seems different than "It's hard to find grants we really like". For example, the LTFF managers mentioned multiple times in this post that they're skeptical of funding independent researchers, but this analyst found (based on a limited sample size) that independent researchers outperformed organizations among LTFF grant recipients. Similarly, a poll of Fast Grants recipients found that ... (read more)

that they're skeptical of funding independent researchers

Just for the record, this is definitely not an accurate one-line summary of my stance, and I'm pretty confident it's also not a good summary of other people on the LTFF. Indeed, I know of almost no other funding body that has funded as many independent researchers as the LTFF.

The linked post just says that Adam "tends to apply a fairly high bar to long-term independent research", which I do agree implies some level of hesitation, but I don't think it implies a general stance of ske... (read more)

"It's hard to find great grants" seems different than "It's hard to find grants we really like".

I would expect that most grantmakers (including ones with different perspectives) would agree with this and would find it hard to spend money in useful ways (e.g., I suspect that Nuño might say something similar if he were running the LTFF, though not sure). So while I think your framing is overall slightly more accurate, I feel like it's okay to phrase it the way I did.

that they're skeptical of funding independent researchers

I don't think this characterization ... (read more)

Is the map/territory distinction central to your point? I get the impression that you're mostly expressing the opinion that the LTFF has too high a bar or idiosyncratic (or too narrow) research taste. (I'd imagine that grantmakers are trying to do what's best on impact grounds.)
The Long-Term Future Fund has room for more funding, right now

Sure. I guess I don't have a lot of faith in your team's ability to do this, since you/people you are funding are already saying things that seem amateurish to me. But I'm not sure that is a big deal.

Status update: Getting money out of politics and into charity

Perhaps the biggest area of agreement was that one hurdle we would face is getting voters to trust us -- not just that it was a good idea to give money to our platform, but that we wouldn’t steal their money. This requires getting some high-profile backing (from both parties).

Is there any way to create legal infrastructure so that voters could sue if you didn't follow through on your promises? And so that your finances are transparent? Perhaps the legal concept of "escrow" could be useful?

The Long-Term Future Fund has room for more funding, right now

I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Let's compare the situation of the Long-Term Future Fund evaluating the quality of a grant proposal to that of the academic community evaluating the quality of a published paper. Compared to the LTFF evaluating a grant proposal, the academic community evaluating the quality of a published paper has big advantage... (read more)

I'm also a little skeptical of your "low-quality work dilutes the quality of those fields and attracts other low-quality work" fear--since high citation count is often thought of as an ipso facto measure of quality in academia, it would seem that if work attracts additional related work, it is probably not low quality.

The difference here is that most academic fields are pretty well-established, whereas AI safety, longtermism, and longtermist subparts of most academic fields are very new. The mechanism for attracting low-quality work I'm imagining is that s... (read more)

RyanCarey's Shortform

I thought this Astral Codex Ten post, explaining how the GOP could benefit from integrating some EA-aligned ideas like prediction markets into its platform, was really interesting. Karl Rove retweeted it here. I don't know how well an anti-classism message would align with EA in its current form though, if Habryka is right that EA is currently "too prestige-seeking".

My favorite example of Slate Star Codex translating into Republican is the passage on climate change starting with "In the 1950s":

The Long-Term Future Fund has room for more funding, right now

there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for.

I imagine this could be one of the highest-leverage places to apply additional resources and direction though. People who are applying for funding for independent projects are people who desire to operate autonomously and execute on their own vision. So I imagine they'd require much less direction than marginal employees at an EA organization, for instance.

I also think there's an epi... (read more)

I don't have a strong take on whether people rejected from the LTFF are the best use of mentorship resources. I think many employees at EA organizations are also selected for being self-directed. I know of cases where mentorship made a big difference to both existing employees and independent LTFF applicants.

We do weigh individual talent heavily when deciding what to fund, i.e., sometimes we will fund someone to do work we're less excited about because we're interested in supporting the applicant's career. I'm not in favor of funding exclusively based on talent, because I think a lot of the impact of our grants is in how they affect the surrounding field, and low-quality work dilutes the quality of those fields and attracts other low-quality work.

Whoops, yeah-- we were previously overwhelmed with requests for feedback, so we now only offer feedback on a subset of applications where fund managers are actively interested in providing it.
The Long-Term Future Fund has room for more funding, right now

I see. That suggests you think the LTFF would have much more room for funding with some not-super-large changes to your processes, such as encouraging applicants to submit multiple project proposals, or doing calls with applicants to talk about other projects they could do, or modifications to their original proposal which would make it more appealing to you.

Sadly, I think those changes would in fact be fairly large and would take up a lot of fund manager time. I think small modifications to original proposals wouldn't be enough, and it would require suggesting new projects or assessing applicants holistically and seeing if a career change made sense.

In my mind, this relates to ways in which mentorship is a bottleneck in longtermist work right now--  there are probably lots of people who could be doing useful direct work, but they would require resources and direction that we as a community don't have the capacity for. I don't think the LTFF is well-placed to provide this kind of mentorship, though we do offer to give people one-off feedback on their applications.

The Long-Term Future Fund has room for more funding, right now

We received 129 applications this round, desk rejected 33 of them, and are evaluating the remaining 96. Looking at our preliminary evaluations, I’d guess we’ll fund 20 - 30 of these.

I keep hearing that there is "plenty of money for AI safety" and things like that. But by the reversal test, don't these numbers imply you think that most LTFF applicants could do more good earning to give? (Assuming they can make at least the hourly wage they requested on their application in the private sector.)

If they request a grant with a wage of $X/hr, and you reject... (read more)

I think many applicants who we reject could apply with different proposals that I'd be more excited to fund-- rejecting an application doesn't mean I think there's no good direct work the applicant could do.

I would guess some people would be better off earning to give, but I don't know that I could say which ones just from looking at one application they've sent us.

Please stand with the Asian diaspora

However, I'm not sure in practice there is very much we can directly do about the issue.

Maybe it's worth pointing out that the OP doesn't ask us to do anything other than "stand with the Asian diaspora", which doesn't seem very hard. (I'm reminded of that relationship cliche where one partner tells the other partner about a problem they have, and their partner responds by trying to solve the problem, when all that was really desired was a sympathetic ear.)

I stand with the Asian diaspora. Even if the shooting was not motivated by anti-Asian prejudice, ... (read more)

Politics is far too meta

Well put. Sadly, the horse race/popularity contest is more dramatic, more interesting, and easier to follow than important policy details. And the more polarized our discussion becomes, the easier it is to rationalize a focus on "making sure the good guys win" over figuring out policy details which might change our view regarding what it is good to do, or even who the good guys are.

In terms of concrete solutions, I wonder if the best approach is satirizing excessive meta-ness. (One possible example of how this might be done: "How will voters react to Cli... (read more)

Articles are invitations

In my experience, when I Facebook message or email EAs I have met in person, bringing up conversation topics I think are substantially higher value than the median topic we would probably wander into during casual chitchat at a party, my message is ignored a large fraction of the time. I don't think this is specific to EAs; I think people are just really flaky when it comes to responding to messages. But it is demoralizing, and IMO it destroys a lot of the value of the EA network.

I guess what I'm saying is, maybe keep your expectations low for sending peo... (read more)

Google's ethics is alarming

I feel very conflicted about this.

On the one hand, we don't want researchers at Google to feel any reluctance to blow the whistle on ethical issues with Google's AI algorithms.

On the other hand, I'm not convinced that the original founders of the AI ethics group were the right people for the job--you mentioned radicalization; one of them responded with "You can go fuck yourself" when asked a question about the ethics of political violence.  The new ethics head says "what I’d like to do is have people have [the conversation about AI ethics] in a more d... (read more)

For context, the specific 'question about the ethics of political violence' was  itself somewhat inflammatory:

"So you’re in favor of mob violence, as long as it comes from the left?"

Getting money out of politics and into charity

I think the Center for Election Science, an EA organization that advocates approval voting, could be an effective anti-polarization organization.  There seems to be widespread dissatisfaction with the 2-party system, and I believe it's contributing significantly to polarization.

There's something rather delightful about money being matched from Republican and Democrat donors in order to fund an organization which aims to get rid of the 2-party system :)

Naturally, as the ED for CES, this is my favorite idea!
Prabhat Soni's Shortform

The idea of fat-tailed distribution of impact of interventions might be a better alternative to this maybe?

That sounds harder to misinterpret, yeah.

Prabhat Soni's Shortform

That's a good point, it's not a connection I've heard people make before but it does make sense.

I'm a bit concerned that the message "you can do 80% of the good with only 20% of the donation" could be misinterpreted:

  • I associate the Pareto principle with saving time and money.  EA isn't really a movement about getting people to decrease the amount of time and money they spend on charity though, if anything probably the opposite.
  • To put it another way, the top opportunities identified by EA still have room for more funding.  So the mental motion I w
... (read more)
Prabhat Soni (2y):
Hey, thanks for your reply. By the Pareto Principle, I meant something like "80% of the good is achieved by solving 20% of the problem areas". If this is easy to misinterpret (like you did), then it might not be a great idea :P The idea of fat-tailed distribution of impact of interventions might be a better alternative to this maybe?
When does it make sense to support/oppose political candidates on EA grounds?

Was  the "at least one EA" someone in a position of influence?

Not really.

most of his current work seems either opposed to or orthogonal to common EA positions.

I think you have to be careful here, because if someone's work is "opposed" to a common EA position, it's possible that they disagree on facts related to that position but they are still motivated by doing the most good.  It plays into the feedback loop I was talking about in the other comment.  If you disagree with someone a lot, and you don't think you will be able to change their mind, you might not want to invest the time in exploring that disagreement.

Aaron Gertler (2y):
Sure -- that's a good thing to clarify. When I say "opposed to," I mean that it seems like the things he presently cares about don't seem connected to a cause-neutral welfare-maximizing perspective (though I can't say I know what his motivations are, so perhaps that is what he's aiming for). Most notably, his PAC explicitly supports an "America First immigration policy," which seems difficult to square with his espoused libertarianism and his complaints about technological slowdown, in addition to being directly opposed to work from Open Phil and others. I don't understand exactly what his aims are at this point, but it feels like he's far away enough from the EA baseline that I wouldn't want to assume a motivation of "do the most good in a cause-neutral way" anymore.
When does it make sense to support/oppose political candidates on EA grounds?

I think you should feel free to participate in politics as an individual, but I'm pretty uncomfortable with the EA movement developing an official ideology in an organic and ad-hoc way. It seems easy for a feedback loop to form where an ideology becomes associated with a particular group, and people who disagree with the ideology leave the group, and that strengthens the association. I know an online forum roughly as erudite as the EA forum where this happened in the opposite direction, and the majority of the participants (I believe) are voting for... (read more)

When does it make sense to support/oppose political candidates on EA grounds?

What do you predict would happen to someone like that? Would you expect them to be fired if they held a position at an EA org? Barred from attending EA Global? Shunned by people in their local group?

Peter Thiel spoke at the EA Summit in 2014, I think. What happened to him? I heard at least one EA say we should kick him out.

Aaron Gertler (2y):
Was the "at least one EA" someone in a position of influence? My understanding is that Thiel stopped being especially interested in EA around the time he got into politics, but he might still be making AI-related donations here and there. I'd be surprised if he had wanted to speak at any recent EA Global conference, as most of his current work seems either opposed to or orthogonal to common EA positions. But I don't have any special knowledge here. (Certainly he was never Glebbed.)
When does it make sense to support/oppose political candidates on EA grounds?

First, I want to say I'm glad you're voting for Joe and I hope you will tell all your Pennsylvanian friends to do the same. Nevertheless I think there are a few key considerations around EA getting involved in politics on a movement level that your comment misses.

I also want to note that I find it odd that that post got downvoted (possibly for being explicitly partisan?) vs. posts like this, which don't explicitly claim to be partisan / engaging in politics but which I think are actually extremely political.

That post relates to a case where politics go... (read more)

Getting money out of politics and into charity

This sounds like a very promising initiative. However, you're asking for advice, so I'll try and identify potential problems.

The platform would collect money from donors to both campaigns; let’s say for example that Harris donors give us $10 million and Pence donors give us $8 million. We would send matching amounts ($8 million on each side) to charity and donate the remaining amount to the political campaign that raised more ($2 million to Harris).

When I pretend I'm a Republican evaluating this proposal, I think: "If the campai... (read more)
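The matching rule quoted above is simple enough to sketch in a few lines. This is just an illustration of the arithmetic in the post's example (amounts in millions of dollars; the function name is mine, not from the proposal):

```python
def split_donations(side_a: float, side_b: float) -> tuple[float, float]:
    """Apply the proposal's matching rule.

    The smaller side's total is matched dollar-for-dollar: that amount
    from *each* side goes to charity, and the unmatched excess on the
    larger side goes to the larger side's campaign.
    Returns (total_to_charity, remainder_to_leading_campaign).
    """
    matched = min(side_a, side_b)
    return 2 * matched, abs(side_a - side_b)


# The post's example: Harris donors give $10M, Pence donors give $8M.
to_charity, to_campaign = split_donations(10, 8)
print(to_charity, to_campaign)  # 16 2  ($16M to charity, $2M to Harris)
```

Note that under this rule a donor's marginal dollar either cancels an opposing dollar into charity or flows to their own campaign, which is the property the gaming concern above turns on.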

Eric Neyman (2y):
Thanks for the thoughts. I agree that the first thing you point out is a problem, but let me just point out: in the event that it becomes a problem, that means our platform is already a wild success. After all, I'd be very happy if our platform took single-digit millions out of politics (compared to the single-digit billions that are spent). If we become a large fraction of all money going into politics, then yeah, this will become a problem, perhaps solvable in the way you suggest. Regarding your thoughts on ads, that seems like a plausible hypothesis. But regarding matching funds going toward anti-polarization organizations: well, I'd be quite interested in that if there were effective anti-polarization organizations. And maybe there are, but I'm not aware of any, and I'm not super optimistic.
Getting money out of politics and into charity

Maybe have staff members who are respected members of both parties?

Or set up individual wealthy donors who are planning to donate roughly the same amount with one another and have them place money in escrow?

Eric Neyman (2y):
If we find such wealthy donors, we could match them against each other instead! But I suppose it's possible that we'd find donors who'd be willing to match with each other up to however much is contributed to the platform, as a way of raising interest. Like how cool would it be if Sheldon Adelson and George Soros agreed to this sort of thing? (I'm not even remotely optimistic though :P)
Should we think more about EA dating?

Some more considerations:

  • If you have a bad experience dating EAs, that might cause you to sour on the EA movement. (Personally, after getting rejected by some EAs, part of my brain pointed out "hey you're putting a lot of effort into this EA thing and it doesn't seem to be helping where survival or reproduction are concerned." Since this isn't something I want my brain to think, I no longer ask EAs out.)
  • There's also the possibility of general awkwardness that could interfere with professional relationships.
  • For heterosexual dat
... (read more)
Are there lists of causes (that seemed promising but are) known to be ineffective?

One problem with making a list like this: People already get mad at EA for saying that their favorite trendy cause is ineffective. I'm somewhat sympathetic to this: Even if I don't think [super trendy cause] is the most important cause, I'm usually glad people are working on it, and I don't want to discourage them (if the likely result of me discouraging them is that they switch to playing video games or something). It's also bad from a public relations point of view for EA to be seen as existing in opposition to trendy causes.

Therefore, if anyone makes a list like this, I suggest you stick to obscure causes.

EA considerations regarding increasing political polarization

EDIT: I want to highlight this take by someone who's much more knowledgable than I am; you should probably read it before reading my comment.

First, the immediate stakes are far lower - in the Cultural Revolution, "counter-revolutionary revisionists" were sent cross-country to re-education camps, tortured, killed, even eaten. As far as I am aware, none of these things have happened recently in America to public figures (or made-public-by-Twitter figures) as a result of the sort of backlash you discuss.

This hasn't happened yet, and probab... (read more)

EA considerations regarding increasing political polarization

Secondly, effective altruists are disproportionately employed at companies like Google and Facebook. The policies of social media giants can influence discourse norms on the Web and therefore society as a whole. While EAs working at tech giants may not have enough power within the organizational hierarchy to make a meaningful difference, it’s something worth considering. Another way in this vein that EAs could make a difference is by creating or popularizing discussion platforms that promote rational argumentation and mutual understanding instead of
... (read more)
EA considerations regarding increasing political polarization

increasing the presence of public service broadcasting

I don't know how well that would work in the US--it seems that existing public service broadcasters (PBS and NPR) are perceived as biased by American conservatives.

A related idea I've seen is media companies which sell cancellation insurance (archive). The idea being that this is a business model which incentivizes obtaining the trust and respect of as many people as possible, as opposed to inspiring a smaller number of true believers to share/subscribe. One advantage of this idea is it does... (read more)

Should EAs be more welcoming to thoughtful and aligned Republicans?

Maybe you could choose to only vote in a party's primary if you also precommit to voting for your chosen candidate in the general election if they win the primary.

Should EAs be more welcoming to thoughtful and aligned Republicans?

I think if you're in a blue state like California, it generally makes sense to register as Republican to vote in the Republican primary, because there will be fewer California Republicans voting in that primary, but California still contributes the same number of electoral college votes?

Aaron Gertler (2y):
I disagree with this perspective for many of the reasons outlined in this comment (on a thread about registering with the Labour party for similar reasons). I think it's good if EA action, political or otherwise, is taken in the spirit of cooperation and honesty, and registering for a party whose positions you broadly oppose doesn't seem to follow that. (There are exceptions to this principle, of course, but given the relatively low impact of voting in statewide or national elections, that doesn't seem like an exception worth making.)
Genetic Enhancement as a Cause Area

I think a good way to explore potential downsides of this proposal, and also potentially reduce the taboo around genetic enhancement, would be to steelman the concerns of people who are reflexively opposed to it.

For example, how likely is it that talking about genes more (e.g. the genetic basis of intelligence) will cause people to associate moral value with genes or feel contempt for those are genetically unlucky? You could do psychology experiments where you tell participants that X% of variation in some trait is genetic and see how that affects their a... (read more)

Leverage Research: reviewing the basic facts

I also hope your faith in Bennett is well-placed, and that whatever mix of vices led him to write vile antisemitic ridicule on an email list called 'morning hate' in 2016 bears little relevance to the man he was with Leverage in ~2018, or the man he is now.

Perhaps it'd be helpful for Bennett to publish a critique of alt-right ideas in Palladium Magazine?

  • In Bennett's statement on Medium, he says now that he's Catholic, he condemns the views he espoused. If that's true, he should be glad to publish a piece which reduces their
... (read more)
Evaluating Communal Violence from an Effective Altruist Perspective

communal violence seems to be common in post-colonial contexts, where many borders have been drawn in disregard of the geographic grouping of social groups.

Hmmm, would this reasoning also imply that immigration restrictions could reduce communal violence in some cases? If putting people of different social groups in the same country tends to cause conflict.

I don't think I'd interpret this pattern to mean that restricting immigration would reduce communal violence. Rather, that places where communal violence is happening may correlate with any number of the side effects of colonialism, including a weakened concept of a nation state due to borders that don't reflect identity groups. In other words, the context by which diversity happens may play a role in whether communal violence happens (forced cohabitation versus elected migration). There's not enough data to sufficiently support either idea, I just want to be clear that I don't see the evidence to suggest anti-immigration as an effective peacebuilding mechanism.
Is preventing child abuse a plausible Cause X?

There are non-political ways to address this, such as better contraceptives like Vasalgel.

EA already has semi-official positions on intractable political issues like immigration. If stable two-parent families are indeed an effective way to prevent child abuse, I don't see why we shouldn't have a semi-official position on promoting those as well. It could help address conservative underrepresentation in the EA movement. I think if some positions are taken publicly on both sides, that increases our credibility as an independent source of truth. ... (read more)

Nathan Young · 3y
In this sense I think the government should create appropriate incentives for long-term committed relationships where children are concerned - perhaps something like a no-claims bonus (in the UK, an increasing yearly benefit for each year you go without crashing your car) for parents who stay together until their last child is 18?
Is preventing child abuse a plausible Cause X?

"Obviously we can't say this is all causal - in general all good properties are correlated, so it's likely there are shared genetic etc. causes."

Possible causal mechanism:

Through infanticide, males can eliminate the offspring of their competition and get the female back to full baby-making capacity faster

Can you provide a reliable source supporting the claim that the UK legal system does not allow the accused access to all evidence?

I did some research of my own, and from what I can gather, it seems the provision you refer to is mostly about not letting the public know the name of the alleged victim. I find it hard to believe that the accused sometimes does not know the name of the alleged victim in the UK legal system.


FIRE has some discussion on their website if you search for "cross-examine" here. Maybe you can provide legal background on how this situation differs from a college disciplinary hearing.

But I'm less interested in legal technicalities and more interested in what the best policies for Effective Altruism are. There's a decent chance this is the end of Jacy's career as an EA. It's important for CEA to wield its power in this area responsibly.

I'm not saying Jacy should definitely be allowed to cross-examine witnesses. I'... (read more)

Two of FIRE's conditions request that victims of sexual assault must face their assailant in order to have any hope of justice.

If I'm not mistaken, only one condition requires this ("Right to face accuser and witness"). I don't see how the "Access to all evidence" condition requires this.

You seem to have strong feelings about this. I think these are complex issues that deserve careful consideration. Here in the US, the right to confront witnesses is a guarantee provided by the Sixth Amendment to our constitution. I'd want a good understanding of why it's there before being confident in its removal.

I do have strong feelings about this, but having strong feelings and having given complex issues careful consideration are not mutually exclusive, and the implication otherwise was uncalled for. Having carefully considered the issue, I have concluded that the anonymity of sexual assault victims is the most important factor here, and I'm not alone in this conclusion. The UK legal system, for example, agrees. Given that you easily identified that "access to all evidence" was the other criterion which risked anonymity, I don't think it's too hard to see the connection between them.

The Sixth Amendment only applies in criminal cases. These are not criminal cases.

Very few teenagers are formally accused of sexual misconduct, and even fewer expelled from a university following an accusation.

I searched for information on how Brown University handles sexual misconduct and quickly found two cases of judges siding with students who felt they were treated unfairly by Brown University tribunals.

A federal judge has reinstated a Brown University student after finding that the Ivy League school in Providence, R.I., improperly judged him responsible for sexual misconduct.
"After the preliminary injunction, this Court wa
... (read more)

I don't think it's clear from this post which steps weren't voluntary, and I don't think we should make assumptions.

I'm familiar with a specific case in this area where CEA's response seemed excessive to me. And I've heard of CEA employees, people who were middle-of-the-road politically, who began to tire of CEA's excessive concern for its public image and the public image of the EA movement.

But the thing is that excessive concern for public image might not be a bad idea in this day and age. People have written books about this.

He has himself agreed to step back from the EA community more generally, and to step back from public life in general, which would be an odd move if these were minor misdemeanours.

Not necessarily, in the current cultural milieu.

I think enforcement of this stuff is very uneven and depends a lot on your social circle. Some social circles are underzealous in their enforcement, others overzealous. Given purity spiral dynamics which seem present in the animal rights movement, it seems possible their enforcement is overzealous.

The Importance of Truth-Oriented Discussions in EA

Thanks for the info.

The way I'm reading these excerpts, only one refers to an alienating conversation of the sort discussed in Making Discussions Inclusive (the one about the "somewhat sexist" comment).
The other three seem like complaints about the "vibe", which feels like a separate issue. (Not saying there's nothing to do, just that Making Discussions Inclusive doesn't obviously offer suggestions.) Indeed, there could even be a tradeoff: Reading posts like Making Discussions Inclusive makes me less inclined to talk ... (read more)

The comment about how the gender imbalance “led to different topics to be discussed” might (or might not) reflect alienating conversations, but I agree with your general point that the survey quotes are more about the "vibe". I think the quotes suggest that simple things like running icebreakers and saying hi to people (whether or not they are women and/or queer) can be really valuable.
The Importance of Truth-Oriented Discussions in EA

My intent was to point out that you can make the slippery slope argument in either direction. I wasn't trying to claim it was more compelling in one direction or the other.

If you believe EA has Epistemic Honor, that argument works in both directions too: "Because EA has Epistemic Honor, any rules we make will be reasonable, and we won't push people out just for having an unfashionable viewpoint."

I do think slippery slope arguments have some merit, and group tendencies can be self-reinforcing. Birds of feather flock together. Because ... (read more)

Nah, it does apply to itself :) But you think pushing them out is the right thing to do, correct?

Let me just make sure I understand the gears of your model. Do you think one person with an unfashionable viewpoint would inherently be a problem? Or would it only become a problem when this becomes a majority position? Or perhaps, is the boundary the point where this viewpoint starts to influence decisions? Do you think any tendency exists for the consensus view to drift towards something reasonable and considerate, or do you think that it is mostly random, or perhaps that there is some sort of moral decay we have to actively fight with moderation?

Surely, well-kept gardens die by pacifism, and so you want to have some measures in place to keep the quality of discussion high, both in the inclusivity/consideration sense and in the truth sense. I just hope this is possible without banning topics, for most of the reasons stated by the OP. Before we start banning topics, I would want to look for ways that are less intrusive.

Case in point: it seems like we're doing just fine right now. Maybe this isn't a coincidence (or maybe I'm overlooking some problems, or maybe it's because we already ignore some topics).