To be clear - the exact problem is that you are proposing excluding specific speakers (Hanania, Hsu, Hanson, the Collinses, etc.) - whom I find valuable to various degrees - not ideas. If Manifest issued a notice that it was not a venue to discuss IQ or heritability, that would seem much more reasonable than excluding these thinkers.
(Why do Hanson and Hanania need to be speakers? They are the foremost advocates of prediction markets on the Right. Their support would be incredibly important in building a cross-party coalition).
As a right-wing person sympathetic to many EA ideals, I'm surprised when I read these posts about how we need to exclude these people to make attendees comfortable. In fact, excluding these people - who I find incredibly smart, reasonable, and valuable - would make me (and I'm sure many of my friends on the Right) extremely uncomfortable.
It seems like you are referring to Richard Hanania - who has been invited twice. I suspect he was invited because he has been an outspoken advocate of prediction markets. I find it highly doubtful that Hanania has, on net, pushed more people away from Manifest (and prediction markets) than he has drawn to it.
It's not just a matter of a speaker's net effect on attendance/interest. Alex Jones would probably draw lots of new people to a Manifest conference, but are they the types of people you want to be there? Who you choose to platform, especially at a small, young conference, will have a large effect on the makeup and culture of the related communities.
Additionally, given how toxic these views are in the wider culture, any association between them and prediction markets is likely to be bad for the long-term health of the prediction community.
Consider the relative sizes of the groups, and their respective intellectual honesty and calibre. Manifest can be intellectually open, rigorous, and not deliberately platform racists - it really is possible. And to be clear, I'm not saying ban people who agree with XYZ speaker with racist ties - I'm saying don't seek to deliberately invite those speakers. Manifest has already heard from them; do they really need annual updates?
(1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human.
It seems like about half the country disagrees with that intuition?
When I have read grants, most have (unfortunately) fallen closer to "This idea doesn't make any sense" than "This idea would be perfect if they just had one more thing". When a grant falls into the latter category, I suspect recipients do often get advice.
I think the problem is that most feedback would be too harsh and fundamental - these are very difficult and emotionally costly conversations to have. It can also make applicants more frustrated and spread low-fidelity advice about what the grant maker is looking for. A rejection (hopefully) encourages the applicant to read and network more to form better plans.
I would encourage rejected applicants to speak with accepted ones for better advice.
Most of this seems focused on Alice's experience and allegations. As I understand it, most parties involved - including Kat - believe Chloe to be basically reliable, or at least much more reliable than Alice.
Given all that, I'm surprised that this piece does not do more to engage with what Chloe herself wrote about her experience in the original post: https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=gvjKdRaRaggRrxFjH
Chloe has been unreliable. She lied about not having a work contract, she lied about the compensation structure, she lied about how many incubatees we had, she lied about being able to live/work apart, about doing the accounting, etc. Almost all of the falsehoods and misleading claims we cover were also told by her, because she signed off on Ben's post and didn't correct the dozens of falsehoods and misleading claims in it.
We originally thought she was more reliable because we hadn't heard from reliable sources what she was saying. Now that it's in writing, we have firm evidence that she has told dozens of falsehoods and misleading claims.
There was a Works in Progress magazine article about this: https://worksinprogress.co/issue/markets-in-fact-checking
GiveDirectly, Effective Altruism Australia, EA Aotearoa New Zealand, Every.org, The Life You Can Save
What's going on with the coauthorship here - multiple organizations wrote this post together? Should this be read as endorsements, or something else?
(1) The topic is often sensationalised by many who talk about it
Many things are sensationalized. This is not good evidence for or against fertility being a problem. Many accuse AIXR of being sensationalized.
(2) some of these people, infer that it could result in humanity going extinct.
I do not think smart fertility advocates believe that populations would slowly dwindle until there was one person left. Obviously that is a silly model. The serious model, described in Ch. 7 of What We Owe the Future, is that economic growth will slow to a c...
I wrote about every mention, but some were summaries rather than direct copies and pastes, which I thought was straightforward for readers.
For example, when I say, "He devotes several pages to talking about Peter Singer, Toby Ord and Will MacAskill, and the early version of 80,000 Hours Will was promoting on his visit to Harvard", I mean there were many mentions of effective altruism on those pages!
I also include sections of the book that talk about effective altruism without using that exact phrase.
I don't think there are any I didn't either quote or summarise, but I only read it once, so I could have missed some.
sociological (e.g. richer people want less kids)
This misunderstands the fertility problem. Most fertility advocates focus on the fertility gap - the gap between how many children people want to have and how many they actually have. It's also not that richer people (within countries) want to have fewer kids. We're seeing U-shaped fertility trends, where the rich have more children than the middle class.
This implies it is not a "sociological phenomenon" (except in a trivial sense) and is instead a complex mix of social, cultural and economic ...
I feel like this is a cheap shot, and don't like seeing it on the top of this discussion.
I think it can be easy to belittle the accomplishments of basically any org. Most startups seem very unimpressive when they're small.
A very quick review would show other initiatives they've worked on. Just go to their tag, for instance:
https://forum.effectivealtruism.org/topics/nonlinear-fund
(None of this is to say where I side on the broader discussion. I think the focus now should be on figuring out the key issues here, and I don't think comments like this help ...
I certainly don't think it suggests he's a bad actor, but it seems reasonable to consider it improper conduct in a small organization of people living and working together - even if Alice and Chloe don't see it as an issue. I don't have a strong view one way or the other, but it seemed worth flagging in the context of your claim.
Repost from LW:
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
They also said that in the past day or so (upon becoming aware of the contents of the post), they asked Ben to delay his publication of this post by one week so that they could gather their evidence and show it to Ben before he publishes it (to avoid having him publish false information). However, he refused to do so.
This is really weird to me. These allegations have been circulating for over a year, and presumably Nonlinear has known about this piece for months now. Why do they still need to get their evidence together? And even if they do - just due to extr...
To be clear, I only informed them about my planned writeup on Friday.
(The rest of the time, lots of other people involved were very afraid of retaliation and intimidation, and I wanted to respect that while gathering evidence. I believe that if I hadn't made that commitment to people, I wouldn't have gotten the evidence.)
they’ll be paying maybe $500 for a ticket that costs us $1000.
There may be room for more effective price discrimination here. When someone from a corporation that is not price sensitive buys a ticket to EAG, ideally they would pay (at least) the complete cost of their admission. I recall there being tiers beyond "full price" - to sponsor other attendees - but that would not be a legitimate corporate expense. Could there be an easy way for corporate attendees to pay the full price?
IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you.
Of course. I just think evaluating and deferring can look quite similar (and a mix of the two is usually taking place).
OP seems to believe students are deferring because of other frustrations. As many have quoted: "If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong".
I've attended Arete seminars at Ivy League universities and seen what looked like fairly sophisticated evaluation to me.
but I am very concerned with just how little cause prioritization seems to be happening at my university group
I've heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It's not surprising to me that most people converge on the most popular position within the broader movement.
Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship
IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you. I think most people probably shouldn't spend lots of time doing cause prio from scratch, but I do think most people should engage with the existing cause prio literature on the object level and judge it to the best of their ability.
My read of the sentence was that there was too much deferring and not enough thinking through the arguments oneself.
Dwarkesh Patel recently asked Holden about this:
Dwarkesh Patel
Are you talking about OpenAI? Yeah. Many people on Twitter might have asked if you were investing in OpenAI.
Holden Karnofsky
I mean, you can look up our $30 million grant to OpenAI. I think it was back in 2016–– we wrote about some of the thinking behind it. Part of that grant was getting a board seat for Open Philanthropy for a few years so that we could help with their governance at a crucial early time in their development. I think some people believe that OpenAI has been net...
This is really sad and frustrating to see: a community that prides itself on rigorous and independent thinking has taken to reciting the same platitudes that every left-wing organization does. We're supposed to hold ourselves to higher standards than this.
Posts like this make me much less interested in being a part of EA.
it's not anywhere in any canonical EA materials
This seems a bit obtuse. In any local EA community I've been a part of, poly plays a big part in the culture.
Plenty of EAs are criticizing it in this very thread.
This is sort of true, but most of them are receiving a lot of downvotes. And this is the first time I've seen a proper discussion about it.
I don't have a particular agenda about "what should happen" here. I've said we should scrutinize the ways that polyamorous norms could be abused in high trust communities. I'm not sure what the outcome would be, but I would certainly hope it's not intolerance of poly communities.
I would readily agree that some - perhaps most - of these problems could also be solved by ensuring EA spaces are purely professional, but it does seem a bit obtuse to not understand that someone could feel more uncomfortable when asked to join a polycule at an EA meet ...
I certainly don't think it's conclusive, or even strong evidence. As I said, I think it's one thing among many that should inform our priors here. There's also a different vein of anthropological research that looks at non-monogamy and abuse in cults and other religious contexts, but I'm less familiar with it.
The alternative - accepting the norms of sexual minorities without scrutiny - seems perfectly reasonable in many cases, but for those reasons I don't think it should be applied here, especially in light of these women's accounts. ...
I'm very surprised by this. There are a number of anthropological findings connecting monogamous norms to greater gender equality and other positive social outcomes. Recently, arguments along these lines have been advanced by Joseph Henrich, one of the most prominent evolutionary biologists.
Something that is above criticism or question (see here), in this case because discourse is often cast as intolerant or phobic
Jeff was probably not asking what "sacred cow" means; more likely he was asking in what way polyamory is a sacred cow of EA. I will grant that EA is more tolerant of most personal traits than society typically is, and therefore is more supportive of polyamory than other groups just by not being against it, but it's not anywhere in any canonical EA materials, and certainly not a sacred cow. Plenty of EAs are criticizing it in this very thread.
Fwiw, I think this is precisely why you don't want to invite people based solely on popularity. Jones is popular and charismatic, but epistemically not someone I want to benefit.