All of burner's Comments + Replies

(1) fetal anesthesia as a cause area intuitively belongs with 'animal welfare' rather than 'global health & development', even though fetuses are human.

It seems like about half the country disagrees with that intuition?

What are the best summer opportunities for (freshman and sophomore) college students in CS/ML interested in technical alignment or AI policy?

Agustín Covarrubias
3mo
It depends on their level of past engagement and experience, but I would start by checking aisafety.training (for training and research programs). Internships and job opportunities are more diffuse, but a good place to start is the 80,000 hours job board. Do you have a more specific audience in mind?

When I have read grant applications, most have (unfortunately) fallen closer to "This idea doesn't make any sense" than to "This idea would be perfect if they just had one more thing". When a grant falls into the latter category, I suspect applicants do often get advice.

I think the problem is that most feedback would be too harsh and fundamental -- these are very difficult and emotionally costly conversations to have. It can also leave applicants more frustrated and spread low-fidelity advice about what the grant maker is looking for. A rejection (hopefully) encourages the applicant to read and network more to form better plans.

I would encourage rejected applicants to speak with accepted ones for better advice. 

Ariel Pontes
3mo
"these are very difficult and emotionally costly conversations to have" I don't think this has to be the case. These things can usually be circumvented with sufficiently impersonal procedures, such as rating the application and having a public guide somewhere with tips on how to interpret the rating and a suggested path for improvement (e.g. talking to a successful grant recipient in a similar domain). A "one star" or "F" or "0" rating would probably still be painful, but that's inevitable. Trying to protect people from that strikes me as paternalistic and "ruinously empathetic".
burner
4mo

Most of this seems focused on Alice's experience and allegations. As I understand it, most parties involved - including Kat - believe Chloe to be basically reliable, or at least much more reliable. 

Given all that, I'm surprised that this piece does not do more to engage with what Chloe herself wrote about her experience in the original post: https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=gvjKdRaRaggRrxFjH

Chloe has been unreliable. She lied about not having a work contract, she lied about the compensation structure, she lied about how many incubatees we had, she lied about being able to live/work apart, about doing the accounting, etc. Almost all of the falsehoods and misleading claims we cover are also told by her, because she signed off on Ben's post and didn't correct the dozens of falsehoods and misleading claims in it.

We originally thought she was more reliable because we hadn't heard from reliable sources what she was saying. Now that it's in writing, we have firm evidence that she has told dozens of falsehoods and misleading claims. 

From Ben: "After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened."

Could you comment?

throw e/acc on there too

Thanks, that's helpful context! 

I find it a bit weird - possibly unhelpful - to blend a big-picture cause prioritization argument with the promotion of a specific matching campaign.

GiveDirectly, Effective Altruism Australia, EA Aotearoa New Zealand, Every.org, The Life You Can Save

What's going on with the coauthorship here - multiple organizations wrote this post together? Should this be read as endorsements, or something else?

Effective Altruism Australia
6mo
Effective Altruism Australia is partnering with GiveDirectly to promote a matching opportunity for the International Day for the Eradication of Poverty for Australian donors, alongside promoting our other partners, which target extreme poverty and improve and save the lives of those living today. From the post:

(1) The topic is often sensationalised by many who talk about it 

Many things are sensationalized. This is not good evidence for or against fertility being a problem. Many accuse AIXR of being sensationalized. 

(2) some of these people, infer that it could result in humanity going extinct. 

I do not think smart fertility advocates believe that populations would slowly dwindle until there was one person left. Obviously that is a silly model. The serious model, described in Ch. 7 of What We Owe the Future, is that economic growth will slow to a c... (read more)

It seems somewhat irresponsible to title this post "every mention of EA in Going Infinite" if it only includes a handful of the many mentions of EA in Going Infinite. Appreciate you clarifying!

I wrote about every mention, but some were summaries rather than direct copies and pastes, which I thought was straightforward for readers.

For example when I say, "He devotes several pages to talking about Peter Singer, Toby Ord and Will MacAskill, and the early version of 80,000 Hours Will was promoting on his visit to Harvard", I mean there were many mentions of effective altruism on those pages!

I also include sections of the book that talk about effective altruism without using that exact phrase.

I don't think there are any I didn't either quote or summarise, but I only read it once, so I could have missed some.

Yes, I think the title should be changed.

sociological (e.g. richer people want less kids)

This misunderstands the fertility problem. Most fertility advocates focus on the fertility gap: the gap between how many children people want and how many they actually have. It's also not that richer people (within countries) want to have fewer kids. We're seeing U-shaped fertility trends, where the rich have more children than the middle class.

This implies it is not a "sociological phenomenon" (except in a trivial sense) and is instead a complex mix of social, cultural and economic ... (read more)

ElliotJDavies
6mo
My claim is (1) The topic is often sensationalised by many who talk about it (2) some of these people, infer that it could result in humanity going extinct. (3) If it's a sociological phenomenon, it's substantially less likely to result in x-risk, because presumably when faced with extinction, future humans would be willing to have more children.    All of these fit squarely under a broad term like "sociological factors".  To be clear, my point wasn't that fertility advocates are correct to point towards this category of explanations, but that they often do, and they're wrong in doing so.   

It's very likely a sociological phenomena, and so behaviour change could occur if/when time occurs


This is extremely vague and hard to parse.

ElliotJDavies
6mo
Broadly speaking, there are two categories of explanations for depopulation: sociological (e.g. richer people want less kids) and biological (e.g. sperm counts are declining). The conversation around depopulation is very emotive, which doesn't bring out the best epistemics in people, so it's worth pointing out that the cause is almost certainly sociological. Said another way, I think some actors refer to depopulation as a "fertility crisis", and this is misleading, and especially unhelpful when trying to derive solutions and forecasts.
Answer by burner, Oct 06, 2023

Chapter 7 of What We Owe the Future has some discussion along these lines. I hope that most EAs are deprioritizing this issue not because they think it's unimportant, but because short-to-medium AI timelines present a more urgent problem.

Yes, instead they should take a play money low liquidity prediction market at face value

titotal
7mo
How about looking at the evidence instead? A large amount of which will soon be evaluated in public court, including testimony from co-conspirators, which is not looking good for SBF.  It's kinda dumb to speculate now when we are getting a ton of extra evidence in the next month, but I personally think there's bugger all chance that all the missing billions are recovered. 
burner
7mo

There's something darkly funny about the idea that one would need to "be a shark," "move fast and break things," "threaten and coerce employees," "crush enemies"...

All to... publish a podcast of already written articles? Do some career coaching?

Morpheus_Trinity
7mo

I feel like this is a cheap shot, and don't like seeing it on the top of this discussion.

I think it can be easy to belittle the accomplishments of basically any org. Most startups seem very unimpressive when they're small. 

A very quick review would show other initiatives they've worked on. Just go to their tag, for instance:
https://forum.effectivealtruism.org/topics/nonlinear-fund

(All this isn't to say where I side on the broader discussion. I think the focus now should be on figuring out the key issues here, and I don't think comments like this help ... (read more)

I certainly don't think it suggests he's a bad actor, but it seems reasonable to consider it improper conduct with a small organization of people living and working together - even if Alice and Chloe don't see it as an issue. I don't have a strong view one way or the other, but it seemed worth flagging in the context of your claim.

Thanks - more sympathetic to the ask in that case, though I don't think you were obliged to wait. 

Within the community tab, 'New and Upvoted' seems to show the same posts month after month. Perhaps newness should gain more weight, given the current posting frequency and upvoting?

The article alleges he was dating an employee who seems to have been a subordinate, which someone might claim is improper conduct.

Repost from LW:

My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)

They also said that in the past day or so (upon becoming aware of the contents of the post), they asked Ben to delay his publication of this post by one week so that they could gather their evidence and show it to Ben before he publishes it (to avoid having him publish false information). However, he refused to do so.

This is really weird to me. These allegations have been circling for over a year, and presumably Nonlinear has known about this piece for months now. Why do they still need to get their evidence together? And even if they do - just due to extr... (read more)

To be clear, I only informed them about my planned writeup on Friday.

(The rest of the time, lots of other people involved were very afraid of retaliation and intimidation, and I wanted to respect that while gathering evidence. I believe if I hadn't made that commitment to people, then I wouldn't have gotten the evidence.)

I'm very disappointed in the author for writing a non-rigorous, slanderous accusation of an organization that does a whole lot of good

What are you accusing of being slanderous?

Influencing the creation of Professor Quirrell in HPMOR and being influenced by Professor Quirrell in HPMOR both seem to correlate with being a bad actor in EA - a potential red flag to watch out for.

David M
8mo
Who’s the other example?
Linch
8mo
Relevant clip: Emphasis mine. How do you know someone's a bad actor, scary to be around, psychopathic, literally Voldemort, etc? Well sometimes the call is actually pretty hard and requires a lot of detailed investigations, nuanced contextual understanding, etc. But in some other times, they'll just tell you.
Elityre
8mo
I recommend that you use a spoiler tag for that last part. Not everyone who wants to has finished the story!

they’ll be paying maybe $500 for a ticket that costs us $1000.

There may be room for more effective price discrimination here. When an attendee's ticket is paid for by a corporation that is not price sensitive, ideally they would pay (at least) the complete cost of their admission. I recall there being tiers beyond "full price" - to sponsor other attendees - but sponsoring others would not be a legitimate corporate expense. Could there be an easy way for corporate attendees to pay the full price?

IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you.

Of course. I just think evaluating and deferring can look quite similar (and a mix of the two is usually taking place). 

OP seems to believe students are deferring because of other frustrations. As many have quoted: "If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong". 

I've attended Arete seminars at Ivy League universities and seen what looked like fairly sophisticated evaluation to me.

but I am very concerned with just how little cause prioritization seems to be happening at my university group

I've heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It's not surprising to me that most people converge on the most popular position within the broader movement. 

Thomas Larsen
8mo
IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you. I think most people probably shouldn't spend lots of time doing cause prio from scratch, but I do think most people should judge the existing cause prio literature on the object level, to the best of their ability. My read of the sentence indicated that there was too much deferring and not enough thinking through the arguments oneself.
Jakob Lohmar
8mo
I'd say that critically examining arguments in cause prioritization is an important part of doing cause prioritization. Just as examining philosophical arguments of others is part of doing philosophy. At least, reviewing and judging arguments does not amount to deferring - which is what the post seems mainly concerned about. Perhaps there is actually no disagreement?
  • Establishing a Justice, Equity, Inclusion and Diversity (JEID) committee.

Why was this valuable?

Dwarkesh Patel recently asked Holden about this:

Dwarkesh Patel 

Are you talking about OpenAI? Yeah. Many people on Twitter might have asked if you were investing in OpenAI. 

Holden Karnofsky 

I mean, you can look up our $30 million grant to OpenAI. I think it was back in 2016–– we wrote about some of the thinking behind it. Part of that grant was getting a board seat for Open Philanthropy for a few years so that we could help with their governance at a crucial early time in their development. I think some people believe that OpenAI has been net... (read more)

NickLaing
9mo
Wow, thanks so much. Basically he seems very, very uncertain about whether it is positive or negative. Very interesting.
Jeff Kaufman
1y
How so? Aren't these both cases where Habryka has similar amounts of professional knowledge? If not, which case do you think he knew more about?

This is really sad and frustrating to see: a community which prides itself on rigorous and independent thinking has taken to reciting the same platitudes that every left-wing organization does. We're supposed to hold ourselves to higher standards than this.

Posts like this make me much less interested in being a part of EA.

lol when people use this burner account, it's usually closer to "this argument could get a bit annoying" than "I feel the need to protect my anonymity for fear of retribution." please don't speak for all burners

Nathan Young
1y
Naaah that's what main is for :P

I disagree with this. For one, OpenPhil has a higher bar now. There's a lot of work that needs to be done. ASB and others might already think this was a very bad grant. There's a cost to dwelling on these things, especially as EA Forum drama rather than as a high-quality post mortem.

Jason
1y
I don't think anyone has actually treated this as a scandal. There have been fair questions and criticisms, but I don't think anyone has suggested improper conduct by Open Phil.

it's not anywhere in any canonical EA materials

This seems a bit obtuse. In any local EA community I've been a part of, poly plays a big part in the culture. 

Plenty of EAs are criticizing it in this very thread.

This is sort of true, but most of them are receiving a lot of downvotes. And this is the first time I've seen a proper discussion about it. 

 I don't have a particular agenda about "what should happen" here. I've said we should scrutinize the ways that polyamorous norms could be abused in high trust communities. I'm not sure what the outcome would be, but I would certainly hope it's not intolerance of poly communities. 

I would readily agree that some - perhaps most - of these problems could also be solved by ensuring EA spaces are purely professional, but it does seem a bit obtuse to not understand that someone could feel more uncomfortable when asked to join a polycule at an EA meet ... (read more)

I certainly don't think it's conclusive, or even strong evidence. As I said, I think it's one thing among many that should inform our priors  here. There's also a different vein of anthropological research that looks at non-monogamy and abuse in cults and other religious contexts, but I'm less familiar with it. 

The alternative - accepting norms of sexual minorities without scrutiny - seems perfectly reasonable in many cases, but because of those reasons I don't think it should be abided by here, especially in light of these women's accounts. ... (read more)

if you are saying "we shouldn't tolerate this in the community", that just is intolerant. 

Ok, fortunately that is not what I am saying. 

Could you clarify what concretely you do want to happen, then, if not less tolerance of polyamory? What would be different, if polyamory was not a sacred cow? What are the possible conclusions we could come to after reflecting on this?

I'm very surprised by this. There are a number of anthropological findings which connect monogamous norms to greater gender equality and other positive social outcomes. Recently, arguments along these lines have been advanced by Joseph Henrich, one of the most prominent evolutionary biologists.

Isn't the research on this almost all comparing monogamy to polygyny? But polyamory, especially as practiced among EAs and adjacent groups doesn't seem very similar to polygyny to me?

What's the mechanism whereby it leads to greater gender equality? 

Something that is above criticism or question (see here), in this case because discourse is often cast as intolerant or phobic.

Jeff was probably not asking what "sacred cow" means; more likely the question was asking in what way polyamory is a sacred cow of EA. I will grant that EA is more tolerant of most personal traits than society typically is, and therefore is more supportive of polyamory than other groups just by not being against it, but it's not anywhere in any canonical EA materials, and certainly not a sacred cow. Plenty of EAs are criticizing it in this very thread.

Amber Dawn
1y
It literally is intolerant. Like if you are saying "we shouldn't tolerate this in the community", that just is intolerant. 

This post is a bit weak in making its case, but it is blindingly obvious that Helena is a grift, and I'm a bit unimpressed by the galaxy-brained reasons (hits-based, etc.) for thinking it might be good.

But in the big picture, occasionally a grant is bad. We can't treat every bad grant as a scandal. 

Though there's a point of diminishing returns to treating every bad grant as a scandal, $500,000 seems non-negligible and worth at least a little scandal. If we make a scandal of every large grant, it incentivizes starting with smaller grants for hits-based giving (where possible).

burner
1y

It's surprising to me that polyamory continues to be such a sacred cow of EA. It's been highly negative for EA's public image, and now it seems to be connected to a substantial amount of abuse. There are a number of reasons our priors should suggest that non-monogamous relationships in high-trust, insular communities can easily lead to abuse. It's always seemed overly optimistic to think EA could avoid these problems. Of course, there have been similar ongoing discussions in the Berkeley Rationalist community for a number of years now.

This seems like one of the most important community issues to reflect on. 

I voted disagree & want to explain why:


I don't think it's a "sacred cow" in EA and I don't think there are a number of reasons our priors should be that way. I very strongly don't think it can be generalised to that extent. (Background: I've been on the receiving end of some bad social dynamics in which polyamory kind of played a role. Think unwanted attention of a person with more social power, not knowing what to do about it, etc. So I think I know what I'm talking about, at least to a small extent.)
 

I think the main negative prior s... (read more)

I don't see why priors should make us suspect non-monogamous relationships would lead to more abuse than monogamous ones.

polyamory continues to be such a sacred cow of EA

I'm not sure what you mean by this?

No, that's not really what I mean. I mean that I doubt these public apologies are generally able to give people the emotional reconciliation that they desire.

They can provide a few things, presumably including PR damage mitigation, a sincere account of their thinking, and perhaps some amount of reconciliation.

My criticism of your post is that it seems intent on optimizing for only one of those - indeed considering it entirely sufficient for a "good apology" without considering how these things trade off, nor considering what we might normatively want an apology to do. In my view, a sincere account of someone's beliefs is very valuable. 

burner
1y

Could you (or someone else) actually make the case for "good apologies" (in the sense you outline in this post) that goes beyond PR concerns?

I understand the desire to know what Bostrom really thinks, but the attention on the structural quality of his apology seems completely undue. None of these elements would presumably reveal more about how Bostrom really thinks than his actual apology. 

In fact, it seems like if our preference is to understand how Bostrom really feels, your "good apology" approach might take us further away from that! Your emphasis... (read more)

What I am arguing for are principles of kindness, empathy, and decency. 

When you engage in actions that hurt people, I think it is a good thing to address that hurt and make things right, and mitigate the damage as much as possible. I do not think Bostrom achieved this goal with his apology. 

 I do not oppose people stating beliefs that might be upsetting to some people, if such beliefs are relevant and important to a discussion at hand.  However, when those beliefs are stated, they should be done so in an empathetic and sensitive manner... (read more)

David M
1y
I guess that the motivation for OP was that people were referring to Bostrom's apology as evidence that he sincerely repented, and deserves to be welcomed back into the fold already; whereas in fact the apology provides scant evidence of sincere introspection and remorse, and so we should not treat him as redeemed, yet. OP describes the way the apology fails to provide this evidence, without which there's no cause for redeeming him yet.

Perhaps unlike OP, I don't want Bostrom to write a false apology by following those rules. Nor do I want a lazy or perfunctory apology to be accepted by the community. We should welcome Bostrom back into the fold on certain conditions, namely, that he is sincerely remorseful; and writing a dysfunctional apology doesn't get him closer to meeting that condition.

If we successfully coordinated to withhold our acceptance until he makes serious amends, we may in fact succeed in causing him to introspect and change more than he otherwise would. Or he might just lie about his remorse. But accepting a bad (or non-) apology throws away the possibility of Nick introspecting.
timunderwood
1y
The problem with Bostrom's apology is that it made the argument worse rather than achieving (the presumed) goal of making the conversation around it as small as possible. There were true things and true impressions he could have said and left that would have done that.
Guy Raveh
1y

Phrases like "EA elevates people" are becoming common, but it is very unclear what they mean. Nick Bostrom created groundbreaking philosophical ideas. Will MacAskill has written extremely popular books and built communities and movements. Sam Bankman-Fried became the richest man under 30 in a matter of months. All of these people have influenced and inspired many EAs because of their actions.

Under any reasonable sense of the word, people are elevating themselves. I think EA is incredibly free from 'cult of personality' problems - in fact it's amazing how quickly people will turn against popular EAs. But in any group, some people are going to get status for doing their work well. 

burner
1y

I am very surprised by the warm reception to this post. To my mind, this is exactly the type of rhetoric we should be discouraging on the Forums. It's insinuating all kinds of scandals

(I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement)

without making any specific allegations or points, which becomes somehow acceptable within the emotional frame of "I am TIRED." Presumably many other people, including those directly impacted by these things, are tired too, and we need to use reason to adjudicate how we should respond.  

I had a negative reaction to the post but felt hesitant to reply because of the emotional content. It does suck what the OP is experiencing - I think they (and others) could make less of their identity about the EA movement, and that this would be a good thing. I don't like that 'small-scale EA community builders' are having to apologise for things others in EA have done, or having to spend time figuring out how to react to EA drama. That does seem like a waste of time and emotional energy, and also unnecessary.

I think it is very difficult to litigate point three further without putting certain people on trial and getting into their personal details, which I am not interested in doing and don't think is a good use of the Forum. For what it's worth, I haven't seen your Twitter or anything from you.

I should have emphasized more that there are consistent critics of EA who I don't think are acting in bad faith at all. Stuart Buck seems to have been right early on a number of things, for example. 

Your Bayesian argument may apply in some cases but it fails in othe... (read more)

Peter McLaughlin
1y
I'm not sure what you mean by saying that my Bayesian argument fails in some cases? 'P(X|E)>P(X) if and only if P(E|X)>P(E|not-X)' is a theorem in the probability calculus (assuming no probabilities with value zero or one). If the likelihood ratio of X given E is greater than one, then upon observing E you should rationally update towards X. If you just mean that there are some values of X which do not explain the events of the last week, such that P(events of the last week | X) ≤ P(events of the last week | not-X), this is true but trivial. Your post was about cases where 'this catastrophe is in line with X thing [critics] already believed'. In these cases, the rational thing to do is to update toward critics.
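The probability identity cited in the comment above can be sanity-checked numerically. Here is a minimal sketch (the `posterior` helper is mine, not from the comment) that verifies, over a grid of values, that updating upward on evidence is exactly equivalent to a likelihood ratio greater than one; it uses exact rational arithmetic so that ties like P(E|X) = P(E|not-X) are handled without floating-point noise:

```python
# Check: P(X|E) > P(X) iff P(E|X) > P(E|not-X),
# for probabilities strictly between 0 and 1.
from fractions import Fraction
from itertools import product

def posterior(p_x, p_e_given_x, p_e_given_not_x):
    """P(X|E) computed via Bayes' rule with the law of total probability."""
    p_e = p_e_given_x * p_x + p_e_given_not_x * (1 - p_x)
    return p_e_given_x * p_x / p_e

grid = [Fraction(n, 10) for n in (1, 3, 5, 7, 9)]
for p_x, lx, lnx in product(grid, repeat=3):
    # Updating toward X happens exactly when the likelihood ratio exceeds 1.
    assert (posterior(p_x, lx, lnx) > p_x) == (lx > lnx)
```

Note that when the likelihoods are equal, the posterior equals the prior exactly, which is why exact `Fraction` arithmetic is used rather than floats.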

These areas all seem well-identified, but the essential problem is that EA doesn't have anywhere near sufficient talent for its top-priority causes already.
