What are the best summer opportunities for (freshman and sophomore) college students in CS/ML interested in technical alignment or AI policy?
When I have reviewed grant applications, most have (unfortunately) fallen closer to "This idea doesn't make any sense" than to "This idea would be perfect if they just had one more thing". When an application falls into the latter category, I suspect recipients do often get advice.
I think the problem is that most feedback would be too harsh and fundamental - these are very difficult and emotionally costly conversations to have. Feedback can also leave applicants more frustrated and spread low-fidelity advice about what the grant maker is looking for. A rejection (hopefully) encourages the applicant to read and network more to form better plans.
I would encourage rejected applicants to speak with accepted ones for better advice.
Most of this seems focused on Alice's experience and allegations. As I understand it, most parties involved - including Kat - believe Chloe to be basically reliable, or at least much more reliable.
Given all that, I'm surprised that this piece does not do more to engage with what Chloe herself wrote about her experience in the original post: https://forum.effectivealtruism.org/posts/32LMQsjEMm6NK2GTH/sharing-information-about-nonlinear?commentId=gvjKdRaRaggRrxFjH
Chloe has been unreliable. She lied about not having a work contract, she lied about the compensation structure, she lied about how many incubatees we had, she lied about being able to live/work apart, doing the accounting, etc etc. She is also responsible for almost all of the falsehoods and misleading claims we cover, because she signed off on Ben's post and didn't correct the dozens of falsehoods and misleading claims in it.
We originally thought she was more reliable because we hadn't heard from reliable sources what she was saying. Now that it's in writing, we have firm evidence that she has told dozens of falsehoods and misleading claims.
From Ben: "After this, there were further reports of claims of Kat professing her romantic love for Alice, and also precisely opposite reports of Alice professing her romantic love for Kat. I am pretty confused about what happened."
Could you comment?
There was a Works in Progress magazine article about this https://worksinprogress.co/issue/markets-in-fact-checking
Thanks, that's helpful context!
I find it a bit weird - possibly unhelpful - to blend a big picture cause prioritization argument and the promotion of a specific matching campaign.
GiveDirectly, Effective Altruism Australia, EA Aotearoa New Zealand, Every.org, The Life You Can Save
What's going on with the coauthorship here - multiple organizations wrote this post together? Should this be read as endorsements, or something else?
(1) The topic is often sensationalised by many who talk about it
Many things are sensationalized. This is not good evidence for or against fertility being a problem. Many accuse AIXR of being sensationalized.
(2) some of these people, infer that it could result in humanity going extinct.
I do not think smart fertility advocates believe that populations would slowly dwindle until there was one person left. Obviously that is a silly model. The serious model, described in Ch. 7 of What We Owe the Future, is that economic growth will slow to a c...
It seems somewhat irresponsible to title this post "every mention of EA in Going Infinite" if it only includes a handful of the many mentions of EA in Going Infinite. Appreciate you clarifying!
I wrote about every mention, but some were summaries rather than direct copies and pastes, which I thought was straightforward for readers.
For example when I say, "He devotes several pages to talking about Peter Singer, Toby Ord and Will MacAskill, and the early version of 80,000 Hours Will was promoting on his visit to Harvard", I mean there were many mentions of effective altruism on those pages!
I also include sections of the book that talk about effective altruism without using that exact phrase.
I don't think there are any I didn't either quote or summarise, but I only read it once, so I could have missed some.
sociological (e.g. richer people want less kids)
This misunderstands the fertility problem. Most fertility advocates focus on the fertility gap - the gap between how many children people want to have and how many they actually have (which is fewer than they want). It's also not that richer people (within countries) want to have fewer kids. We're seeing U-shaped fertility trends, where the rich have more children than the middle class.
This implies it is not a "sociological phenomenon" (except in a trivial sense) and is instead a complex mix of social, cultural and economic ...
It's very likely a sociological phenomena, and so behaviour change could occur if/when time occurs
This is extremely vague and hard to parse.
Chapter 7 of What We Owe the Future has some discussion along these lines. I hope that most EAs are deprioritizing this issue not because they think it's unimportant, but because short to medium AI timelines present a more urgent problem.
There's something darkly funny about the idea that one would need to "be a shark," "move fast and break things," "threaten and coerce employees," "crush enemies"...
All to... publish a podcast of already written articles? Do some career coaching?
I feel like this is a cheap shot, and don't like seeing it on the top of this discussion.
I think it can be easy to belittle the accomplishments of basically any org. Most startups seem very unimpressive when they're small.
A very quick review would show other initiatives they've worked on. Just go to their tag, for instance:
https://forum.effectivealtruism.org/topics/nonlinear-fund
(All this isn't to say where I side on the broader discussion. I think the focus now should be on figuring out the key issues here, and I don't think comments like this help ...
I certainly don't think it suggests he's a bad actor, but it seems reasonable to consider it improper conduct with a small organization of people living and working together - even if Alice and Chloe don't see it as an issue. I don't have a strong view one way or the other, but it seemed worth flagging in the context of your claim .
Thanks - more sympathetic to the ask in that case, though I don't think you were obliged to wait.
Within the community tab, 'New and Upvoted' seems to show the same posts, month after month. Perhaps recency should gain more weight, given the current posting frequency and upvoting?
The article alleges he was dating an employee who seems to have been a subordinate, which someone might claim is improper conduct.
Repost from LW:
My understanding (definitely fallible, but I’ve been quite engaged in this case, and am one of the people Ben interviewed) has been that Alice and Chloe are not concerned about this, and in fact that they both wish to insulate Drew from any negative consequences. This seems to me like an informative and important consideration. (It also gives me reason to think that the benefits of gaining more information about this are less likely to be worth the costs.)
They also said that in the past day or so (upon becoming aware of the contents of the post), they asked Ben to delay his publication of this post by one week so that they could gather their evidence and show it to Ben before he publishes it (to avoid having him publish false information). However, he refused to do so.
This is really weird to me. These allegations have been circling for over a year, and presumably Nonlinear has known about this piece for months now. Why do they still need to get their evidence together? And even if they do - just due to extr...
To be clear I only informed them about my planned writeup on Friday.
(The rest of the time lots of other people involved were v afraid of retaliation and intimidation and I wanted to respect that while gathering evidence. I believe if I hadn't made that commitment to people then I wouldn't have gotten the evidence.)
I'm very disappointed in the author for writing a non-rigorous, slanderous accusation of an organization that does a whole lot of good
What are you accusing of being slanderous?
Influencing the creation of Professor Quirrel in HPMOR and being influenced by Professor Quirrel in HPMOR both seem to correlate with being a bad actor in EA - a potential red flag to watch out for.
they’ll be paying maybe $500 for a ticket that costs us $1000.
There may be room for more effective price discrimination here. When a ticket to EAG is paid for by a corporation that is not price sensitive, ideally the attendee would pay (at least) the complete cost of their admission. I recall there being tiers beyond "full price" - to sponsor other attendees - but sponsorship would not be a legitimate corporate expense. Could there be an easy way for corporate attendees to pay the full price?
IMO there's a difference between evaluating arguments to the best of your ability and just deferring to the consensus around you.
Of course. I just think evaluating and deferring can look quite similar (and a mix of the two is usually taking place).
OP seems to believe students are deferring because of other frustrations. As many have quoted: "If after Arete, someone without background in AI decides that AI safety is the most important issue, then something likely has gone wrong".
I've attended Arete seminars at Ivy League universities and seen what looked like fairly sophisticated evaluation to me.
but I am very concerned with just how little cause prioritization seems to be happening at my university group
I've heard this critique in different places and never really understood it. Presumably undergraduates who have only recently heard of the empirical and philosophical work related to cause prioritization are not in the best position to do original work on it. Instead they should review arguments others have made and judge them, as you do in the Arete Fellowship. It's not surprising to me that most people converge on the most popular position within the broader movement.
Dwarkesh Patel recently asked Holden about this:
Dwarkesh Patel
Are you talking about OpenAI? Yeah. Many people on Twitter might have asked if you were investing in OpenAI.
Holden Karnofsky
I mean, you can look up our $30 million grant to OpenAI. I think it was back in 2016–– we wrote about some of the thinking behind it. Part of that grant was getting a board seat for Open Philanthropy for a few years so that we could help with their governance at a crucial early time in their development. I think some people believe that OpenAI has been net...
This is really sad and frustrating to see: a community which prides itself on rigorous and independent thinking has taken to reciting the same platitudes as every left-wing organization. We're supposed to hold ourselves to higher standards than this.
Posts like this make me much less interested in being a part of EA.
lol when people use this burner account, it's usually closer to "this argument could get a bit annoying" than "I feel the need to protect my anonymity for fear of retribution." please don't speak for all burners
I disagree with this. For one, OpenPhil has a higher bar now. There's a lot of work that needs to be done. ASB and others might already think this was a very bad grant. There's a cost to dwelling on these things, especially as EA Forum drama rather than a high-quality post mortem.
it's not anywhere in any canonical EA materials
This seems a bit obtuse. In any local EA community I've been a part of, poly plays a big part in the culture.
Plenty of EAs are criticizing it in this very thread.
This is sort of true, but most of them are receiving a lot of downvotes. And this is the first time I've seen a proper discussion about it.
I don't have a particular agenda about "what should happen" here. I've said we should scrutinize the ways that polyamorous norms could be abused in high trust communities. I'm not sure what the outcome would be, but I would certainly hope it's not intolerance of poly communities.
I would readily agree that some - perhaps most - of these problems could also be solved by ensuring EA spaces are purely professional, but it does seem a bit obtuse to not understand that someone could feel more uncomfortable when asked to join a polycule at an EA meet ...
I certainly don't think it's conclusive, or even strong evidence. As I said, I think it's one thing among many that should inform our priors here. There's also a different vein of anthropological research that looks at non-monogamy and abuse in cults and other religious contexts, but I'm less familiar with it.
The alternative - accepting norms of sexual minorities without scrutiny - seems perfectly reasonable in many cases, but because of those reasons I don't think it should be abided by here, especially in light of these women's accounts. ...
if you are saying "we shouldn't tolerate this in the community", that just is intolerant.
Ok, fortunately that is not what I am saying.
Could you clarify what concretely you do want to happen, then, if not less tolerance of polyamory? What would be different, if polyamory was not a sacred cow? What are the possible conclusions we could come to after reflecting on this?
Jeff was probably not asking what "sacred cow" means; more likely the question was asking in what way polyamory is a sacred cow of EA. I will grant that EA is more tolerant of most personal traits than society typically is, and therefore is more supportive of polyamory than other groups just by not being against it, but it's not anywhere in any canonical EA materials, and certainly not a sacred cow. Plenty of EAs are criticizing it in this very thread.
This post is a bit weak in making its case, but it is blindingly obvious that Helena is a grift, and I'm a bit unimpressed by galaxy-brained reasons (hits-based, etc.) for thinking it might be good.
But in the big picture, occasionally a grant is bad. We can't treat every bad grant as a scandal.
Though there's a point of diminishing returns to treating every bad grant as a scandal, $500,000 seems non-negligible and worth at least a little scandal. And if we do make scandals of all large grants, that incentivizes starting with smaller grants for hits-based giving (where possible).
It's surprising to me that polyamory continues to be such a sacred cow of EA. It's been highly negative for EA's public image, and now it seems to be connected to a substantial amount of abuse. There are a number of reasons our priors should suggest that non-monogamous relationships in high-trust, insular communities can easily lead to abuse. It's always seemed overly optimistic to think EA could avoid these problems. Of course, there have been similar ongoing discussions in the Berkeley Rationalist community for a number of years now.
This seems like one of the most important community issues to reflect on.
I voted disagree & want to explain why:
I don't think it's a "sacred cow" in EA and I don't think there are a number of reasons our priors should be that way. I very strongly don't think it can be generalised to that extent. (Background: I've been on the receiving end of some bad social dynamics in which polyamory kind of played a role. Think unwanted attention of a person with more social power, not knowing what to do about it, etc. So I think I know what I'm talking about, at least to a small extent.)
I think the main negative prior s...
I don't see why priors should make us suspect non-monogamous relationships would lead to more abuse than monogamous ones.
No, that's not really what I mean. I mean that I generally doubt these public apologies are able to give people the emotional reconciliation that they desire.
They can provide a few things, presumably including PR damage mitigation, a sincere account of their thinking, and perhaps some amount of reconciliation.
My criticism of your post is that it seems intent on optimizing for only one of those - indeed considering it entirely sufficient for a "good apology" without considering how these things trade off, nor considering what we might normatively want an apology to do. In my view, a sincere account of someone's beliefs is very valuable.
Could you (or someone else) actually make the case for "good apologies" (in the sense you outline in this post) that goes beyond PR concerns?
I understand the desire to know what Bostrom really thinks, but the attention on the structural quality of his apology seems completely undue. None of these elements would presumably reveal more about how Bostrom really thinks than his actual apology.
In fact, it seems like if our preference is to understand how Bostrom really feels, your "good apology" approach might take us further away from that! Your emphasis...
What I am arguing for are principles of kindness, empathy, and decency.
When you engage in actions that hurt people, I think it is a good thing to address that hurt and make things right, and mitigate the damage as much as possible. I do not think Bostrom achieved this goal with his apology.
I do not oppose people stating beliefs that might be upsetting to some people, if such beliefs are relevant and important to a discussion at hand. However, when those beliefs are stated, they should be done so in an empathetic and sensitive manner...
Phrases like "EA elevates people" are becoming common, but it is very unclear what they mean. Nick Bostrom created groundbreaking philosophical ideas. Will MacAskill has written extremely popular books and built communities and movements. Sam Bankman-Fried became the richest man under 30 in a matter of months. All of these people have influenced and inspired many EAs because of their actions.
Under any reasonable sense of the word, people are elevating themselves. I think EA is incredibly free from 'cult of personality' problems - in fact it's amazing how quickly people will turn against popular EAs. But in any group, some people are going to get status for doing their work well.
I am very surprised by the warm reception to this post. To my mind, this is exactly the type of rhetoric we should be discouraging on the Forums. It's insinuating all kinds of scandals
(I am tired of drama, scandals, and PR. I am tired of being in a position where I have to apologize for sexism, racism, and other toxic ideologies within this movement)
without making any specific allegations or points, which becomes somehow acceptable within the emotional frame of "I am TIRED." Presumably many other people, including those directly impacted by these things, are tired too, and we need to use reason to adjudicate how we should respond.
I had a negative reaction to the post but felt hesitant to reply because of the emotional content. What the OP is experiencing does suck - I think they (and others) could make less of their identity be about the EA movement, and that this would be a good thing. I don't like that 'small-scale EA community builders' are having to apologise for things others in EA have done, or having to spend time figuring out how to react to EA drama. That does seem like a waste of time and emotional energy, and also unnecessary.
I think it is very difficult to litigate point three further without putting certain people on trial and getting into their personal details, which I am not interested in doing and don't think is a good use of the Forum. For what it's worth, I haven't seen your Twitter or anything from you.
I should have emphasized more that there are consistent critics of EA who I don't think are acting in bad faith at all. Stuart Buck seems to have been right early on a number of things, for example.
Your Bayesian argument may apply in some cases but it fails in othe...
These areas all seem well-identified, but the essential problem is that EA doesn't have nearly sufficient talent for its top priority causes already.
It seems like about half the country disagrees with that intuition?