I think you should speak to Naming What We Can https://forum.effectivealtruism.org/posts/54R2Masg3C9g2GxHq/announcing-naming-what-we-can-1
Though I think these days they go by ‘CETACEANS’ (the Centre for Effectively, Transparently, Accurately, Clearly, Effectively, and Accurately Naming Stuff).
Maybe I misunderstood you.
I think AIM doesn’t constitute evidence for this. Your top hypothesis should be that they don’t think AI safety is that good of a cause area, before positing the more complicated explanation. I say this partly based on interacting with people who have worked at AIM.
AIM simply doesn't rate AI safety as a priority cause area. It's not any particular organisation's job to work on your favourite cause area. They are allowed to have a different prioritisation from you.
To contextualize the final point I made, it seems that in fact there is a lot of criminality among the ultra rich. https://forum.effectivealtruism.org/posts/d8nW46LrTkCWdjiYd/rates-of-criminality-amongst-giving-pledge-signatories (No comment on how malicious it is)
I don't think it's productive to name just one or two of the very many biases one could bring up. I would need some reason to think this bias is more worth mentioning than other biases (such as Ben's payment to Alice and Chloe, or commenters' friendships, etc.).
David - I mention the gender bias in moral typecasting in this context because (1) moral typecasting seems especially relevant in these kinds of organizational disputes, (2) I've noticed some moral typecasting in this specific discussion on EA Forum, and (3) many EAs are already familiar with the classical cognitive biases, many of which have been studied since the early 1970s, but may not be familiar with this newly researched bias.
Edit: I misread what you were saying. I thought you were saying 'Kat has dodged questions about whether it was true', and 'It's not clear the anecdotes are being presented as real'. Actually, Kat said it was true.
I just mean one shouldn't end up in a situation where one claims nobody should do X, having just done X oneself. That would be deeply weird.
IIRC, Truman said something at the United Nations like "we need to keep the world free from war", right after having fought one of the largest wars in history (WW2). Doesn't seem that weird to me.
I phrased that poorly, please see my reply to Vlad's reply for an explanation.
I weakly think Ben's decision to search for bad information rather than good was a good policy, but that the investigation was lacking in some other aspects.
First they came for the... But I said nothing.
This is extremely distasteful. We have sufficient evidence about Nonlinear now, I think, and fortunately it is all in public view.
I read the author's intention, when she makes the case for 'forgiveness as a virtue', as a bid to (1) seem more virtuous herself, and (2) make others more likely to forgive her (since she was so generous to her accusers - at least in that section - and we want to reciprocate generosity). I think this is an effective persuasive writing technique, but is not relevant to the questions at issue (who did what).
Another related 'persuasive writing' technique I spotted was that, in general, Kat is keen to phrase the hypothesis where Nonlinear did bad things in an ...
I'm confused. You say "what's at issue is the overall character of Nonlinear staff", but that Kat displaying virtues like forgiveness "is not relevant to the questions at issue (who did what)". (I think both people's character and "who did what" are relevant, and a lot of the post addresses "who did what".)
Incidentally, your interpretation of Kat as being manipulative happens to be an example of the lack of goodwill that my original comment was referring to. Whether or not goodwill is in general desirable, I think viewing things through such an overly negative lens puts you at risk of confirmation bias.
If what's at issue was the 'overall character of Nonlinear staff', then is it fair to assume you fully disagreed with Ben's one-sided approach?
Retaliation is bad. If you think doing X is bad, then you shouldn't do X, even if you're 'only doing it to make the point that doing X is bad'.
Retaliation is bad.
People seem to be using “retaliation” in two different senses: (1) punishing someone merely in response to their having previously acted against the retaliator’s interests, and (2) defecting against someone who has previously defected in a social interaction analogous to a prisoner’s dilemma, or in a social context in which there is a reasonable expectation of reciprocity. I agree that retaliation is bad in the first sense, but Will appears to be using ‘retaliation’ in the second sense, and I do not agree that retaliation is bad in this ...
So you endorse "always cooperate" over "tit-for-tat" in the Prisoner's Dilemma?
Seems to me there are 2 consistent positions here:
1. The thing is bad, in which case the person who did it first is worse. (They were the first to defect.)
2. The thing is OK, in which case the person who did it second did nothing wrong.
I don't think it's particularly blameworthy to both (a) participate in a defect/defect equilibrium, and (b) try to coordinate a move away from it.
EDIT: A couple other points
I know the payoff structure here might not be an actual Prisoner's
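For readers less familiar with the game theory being invoked here, a minimal simulation may help. This is only an illustrative sketch: the payoff values are the standard textbook ones (T=5, R=3, P=1, S=0), and the strategy and function names are mine, not anything from the thread. It shows why "always cooperate" is exploitable in a way "tit-for-tat" is not:

```python
# Iterated Prisoner's Dilemma sketch: compare 'always cooperate' with
# 'tit-for-tat' when each faces an unconditional defector.
# Payoffs are the standard illustrative values (T=5, R=3, P=1, S=0).

PAYOFF = {  # (my_move, their_move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_cooperate(history):
    return 'C'

def always_defect(history):
    return 'D'

def tit_for_tat(history):
    # Cooperate first, then copy the opponent's previous move.
    return 'C' if not history else history[-1][1]

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game; each history entry is (own move, opponent's move)."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

print(play(always_cooperate, always_defect))  # exploited every round: (0, 50)
print(play(tit_for_tat, always_defect))       # loses only round one: (9, 14)
```

Against a defector, "always cooperate" is exploited in every round, while tit-for-tat concedes only the first; against a fellow cooperator, both do equally well. That asymmetry is the usual argument for treating reciprocal defection differently from unprovoked defection.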
Thanks! It's really helpful to have this overview; it makes me more likely to read the sequence itself (partly by directing me to which parts cover what).
On the wiki:
It seems like 'topics' are trying to serve at least two purposes: linking to wiki articles with info to orient people, and classifying/tagging forum posts. These purposes don't need to be so tied together as they currently are.
One could want to have e.g. 3 classification labels to help subdivide a topic (I think we currently have 'AI safety', 'AI risks', and 'AI alignment'), but that seems like a bad reason to write 3 separate similar articles, which duplicates effort in cases where the topics have a lot of overlap.
A lot of writing time could be saved if tags and wiki articles were split out such that closely related tags could point to the same wiki article.
My hard-workingness is really dependent on my work context (e.g., whether I have a job or not). A graph of my hard-workingness over the past year peaks really strongly from Jan-March, when I was working on EAGxCambridge, because of the soon and immovable deadlines, and because I was the main person responsible for it. I tracked 70 hrs/wk of work in the last month (unsustainable). Since then I've been far less hard-working (which I prefer). I think if I had a baby, I'd also become really hard-working, because I'd be one of the people most responsible for the 'project'.
One can submit new features here: https://www.swapcard.com/product-roadmap
I just submitted what you said.
Because there's barely anything relevant that is common to both. We don't have any moral obligation to companies, nor does it make the world better in my view to "rehabilitate" companies. A person has to continue existing in society even after committing a crime. A company doesn't have to continue existing.
Good work. It occurred to me that this might be happening but I didn’t do the sleuth work. Thanks.
The problem with Kat’s text is that it’s a very thinly veiled threat to end someone’s career in an attempt to control Nonlinear’s image. There is no context that justifies such a threat.
Just for the record, I think there are totally contexts that could justify that threat. I would be surprised if one of those had occurred here, but I can totally imagine scenarios where the behavior in the screenshot is totally appropriate (or at the very least really not that bad, given the circumstances).
Is my impression correct that EAGxs tend to cost vastly less than this? If so, what explains the difference? (EAGxCambridge cost around $0.25M, and I think other ones had smaller budgets?)
If it's that larger venues cost more in a super-linear way, that suggests having more and smaller events.
Some reasons could be:
a) The purpose of the rest of the questions is to inform the initial sift, not later stages of the application. If you have been referred by a trusted colleague, the optional questions serve no further purpose for the initial sift, so answering them would waste applicants' time.
b) Saving applicants' time on the initial application makes you likely to receive more applications to choose from.
However, these referrals could indeed have a nepotistic effect by allowing networking to have more of an influence on the ease of getting t...
Thanks for saying this. This totally rhymes with my experience. I assume that if an application says it will take 15 minutes, I will probably need to spend at least an hour on it (assuming I actually care about getting the job).
Another reason not to run at maximum capacity whenever you have the chance: it conserves the ability to 'sprint' when you actually need or want to.
See also various related thoughts, from the latest 80k After Hours episode with Luisa Rodriguez interviewing Hannah Boettcher:
Luisa:
Concretely, I very much have this if it’s the end of a workday, and I have any more energy and I’m not totally spent, I could obviously do a bit more work. Or just money: like, if I have any savings, that feels wrong. And then what do we do?
Hannah:
...Like, every ti
@JP Addison are you open to me working on a PR that offers this to authors as a toggle-able option?
Looks like this feature is being rolled out on new posts. Or at least one post: https://forum.effectivealtruism.org/posts/gEmkxFuMck8SHC55w/introducing-the-effective-altruism-addiction-recovery-group
We did encourage speakers to include action points and action-relevant information in their content, and tried to prioritise action-relevant workshops (e.g. "what it takes to found a charity"); I think that's about all. Thanks for the tip to include the goals in the write-up.
Thanks, Nick.
I wanted to aim high with cause diversity, as it seemed vital to convey the important norm that EA is a research question rather than a pile of 'knowledge' one is supposed to imbibe. I consider us to have failed to meet our ambitions as regards cause diversity, and would advise future organisers to move on this even earlier than you think you need to. It seems to me that an EAGx (aimed more towards less experienced people) should do more to showcase cause diversity than an EA Global.
From our internal content strategy doc:
...Highest priority:
- AI ri
What I'll say should be taken more as representative of how I've been thinking, than of how CEA or other people think about it.
These were our objectives, in order:
1: Connect the EA UK community.
2: Welcome and integrate less well-connected members of the community. Reduce the social distance within the UK EA community.
3: Inspire people to take action based on high-quality reasoning.
The main emphasis was on 1, where the theory of impact is something like:
The EA community will achieve more by working together than they will by working as indivi...
Re fuzzy search... I couldn't find this post. Search shouldn't be converting 'EA' into a separate search for the word 'effective' absent 'altruism'. Also it feels like it isn't weighting the title heavily enough relative to post body, since the correct title isn't far from my search query.
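To illustrate the weighting point, here is a hypothetical sketch (not the Forum's actual search code; the weights, function, and example posts are all made-up) of field-weighted scoring, where a match in the title counts for more than a match in the body:

```python
# Hypothetical sketch of field-weighted search scoring: title matches
# are boosted relative to body matches. Weights are illustrative.

def score(query, title, body, title_weight=3.0, body_weight=1.0):
    """Score a document by a weighted count of query tokens per field."""
    tokens = query.lower().split()
    title_words = title.lower().split()
    body_words = body.lower().split()
    total = 0.0
    for tok in tokens:
        total += title_weight * title_words.count(tok)
        total += body_weight * body_words.count(tok)
    return total

# A single title match (3.0) outranks two body matches (2.0):
print(score("search", "Why EA needs better search", "short body"))
print(score("search", "Unrelated post", "search and search"))
```

Under a scheme like this, a post whose title nearly matches the query would reliably outrank posts that merely mention the terms in the body.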
This is a good design because it’s quite standard across a few websites (social media and fora) that timestamps are links to permalinks.
I got a cheapish one off Amazon a few years back, but I noticed it was melting holes in the plasticky mat I put it on. I didn't try to fix this. Does yours get very hot on the underside?
> Money is our biggest bottleneck for right now. Across everything we want to do, the core reason we can’t do more of it is not having enough money.
This suggested to me that I should consider donating to RP. Would donations from small donors be true counterfactual donations, or would they be more likely to displace expected grants from big funders?
For this kind of reason, I would be interested to hear more about where your funding comes from currently. I couldn't determine this from the 'transparency' page on your website; it would seem good to add something about it on there!
Donations from small donors would be true counterfactual donations as we don't expect to raise from institutional funders all the money we would be able to productively spend. Given our size, institutional funders often expect us to raise a decent portion of our money from individual donors.
For transparency, in 2022 we raised $10,693,023.74. It broke down as:
- 40% from Open Philanthropy
- 4% from EA Funds
- 29% as donations / gifts from other foundations and institutions not named OP or EA Funds
- 18% from individuals giving over $100K
- 6% from us providing direc
I heard that the EAG London debate wasn't really a debate as such; it was two talks back to back presenting different views on something. That seems less valuable than a back and forth.
I'm taking away that how much I should believe the results is super sensitive to how I decide to model the distribution of actual intervention quality, and how I decide to model the contribution of noise.
How would I infer how to model those things?
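One way to see the sensitivity being described: under the simplest textbook assumptions (a normal prior over true intervention quality and normal measurement noise, both of which are my illustrative assumptions, not anything from the post), the same measured result warrants very different beliefs depending on the assumed noise level:

```python
# Sketch of how belief in a measured result depends on the assumed
# quality distribution and noise model. Normal prior (mean 0) and
# normal noise are illustrative modeling choices, not from the post.

def shrunk_estimate(measured, prior_sd, noise_sd):
    """Posterior mean of true quality given one noisy measurement.

    With a zero-mean normal prior and normal noise, the posterior mean
    shrinks the raw measurement toward zero by a signal-to-noise factor.
    """
    shrink = prior_sd**2 / (prior_sd**2 + noise_sd**2)
    return shrink * measured

# The same measured effect of 10 is read very differently under
# different noise assumptions:
print(shrunk_estimate(10, prior_sd=1, noise_sd=1))  # 5.0
print(shrunk_estimate(10, prior_sd=1, noise_sd=3))  # 1.0
```

Tripling the assumed noise cuts the posterior estimate by a factor of five here, which is the kind of sensitivity the comment above is pointing at; inferring the right prior and noise models would itself require data (e.g. replications or follow-up measurements of past top-rated interventions).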
As a snapshot of the landscape 1 year on, post-FTX:
80,000 Hours lists 62 roles under the skill sets 'software engineering' (50) and 'information security' (18) when I use the filter to exclude 'career development' roles.
This sounds like a wealth of roles, but note that the great majority (45) are in AI (global health and development is a distant second, at 6), and the great majority are in the Bay Area (35; London is second with 5).
Of course, this isn't a perfectly fair test, as I just did the quick thing of using filters on the 80K job board rather than checking all the organisations as Sebastian did last year.
I get a ‘comment not found’ response to your link.