All of Neel Nanda's Comments + Replies

As far as I'm aware, Coefficient Giving may slightly adjust which global health causes they support based on how neglected those are, but it's less than a 1:1 effect, and the size of the global health funding pool at CG is fairly fixed. And there are a bunch of people dying each year, especially given the foreign aid cuts, who would not die if there were more money given to global health stuff, including GiveWell top charities (if nothing else, GiveDirectly seems super hard to saturate). So I don't really see much cause for despondency here, your donations... (read more)

8
Kestrel🔸
As this person seems very worried about counterfactuals, I should probably point out that the All Grants Fund does still make substantial grants to the Top Charities because they don't get enough granting opportunities that are reliably estimated as more effective than a top charity, so on the margin your donations are equivalent. This may change in future - GiveWell are investigating lots more scalable grants in things like water treatment and humanitarian contexts.

Note that Dominic Cummings, then one of the most powerful men in the UK, [credits the rationality community](https://x.com/dominic2306/status/1373333437319372804) for convincing him that the UK needed to change its coronavirus policy (which I personally am very grateful for!). So it seems unlikely to have been that obvious.

-3
Yarrow Bouchard 🔸
It's quite a strange and interesting story, but I don't think it supports the case that LessWrong actually called covid earlier than others. Let's get into the context a little bit.

First off, Dominic Cummings doesn't appear to be a credible person on covid-19, and seems to hold strange, fringe views. For example, in November 2024 he posted a long, conspiratorial tweet which included the following:

Incidentally, Cummings also had a scandal in the UK around allegations that he inappropriately violated the covid-19 lockdown and subsequently wasn't honest about it (possibly lied about it). This also makes me a bit suspicious about his reliability.

This situation with Dominic Cummings reminds me a bit of how Donald Trump's staffers in the White House have talked about how it's nearly impossible to get him to read their briefings, but he's obsessed with watching Fox News. Unfortunately, the information a politician pays attention to and acts on is not necessarily the best information, or the source that conveyed that information first.

As mentioned in the post above, there were already mainstream experts like the CDC giving public warnings before the February 27, 2020 blog post that was republished on LessWrong on February 28. Is it possible Dominic Cummings was, for whatever reason, ignoring warnings from experts while, oddly, listening to them from bloggers? Is Cummings' narrative, in general, reliable?

I decided to take a look at the timeline of the UK government's response to covid in March 2020. There's an article from the BBC published on March 14, 2020, headlined, "Coronavirus: Some scientists say UK virus strategy is 'risking lives'". Here's how the BBC article begins:

The open letter says:

The BBC article also mentions another open letter, then signed by 200 behavioural scientists (eventually, signed by 680), challenging the government's rationale for not instituting a lockdown yet. That letter opened for signatures on March 13, 2020 and closed for si
9
Jason
Although I think Yarrow's claim is that the LW community was not "particularly early on covid [and did not give] particularly wise advice." I don't think the rationality community saying things that were not at the time "obvious" undermines this conclusion, as long as those things were also being said in a good number of other places at the same time.

Cummings was reading rationality material, so that had the chance to change his mind. He probably wasn't reading (e.g.) the r/preppers subreddit, so its members could not get this kind of credit.

(Another example: Kim Kardashian got Donald Trump to pardon Alice Marie Johnson and probably had some meaningful effect on his first administration's criminal-justice reforms. This is almost certainly a reflection of her having access, not evidence that she is a first-rate criminal justice thinker or that her talking points were better than those of others supporting Johnson's clemency bid.)

ETA: because I think lots of the dollars from individual donors in the EA giving space come from people with 1:1 or better employer matches, like Google or Anthropic

Google's donation match is $10k per person, and I would guess a bunch of donations from Googlers are unmatched

What do you mean by giving to Manifund's regranting program? It's not one place to donate to. It's a bunch of different people who get regranting budgets. You can give to one of those people, but how the money gets used depends a ton on who, which seems important

If you're looking for something x-risk related, then I think something like the Longview Emerging Challenges Fund is better: https://www.longview.org/fund/emerging-challenges-fund/

3
Guillem Lajara🔸
Hi Neel, thanks for your comment! You're right. Manifund's regranting system is quite different from the other funds, and we've spent a lot of time discussing how we should allocate our Future of the Planet funds. It was by far the most nuanced cause area to decide on. Here’s a slightly modified copy-paste of my reply to Austin, who asked a very similar question: Regarding the question of which regrantor to support: we're planning to choose at the end of each allocation cycle (every 3-4 months), based on recent grants and track records. It is kind of funny that you’re the one reviewing this, as we were already planning to contribute to your specific regranting budget for this first cycle. That said, nothing is set in stone. We're very open to exploring alternatives as we gather more evidence; for example, if we observe that Manifund’s system introduces weirdness that puts off donors. If you see any reasons we might be missing or any downsides we should consider, please let me know. Albert Casals, who has been thinking about this with me most closely, may also jump in with more details.

Seems reasonable, thanks! I feel generally more aligned with Coefficient/OP's judgement than Good Ventures', so seems fine by me

Thanks a lot for this post! I was not planning on going to EAGx India for unrelated reasons, but if I had been, this would probably have been enough to convince me to cancel

Are you able to share why you've been endorsed by OP/Coefficient, to the point that they are recommending you to other donors, but they haven't been able to fill your funding gap via Good Ventures?

My understanding is that Coefficient remains excited about recommending us to donors, but recently confirmed with Good Ventures that we're not a good fit for Good Ventures' specific preferences at the moment. 

I'm afraid that I can't speak to Good Ventures' reasons apart from noting that they evidently didn't change Coefficient's decision to recommend us to their other donors.

I was surprised to see that you are US tax-deductible (via Every) but not UK tax-deductible, given that you are a UK-based charity. I assume this is downstream of different levels of non-profit bureaucracy in the different countries? I would recommend explicitly flagging this early in the post, as this is a deal-breaking factor for many medium-sized donors, and if this had been a constraint for me, I would have filtered exactly incorrectly

I'm happy to confirm that we can now accept Gift Aid (tax-deductible) donations from the UK, via this page on Giving What We Can's donation platform.

6
NickLaing
It's not that surprising to me. Getting tax-free donation status in the US is far easier than in most other countries. OneDay Health Charity is registered primarily in New Zealand, with governance based there, and in Uganda as an international NGO, but it's only in the US that people can donate tax-deductibly, through our 501c3 there...

EDIT 1st Dec 2025: I'm happy to confirm that we can now accept Gift Aid (tax-deductible) donations from the UK, via this page on Giving What We Can's donation platform.

--

Great nudge, thanks Neel! I've updated the post and webpages to make this clearer now.

Extra notes:

  • It's likely that we'll have an online platform for allowing UK donors to fund us tax-deductibly / with Gift Aid later this year or in early 2026. If anyone would like to be notified if/when this becomes the case, please fill in this form [link removed].
  • If anyone would like to make a substan
... (read more)
5
Aleks_K
Forethought are not a UK charity; they are a UK-based non-profit company (according to the footer of their website). But I agree that flagging that donations from the UK are not Gift Aid eligible/tax deductible (and that US donations are) would be good, as it might be surprising to many people.

Are you happy to receive donations from AGI company employees?

Hi! We're keeping an eye on how big a portion of our funding comes from AGI company employees, but yes, we're very happy to receive such donations at the current margin, thanks.

(Responding in my capacity as Director of Ops at Forethought.)

Thanks for writing this! I strongly agree re work trials, unstructured interviews, informal references, and info being useful despite conflicts of interest

Your poll seems to exclude non-US-citizen US residents, who are the most interesting category imo

2
Benevolent_Rain
Thanks Neel, I totally agree. I hope me updating the relevant answers to "U.S. citizens or green card/work permit holders" is not too hard to understand.

I would be pretty shocked if paid ads reach equivalently good people, given that these are not people who have chosen to watch the video, and may have very little interest

3
Austin
Oh definitely! I agree that by default, paid ads reach lower-quality & less-engaged audiences, and the question would be how much to adjust that by. (though paid ads might work better for a goal of reaching new people, of increasing total # of people who have heard of core AI safety ideas)

I struggle to imagine Qf 0.9 being reasonable for anything on TikTok. My understanding of TikTok is that most viewers will be idly scrolling through their feed, watch your thing for a bit as part of this endless stream, then continue, and even if they decide to stop for a while and get interested, they still would take long enough to switch out of the endless scrolling mode to not properly engage with large chunks of the video. Is that a correct model, or do you think that eg most of your viewer minutes come from people who stop and engage properly?

5
Michaël Trazzi
Update: after looking at Marcus' weights, I ended up dividing all the intermediary values of Qf I had by 2, so that it matches with Marcus' weights where Cognitive Revolution = 0.5. Dividing by 2 caps the best TikTok-minute at the average Cognitive Revolution minute. Neel was correct to claim that 0.9 was way too high.

===

My model is that most of the viewer minutes come from people who watch the whole thing, and some decent fraction end up following, which means they'll end up engaging more with AI-Safety-related content in the future as I post more.

Looking at my most viewed TikTok: TikTok says 15.5% of viewers (aka 0.155 * 1400000 = 217000) watched the entire thing, and most people who watch the first half end up watching until the end (retention is 18% at the halfway point, and 10% at the end). And then assuming the 11k who followed came from those 217000 who watched the whole thing, we can say that 11000/217000 = 5% of the people who finished the video ended up deciding to see more stuff like that in the future.

So yes, I'd say that if a significant fraction (15.5%) watch the full thing, and 0.155*0.05 = 0.7% of the total end up following, I think that's "engaging properly".

And most importantly, most of the viewer-minutes on TikTok do come from these long videos that are 1-4 minutes long (especially ones that are > 2 minutes long):
  • The short / low-fidelity takes that are 10-20s long don't get picked up by the new TikTok algorithm and don't get many views, so they didn't end up in that "TikTok Qa & Qf" sheet of top 10 videos (and for the ones that did, they didn't really contribute to the total minutes, so to the final Qf).
  • To show that the Eric Schmidt example above is not cherry-picked, here is a Google Doc with similar screenshots of stats for the top 10 videos that I use to compute Qf. Of these 10 videos, 6 are more than 1 minute long, and 4 are more than 2 minutes long. The precise distribution is:
    • 0m-1m: 4 videos
    • 1m-2m: 2 videos
    • 2m-3m: 2 vi
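A minimal sketch of the engagement arithmetic quoted above, with the figures hard-coded exactly as stated in the comment (the variable names are illustrative, not fields from TikTok's analytics):

```python
# Sketch of the engagement arithmetic in the comment above.
# All inputs are the figures quoted there, not data pulled from TikTok.
total_views = 1_400_000      # views on the most-viewed TikTok
full_watch_rate = 0.155      # 15.5% reported to watch the entire video
followers_gained = 11_000    # follows attributed to this video

full_watches = total_views * full_watch_rate                  # ≈ 217,000
follow_rate_of_finishers = followers_gained / full_watches    # ≈ 5%
follow_rate_of_all_viewers = followers_gained / total_views   # ≈ 0.8% (quoted as ~0.7% above)

print(f"{full_watches:,.0f} full watches")
print(f"{follow_rate_of_finishers:.1%} of finishers follow")
print(f"{follow_rate_of_all_viewers:.1%} of all viewers follow")
```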

Innocent until proven guilty is a fine principle for the legal system, but I do not think it is obviously reasonable to apply it to evaluating content made by strangers on the internet. It is not robust to people quickly and cheaply generating new identities and new, questionably true content. Further, the whole point of the principle is that it's really bad to unjustly convict people, along with other factors like wanting to be robust to governments persecuting civilians. Incorrectly dismissing a decent post is really not that bad.

Feel free to call discri... (read more)

3
Midtermist12
I've made a pretty clear distinction here that you seem to be eliding:
1. Identifying AI content and deciding on that basis it's not worth your time
2. Identifying AI content and judging that content differently simply because it is AI generated (where that judgment has consequences)
The first is a reasonable way to protect your time based on a reliable proxy for quality. The second is unfair and a poisoning of the epistemic commons.

I empathise but strongly disagree. AI has lowered the costs of making superficially plausible but bad content. The internet is full of things that are not worth reading and people need to prioritise.

Human writing has various cues, which people are practiced at identifying, that indicate bad writing, and this can often be detected quickly, e.g. local incoherence, bad spelling, bad flow, etc. These are obviously not perfect heuristics, but they convey real signal. AI has made it much easier to avoid all these basic heuristics, without making it much e... (read more)

1
Midtermist12
See my response to titotal. Identifying signs of AI and using this as a reason not to spend further time assessing is rational for the reasons you and titotal state. But such identification should not affect one's evaluations of content (allocating karma, upvoting, or more extremely, taking moderation actions) except insofar as it otherwise actually lowers the quality of the content. If AI as a source affects your evaluation process (in assessing the content itself, not in deciding whether to spend time on it), this is essentially pure prejudice. It's similar to the difference between cops incorporating crime statistics in choosing whether to investigate a young black male for homicide and a judge deciding to lower the standard of proof on that basis. Prejudice in the ultimate evaluation process is simply unjust and erodes the epistemic commons.

Your points seem pretty fair to me. In particular, I agree that putting your videos at 0.2 seems pretty unreasonable and out of line with the other channels - I would have guessed that you're sufficiently niche that a lot of your viewers are already interested in AI Safety! TikTok I expect is pretty awful, so 0.1 might be reasonable there

2
Marcus Abramovitch 🔸
I answered Michael directly on the parent. Hopefully, that gives some colour.
1
Michaël Trazzi
This comment is answering "TikTok I expect is pretty awful, so 0.1 might be reasonable there". For my previous estimate of the quality of my YouTube long-form stuff, see this comment.

tl;dr: I now estimate the quality of my TikTok content to be Q = 0.75 * 0.45 * 3 = 1

The Inside View (TikTok) - Alignment = 0.75 & Fidelity = 0.45

To estimate fidelity of message (Qf) and alignment of message (Qm) in a systematic way, I compiled my top 10 best-performing TikToks and ranked their individual Qf and Qm (see the tab called "TikTok Qa & Qf" here, which contains the reasoning for each individual number).

Update Sep 14: I've realized that my numbers for fidelity used 1 as the maximum, but now that I've looked at Marcus' weights for other stuff, I think I should use 0.5, because that's the number he gives to a podcast like Cognitive Revolution, and I don't want to claim that a long TikTok clip is higher-fidelity than the average Cognitive Revolution podcast. So I divided everything by 2, so my maximum fidelity is now 0.5 to match Marcus' other weights.

Then, by doing a minute-adjusted weighted average of the Qas and Qfs I get:
1. Qf(The Inside View TikTok) = 0.45
2. Qm(The Inside View TikTok) = 0.75

What this means:
1. Since I'm editing clips, the message is already high-fidelity (it comes from the source, most of the time). The question is whether people will get a high-fidelity long explanation, or something short but potentially compressed. When weighting things by minute we end up with 0.9, meaning that most of the watchtime-minutes come from the high-fidelity content.
2. I am not always fully aligned with the clips that I post, but I am mostly aligned with them.

The Inside View (TikTok) - Quality of Audience = 3

I believe the original reasoning for Qa = 2 is that people watching short-form content would by default be young and / or have short attention spans, and therefore be less of a high-quality audience. However, most of my high-performing TikTok clips (that repres
7
Michaël Trazzi
Agreed that the quality of audience is definitely higher for my (niche) AI Safety content on YouTube, and I'd expect Q to be higher for (longform) YouTube than TikTok. In particular, I estimate Q(The Inside View Youtube) = 2.7, instead of 0.2, with (Qa, Qf, Qm) = (6, 0.45, 1), though I acknowledge that Qm is (by definition) the most subjective.

To make this easier to read & reply to, I'll post my analysis for Q(The Inside View Tiktok) in another comment, which I'll link to when it's up. EDIT: link for TikTok analysis here.

The Inside View (Youtube) - Qa = 6

In light of @Drew Spartz's comment (saying one way to quantify the quality of audience would be to look at the CPM [1]), I've compiled my YouTube CPM data, and my average playback-based CPM is $14.8, which according to this website [2] would put my CPM above the 97.5th percentile in the UK, and close to the 97.5th percentile in the US.

Now, this is more anecdotal evidence than data-based, but I've met quite a few people over the years (from programs like MATS, or working at AI Safety orgs) who've told me they discovered AI Safety from my Inside View podcast. And I expect the SB-1047 documentary to have attracted a niche audience interested in AI regulation.

Given the above, I think it would make sense to have Qa(Youtube) be between 6 (same as other technical podcasts) and 12 (Robert Miles). For the sake of giving a concrete number, I'll say 6 to be on par with other podcasts like FLI and CR.

The Inside View (Youtube) - Qf = 0.45

In the paragraph below I'll write Qf_M for the Qf that Marcus assigns to other creators. For the fidelity of message, I think it's a bit of a mixed bag here. As I said previously, I expect the podcasts that Nathan would be willing to crosspost to be on par with his channel's quality, so in that sense I'd say the fidelity of message for these technical episodes (Owain Evans, Evan Hubinger) is on par with CR (Qf_M = 0.5). Some of my non-technical interviews are probably closer to
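The per-channel scores in this thread appear to combine multiplicatively, i.e. Q = Qa × Qf × Qm; a minimal sketch under that assumption, using only the figures quoted in the two comments above (the function and variable names are illustrative):

```python
# Sketch of how the per-channel quality scores above seem to combine:
# Q = Qa (quality of audience) * Qf (fidelity of message) * Qm (alignment of message).
# The multiplicative form is inferred from the quoted numbers, not a stated formula.

def channel_quality(qa: float, qf: float, qm: float) -> float:
    """Overall per-minute quality weight for a channel."""
    return qa * qf * qm

channels = {
    "The Inside View (TikTok)":  (3.0, 0.45, 0.75),  # figures from the TikTok comment
    "The Inside View (YouTube)": (6.0, 0.45, 1.0),   # figures from the YouTube comment
}

for name, (qa, qf, qm) in channels.items():
    print(f"{name}: Q = {channel_quality(qa, qf, qm):.2f}")
# -> TikTok ≈ 1.01 (rounded to 1 in the comment above), YouTube = 2.70
```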

Agreed with the other comments for why this is doomed. The thing closest to this that I think might make sense is something like, "conditioned on the following assumptions/worldview, we estimate that an extra million dollars to this intervention can have the following effect". I think that anything that doesn't acknowledge the fact that there are enormous fundamental cruxes here is pretty doomed, but that there might be something productive about clustering the space of worldviews and talking about what makes sense by the lights of each

Seems better with the edit; it didn't flag as self-promotional at all to me, since it was a natural and appropriate thing to include

My null hypothesis is that any research field is not particularly useful until proven otherwise. I am certainly not claiming that all economics research is high quality, but I've seen some examples that seemed pretty legit to me. For example, RCTs on direct cash transfers seem pretty useful and relevant to EA goals. And I think tools like RCTs are a pretty powerful way to find true insights into complex questions.

I largely haven't come across insights from other social sciences that seem useful for EA interests. I haven't investigated this much, and I woul... (read more)

2
Bob Jacobs
Thanks for the comment, that seems like a strange null hypothesis to me but alright. My earlier aversion to commenting on the EA forum has borne out again, so I'm going to stop commenting now.

This post is too meta, in my opinion. The key reason EA discusses economics a lot more is that, if you want to have true beliefs about how to improve the world, economics can provide a bunch more useful insights than other parts of the social sciences. If you want to critique this, you need to engage with the actual object-level claims of how useful the fields are, how good their scientific standards are, and how much value there actually is. And I didn't feel like your post spent much time arguing for this

5
Bob Jacobs
Source? EDIT: I'm getting downvoted for asking for a source on a controversial claim? Why? Why does the heterodox EA have to cite dozens of academic sources and still get more downvotes than someone just asserting an academically controversial (but orthodox within EA) claim without a citation or justification? Why does asking for one generate downvotes?

Overrelying on simple economic models might mislead us about which policies will actually help people, while a more holistic look at the social sciences as a whole may counter that.

The papers you cite about how the minimum wage doesn't lead to a negative impact on jobs all seem like economics papers to me. What are the social science papers you have in mind that provide useful evidence that the minimum wage doesn't harm employment?

2
Bob Jacobs
I've already responded to Larks on this

Your examples seem disanalogous to me. The key thing here is the claim that people have a lifelong obligation to their parents. Some kind of transactional "you received a bunch of upfront benefits and now have a lifelong debt", and worse, often a debt that's considered impossible to discharge

This is very different from an instantaneous obligation that applies to them at a specific time, or a universal moral obligation to not do harm to an entity regardless of your relationship with them, or an ongoing obligation that is contingent on having a certain statu... (read more)

4
Larks
Good comment - I agree this is a meaningful distinction, though I don't think it cuts as strongly as you do.

Firstly, I'm not sure where you are getting 'impossible to discharge' from. If you borrow $100, you would typically discharge that obligation by repaying $100 (plus interest). Similarly, if you believed in natalist obligations to parents, it seems logical that an obligation created by your parents investing, say, 19 years in raising you could be discharged through a similar amount of investment.

Secondly, many of the obligations I mentioned cannot easily be avoided either. Moving to another country might get you out of paying taxes in one place, but you'll probably have to pay them in the new place - and some countries like the US will continue to tax you even if you leave! Similarly, national service is often based on citizenship, not residency, and obligations like decency and pond intervention cannot be discharged (though I guess you could choose to live in a location with few ponds and very buoyant children). It's even the case that many people seem to view leaving, and thereby escaping from location-based obligations, as immoral - see for example brain drain criticism, or criticism of fighting-age men for fleeing their country rather than defending it.

I don't mean to take a strong stance here defending any particular one of these obligations. My point is just that a lot of people do believe in them.

Interesting. Does anyone do group brainstorming because they actually expect it to make meaningful progress towards solving a problem? At least when you're at a large event with people who are not high context on the problem, that seems pretty doomed. I assumed the main reason for doing something like that is to get people engaged and actually thinking about ideas and participating in a way that you can't in a very large group. If any good ideas happen, that's a fun bonus

If I wanted to actually generate good ideas, I would do a meeting of people ... (read more)

6
OllieBase
I don't know what motivations people usually have, but I also feel skeptical of this vague "activation" theory of change. If session leads don't know what actions they want session participants to take, I'm not optimistic about attendees generating useful actions themselves by discussing the topic for 10 minutes in a casual no-stakes, no-rigour, no-guidance setting. I'm more optimistic if the ask is "open a doc and write things that you could do". Yep, the thing you've described here sounds promising for the reasons Alex covered :) I realise I was thinking of the conference setting in my critique here (and probably should've made that explicit), but I'm much more optimistic about brainstorming in small groups of people with shared context, shared goals and using something like the format you've described.

[Epistemic status: argument from authority*]

I think your suggested format is a significant upgrade on the (much more common, unfortunately) "group brainstorm" setup that Ollie is criticising, for roughly the reasons he outlines: it does much better on "fidelity per person-minute".
 

Individual brainstorming is obviously great for this, for the reasons you said (among others).

Commenting on a doc (rather than discussing in groups of 6-8) again allows many more people to be engaging in a high-quality/active way simultaneously.

It also seems worth saying th... (read more)

When you say you doubt that claim holds generally, is that because you think that the weight of AI isn't actually that high, or because you think that AI may make the other thing substantially more important too?

I'm generally pretty sceptical about the latter - something which looked like a great idea not accounting for AI will generally not look substantially better after accounting for AI. By default I would assume that's false unless given strong arguments to the contrary.

6
Hayley Clatterbuck
I think there are probably cases of each. For the former, there might be some large interventions in things like factory farming or climate change (i) that could have huge impacts and (ii) for which we don't think AI will be particularly efficacious or impactful.  For the latter, here are some cases off the top of my head. Suppose we think that if AI is used to make factory farming more efficient and pernicious, it will be via X (idk, some kind of precision farming technology). Efforts to make X illegal look a lot better after accounting for AI. Or, right now, making it harder for people to buy ingredients for biological weapons might be good bets but not great bets. It reduces the chances of bio weapons somewhat, but knowledge about how to create weapons is the main bottleneck. If AI removes that bottleneck, then those projects look a lot better. 

I agree with the broad critique that "Even if you buy the empirical claims of short-ish AI timelines and a major upcoming transition, even if we solve technical alignment, there is a way more diverse set of important work to be done than just technical safety and AI governance"

But I'm concerned that reasoning like this can easily implicitly lead to people justifying incremental adaptations to what they were already doing, and answering the question of "is what I'm doing useless in the light of AI?" rather than the question that actually matters of, given my v... (read more)

By calling out one kind of mistake, we don't want to incline people toward making the opposite mistake. We are calling for more careful evaluations of projects, both within AI and outside of AI. But we acknowledge the risk of focusing on just one kind of mistake (and focusing on an extreme version of it, to boot). We didn't pursue comprehensive analyses of which cause areas will remain important conditional on short timelines (and the analysis we did give was pretty speculative), but that would be a good future project. Very near future, of course, if short-ish timelines are correct!

I'd argue that you also need some assumptions around is-ought, whether to be a consequentialist or not, what else (if anything) you value and how this trades off against suffering, etc. And you also need to decide on some boundaries for which entities are capable of suffering in a meaningful way, which there's widespread disagreement on (in a way that imo goes beyond being empirical)

It's enough to get you something like "if suffering can be averted costlessly then this is a good thing" but that's pretty rarely practically relevant. Everything has a cost

I agree that you need ridiculously fundamental assumptions like "I am not a Boltzmann brain that ephemerally emerged from the aether and is about to vanish" and "we are not in a simulation". But if you have that kind of thing, I think you can reasonably discuss objective reality

2
Ben_West🔸
I think if you grant something like "suffering is bad" you get (some form of) ethics, and this seems like a pretty minimal assumption. (Though I agree you can have an internally consistent view that suffering is good, just as you can have an internally consistent view that you are a Boltzmann brain.)

Less controversial is a very long way from objective - why do you think that "caring about the flourishing of society" is objectively ethical?

Re the idea of an attractor, idk, history has sure had a lot of popular beliefs I find abhorrent. How do we know there even is convergence at all rather than cycles? And why does being convergent imply objective? If you told me that the supermajority of civilization concluded that torturing criminals was morally good, that would not make me think it was ethical.

My overall take is that objective is just an incredibly st... (read more)

Idk, I would just downvote posts with unproductively bad titles, and not downvote posts with strong but justified titles. Further, posts that seem superficially justified but actually don't justify the title properly are also things I dislike and downvote. I don't think we need a slippery slope argument here when the naive strategy works fine

What do you say to someone who doesn't share your goals? E.g. someone who thinks that happiness is only justified if it's earned, that most people do not deserve it as they do "bad thing X", and who is therefore against promoting happiness for them

2
Richard Y Chappell🔸
Generally parallel things to what I'd say to someone with different fundamental epistemic standards, like:
  • I could be wrong about what's justified. (Certainly my endorsing a standard doesn't suffice to make it justified - and likewise for them. We're not infallible!)
  • Check whether their answer seems objectionably ad hoc in some way, fails to treat like cases alike, is in tension with other claims they accept, or rests on dubious presuppositions ("why think X is so bad?"), etc.
  • If we get to bedrock, neither of us will be able to persuade the other to change their mind. Still, we may each think that (at least) one of us must be mistaken about what's genuinely justified.
  • Plus, we may at least identify some areas of overlap (e.g. it sure would suck if a clearly innocent individual were to suffer...)
Neel Nanda
100% disagree

Morality is Objective

What would this even mean? If I assert that X is wrong, and someone else asserts that it's fine, how do we resolve this? We can appeal to common values to derive this conclusion, but that's pretty arbitrary and largely just feels like my opinion. Claiming that morality is objective just feels groundless.

6
Ben_West🔸
Are there non-moral disagreements which can be resolved without appeal to common assumptions?
4
Owen Cotton-Barratt
Locally, I think that often there will be some cluster of less controversial common values like "caring about the flourishing of society" which can be used to derive something like locally-objective conclusions about moral questions (like whether X is wrong). Globally, an operationalization of morality being objective might be something like "among civilizations of evolved beings in the multiverse, there's a decently big attractor state of moral norms that a lot of the civilizations eventually converge on".

Yep, this seems extremely reasonable - I am in practice far more annoyed if a piece makes attacks and does not deliver

I agree in general, but think that titotal's specific use was fine. In my opinion, the main goal of that post was not to engage the AI 2027 authors, which had already been done extensively in private, but rather to communicate their views to the broader community. Titles in particular are extremely limited, many people only read the title, titles are a key way people decide whether to read on, and efficiency of communication is extremely important. The point they were trying to convey was that these models that are treated as high-status and prestigious should not be a... (read more)

9
Patrick Hoang
Even if the goal is communication, it could be the case that normalizing strong attractive titles could lead to more clickbait-y EA content. For example, we could get: "10 Reasons Why [INSERT_PERSON] Wants to Destroy EA." Of course, we still need some prioritization system to determine which posts are worth reading (typically via number of upvotes).
Buck

I agree with you but I think that part of the deal here should be that if you make a strong value judgement in your title, you get more social punishment if you fail to convince readers. E.g. if that post is unpersuasive, I think it's reasonable to strong downvote it, but if it had a gentler title, I'd think you should be more forgiving.

I think A>B, e.g. I often find people who don't know each other in London who it is valuable to introduce. People are not as on the ball as you think; the market is very far from efficient

Though many of the useful intros I make are very international, and I would guess that it's most useful to have a broad network across the world. So maybe C is best, though I expect that regular conference and business trips are enough

2
Constance Li
"People are not as on the ball as you think, the market is very far from efficient" Couldn't agree more!

I think this is reasonable as a way for the community to reflexively react to things, to be honest. The question I'm trying to answer when I see someone making a post with an argument that seems worth engaging with is: what's the probability that I'll learn something new or change my mind as a result of engaging with this?

When there's a foundational assumption disagreement, it's quite difficult to have productive conversations. The conversation kind of needs to be about the disagreement about that assumption, which is a fairly specific kind of discussion. ... (read more)

3
Joseph_Chu
I want to clarify that I don't think ideas like the Orthogonality Thesis or Instrumental Convergence are wrong. They're strong predictive hypotheses that follow logically from very reasonable assumptions, and even the possibility that they could be correct is more than enough justification for AI safety work to be critical. I was more just pointing out some examples of ideas that are very strongly held by the community, that happen to have been named and popularized by people like Bostrom and Yudkowsky, both of whom might be considered elites among us. P.S. I'm always a bit surprised that the Neel Nanda of Google DeepMind has the time and desire to post so much on the EA Forums (and also Less Wrong). That probably says very good things about us, and also gives me some more hope that the folks at Google are actually serious about alignment. I really like your work, so it's an honour to be able to engage with you here (hope I'm not fanboying too much).

Thanks for writing this! I'm broadly sympathetic to Thom's critique, but thought this was impressively well written and good at engagingly/non-annoyingly conveying a different perspective, so kudos. I would love to see more posts in that genre.

Have people recognize you right away. You don't need to tell your name to everyone

This is a VERY huge use case for me. It's so useful!

If someone is in this situation they can just take off their name tag. Security sometimes ask to see it, but you can just take it out of a pocket to show them and put it back

Fair enough, I guess my take from all this is that you mainly just want the All Grants Fund to have a different philosophy than the one GiveWell is following in practice? Or do you also think they're making a mistake by their own lights?

I just originally thought that the All Grants Fund has stuff with a decent evidence base, but less certainty than the Top Charities. So still more certainty than most other funders in the world.

Nearly all of the charities there would fit that description, so I think they were following that practice. So yes, I thought they were making a mistake somewhat by their own lights, or maybe taking the fund in a bit of a different direction.

Or maybe I was just wrong about what they were trying to do.

Agreed. GiveWell also takes outside donors and OpenPhil doesn't. I've donated to the All Grants Fund because I wanted to help with risk-tolerant and fast giving after the aid cuts, and am glad the opportunity exists

6
Lorenzo Buonanno🔸
  I don't think that's true anymore: https://www.openphilanthropy.org/partner-with-us/ but I imagine OpenPhil only takes donors above a certain size (here they say >$1M/year) while GiveWell takes donations of all sizes

When I read that description I infer "make the best decision we can under uncertainty", not "only make decisions with a decent standard of evidence or to gather more evidence". It's a reasonable position to think that the TSUs grant is a bad idea, or that it would be unreasonable to expect it to be a good idea without further evidence, but I feel like GiveWell are pretty clear that they're fine with making high-risk grants, and in this case they seem to think these TSUs will be high expected value

4
NickLaing
Yeah, based on the evidence of what GiveWell have actually given most grants to in the past, I would have gone with this as what I think GiveWell meant and what I would personally like the most: "only make decisions with a decent standard of evidence or to gather more evidence".

I think it makes sense to have separation, and have Open Phil doing higher-risk bets under your heuristic of "make the best decision we can under uncertainty". Why have 2 different bodies doing the same thing with largely the same pool of money?

But yes, you might be right that at least now maybe both GiveWell and Open Phil are meaning and doing that.

What’s unique about these grants?: These grants are a good illustration of how GiveWell is applying increased flexibility, speed, and risk tolerance to respond to urgent needs caused by recent cuts to US foreign assistance. Funded by our All Grants Fund, the grants also demonstrate how GiveWell has broadened its research scope beyond its Top Charities while maintaining its disciplined approach—comparing each new opportunity to established interventions, like malaria prevention or vitamin A supplementation, as part of its grantmaking decisions.

The grants... (read more)

Related to this point, I was surprised to see this

 

Given that GiveWell's All Grants Fund has basically the same graph

Graph comparing impacts of Top Charities Fund and All Grants Fund

 

Many other grants from the All Grants Fund don't have a ton of evidence behind them and are exploratory. As an example, they funded part of an RCT on building trailbridges in Rwanda, with reasoning "While our best guess is that bridges are below the range of cost-effectiveness of programs we would recommend funding, we think there's a reasonable chance the findings of the RCT update us toward believing this program is above our ba... (read more)

Your maths needs a term for the conversion between money and labour for the company. I think your current equation assumes 1:1, which seems patently false

2
Vasco Grilo🔸
Thanks, Neel. I am assuming the marginal cost-effectiveness of spending on capital and labour is the same. Organisations should move money from the least to the most cost-effective activities until the marginal cost-effectiveness of all activities is equalised. I understand organisations do not manage their resources perfectly. However, for one to argue against my assumption, one would need specific arguments about why, for example, AI safety organisations are under or overspending on compute, or are under or overpaying their employees.

I think there's some speaking past each other due to differing word choices. Holly is prominent, evidenced by the fact that we are currently discussing her. She has been part of the EA community for a long time and appears to be trying to do the most good according to her own principles. So it's reasonable to call her a member of the EA community. And therefore "prominent member" is accurate in some sense.

However, "prominent member" can also imply that she represents the movement, is endorsed by it, or that her actions should influence what EA as a whole i... (read more)

Some takes:

  • I think Holly's tweet was pretty unreasonable, and judge her for that, not you. But I also disagree with a lot of other things she says and do not at all consider her to speak for the movement
  • To the best of my ability to tell (both from your comments and private conversations with others), you and the other Mechanize founders are not getting undue benefit from Epoch funders apart from less tangible things like skills, reputation, etc. I totally agree with your comment below that this does not seem a betrayal of their trust. To me, it seems more
... (read more)
6
Guive
Can you be a bit more specific about what it means for the EA community to deny Matthew (and Mechanize) implicit support, and which ways of doing this you would find reasonable vs. unreasonable? 

I was going to write a comment responding but Neel basically did it for me. 

The only thing I would object to is Holly being a "prominent member of the EA community". The PauseAI/StopAI people are often treated as fringe in the EA community, and she frequently violates norms of discourse. EAs, due to their norms of discourse, usually just don't respond to her in the way she responds to others.

That's not my understanding of what happened with CAIP; there are various funders who are very happy to disagree with OpenPhil who I know have considered giving to CAIP and decided against. My understanding is that it's based on actual reasons, not just an information cascade from OpenPhil

No idea about Apart though

Strong agree - cause neutrality should not at all imply an even spread of investment. I just in fact do think AI is the most pressing cause according to my values and empirical beliefs

2
Stefan_Schubert
Yes, I agree. The OP seems to talk about cause-agnosticism (uncertainty about which cause is most pressing) or cause-divergence (focusing on many causes).

Really glad to hear it! (And that writing several thousand words of very in depth examples was useful!) I'd love to hear if it proves to be useful longer term

Agreed with all of the above. I'll also add that a bunch of orgs do work that is basically useless, and it should not be assumed that just because an org seems "part of the community" that working there will be an effective way to do good - public callouts are costly, and community dynamics and knowledge can be hard to judge from the outside.

I wonder whether CEA or someone could fruitfully run (and share the results of) an anonymous survey of some suitably knowledgeable and diverse group of EA insiders, regarding their confidence in various "EA adjacent" orgs?

Thank you for the post. I was a bit surprised by the bulletin board one. What goes wrong with just leaving the forum positioned exactly as it is now, but saying you're not going to do any maintenance or moderation, and without trying to reposition it as a bulletin board? At the very least, I expect the momentum could keep it going for a while. Is the issue that you think you do a lot of active moderation work that sustains healthy discussion norms, which matters a lot for the current forum but would matter less for a bulletin board?

4
Sarah Cheng 🔸
Yup this seems right to me, but I would expect that usage would naturally go down over time. You can see this happening in the chart from my January post, for example. I think that online spaces naturally move toward being "a place [for orgs] to promote things" once they have an established audience. For example, I feel like most Slack workspaces turn into this. Most subreddits have rules against promotion, probably for this reason. Without a Forum Team that pays attention to the distribution of content being posted, and actively works to get more good content and retain strong contributors, my guess is that the site will gradually increase in promotions and decrease in discussions, and that this is a feedback loop that will cause strong contributors to continue to leave as the site feels less and less like a place to have interesting discussions. Though of course I don't know for sure what would happen, this is just my guess. :) I really like and resonate with Lizka's thoughts on this as well. For example, this bit pulled out of her doc: