All of Neel Nanda's Comments + Replies

4
Pablo
12d
I have taken the liberty of reinstating the images and removing the notice. @Mark Xu, I assume you are okay with this?

The Wenar criticism in particular seems laughably bad, such that I find bad faith hypotheses like this fairly convincing. I do agree it's a seductive line of reasoning to follow in general though, and that this can be dangerous

I got the OpenPhil grant only after the other grant went through (and wasn't thinking much about OpenPhil when I applied for the other grant). I never thought to inform the other grant maker after I got the OpenPhil grant, which maybe I should have in hindsight out of courtesy?

This was covering some salary for a fixed period of research, partially retroactive, after an FTX grant fell through. So I guess I didn't have use for more than X, in some sense (I'm always happy to be paid a higher salary! But I wouldn't have worked for a longer period of time, so I would have felt a bit weird about the situation)

9
Linda Linsefors
20d
Given the order of things, and the fact that you did not have use for more money, this seems indeed reasonable. Thanks for the clarification.

Without any context on this situation, I can totally imagine worlds where this is reasonable behaviour, though perhaps poorly communicated, especially if SFF didn't know they had OpenPhil funding. I personally had a grant from OpenPhil approved for X, but in the meantime had another grantmaker give me a smaller grant for y < X, and OpenPhil agreed to instead fund me for X - y, which I thought was extremely reasonable.

In theory, you can imagine OpenPhil wanting to fund their "fair share" of a project, evenly split across all other interested grantmakers.... (read more)

8
Linda Linsefors
21d
Thanks for sharing. What did the other grantmaker (the one who gave you y) think of this? Were they aware of your OpenPhil grant when they offered you funding? Did OpenPhil roll back your grant because you did not have use for more than X, or for some other reason?

Omg what, this is amazing (though nested bullets not working does seem to make this notably less useful). Does it work for images?

Ok nested bullets should be working now :)

3
Will Howard
1mo
Yep, images work, and I agree that nested bullet points are the biggest remaining issue. I'm planning to fix that in the next week or two. Edit: Actually, I just noticed the cropping issue: images that are cropped in Google Docs get uncropped when imported. That's pretty annoying. There is no way to carry over the cropping, but we could flag these to make sure you don't accidentally submit a post with the uncropped images.

Thanks!

Follow up questions to anyone who may know:

Is METR (formerly ARC Evals) meant to be the "independent, external organization" that is allowed to evaluate the capabilities and safety of Anthropic's models? As of 2023-12-04 METR was spinning off from the Alignment Research Center (ARC) into their own standalone nonprofit 501(c)(3) organization, according to their website. Who is on METR's board of directors?

Note: OpenPhil seemingly recommended a total of $1,515,000 to ARC in 2022. Holden Karnofsky (co-founder and co-CEO of OpenPhil at the time, and cu... (read more)

I liked this, and am happy for this to have been a post. Maybe putting [short poem] in the title could help calibrate people on what to expect?

I'd be curious to hear your or Emma's case for why it's notably higher impact for a forum reader to donate via the campaign rather than to New Incentives directly (if they're inclined to make the donation at all)

7
NickLaing
2mo
Nice question. I think there can be a higher impact for a few reasons. First, I assume that if they donate directly it will be silent and have no multiplicative effect.
1. When people see donations flowing in on social media, it can encourage others to donate.
2. This is the kind of thing that's easy to help friends donate to: "Hey, my friend is doing this massive run to help kids get vaccinated in Nigeria, do you think you could chip in a few dollars?"
3. This might help raise awareness generally of NGOs like New Incentives, which I'm fairly sure almost none of your non-EA friends have heard of.
Of course, none of these might happen, but I think it's likely to tilt towards higher impact ;).

To me this post ignores the elephant in the room: OpenPhil still has billions of dollars left and is trying to make funding decisions relative to where they think their last dollar is. I'd be pretty surprised if having the Wytham money liquid rather than illiquid (or even having £15mn out of nowhere!) really made a difference to that estimate.

It seems reasonable to argue that they're being too conservative, and should be funding the various things you mention in this post, but also plausible to me that they're acting correctly? More importantly, I think th... (read more)

I personally feel strongly about CEEALAR being better value, but that's just one of the many organisations listed - you can mentally delete it if your mileage varies. 

Also, for better or worse, the mansion now belongs to EVF, so it's now up to EVF to decide whether it's the most effective path for them to keep it. Does a status quo reversal test suggest that right now, if they had £15 million in cash, the best thing to spend it on would be a mansion near Oxford (let's assume that spending it on anything else would come with a couple of months' worth of admin work)?

I also work at Google, and a surprising number of people (including EAs) aren't aware of the substantial annual donation match! I only noticed by happenstance.

I didn't know there were useful tools online for this. I agree this seems like a great thing for EA orgs/charities to have on their websites if it's easy to do.

It still seems like a mistake to not point out to people that they can substantially increase their donation and thus lives saved, even if it doesn't count towards the pledge

8
Karthik Tadepalli
3mo
I agree. Let's take for granted GWWC's position that donation matching shouldn't count towards the 10% pledge (I don't think it matters, but even so). Then they could say on their website "Pledgers should check here (link) to see if their employer matches donations. While matched donations do not count towards the GWWC pledge, they offer an easy way to direct extra money towards saving lives." Then there is no confusion.

I think in hindsight the response (with the information I think the board had) was probably reasonable

Reasonable because you were all the same org, or reasonable even if EA Funds was its own org?

6
calebp
4mo
I think reasonable even if EA Funds was its own org.

Maybe it would have been cleaner if it wasn't about Ben, though I don't think a hypothetical person would have made the lesson as clear, and if Ben wasn't fair game for having written that article, I don't know who would be.

Thanks! This line in particular changed my mind about whether it was retributive; I genuinely can't think of anyone else it would be appropriate to do this for.

9
Linch
4mo
The obvious thing to do is to find a friend or other ally who's willing to consent to do this, rather than spring it on someone else out of the blue. Normally you could also volunteer yourself, but of course it's not exactly viable in this case. EDIT: I'm happy to volunteer myself for these 1-3 hypothetical experiments going forwards. But please warn me first! And I only want to run this experiment 1-3 times to start with.

They were shocked at his lack of concern for her suffering and confirmed that he would probably really hurt her career if she came forward with her information.

Re-reading that section, it was surprisingly consistent with that interpretation, but this line seems to make no sense if it's about Kat's experience - if the trauma is publishing the previous post then "probably really hurt her career if she came forward with her information" does not make sense because the trauma was a public event

I am also confused by this. I think it would be good for Kat to quickly clarify whether or not it was her. Since the section is for rhetorical effect, I don't think this should matter, and it seems like an easy misunderstanding to clear up.

I also think orgs generally should have donor diversity and more independence, so giving more funding to the orgs that OP funds is sometimes good.

I'd be curious to hear more about this - naively, if I'm funding an org, and then OpenPhil stops funding that org, that's a fairly strong signal to me that I should also stop funding it, knowing nothing more (since it implies OpenPhil put in enough effort to evaluate the org, and decided to deviate from the path of least resistance).

Agreed re funding things without a track record, that seems clearly good for s... (read more)

Yeah, that intermediate world sounds great to me! (though a lot of effort, alas)

Ah, gotcha. If I understand correctly you're arguing for more of a "wisdom of the crowds" analogy? Many donors is better than a few donors.

If so, I agree with that, but think the major disanalogy is that the big donors are professionals, with more time, experience, and context, while small donors are not - big donors are more like hedge funds, small donors are more like retail investors in the efficient market analogy

I disagree, because you can't short a charity, so there's no way for overhyped charity "prices" to go down

4
ElliotJDavies
5mo
My claim is that your intuitions are the opposite of what they would be if applied to the for-profit economy. Your response (if I understand correctly) is questioning the veracity of the analogy - which seems not to really get at the heart of the efficient market heuristic. I.e. you haven't claimed that bigger donors are more likely to be efficient, you've just claimed efficiency in charitable markets is generally unlikely? Besides this, shorting isn't the only way markets regulate (or deflate) prices. "Selling" is the more common pathway. In this context, "selling" would be medium donors changing their donation to a more neglected/effective charity. It could be argued this is more likely to happen under a dynamic donation "marketplace", with lots of medium donors, than in a less dynamic, fewer-but-bigger-donors donation "marketplace".

Thanks for writing this up! One problem with this proposal that I didn't see flagged (but may have missed) is that if the ETG donors defer to the megadonors you don't actually get a diversified donor base. I earn enough to be a mid-sized donor, but I would be somewhat hesitant about funding an org that I know OpenPhil has passed up on/decided to stop funding, unless I understood the reasons why and felt comfortable disagreeing with them. This is both because of fear of unilateralist curse/downside risks, and because I broadly expect them to have spent more... (read more)

5
Vasco Grilo
5mo
Thanks for pointing that out, Neel. It is also worth having in mind that GWWC's donations are concentrated in a few dozen donors: Given the donations per donor are so heavy-tailed, it is very hard to avoid organisations being mostly supported by a few big donors. In addition, GWWC recommends donating to funds for most people: I agree with this. Personally, I have engaged a significant time with EA-related matters, but continue to donate to the Long-Term Future Fund (LTFF) because I do not have a good grasp about which opportunities are best within AI safety, even though I have opinions about which cause areas are more pressing (I also rate animal welfare quite highly). I am more positive about people working on cause area A deciding which interventions are most effective within A (e.g. you donating to AI safety interventions). However, people earning to give may well not be familiar with any cause area, and it is unclear whether the opportunity cost to get quite familiar would be worth it, so I think it makes sense to defer. On the other hand, I believe it is important for donors to push funds to be more transparent about their evaluation process. One way to do this is donating to more transparent funds, but another is donating directly to organisations.
5
abrahamrowe
5mo
Yeah, I think there is an open question of whether or not this would cause a decline in the impact of what's funded, and this reason is one of the better cases why it would. I think one potential middle-ground solution to this is having like, 5x as many EA Fund type vehicles, with more grant makers representing more perspectives / approaches, etc., and those funds funded by a more diverse donor base, so that you still have high quality vetting of opportunities, but also grantmaking bodies who are responsive to the community, and some level of donor diversity possible for organizations.
8
ElliotJDavies
5mo
I disagree with this (except the unilateralist curse), because I suspect something like the efficient market hypothesis plays out when you have many medium-small donors. I think it's suspect that one wouldn't make the same argument as the above for the for-profit economy.

OP doesn't have the capacity to evaluate everything, so there are things they don't fund that are still quite good.

Also OP seems to prefer to evaluate things that have a track record, so taking bets on people to be able to get more of a track record to then apply to OP would be pretty helpful.

I also think orgs generally should have donor diversity and more independence, so giving more funding to the orgs that OP funds is sometimes good.

Thanks Neel, I get the issue in general, but I'm a bit confused about what exactly the crux really is here for you?

I would have thought you would be in one of the best positions of anyone to donate to an AI org - you are fully immersed in the field and I would have thought in a good position to fund things you think are promising on the margins, perhaps even new and exciting things that AI funds may miss?

Out of interest, why aren't you giving a decent chunk away at the moment? Feel free not to answer if you aren't comfortable with it!

I've found that if a funder or donor asks (and they are known in the community), most funders are happy to privately respond about whether they decided against funding someone, and often why, or at least that they think it is not a good idea and they are opposed rather than just not interested.

I upvoted this comment, since I think it's a correct critique of poor quality studies and adds important context, but I also wanted to flag that I also broadly think Athena is a worthwhile initiative and I'm glad it's happening! (In line with Lewis' argument below). I think it can create bad vibes for the highest voted comment on a post about promoting diversity to be critical

6
Angelina Li
5mo
+1, I appreciate you for upvoting the parent comment and then leaving this reply :) (Edit: for what it's worth, I am also excited Athena is happening)
24
[anonymous]
5mo

Usually, if someone proposes something and then cites loads of weak literature supporting it, criticism is warranted. I think it is a good norm for people promoting anything to make good arguments for it and provide good evidence. 

I'm confused by how this relates to Gemma's post?

4
Radical Empath Ismam
7mo
Just wanting to express my shared disappointment with how parts of this community embraced crypto/gambling etc., as Gemma points out in her post.

Thanks for the post! This seems a useful summary, and I didn't spot anything that contradicted existing information I have (I didn't check very hard, so this isn't strong data)

2
Vilhelm Skoglund
7mo
Thank you!

For what it's worth, I interpreted the original post as Elizabeth calling it a pseudo RCT, and separately saying that commenters cited it, without implying commenters called it a pseudo RCT

I understood ‘pseudo’ here to mean ‘semi’ not ‘fake’. So my interpretation of Elizabeth’s argument is ‘people point to this study as a sort-of-RCT but it really doesn’t resemble that’

+1 to Beth Barnes on dangerous capability evals

I really enjoyed this one! These feel like common EA mistakes, and I feel like I see them often

What kinds of grants tend to be most controversial among fund managers?

2
calebp
7mo
I am not sure these are the most controversial, but I have had several conversations when evaluating AIS grants where I disagreed substantively with other fund managers. I think there are some object-level disagreements (what kinds of research do we expect to be productive) as well as meta-level disagreements (like "what should the epistemic process look like that decides what types of research get funded" or "how do our actions change the incentives landscape within EA/rationality/AIS").
4
Habryka
7mo
Somewhat embarrassingly we've been overwhelmed enough with grant requests in the past few months that we haven't had much time to discuss grants, so there hasn't been much opportunity for things to be controversial among the fund managers. But guessing about what kinds of things I disagree most with other people on, my sense is that grants that are very PR-risky, and grants that are more oriented around a theory of change that involves people getting better at thinking and reasoning (e.g. "rationality development"), instead of directly being helpful with solving technical problems or acquiring resources that could be used by the broader longtermist community, tend to be the two most controversial categories. But again, I want to emphasize that I don't have a ton of data here, since the vast majority of grants are currently just evaluated by one fund manager and then sanity-checked by the fund chair, so there aren't a lot of contexts in which disagreements like this could surface.
2
Linch
7mo
I've answered both you and Quadratic Reciprocity here.

What are some past LTFF grants that you disagree with?

2
Daniel_Eth
7mo
In my personal opinion, the LTFF has historically funded too many bio-related grants and hasn't sufficiently triaged in favor of AI-related work.

What are some types of grant that you'd love to fund, but don't tend to get as applications?

8
Lawrence Chan
7mo
I'd personally like to see more well-thought out 1) AI governance projects and 2) longtermist community building projects that are more about strengthening the existing community as opposed to mass recruitment. 

Can grantees return money if their plans change, eg they get hired during a period of upskilling? If so, how often does this happen?

4
Linch
8mo
Yep, grantees are definitely allowed to do so and it sometimes happens!  I'll let someone who knows the numbers better answer with stats. 

Glad to hear it, I didn't think anyone still remembered this post!

Congratulations! Martin and Fernanda do great work, and I'm glad to see them being supported.

If someone is doing the shadow account thing (ie, a boiler room scam, I think), there will be exponentially fewer forecasters for each number of successful bets. I don't think this is the case for the well known ones
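To make the arithmetic concrete, here is a minimal sketch (mine, not from the original comment) of a toy model where each bet is an independent 50/50 call and shadow accounts that lose a bet are abandoned; the account count and odds are illustrative assumptions:

```python
# Toy model: shadow-account ("boiler room") forecasting.
# Each account bets on an independent coin flip each round; accounts that
# lose are abandoned, so roughly N / 2**k accounts survive k straight wins.
import random

def surviving_accounts(n_accounts: int = 10_000, n_bets: int = 10) -> list[int]:
    """Return the number of accounts still 'alive' after each round of bets."""
    survivors = n_accounts
    history = [survivors]
    for _ in range(n_bets):
        # Each surviving account wins its next 50/50 bet independently.
        survivors = sum(random.random() < 0.5 for _ in range(survivors))
        history.append(survivors)
    return history

if __name__ == "__main__":
    for streak, count in enumerate(surviving_accounts()):
        print(f"{streak} successful bets: {count} accounts remaining")
```

Under this toy model the counts fall off roughly geometrically (10,000, ~5,000, ~2,500, ...), which is the exponential thinning the comment points to; a well-known forecaster with a long public track record and no pyramid of abandoned accounts behind them is much harder to explain this way.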

site:forum.effectivealtruism.org communism should work, as a Google search

IMO it was tactically correct to not mention climate. The point of the letter is to get wide support, and I think many people would not be willing to put AI X-Risk on par with climate

4
jackva
11mo
Yeah, I can see that though it is a strange world where we treat nuclear and pandemics as second-order risks.

"You’ll need to get hands-on. The best ML and alignment research engages heavily with neural networks (with only a few exceptions). Even if you’re more theoretically-minded, you should plan to be interacting with models regularly, and gain the relevant coding skills. In particular, I see a lot of junior researchers who want to do “conceptual research”. But you should assume that such research is useless until it cashes out in writing code or proving theorems, and that you’ll need to do the cashing out yourself (with threat modeling being the main exception, since it forces a different type of concreteness). ..."

This seems strongly true to me

3
Lizka
1y
Yeah, I agree on priors & some arguments about feedback loops, although note that I don't really have relevant experience. But I remember hearing someone try to defend something like the opposite claim to me in some group setting where I wasn't able to ask the follow-up questions I wanted to ask — so now I don't remember what their main arguments were and don't know if I should change my opinion.

I agree re PhD skillsets (though think that some fraction of people gain a lot of high value skills during a PhD, esp re research taste and agenda setting).

I think you're way overrating OpenAI though - in particular, Anthropic's early employees/founders include more than half of the GPT-3 first authors!! I think the company has become much more oriented around massive distributed LLM training runs in the last few years though, so maybe your inference that people would gain those skills is more reasonable now.

Strongly downvoted. I agree with the other comments. I think this post is bad as-is, especially in the current context of AI Safety discourse, and should be posted as part of a broader post about violent methods being ineffective (at least, assuming you're writing such a post). I personally strongly want AI Safety discourse to condemn and disavow violent methods, which I think are both immoral and ineffective. I don't think you believe that violence is a good idea here, but this post in isolation just feels like "hey, violent approaches exist, maybe worth thinking about, you wouldn't be super weird for doing them"

This seems fair, I'm significantly pushing back on this as criticism of Redwood, and as focus on the "Redwood has been overfunded" narrative. I agree that they probably consumed a bunch of grant makers time, and am sympathetic to the idea that OpenPhil is making a bunch of mistakes here.

I'm curious which academics you have in mind as slam dunks?

  • I personally found MLAB extremely valuable. It was very well-designed and well-taught and was the best teaching/learning experience I've had by a fairly wide margin

Strong +1, I was really impressed with the quality of MLAB. I got a moderate amount out of doing it over the summer, and would have gotten much much more if I had done it a year or two before. I think that kind of outreach is high value, though plausibly a distraction from the core mission

Sorry for the long + rambly comment! I appreciate the pushback, and found clarifying my thoughts on this useful

I broadly agree that all of the funding ideas you point to seem decent. My biggest crux is that the counterfactual of not funding Redwood is not that one of those gets funded, and that the real constraints here are around logistical effort, grantmaker time, etc. I wrote a comment downthread with further thoughts on these points.

And that it is not Redwood's job to solve this - they're pursuing a theory of change that does not depend on these, and it se... (read more)

Fwiw, my read is that a lot of "must have an ML PhD" requirements are gatekeeping nonsense. I think you learn useful skills doing a PhD in ML, and I think you learn some skills doing a non-ML PhD (but much less that's relevant, though physics PhDs are probably notably more relevant than maths). But also that eg academia can be pretty terrible for teaching you skills like ML engineering and software engineering, lots of work in academia is pretty irrelevant in the world of the bitter lesson, and lots of PhDs have terrible mentorship.

I care about people havi... (read more)

I care about people having skills, but think that a PhD is only an OK proxy for them, and would broadly respect the skills of someone who worked at one of the top AI labs for four years straight out of undergrad notably more than someone straight out of a PhD program

I completely agree.

I've worked in ML engineering and research for over 5 years at two companies, I have a PhD (though not in ML), and I've interviewed many candidates for ML engineering roles.

If I'm reviewing a resume and I see someone has just graduated from a PhD program (and does not have ot... (read more)

There are other TAIS labs (academic and not) that we believe could absorb and spend considerably more funding than they currently receive.

My understanding is that, had Redwood not existed, OpenPhil would not have significantly increased their funding to these other places, and broadly has more money than they know what to do with (especially in the previous EA funding environment!). I don't know whether those other places have applied for grants, or why they aren't as funded as they could be, but this doesn't seem that related to me. And more broadly th... (read more)

To push back on this point, presumably even if grantmaker time is the binding resource and not money, Redwood also took up grantmaker time from OP (indeed I'd guess that OP's grantmaker time on RR is much higher than for most other grants given the board member relationship). So I don't think this really negates Omega's argument--it is indeed relevant to ask how Redwood looks compared to grants that OP hasn't made.

Personally, I am pretty glad Redwood exists and think their research so far is promising. But I am also pretty disappointed that OP hasn't funde... (read more)

Neel Nanda, Tom Lieberum and others, mentored by Jacob Steinhardt

I will clarify in my personal case that I did the grokking work as an independent research project and that Jacob only became involved in the project after I had done the core research, and his mentorship was specifically about the process of distillation and writing up the results (to be clear, his mentorship here was high value! But I think that the paper benefited less from his mentorship than is implied by the reference class of having him as the final author)

-5
NunoSempere
1y

I agree with this.

Proofreading a job application seems completely fine and socially normal to me, including for content. The thing that crosses a line, by my lights, is having someone (or GPT-4) write it for you.

2
Dawn Drescher
1y
Thanks! :-D
3
tobyj
1y
As a counter-opinion to the above, I would be fine with the use of GPT-4, or even paying a writer. The goal of most initial applications is to assess some of the skills and experience of the individual. As long as that information is accurate, then any system that turns it into a readable application (human or AI) seems fine, and more efficient seems better. The information this loses is the way someone would communicate their skills and experience unassisted, but I'm skeptical that this is valuable in most jobs (and suspect it's better to test for these kinds of skills later in the process). More generally, I'm doubtful of the value of any norms that are very hard to enforce and disadvantage scrupulous people (e.g. "don't use GPT-4" or "only spend x hours on this application").

Academic salaries are crazy low (which is one of my many reasons for not wanting to do a PhD lol)

Minor note that an anonymous feedback form might help to elicit negative feedback here. I appreciate the openness to criticism! (I don't have significant negative feedback, I like Constellation a lot, this is just a general note)

6
billzito
1y
Agreed. We have a Constellation-internal anonymous form that isn’t set up well for external feedback, and I didn’t want to block on setting it up before replying.

Of Redwood’s published research, we were impressed by Redwood's interpretability in the wild paper, but would consider it to be no more impressive than progress measures for grokking via mechanistic interpretability, executed primarily by two independent researchers, or latent knowledge in language models without supervision, performed by two PhD students.[4] These examples are cherry-picked to be amongst the best of academia and independent research, but we believe this is a valid comparison because we also picked what we consider the best of Redwood's r

... (read more)
7
Omega
1y
(written in first person because one post author wrote it) I think this is the area we disagree on the most. Examples of other ideas:
1. Generously fund the academics who you do think are doing good work (as far as I can tell, two of them -- Christopher Potts and Martin Wattenberg -- get no funding from OP, and David Bau gets an order of magnitude less). This is probably more on OP than Redwood, but Redwood could also explore funding academics and working on projects in collaboration with them.
2. Poach experienced researchers who are executing well on interpretability but working on what (by Redwood's lights) are less important problems, and redirect them to more important problems. Not everyone would want to be "redirected", but there's a decent fraction of people who would love to work on more ambitious problems but are currently not incentivized to do so, and a broader range of people are open to working on a wide range of problems so long as they are interesting. I would expect these individuals to cost a comparable amount to what Redwood currently pays (somewhat less if poaching from academia, somewhat more if poaching from industry) but be able to execute more quickly as well as spread valuable expertise around the organization.
3. Make one-year seed grants of around $100k to 20 early-career researchers (PhD students, independent researchers) to work on interpretability, nudging them towards a list of problems viewed as important by Redwood. Provide low-touch mentorship (e.g. a once-a-month call). Scale up the grants and/or hire people from the projects that did well after the one-year trial.
I wouldn't confidently claim that any of these approaches would necessarily best Redwood, but there's a large space of possibilities that could be explored and largely has not been. Notably, the ideas above differ from Redwood's high-level strategy to date by: (a) making bets on a broad portfolio of agendas; (b) starting small and evaluating projects before scalin

Man, I have a strong negative aesthetic reaction to the new frontpage that I struggle to articulate - the old one was just so pretty and aesthetic, in a way that feels totally lost! How hard would it be to have an option to revert to the old style?

5
JP Addison
1y
Sigh. I do think we should reply to this. It is hard to do so well, but I will give it my best shot.

Starting from the most important part of my reply: We do have important things we have heard from new users about our site that we're aiming to fix here. I do really appreciate the Book UI aesthetic, and have a huge amount of respect for Oliver Habryka for developing it while being an inexperienced designer and also being the project lead and a software developer. (That's not a backhanded compliment! I genuinely love it!) Nevertheless, it is a constraining style, and it is hard for new users to navigate, as validated by my experience designing inside it, and by our user interviews. Very hard.

Maintaining two consistent styles for everything is quite difficult. I speak to this briefly in my response about A/B testing. A less models-y, but maybe pretty persuasive answer: I expect if you have friends who have experience in frontend engineering and you ask them what they would do in my shoes, ≥75% of them would agree with me that we should not support multiple design styles.

The weakest part of my reply, but which I think is important to state, is that I, personally, love the new design style. I predict that, in 9 months, if we survey people and ask them whether they like the old design or new design better, ≥50% would reply that they like the new style better.
1
Will Bradshaw
1y
This comment helped clarify my feelings here. It's not that the new style is bad, really - it's unremarkably fine, and after a while I'll probably stop noticing it. It's that the old Forum was a really unusually beautiful website, and throwing that away feels quite sad to me.

I agree with this in spirit, but think that in this case it's completely fine.
a) Presumably, for some people, being zakat compatible has important cultural meaning. I generally think that the EA thing to do is to act within your constraints and belief systems and to do as much good as you can, not to need to tear down all of them.
b) In my opinion, the point of impartiality is "find the most effective ways of helping people". I do not personally think that GiveDirectly is the most effective way to give, but it's not at all clear to me that the Yemeni reci... (read more)
