Most importantly, it seems to me that the people in EA leadership who I felt were often the most thoughtful about these issues took a step back from EA, often because EA didn't live up to their ethical standards, or because they burned out trying to effect change, and this recent period has been very stressful.
Who on your list matches this description? Maybe Becca if you think she's thoughtful on these issues? But isn't that one at most?
I think that one reason this isn’t done is that the people who have the best access to such metrics might not think it’s actually that important to disseminate them to the broader EA community, rather than just sharing them as necessary with the people for whom these facts are most obviously action-relevant.
Yeah, I think that might be one reason it isn't done. I personally think that it is probably somewhat important for the community to understand itself better (e.g., the relative progress and growth in different interests/programs/geographies), especially for people in the community who are community builders, recruiters, founders, etc. I also recognise that it might not be seen as a priority for various reasons, or might be risky for other reasons, and I haven't thought a lot about it.
Regardless, if people who have data about the community that they don't want to...
I think you're right that my original comment was rude; I apologize. I edited my comment a bit.
I didn't mean to say that the global poverty EAs aren't interested in detailed thinking about how to do good; they definitely are, as demonstrated e.g. by GiveWell's meticulous reasoning. I've edited my comment to make it sound less like I'm saying that the global poverty EAs are dumb or uninterested in thinking.
But I do stand by the claim that you'll understand EA better if you think of "promote AMF" and "try to reduce AI x-risk" as results of two fairly differe...
I don't think it makes sense to think of EA as a monolith which both promoted bednets and is enthusiastic about engaging with the kind of reasoning you're advocating here. My oversimplified model of the situation is more like:
Thanks a lot, that makes sense. This comment no longer stands after the edits, so I have retracted it. Really appreciate the clarification!
(I'm not sure it's intentional, but this comes across as patronizing to global health folks. Saying folks "don't want to do this kind of thinking" is both harsh and wrong. It seems like you suggest that "more thinking" automatically leads people down the path of "more important" things than global health, which is absurd.
Plenty of people have done plenty of thinking through an EA lens and decided that bed nets are a great place ...
The situation with person L was deeply tragic. This comment explains some of the actions taken by CEA’s Community Health team as a result of their reports.
Even if most examples are unrelated to EA, if it's true that the Silicon Valley AI community has zero accountability for bad behavior, that seems like it should concern us?
EDIT: I discuss a [high uncertainty] alternative hypothesis in this comment.
I would also be interested in more clarification about how EA relevant the case studies provided might be, to whatever extent this is possible without breaking confidentiality. For example:
We were pressured to sign non-disclosure agreements or “consent statements” in a manipulative “community process”.
This does not sound like the work of CEA's Community Health team, but it would be an important update if it was, and it would be useful to clarify if it wasn't, so people don't jump to the wrong conclusions.
That being said, I think the AI community in the Bay Ar...
eg, some (much lighter) investigation, followed by:
I think it was unhelpful to refer to “Harry Potter fanfiction” here instead of perhaps “a piece of fiction”—I don’t think it’s actually more implausible that a fanfic would be valuable to read than some other kind of fiction, and your comment ended up seeming to me like it was trying to use the dishonest rhetorical strategy of implying without argument that the work is less likely to be valuable to read because it’s a fanfic.
I found Ezra's grumpy complaints about EA amusing and useful. Maybe 80K should arrange to have more of their guests' children get sick the day before they tape the interviews.
For what it’s worth, GPT-4 knows what "rat" means in this context: https://chat.openai.com/share/bc612fec-eeb8-455e-8893-aa91cc317f7d
I think this is a great question. My answers:
My attitude, and the attitude of many of the alignment researchers I know, is that this problem seems really important and neglected, but we overall don't want to stop working on alignment in order to work on this. If I spotted an opportunity for research on this that looked really surprisingly good (e.g. if I thought I'd be 10x my usual productivity when working on it, for some reason), I'd probably take it.
It's plausible that I should spend a weekend sometime trying to really seriously consider what research opportunities are available in this space.
My guess is that a lot of the skills involved in doing a good job of this research are the same as the skills involved in doing good alignment research.
fwiw the two videos linked look identical to me (EAG Bay Area 2023, "The current alignment plan, and how we might improve it")
I really like this frame. I feel like EAs are somewhat too quick to roll over and accept attacks from dishonest bad actors who hate us for whatever unrelated reason.
Yeah, I noticed a huge difference between EAs and my politically active right-wing friends, for whom disingenuous media articles calling you racist are just an occupational hazard. I think especially a lot of younger EAs straight out of college are used to affiliating with moral-language-coded condemnation and find being the recipient, or adjacent to the recipient, very disorienting.
Yes, I think this is very scary. I think this kind of risk is at least 10% as important as the AI takeover risks that I work on as an alignment researcher.
I don't think Holden agrees with this as much as you might think. For example, he spent a lot of his time in the last year or two writing a blog.
I think it's absurd to say that it's inappropriate for EAs to comment on their opinions on the relative altruistic impact of different actions one might take. Figuring out the relative altruistic impact of different actions is arguably the whole point of EA; it's hard to think of something that's more obviously on topic.
Obviously it would have been better if those organizers had planned better. It's not clear to me that it would have been better for the event to just go down in flames; OP apparently agreed with me, which is why they stepped in with more funding.
I don't think the Future Forum organizers have particularly strong relationships with OP.
The main bottleneck I'm thinking of is energetic people with good judgement to execute on and manage these projects.
How come you think that? Maybe I'm biased from spending lots of time with Charity Entrepreneurship folks, but I feel like I know a bunch of talented and entrepreneurial people who could run projects like the ones mentioned above. If anything, I would say neartermist EA has a better (or at least, longer) track record of incubating new projects relative to longtermist EA!
I disagree, I think that making controversial posts under your real name can improve your reputation in the EA community in ways that help your ability to do good. For example, I think I've personally benefited a lot from saying things that were controversial under my real name over the years (including before I worked at EA orgs).
Stand up a meta organization for neartermism now, and start moving functions over as it is ready.
As I've said before, I agree with you that this looks like a pretty good idea from a neartermist perspective.
Neartermism has developed meta organizations from scratch before, of course.
[...]
which is quite a bit more than neartermism had when it created most of the current meta.
I don't think it's fair to describe the current meta orgs as being created by neartermists and therefore argue that new orgs could be created by neartermists. These were created by ...
Fwiw my guess is that longtermism hasn’t had net negative impact by its own standards. I don’t think negative effects from AI speed up outweigh various positive impacts (e.g. promotion of alignment concerns, setting up alignment research, and non-AI stuff).
One issue for me is just that EA has radically different standards for what constitutes "impact." If near-term: lots of rigorous RCTs showing positive effect sizes.
If long-term: literally zero evidence that any long-termist efforts have been positive rather than negative in value, which is a hard enough question to settle even for current-day interventions where we see the results immediately . . . BUT if you take the enormous liberty of assuming a positive impact (even just slightly above zero), and then assume lots of people in the future, everything has a huge positive impact.
and then explains why these longtermists will not be receptive to conventional EA arguments.
I don't agree with this summary of my comment btw. I think the longtermists I'm talking about are receptive to arguments phrased in terms of the classic EA concepts (arguments in those terms are how most of us ended up working on the things we work on).
Holden Karnofsky on evaluating people based on public discourse:
...I think it's good and important to form views about people's strengths, weaknesses, values and character. However, I am generally against forming negative views of people (on any of these dimensions) based on seemingly incorrect, poorly reasoned, or seemingly bad-values-driven public statements. When a public statement is not misleading or tangibly harmful, I generally am open to treating it as a positive update on the person making the statement, but not to treating it as worse news about the
Buck seems to be consistently missing the point.
Although leaders may say "I won't judge or punish you if you disagree with me", listeners are probably correct to interpret that as cheap talk. We have abundant evidence from society and history that those in positions of power can and do act against those who disagree with them. A few remarks to the contrary should not convince people they are not at risk.
Someone who genuinely wanted to be open to criticism would recognise and address the fears people have about speaking up. Buck's comment of "the fact that people want to hid...
I think you're imagining that the longtermists split off and then EA is basically as it is now, but without longtermism. But I don't think that's what would happen. If longtermist EAs who currently work on EA-branded projects decided to instead work on projects with different branding (which will plausibly happen; I think longtermists have been increasingly experimenting with non-EA branding for new projects over the last year or two, and this will probably accelerate given the last few months), EA would lose most of the people who contribute to its infras...
My guess is that this new neartermist-only EA would not have the resources to do a bunch of things which EA currently does--it's not clear to me that it would have an actively maintained custom forum, or EAGs, or EA Funds. James Snowden at Open Phil recently started working on grantmaking for neartermist-focused EA community growth, and so there would be at least one dedicated grantmaker trying to make some of this stuff happen. But most of the infrastructure would be gone.
This paragraph feels pretty over the top. When you say "resources" I assume you mean...
Also worth noting that "all four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — were all major themes in the movement since its relatively early days, including at the very first "EA Summit" in 2013 (see here), and IIRC for at least a few years before then." (Comment by lukeprog)
Note that while Caleb is involved with grantmaking, I don’t think he has funded Atlas, so this post isn’t about a grantee of his.
Note that "lots of people believe that they need to hide their identities" isn't itself very strong evidence for "people need to hide their identities". I agree it's a shame that people don't have more faith in the discourse process.
While I do think I maybe disagree with Buck on the actual costs to people speaking openly, I also think there are pretty big gains in terms of trust and reputation to be had by speaking out openly. In my experience it's a kind of increase-in-variance of how people relate to you, with an overall positive mean, with there definitely being some chance of someone disliking you being open, but there also being a high chance of someone being very interested in supporting you and caring a lot about you not being unfairly punished for speaking true things, and rew...
Thanks for the specific proposals.
The reasonable person also knows that senior EAs have a lot of discretionary power, and thus there is a significant chance retaliatory action would not be detected absent special safeguards.
FWIW, I think you're probably overstating the amount of discretionary power that senior EAs could use for retaliatory action.
IMO, if you proposition someone, you're obligated to mention this to other involved parties in situations where you're wielding discretionary power related to them. I would think it was wildly inappropriate for a ...
Thanks, Buck. It is good to hear about those norms, practices, and limitations among senior EAs, but the standard for what constitutes harassment has to be what a reasonable person in the other person's shoes would think. The student or junior EA experiences harm if they believe a refusal will have an adverse effect on their career, even if the senior EA actually lacks the practical ability to create such an effect.[1]
The reasonable student or junior EA doesn't know about undocumented (or thinly documented) norms, practices, and limitations among senior EAs. ...
For what it's worth, my current vote is for immediate suspension in situations where there are credible allegations that anyone in a grantmaking etc. capacity used such powers in retaliation for rejected romantic or sexual advances. In addition to being illegal, such actions are just so obviously evidence of bad judgement and/or poor self-control that I'd hesitate to consider anyone who acted in such ways a good fit for any significant position of power. I have not thought about the specific question much, but it's very hard for me to imagine any realistic situation where someone with such traits is a good fit for grantmaking.
Re 2: You named a bunch of cases where a professional relationship comes with restrictions on sex or romance. (An example you could have given, which I think is almost universally followed in EA, is "people shouldn't date anyone in their chain of management"; IMO this is a good rule.) I think it makes sense to have those relationships be paired with those restrictions. But it's not clear to me that the situation in EA is more like those situations than like these other situations:
So, I think the first question is something like: "Could a reasonable person in the shoes of the lower-status person conclude that rebuffing the overtures of the higher-status person [1] could result in a meaningfully adverse impact on their career due to the higher-status person taking improper action?"[2]
I think in the vast majority of cases following your three hypotheticals would result in a "no" answer to this question. For instance, most professors have relatively little influence on the operations of other universities, or on the nat...
Yeah, IMO medals definitely don't suffice for me to think it's extremely likely someone will be AFAICT good at doing research.
I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.
For example:
...For example, I recently was surprised to hear someone who controls relevant funding and community space access remark to
Thanks for the nuanced response. FWIW, this seems reasonable to me as well:
I agree that it's important to separate out all of these factors, but I think it's totally reasonable for your assessment of some of these factors to update your assessment of others.
Separately, I think that people are sometimes overconfident in their assessment of some of these factors (e.g. intelligence), because they over-update on signals that seem particularly legible to them (e.g. math accolades), and that this can cause cascading issues with this line of reasoning. But th...
I do not actually endorse this comment above. It is used as an illustration of why a true statement alone might not mean it is "fair game", or a constructive way to approach what you want to say. Here is my real response:
As a random aside, I thought that your first paragraph was totally fair and reasonable and I had no problem with you saying it.
@Buck – As a hopefully constructive point I think you could have written a comment that served the same function but was less potentially off-putting by clearly separating your critique between a general critique of critical writing on the EA Forum and critiques of specific people (me or the OP author).
I do update on people when they say things on this forum that I think indicate bad things about their judgment or integrity, as I think I should
I agree! But given this, I think the two things you mention often feel highly correlated, and it's hard for people to actually know that when you make a statement like that, that there's no negative judgement either from you, nor from other readers of your statement. It also feels a bit weird to suggest there's no negative judgement if you also think the forum is a better place without their critical writing?
...In gen
By “unpleasant” I don’t mean “the authors are behaving rudely”, I mean “the content/framing seems not very useful and I am sad about the effect it has on the discourse”.
I picked that post because it happened to have a good critical comment that I agreed with; I have analogous disagreements with some of your other posts (including the one you linked).
Thanks for your offer to receive critical feedback.
Thank you Buck that makes sense :-)
“the content/framing seems not very useful and I am sad about the effect it has on the discourse”
I think we very strongly disagree on this. I think critical posts like this have a very positive effect on discourse (in EA and elsewhere) and am happy with the framing of this post and a fair amount (although by no means all) of the content.
I think my belief here is routed in quite strong lifetime experiences in favour of epistemic humility, human overconfidence especially in the domain of doing good, positi...
Thanks for your sincere reply (I'm not trying to say other people aren't sincere, I just particularly felt like mentioning it here).
Here are my thoughts on the takeaways you thought people might have.
- There is an EA leadership (you saying it, as a self-confessed EA leader, is likely more convincing in confirming something like this than some anonymous people saying it), which runs counter to a lot of the other messaging within EA. This sounds very in-groupy (particularly as you refer to it as a ‘social cluster’ rather than e.g. a professional cluster)
As I s...
- Why is only "arguable" that you had more power when you were an active grantmaker?
I removed "arguable" from my comment. I intended to communicate that even when I was an EAIF grantmaker, that didn't clearly mean I had "that much" power--e.g. other fund managers reviewed my recommended grant decisions, and I moved less than a million dollars, which is a very small fraction of total EA spending.
- Do you mean you don't have much power, or that you don't use much power?
I mean that I don't have much discretionary power (except inside Redwood). I can't unila...
I think Larks's response is reasonably close to my object-level position.
My quick summary of a big part of my disagreement: a major theme of this post suggests that various powerful EAs hand over a bunch of power to people who disagree with them. The advantage of doing that is that it mitigates various echo chamber failure modes. The disadvantage of doing that is that now, people who you disagree with have a lot of your resources, and they might do stuff that you disagree with. For example, consider the proposal "OpenPhil should diversify its grantmaking by...
Some things that I might come to regret about my comment:
I consider this a good outcome--I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant. In my opinion, it is fair game for me to make truthful comments that cause people to feel less incentivized to write posts like this one (or yours) in future (though I can imagine changing my mind on this).
This was a disappointing comment to read fro...
I would prefer an EA Forum without your critical writing on it, because I think your critical writing has similar problems to this post (for similar reasons to the comment Rohin made here), and I think that posts like this/yours are fairly unhelpful, distracting, and unpleasant.
I think this is somewhat unfair. I think it is unfair to describe this OP as "unpleasant", it seems to be clearly and impartially written and to go out of its way to make it clear it is not picking on individuals. Also I feel like you have cherry picked a post from my post history t...
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy in, which I assume is why they sent this publicly.
I think you're right about this and that my comment was kind of unclearly equivocating between the suggestions that aimed at the community and the suggestions that aimed at orgs. (Though the suggestions aimed at the community also give me a vibe of "please, core EA orgs, start telling people that they should be different in these ways" rather than "here is my argument for why people should be different in these ways").
A self admitted EA leader posting a response poo-pooing a long thought out criticism with very little argumentation
I'm sympathetic to the position that it's bad for me to just post meta-level takes without defending my object-level position.
Thanks for this, and on reading other comments etc, I was probably overly harsh on you for doing so.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact.
Obviously it's the responsibility of EAs and EA orgs to actively seek out ways that they could do things better. But I'm just noting that it seems unlikely to me that this post will actually persuade EA orgs to do things differently, and so if the authors had hoped to have impact via that route, they should try another plan instead.
I think that the class of arguments in this post deserve to be considered carefully, but I'm personally fine with having considered them in the past and decided that I'm unpersuaded by them, and I don't think that "there is an EA Forum post with a lot of discussion" is a strong enough signal that I should take the time to re-evaluate a bunch--the EA Forum is full of posts with huge numbers of upvotes and lots of discussion which are extremely uninteresting to me.
(In contrast, e.g. the FTX collapse did prompt me to take the time to re-evaluate a bunch of what I thought about e.g. what qualities we should encourage vs discourage in EAs.)
I'd be pretty interested in you laying out in depth why you have basically decided to dismiss this very varied and large set of arguments. (Full disclosure: I don’t agree with all of them, but in general I think they're pretty good.) A self-admitted EA leader posting a response poo-pooing a long, thought-out criticism with very little argumentation, and mostly criticising it on tangential ToC grounds (which you don't think or want to succeed anyway?), seems like it could be construed as pretty bad faith and problematic. I don’t normally reply like this, but...
[For context, I'm definitely in the social cluster of powerful EAs, though don't have much actual power myself except inside my org (and my ability to try to persuade other EAs by writing posts etc); I had more power when I was actively grantmaking on the EAIF but I no longer do this. In this comment I’m not speaking for anyone but myself.]
This post contains many suggestions for ways EA could be different. The fundamental reason that these things haven't happened, and probably won't happen, is that no-one who would be able to make them happen has decided t...
I think all your specific points are correct, and I also think you totally miss the point of the post.
You say you thought about these things a lot. Maybe lots of core EAs have thought about these things a lot. But what core EAs have considered or not is completely opaque to us. Not so much because of secrecy, but because opaqueness is the natural state of things. So lots of non-core EAs are frustrated about lots of things. We don't know how our community is run or why.
On top of that, there are actually consequences for complaining or disagreeing too m...
One irony is that it's often not that hard to change EA orgs' minds. E.g. on the forum suggestion, which is the one that most directly applies to me: you could look at the posts people found most valuable and see if a more democratic voting system better correlates with what people marked as valuable than our current system. I think you could probably do this in a weekend, it might even be faster than writing this article, and it would be substantially more compelling.[1]
(CEA is actually doing basically this experiment soon, and I'm >2/3 chance the resu...
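The weekend analysis suggested above could be sketched in a few lines. Here is a minimal, hedged sketch assuming you can export, for each post, its current karma-weighted score, a flat one-person-one-vote count, and how many readers marked it "most valuable" — all numbers below are invented placeholders, not real forum data:

```python
# Toy sketch of the suggested analysis: does a flat one-person-one-vote score
# track "found valuable" markers better than the current karma-weighted score?
# All data below is invented; the real inputs would come from a forum export.

def ranks(xs):
    # Rank 1 = smallest value; assumes no ties, which holds for this toy data.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(xs, ys):
    # Spearman rank correlation: 1 - 6 * sum(d^2) / (n * (n^2 - 1)).
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

karma_weighted = [120, 45, 200, 15, 80]      # current forum scores (invented)
one_person_one_vote = [40, 25, 35, 12, 50]   # flat vote counts (invented)
marked_valuable = [30, 22, 18, 10, 45]       # "found valuable" counts (invented)

print(f"karma-weighted vs valuable: rho={spearman(karma_weighted, marked_valuable):.2f}")
print(f"flat votes vs valuable:     rho={spearman(one_person_one_vote, marked_valuable):.2f}")
```

Whichever scoring rule correlates better with the "found valuable" markers would be the compelling evidence the comment asks for; on real data you would also want to handle tied scores (e.g. with `scipy.stats.spearmanr`).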
I strongly disagree with this response, and find it bizarre.
I think assessing this post according to a limited number of possible theories of change is incorrect, as influence is often diffuse and hard to predict or measure.
I agree with freedomandutility's description of this as an "isolated demand for [something like] rigor".
There seem to have been a lot of responses to your comment, but there are some points which I don’t see being addressed yet.
I would be very interested in seeing another similarly detailed response from an ‘EA leader’ whose work focusses on community building/community health. (Put on top as this got quite long; rationale below, but first:)
I think at least a goal of the post is to get community input (I’ve seen in many previous forum posts) to determine the best suggestions without claiming to have all the answers. Quoted from the original post (intro to 'Sug...
Some things that I might come to regret about my comment:
Relatedly, I think a short follow-up piece listing 5-10 proposed specific action items tailored to people in different roles in the community would be helpful. For example, I have the roles of (1) low-five-figure donor, and (2) active forum participant. Other people have roles like student, worker in an object-level organization, worker in a meta organization, object-level org leader, meta org leader, larger donor, etc. People in different roles have different abilities (and limitations) in moving a reform effort forward.
I think "I didn't walk away w...
"If elites haven't already thought of/decided to implement these ideas, they're probably not very good. I won't explain why. "
"Posting your thoughts on the EA Forum is complaining, but I think you will fail if you try to do anything different. I won't explain why, but I will be patronising."
"Meaningful organisational change comes from the top down, and you should be more polite in requesting it. I doubt it'll do anything, though."
Do you see any similarities between your response here and the problems highlighted by the original post...
...If that's your goal, I think you should try harder to understand why core org EAs currently don't agree with your suggestions, and try to address their cruxes. For this ToC, "upvotes on the EA Forum" is a useless metric--all you should care about is persuading a few people who have already thought about this all a lot. I don't think that your post here is very well optimized for this ToC.
... I think the arguments it makes are weak (and I've been thinking about these arguments for years, so it would be a bit surprising if there was a big update from t
I strongly downvoted this response.
The response says that EA will not change "people in EA roles [will] ... choose not to", that making constructive critiques is a waste of time "[not a] productive ways to channel your energy" and that the critique should have been better "I wish that posts like this were clearer" "you should try harder" "[maybe try] politely suggesting".
This response seems to be putting all the burden of making progress in EA onto those trying to constructively critique the movement, those who are putting their limited spare time in...
I think that's particularly true of some of the calls for democratization. The Cynic's Golden Rule ("He who has the gold, makes the rules") has substantial truth both in the EA world and in almost all charitable movements. In the end, if the people with the money aren't happy with the idea of random EAs spending their money, it just isn't going to happen. And to the extent there is a hint of cutting off or rejecting donors, that would lead to a much smaller EA to the extent it was followed. In actuality, it wouldn't be -- someone is going to take the donor...
Interesting that another commenter has the opposite view, and criticises this post for being persuasive instead of explanatory!
May just be disagreement but I think it might be a result of a bias of readers to focus on framing instead of engaging with object level views, when it comes to criticisms.
I think it’s fairly easy for readers to place ideas on a spectrum and identify trade offs when reading criticisms, if they choose to engage properly.
I think the best way to read criticisms is to steelman as you read, particularly via asking whether you’d sympathise with a weaker version of the claim, and via the reversal test.
I think this comment reads as though it’s almost entirely the authors’ responsibility to convince other EAs and EA orgs that certain interventions would help maximise impact, and that it is barely the responsibility of EAs and EA orgs to actively seek out and consider interventions which might help them maximise impact. I disagree with this kind of view.
I think the criticism of the theory of change here is a good example of an isolated demand for rigour (https://slatestarcodex.com/2014/08/14/beware-isolated-demands-for-rigor/), which I feel EAs often apply when it comes to criticisms.
It’s entirely reasonable to express your views on an issue on the EA forum for discussion and consideration, rather than immediately going directly to relevant stakeholders and lobbying for change. I think this is what almost every EA Forum post does and I have never before seen these posts criticised as ‘complaining’.
One thing I think this comment ignores is just how many of the suggestions are cultural, and thus do need broad communal buy in, which I assume is why they sent this publicly. Whilst they are busy, I'd be pretty disappointed if the core EAs didn't read this and take the ideas seriously (ive tried tagging dome on twitter), and if you're correct that presenting such a detailed set of ideas on the forum is not enough to get core EAs to take the ideas seriously I'd be concerned about where there was places for people to get their ideas taken seriously. I'm luc...
Title: Paul Christiano on how you might get consequentialist behavior from large language models
Author: Paul Christiano
URL: https://forum.effectivealtruism.org/posts/dgk2eLf8DLxEG6msd/how-would-a-language-model-become-goal-directed?commentId=cbJDeSPtbyy2XNr8E
Why it's good: I think lots of people are very wrong about how LLMs might lead to consequentialist behavior, and Paul's comment here is my favorite attempt at answering this question. I think that this question is extremely important.
- Snacks also feel more optional
I strongly disagree fwiw; it was seriously inconvenient for me that there weren't snacks at EAGx Berkeley recently. And they're not very expensive compared to catering actual meals, I think.
I don't think we should think of EA as having "not enough money to pay for all the things they're now considering cancelling". Open Phil has enough money for at least a decade of longtermist spending at current rates; the fact that they aren't spending all their money right now means that they think that there will be grant opportunities in the future that are better than grant opportunities that they choose not to make now.
Decisions to cut back on spending on a community-wide level shouldn't be made from the perspective of short-term budgetary constraints...
Your first comment sounded like you're criticizing CEA for their allocation of resources. Your second reply now sounds more like you're criticizing funders (like Open Phil) for not increasing CEA's budget. (Or maybe CEA for not asking for a funding increase more aggressively.) I guess the main thing you're saying is that you find it hard to believe everyone is acting optimally if we have to cut back on EAGs in these ways, given that money isn't that tight in the EA movement as a whole.
I think it's pretty unreasonable to call him a Nazi--he'd hate Nazis, because he loves Jews and generally dislikes dumb conservatives.
I agree that he seems pretty racist.