All of sawyer🔸's Comments + Replies

Thanks for writing and posting this! I've had these sorts of feelings floating around in my head for a while, but this is the best term I've heard for it.

Not having read the article, I was thrown by this and had to go check. But unfortunately they do seem to be calling the timeline models themselves "bad".

I focussed on one section alone: their “timelines forecast” code and accompanying methodology section. Not to mince words, I think it’s pretty bad.

I think this is the single most underrated post on the EA Forum.

3
Toby Tremlett🔹
Thanks, this was a good nudge to curate!  (probably don't agree on 'single most' but definitely underrated!)
3
Holly Elmore ⏸️ 🔸
High praise!

Thanks for writing this! I have a more philosophical counter that I'd love for you to respond to.

The idea of haggling doesn't sit well with me or my idea of what a good society should be like. It feels competitive, uncooperative, and zero-sum, when I want to live in a society where people are honest and cooperative. Specifically, it seems to encourage deceptive pricing and reward people who are willing to be manipulative and stretch the truth.

In other words, haggling gives me bad vibes.

When you think about haggling/negotiating in an altruistic context, do you... (read more)

3
Nick Kautz
I feel the same way. The moment someone initiates haggling with me, they've soured the relationship, unless they can demonstrate that they aren't haggling just because they Want to pay less, but because they Have to and it means a lot to them. Especially if I've priced something generously to begin with. Along the lines of "you get what you pay for", I always remember how a transaction went with someone, and it dictates whether I want to deal with them in the future, how generous I'll be with my time/advice/effort in future interactions, and whether I refer them to anyone. The people who seem to have the most integrity and likeability are the ones who are happy to pay the indicated price and are aware of, and appreciative of, anything I do above or beyond what's expected. The worst are the ones who are a black hole: they take a long time, ask tons of questions, ask me to throw in accessories, and also haggle the price. They are also the ones most likely to return the product, or to need help with it in the future. There are plenty of people who expect and enjoy the haggling process on both sides of the transaction, but I suspect that in altruistic/empathic circles the sentiment leans more in the direction I've laid out here.
4
Sam Anschell
Thanks for the thoughtful comment, Sawyer. I agree that haggling can be zero sum in many (though not all) cases, and I understand the sentiment of your note. In my personal experience, haggling hasn’t felt particularly adversarial or deceptive. It feels less like the Pawn Stars guy ripping off antiquers, and more like marketing, campaign finance, professional poker, standardized test prep, quant trading, or another type of legal and socially acceptable form of working for a bigger piece of a fixed pie. I think Robi raises a good point. Despite transaction costs, I would guess that haggling creates societal surplus on net by enabling more trades. In much (maybe most) of the world, haggling for daily goods is common; I’ve been a fly on the wall at outdoor markets in a handful of LMICs, and my impression is that haggling helps customers and vendors send valuable signals about their willingness to buy/sell. This isn’t exactly getting at what you wrote, but I feel uncomfortable negotiating when my counterparty seems like they need the money. E.g., if a taxi driver in another country quotes a “tourist” price where it’s pretty clear that locals would haggle and I’m (literally) getting taken for a ride, I pay sticker. When it comes to contracts with a San Francisco landlord, a big university, or DocuSign, I feel motivated to haggle. Not because I see my counterparty as “the bad guy”, but because haggling is a standard practice following social norms that helps me direct more resources to important projects making the world a better place.
6
Robi Rahman🔸
Counterpoint: some people are more price-sensitive than typical consumers, and really can't afford things. If we prohibit or stigmatize haggling, society is leaving value on the table, in terms of sale profits and consumer surplus generated by transactions involving these more financially constrained consumers. (When the seller is a monopolist, they even introduce opportunities like this through the more sinister-sounding practice of price discrimination.)
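
To make Robi's point concrete, here is a toy surplus calculation; the numbers are invented for illustration, not from the thread:

```latex
% Invented illustration: seller's cost \$5, posted price \$10,
% budget-constrained buyer's willingness to pay \$7.
\text{No haggling: } \$7 < \$10 \;\Rightarrow\; \text{no trade, total surplus} = \$0
\qquad
\text{Haggled price \$6: } \underbrace{(\$7-\$6)}_{\text{buyer surplus}} + \underbrace{(\$6-\$5)}_{\text{seller profit}} = \$2
```

The $2 is the value "left on the table" if haggling is prohibited or stigmatized.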

Ah! Yes, that's a good point and I misinterpreted. That's part of what I meant by "historical accident", but now I think that it was confusing to say "accident" and I should have said something like "historical activities".

I agree that they're worth calling out somehow, I just think "lab" is a misleading way of doing so given their current activities. I've made some admittedly clunky suggestions in other threads here.

I completely agree that OpenAI and DeepMind started out as labs and are no longer so.

4
calebp
My point was that I don’t think it was marketing or a historical accident, and it’s actually quite different to the other companies that you named which were all just straightforward revenue generating companies from ~day 1.

I agree that those companies are worth distinguishing. I just think calling them "labs" is a confusing way to do so. If the purpose was only to distinguish them from other AI companies, you could call them "AI bananas" and it would be just as useful. But "AI bananas" is unhelpful and confusing. I think "AI labs" is the same (to a lesser but still important degree).

3
Matthew_Barnett
Unfortunately there's momentum behind the term "AI lab" in a way that is not true for "AI bananas". Also, it is unambiguously true that a major part of what these companies do is scientific experimentation, as one would expect in a laboratory—this makes the analogy to "AI bananas" imperfect.

I think this is a useful distinction, thanks for raising it. I support terms like "frontier AI company," "company making frontier AI," and "company making foundation models," all of which help distinguish OpenAI from Palantir. Also, it seems pretty likely that within a few years most companies will be AI companies!? So we'll need new terms. I just don't want that term to be "lab".

Another thing you might be alluding to is that "lab" is less problematic when talking to people within the AI safety community, and more problematic the further out you go. I thi... (read more)

Interesting point! I'd be OK with people calling them "evil mad scientist labs," but I still think the generic "lab" has more of a positive, harmless connotation than this negative one.

I'd also be more sympathetic to calling them "labs" if (1) we had actual regulations around them or (2) they were government projects. Biosafety and nuclear weapons labs have a healthy reputation for being dangerous and unfriendly, in a way "computer labs" do not. Also, private companies may have biosafety containment labs on premises, and the people working within them are ... (read more)

From everything I've seen, GWWC has totally transformed under your leadership. And I think this transformation has been one of the best things that's happened in EA during that time. I'm so thankful for everything you've done for this important organization.

Yep! Something like this is probably unavoidable, and it's what all of my examples below do (BERI, ACE, and MIRI).

There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transparency page and top contributors page).

(Not deeply thought through) Funders have a strong (though usually indirect) influence on the priorities and goals of the organization. Transparency about funders adds transparency about the priorities and goals of the organization. Conversely, lack of funder transparency creates the appearance that you're trying to hide something important about your goals and priorities. This sort of argument comes up a lot in US political funding, under the banners of "Citizens United", "SuperPACs", etc. I'm making a pretty similar argument to that one.

Underlying my fee... (read more)

I think this dynamic is generally overstated, at least in the existential risk space that I work in. I've personally asked all of our medium and large funders for permission, and the vast majority of them have given permission. Most of the funding comes from Open Philanthropy and SFF, both of which publicly announce all of their grants—when recipients decided not to list those funders, it's not because the funders don't want them to. There are many examples of organizations with high funding transparency, including BERI (which I run), ACE, and MIRI (transp... (read more)

6
NickLaing
That makes sense; I was talking only about my space, global health and development.

Nonprofit organizations should make their sources of funding really obvious and clear: How much money you got from which grantmakers, and approximately when. Any time I go on some org's website and can't find information about their major funders, it's a big red flag. At a bare minimum you should have a list of funders, and I'm confused why more orgs don't do this.

2
[anonymous]
Literally never even considered it. Would you mind sharing an example of this being done well?
9
Habryka [Deactivated]
Hmm, reasonably fair point. I might add some language to the Lightcone/LessWrong about pages.
5
kave
Why do you think that? (I agree fwiw)
8
NickLaing
This is ideal, yet many funders, individual or otherwise, either prohibit this or would rather you didn't. Maybe even most. I think this is a good idea, but less important than many other factors about organisations.

I think people would say that the dog was stronger and faster than all previous dog breeds, not that it was "more capable". It's in fact significantly less capable at not attacking its owner, which is an important dog capability. I just think the language of "capability" is somewhat idiosyncratic to AI research and industry, and I'm arguing that it's not particularly useful or clarifying language.

More to my point (though probably orthogonal to your point), I don't think many people would buy this dog, because most people care more about not getting attacke... (read more)

1
skluug
I think game playing AI is pretty well characterized as having the goal of winning the game, and being more or less capable of achieving that goal at different degrees of training. Maybe I am just too used to this language but it seems very intuitive to me. Do you have any examples of people being confused by it?

What is "capabilities"? What is "safety"? People often talk about the alignment tax: the magnitude of capabilities/time/cost a developer loses by implementing an aligned/safe system. But why should we consider an unaligned/unsafe system "capable" at all? If someone developed a commercial airplane that went faster than anything else on the market, but it exploded on 1% of flights, no one would call that a capable airplane.

This idea overlaps with safety culture and safety engineering and is not new. But alongside recent criticism of the terms "safety" and "alignment", I'm starting to think that the term "capabilities" is unhelpful, capturing different things for different people.

3
skluug
I don’t think the airplane analogy makes sense because airplanes are not intelligent enough to be characterized as having their own preferences or goals. If there were a new dog breed that was stronger/faster than all previous dog breeds, but also more likely to attack their owners, it would be perfectly straightforward to describe the dog as “more capable” (but also more dangerous).

I played the paperclips game 6-12 months before reading Superintelligence (which is what convinced me to prioritize AI x-risk), and I think the game made these ideas easier for me to understand and internalize.

This is truly crushing news. I met Marisa at a CFAR workshop in 2020. She was open, kind, and grateful to everyone, and it was joyful to be around her. I worked with her a bit revitalizing the EA Operations Slack Workspace in 2020, and had only had a few conversations with her since then, here and there at EA events. Marisa (like many young EAs) made me excited for a future that would benefit from her work, ambition, and positivity. Now she's gone. She was a good person, I'm glad she was alive, and I am so sad she's gone.

Good reasoning, well written. Reading this post convinced me to join the next NYC protest. Unfortunately I missed the one literally two days ago because I waited too long to read this. But I plan to be there in September.

One thing I think is often missing from these sorts of conversations is that "alignment with EA" and "alignment with my organization's mission" are not the same thing! It's a mistake to assume that the only people who understand and believe in your organization’s mission are members of the effective altruism community. EA ideas don’t have to come in a complete package. People can believe that one organization’s mission is really valuable and important, for different reasons, coming from totally different values, and without also believing that a bunch of o... (read more)

2
Joseph
I think that is a good point, and I wish that we had included this in the post! We approached this mainly from the perspective of community building work (Tatiana's main work), which as a meta-EA job is probably the only type of work for which there is such high overlap between "alignment with EA" and "alignment with my organization's mission." But you are correct: I can see how there would be a lot less overlap for an organization focused on a specific cause.

Within EA, work on x-risk is very siloed by type of threat: There are the AI people, the bio people, etc. Is this bad, or good?

Which of these is the correct analogy?

  1. "Biology is to science as AI safety is to x-risk," or 
  2. "Immunology is to biology as AI safety is to x-risk"

EAs seem to implicitly think analogy 1 is correct: some interdisciplinary work is nice (biophysics) but most biologists can just be biologists (i.e. most AI x-risk people can just do AI).

The "existential risk studies" model (popular with CSER, SERI, and lots of other non-EA academics) ... (read more)

I agree with your last sentence, and I think in some versions of this it's the vast majority of people. A lot of charity advertising seems to encourage a false sense of confidence, e.g. "Feed this child for $1," or "adopt this manatee". I think this makes use of a near-universal human bias which probably has a name but which I am not recalling at the moment. For a less deceptive version of this, note how much effort AMF and GiveDirectly seem to have put into tracking the concrete impact of your specific donation.

Orthogonally, I think most people are willing to pay more for a more legible/direct theory of impact. 

"I give $2800, this kid has lifesaving heart surgery" is certainly more legible and direct than a GiveWell-type charity. In the former case, the donor doesn't have to trust GiveWell's methodologies, data gathering abilities, and freedom from bias. I've invested a significant amount of time and thought into getting to my current high level of confidence in GiveWell's analyses, more time than most people are prepared to spend thinking about their charit... (read more)


Building off of Jason's comment: Another way to express this is that comparing directly to the $5,500 GiveWell bar is only fair for risk-neutral donors (I think?). Most potential donors are not really risk neutral, and would rather spend $5,001 to definitely save one life than $5,000 to have a 10% chance of saving 10 lives. Risk neutrality is a totally defensible position, but so is non-neutrality. It's good to have the option of paying a "premium" for higher confidence (but lower risk-neutral EV).
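
Spelling out the expected-value arithmetic behind that comparison (same numbers as above):

```latex
\text{Risky option: } 0.10 \times 10 = 1 \text{ expected life for } \$5{,}000
\qquad
\text{Certain option: } 1 \text{ life for } \$5{,}001
```

A risk-neutral donor is indifferent up to the $1 difference; a risk-averse donor pays it as a premium for certainty.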

Leaving math mode...I love this post. It made me emotiona... (read more)

3
shepardriley
I think that’s such a good point about risk neutrality, and it's important to keep in mind when looking outside of EA spaces: risk neutrality is so accepted in EA spaces that it's sometimes forgotten that outside them it is not necessarily so. Also, just a really moving post. Trying to do the best you can in an emotional yet rational way. I really appreciate this.

Thanks so much for the encouragement, I really do appreciate it.

Great point! I hadn't thought about risk neutrality vs. non-neutrality here, or that there might be a pool of people even within EA who would rather pay a "premium" for higher confidence. Outside EA, my experience has been that perhaps even the majority of people would prefer to pay for higher confidence.

Very nice post. "Anarchists have no idols" strikes me as very similar to the popular anarchist slogan, "No gods, no masters." Perhaps the person who said it to you was riffing on that?

I think a simpler explanation for his bizarre actions is that he is probably the most stressed-out person on the face of the earth right now. Or he's not seeing the situation clearly, or some combination of the two. Also probably sleep-deprived, struggling to get good advice from people around him, etc.

(This is not meant to excuse any of his actions or words, I think he's 100% responsible for everything he says and does.)

1
skerple
definitely not struggling to get good advice:

This sort of falls under the second category, "Grantees who received funds, but want to set them aside to return to creditors or depositors." At least that's how I read it, though the more I think about it the more this category is kind of confusing and your wording seems more direct.

I think it'd be preferable to explicitly list as a reason for applying something along the lines of "Grantees who received funds, but want to set them aside to protect themselves from potential clawbacks". 

Less importantly, it'd possibly be better to make it separate from "to return to creditors or depositors". 

Thanks for the clarification. I agree that the FTX problems are clearly related to crypto being such a new unregulated area, and I was wrong to try to downplay that causal link.

I don't think anonymized donations would help mitigate conflicts of interest. In fact I think it would encourage COIs, since donors could directly buy influence without anyone knowing they were doing so. Currently one of our only tools for identifying otherwise-undisclosed COIs is looking at flows of money. If billionaire A donates to org B, we have a norm that org B shouldn't do st... (read more)

Downvoted because I think this is too harsh and accusatory:

I cannot believe that some of you delete your posts simply because it ends up being downvoted.

Also because I disagree in the following ways:

  • Donating anonymously seems precisely opposed to transparency. At the very least, I don't think it's obvious that donor anonymity works towards the values you're expressing in your post. Personally I think being transparent about who is donating to what organizations is pretty important for transparency, and I think this is a common view.
  • I don't think FTX's mist
... (read more)

Sorry that the post came off as harsh and accusatory in tone. I mainly meant to express my exasperation with how quickly the situation unfolded. I'm worried about the coming months and how they will affect the community, both now and in the long term.
Clearly, revealing who is donating is good for transparency. However, if donations were anonymized from the perspective of the recipients, I think that would help mitigate conflicts of interest. I think there needs to be more dialogue about how we can mitigate conflicts of interest, regardless of whether we a... (read more)

Yep this is a great point and overlaps with Vardev's comment. If I thought that the money was gained immorally, it would be pretty bad to just return it to the people who did the immoral thing!

2
Jason
FTX and 133 related entities have filed for bankruptcy in US court, so distribution of corporate assets will follow applicable law. Equity holders like SBF are last in line. However, if it's really bad there might not be enough to pay more than the costs of bankruptcy administration -- this is unlikely based on the petition that was filed.

Yeah this seems super relevant, great point! To be honest I'm skeptical of how separate "FTX Foundation, Inc." is/was from the rest of the FTX conglomerate. Would be useful to see the Foundation's finances after this all shakes out.

Put very vaguely: If it turned out that the money BERI received was made through means which I consider to be immoral, then I think I would return the money, even if that meant cancelling the projects it funded.

But of course I don't know where my bar for "immoral" is in this case. Also, it's probably not the case that all of FTX's profits were immoral. So how do I determine (even in theory) whether the money BERI received was part of the "good profits" or the "bad profits"?

1
Jason
It's likely temporal. Once FTX turned to deeply unethical business practices and depositors suffered real losses (even without knowing it), then any "profits" were morally owed to reimburse the depositors. If there were fraud against non-depositors as well, that would complicate things. In any event, I think you could ethically keep any money that would otherwise go to equity holders, although it is unlikely this exists. Equity holders accept the risk of their agents (corporate management) going haywire in a way no other stakeholder does.
3
Vardev
I would also ask: how would returning that money change the situation that FTX is in, and help those who have experienced losses from this? It would take a significant amount of money, and without more knowledge of the situation it could be that (a) FTX finds better solutions, or (b) FTXFF accepts the return of that money, but because it is a separate entity from FTX, the money does not reach those who faced losses from FTX in the first place but instead goes back into the wallets of the donor. Just adding more questions/food for thought, as I guess the things I am saying are more practical than moral, but they may affect whether there are any moral obligations.

What if there were a norm in EA of not accepting large amounts of funding unless a third-party auditor of some sort has done a thorough review of the funder's finances and found them to be above-board? Obviously there are lots of variables in this proposal, but I think something like this is plausibly good, and I would be interested to hear pushback.

I disagree with this. I think we should receive money from basically arbitrary sources, but I think that money should not come with associated status and reputation from within the community. If an old mafia boss wants to buy malaria nets, I think it's much better if they can than if they cannot. 

I think the key thing that went wrong was that in addition to Sam giving us money and receiving charitable efforts in return, he also received a lot of status and in many ways became one of the central faces of the EA community, and I think that was quite bad... (read more)

1
Miguel
This is actually the best practice in banks and publicly held corporations...
[anonymous]

I don't know much about how this all works but how relevant do you think this point is?

If Sequoia Capital can get fooled - presumably after more due diligence and apparent access to books than you could possibly have gotten while dealing with the charitable arm of FTX FF that was itself almost certainly in the dark - then there is no reasonable way you could have known.

[Edit: I don't think the OP had included the Eliezer tweet in the question when I originally posted this. My point is basically already covered in the OP now.]

What are the specific things you'd want to see on a transparency page? I think transparency is important, and I try to maintain BERI's transparency page, but I'm wondering if it meets your standards.

1
Cesar Scapella
Hi Sawyer, I looked at your transparency page and I believe it is somewhat satisfactory for people who are familiar with the nonprofit structure. A potential donor who is totally unfamiliar with the organization, and who does not live in the US, may find it difficult to navigate and understand. For example: people outside the US (and probably some people living in the US) may not be satisfied with IRS 990 filings, as they may not know how to interpret that information (myself included) or how much importance to give it (in the context of transparency). There are other documents, for example the "ByLaws", whose importance for the transparency of an organization a non-US person (or a US person unfamiliar with such documents) cannot judge. All of this is not exactly a criticism, especially if your organization is only focused on a US audience for donations and contributions. As for the annual reports, I think it is a positive sign that they contain a lot of information. I would suggest, though, that the page would be friendlier and more transparent if some key information were summarized in a neat table at the top: salary of each team member, total donations received per month or year, how much was spent (and on what), etc., so that someone from outside could get an overall idea of what is going on before digging into those more technical and dense PDFs. A note: it is probably in the annual reports or 990 files, but I couldn't easily find information about team member and director salaries; I think that is too crucial a piece of information to be buried inside PDFs. I know this is a minor thing, but if you take a look at the transparency page of Buffer you will see a good illustration of what I am imagining. I conclude by saying that your page is probably satisfactory for someone well versed in how nonprofits work, their financials, IRS files,

I'd guess the reason this was done for comments first is that posts are much longer and more complicated, such that it's often not clear what "agreeing" with the post even means. I think it's plausibly a good feature for posts, but I think it makes a lot more sense for comments.

It might be tough to implement this in a way that doesn't boost linkposts (which I think would be counter to your purpose).

2
Yonatan Cale
1. I agree.
2. No sorting algorithm is perfect. The relevant question, I think, is whether this would be better than the current algorithm. (Would you prefer using it even though linkposts would rank too high?)
3. With some extra effort, one could solve most of the linkpost problem. Specifically, I think the forum currently supports built-in link posts. Or one could search for "linkpost" or "link post" in the first line. But in practice I would just leave this problem as-is and see if anyone still uses this feed.
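
The first-line search Yonatan mentions is easy to sketch. A minimal illustration in Python follows; the post fields ("title", "body", "is_linkpost") are hypothetical, not the Forum's actual API:

```python
# Minimal sketch of the first-line linkpost filter described above.
# The post fields used here are made up for illustration.

def is_probable_linkpost(post: dict) -> bool:
    """Flag built-in linkposts and posts whose first line mentions 'linkpost'."""
    if post.get("is_linkpost"):  # hypothetical built-in linkpost flag
        return True
    body = post.get("body", "").strip()
    first_line = body.splitlines()[0].lower() if body else ""
    return "linkpost" in first_line or "link post" in first_line

posts = [
    {"title": "Original essay", "body": "A long argument...", "is_linkpost": False},
    {"title": "Shared article", "body": "Linkpost for example.com", "is_linkpost": False},
]
print([p["title"] for p in posts if not is_probable_linkpost(p)])  # ['Original essay']
```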

Love this, great work. I especially appreciate your honest opinions on what mistakes you think you made and how the survey could have been improved. If JERIS continues next year, those thoughts will enable a lot of improvement!

Consider adding the Berkeley Existential Risk Initiative (BERI) to the list, either under Professional Services or under Financial and other material support. Suggested description: "Supports university research groups working to reduce x-risk, by providing them with free services and support."

2
Arepo
Thanks! I've added them now.

Great post. This put words to some vague concerns I've had lately with people valorizing "agent-y" characteristics. I'm agentic in some ways and very unagentic in other ways, and I'm mostly happy with my impact, reputation, and "social footprint". I like your section on not regulating consumption of finite resources: I think that modeling all aspects of a community as a free market is really bad (I think you agree with this, at least directionally).

This post, especially the section on "Assuming that it is low-cost for others to say 'no' to requests"  ... (read more)

Good catch, thanks! I can't find my original quote, so I think this was a recent change. I will edit my post accordingly.

Great points, thanks David. I especially like the compare and contrast between personal connections and academic credentials. I think probably you're more experienced with academia and non-EA philanthropy than I am, so your empirical views are different. But I also think that even if EA is better than these other communities, we should still be thinking about (1) keeping it that way, and (2) maybe getting even less reliant. This is part of what I was saying with:

None of this is unique to EA. While I think EA is particularly guilty of some of these issues,

... (read more)

I think the extent to which "member of the EA community" comes along with a certain way of thinking (i.e. "a lot of useful frames") is exaggerated by many people I've heard talk about this sort of thing. I think ~50% of the perceived similarity is better described as similar ways of speaking and knowledge of jargon. I think there are actually not that many people who have fully internalized new ways of thinking that are (1) very rare outside of EA, and (2) shared across most EA hiring managers.

Another way to put this would be: I think EA hiring managers o... (read more)

Explicitly asking for a reference the head organizer knows personally.

That feels pretty bad to me! I can imagine some reason that this would be necessary for some programs, but in general requiring this doesn't seem healthy.

I find the request for references on the EA Funds application to be a good middle ground. There are several sentences to it, but the most relevant one is:

References by people who are directly involved in effective altruism and adjacent communities are particularly useful, especially if we are likely to be familiar with their work and thi

... (read more)
2
Guy Raveh
I should probably be more precise and say the phrasing was something like "preferably someone who [organizer] knows". But since this is presented as the better option, I don't think I see much difference between the two, as you'd expect the actual filtering process to favour exactly those people in the organiser's network.

Thanks Chi, this was definitely a mistake on my part and I will edit the post. I do think that your website's "Get Involved" -> "CLR Fund" might not be the clearest path for people looking for funding, but I also think I should have spent more time looking.

Thanks for the thoughtful feedback Chris!

I think that the author undervalues value alignment and how the natural state is towards one of regression to the norm unless specific action is taken to avoid this

I think there is a difference between "value alignment" and "personal connection". I agree that the former is important, and I think the latter is often used (mostly successfully) as a tool to encourage the former. I addressed one aspect of this in the Hiring Managers section.

I agree that as EA scales, we will be less able to rely on personal relationshi

... (read more)
2
Guy Raveh
I hadn't thought of your post in these explicit terms till now, but now that you write it like that I remember that indeed I've already applied to a program which explicitly asked for a reference the head organizer knows personally. I was rejected from that program twice, though I obviously can't know if the reason was related, and I may still apply in the future.
2
Chris Leong
Agreed. I was responding to: Although we might be more on the same page than I was thinking as you write: I guess my position is that there may be some people who don't identify with EA who would be really valuable; but it's also the case that being EA is valuable beyond just caring about the mission in that EAs are likely to have a lot of useful frames. I'd be surprised if it changed that fast. Like even if a bunch of additional people joined the community, you'd still know the people that you know.

tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.

I completely agree! I think probably some combination is best, and/or it could differ between subcommunities.

Also thanks for pointing out the FTX Future Fund's experience, I'd forgotten about that. I completely agree that this is evidence against my hypothesis, specifically in the case of grantee-grantor relationships.

Great point about mitigating as opposed to solving. It's possible that my having a "solutions" section wasn't the best framing. I definitely don't think personal connections should be vilified or gotten rid of entirely (if that were even possible), and going too far in this direction would be really bad.

Thanks Stefan! I agree with those strengths of personal connections, and I think there are many others. I mainly tried to argue that there are negative consequences as well, and that the negatives might outweigh the positives at some level of use. Did any of the problems I mentioned in the post strike you as wrong? (Either you think they don't tend to arise from reliance on personal connections, or you think they're not important problems even if they do arise?)

Something that didn't strike me as wrong, but as something worth reflecting more on, is your analysis of the tension between reliance on personal connections and high rates of movement growth. You take this to be a reason for relying on personal connections less, but one may argue it is a reason for growing more slowly.

Another point bearing in mind is that your (correct) observation that many EA orgs do not take general applications may itself be (limited) evidence against your thesis. For example, the Future Fund has made a special effort to test a variet... (read more)
