All of Raemon's Comments + Replies

Proposed Longtermist Flag

Oh man, this is pretty cool. I actually like the fact that it's sort of jagged and crazy.

What I learned from working at GiveWell

This was among the most important things I read recently, thanks! (Mostly via reminding me "geez holy hell it's really hard to know things.")

Mentorship, Management, and Mysterious Old Wizards

That is helpful, thanks. I've been sitting on this post for years and published it yesterday while thinking generally about "okay, but what do we do about the mentorship bottleneck? how much free energy is there?", and "make sure that starting-mentorship is frictionless" seems like an obvious mechanism to improve things.

Dealing with Network Constraints (My Model of EA Careers)

https://forum.effectivealtruism.org/posts/JJuEKwRm3oDC3qce7/mentorship-management-and-mysterious-old-wizards

AMA: Elizabeth Edwards-Appell, former State Representative

In another comment you mention:

(One example would be the high levels of self-censorship required.)

I'm curious what the mechanism underlying the "required-ness" is. i.e. which of the following, or others, are most at play:

  • you'd get voted out of office
  • you'd lose support from your political allies that you need to accomplish anything
  • there are costs imposed directly on you/people-close-to-you (i.e. stress)

A related thing I'm wondering is whether you considered anything like "going out with a bang", where you tried... just not self-censoring, and... probably lo... (read more)

  • you'd get voted out of office

No, not this one. I don't think there was anything I wanted to say that would have been harmful enough to turn the Eye of Sauron(*) upon me.

  • there are costs imposed directly on you/people-close-to-you (i.e. stress)

Nah, any stress would have been a tertiary effect from...

  • you'd lose support from your political allies that you need to accomplish anything

This was the big one. I was already a black sheep when I got voted into office; I had negative amounts of political capital within my party. I had to focus a ton of... (read more)

Morality as "Coordination" vs "Altruism"

The issue isn't just the conflation, but missing a gear about how the two relate.

The mistake I was making, that I think many EAs are making, is to conflate different pieces of the moral model that have specifically different purposes.

Singer-ian ethics pushes you to take the entire world into your circle of concern. And this is quite important. But, it's also quite important that the way that the entire world is in your circle of concern is different from the way your friends and government and company and tribal groups are in your circle of concern.

In part... (read more)

An argument for keeping open the option of earning to save

Just wanted to throw up my previous exploration of a similar topic. (I think I had a fairly different motivation than you – namely I want young EAs to mostly focus on financial runway so they can do risky career moves once they're better oriented).

tl;dr – I think the actual Default Action for young EAs should not be giving 10%, but giving 1% (for self-signalling), and saving 10%. 

Benjamin_Todd (2 points, 1y): It's a good point there could also be good cultural effects from encouraging people to save more, as well as the negatives I mention.

You have more than one goal, and that's fine

I recently chatted with someone who said they've been part of ~5 communities over their life, and that all but one of them was more "real community"-like than the rationalists. So maybe there's plenty of good stuff out there and I've just somehow filtered it out of my life.

Julia_Wise (8 points, 1y): The "real communities" I've been part of are mostly longer-established, intergenerational ones. I think starting a community with almost entirely 20-somethings is a hard place to start from. Of course most communities started like that, but not all of them make it to being intergenerational.

Dealing with Network Constraints (My Model of EA Careers)

Alas, I started writing it and then was like "geez, I should really do any research at all before just writing up a pet armchair theory about human motivation."

I wrote this Question Post to try to get a sense of the landscape of research. It didn't really work out, and since then I... just didn't get around to it.

Dealing with Network Constraints (My Model of EA Careers)

Currently, there's only so many people who are looking to make friends, or hire at organizations, or start small-scrappy-projects together.

I think most EA orgs started out as small scrappy projects that initially hired people they knew well. (I think early-stage GiveWell, 80k, CEA, AI Impacts, MIRI, CFAR and others almost all started out that way – some of them still mostly hire people they know well within the network, some may have standardized hiring practices by now)

I personally moved to the Bay about 2 years ago and shortly thereaft... (read more)

agent18 (1 point, 2y): Very much appreciate the detailed response. I think you have answered both my questions. Very much appreciate the clear example. If there are only 100 jobs in EA per year, it seems unlikely to support 1000s in the way you have suggested (rate-limited). What does a "median EA" look like?

1. He (the median EA) is within the 60th-90th percentile (I am unsure of what – IQ?)
2. In the case of LW, he was able to talk about rationality and the "surrounding ecosystem". If you can, I would really like an example of this.

P.S. I am trying to judge if I could be a potential "median EA", hence the questions. Thanks.

Volunteering isn't free

I expect to want to link this periodically. One thing I could use is clearer survey data about how often volunteering is useful, and when it is useful almost-entirely-for-PR reasons. People are often quite reluctant to think volunteering isn't useful, and will say "My [favorite org] says they like volunteers!". (My background assumption is that their favorite org probably likes volunteers and needs to say so publicly, but primarily because of long-term-keeping-people-engaged reasons. But I haven't actually seen reliable data here)

Announcing the 2019-20 Donor Lottery

I just donated to the first lottery, but FYI I found it surprisingly hard to navigate back to it, or link others to it. It doesn't look like the lottery is linked from anywhere on the site and I had to search for this post to find the link again.

Why and how to start a for-profit company serving emerging markets

The book The Culture Map explores these sorts of problems, comparing many cultures' norms and advising on how to bridge the differences.

In Senegal people seem less comfortable by default expressing disagreement with someone above them in the hierarchy. (As a funny example, I've had a few colleagues who I would ask yes-or-no questions and they would answer "Yes" followed by an explanation of why the answer is no.)

Some advice it gives for this particular example (at least in several 'strong hierarchy' cultures), is instead of a ... (read more)

Does 80,000 Hours focus too much on AI risk?

Tying in a bit with Healthy Competition:

I think it makes sense (given my understanding of the views of the folks at 80k) for them to focus the way they are. I expect research to go best when it follows the interests and assumptions of the researchers.

But, it seems quite reasonable if people want advice for different background assumptions to... just start doing that research, and publishing. I think career advice is a domain that can definitely benefit from having multiple people or orgs involved; it just needs someone to actually step up and do it.

Healthy Competition

Nod. I had "more experimentation" as part of what I meant to imply by "diversity of worldviews" but yeah it's good to have that spelled out.

The Future of Earning to Give

This certainly seems like a viable option. I agree with the pros and cons described here, and think it'd make sense for local groups to decide which one made more sense.

The Future of Earning to Give

My intuition is that the EA Funds are usually a much better opportunity in terms of donation impact than donor lotteries and having one person do independent research themself (instead of relying almost entirely on recommendations)

My background assumption is that it's important to grow the number of people who can work full-time on grant evaluation.

Remember that GiveWell was originally just a few folk doing research in their spare time.

The Future of Earning to Give

My understanding (not confident) is that those people (at least Nick Beckstead) are acting more as advisors or a sanity check (or at least that they aren't the ones putting most of the time into the funds)

The Future of Earning to Give

I also think there's some potential to re-orient the EA pipeline around this concept. If local EA meetups did a collective donor lottery, then even if only one of them ends up allocating the money, they could still solicit help from others to think about it.

My experience is that EA meetups struggle a bit with "what do we actually do to maintain community cohesiveness, given that for many of us our core action is something we do a couple times per year, mostly privately." If a local meetup did a collective donor lottery, then even if only on... (read more)

The Future of Earning to Give

(edit: whoops, responded to wrong comment)

The Future of Earning to Give

My take: rank-and-file EAs (and most EA local communities) should be oriented around donor lotteries.

Background beliefs:

  • I think EA is vetting constrained
  • Much of the direct work that needs doing is network constrained (i.e. requires mentorship, in part to help people gain context they need to form good plans)
  • The Middle of the Middle of the EA community should focus on getting good at thinking.
  • There's only so much space in the movement for direct work, and it's unhealthy to set expectations that direct work is what people are "supposed to be."
... (read more)

What about donor coalitions instead of donor lotteries?

Instead of 50 people putting $2000 into a lottery, you could have groups of 5-10 putting $2000 each into a pot, jointly agreeing where to distribute it.

Pros:

-People might be more invested in the decision, but wouldn't have to do all the research by themselves.

-Might build an even stronger sense of community. The donor coalition could meet regularly before the donation to decide where to give, and meet up after the donation for updates from the charity.

-Avoids the unilateralist's curse.

-Less legally fraug

... (read more)
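To make the arithmetic of the two mechanisms concrete, here's a minimal Python sketch (the donor names and group size are hypothetical, and a real donor lottery uses audited randomness and a legal intermediary rather than a local random draw):

```python
import random

def run_donor_lottery(contributions):
    """Pick one winner, weighted by contribution size; the winner
    allocates the whole pot. Minimal sketch only."""
    donors = list(contributions)
    pot = sum(contributions.values())
    weights = [contributions[d] for d in donors]
    winner = random.choices(donors, weights=weights, k=1)[0]
    return winner, pot

# Donor lottery: 50 people x $2,000 -> each has a 2% chance to direct $100,000.
lottery = {f"donor_{i}": 2_000 for i in range(50)}  # hypothetical donors
winner, pot = run_donor_lottery(lottery)
print(f"{winner} directs ${pot:,}")

# Donor coalition: 5-10 people x $2,000 -> no draw; the ~$10,000-$20,000 pot
# is allocated by joint agreement, so every member stays involved.
coalition_pot = 8 * 2_000  # hypothetical coalition of 8
print(f"coalition jointly directs ${coalition_pot:,}")
```

The pros listed above are essentially about this structural difference: the lottery concentrates all the research effort in one randomly chosen allocator, while the coalition keeps every member involved in a smaller joint decision.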

My intuition is that the EA Funds are usually a much better opportunity in terms of donation impact than donor lotteries and having one person do independent research themself (instead of relying almost entirely on recommendations), unless you think you can do better (according to your own ethical views) than the researchers for each fund. They typically have at least a few years of experience in research in their respective areas, often full-time, they have the time to consider many different neglected opportunities, and they probably get more feedback th... (read more)

Kerry_Vaughan's Shortform

This was quite an interesting point I hadn't considered before. Looking forward to reading more.

Survival and Flourishing Fund grant applications open until October 4th ($1MM-$2MM planned for dispersal)

My understanding is that it's currently focused on nonprofits (in large part because it's much more logistically and legally complicated to send money to individuals)

Denkenberger (5 points, 2y): I understand that they cannot make grants to individuals, but researchers in academia are part of a university, which is a charity (501(c)(3) in the US). So legally it would be the same as a focused charity, but it is still a question of whether they would be likely to make a grant like that.

Effective Altruism and Everyday Decisions

Believing that my time is really valuable can lead to me making more wasteful decisions. Decisions like: "It is totally fine for me to buy all these expensive ergonomic keyboards simultaneously on Amazon and try them out, then throw away whichever ones do not work for me." Or "I will buy this expensive exercise equipment on a whim to test out. Even if I only use it once and end up trashing it a year later, it does not matter."
...
The thinking in the examples above worries me. People are bad at reasoning about when to make exceptions to r
... (read more)
Leverage Research: reviewing the basic facts

Just wanted to say I super appreciated this writeup.

Thanks Raemon :-) I'm glad it was helpful.

'Longtermism'

I suspect the goal here is less to deconfuse current EAs and more to make it easier to explain things to newcomers who don't have any context.

(It also seems like good practice to me for people in leadership positions to keep people up to date about how they're conceptualizing their thinking)

Milan_Griffes (-3 points, 2y): Basically agree about the first claim, though the Forum isn't really aimed at EA newcomers. Eh, some conceptualizations are more valuable than others. I don't see how six paragraphs of Will's latest thinking on whether to hyphenate "longtermism" could be important to stay up-to-date about.

I find this forum increasingly difficult to navigate

Quick note that if you set All Posts to "sort by new" instead of "sort by Daily" there'll be 50 posts. (The Daily view is a bit weird because it varies a lot depending on forum traffic that week)

Extinguishing or preventing coal seam fires is a potential cause area

I don't have much to contribute but I appreciated this writeup – I like it when EAs explore cause areas like this.

Aaron Gertler (6 points, 2y): I especially appreciate the "causal story" section of the post! I'm not sure I fully believe the explanation*, but it's always good to propose one, rather than handwaving away the reasons that a good cause would be so neglected (an error I frequently see outside of EA, and occasionally in EA-aligned work on other new cause areas).

*The part that rings truest to me is "no ready channels for donation". Ignorance seems more likely than deliberate neglect; I can picture many large environmental donors being asked about coal seam fires and reacting with "huh, never thought about it" or "is that actually a problem?"

I find this forum increasingly difficult to navigate

For the record I'm someone who works on the forum and thought the OP was expressed pretty reasonably.

I find this forum increasingly difficult to navigate

Strong upvoted mostly to make it easier to find this comment.

Raemon's EA Shortform Feed

The Middle of the Middle of the funnel is specifically people who I expect to not yet be very good at volunteering, in part because they're either young and lacking some core "figure out how to be helpful and actually help" skills, or they're older and busier with day jobs that take a lot of the same cognitive bandwidth that EA volunteering would require.

I think the *End* of the Middle of the funnel is more of where "volunteer at EA orgs" makes sense. And people in the Middle of the Middle who think they have the "figure out how to be helpful and help" property should do so if they're self-motivated to. (If they're not self motivated they're probably not a good volunteer)

Raemon's EA Shortform Feed

My claim is just that "volunteer at an org" is not a scalable action that it makes sense to be a default thing EA groups do in their spare time. This isn't to say volunteers aren't valuable, or that many EAs shouldn't explore that as an option, or that better coordination tools to improve the situation shouldn't be built.

But I am a bit more pessimistic about it – the last time I checked, many of the times someone had said "huh, it looks like there should be all this free labor available by passionate people, can't w... (read more)

Raemon's EA Shortform Feed

Membranes

A membrane is a semi-permeable barrier that things can enter and leave, but it's a bit hard to get in and a bit hard to get out. This allows them to store negentropy, which lets them do more interesting things than their surroundings.

An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle-of-the-funnel to the end, rather than the beginning-of-the-funnel to the middle. ... (read more)

Raemon's EA Shortform Feed

Notes from a "mini talk" I gave to a couple people at EA Global.

Local EA groups (and orgs, for that matter) need leadership, and membranes.

Membranes let you control who is part of a community, so you can cultivate a particular culture within that community. They can involve barriers to entry, or actively removing people or behaviors that harm the culture.

Leadership is necessary to give that community structure. A good leader can make a community valuable enough that it's worth people's effort to overcome the barriers to entry, and/or maintain that barrier.

Raemon (2 points, 2y): Membranes

A membrane is a semi-permeable barrier that things can enter and leave, but it's a bit hard to get in and a bit hard to get out. This allows them to store negentropy, which lets them do more interesting things than their surroundings.

An EA group that anyone can join and leave at a whim is going to have relatively low standards. This is fine for recruiting new people. But right now I think the most urgent EA needs have more to do with getting people from the middle-of-the-funnel to the end, rather than the beginning-of-the-funnel to the middle. And I think helping the middle requires a higher expectation of effort and knowledge.

(I think a reasonably good mixed strategy is to have public events maybe once every month or two, and then additional events that require some kind of effort on the part of members)

What happens inside the membrane?

  • First, you meet some basic standards for intelligence, good communication, etc. The basics you need in order to accomplish anything on purpose.
  • As noted elsewhere [https://forum.effectivealtruism.org/posts/DdJNQvvS3SdrEYnKS/raemon-s-ea-shortform-feed#2SQBjQCHWvaihmzxZ], I think EA needs to cultivate the skill of thinking (as well as gaining agency). There are a few ways to go about this, but all of them require some amount of "willingness to put in extra effort and work." Having a space where people have the expectation that everyone there is interested in putting in that effort is helpful for motivation and persistence.
  • In time, you can develop conversation norms that foster better-than-average thinking and communication. (i.e. make sure that admitting you were wrong is rewarded rather than punished)

Membranes can work via two mechanisms:

  • Be more careful about who you let in, in the first place
  • Be willing to invest effort in giving feedback, or being willing to expel people from the group.

The first option is easier. Giving feedback and expelling people is quite costly, and…

Raemon's EA Shortform Feed

Part of the problem is there are not that many volunteer spots – even if this worked, it wouldn't scale. There are communities and movements that are designed such that there's lots of volunteer work to be done, such that you can provide 1000 volunteer jobs. But I don't think EA is one of them.

I've heard a few people from orgs express frustration that people come to them wanting to volunteer, but this feels less like the orgs receive a benefit, and more like the org is creating a training program (at cost to themselves) to provide a benefit to the volunteers.

Denkenberger (2 points, 2y): I agree that EA does not have 1000 volunteer jobs. However, here [https://forum.effectivealtruism.org/posts/MYth4Ju4kbfHmJRbA/remote-volunteering-opportunities-in-effective-altruism] is a list of some possibilities. I know ALLFED [http://www.allfed.info/] could still effectively utilize more volunteers.

Raemon's EA Shortform Feed

Updated the thread to just serve as my shortform feed, since I got some value out of the ability to jot down early stage ideas.

Raemon's EA Shortform Feed

I'm not yet sure that I'll be doing this for more than 3 months, so I think it makes more sense to focus on generating value in that time.

nonzerosum (1 point, 2y): Gotcha. I wonder whether it could create substantially more impact if you ran it over the long term yourself, or set it up well for someone else to run long-term. Obviously I have no context on the project and your goals, but I've seen cases where people do a short-term project aiming for impact and in the end feel that they could've created much more impact by doing the thing in a more ongoing manner. So this note may or may not be relevant depending on the project and your goals :)

Raemon's EA Shortform Feed

I think the actions that EA actually needs to be involved with doing also require figuring things out and building a deep model of the world.

Meanwhile... "sufficiently advanced thinking looks like doing", or something. At the early stages, running a question hackathon requires just as much ops work and practice as running some other kind of hackathon.

I will note that the default mode where rationalists or EAs sit around talking and not doing is a problem, but often that mode, in my opinion, doesn't actually rise to the level of "thinking for real." Thinking for real is real work.

Moses (2 points, 2y): Hmm, it's not so much the classic rationalist trait of overthinking that I'm concerned about. It's more like…

First, when you do X, the brain has a pesky tendency to learn exactly X. If you set out to practice thinking, the brain improves at the activity of "practicing thinking". If you set out to achieve something that will require serious thinking, you improve at serious thinking in the process. Trying to try [https://www.lesswrong.com/posts/WLJwTJ7uGPA5Qphbp/trying-to-try] and all that. So yes, practice thinking, but you can't let your brain know that that's what you're trying to achieve.

Second, "thinking for real" sure is work, but the next question is: is this work worth doing? When you start with some tangible end goal and make plans by working your way backwards to where you are now, that informs you what thinking work needs to be done, decreasing the chance that you'll waste time on producing research which looks nice and impressive and all that, but in the end doesn't help anyone improve the world.

I guess if you come up with technology [https://www.lesswrong.com/posts/ZvjYRmkTfWxhTXCaT/lw2-0-technology-platform-for-intellectual-progress-1] that allows people to plug into the world-saving-machine at the level of "doing research-assistant-kind-of-work for other people who know what they're doing" and gradually work their way up to "being one of the people who know what they're doing", that would make this work. You wouldn't be "practicing thinking"; you could easily convince your brain that you're actually trying to achieve something in the real world, because you could clearly follow some chain of sub-sub-agendas to sub-agendas to agendas and see that what you're working on is for real. And, by the same token, you'd be working on something that (someone believes) needs to be done. And maybe sometimes you'd realize that, no, actually, this whole line of reasoning can be cut out or de-prioritized, here's why, etc.—and that's how you'd gradually grow…

Raemon's EA Shortform Feed

So I actually draw an important distinction within "mid-level EAs" – there are three stages:

"The beginning of the Middle" – once you've read all the basics of EA, the thing you should do is... read more things about EA. There's a lot to read. Stand on the shoulders of giants.

"The Middle of the Middle" – ????

"The End of the Middle" – Figure out what to do, and start doing it (where "it" is probably some kind of ambitious project).

An important facet of the Middle of the Middle is that peopl... (read more)

Moses (4 points, 2y): Ah. This seems to me like two different problems:

Some people lack, as you say, agency. This is what I was talking about—they're looking for someone to manage them.

Other people are happy to do things on their own, but they don't have the necessary skills and experience, so they will end up doing something that's useless in the best case and actively harmful in the worst case. This is a problem which I missed before but now acknowledge. Normally I would encourage practicing doing (or, ideally, you know, doing) rather than practicing thinking, but when doing carries the risk of harm, thinking starts to seem like a sensible option. Fair enough.

Denkenberger (2 points, 2y): How about volunteering for an EA org?

Is preventing child abuse a plausible Cause X?

I didn't write a top level post but I sketched out some of the relevant background ideas here. (I'm not sure if they answer your particular concerns, but you can ask more specific questions there if you have them)

Raemon's EA Shortform Feed

Integrity, Accountability and Group Rationality

I think there are particular reasons that EA should strive, not just to have exceptionally high integrity, but exceptionally high understanding of how integrity works.

Some background reading for my current thoughts includes habryka's post on Integrity and my own comment here on competition.

Elityre (7 points, 2y): What about Paul's Integrity for Consequentialists [https://forum.effectivealtruism.org/posts/CfcvPBY9hdsenMHCr/integrity-for-consequentialists]?

Raemon's EA Shortform Feed

A few reasons I think competition is good:

  • Diversity of worldviews is better. Two research orgs might develop different schools of thought that lead to different insights. This can lead to more ideas as well as avoiding the tail risks of bias and groupthink.
  • Easier criticism. When there's only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn't get done at all. Multiple orgs can allow people to think more freely ab
... (read more)
Raemon's EA Shortform Feed

Competition in the EA Sphere

A few years ago, EA was small, and it was hard to get funding to run even one organization. Spinning up a second one with the same focus area might have risked killing the first one.

By now, I think we have the capacity (financial, coordinational, and human-talent) for that to be less of a risk. Meanwhile, I think there are a number of benefits to having more, better, friendly competition.

I'm interested in chatting with people about the nuts and bolts of how to apply this.

A few reasons I think competition is good:

  • Diversity of worldviews is better. Two research orgs might develop different schools of thought that lead to different insights. This can lead to more ideas as well as avoiding the tail risks of bias and groupthink.
  • Easier criticism. When there's only one org doing A Thing, criticizing that org feels sort of like criticizing That Thing. And there may be a worry that if the org lost funding due to your criticism, That Thing wouldn't get done at all. Multiple orgs can allow people to think more freely ab
... (read more)
Raemon's EA Shortform Feed

Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:

  • I currently believe the longterm value of EA is not in scaling up donations to well vetted charities. This is because vetting charities is sort of anti-inductive. If things are going well (and I think this is quite achievable – it only really takes a couple billionaires to care) charities should get vetted and then quickly afterwards get enough funding. This means the only leftover charities will not be well vetted.
    • So the longterm Earn-to-Give options are:
      • A
... (read more)
Raemon's EA Shortform Feed

Mid-level EA communities, and cultivating the skill of thinking

I think a big problem for EA is not having a clear sense of what mid-level EAs are supposed to do. Once you've read all the introductory content, but before you're ready to tackle anything real ambitious... what should you do, and what should your local EA community encourage people to do?

My sense is that grassroots EA groups default to "discuss the basics; recruit people to give money to GiveWell-esque charities and sometimes weirder things; occasionally run EAGx conferences; gi... (read more)

Moses (7 points, 2y): Funny—I think a big problem for EA is mid-level EAs looking over their shoulders for someone else to tell them what they're supposed to do.

Moses (1 point, 2y): I'll take your invitation to treat this as an open thread (I'm not going to EAG). Why not tackle less ambitious goals?

Raemon (7 points, 2y): Some background thoughts on why I think the middle of the EA talent funnel should focus on thinking:

  • I currently believe the longterm value of EA is not in scaling up donations to well vetted charities. This is because vetting charities is sort of anti-inductive. If things are going well (and I think this is quite achievable – it only really takes a couple billionaires to care) charities should get vetted and then quickly afterwards get enough funding. This means the only leftover charities will not be well vetted.
    • So the longterm Earn-to-Give options are:
      • Actually becoming pretty good at vetting organizations and people
      • Joining donor lotteries (where you still might have to get good at thinking if you win)
      • Donating to GiveDirectly (which is maybe actually fine but less exciting)
  • The world isn't okay because the problems it faces are actually hard. You need to understand how infrastructure plugs together. You need to understand incentives and unintended consequences. In some cases you need to actually solve unsolved philosophical problems. You need object-level domain expertise in whatever field you're trying to help with.
  • I think all of these require a general thinking skill that is hard to come by and really needs practice.

(Writing this is making me realize that maybe part of what I wanted with this thread was just an opportunity to sketch out ideas without having to fully justify every claim)

Raemon's EA Shortform Feed

Grantmaking and Vetting

I think EA is vetting constrained. It's likely that I'll be involved with a new experimental grant allocation process. There are a few key ingredients here that are worth discussing:

  • Meta Process design. I have some thoughts on designing good grantmaking processes (at the meta level), and I'm interested in hearing from others about what seem like important process elements.
  • Evaluation approach. I haven't done (much) evaluation before, and would be interested in talking to people about what makes for good evaluation
... (read more)
Nicole_Ross (3 points, 2y): Hey Raemon - I run the EA Grants program at CEA. I'd be happy to chat! Email me at nicole.ross@centreforeffectivealtruism.org if you want to arrange a time.

nonzerosum (1 point, 2y): I'd offer that whatever you can do to make it possible to iterate on your grantmaking loop quickly will be useful. Perhaps start with smaller grants on a monthly or even weekly cycle, run a few rounds there, and then scale up. Don't try to make it near-perfect from the start; instead, try to make it something that can become near-perfect through iterations and improvements.

Halffull (3 points, 2y): I won't be at EAG, but I'm in Berkeley for a week or so and would love to chat about this.

There's Lots More To Do

I think if you've read Ben's writings, it's obvious that the prime driver is about epistemic health.

anonymous_ea (8 points, 2y): I don't feel inclined to get into this, but FWIW I have read a reasonable amount of Ben's writings on both EA and non-EA topics, and I do not find it obvious that his main, subconscious motivation is epistemic health rather than a need to reject EA.