All of vipulnaik's Comments + Replies

Understanding Open Philanthropy's evolution on migration policy

Good point. My understanding is that Open Phil made a general decision to focus only on US policy for most of their policy areas, because there are high fixed costs to getting familiar with a policy space. In some areas, like animal welfare, they've gone beyond US policy, but those are areas where they are spending far more money.

Their grants to Labor Mobility Partnerships stand out as not being US-specific, though LaMP currently still focuses more on the US.

I do expect that if there are shovel-ready, easy-to-justify opportunities outside the US, Open Phil would take them.

2BrownHairedEevee8mo
For what it's worth, the Center for Global Development [https://www.cgdev.org/topics/migration-displacement-humanitarian-policy] and Migration Policy Institute [https://www.migrationpolicy.org/programs/programs] do work on policy advocacy outside the United States.
You can now apply to EA Funds anytime! (LTFF & EAIF only)

Hello! I'm wondering what implications the switch to rolling applications has for how payout reports are published. https://funds.effectivealtruism.org/funds/far-future#payout-reports doesn't include anything beyond April 1, 2021. Previously there were three reports per year, tied to the (discrete) grant rounds.

5Jonas Vollmer8mo
We will continue to publish payout reports ~3 times per year. There have been a number of delays with the more recent payout reports, but several funds expect to publish them within a few days/weeks.
Wikipedia editing is important, tractable, and neglected

Hi Darius!

I appreciate that you've raised this issue and provided a reasonably thorough discussion of it. I would like to highlight several aspects based on my experience editing Wikipedia, as well as studying its culture in some depth. While the paid editing phase and the subsequent fallout partly inform my views, these views are mostly based on several years of experience before (and some after) that incident.

While none of what I say falsifies what you wrote, it is in tension with some of your tone and emphasis. So in some ways these observations are criti... (read more)

1gwern10d
One downside you don't mention: having a Wikipedia article can be a liability when editors are malicious, for all the reasons it is a benefit when it is high-quality, like its popularity and mutability. A zealous attacker or deletionist destroying your article for jollies is bad, but at least it merely undoes your contribution and you can mirror it; an article being hijacked (which is what a real attacker will do) can cause you much more damage than you would ever have gained, as it creates a new reality which will echo everywhere.

My (unfortunately very longstanding) example of this is the WP article on cryonics [https://en.wikipedia.org/wiki/Cryonics]: you will note that the article is surprisingly short for a topic on which so much could be said, and reads like it's been barely touched in half a decade. Strikingly, while having almost no room for any information on minor topics like how cryonics works, how current cryonics orgs operate, the background on why it should be possible in principle, or remarkable research findings like the progress on bringing pigs back from the dead, the introduction, and an entire section, instead harp on how corporations go bankrupt, how it is unlikely that a corporation today will be around in a century, how ancient pre-1973 cryonics companies have all gone bankrupt, and so on.

These claims are mostly true, but you will then search the article in vain for any mention that the myriad of cryonics bankruptcies alluded to is more like 2 or 3 companies; that cryonics for the past 50 years isn't done solely by corporations precisely because of that: when it became apparent that cryonics was going to need to be a long-term thing and families couldn't be trusted to pay, arrangements were structured as trusts (the one throwaway comma mentioning trusts is actively misleading by implying that they are optional and unusual, rather than the status quo); and that there have been few or no bankruptcies or known defrostings since. All attempts to get any of th
Wikipedia editing is important, tractable, and neglected

Hi Linch! I have a loose summary of my sponsored Wikipedia editing efforts at https://vipulnaik.com/sponsored-wikipedia-editing/ that I have just updated to include more information and links.

For third-party coverage of the incident, check out https://web.archive.org/web/20170625001549/http://en.kingswiki.com/wiki/Vipulgate -- I'm linking to Wayback Machine since that wiki seems to no longer exist; also a warning that the site's general viewpoints are redpill, which might be a dealbreaker for some readers. But this particular article seems reasonably well... (read more)

Announcing an updated drawing protocol for the EffectiveAltruism.org donor lotteries

It looks like the NIST randomness beacon will be back in time for the draw date of the lottery. https://www.nist.gov/programs-projects/nist-randomness-beacon says "NIST will reopen at 6:00 AM on Monday, January 28, 2019."

Might it make sense to return to the NIST randomness beacon for the drawing?

In defence of epistemic modesty

The comments on naming beliefs by Robin Hanson (2008) appear to be where the consensus around the impressions/beliefs distinction began to form (the commenters include such movers and shakers as Eliezer and Anna Salamon).

Also, impression track records by Katja (September 2017) is a recent blog post, circulated in the rationalist community, that revived the terminology.

6Pablo5y
Thanks for drawing our attention to that early Overcoming Bias post. But please note that it was written by Hal Finney, not Robin Hanson. It took me a few minutes to realize this, so it seemed worth highlighting lest others fail to appreciate it. Incidentally, I've been re-reading Finney's posts over the past couple of days and have been very impressed. What a shame that such a fine thinker is no longer with us. ETA: Though one hopes this is temporary [https://www.alcor.org/blog/hal-finney-becomes-alcors-128th-patient/].
Introducing fortify hEAlth: an EA-aligned charity startup

Against Malaria Foundation was started by a guy who had some business and marketing experience but no global health chops. It is now a GiveWell top charity.

https://issarice.com/against-malaria-foundation

https://timelines.issarice.com/wiki/Timeline_of_Against_Malaria_Foundation

Disclosure: I funded the creation of the latter page, which inspired the creation of the former.

Why & How to Make Progress on Diversity & Inclusion in EA

I'm not sure why you brought up the downvoting in your reply to my reply to your comment, rather than replying directly to the downvoted comment. To be clear, though, I did not downvote the comment, ask others to downvote the comment, or hear from others saying they had downvoted the comment.

Also, I could (and should) have been clearer that I was focusing only on points that I didn't see covered in the post, rather than providing an exhaustive list of points. I generally try to comment with marginal value-add rather than reiterating things already mentione... (read more)

1MichaelPlant5y
Sorry. That was a user error.
Why & How to Make Progress on Diversity & Inclusion in EA

I tried to avoid things that have already been discussed heavily and publicly in the community, and I think the math/philosopher angle is one that is often mentioned in the context of EA not being diverse enough. The post itself notes:

"""people who are both that and young, white, cis-male, upper middle class, from men-dominated fields, technology-focused, status-driven, with a propensity for chest-beating, overconfidence, narrow-picture thinking/micro-optimization, and discomfort with emotions."""

This is also mentioned in the pos... (read more)

2MichaelPlant5y
I'm really not sure why my comment was so heavily downvoted without explanation. I'm assuming people think discussion of inclusion issues is a terrible idea. Assuming that is what I've been downvoted for, that makes me feel disappointed in the online EA community and increases my belief this is a problem. I think this may be part of the problem in this context. Some EAs seem to take the attitude (I'm exaggerating a bit for effect) that if there was a post on the internet about it once, it's been discussed. This itself is pretty unwelcoming and exclusive, and it penalises people who haven't been in the community for multiple years or haven't spent many hours reading around internet posts. My subjective view is that this topic is under-discussed relative to how much I feel it should be discussed.
Why & How to Make Progress on Diversity & Inclusion in EA

"I take your point that skews can happen, but it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards straight, white dudes."

(1) Where did "straight" come into this picture? The author says that EAs are well-represented on sexual diversity (and maybe even overrepresented on some fairly atypical sexual orientations), and my comment (and the data I used) had nothing to say about sexual orientation?

(2) """it seems a bit suspicious to me that desire to be effective and al... (read more)

Why & How to Make Progress on Diversity & Inclusion in EA

I find it interesting that most of the examples given in the article conform to mainstream, politically correct opinion about who is and isn't overrepresented. A pretty similar article could be written about e.g. math graduate students, with almost exactly the same list of overrepresented and underrepresented groups. In that sense it doesn't seem to get to the core of what unique blind spots or expansion problems EA might have.

An alternate perspective would be to look at minorities, subgroups, and geographical patterns that are way overrepresented in EAs relative ... (read more)

1MichaelPlant5y
I think about this a different way. I think it weird, given there's so much mainstream discussion of inclusion, that it hasn't seemed to penetrate into EA. That makes EA the odd one out. Hence it might be good to identify the generic blindspots, even if we haven't yet honed in on EA-specific ones. I think your approach of looking for over-represented people is interesting and promising. What I find surprising is that you didn't zero in on the most obvious one, which is that EA is really heavily weighted with philosophers and maths-y types, such as software engineers.

I can see 1-3 being problems to some extent (and I don't think Kelly would disagree)... but "overrepresentation of vegetarians and vegans"?? You might as well complain about an overrepresentation of people who donate to charity.

Why & How to Make Progress on Diversity & Inclusion in EA

You report EA as being 70% male. How unusual is that as a skew? One comparison point, for which data is readily available, is the readerships of websites that are open to read (no entry criteria, no member fees). Looking at the distribution of such websites, 70% seems to be at the relatively low end of skew. For instance, Politico and The Hill, politics news sites, see 70-75% male audiences (https://www.quantcast.com/politico.com#demographicsCard and https://www.quantcast.com/thehill.com#demographicsCard) whereas nbc.com, a mainstream TV, entertainment, and ... (read more)

2MichaelPlant5y
I take your point that skews can happen, but it seems a bit suspicious to me that desire to be effective and altruistic should be so heavily skewed towards white dudes. Edit: I previous said "straight white dudes" but removed the "straight". See below.
2Lila5y
Politics is rarely used as an example of a positive environment for women. It's not just the actual numbers that are concerning (though I disagree with you that a 70% skew can be brushed off). It's the exclusionary behavior within EA.
8Chris Leong5y
Obligatory SlateStarCodex post for the graphs: http://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/ [http://slatestarcodex.com/2017/08/07/contra-grant-on-exaggerated-differences/] "We can relax the Permanent State Of Emergency around too few women in tech, and admit that women have the right to go into whatever field they want, and that if they want to go off and be 80% of veterinarians and 74% of forensic scientists, those careers seem good too."
Effective Altruism Grants project update

Thanks for the detailed post, Roxanne! I am a little confused by the status of the recipients and the way these grants are treated by recipients from an accounting/tax perspective.

First off, are all the grants made to individuals only, or are some of them made to corporations (such as nonprofits)? Your spreadsheet lists all the recipients as individuals, but the descriptions of the grants suggest that in at least some cases, the money is actually going to an organization that is (probably) incorporated. Three examples: Oliver Habryka for LessWrong 2.0 (whi... (read more)

1ricoh_aficio5y
Some of them are going to nonprofits and other institutions, yes. This wasn't something we'd considered publishing, and I'm not sure what, if any, privacy concerns this could raise. If there's a good case for doing so, I'm happy to consider adding that information. Unfortunately, in cases where we paid individuals directly, they do have to treat the payments as personal income. We might have been able to avoid this in some cases by giving the money as scholarships, although as far as I'm aware this would have been a big hassle to set up. It's on the table for future rounds if it seems worth the setup cost. In four of five cases the money went to an institution with which the recipient will coordinate multi-person distribution. In the fifth case the money went directly to an individual who had yet to designate the other recipient, so we gave them the full amount to distribute themselves.
Effective Altruism Grants project update

The figure has now gone from 20,000 to 200,000. Is that what you intended? My crude calculation yields a number closer to 20,000 than 200,000.

0ricoh_aficio5y
Sloppy editing; thanks for the catch. It should actually be fixed now.
Why donate to 80,000 Hours

I'm following up regarding this :).

Is EA Growing? Some EA Growth Metrics for 2017

The subreddit stats used to be public (or rather, moderators could choose to make them public) but that option was removed by Reddit a few months ago.

https://www.reddit.com/r/ModSupport/comments/6atvgi/upcoming_changes_view_counts_users_here_now_and/

I discussed Reddit stats a little bit in this article: https://www.wikihow.com/Understand-Your-Website-Traffic-Variation-with-Time

Is EA Growing? Some EA Growth Metrics for 2017

I have been using PredictionBook for recording predictions related to GiveWell money moved; see http://effective-altruism.com/ea/xn/givewell_money_moved_in_2015_a_review_of_my/#predictions-for-2016 for links to the predictions. Unfortunately searching on PredictionBook itself does not turn up all the predictions because they use Google, which does not index all pages or at least doesn't surface them in search results.

Changes to the EA Forum

Do you foresee any changes being made to the moderation guidelines on the forum? Now that CEA's brand name is associated with it, do you think that could mean forbidding the posting of content that is deemed "not helpful" to the movement, similar to what we see on the Effective Altruists Facebook group?

If there are no anticipated changes to the moderation guidelines, how do you anticipate CEA navigating reputational risks from controversial content posted to the forum?

8Julia_Wise5y
The main reason moderation on the Facebook group works the way it does is that the group has 13000+ members and no ability to downvote, so the ratio of signal to noise would be pretty sad if there were no screening. It's very rare that the Facebook group moderators screen out a post for being harmful - almost everything that we screen out is because it's not relevant enough. With the Forum, everyone can upvote and downvote, so content that readers find most interesting and relevant gets sorted up to the top that way. There's also a karma threshold to make a post (though we can help newcomers with that if they ask.) So I don't have the same worry about the front page becoming mostly noise. We still expect to enforce the standards of discussion on the Forum, described in the FAQ ("Spam, abuse and materials advocating major harm or illegal activities are deleted.") But in general we expect that people don't take everything posted on the Forum to represent CEA's view.
Update on Effective Altruism Funds

Thanks again for writing about the situation of the EA Funds, and thanks also to the managers of the individual funds for sharing their allocations and the thoughts behind it. In light of the new information, I want to raise some concerns regarding the Global Health and Development fund.

My main concern about this fund is that it's not really a "Global Health and Development" fund -- it's much more GiveWell-centric than global health- and development-centric. The decision to allocate all fund money to GiveWell's top charity reinforces some of my c... (read more)

2Kerry_Vaughan5y
Hey Vipul, thanks for taking the time to write this. I think I largely agree with the points you've made here. As we've stated in the past, the medium-term goal for EA Funds is to have 50% or less of the fund managers be Open Phil/GiveWell staff. We haven't yet decided whether we would plan to add fund managers in new cause areas, add fund managers with different approaches in existing cause areas, or some combination of the two. Given that Global Health and Development has received the most funding, there is likely room for adding funds that take a different approach to funding the space. Personally, I'd be excited to see something like a high-risk, high-reward global health and development fund. I probably disagree with changing the name of the fund right now, as I think the current name does a good job of making it immediately clear what the fund is about. Because the UI of EA Funds shows you all the available funds and lets you split between them, we chose names that make it clear what each fund is about as compared to the other funds. If we added a fund that was also in Global Health and Development, then it might make sense to change the current name of the Global Health and Development fund to make it clear how the two funds are distinct from one another. By the way, if you know of solid thinkers in Global Health and Development funding who are unaffiliated with GiveWell, please feel free to email their names to me at kerry@effectivealtruism.org.
Update on Effective Altruism Funds

I appreciate the information being posted here, in this blog post, along with all the surrounding context. However, I don't see the information on these grants on the actual EA Funds website. Do you plan to maintain a grants database on the EA Funds website, and/or list all the grants made from each fund on the fund page (or linked to from it)? That way anybody can check in at any time to see how much money has been raised, and how much has been allocated and where.

The Open Philanthropy Project grants database might be a good model, though your needs may differ somewhat.

7Kerry_Vaughan5y
We have an issue with our CMS which is making the grant information not show up on the website. I will include these grants and all future grants as soon as that is fixed.

Commenting here to avoid a misconception that some readers of this post might have. I wasn't trying to "spread effective altruism" to any community with these editing efforts, least of all the Wikipedia community (it's also worth noting that the Wikipedia community that participates in these debates is basically disjoint from the people who actually read those specific pages in practice -- many of the latter don't even have Wikipedia accounts).

Some of the editing activities were related to effective altruism in these two ways: (1) The pages we e... (read more)

1AlasdairGives5y
I've deleted the post because I would like to make one on this issue with greater subtlety and nuance, to do the complex topic of this saga better justice than my rather late-night post did. Thanks for your comment; I will take it into account.
Some Thoughts on Public Discourse

Great points! (An upvote wasn't enough appreciation, hence the comment as well).

Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission)

Hi Dony,

The submission doesn't qualify as serious, and was past the deadline. So we won't be considering it.

1adamaero5y
Perhaps next time have a due date that falls at midnight or at 11:59 something. I too missed the deadline. Or maybe put one word before 12 PM: noon.
Some Thoughts on Public Discourse

One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague:

http://www.openphilanthropy.org/blog/radical-empathy got 1 comment (a question that hasn't been answered)

http://www.openphilanthropy.org/blog/worldview-diversification got 1 comment (a single sentence praising the post)

http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing got 6 comments

http://blog.givewell.org/2016/12/22/front-loading-personal-giving... (read more)

Just my rough impression, but I find that controversial or flawed posts get comments, whereas posts that make a solid, concrete, well-argued point tend to not generate much discussion. So I don't think this is a good measure for the value of the post to the community.

8RomeoStevens5y
Thinking about what to call this phenomenon, because it seems like an important aspect of discourse: namely, making no claims but only distinctions, which generates no arguments. There was a distinct flavor of this to Superintelligence, I think intentionally, to create a framework within which to have a dialog absent the usual contentious claims. This was good for that particular use case, but I think that deployed indiscriminately it leads to a kind of big-tent approach inimical to real progress. I think it is potentially the right thing for OpenPhil to currently be doing, since they are first trying to figure out how the world actually is, with pilot grants and research methodology testing etc. Good to not let it infect your epistemology permanently though. Suggested counter-force: an internal non-public betting market.
Some Thoughts on Public Discourse

(4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility: You keep alluding to costs of publishing your work more clearly, yet you give no examples of how such costs have negatively affected Open Phil, or of the specific monetary, emotional, or other damages you have incurred (this is related to (1), where I am critical of your frustrating vagueness). This vagueness makes your claims about the risks of openness frustrating to evaluate in your case.

As a more general claim about being public, though, your claim strike... (read more)

Some Thoughts on Public Discourse

(3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative.

"By "public discourse," I mean communications that are available to the public and that are primarily aimed at clearly describing one's thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc."

If you exclude from public discourse any benefits pertaining to fundraising, advocacy, and promotion, then you are essentially stackin... (read more)

Some Thoughts on Public Discourse

(2) Overstated connotations of expertise with respect to the value of transparency and openness:

"Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits - more below). I'm aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public... (read more)

6vipulnaik5y
One point to add: the frustratingly vague posts tend to get FEWER comments than the specific, concrete posts.

From my list, the posts I identified as clearly vague:

http://www.openphilanthropy.org/blog/radical-empathy got 1 comment (a question that hasn't been answered)

http://www.openphilanthropy.org/blog/worldview-diversification got 1 comment (a single sentence praising the post)

http://www.openphilanthropy.org/blog/update-how-were-thinking-about-openness-and-information-sharing got 6 comments

http://blog.givewell.org/2016/12/22/front-loading-personal-giving-year/ got 8 comments

In contrast, the posts I identified as sufficiently specific (even though they tended toward the fairly technical side):

http://blog.givewell.org/2016/12/06/why-i-mostly-believe-in-worms/ got 17 comments

http://blog.givewell.org/2017/01/04/how-thin-the-reed-generalizing-from-worms-at-work/ got 14 comments

http://www.openphilanthropy.org/blog/initial-grants-support-corporate-cage-free-reforms got 27 comments

http://blog.givewell.org/2016/12/12/amf-population-ethics/ got 7 comments

If engagement is any indication, then people really thirst for specific, concrete content. But that's not necessarily in contradiction with Holden's point, since his goal isn't to generate engagement. In fact comments engagement can even be viewed negatively in his framework because it means more effort necess
Some Thoughts on Public Discourse

Thank you for the illuminative post, Holden. I appreciate you taking the time to write this, despite your admittedly busy schedule. I found much to disagree with in the approach you champion in the post, that I attempt to articulate below.

In brief: (1) Frustrating vagueness and seas of generality in your current post and recent posts, (2) Overstated connotations of expertise with regards to transparency and openness, (3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative, (4... (read more)

Thanks for the thoughts, Vipul! Responses follow.

(1) I'm sorry to hear that you've found my writing too vague. There is always a tradeoff between time spent, breadth of issues covered, and detail/precision. The posts you hold up as more precise are on narrower topics; the posts you say are too vague are attempts to summarize/distill views I have (or changes of opinions I've had) that stem from a lot of different premises, many hard to articulate, but that are important enough that I've tried to give people an idea of what I'm thinking. In many cases their ... (read more)

3vipulnaik5y
(4) Repeatedly shifting the locus of blame to external critics rather than owning up to responsibility: You keep alluding to costs of publishing your work more clearly, yet you give no examples of how such costs have negatively affected Open Phil, or of the specific monetary, emotional, or other damages you have incurred (this is related to (1), where I am critical of your frustrating vagueness). This vagueness makes your claims about the risks of openness frustrating to evaluate in your case.

As a more general claim about being public, though, your claim strikes me as misguided. The main obstacle to writing up stuff for the public is just that writing stuff up takes a lot of time, but this is mostly a limitation on the part of the writer. The writer doesn't have a clear picture of what he or she wants to say. The writer does not have a clear idea of how to convey the idea clearly. The writer lacks the time and resources to put things together. Failure to do this is a failure on the part of the writer. Blaming readers for continually trying to misinterpret their writing, or carrying out witch hunts, is simply failing to take responsibility.

A more humble framing would highlight this fact, and some of its difficult implications, e.g.: "As somebody in charge of a foundation that is spending ~$100 million a year and recommending tens of millions in donations by others, I need to be very clear in my thinking and reasoning. Unfortunately, I have found that it's often easier and cheaper to spend millions of dollars in grants than write up a clear public-facing document on the reasons for doing so. I'm very committed to writing publicly where it is possible (and you can see evidence of this in all the grant writeups for Open Phil and the detailed charity evaluations for GiveWell). However, there are many cases where writing up my reasoning is more daunting than signing off on millions of dollars in money. I hope that we are able to figure out better approaches to reducing the
7vipulnaik5y
(3) Artificially filtering out positive reputational effects, then claiming that the reputational effects of openness are skewed negative.

"By "public discourse," I mean communications that are available to the public and that are primarily aimed at clearly describing one's thinking, exploring differences with others, etc. with a focus on truth-seeking rather than on fundraising, advocacy, promotion, etc."

If you exclude from public discourse any benefits pertaining to fundraising, advocacy, and promotion, then you are essentially stacking the deck against public discourse -- now any reputational or time-sink impacts are likely to be negative.

Here's an alternate perspective. Any public statement should be thought of both in terms of the object-level points it is making (specifically, the information it is directly providing or what it is trying to convince people of), and secondarily in terms of how it affects the status and reputation of the person or organization making the statement, and/or their broader goals. For instance, when I wrote http://effective-altruism.com/ea/15o/effective_altruism_forum_web_traffic_from_google/ my direct goal was to provide information about web traffic to the Effective Altruism Forum and what the patterns tell us about effective altruism movement growth, but an indirect goal was to highlight the value of using data-driven analytics, and in particular website analytics, something I've championed in the past. Whether you choose to label the public statement as "fundraising", "advocacy", or whatever, is somewhat beside the point.
8vipulnaik5y
(2) Overstated connotations of expertise with respect to the value of transparency and openness:

"Regardless of the underlying reasons, we have put a lot of effort over a long period of time into public discourse, and have reaped very little of this particular kind of benefit (though we have reaped other benefits - more below). I'm aware that this claim may strike some as unlikely and/or disappointing, but it is my lived experience, and I think at this point it would be hard to argue that it is simply explained by a lack of effort or interest in public discourse."

Your writing makes it appear like you've left no stone unturned to try every approach at transparency and confirmed that the masses are wanting. But digging into the facts suggests support for a much weaker conclusion. Which is: for the particular approach that GiveWell used and the particular kind of content that GiveWell shared, the people who responded in ways that made sense to you and were useful to you were restricted to a narrow pool. There is no good reason offered for why these findings would generalize to domains or expository approaches other than the ones you've narrowly tried at GiveWell. This doesn't mean GiveWell or Open Phil is obligated to try new approaches -- but it does suggest more humility in making claims about the broader value of transparency and openness.

There is a wealth of ways that people seek to make their work transparent. Public projects on GitHub make details about both their code evolution and contributor list available by default, without putting any specific effort into it, because of the way the system is designed. This pays off to different extents for different kinds of projects: in some cases, there are a lot of issue reports and bugfixes from random strangers; in many others, nobody except the core contributors cares. In some, malicious folks find vulnerabilities in the code because it's so open. If you ran a few projects on GitHub and observed something ab
Changes in funding in the AI safety field

I appreciate posts like this -- they are very helpful (and would be more so if I were thinking of donating money or contributing in kind to the topic).

How Should I Spend My Time?

"So if I could be expected to work 4380 hours over 2016-2019, earn $660K (95%: $580K to $860K) and donate $160K, that’s an expected earnings of $150.68 per hour worked. [...] I consider my entire earnings to be the altruistic value of this project."
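The per-hour figure in the quoted passage is simple division; a quick sketch using the point estimates from the quote (expected earnings of $660K over 4380 hours, before any adjustment for taxes):

```python
# Per-hour expected earnings, using the point estimates quoted above.
expected_earnings = 660_000  # dollars over 2016-2019 (point estimate from the quote)
hours_worked = 4_380         # hours worked over 2016-2019

earnings_per_hour = expected_earnings / hours_worked
print(f"${earnings_per_hour:.2f} per hour worked")  # → $150.68 per hour worked
```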

What about taxes?

0Peter Wildeford6y
Yeah, that's a good point, since it scales with my income. I should include that in the model.
Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

The post does raise some valid concerns, though I don't agree with a lot of the framing. I don't think of it in terms of lying. I do, however, see that the existing incentive structure is significantly at odds with epistemic virtue and truth-seeking. It's remarkable that many EA orgs have held themselves to reasonably high standards despite not having strong incentives to do so.

In brief:

  • EA orgs' and communities' growth metrics are centered around numbers of people and quantity of money moved. These don't correlate much with epistemic virtue.
  • (more specul
... (read more)
5atucker6y
I suspect that a crux of the issue about the relative importance of growth vs. epistemic virtue is whether you expect most of the value of the EA community to come from the novel insights and research that it does, or from moving money to the things that are already known about.

In the early days of EA I think that GiveWell's quality was a major factor in getting people to donate, but I think that the EA movement is large enough now that growth isn't necessarily related to rigor -- the largest charities (like the Salvation Army or the YMCA) don't seem to be particularly epistemically rigorous at all.

I'm not sure how closely the marginal EA is checking claims, and I think that EA is now mainstream enough that more people don't experience strong social pressure to justify it.
2TruePath6y
The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness. No human behaves like some kind of Spock stereotype, making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements, and we make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways that make us feel bad or guilty for saying no, or when we forget that we are even doing it (annual credit card billing).

If EA charities insist on cultivating donations only in circumstances where donors are best equipped to make a careful judgement, e.g., eschewing 'Give Now' impulse donations and fundraising parties with liquor and peer pressure, and insist on reminding us each time another donation is about to be deducted from our account, they will lose out on a huge amount of donations. Worse, because of the role of overhead in charity work, the lack of sufficient donations will actually make such charities bad choices.

Moreover, there is nothing morally wrong with putting your organization's best foot forward or using standard charity/advertising tactics. Despite the joke, it's not morally wrong to make a good first impression. If there is a trade-off between reducing suffering and improving epistemic virtue, there is no question which is more important, and if that requires implying they are highly effective, so be it.

I mean, it's important that charities are incentivized to be effective, but imagine if the law required every charitable solicitation to disclose the fraction of donations that went into fundraising and overhead. It's unlikely that the resulting increase in effectiveness would make up for the huge losses from forcing people to face the unpleasant fact that even the best charities can only send a fraction of their donation to the intended beneficiaries.

One bit of progress on this front is Open Phil and GiveWell starting to make public and private predictions related to grants to improve their forecasting about outcomes, and create track records around that.

There is significant room for other EA organizations to adopt this practice in their own areas (and apply it more broadly, e.g. regarding future evaluations of their strategy, etc).

I believe the incentive alignment is strongest in cases where you are talking about moving moderate to large sums of money per donor in the present, for a reasonable numb

... (read more)
9kbog6y
I like your thoughts and agree with reframing it as epistemic virtue generally instead of just lying. But I think EAs are always too quick to think about behavior in terms of incentives and rational action, especially when talking about each other. Almost no one around here is rationally selfish, some people are rationally altruistic, and most people are probably some combination of altruism, selfishness and irrationality. But here people are thinking that it's some really hard problem where rational people are likely to be dishonest, and so we need to make it rational for people to be honest, and so on. We should remember all the ways that people can be primed or nudged to be honest or dishonest. This might be a hard aspect of an organization to evaluate from the outside, but I would guess that it's at least as internally important as the desire to maximize growth metrics.

For one thing, culture is important. Who is leading? What is their leadership style? I'm not in the middle of all this meta stuff, but it's weird (coming from the Army) that I see so much talk about organizations but I don't think I've ever seen someone even mention the word "leadership."

Also, who is working at EA organizations? How many insiders and how many outsiders? I would suggest that ensuring that a minority of an organization is composed of identifiable outsiders or skeptical people would compel people to be more transparent just by making them feel like they are being watched. I know that some people have debated various reasons to have outsiders work for EA orgs -- well, here's another thing to consider.

I don't have much else to contribute, but all you LessWrong people who have been reading behavioral econ literature since day one should be jumping all over this.
Tell us how to improve the forum

I haven't been able to successfully log in to EAF from my phone (which is a pretty old Windows Mobile phone, so it might be something unique to it). That probably increases the number of pageviews I generate for EAF, because I revisit on desktop to leave a comment :).

Individual Project Fund: Further Details

Great to hear about this, Jacob! As somebody who funds a lot of loosely similar activities in the "EA periphery" I have some thoughts and experience on the challenges and rewards of funding. Let me know if you'd like to talk about it.

You can get a list of stuff I've funded at https://contractwork.vipulnaik.com

Effective Altruism Forum web traffic from Google Analytics

Thanks, I added the explication of the acronym at the beginning.

Effective Altruism Forum web traffic from Google Analytics

You can get data on the Facebook group(s) using tools like http://sociograph.io -- however, they can take a while to load all the data. A full analysis of that data would be worth another post.

2Peter Wildeford6y
That would be really cool to see!
Risk-neutral donors should plan to make bets at the margin at least as well as giga-donors in expectation

Some people in the effective altruist community have argued that small donors should accept that they will use marginal charitable dollars less efficiently than large actors such as Open Phil, for lack of time, skill, and scale to find and choose between charitable opportunities. Sometimes this is phrased as advice that small donors follow GiveWell's recommendations, while Open Phil pursues other causes and strategies such as scientific research and policy.

The argument that I have heard is a little different. It is that the entry of big players like Ope... (read more)

6CarlShulman6y
This sounds like a fine topic for another post to me. Re diminishing marginal returns, note that some of the change is Open Phil finding areas that it tentatively guesses [http://www.openphilanthropy.org/blog/good-ventures-and-giving-now-vs-later-2016-update] may be higher return than GiveWell top charities. That affords room for increase in expected returns via research and allocation (I discussed the balance of improved knowledge with diminishing returns above). For donors who were already trying to pursue low-hanging fruit in areas OpenPhil hadn't reached, advantage has declined some, but not collapsed due to the continued presence of scale diseconomies discussed in the post (such as organizational independence and nonmonetary costs).
Why donate to 80,000 Hours

As further evidence, a survey of meta-charity donors carried out by Open Phil and 80,000 Hours found that they expect to give about £4.5m this year, and not all will go to meta-charities. Given that CEA is aiming to raise £2.5m-£5m alone, the capacity of meta-charity donors is going to be used up this year. This means we need new meta-charity donors, or good meta opportunities will go unfunded.

Is there more information about this survey currently available, and/or are there plans to release more information? This is the first I am hearing about the survey, and it sounds like something that deserves standalone coverage.

2Benjamin_Todd6y
Hi Vipul, I was planning to write up the results, but haven't been able to fit it in yet. Most of the information is confidential, so it needs some care.
What is the expected value of creating a GiveWell top charity?

Thanks for updating the post! I still see the somewhat outdated sentence:

For example, a fifth top charity would likely lead Good Ventures to make an additional incentive grant of $2.5M that they would not have otherwise made.

Since GiveWell now has seven top charities, that should read "eighth" rather than "fifth".

0Peter Wildeford6y
Thanks, I now fixed that typo as well!
What is the expected value of creating a GiveWell top charity?

Your estimates could probably benefit a bit more by explicitly incorporating the 2016 top charity recommendations as well as information released in GiveWell's blog post about the subject. In particular:

  • Good Ventures is expected to donate $50 million to GiveWell top charities (+ special recognition charities) and is likely to allocate a similar amount for the next few years. This should be incorporated into estimation of total annual money moved (mostly in terms of reducing variance).

Due to the growth of the Open Philanthropy Project this year and its

... (read more)
1Peter Wildeford6y
Thanks Vipul! It was drafted prior to the announcement, but I did update it after the announcement to incorporate 2016 work.

- I did see that fact and that's the number I used in the model, though I now see that I missed updating the $1K figure in the post. I have now corrected that typo, but doing so does not affect any of the downstream calculations in the post.
- This is a good point and one I did miss from reading the post. I'll revise the model to fix the number at $50M, though I suppose there still is uncertainty about whether they will waver from this commitment in the future or make special grants (like they did with GD and have reportedly considered for AMF). The model is now updated on this as of now - thanks! - and I'll write a disclaimer at the end of the post as soon as I'm done clearing up Owen's contention.
What the EA community can learn from the rise of the neoliberals

Your post is yeoman's work and much appreciated.

There are a few areas where your reading of history seems to differ from mine, as well as a bunch of key distinctions that I believe should have made it into a piece of this length.

First, I think the piece gives too much credit to, and puts too much focus on, Hayek as an intellectual architect of neoliberalism. Hayek's work was influential, as was his impact on Fisher, but I don't think Hayek is treated as a blueprint for neoliberalism.

The significant focus on Hayek is coupled with a lack of focus on the ke... (read more)
