All of Dunja's Comments + Replies

Thanks for writing this. The suggested criticism of debate is as old as debate itself, and in addition to the reasons you list here, I'd add the *epistemic* benefits of debating.

Competitive debating allows for the exploration of the argumentative landscape of the given topic in all its breadth (from the preparation to the debating itself). That means it allows for the formulation of the best arguments for either side, which (given all the cognitive biases we may have) may be hard to come by in a non-competitive context. As a result, debate is a le... (read more)

Update: this is all the more important in view of common ways one may accidentally cause harm by trying to do good, which I've just learned about through DavidNash's post. As the article points out, having an informed opinion of experts and a dense network with them can decrease the chances of harmful impacts, such as reputational harm or locking in on suboptimal choices.

Thanks for the explanation, Lewis. In order to make the team as robust to criticism and as reliable as possible, wouldn't it be better to have a diverse team, one that also includes critics of ACE? That would send the right message to the donors as well as to anyone taking a closer look at EA organizations. I think it would also benefit ACE, since its researchers would have an opportunity to work directly with their critics.

4
LewisBollard
5y
Thanks for your feedback and question, Dunja, and thanks for your patience while I was traveling. I agree that the Fund benefits from having a diverse team, but disagree that criticism of ACE is the right kind of ideological diversity. Both Toni and Jamie bring quite different perspectives on how to most cost-effectively help animals within an EA framework (see, for instance, the charities they're excited about here). The Fund won't be funding ACE now that they're on board, and my guess is that we'll continue to mostly fund smaller unique opportunities, rather than ACE top or standout charities. So I don't think people's views on ACE will be especially relevant to our giving picks here. I see less value in bringing in critics of EA, as many (though not all) of ACE's critics are, since we'd have trouble reaching a consensus on funding decisions. Instead, I encourage those who are skeptical of EA views or the groups we fund to donate directly to effective animal groups they prefer.

That should always depend on the project at hand: if the project is primarily in a specific domain of AI research, then you need reviewers working precisely in that particular domain of AI; if it's in ethics, then you need experts working in ethics; if it's interdisciplinary, then you try to get reviewers from the respective fields. This also shows that it will be rather difficult (if not impossible) to have an expert team competent to evaluate each candidate project. Instead, the team should be competent at selecting adequate expert reviewers (similar... (read more)

Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is very well aware of (I have been a strong critic of EAF policy and implementation of research grants, including those directed at MIRI and FRI).

My main worry is that evaluating grants aimed at research cannot be done without having them assessed by expert researchers in the given domain, that is, people who have a proven track record in the given field of research. I think the best way to see why this matters is to take any other scientific ... (read more)

I'd be curious to hear an explanation of how the team for the Long-Term Future Fund was selected. If they are expected to evaluate grants, including research grants, how do they plan to do that, what qualifies them for this job, and, in case they are not qualified, which experts do they plan to invite on such occasions?

From their bio page I can't see which of them should count as an expert in the field of research (and in view of which track record), which is why I am asking. Thanks!

2
Evan_Gaensbauer
5y
What would you say qualifies as expertise in these fields? It's ambiguous, because it's not like universities are offering Ph.D.'s in 'Safeguarding the Long-Term Future.'
7
matt
5y
Hi Dunja, I'm Matt Fallshaw, Chair of the fund. This response is an attempt to be helpful, but I'm not entirely sure what, in answer to your question, would qualify as a qualification; perhaps it's relevant that I've been following the field for over 10 years, I've been an advisor to MIRI (I joined their Board of Directors in 2014 (a position I recently had to give up) and currently spend approaching half of my time working on MIRI projects) and I'm an advisor to BERI. I chose the expert team (in consultation with Marek Duda), and I chose them for (among other things) their intelligence, knowledge and connections (to both advisors and likely grantee orgs or individuals). We absolutely do intend to consult with experts (including Nick and Jonas, our listed advisors, and outside experts) when we don't feel that we have enough knowledge ourselves to properly assess a grant. Our connections span multiple continents and (when we don't feel qualified ourselves) we will choose advisors relevant to each grant we consider. … I'm not sure whether that response is going to be satisfying, so feel free to clarify your question and I'll try again.

These are good points, and unless the area is well established enough that initial publications come from bigger names (who would thereby help to establish the journal), it'll be hard to realize the idea.

What could be done at this point, though, is to have an online page that collects/reports on all the publications relevant to cause prioritization, which may help the field grow.

I agree that journal publications certainly allow for a rise in quality due to the peer-review system. In principle, there could even be a mixed platform with an (online) journal + a blog which (re)posts stuff relevant to the topic (e.g. posts made on this forum that are relevant to the topic of cause prioritization).

My main question is: is there anyone on here who's actually actively doing research on this topic and who could comment on the absence of an adequate journal, as argued by kbog? I don't have any experience with this domain, but if more peop... (read more)

Thanks, Benito, that sums it up nicely!

It's really about the transparency of the criteria, and that's all I'm arguing for. I am also open to changing my views on the standard criteria, etc. - I just want us to start the discussion with some rigor concerning how best to assess effective research.

As for my papers - crap, it's embarrassing that I've linked paywalled versions. I have them on my Academia page too, but I guess those can also only be accessed within that website... I'll have to think of a proper free solution here. But in any case: please don't feel oblige... (read more)

Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate.

Sure! Which is why I've been exchanging arguments with you.

Oh, there have been numerous articles, in your field, claimed by you.

Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/

That's all well and good, but it should be clear why peop

... (read more)
0
kbog
6y
And, therefore, you would be wise to treat Open Phil in the same manner, i.e. something to disagree with, not something to attack as not being Good Enough for EA. It means that you haven't argued your point with the sufficient rigor and comprehensiveness that is required for you to convince every reasonable person. (no, stating "experts in my field agree with me" does not count here, even though it's a big part of it) Other people have discussed and linked Open Phil's philosophy, I see no point in rehashing it.

While I largely agree with your idea, I just don't understand why you think that a new space would divide people who aren't on this forum to begin with anyway. Like I said, 70% on here are men. So how are you gonna attract more non-male participants? This topic may be unrelated, but let's say we find out that the majority of non-males have preferences that would better align with a different type of venue. Isn't that a good enough reason to initiate it? Why would that be conflicting, rather than complementary, with this forum?

1
kbog
6y
I stated the problems in my original comment. The same ways that we attract male participants, but perhaps tailored more towards women. It depends on the "different type of venue." Because it may entail the problems that I gave in my original comment.

Oh no, this is not just a matter of opinion. There are numerous articles written in the field of philosophy of science aimed precisely at determining which criteria help us to evaluate promising scientific research. So there is actually quite some scholarly work on this (and it is a topic of my research, as a matter of fact).

So yes, I'd argue that the situation is disturbing, since an immense amount of money is going into research for which there is no good reason to suppose that it is effective or efficient.

-1
kbog
6y
Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate. Oh, there have been numerous articles, in your field, claimed by you. That's all well and good, but it should be clear why people will have reasons for doubts on the topic.

Right, and I agree! But here's the thing (which I haven't mentioned so far, so maybe it helps): I think some people just don't participate in this forum much. For instance, there is a striking gender imbalance (I think more than 70% on here are men) and while I have absolutely no evidence to correlate this with near/far-future issues, I wouldn't be surprised if it's somewhat related (e.g. there are not so many tech-interested non-males in EA). Again, this is just speculation. And perhaps it's worth a shot to try an environment that will feel safe for those who are put off by AI-related topics/interests/angles.

0
kbog
6y
Absofuckinglutely, so let's not make that problem worse by putting them into their own private Discord. As I said at the start, this is creating the problem that it is trying to solve. EA needs to adhere to high standards of intellectual rigor, therefore it can't fracture and make wanton concessions to people who feel emotional aversion to people with a differing point of view. The thesis that our charitable dollars ought to be given to x-risk instead of AMF is so benign and impersonal that it beggars belief that a reasonable person will feel upset or unsafe upon being exposed to widespread opinion in favor of it. Remember that the "near-term EAs" have been pushing a thesis that is equally alienating to people outside EA. For years, EAs of all stripes have been saying to stop giving money to museums and universities and baseball teams, that we must follow rational arguments and donate to faraway bed net charities which are mathematically demonstrated to have the greatest impact, and (rightly) expect outsiders to meet these arguments with rigor and seriousness; for some of these EAs to then turn around and object that they feel "unsafe", and need a "safe space", because there is a "bubble" of people who argue from a different point of view on cause prioritization is damningly hypocritical. The whole point of EA is that people are going to tell you that you are wrong about your charitable cause, and you shouldn't set it in protective concrete like faith or identity.

OK, you aren't anonymous, so that's even more surprising. I gave you earlier examples of your rude responses, but it doesn't matter, I'm fine going on.

My impression of bias is based on my experience on this forum and on observations of posts critical of far-future causes. I don't have any systematic study on this topic, so I can't provide you with evidence. It is just my impression, based on my personal experience. But unfortunately, no empirical study on this topic, concerning this forum, exists, so the best we currently have are personal experiences. M... (read more)

1
kbog
6y
I'm not referring to that, I'm questioning whether talking about near-term stuff needs to be anywhere else. This whole thing is not about "where can we argue about cause prioritization and the flaws in Open Phil," it is about "where can we argue about bed nets vs cash distribution". Those are two different things, and just because a forum is bad for one doesn't imply that it's bad for the other. You have been conflating these things in this entire conversation. The basic premise here, that you should have experience with conversations before opining about the viability of having such a conversation, is not easy to communicate with someone who defers to pure skepticism about it. I leave that to the reader to see why it's a problem that you're inserting yourself as an authority while lacking demonstrable evidence and expertise.

Again: you are missing my point :) I don't care if it's their money or not, that's beside my point.

What I care about is: are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?

Otherwise, it makes no sense to label them as an organization that conforms to the standards of EA, at least in the case of such practices.

Subjective, unverifiable, etc. have nothing to do with such standards (= conducive to effective & efficient scientific research).

0
kbog
6y
As I stated already, "We can presume that formal, traditional institutional funding policies would do better, but it is difficult to argue that point to the level of certainty that tells us that the situation is "disturbing". Those policies are costly - they take more time and people to implement." It is, in short, your conceptual argument about how to do EA. So, people disagree. Welcome to EA. It has something to do with the difficulty of showing that a group is not conforming to the standards of EA.

But in many contexts this may not be the case: as I've explained, I may profit from reading some discussions, which is a kind of engagement. You've omitted that part of my response. Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain (in which, as philosophers, they've never participated). Knowledge-of doesn't necessarily have to be knowledge obtained by object-level engagement in the given field.

1
kbog
6y
OK, sure. But when I look at conversations about near term issues on this forum I see perfectly good discussion (e.g. http://effective-altruism.com/ea/xo/givewells_charity_recommendations_require_taking/), and nothing that looks bad. And the basic idea that a forum can't talk about a particular cause productively merely because most of them reject that cause (even if they do so for poor reasons) is simply unsubstantiated and hard to believe in the first place, on conceptual grounds. This kind of talk has a rather mixed track record, actually. (source: I've studied economics and read the things that philosophers opine about economic methodology)

Right, we are able to - but that doesn't mean we cannot form arguments. Since when do arguments exist only if we can be absolutely certain about something?

As for my suggestion: unfortunately, and as I've said above, there is a bubble in the EA community concerning far-future prioritization, which may overshadow and repel some who are interested in other topics. In the ideal context of rational discussion, your points would hold completely. But we are talking here about a very specific context where a number of biases are already entrenched an... (read more)

1
kbog
6y
But they'll be unsubstantiated. You don't have to be certain, just substantiated. It may be, or it may not be. Even if so, it's not healthy to split groups every time people dislike the majority point of view. "It's a bubble and people are biased and I find it repulsive" is practically indistinguishable from "I disagree with them and I can't convince them". Again, this is unsupported. What biases? What's the evidence? Who is put off? Etc. my IRL identity is linked via the little icon by my username. I don't know what's rude here. I'm saying that you need to engage with on a topic before commenting on the viability of engaging on it. Yet this basic point is being met with appeals to logical fallacies, blank denial of the validity of my argument, insistence upon the mere possibility and plausible deniability of your position. These tactics are irritating and lead to nowhere, so all I can do is restate my points in a slightly different manner and hope that you pick up the general idea. You're perceiving that as "rude" because it's terse, but I have no idea what else I can say.

Like I mentioned above, I may be interested in reading focused discussions on this topic and chipping in when I feel I can add something of value. Reading alone brings a lot on forums/discussion channels.

Moreover, I may assess how newcomers with a special interest in these topics may benefit from such a venue. Your reduction of a meta-topic to one's personal experience of it is a non-sequitur.

1
kbog
6y
I didn't reduce it. I only claim that it requires personal experience as a significant part of the picture.

I'm recommending that you personally engage before judging it with confidence.

But why would I? I might be fond of reading about certain causes from those who are more knowledgeable about them than I am. My donation strategies may profit from reading such discussions. And yet I may engage where my expertise lies. This is why I really can't make sense of your recommendation (which was originally an imperative, in fact).

This kind of burden-of-proof-shifting is not a good way to approach conversation. I've already made my argument.

I haven't seen a... (read more)

0
kbog
6y
First, because you seem to be interested in 'talking about near-future related topics and strategies". And second, because it will provide you with firsthand experience on this topic which you are arguing about. In above comments, I write "It's hard to judge the viability of talking about X when you haven't talked about X", and "I'm not sure what you're really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging."

Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc, etc. There's still a difference between speculation and argument.

Could you please explain what you are talking about here, since I don't see how this is related to what you quote me saying above? Of course, there is a difference between speculation and argument, and arguments may still include a claim that's expressed in a modal way. So I don't really understand how this is challenging what I have said :-/

different venues are fine, they must simply be split among legitimate lines (like light c

... (read more)
0
kbog
6y
The part where I say "it's POSSIBLE to talk about it" relates to your claim "we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues", and the part where I say "bias MAY exist" relates to your claim "the fact that measuring bias is difficult doesn't mean bias doesn't exist." Your suggestion to the OP to only host conversation about "[projects that] improve the near future" is the same distinction of near-term vs long-term, and therefore is still the wrong way to carve up the issues, for the same reasons I gave earlier.

Again, you are missing the point: my argument concerns the criteria in view of which projects are assessed as worthy of funding. These criteria exist and are employed by various funding institutions across academia. I haven't seen any such criteria (and the justification thereof, such that they are conducive to effective and efficient research) in this case, which is why I've raised the issue.

we're willing to give a lot of money to wherever it will do the most good in expectation.

And my focus is on: which criteria are used/should be used in order to decid... (read more)

0
kbog
6y
Open Phil has a more subjective approach, others have talked about their philosophy here. That means it's not easily verifiable to outsiders, but that's of no concern to Open Phil, because it is their own money.

Civil can still be unfriendly, but hey, if you aren't getting it, it's fine.

It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.

If it was clear, why would I ask? There's your lack of friendliness in action. And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally engaged in discussing them in that context. The burden of proof is on you: if you want to make an argument, you have to p... (read more)

0
kbog
6y
Yes, you can, technically, in theory. I'm recommending that you personally engage before judging it with confidence. This kind of burden-of-proof-shifting is not a good way to approach conversation. I've already made my argument. What part of it doesn't make sense? I honestly don't see how it's not clear, so I don't know how to make it clearer. They can, I'm just saying that it will be pretty unreliable.

I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:

But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Can you please explain what you are suggesting here? How is this conflicting with my interest in near-future related topics? I have a hard time understa... (read more)

0
kbog
6y
I don't know of any less confrontational/unfriendly way of wording those points. That comment is perfectly civil. It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X. Look, it's right there in the original comment - "talking about near-future related topics and strategies". I don't know how else I can say this.

(1) I think it is standard practice for peer review to be kept anonymous,

The problem wasn't the reviewer being anonymous, but the lack of access to the report.

(2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context,

Sure, but that doesn't mean no criteria should be available.

(3) you're just looking at one grant out of all that Open Phil has done,

Indeed, I am concerned with one extremely large grant. I find the sum large enough to warrant concern, especially since the same ca... (read more)

0
kbog
6y
Open Phil gave $5.6MM to Berkeley for AI, even though Russell's group is new and its staff/faculty are still fewer than the staff of MIRI. They gave $30MM to OpenAI. And $1-2MM to many other groups. Of course EAs can give more to a particular group; that's because we're EAs, we're willing to give a lot of money to wherever it will do the most good in expectation.

First, I disagree with your imperatives concerning what one should do before engaging in criticism. That's a non-sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones and at the same time we can have a genuine interest in reading the object-related issues. I am genuinely interested in reading about near-future improvement topics, while being genuinely interested in voicing opinion on all kinds of meta issues, especially those that are closely related to my own research topics.

Second, the fact that measu... (read more)

0
kbog
6y
Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc, etc. There's still a difference between speculation and argument. different venues are fine, they must simply be split among legitimate lines (like light chat vs serious chat, or different specific causes; as I stated already, those are legitimate ways to split venues). Splitting things along illegitimate lines is harmful for reasons that I stated earlier in this thread.

No worries! Thanks for that, and yes, I agree pretty much with everything you say here. As for the discussion on far-future funding, it did start in the comments on my post, but it led to no practical changes in terms of the transparency of the criteria proposed for assessing funded projects. I'll try to write a separate, more general post on that.

My only point was that due to the high presence of "far-future bias" on this forum (I might be wrong, but much of downvoting-without-commenting seems to be at least a tendency towards bias... (read more)

0
kbog
6y
It's extremely hard to identify bias without proper measurement/quantification, because you need to separate it from actual differences in the strength of people's arguments, as well as legitimate expression of a majority point of view, and your own bias. In any case, you are not going to get downvoted for talking about how to reduce poverty. I'm not sure what you're really worried about. At some point you have to accept that no discussion space is perfect, that attempts to replace good ones usually turn out to be worse, and that your time is better spent focusing on the issues. But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.

Wow, you really seem annoyed... I didn't expect such a pissed-off post, but I suppose you got really annoyed by this thread or something. I provided the arguments in detail concerning OpenPhil's practices in a post from a few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.

I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc., I don't have the time. I plan on writing a post concerning EAF's funding policy as well, where I'll sum it up in a similar wa... (read more)

1
kbog
6y
I assumed your post to be more of a nominal attempt to disagree with me than it really was, so the failure of some of its statements to constitute specific rebuttals of my points became irritating. I've edited my comment to be cleaner. I apologize for that. Okay, and if we look at that post, we see some pretty complete and civil responses to your arguments. Seems like things are Working As Intended. I am responding to some of your claims in that thread so that it gets collected in the right place. But going back to the conversation here, you seem to be pretty clear that it is possible to have effective and efficient science funding, even if Open Phil isn't doing it right. Plus, you're only referring to Open Phil/EAF, not everyone else who supports long term causes. So clearly it would be inappropriate for long term EA causes to be separated. We can push for political change at the national or international level, we can grow the EA movement, or do animal advocacy. Those are known and viable far-future cause areas, even if they don't get as much attention under that guise.

This is a nice idea, though I'd like to suggest some adjustments to the welcome message (also in view of kbog's worries discussed above). Currently the message begins with:

"(...) we ask that EAs who currently focus on improving the far future not participate. In particular, if you currently prioritize AI risks or s-risks, we ask you not participate."

I don't think it's a good idea to select participants in a discussion according to what they think or do (it pretty much comes down to an Argumentum ad Hominem fallacy). It would be better to specify w... (read more)

I like this suggestion - personally I feel a lot of uncertainty about what to prioritize, and given that a portion of my donations go to near-term work I'd enjoy taking part in discussion about how to best do that, even if I'm also seriously considering whether to prioritize long-term work. But I'd be totally happy to have the topic of that space limited to near-term work.

Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren't following your lines of reasoning (unfortunately, of course). For instance, the community practices related to the far-future focus (in particular AI risks) include the assessment of scientific research and the funding thereof, which I find lacking in scientific rigor, transparency and overall validity (to the point that it makes no sense to speak of "effective" charity). Moreover, there is a large c... (read more)

1
kbog
6y
Well the main point of my comment is that people should not reinforce wrong practices by institutionalizing them. What is it when money goes to Givewell or Animal Charity Evaluators? Funding scientific research. Don't poverty interventions need research? Animal advocacy campaigns? Plant-based meat? Is it only the futurists who are doing everything wrong when numerous complaints have been lodged at the research quality of Givewell and ACE? Well I haven't claimed that the evaluation of futurist scientific research is rigorous, transparent or valid. I think you should make a compelling argument for that in a serious post. Telling us that you failed to persuade groups such as Open Phil and the EAF doesn't exactly show us that you are right. Note: it's particularly instructive here, as we evaluate the utility of the sort of segregation proposed by the OP, how the idea that EA ought to be split along these lines is bundled with the assertion that the Other Side is doing things "wrong"; we can see that the nominally innocuous proposal for categorization is operationalized to effect the general discrediting of those with an opposing point of view, which is exactly why it is a bad thing. Just think of the press reporting on us doing exactly the same thing as everyone else in science? If you are worried about bad press, the #1 thing you should avoid is trying to kick up the social divisions that would give them something actually juicy to report on. Where is this criticism? Where are the arguments on cause prioritization? Where is the review of the relevant academic literature? Where is the quantitative modeling? I see people complain that their "criticisms" aren't being met, but when I look for these criticisms, the search for the original source bottoms out either in sparse lines of assertions in web comments, or quite old arguments that have already been accepted and answered, and in either case opponents are clearly ready and willing to engage with such criticism. Th

Oh damn :-/ I was just gonna ask for the info (been traveling and could reply only now). That's really interesting, is this info published somewhere online? If not, it would maybe be worthwhile to make a post on this here and discuss both the reasons for the predominantly male community, as well as ideas for how to make it more gender-balanced.

I'd be very interested in possible relations between the lack of gender balance and the topic of representation discussed in another recent thread. For instance, it'd be interesting to see whether non-male EAs find the forum insufficiently focused on causes which they find more important, or largely focused on issues that they do not find as important.

1
Peter Wildeford
6y
We haven't posted a gender breakdown by group yet. I can see if there may be ways to follow this up as part of our forthcoming 2018 EA Survey work.

Thanks a lot for writing this up - it's nice to get some info on this literature. I didn't quite get the relationship between the selfish option and "doing good ineffectively", though - why do you think that rejecting the selfish option would be a response to the problem of ineffective charity?

0
kbog
6y
Remember that the core problem here is the tension that arises when people say that ineffective charity is impermissible while also saying that selfishness is permissible. This pair of views implies that a more beneficial option is more blameworthy, which seems paradoxical. But if we say that both of these options are impermissible, the problem goes away: the more beneficial option is never more blameworthy than the less beneficial option.

Thanks a lot for this post, that's really interesting and highly relevant. I'd also be curious to see the proportion of women in online forums such as this one. And of course, I'm super interested in the possible reasons behind the tendencies you describe.

4
Peter Wildeford
6y
We have that in the EA Survey data.

Hey Evan, thanks for the detailed reply and the encouragement! :) I'd love to write a longer post on this and I'll try to do so as soon as I catch some more time! Let me just briefly reply to some of your worries concerning academia, which may be shared by others across the board.

  1. Efficiency in terms of time - the idea that academics can't do as much research as non-academics due to teaching duties is not necessarily true. I am speaking here for the EU, where in many cases both pre-docs and post-docs don't have many (or any) teaching duties (e.g. I did my

... (read more)

Yeah, in the case of obviously crap posts (like spam) they'll be massively downvoted. Otherwise, I've never seen a serious post here that was only massively downvoted. Rather, you'd have some downvotes and some upvotes, and the case you describe doesn't capture this situation. In fact, an initial row of downvotes may misleadingly give such an impression, leading some people to ignore the issue, while a later row of upvotes may actually show the issue is controversial, and as such indeed deserves further discussion.

Hi John, I don't have any concrete links, but I'd start by distinguishing different kinds of far-future causes: on the one hand, those that are supported by a scientific consensus, and on the other, those that are a matter of scientific controversy. An example of the former would be global warming (which isn't even that far in the future for some parts of the world), while an example of the latter would be the risks related to the development of AI.

Now in contrast to that, we have existing problems in the world: from poverty and hunger, to animals suffering across the board,... (read more)

Part of what we do is help people to understand themselves better via introspection and psychological frameworks.

Could you please specify which methods of introspection and which psychological frameworks you employ to this end, and what evidence you use to ensure that these frameworks are based on adequate scientific evidence obtained by reliable methods?

Thanks for the link, Michael - I've missed that post and it's indeed related to the current one.

Thanks, Joey, for writing this up. My worry is that making any hard rules for what counts as representative may do more harm than good, if only due to deep (rational) disagreements that may arise on any particular issue. The example Michael mentions is a case in point: for instance, while I may not necessarily disagree that research on AI safety is worthy of pursuit (though see the disagreements between Yann LeCun, the head of AI research at Facebook, and Bostro... (read more)

Hi Max! I agree, it indeed provides information, but the problem is that the information is too vague, and it may easily reflect sheer bias (as in: "I don't like any posts that question the work of OpPhil"). I think this is a strong sentiment in this community, and as an academic who is not affiliated with OpPhil or any other EA organization, I've noticed numerous cases of silent rejection of a certain problem. I don't think the issues are relevant for any "mainstream" EA topic (points on which the majority here agrees). But as soon as... (read more)

Hi Evan, here's my response to your comments (including another post of yours from above). By the way, that's a nice example of industry-compatible research; I agree that such and similar cases can indeed fall within what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.

Your arguments seem to be based on the assumption that EAs can work on EA-related topics more effectively and efficiently than academics not explicitly affiliated with EA (but please correct me i... (read more)

-1
Evan_Gaensbauer
6y
I think it's a common perception in EA that effective altruists can often do work as efficiently and effectively as academics not explicitly affiliated with EA. Often EAs also think academics can do some, if not most, EA work better than a random non-academic EA. AI safety is more populated with and stems from the rationality community. On average it's more ambivalent towards academia than EA. It's my personal opinion there are a variety of reasons why EA may often have a comparative advantage in doing the research in-house. There are a number of reasons for this. One is practical. Academics would often have to divide their time between doing EA-relevant research and teaching duties. EA tends to focus on unsexy research topics, so academics may be likelier to get grants for focusing on irrelevant research. Depending on the field, the politics of research can distort the epistemology of academia so it won't work for EA's purposes. These are constraints effective altruists working full-time at NPOs funded by other effective altruists don't face, allowing them to dedicate all their attention to their organization's mission. Personally, my confidence in EA to make progress on research and other projects for a wide variety of goals is bolstered by some original research in multiple causes being lauded by academics as some of the best on the subject they've seen. Of course, these are NPOs focused on addressing neglected problems in global poverty, animal advocacy campaigns, and other niche areas. Some of the biggest successes in EA come from close collaborations with academia. I think most EAs would encourage more cooperation between academia and EA. I've pushed in the past for EA making more grants to academics doing sympathetic research. Attracting talent with an academic research background to EA can be difficult. I agree with you that overall EA's current approach doesn't make sense. I think you've got a lot of good points. I'd encourage you to make a post out of some of the comments I made

But what about paying for teaching duties (i.e. using the funding to cover the teaching load of a given researcher)? Teaching is one of the main constraints on time spent on research, and this would mean that OU can't accept the funding framework of quite common ERC grants, which have this issue covered. This was my point all along.

Second, what about paying for better equipment? That was another issue mentioned in Nick's post.

Finally, the underlying assumption of Nick's explanation is that the output of non-academic workers will be bett... (read more)

But that's just not necessarily true: as I said, academics can accept money to cover e.g. teaching duties and hence do more research. If you look at ERC grants, that's part of their format in the case of Consolidator and Advanced grants. So it really depends on who applied for which funds, which is why Nick's explanation isn't satisfactory.

Thanks for the input! But I didn't claim that Nick is biased against academia - I just find the lack of clarity on this point and his explanation of why university grants were disqualified simply unsatisfactory.

As for your point that it is unlikely for people with PhDs to be biased, I think ex-academics can easily hold negative attitudes towards academia, especially after exiting the system.

Nevertheless, I am not concluding from this that Nick is biased (nor that he isn't) - we just don't have evidence for either of these claims, and at the end of the da... (read more)

Couldn't agree more. What is worse (as I mention in another comment), university grants were disqualified for no clear reason. I don't know which university projects were considered at all, but the underlying assumption seems to be that, irrespective of how good they would be, the other projects will perform more effectively and more efficiently, even if they are already funded, i.e. by giving them some more cash.

I think this is a symptom of the anti-academic tendencies that I've noticed on this forum and in this particular domain of research, which I think woul... (read more)

I'm Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI's largest donor, and Nick Beckstead was the program officer who made that grant decision.

I can't speak for other universities, but I agree with his assessment that Oxford's regulations make it much more difficult to use donations for productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salary in order for them to be able to pay for more chil... (read more)

1
Evan_Gaensbauer
6y
* My guess would be that because EA is still a niche community favouring unpopular causes, existing effective altruists outside academia will be more willing to pursue effective ideas within the uncommon areas EAs favour, while university projects typically have more opportunity for funding outside EA, so it makes sense to prioritize funding non-academic projects. Of course, that's only heuristic reasoning. These aren't the most solid assumptions for EA as a movement to make. I agree this should be addressed with more open and detailed discussion on this forum.
* Arguably, life extension or anti-ageing research institutions are doing medical research outside academia. Indeed, most of the organizations in this space I've heard effective altruists tout are either for-profit companies or NPOs to which they donate, such as SENS and the newly opened Longevity Research Institute. So while I don't know about climate research centres, there are in fact a lot of people in EA who might defend a policy of redirecting resources for medical research towards non-academic institutions.
* Nick Beckstead stated that the reason he didn't pay as much attention to the EA Community and Long-Term Future Funds is that they were redundant with grants he would have already made at the Open Philanthropy Project. Of course that still raises the question of why the EA Funds were presented differently to donors and the community, and why this wasn't better addressed, which I intend to follow up on with the CEA. Regarding the long-term future, I'm aware Nick is correct that the Open Philanthropy Project has been making many smaller grants to many small academic projects in AI safety/alignment, biosecurity and other areas. I expect this trend will only increase in the near future. Having looked into it myself, and talked to academics in EA who know the area from the inside better than I do, there are indeed fewer opportunities for academic research on effective 'EA Community-Building' than there will be for o
7
jsteinhardt
6y
Given that Nick has a PhD in Philosophy, and that OpenPhil has funded a large amount of academic research, this explanation seems unlikely. Disclosure: I am working at OpenPhil over the summer. (I don't have any particular private information, both of the above facts are publicly available.) EDIT: I don't intend to make any statement about whether EA as a whole has an anti-academic bias, just that this particular situation seems unlikely to reflect that.

I'd be curious to hear some explanation of

"University-based grantees were not considered for these grants because I believe they are not well-positioned to use funds for time-saving and productivity-enhancement due to university regulations."

since I have no clue what that means. In the text preceding this claim it is only stated that "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electro... (read more)

4
Elizabeth
6y
I'm in no way associated with EA Funds (although I do contract with CEA), but I can take a guess. Several EA orgs pay for assistants and certain other kinds of help for academics directly, which makes me think that the straightforward interpretation of the statement is true: Nick wanted to fund time savings for high impact people, and academics can't accept money to do that, although they can accept donated labor.

Ahh, now I get you! Yeah, that sounds like a good idea! Like I've mentioned in another reply, I wouldn't require the same from upvotes because they may imply the lack of counterarguments, while a downvote implies a recognition that there is a problem, in which case it'd only be fair to state which one it is.

Oh thanks for sharing this!

Yes, that's a good point, I've been wondering about this as well. According to one (pretty common) approach to argumentation, an argument is acceptable unless challenged by a counterargument. From that perspective:

upvoting = an acknowledgement of the absence of a counterargument.

downvoting = an observation that there is a counterargument, in which case it should be stated.

This is just an idea off the top of my head; I'd be curious to discuss this in more detail since I find it genuinely interesting :)

That'd probably already be better than nothing ;) Then again, I'm afraid most people would still just (anonymously) downvote without giving reasons. It's much easier to hide behind an anonymous veil than to take a stance and open yourself up to debate.

In fact, I'd be curious to see some empirical data on how correlated the act of downvoting and the absence of commenting are. My guess is that those who provide comments (including critical ones) mostly don't downvote except in extreme cases (e.g. discrimination, obviously off-topic for the forum, obviously misinformation, etc.).

1
RandomEA
6y
Just to clarify, my proposal is that the downvote would only be counted if the person selected a reason. When I said "without requiring every downvoter to provide an explanation," I meant without requiring every one of them to type out their own explanation (since they can rely on the defaults or on what a previous person has written).