Thanks a lot for this, very useful indeed. I think this list hasn't been mentioned yet: Awful AI - a curated list tracking current scary uses of AI - in the hope of raising awareness of its misuses in society.
Update: this is all the more important in view of the common ways one may accidentally cause harm by trying to do good, which I've just learned about through DavidNash's post. As the article points out, having informed expert opinion and a dense network with experts can decrease the chances of harmful impacts, such as reputational harm or locking in on suboptimal choices.
Thanks for the explanation, Lewis. In order to make the team as robust to criticism and as reliable as possible, wouldn't it be better to have a diverse team that also includes critics of ACE? That would send the right message to donors as well as to anyone taking a closer look at EA organizations. I think it would also benefit ACE, since their researchers would have an opportunity to work directly with their critics.
That should always depend on the project at hand: if the project is primarily in a specific domain of AI research, then you need reviewers working precisely in that domain of AI; if it's in ethics, then you need experts working in ethics; if it's interdisciplinary, then you try to get reviewers from the respective fields. This also shows that it will be rather difficult (if not impossible) to have an expert team competent to evaluate every candidate project. Instead, the team should be competent in selecting adequate expert reviewers (similar...
Hi Matt, thanks a lot for the reply! I appreciate your approach, but I do have worries, which Jonas, for instance, is very well aware of (I have been a strong critic of EAF policy and implementation of research grants, including those directed at MIRI and FRI).
My main worry is that evaluating grants aimed at research cannot be done without having them assessed by expert researchers in the given domain, that is, people who have a proven track-record in the given field of research. I think the best way to see why this matters is to take any other scientific ...
I'd be curious to hear an explanation of how the given team was selected for the Long-Term Future Fund. If they are expected to evaluate grants, including research grants, how do they plan to do that, what qualifies them for this job, and, in case they are not qualified, which experts do they plan to invite on such occasions?
From their bio page I don't see which of them should count as an expert in the relevant field of research (and on the basis of which track record), which is why I am asking. Thanks!
These are good points, and unless the area is well established enough that initial publications come from bigger names (who would thereby help to establish the journal), it'll be hard to realize the idea.
What could be done at this point, though, is to have an online page that collects/reports all publications relevant to cause prioritization, which may help the field grow.
I agree that journal publications certainly allow for a rise in quality thanks to the peer-review system. In principle, there could even be a mixed platform with an (online) journal + a blog which (re)posts material relevant to the topic (e.g. posts made on this forum that are relevant to cause prioritization).
My main question is: is there anyone on here who's actively doing research on this topic and who could comment on the absence of an adequate journal, as argued by kbog? I don't have any experience with this domain, but if more peop...
Thanks, Benito, that sums it up nicely!
It's really about the transparency of the criteria, and that's all I'm arguing for. I am also open to changing my views on the standard criteria etc. - I just care that we start the discussion with some rigor concerning how best to assess effective research.
As for my papers - crap, it's embarrassing that I've linked paywalled versions. I have them on my academia page too, but I guess those can also be accessed only within that website... I'll have to think of some proper free solution here. But in any case: please don't feel oblige...
Part of being in an intellectual community is being able to accept that you will think that other people are very wrong about things. It's not a matter of opinion, but it is a matter of debate.
Sure! Which is why I've been exchanging arguments with you.
Oh, there have been numerous articles, in your field, claimed by you.
Now what on earth is that supposed to mean? What are you trying to say with this? You want references, is that it? I have no idea what this claim is supposed to stand for :-/
...That's all well and good, but it should be clear why peop
While I largely agree with your idea, I just don't understand why you think that a new space would divide people who aren't on this forum to begin with. Like I said, 70% on here are men. So how are you gonna attract more non-male participants? This topic may be unrelated, but let's say we find out that the majority of non-males have preferences that would be better aligned with a different type of venue. Isn't that a good enough reason to initiate it? Why would that be in conflict with, rather than complementary to, this forum?
Oh no, this is not just a matter of opinion. There are numerous articles in the philosophy of science aimed precisely at determining which criteria help us to evaluate promising scientific research. So there is actually quite some scholarly work on this (and it is a topic of my research, as a matter of fact).
So yes, I'd argue that the situation is disturbing, since an immense amount of money is going into research for which there is no good reason to suppose that it is effective or efficient.
Right, and I agree! But here's the thing (which I haven't mentioned so far, so maybe it helps): I think some people just don't participate in this forum much. For instance, there is a striking gender imbalance (I think more than 70% on here are men), and while I have absolutely no evidence to correlate this with near/far-future issues, I wouldn't be surprised if it's somewhat related (e.g. there are not so many tech-interested non-males in EA). Again, this is just speculation. And perhaps it's worth a shot to try an environment that will feel safe for those who are put off by AI-related topics/interests/angles.
OK, you aren't anonymous, so that's even more surprising. I gave you examples of your rude responses earlier, but it doesn't matter; I'm fine going on.
My impression of bias is based on my experience on this forum and on observations of posts critical of far-future causes. I don't have any systematic study on this topic, so I can't provide you with evidence. It is just my impression, based on my personal experience. But unfortunately, no empirical study on this topic, concerning this forum, exists, so the best we currently have are personal experiences. M...
Again: you are missing my point :) I don't care if it's their money or not, that's beside my point.
What I care about is: are their funding strategies rooted in the standards that are conducive to effective and efficient scientific research?
Otherwise, it makes no sense to label them as an organization conforming to the standards of EA, at least in the case of such practices.
Subjective, unverifiable, etc. has nothing to do with such standards (= conducive to effective & efficient scientific research).
But in many contexts this may not be the case: as I've explained, I may profit from reading some discussions, which is a kind of engagement. You've omitted that part of my response. Or think of philosophers of science discussing the efficiency of scientific research in, say, a specific scientific domain (in which, as philosophers, they've never participated). Knowledge of a field doesn't necessarily have to be knowledge obtained by object-level engagement in that field.
Right, we are able to - but that doesn't mean we cannot form arguments. Since when do arguments exist only if we can be absolutely certain about something?
As for my suggestion: unfortunately, and as I've said above, there is a bubble in the EA community concerning far-future prioritization, which may be overshadowing and off-putting to some who are interested in other topics. In the ideal context of rational discussion, your points would hold completely. But we are talking here about a very specific context where a number of biases are already entrenched an...
Like I mentioned above, I may be interested in reading focused discussions on this topic and chipping in when I feel I can add something of value. Reading alone brings a lot of value on forums/discussion channels.
Moreover, I may assess how newcomers with a special interest in these topics might contribute to such a venue. Your reduction of a meta-topic to one's personal experience of it is a non sequitur.
I'm recommending that you personally engage before judging it with confidence.
But why would I? I might be fond of reading about certain causes from those who are more knowledgeable about them than I am. My donation strategies may profit from reading such discussions. And yet I may engage where my expertise lies. This is why I really can't make sense of your recommendation (which was originally an imperative, in fact).
This kind of burden-of-proof-shifting is not a good way to approach conversation. I've already made my argument.
I haven't seen a...
Mhm, it's POSSIBLE to talk about it, bias MAY exist, etc, etc. There's still a difference between speculation and argument.
Could you please explain what you are talking about here, since I don't see how this is related to what you quote me saying above? Of course there is a difference between a speculation and an argument, and arguments may still include claims expressed in a modal way. So I don't really understand how this is challenging what I have said :-/
...different venues are fine, they must simply be split among legitimate lines (like light c
Again, you are missing the point: my argument concerns the criteria in view of which projects are assessed as worthy of funding. These criteria exist and are employed by various funding institutions across academia. I haven't seen any such criteria (or a justification that they are conducive to effective and efficient research) in this case, which is why I've raised the issue.
we're willing to give a lot of money to wherever it will do the most good in expectation.
And my focus is on: which criteria are used/should be used in order to decid...
Civil can still be unfriendly, but hey, if you aren't getting it, it's fine.
It should be clear, no? It's hard to judge the viability of talking about X when you haven't talked about X.
If it was clear, why would I ask? There's your lack of friendliness in action. And I still don't see the rationale in what you are saying: I can judge that certain topics may profit from being discussed in a certain context A even if I haven't personally engaged in discussing them in that context. The burden of proof is on you: if you want to make an argument, you have to p...
I have to single out this one quote from you, because I have no idea where you are getting all this fuel from:
But when I look through your comment history, you seem to not be talking about near-future related topics and strategies, you're just talking about meta stuff, Open Phil, the EA forums, critiques of the EA community, critiques of AI safety, the same old hot topics. Try things out before judging.
Can you please explain what you are suggesting here? How does this conflict with my interest in near-future related topics? I have a hard time understa...
(1) I think it is standard practice for peer review to be kept anonymous,
The problem wasn't the reviewer being anonymous, but the lack of access to the report.
(2) some of the things you are mentioning seem like norms about grants and writeups that will reasonably vary based on context,
Sure, but that doesn't mean no criteria should be available.
(3) you're just looking at one grant out of all that Open Phil has done,
Indeed, I am concerned with one extremely large grant. I find the sum large enough to warrant concerns, especially since the same ca...
First, I disagree with your imperatives concerning what one should do before engaging in criticism. That's a non sequitur: we are able to reflect on multiple meta-issues without engaging in any of the object-related ones, while at the same time having a genuine interest in reading about those object-related issues. I am genuinely interested in reading about near-future improvement topics, while being genuinely interested in voicing opinions on all kinds of meta issues, especially those that are closely related to my own research topics.
Second, the fact that measu...
No worries! Thanks for that, and yes, I agree with pretty much everything you say here. As for the discussion on far-future funding, it did start in the comments on my post, but it led to nothing in the way of practical changes, such as transparency about the criteria used to assess funded projects. I'll try to write a separate, more general post on that.
My only point was that due to the high presence of "far-future bias" on this forum (I might be wrong, but much of the downvoting-without-commenting seems to indicate at least a tendency towards bias...
Wow, you really seem annoyed... I didn't expect such a pissed-off post, but I suppose you got really annoyed by this thread or something. I provided detailed arguments concerning OpenPhil's practices in a post from a few months ago here: http://effective-altruism.com/ea/1l6/how_effective_and_efficient_is_the_funding_policy/.
I have a few paper deadlines these days, so as much as I wish to respond with all the references, arguments, etc. I don't have the time. I plan on writing a post concerning EAF's funding policy as well, where I'll sum it up in a similar wa...
This is a nice idea, though I'd like to suggest some adjustments to the welcome message (also in view of kbog's worries discussed above). Currently the message begins with:
"(...) we ask that EAs who currently focus on improving the far future not participate. In particular, if you currently prioritize AI risks or s-risks, we ask you not participate."
I don't think it's a good idea to select participants in a discussion according to what they think or do (it pretty much comes down to an argumentum ad hominem fallacy). It would be better to specify w...
I like this suggestion - personally I feel a lot of uncertainty about what to prioritize, and given that a portion of my donations go to near-term work I'd enjoy taking part in discussion about how to best do that, even if I'm also seriously considering whether to prioritize long-term work. But I'd be totally happy to have the topic of that space limited to near-term work.
Hi Kbog, I see your point concerning near/far-future ideas in principle. However, if you look at the practical execution of these ideas, things aren't following your lines of reasoning (unfortunately, of course). For instance, the community practices related to the far-future focus (in particular, AI risks) have adopted an approach to assessing and funding scientific research which I find lacking in scientific rigor, transparency, and overall validity (to the point that it makes no sense to speak of "effective" charity). Moreover, there is a large c...
Oh damn :-/ I was just gonna ask for the info (I've been traveling and could only reply now). That's really interesting - is this info published somewhere online? If not, it would maybe be worthwhile to make a post on this here and discuss both the reasons for the predominantly male community and ideas for how to make it more gender-balanced.
I'd be very interested in possible relations between the lack of gender balance and the topic of representation discussed in another recent thread. For instance, it'd be interesting to see whether non-male EAs find the forum insufficiently focused on causes which they find more important, or largely focused on issues that they do not find as important.
Thanks a lot for writing this up - it's nice to get some info on this literature. I didn't get, though, the relationship between the selfish option and "doing good ineffectively" - why do you think that rejecting the selfish option would be a response to ineffective charity?
Thanks a lot for this post, that's really interesting and highly relevant. I'd be curious to see also the proportion of women in online forums such as this one. And of course, I'm super interested in possible reasons behind the tendencies you describe.
Hey Evan, thanks for the detailed reply and the encouragement! :) I'd love to write a longer post on this and I'll try to do so as soon as I catch some more time! Let me just briefly reply to some of your worries concerning academia, which may be shared by others across the board.
Efficiency in terms of time - the idea that academics can't do as much research as non-academics due to teaching duties is not necessarily true. I am speaking here for the EU, where in many cases both pre-docs and post-docs don't have many (or any) teaching duties (e.g. I did my
Yeah, in the case of obvious crap posts (like spam) they'll be massively downvoted. Otherwise, I've never seen a serious post here that was only massively downvoted. Rather, you'd have some downvotes and some upvotes, and the case you describe doesn't capture this situation. In fact, an initial row of downvotes may misleadingly give such an impression, leading some people to ignore the issue, while a later row of upvotes may actually show that the issue is controversial and as such deserves further discussion.
Hi John, I don't have any concrete links, but I'd start by distinguishing different kinds of far-future causes: on the one hand, those that are supported by a scientific consensus, and on the other, those that are a matter of scientific controversy. An example of the former would be global warming (which isn't even that far in the future for some parts of the world), while an example of the latter would be the risks related to the development of AI.
Now, in contrast to that, we have existing problems in the world: from poverty and hunger to animal suffering across the board,...
Part of what we do is help people to understand themselves better via introspection and psychological frameworks.
Could you please specify which methods of introspection and which psychological frameworks you employ to this end, and what evidence you use to ensure these frameworks are based on adequate scientific evidence, obtained by reliable methods?
Thanks for the link, Michael - I've missed that post and it's indeed related to the current one.
Thanks, Joey, for writing this up. My worry is that making any hard rules for what counts as representative may do more harm than good, if only due to deep (rational) disagreements that may arise on any particular issue. The example Michael mentions is a case in point: for instance, while I may not necessarily disagree that research on AI safety is worth pursuing (though see the disagreements between Yann LeCun, the head of AI research at Facebook, and Bostro...
Hi Max! I agree, it does provide information, but the problem is that the information is too vague, and it may easily reflect sheer bias (as in: "I don't like any posts that question the work of OpenPhil"). I think this is a strong sentiment in this community, and as an academic who is not affiliated with OpenPhil or any other EA organization, I've noticed numerous cases of silent rejection of a certain problem. I don't think this is an issue for any "mainstream" EA topic (points on which the majority here agrees). But as soon as...
Hi Evan, here's my response to your comments (including another post of yours from above). By the way, that's a nice example of industry-compatible research; I agree that such and similar cases can indeed fall within what EAs wish to fund, as long as they are assessed as effective and efficient. I think this is an important debate, so let me challenge some of your points.
Your arguments seem to be based on the assumption that EAs can work on EA-related topics more effectively and efficiently than academics who are not explicitly EA-affiliated (but please correct me i...
But what about paying for teaching duties (i.e. using the funding to cover the teaching load of a given researcher)? Teaching is one of the main issues when it comes to time spent on research, and this would mean that OU can't accept the funding framework of quite common ERC grants, which have this issue covered. This was my point all along.
Second, what about paying for better equipment? That was another issue mentioned in Nick's post.
Finally, the underlying assumption of Nick's explanation is that the output of non-academic workers will be bett...
But that's just not necessarily true: as I said, academics can accept money to cover e.g. teaching duties and hence do more research. If you look at ERC grants, that's part of their format in the case of Consolidator and Advanced grants. So it really depends on who applied for which funds, which is why Nick's explanation isn't satisfactory.
Thanks for the input! But I didn't claim that Nick is biased against academia - I just find the lack of clarity on this point, and his explanation of why university grants were disqualified, simply unsatisfactory.
As for your point that it is unlikely for people with PhDs to be biased, I think ex-academics can easily hold negative attitudes towards academia, especially after exiting the system.
Nevertheless, I am not concluding from this that Nick is biased (nor that he isn't) - we just don't have evidence for either of these claims, and at the end of the da...
Couldn't agree more. What is worse (as I mention in another comment), university grants were disqualified for no clear reason. I don't know which university projects were considered at all, but the underlying assumption seems to be that, irrespective of how good they would be, the other projects will perform more effectively and more efficiently, even if they are already funded, i.e. even if the grant just gives them some more cash.
I think this is a symptom of the anti-academic tendencies that I've noticed on this forum and in this particular domain of research, which I think woul...
I'm Head of Operations for the Global Priorities Institute (GPI) at Oxford University. OpenPhil is GPI's largest donor, and Nick Beckstead was the program officer who made that grant decision.
I can't speak for other universities, but I agree with his assessment that Oxford's regulations make it much more difficult to use donations to get productivity enhancements than it would be at other non-profits. For example, we would not be able to pay for the child care of our employees directly, nor raise their salary in order for them to be able to pay for more chil...
I'd be curious to hear some explanation of
"University-based grantees were not considered for these grants because I believe they are not well-positioned to use funds for time-saving and productivity-enhancement due to university regulations."
since I have no clue what that means. In the text preceding this claim, it is only stated that "I recommended these grants with the suggestion that these grantees look for ways to use funding to trade money for saving the time or increasing the productivity of their employees (e.g. subsidizing electro...
Ahh, now I get you! Yeah, that sounds like a good idea! Like I've mentioned in another reply, I wouldn't require the same for upvotes, because they may simply imply the absence of counterarguments, while a downvote implies a recognition that there is a problem, in which case it'd only be fair to state what it is.
Yes, that's a good point, I've been wondering about this as well. According to one (pretty common) approach to argumentation, an argument is acceptable unless challenged by a counterargument. From that perspective:
upvoting = an acknowledgement of the absence of a counterargument.
downvoting = an observation that there is a counterargument, in which case it should be stated.
This is just an idea off the top of my head; I'd be curious to discuss it in more detail since I find the question genuinely interesting :)
That'd probably already be better than nothing ;) Then again, I'm afraid most people would still just (anonymously) downvote without giving reasons. It's much easier to hide behind an anonymous veil than to take a stance and open yourself up to debate.
In fact, I'd be curious to see some empirical data on how correlated the act of downvoting and the absence of commenting are. My guess is that those who provide comments (including critical ones) mostly don't downvote, except in extreme cases (e.g. discrimination, obviously off-topic posts, obvious misinformation, etc.).
Thanks for writing this. The criticism of debate you address is as old as debate itself, and in addition to the reasons you list here, I'd add the *epistemic* benefits of debating.
Competitive debating allows for the exploration of the argumentative landscape of the given topic in all its breadth (from the preparation to the debating itself). That means that it allows for the formulation of the best arguments for either side, which (given all the cognitive biases we may have) may be hard to come by in a non-competitive context. As a result, debate is a le...