Reminds me of the House of Saud (although I'm not saying they have this goal, or any shared goal):
"The family in total is estimated to comprise some 15,000 members; however, the majority of power, influence and wealth is possessed by a group of about 2,000 of them. Some estimates of the royal family's wealth measure their net worth at $1.4 trillion"
https://en.wikipedia.org/wiki/House_of_Saud
IMO, the main things holding back scaling are EA's (in)ability to identify good "shovel-ready" ideas and talent within the community and allocate funds appropriately. I think this is a very general problem that we should be devoting more resources to. Related problems include training and credentialing, and solving common-goods problems within the EA community.
I'm probably not articulating all of this very well, but basically I think EA should focus a lot more on figuring out how to operate effectively, make collective decisions, and distribute reso...
I view economists as more like physicists working with spherical cows, and often happy to continue to do so. So that means we should expect lots of specific blind spots, for them to be easy to identify, and for them to be readily acknowledged by many economists. Under this model, economists are also not particularly concerned with the practical implications of the simplifications they make. Hence they would readily acknowledge many specific limitations of their models. Another way of putting it: this is more of a blind spot for...
It hardly seems "inexplicable"... this stuff is harder to quantify, especially in terms of long-term value. I think there's an interesting contrast between your comment and jackmalde's below: "It's also hardly news that GDP isn't a perfect measure."
So I don't really see why there should be a high level of skepticism of a claim that "economists haven't done a good job of modelling X[=value of nature]". I'd guess most economists would emphatically agree with this sort of critique.
Or perhaps there's an underlying disagreement about what to do whe...
I think this illustrates a harmful double standard. Let me substitute a different cause area in your statement:
"Sounds like any future project meant to reduce x-risk will have to deal with the measurement problem".
Reiterating my other comments: I don't think it's appropriate to say that the evidence showed it made sense to give up. As others have mentioned, there are measurement issues here. So this is a case where absence of evidence is not strong evidence of absence.
Just because they didn't get the evidence of impact they were aiming for doesn't mean it "didn't work".
I understand if EAs want to focus on interventions with strong evidence of impact, but I think it's terrible comms (both for PR and for our own epistemics) to go around saying that interventions lacking such evidence don't work.
It's also pretty inconsistent; we don't seem to have that attitude about spending $$ on speculative longtermist interventions! (Although I'm sure some EAs do; I'm pretty sure it's a minority view.)
Thanks for this update, and for your valuable work.
I must admit I was frustrated by reading this post. I want this work to continue, and I don't find the levels of engagement you report surprising or worth massively updating on (i.e. suspending outreach).
I'm also bothered by the top-level comments assuming that this didn't work and should've been abandoned. What you've shown is that you could not provide the type of strong evidence you hoped for of the program's effectiveness, NOT that it didn't work!
Basically, I think there should be a strong...
I have a recommendation: try to get at least 3 people, so you aren't managing your manager. I think accountability and social dynamics would be better that way, since:
- I suspect part of why line managers work for most people is because they have some position of authority that makes you feel obligated to satisfy them. If you are in equal positions, you'd mostly lose that effect.
- If there are only 2 of you, it's easier to have a cycle of defection where accountability and standards slip. If you see the other person slacking, you feel more OK with slacking. Whereas if you don't see the work of your manager, you can imagine that they are always on top of their shit.
(Sorry, this is a bit stream-of-consciousness):
I assume it's because humans rely on natural ecosystems in a variety of ways in order to have the conditions necessary for agriculture, life, etc. So, like with climate change, the long-term cost of mitigation is simply massive... really these numbers should not be thought of as very meaningful, I think, since the kinds of disruption and destruction we are talking about are not easily measured in $s.
TBH, I find it not-at-all surprising that saving coral reefs would have a huge impact, since they are basically...
I recommend changing the "climate change" header to something a bit broader (e.g. "environmentalism" or "protecting the natural environment", etc.). It is a shame that (it seems) climate change has come to eclipse/subsume all other environmental concerns in the public imagination. While most environmental issues are exacerbated by climate change, solving climate change will not necessarily solve them.
A specific cause worth mentioning is preventing the collapse of key ecosystems, e.g. coral reefs: https://forum.effectivealtruism.org/p...
Thanks for the pointer! I think many EAs are interested in QS, but I agree it's a bit tangential.
IIRC the Ethereum Foundation is using QF somehow.
But it's probably best just to get in touch with someone who knows more of what's going on at RXC.
Not sure who that would be OTTMH, unfortunately.
I think you guys are already aware of RadicalXChange. It's a bit different in focus, but I know they are excited about trying out mechanisms like QV/QF in institutional settings.
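For context on what QF actually is: the standard quadratic funding rule (the Buterin/Hitzig/Weyl mechanism) funds each project by the square of the sum of the square roots of its individual contributions, with the gap covered by a matching pool. A minimal Python sketch (my own illustration, not anything RxC- or Ethereum-specific):

```python
import math

def quadratic_funding_match(contributions):
    """Quadratic funding: total funding = (sum of sqrt(contributions))^2.
    Returns the subsidy the matching pool must add on top of the
    contributions themselves."""
    total_funding = sum(math.sqrt(c) for c in contributions) ** 2
    return total_funding - sum(contributions)

# Broad support is what gets matched, not raw dollars:
print(quadratic_funding_match([1] * 100))  # 100 donors x $1 -> $9900 match
print(quadratic_funding_match([100]))      # 1 donor x $100  -> $0 match
```

That property is what makes it interesting for institutional settings: many small supporters beat one large one, so the mechanism aggregates how widely something is valued rather than how much any one backer can pay.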
It was a few years back that I looked into it, and I didn't try too hard. Sad to see the PETA link.
I'm basically looking for a reference that summarizes someone else's research (so I don't have to do my own).
This doesn't seem like a great use of time. For one thing, I think it gets the psychology of political disagreements backwards. People don't simply disagree with each other because they don't understand each other's words. Rather, they'll often misinterpret words to meet political ends.
It's not one or the other. Anyways, having shared definitions also prevents deliberate/strategic misinterpretation.
...I also question anyone's ability to create such an "objective/apolitical" dictionary. As you note, even the term "woke" can have a negative connotati
Do you disagree that the EA community at large seems less excited about multiplier orgs vs. more direct orgs?
I'm skeptical of multiplier organizations' relative effectiveness because the EA community doesn't seem that excited about them.
(P.S.: This is actually probably my #1 reason, as someone who hasn't spent much time thinking about where people should donate. I suspect a lot of people are wary of seeming too enthusiastic because they don't want EA to look like a pyramid scheme.)
What you describe is part of what I meant by "jadedness".
"If they were actually trying to change the world -- if they were actually strongly motivated to make the world a better place, etc. -- the stuff they learn in college wouldn't stop them."
^ I disagree. Or rather, I should say, there are a lot of people who are not-so-strongly motivated to make the world a better place, and so get burned out and settle into a typical lifestyle. I think this outcome would be much less likely at a place like "Change the World University", both because it wou...
Thanks for that!
I'm interested if you have other examples.
This one looks similar, but not that similar. The whole framing/vision is different.
When I visit their webpage, the message I get is: "hey, do you maybe want to opt in to this thing to tell us about yourself because you can't get any real publicity?"
The message I want to send is: "Politicians are job candidates; why don't we make them apply/grovel for a job like everyone else?"
I think I understand what you are doing, and disagree with it being a way of meaningfully addressing my concern.
It seems like you are calculating the chance that NONE of these results are significant, not the chance that MOST of them ARE (?)
Out of 55 2-sample t-tests, we would expect about 3 (55 × 0.05 ≈ 2.75) to come out "statistically significant" due to random chance, but I found 10, so we can expect most of these to point to actually meaningful differences represented in the survey data.
Is there a more rigorous form of this argument?
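One candidate for a more rigorous form (a minimal sketch, assuming the 55 tests are independent and use a 0.05 threshold): under the global null that all 55 effects are zero, the count of "significant" results is Binomial(55, 0.05), so you can compute how surprising 10 hits would be:

```python
from scipy.stats import binom

n_tests, alpha, n_hits = 55, 0.05, 10

# Expected false positives if every null were true:
print(n_tests * alpha)                       # 2.75

# P(10 or more "significant" results by chance alone):
print(binom.sf(n_hits - 1, n_tests, alpha))  # ~4e-4
```

That justifies "at least some of these are real", but not which ones; picking out the likely-real subset is what false-discovery-rate procedures like Benjamini-Hochberg are for.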
I just skimmed the post.
"Many of the most pressing threats to the humanity are far more likely to cause collapse than be an outright existential threat with no ability for civilisation to recover."
This claim is not supported, and I think most people who study catastrophic risks (they already coined the acronym C-risk, sorry!) and x-risks would disagree with it.
In fact, civilization collapse is considered fairly unlikely by many, although Toby Ord thinks it hasn't been properly explored (see his recent 80k interview).
AI in particular (which many believe...
These are not the same thing. GCR is just anything that's bad on a massive scale; civilization doesn't have to collapse.
Overall, I'm intrigued and like this general line of thought. A few thoughts on the post:
To answer your question: no.
I basically agree with this comment, but I'd add that the "diminishing returns" point is fairly generic, and should be coupled with some arguments about why returns are diminishing very rapidly in the US/China (seems false) or are non-trivial in Europe (seems plausible, but non-obvious, and also seems to be one of the focuses of the OP).
RE "why look at Europe at all?", I'd say Europe's gusto for regulation is a good reason to be interested (you discuss that stuff later, but for me it's the first reason I'd give). It's also worth mentioning the "right to an explanation" as well as GDPR.
Based on the report [1], it's a bit misleading to say that they are a charity doing $35 cataract surgeries. The report seems pretty explicit that donations to the charity are used for other activities.
I strongly agree that independent thinking seems undervalued (in general and in EA/LW). There is also an analogy with ensembling in machine learning (https://en.wikipedia.org/wiki/Ensemble_learning).
By "independent" I mean "thinking about something without considering others' thoughts on it" or something to that effect... it seems easy for people's thoughts to converge too much if they aren't allowed to develop in isolation.
Thinking about it now, though, I wonder if there isn't some even better middle ground; in my experience, group bra...
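To make the ensembling analogy concrete, here's a minimal numpy sketch (the setup and numbers are my own illustration): averaging 10 estimates shrinks error roughly 10x when their errors are independent, but barely helps when the errors are highly correlated, which is the statistical analogue of everyone's thinking converging.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_mse(rho, n_models=10, trials=100_000):
    """MSE of an averaged prediction when each model's error has
    variance 1 and pairwise correlation rho (shared-noise construction)."""
    shared = np.sqrt(rho) * rng.normal(size=(trials, 1))
    private = np.sqrt(1 - rho) * rng.normal(size=(trials, n_models))
    errors = shared + private              # each model's error, variance 1
    return np.mean(errors.mean(axis=1) ** 2)

for rho in [0.0, 0.5, 0.9]:
    print(rho, round(ensemble_mse(rho), 3))
# rho=0.0 -> ~0.10 (independent errors mostly cancel)
# rho=0.9 -> ~0.91 (correlated errors barely cancel)
```

In ensembling terms, decorrelating the models (bagging, different initializations, etc.) is where most of the value comes from; the analogue here is letting views develop in isolation before comparing notes.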
Thanks for writing this. My TL;DR is:
AI policy is important, but we don’t really know where to begin at the object level
You can potentially do 1 of 3 things ATM:
A. "disentanglement" research
B. operational support for (e.g.) FHI
C. get in position to influence policy, and wait for policy objectives to be cleared up
Get in touch / Apply to FHI!
I think this is broadly correct, but have a lot of questions and quibbles.
My main comments:
As others have mentioned: great post! Very illuminating!
I agree value-learning is the main technical problem, although I’d also note that value-learning related techniques are becoming much more popular in mainstream ML these days, and hence less neglected. Stuart Russell has argued (and I largely agree) that things like IRL will naturally become a more popular research topic (but I’ve also argued this might not be net-positive for safety: http://lesswrong.com/lw/nvc/risks_from_approximate_value_learning/)
My main comment wrt the val
...My point was that HRAD potentially enables the strategy of pushing mainstream AI research away from opaque designs (which are hard to compete with while maintaining alignment, because you don't understand how they work and you can't just blindly copy the computation that they do without risking safety), whereas in your approach you always have to worry about "how do I compete with an AI that doesn't have an overseer or has an overseer who doesn't care about safety and just lets the AI use whatever opaque and potentially dangerous technique it wa
Will - I think "meta-reasoning" might capture what you mean by "meta-decision theory". Are you familiar with this research (e.g. Nick Hay did a thesis w/Stuart Russell on this topic recently)?
I agree that bounded rationality is likely to loom large, but I don't think this means MIRI is barking up the wrong tree... just that other trees also contain parts of the squirrel.
I'm also very interested in hearing you elaborate a bit.
I guess you are arguing that AIS is a social rather than a technical problem. Personally, I think there are aspects of both, but that the social/coordination side is much more significant.
RE: "MIRI has focused in on an extremely specific kind of AI", I disagree. I think MIRI has aimed to study AGI in as much generality as possible and mostly succeeded in that (although I'm less optimistic than them that results which apply to idealized agents will carry over and produce meaningful insights...
(cross-posted on Facebook):
I was thinking of applying... it's a question I'm quite interested in. The deadline is the same as ICML tho!
I had an idea I will mention here: funding pools.
I was overall a bit negative on Sarah's post, because it demanded a bit too much attention (e.g. the title), and seemed somewhat polemical. It was definitely interesting, and I learned some things.
I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building.
I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.
Overall, though, I think it's just plain wrong to argue for an unexamined idea of hones...
Do you have any info on how reliable self-reports are wrt counterfactuals about career changes and EWWC pledging?
I can imagine that people would not be very good at predicting that accurately.
People are motivated both by:
"But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people."
Care to elaborate (or link to something)?
"This is something the EA community has done well at, although we have tended to focus on talent that current EA organization might wish to hire. It may make sense for us to focus on developing intellectual talent as well."
Definitely!! Are there any EA essay contests or similar? More generally, I've been wondering recently if there are many efforts to spread EA among people under the age of majority. The only example I know of is SPARC.
Great post!
This framing doesn't seem to capture the concern that even slight misspecification (e.g. a reward function that is a bit off) could lead to x-catastrophe.
I think this is a big part of many people's concerns, including mine.
This seems somewhat orthogonal to the Saint/Sycophant/Schemer disjunction... or to put it another way, it seems like a Saint that is just not quite right about what your interests actually are (e.g. because they have alien biology and culture) could still be an x-risk.
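As a toy version of that concern (a minimal sketch with made-up numbers, not anyone's actual threat model): if the reward model's error is small in typical magnitude but heavy-tailed, a strong optimizer's pick is determined almost entirely by the error, not by what we actually value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                          # candidate plans a strong optimizer searches over
true_value = rng.normal(size=n)        # what we actually care about
error = 0.01 * rng.standard_cauchy(n)  # "a bit off": tiny typical error, heavy tails
proxy_reward = true_value + error

chosen = np.argmax(proxy_reward)       # the optimizer maximizes the proxy
print(true_value[chosen])              # typically ~0: an essentially random plan
print(true_value.max())                # ~4.9: the best plan by our actual values
```

Obviously a cartoon, but it's why "slightly misspecified + heavily optimized" worries me more than the size of the misspecification alone would suggest.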
Thoughts?