All of capybaralet's Comments + Replies

Great post!

This framing doesn't seem to capture the concern that even slight misspecification (e.g. a reward function that is a bit off) could lead to x-catastrophe.  

I think this is a big part of many people's concerns, including mine.

This seems somewhat orthogonal to the Saint/Sycophant/Schemer disjunction... or to put it another way, it seems like a Saint that is just not quite right about what your interests actually are (e.g. because they have alien biology and culture) could still be an x-risk.

Thoughts?

Reminds me of The House of Saud (although I'm not saying they have this goal, or any shared goal):
"The family in total is estimated to comprise some 15,000 members; however, the majority of power, influence and wealth is possessed by a group of about 2,000 of them. Some estimates of the royal family's wealth measure their net worth at $1.4 trillion"
https://en.wikipedia.org/wiki/House_of_Saud
 

IMO, the best argument against strong longtermism ATM is moral cluelessness.  

IMO, the main things holding back scaling are EA's (in)ability to identify good "shovel ready" ideas and talent within the community and allocate funds appropriately.  I think this is a very general problem that we should be devoting more resources to.  Related problems are training and credentialing, and solving common good problems within the EA community.

I'm probably not articulating all of this very well, but basically I think EA should focus a lot more on figuring out how to operate effectively, make collective decisions, and distribute reso... (read more)

I view economists as more like physicists working with spherical cows, and often happy to continue to do so.  So that means we should expect lots of specific blind spots, for them to be easy to identify, and for them to be readily acknowledged by many economists.  Under this model, economists are also not particularly concerned with the practical implications of the simplifications they make.  Hence they would readily acknowledge many specific limitations of their models.  Another way of putting it: this is more of a blind spot for... (read more)

It hardly seems "inexplicable"... this stuff is harder to quantify, especially in terms of the long-term value.  I think there's an interesting contrast between your comment and jackmalde's below: "It's also hardly news that GDP isn't a perfect measure."

So I don't really see why there should be a high level of skepticism of a claim that "economists haven't done a good job of modelling X[=value of nature]".  I'd guess most economists would emphatically agree with this sort of critique.

Or perhaps there's an underlying disagreement about what to do whe... (read more)

3
Harrison Durland
3y
I think there may be a bit of a disconnect between what I meant and how it was received, perhaps magnified by the fact that I was only giving my skim-derived impressions.

First, I fully agree with jackmalde's point that GDP isn't a perfect measure, but partially reflecting a comment from your second paragraph, I presume that a lot of economists recognize that measures like GDP are not perfect (in fact, at least 2 if not all 3 of the econ professors I've had have explicitly said something along those lines).

Second, based on the first paragraph of the Cambridge article ("Nature is a “blind spot” in economics") it seemed like the implication was that 1) economists have massively ignored this, and 2) adding consideration of "nature" would be model-shattering. When the claim is simply "nature is a factor" (among multiple others), I think that's probably reasonable.

Third, I should clarify what I mean about my skepticism: I am not the slightest bit skeptical that economic models could be improved in general. However, by default I am skeptical towards any specific claim of widespread blindness among economists, because I think that most of these claims will be wrong -- i.e., I have a low outside view/base rate for each specific claim, especially with regards to the questions I mentioned in my original answer/comment.

Building on that, I don't want to over-articulate my thought process since it was largely just my initial, informal thoughts, but: There may be good evidence to back up Partha's claim, it just seems like something that falls within a category of "Things that, if true, would be much more widely recognized [by economists] / would not have to be presented as some major 'blind spot.'" I don't claim that this heuristic is good for someone whose work/research relates to this (i.e., those kinds of people should do more research than initial impressions), but as someone who is not in economics I think it's more effective to have that kind of skepticism as opposed t

I think this illustrates a harmful double standard.  Let me substitute a different cause area in your statement:
"Sounds like any future project meant to reduce x-risk will have to deal with the measurement problem".


 

2
Aaron Gertler
3y
I think that X-risk reduction projects also have a problem with measurement! However, measuring the extent to which you've reduced X-risk is a lot harder than measuring whether students have taken some kind of altruistic action: in the latter case, you can just ask the students (and maybe give them an incentive to reply). Thus, if someone wants me to donate to their "EA education project", I'm probably going to care more about direct outcome measurement than I would if I were asked to support an X-risk project, because I think good measurement is more achievable. (I'd hold the X-risk project to other standards, some of which wouldn't apply to an education project.)

Online meetings could be an alternative/supplement, especially in the post-COVID world.

Reiterating my other comments: I don't think it's appropriate to say that the evidence showed it made sense to give up.  As others have mentioned, there are measurement issues here.  So this is a case where absence of evidence is not strong evidence of absence.  

Just because they didn't get the evidence of impact they were aiming for doesn't mean it "didn't work".  

I understand if EAs want to focus on interventions with strong evidence of impact, but I think it's terrible comms (both for PR and for our own epistemics) to go around saying that interventions lacking such evidence don't work.

It's also pretty inconsistent; we don't seem to have that attitude about spending $$ on speculative longtermist interventions! (although I'm sure some EAs do, I'm pretty sure it's a minority view).

Thanks for this update, and for your valuable work.

I must admit I was frustrated by reading this post.  I want this work to continue, and I don't find the levels of engagement you report surprising or worth massively updating on (i.e. suspending outreach).

I'm also bothered by the top-level comments assuming that this didn't work and should've been abandoned.  What you've shown is that you could not provide the kind of strong evidence you hoped for of the program's effectiveness, NOT that it didn't work!

Basically, I think there should be a strong... (read more)

2
Catherine Low
3y
Hi capybaralet,  Thanks for your comments and enthusiasm for the program!

> I must admit I was frustrated by reading this post.  I want this work to continue, and I don't find the levels of engagement you report surprising or worth massively updating on (i.e. suspending outreach).

I admit when the decision was made to stop actively working on SHIC, I was pretty sad and frustrated too. However for our team, and our funders too, the main question was "do we think this is worth continuing compared to other things we could spend our time and money on?", and the answer was "probably not".

You might also be interested in this post which combines the experience of all the EA outreach attempts I was aware of at the time: https://forum.effectivealtruism.org/posts/L5t3EPnWSj7D3DpGt/high-school-ea-outreach

This probably will answer many of your questions about why we didn't continue to test out different ideas of engaging students and teachers - we'd already tried quite a few different things and learnt from work from other EAs. The post is now nearly 2 years old and there have been other efforts in the EA community to work with high schoolers since then. But I still basically agree with my conclusion which was:

> I don’t think our outreach described in this post was a particularly effective use of resources. However, outreach could be effective if you are able to attract highly promising students to sign up for a program over a longer term. This might be possible if you have a strong brand (such as an association with elite University) allowing you to attract suitable students through schools and other networks, and the resources to run a fellowship-type program with these students.

To answer your specific questions:
1. There were only a few students who engaged significantly out of class, so it is hard to know what to conclude from a small number. Some were quite keen on EA concepts, others were eager to do good, but didn't seem to be particularly excited about ap

I have a recommendation: try to get at least 3 people, so you aren't managing your manager.  I think accountability and social dynamics would be better that way, since:
- I suspect part of why line managers work for most people is that they have some position of authority that makes you feel obligated to satisfy them.  If you are in equal positions, you'd mostly lose that effect. 
- If there are only 2 of you, it's easier to have a cycle of defection where accountability and standards slip.  If you see the other person slacking, you feel more OK with slacking.  Whereas if you don't see the work of your manager, you can imagine that they are always on top of their shit. 
 

1
Ariel_ZJ
3y
I strongly second this

(Sorry, this is a bit stream-of-conscious):

I assume it's because humans rely on natural ecosystems in a variety of ways in order to have the conditions necessary for agriculture, life, etc.  So, like with climate change, the long-term cost of mitigation is simply massive... really these numbers should not be thought of as very meaningful, I think, since the kinds of disruptions and destruction we are talking about are not easily measured in $s.

TBH, I find it not-at-all surprising that saving coral reefs would have a huge impact, since they are basically... (read more)

Yeah...  it's not at all my main focus, so I'm hoping to inspire someone else to do that! :) 

I recommend changing the "climate change" header to something a bit broader (e.g. "environmentalism" or "protecting the natural environment", etc.).  It is a shame that (it seems) climate change has come to eclipse/subsume all other environmental concerns in the public imagination.  While most environmental issues are exacerbated by climate change, solving climate change will not necessarily solve them.

A specific cause worth mentioning is preventing the collapse of key ecosystems, e.g. coral reefs: https://forum.effectivealtruism.org/p... (read more)

2
NunoSempere
3y
With regards to coral reefs, your post is pretty short. In my experience, it's more likely that people will pay more attention to it if you flesh it out a little bit more.
2
NunoSempere
3y
Yeah, this makes sense, thanks.

Thanks for the pointer!  I think many EAs are interested in QS, but I agree it's a bit tangential.

2
EdoArad
3y
Sorry, my comment seems too harsh. The reason I think that this wouldn't be useful is that it is a suggestion/request for someone in the EA community to pretty much make a business for physical QS devices (which probably already exists). If you had written something similar, but concluded with specific suggestions of devices and how to use them, it would have been awesome.

I think that posts on QS, and self-improvement generally, would be awesome on the forum if they would give the readers ideas on how to improve themselves or their productivity, or if the post writer is looking for an actionable answer for something. It might also be nice if a post would just serve as a vehicle to start a conversation around some aspect of QS. I think that this post seems to be a bit too much aimed at persuasion and doesn't generate anything actionable.

IIRC the Ethereum Foundation is using QF somehow.
But it's probably best just to get in touch with someone who knows more of what's going on at RXC.
Not sure who that would be OTTMH, unfortunately.
 

I think you guys are already aware of RadicalXChange.  It's a bit different in focus, but I know they are excited about trying out mechanisms like QV/QF in institutional settings.
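
For readers who haven't seen the mechanism, here is a minimal sketch of the standard quadratic funding matching rule (per Buterin, Hitzig, and Weyl); the function name and the example numbers are illustrative, not anyone's actual implementation:

```python
from math import sqrt

def qf_match(contributions):
    # Under quadratic funding, a project's total funding is
    # (sum of square roots of individual contributions)^2;
    # the matching pool pays the difference over what was raised.
    raised = sum(contributions)
    total = sum(sqrt(c) for c in contributions) ** 2
    return total - raised

# Broad support is rewarded over concentrated support:
print(qf_match([1.0] * 100))  # 100 donors x $1  -> $9,900 match
print(qf_match([100.0]))      # 1 donor x $100   -> $0 match
```

(In practice the match is scaled down to fit a fixed matching pool, but this is the core formula.)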

2
Vicky Clayton
3y
Yes, thank you! Do you know of any institutions which are currently using QV/QF? Thanks

It was a few years back that I looked into it, and I didn't try too hard.  Sad to see the PETA link.
I'm basically looking for a reference that summarizes someone else's research (so I don't have to do my own).

> This doesn't seem like a great use of time. For one thing, I think it gets the psychology of political disagreements backwards. People don't simply disagree with each other because they don't understand each others' words. Rather they'll often misinterpret words to meet political ends.

It's not one or the other.  Anyways, having shared definitions also prevents deliberate/strategic misinterpretation.

> I also question anyone's ability to create such an "objective/apolitical" dictionary. As you note, even the term "woke" can have a negative connotati

... (read more)
1
nathan98000
3y
The existence of a dictionary which claims to be apolitical doesn't mean that people will have shared definitions. Webster's dictionary already exists. This doesn't stop people from having semantic disagreements.

How does one make a "less political" dictionary that explicitly and exclusively deals with political concepts? There's a risk of EA being subsumed under one or another political party, which would make it less credible to those of different political affiliations.  There's also the risk of turning into the kind of dumpster fire of bad faith arguments that many political forums encounter.

There's also the fact that political issues are relatively less neglected.

Do you disagree that the EA community at large seems less excited about multiplier orgs vs. more direct orgs?  

3
MichaelStJules
3y
No, I agree with that.

I'm skeptical of multiplier organizations' relative effectiveness because the EA community doesn't seem that excited about them. 

(P.S.: This is actually probably my #1 reason, as someone who hasn't spent much time thinking about where people should donate.  I suspect a lot of people are wary of seeming too enthusiastic because they don't want EA to look like a pyramid scheme.)

5
MichaelStJules
3y
Open Phil and the CEA Global Health and Development Fund have each made a grant to One for the World before, Open Phil has made grants to Founders Pledge, and the EA Infrastructure Fund has made grants to TLYCS, One for the World, RC Forward, Raising for Effective Giving, Founders Pledge, a tax deductible status project run by Effective Altruism Netherlands, Generation Pledge, http://gieffektivt.no/, Lucius Caviola and Joshua Greene (givingmultiplier.org), EA Giving Tuesday and Effektiv Spenden. Of the 4 EA funds, the EA Infrastructure Fund has paid out the least to date, though, and it looks like they all started paying out in 2017.
4
Jon_Behar
3y
Thanks for sharing your thinking on this! Hopefully this exercise will shed some light on whether that lack of excitement is warranted, or whether it could represent an untapped opportunity.

Aren't grant lotteries a more obvious solution than the three you mention?

1
gavintaylor
3y
I think they could help with some things. But as I wrote here, I am not sure if it would be appropriate to only fund academic research through lotteries. 
  • To some extent, you don't need to.  I don't believe there's a very clear distinction between the 2 camps.
  • To begin with, this university would be viewed as weird and, I suspect, would not be particularly attractive to virtue signalers as a result.  This would help establish a culture of genuine idealists.
  • This is part of the mandate of the admissions decision-makers.  I expect if you had good people, you could do a pretty good job of screening applicants.

     
1
Cienna
3y
Hmm. I apologize, I don’t actually know whether idealists and virtue signalers differ in productivity. I think the motivation matters for what someone will put up with on the way to their goals; maybe some problems are easier for virtue signalers to solve.
1
[anonymous]
3y
Good question, I didn't realize until now that there might not be many measurable interventions that have been carried out for this cause. For now I'd say answer the question as if it meant: charities whose operations and interventions are reviewed and scrutinized by independent entities.

What you describe is part of what I meant by "jadedness".

"If they were actually trying to change the world -- if they were actually strongly motivated to make the world a better place, etc. -- the stuff they learn in college wouldn't stop them."

^ I disagree.  Or rather, I should say, there are a lot of people who are not-so-strongly motivated to make the world a better place, and so get burned out and settle into a typical lifestyle.  I think this outcome would be much less likely at a place like "Change the World University", both because it wou... (read more)

1
Cienna
3y
I think there’s another source of jadedness: things being made unnecessarily difficult. I was explicitly told in school by instructors, “we’re going to make this harder than it needs to be, in arbitrary ways, because real life is like that sometimes, and you need to figure out how to handle it psychologically. Better that you learn to deal with pointless assignments and needlessly difficult problems and petty teammates and vague instructions now than in a job.” Being forewarned made it bearable, I was even grateful for it. And then I forgot this when I changed schools, the new place provided the same kind of challenges but didn’t say so explicitly, and I got frustrated and hurt by the work being unnecessarily difficult. I think some who aren’t warned correctly intuit that someone is deliberately making life harder than it needs to be, and conclude the deck is being stacked against them, instead of concluding this adversity is being created to give them opportunities to learn mental strategies for not getting jaded.

Thanks for that! 
I'm interested if you have other examples.

This one looks similar, but not that similar.  The whole framing/vision is different.

When I visit their webpage, the message I get is: "hey, do you maybe want to opt in to this thing to tell us about yourself because you can't get any real publicity?"

The message I want to send is: "Politicians are job candidates; why don't we make them apply/grovel for a job like everyone else?"

I think I understand what you are doing, and disagree with it being a way of meaningfully addressing my concern.  

It seems like you are calculating the chance that NONE of these results are significant, not the chance that MOST of them ARE (?)

1
MaxRa
3y
Hmm, do you maybe mean "based on a real effect" when you say significant? Because we already know that 10 of the 55 tests came out significant, so I don't understand why we would want to calculate the probability of these results being significant.

I was calculating the probability of seeing the 10 significant differences that we saw, assuming all the differences we observed are not based on real effects but on random variation, or basically p(observing differences in the comparisons that are so high that the t-test with a 5% threshold says 'significant' ten out of 55 times | the differences we saw are all just based on random variation in the data).

In case you find this confusing, that is totally on me. I find significance testing very unintuitive and maybe shouldn't even have tried to explain it. :') Just in case, chapter 11 in Doing Bayesian Data Analysis introduces the topic from a Bayesian perspective and was really useful for me.
> Out of 55 2-sample t-tests, we would expect 2 to come out "statistically significant" due to random chance, but I found 10, so we can expect most of these to point to actually meaningful differences represented in the survey data.

Is there a more rigorous form of this argument?

1
MaxRa
4y
Not sure if that is what you asked for, but here's my attempt to spell this out, almost more to order my own thoughts:
* assuming the null hypothesis "There is no personality difference in [personality trait Y] between people prioritizing vs. not prioritizing [cause area X].", the false-positive rate of the t-test is designed to be 5%
* i.e. even if there is no difference, due to random variation we expect differences in the sample averages anyway, and we only want to decide "There is a difference!" if the difference is big enough/unlikely enough when assuming there is no difference in reality
* we decide to call a difference "significant" if the observed difference is less than 5% likely due to random variation alone
* so, if we do one hundred t-tests where there is actually no difference in reality, by random variation alone we expect 5% of them to show significant differences in the averages of the samples
* the same goes for 55 t-tests, where we expect 55*5% = 2.75 significant results if there is no difference in real life
* so instead seeing 10 significant results is very unlikely when we assume the null hypothesis
* how unlikely can be calculated with the cumulative distribution function of the binomial distribution: 55 repetitions with p=5% gives a probability of 0.04% that 10 or more tests would be significant due to random chance alone
* therefore, given the assumptions of the t-test, there is a 99.96% probability that the observed personality differences are not all due to random variation
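
A minimal Python sketch of that last calculation (assuming scipy is available; the variable names are mine):

```python
from scipy.stats import binom

n_tests, alpha, observed = 55, 0.05, 10

# Expected number of "significant" results if every null is true:
print(n_tests * alpha)  # 2.75

# P(10 or more of 55 tests come out significant under the null);
# sf(k - 1) gives P(X >= k) for a binomial random variable X.
print(binom.sf(observed - 1, n_tests, alpha))  # ~0.0004, i.e. 0.04%
```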
3
ben.smith
4y
I second this question. Intuitively, your argument makes sense and you have something here. But I would have more confidence in the conclusion if a False Discovery Rate correction was applied. This is also called a Benjamini-Hochberg procedure (https://en.wikipedia.org/wiki/False_discovery_rate#Controlling_procedures). In R, the stats package makes it very easy to apply the false discovery rate correction to your statistics - see https://stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html. You would do something like p.adjust(p, method = "BH"), where p is a vector/list of all 55 of your uncorrected p-values from your t-tests.
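
For those working in Python rather than R, a hedged equivalent sketch using statsmodels (the uniform p-values here are placeholders, not the survey's actual values):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 55)  # substitute the 55 real t-test p-values

# Benjamini-Hochberg false discovery rate correction:
reject, p_adjusted, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print(reject.sum(), "tests survive the FDR correction")
```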
5
David_Moss
4y
There are lots of different ways to control for multiple comparisons: https://en.wikipedia.org/wiki/Multiple_comparisons_problem#Controlling_procedures

I just skimmed the post.

> Many of the most pressing threats to the humanity are far more likely to cause collapse than be an outright existential threat with no ability for civilisation to recover.

This claim is not supported, and I think most people who study catastrophic risks (they already coined the acronym C-risk, sorry!) and x-risks would disagree with it.

In fact, civilization collapse is considered fairly unlikely by many, although Toby Ord thinks it hasn't been properly explored (see his recent 80k interview).

AI in particular (which many believe... (read more)

These are not the same thing. A GCR is just anything that's bad on a massive scale; civilization doesn't have to collapse.

2
Davidmanheim
4y
There are a variety of definitions, but most of the GCR literature is in fact concerned with collapse risks. See Nick Bostrom's book on the topic, for example, or Open Philanthropy's definition: https://www.openphilanthropy.org/research/cause-reports/global-catastrophic-risks/global-catastrophic-risks

Overall, I'm intrigued and like this general line of thought. A few thoughts on the post:

  • If you're using earn.com, it's not really email anymore, right? So maybe it's better to think about this as "online messaging".
  • Another (complementary) way to improve email is to make it like facebook where you have to agree to connect with someone before they can message you.
  • Like many ideas about using $$ as a signal, I think it might be better if we instead used a domain-specific credit system, where credits are allotted to indivi
... (read more)

To answer your question: no.

I basically agree with this comment, but I'd add that the "diminishing returns" point is fairly generic, and should be coupled with some arguments about why there are very rapidly diminishing returns in US/China (seems false) or non-trivial returns in Europe (seems plausible, but non-obvious, and also seems to be one of the focuses of the OP).


RE "why look at Europe at all?", I'd say Europe's gusto for regulation is a good reason to be interested (you discuss that stuff later, but for me it's the first reason I'd give). It's also worth mentioning the "right to an explanation" as well as GDPR.





1
stefan.torges
5y
Do you think these points make Europe/the EU more important than the US or China? Otherwise, they don't give a reason for focusing on Europe/the EU over these countries to the extent that this focus is mutually exclusive, which it is to some extent (e.g., you either set up your think tank in Washington DC or Brussels, you either analyze the EU policy-making process or the US one).

Reasons to focus on the EU/Europe over these countries are in my opinion:
* personal fit/comparative advantage
* diminishing returns for additional people to focus on the US/China (should have noted this in the OP)
* threshold effects

Based on the report [1], it's a bit misleading to say that they are a charity doing $35 cataract surgeries. The report seems pretty explicit that donations to the charity are used for other activities.

I strongly agree that independent thinking seems undervalued (in general and in EA/LW). There is also an analogy with ensembling in machine learning (https://en.wikipedia.org/wiki/Ensemble_learning).

By "independent" I mean "thinking about something without considering others' thoughts on it" or something to that effect... it seems easy for people's thoughts to converge too much if they aren't allowed to develop in isolation.

Thinking about it now, though, I wonder if there isn't some even better middle ground; in my experience, group bra... (read more)
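
To make the ensembling analogy above concrete, here is a small simulation (my own illustration, not from the original comment) of why averaging independent estimates beats averaging correlated ones:

```python
import numpy as np

rng = np.random.default_rng(0)
n_thinkers, n_trials = 10, 100_000

def variance_of_group_average(rho):
    # Each thinker's error = shared component + individual component,
    # giving unit-variance errors with pairwise correlation rho.
    shared = rng.standard_normal((n_trials, 1))
    own = rng.standard_normal((n_trials, n_thinkers))
    errors = np.sqrt(rho) * shared + np.sqrt(1 - rho) * own
    return errors.mean(axis=1).var()

print(variance_of_group_average(0.0))  # ~0.10: independent errors mostly cancel
print(variance_of_group_average(0.8))  # ~0.82: correlated errors barely cancel
```

Analytically the group average has error variance rho + (1 - rho)/n, so once thoughts converge (high rho), adding more thinkers barely helps.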

Thanks for writing this. My TL;DR is:

  1. AI policy is important, but we don’t really know where to begin at the object level

  2. You can potentially do 1 of 3 things ATM: A. “disentanglement” research; B. operational support for (e.g.) FHI; C. get in position to influence policy, and wait for policy objectives to be cleared up

  3. Get in touch / Apply to FHI!

I think this is broadly correct, but have a lot of questions and quibbles.

  • I found “disentanglement” unclear. [14] gave the clearest idea of what this might look like. A simple toy example would h
... (read more)
9
RyanCarey
7y
That's the TLDR that I took away from the article too. I agree that "disentanglement" is unclear. The skillset that I previously thought was needed for this was something like IQ + practical groundedness + general knowledge + conceptual clarity, and that feels mostly to be confirmed by the present article.

I have some lingering doubts here as well. I would flesh out an objection to the 'disentanglement'-focus as follows: AI strategy depends critically on government, some academic communities and some companies, which are complex organizations. (Suppose that) complex organizations are best understood by an empirical/bottom-up approach, rather than by top-down theorizing.

Consider the medical establishment that I have experience with. If I got ten smart effective altruists to generate mutually exclusive collectively exhaustive (MECE) hypotheses about it, as the article proposes doing for AI strategy, they would, roughly speaking, hallucinate some nonsense that could be invalidated in minutes by someone with years of experience in the domain.

So if AI strategy depends in critical components on the nature of complex institutions, then what we need for this research may be, rather than conceptual disentanglement, something more like high-level operational experience of these domains. Since it's hard to find such people, we may want to spend the intervening time interacting with these institutions or working within them on less important issues.

Compared to this article, this perspective would de-emphasize the importance of disentanglement, while maintaining the emphasis on entering these institutions, and increasing the emphasis on interacting with and making connections within these institutions.

My main comments:

  1. As others have mentioned: great post! Very illuminating!

  2. I agree value-learning is the main technical problem, although I’d also note that value-learning related techniques are becoming much more popular in mainstream ML these days, and hence less neglected. Stuart Russell has argued (and I largely agree) that things like IRL will naturally become a more popular research topic (but I’ve also argued this might not be net-positive for safety: http://lesswrong.com/lw/nvc/risks_from_approximate_value_learning/)

  3. My main comment wrt the val

... (read more)

My point was that HRAD potentially enables the strategy of pushing mainstream AI research away from opaque designs (which are hard to compete with while maintaining alignment, because you don't understand how they work and you can't just blindly copy the computation that they do without risking safety), whereas in your approach you always have to worry about "how do I compete with an AI that doesn't have an overseer or has an overseer who doesn't care about safety and just lets the AI use whatever opaque and potentially dangerous technique it wa

... (read more)

Will - I think "meta-reasoning" might capture what you mean by "meta-decision theory". Are you familiar with this research (e.g. Nick Hay did a thesis w/Stuart Russell on this topic recently)?

I agree that bounded rationality is likely to loom large, but I don't think this means MIRI is barking up the wrong tree... just that other trees also contain parts of the squirrel.

I'm also very interested in hearing you elaborate a bit.

I guess you are arguing that AIS is a social rather than a technical problem. Personally, I think there are aspects of both, but that the social/coordination side is much more significant.

RE: "MIRI has focused in on an extremely specific kind of AI", I disagree. I think MIRI has aimed to study AGI in as much generality as possible and mostly succeeded in that (although I'm less optimistic than them that results which apply to idealized agents will carry over and produce meaningful insights... (read more)

(cross posted on facebook):

I was thinking of applying... it's a question I'm quite interested in. The deadline is the same as ICML tho!

I had an idea I will mention here: funding pools (see the sketch after this list):

  1. You and your friends whose values and judgement you trust and who all have small-scale funding requests join together.
  2. A potential donor evaluates one funding opportunity at random, and funds all or none of them on the basis of that evaluation.
  3. You have now increased the ratio of funding / evaluation available to a potential donor by a factor of #projects
  4. There is an in
... (read more)
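
A minimal sketch of the mechanism above; everything here (pooled_grant, the evaluate callback) is hypothetical, purely to illustrate the incentive structure:

```python
import random

def pooled_grant(projects, evaluate, amount_per_project):
    # The donor audits one randomly chosen project from the pool,
    # then funds the whole pool iff the audited project passes.
    audited = random.choice(projects)
    funded = evaluate(audited)
    return {p: amount_per_project if funded else 0 for p in projects}
```

One evaluation now moves #projects grants' worth of funding, and because any single member can sink the whole pool, members are incentivized to vet each other's values and judgement before joining together.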
2
vipulnaik
7y
Awesome, excited to see you flesh out your thinking and submit!

I was overall a bit negative on Sarah's post, because it demanded a bit too much attention (e.g. the title), and seemed somewhat polemic. It was definitely interesting, and I learned some things.

I find the most evocative bit to be the idea that EA treats outsiders as "marks".
This strikes me as somewhat true, and sadly short-sighted WRT movement building. I do believe in the ideas of EA, and I think they are compelling enough that they can become mainstream.

Overall, though, I think it's just plain wrong to argue for an unexamined idea of hones... (read more)

Do you have any info on how reliable self-reports are wrt counterfactuals about career changes and EWWC pledging?

I can imagine that people would not be very good at predicting that accurately.

0
Benjamin_Todd
7y
Hi there,

It's definitely hard for people to estimate. When we "impact rate" the plan changes, we also try to make an initial assessment of how much is counterfactually due to us (as well as how much extra impact results, non-counterfactually adjusted). We then do more in-depth analysis of the counterfactuals in crucial cases. Because we think the impact of plan changes is fat-tailed, if we can understand the top 5% of them, we get a reasonable overall picture.

We do this analysis in documents like this: https://80000hours.org/2016/12/has-80000-hours-justified-its-costs/

Each individual case is debatable, but I think there's a large enough volume of cases now to justify that we're having a substantial impact.
1
RyanCarey
7y
One would expect some social acceptability bias that might require substantial creativity and work to measure.

People are motivated both by:

  1. competition and status, and
  2. cooperation and identifying with the successes of a group.

I think we should aim to harness both of these forms of motivation.

"But maybe that's just because I am less satisfied with the current EA "business model"/"product" than most people."

Care to elaborate (or link to something?)

0
John_Maxwell
7y
https://www.facebook.com/groups/effective.altruists/permalink/1263971716992516/

"This is something the EA community has done well at, although we have tended to focus on talent that current EA organization might wish to hire. It may make sense for us to focus on developing intellectual talent as well."

Definitely!! Are there any EA essay contests or similar? More generally, I've been wondering recently if there are many efforts to spread EA among people under the age of majority. The only example I know of is SPARC.
