For low probability of other civilizations, see https://arxiv.org/abs/1806.02404.
Humans don't have obviously formalized goals. But you can formalize human motivation, in which case our final goal is going to be abstract and multifaceted, and it will probably include a very broad sense of well-being. The model applies just fine.
Because it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute. The proof is trivial: it comes from the very stipulation of what a goal is in t...
The Pascal's Mugging thing has been discussed a lot around here. There isn't an equivalence between all causes and muggings because the probabilities and outcomes are distinct and still matter. It's not the case that every religion and every cause and every technology has the same tiny probability of the same large consequences, and you cannot satisfy every one of them because they have major opportunity costs. If you apply EV reasoning to cases like this then you just end up with a strong focus on one or a few of the highest impact issues (...
I think the same sheltering happens if you talk about ignoring small probabilities, even if the probability of the x-risk is in fact extremely small.
The probability that $3000 to AMF saves a life is significant. But the probability that it saves the life of any one particular individual is extremely low. We can divide up the possibility space any number of ways. To me it seems like this is a pretty damning problem for the idea of ignoring small probabilities.
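To make this concrete, here's a toy version of the division; the coverage number and probability are purely illustrative assumptions, not AMF's actual figures:

```python
# Illustrative only: suppose a $3000 donation protects ~1000 people with nets,
# and has a high overall chance of averting one death among them.
p_saves_some_life = 0.8     # assumed overall probability of averting a death
people_protected = 1000     # assumed number of people covered

# If each protected person is equally likely to be the one saved, the chance
# that the donation saves any *particular* individual is tiny:
p_saves_this_person = p_saves_some_life / people_protected
print(p_saves_this_person)  # ~0.0008, far below typical "ignorable" thresholds
```

Whether we call the donation a "significant probability" bet or an "extremely low probability" bet depends entirely on how the possibility space is carved up.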
We can say that the outcome of the AMF donation has lower variance than the outcome of an x-risk do...
Only if this project is assumed to be the best available use of funds. Other things may be better.
>Zeke estimates the direct financial upside of a successful replication to be about 33B$/year. This is a 66000:1 ratio (33B/500K = 66000).
This is not directly relevant, because the money is being saved by other people and governments, who do not normally use their money very well. EAs' money is much more valuable, as it is used much more efficiently than Western people and governments usually use theirs. NB: this is also the reason why EAs should generally be considered funders of last resort.
If the study has a 0.5% (??? I have no idea) chance of leadi...
Last thread you said the problem with the funnel is that it makes the decision arbitrarily dependent upon how far you go. But to stop evaluating possibilities violates the regularity assumption. It seems like you are giving an argument against people who follow solution 1 and reject regularity; it's those people whose decisions depend hugely and arbitrarily on where they define the threshold, especially when a hard limit for p is selected. Meanwhile, the standard view in the premises here has no cutoff.
> One needs a very extreme probability functio...
Going from moderate disease to remission seems to be an increase of about 0.25 QALY/year (https://academic.oup.com/ecco-jcc/article-pdf/9/12/1138/984265/jjv167.pdf). If this research accelerates treatment for sufferers by an average of 10 years then that's an impact of 5 million QALY.
Crohn's also costs $33B per year in the US + major European countries (https://www.ncbi.nlm.nih.gov/pubmed/25258034). If we convert that at a typical Western cost-per-statistical-life-saved of $7M, and the average life saved is +25 QALY, that's another 1.2 mill...
We need to factor in QALY or WALY benefits of health improvement in addition to the money saved by users, but we also need to discount for how many people won't get the new treatment.
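The arithmetic above can be sketched as follows; the ~2 million sufferers is my assumption (it's the population the 5 million QALY total implies), and the financial conversion assumes the $33B/year savings accrue over the same 10 accelerated years:

```python
# Direct health benefit: moderate disease -> remission, treatment accelerated 10 years
qaly_gain_per_year = 0.25      # from the cited QALY paper
years_accelerated = 10         # assumed average acceleration
sufferers = 2_000_000          # assumed; implied by the 5M QALY total
direct_qaly = qaly_gain_per_year * years_accelerated * sufferers
print(f"{direct_qaly:,.0f}")   # 5,000,000 QALY

# Financial benefit: $33B/year over the same 10 years, converted at
# $7M per statistical life saved, +25 QALY per life
financial_qaly = 33e9 * years_accelerated / 7e6 * 25
print(f"{financial_qaly:,.0f}")  # ~1.2 million QALY
```

Both figures would then need the discounts mentioned above for incomplete treatment uptake.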
>The shape of your action profiles depends on your probability function
Are you saying that there is no expected utility just because people have different expectations?
>and your utility function
Well, of course. That doesn't mean there is no expected utility! It's just different for different agents.
>I'm arguing that even if you ignore infinitely valuable outcomes, there's still a big problem having to do with infinitely many possible finite outcomes,
That in itself is not a problem; imagine a uniform distribution from 0 to...
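A discrete sketch of the same point (my example, not from the thread): infinitely many possible finite outcomes are compatible with a perfectly well-defined expected value, so long as the probabilities shrink fast enough.

```python
# Outcome n (utility n) occurs with probability 2^-n, for n = 1, 2, 3, ...
# Infinitely many possible finite outcomes, yet EV = sum(n * 2^-n) = 2.
ev = sum(n * 2**-n for n in range(1, 200))  # partial sum; the tail is negligible
print(round(ev, 6))  # 2.0
```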
>Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won't change the EV much. But occasionally you'll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV.
But the probability of those rare things will be super low. It's not obvious that they'll change the EV as much as nearer term impacts.
This would benefit from an exercise in modeling the utilities and probabilities of a certain intervention to see what the distributi...
Good post, but we shouldn't assume the "funnel" distribution to be symmetric about the line of 0 utility. We can expect that unlikely outcomes are good in expectation, just as we expect that likely outcomes are good in expectation. Your last two images show actions which have an immediate expected utility of 0. But if we are talking about an action with generally good effects, we can expect the funnel (or bullet) to start at a positive number. We also might expect it to follow an upward-sloping line, rather than equally diverging to positive a...
Considering that most people would be unhappy to be told that they're more likely to be a rapist because of their race, we should have a strong prior that many Effective Altruists would feel the same way.
Well, I saw statistics that suggest that I'm more likely to be a rapist since I'm a man, the post explicitly said that I have a 6% chance of being a rapist as a man in EA, and that didn't make me unhappy. And I haven't seen anyone who has actually expressed any personal discomfort at the OP or any of my posts, leaving aside the secondhand outrage expre...
It's nice to imagine things. But I'll wait for actual EAs to tell me about what does or doesn't upset them before drawing conclusions about what they think.
I think it's pretty odd of you to try to tell me about what upsets EAs or how we feel, given that you have already left the movement. To speak as if you have some kind of personal stake or connection to this matter is rather dishonest.
I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.
Racial profiling is something that is conducted by law enforcement and criminal investigation, and EA does neither of those things. I would be much more bothered if EA started trying to hunt for crim...
There are an estimated 276,000 annual cases of female suicide in the entire world (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3367275/). If, say, half of them are associated with sexual violence (guess), and you throw males in as well, then the eventual lifesaving potential is maybe 150,000 people per year.
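Spelled out, with the guesses flagged (the one-half share and the male adjustment are the comment's guesses, not data):

```python
annual_female_suicides = 276_000        # global figure from the cited paper
share_linked_to_sexual_violence = 0.5   # guess, per the comment
female_cases = annual_female_suicides * share_linked_to_sexual_violence
print(f"{female_cases:,.0f}")           # 138,000
# Adding male victims pushes the rough total to ~150,000 per year.
```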
Most of these suicides are in SE Asia and the Western Pacific where I believe healthcare and medication provision are not as comprehensive as they are here in the west.
How long do you think it would take you to upgrade every single estimate to the maximum quality level?
Um I don't know, I just said I would estimate this one number. I think I was clear that I was talking about "this particular question".
Assuming 2,300 people in EA per the survey, for every 100 rape victims:
Out of the 25 rape victims who are spouses or partners of the perpetrator (https://www.rainn.org/statistics/perpetrators-sexual-violence), 20 will be outside of EA, when the offender is in EA.
Out of the 45 rape victims who are acquaintances...
I think you'd get better results if you spent your time simply including things that can easily be included, rather than sparking meta-level arguments about which things are or aren't worth including. You could have accepted the race correlations and then found one or two countervailing considerations to counter the alleged bias for a more comprehensive overall view. That still would have been more productive than this.
The way in which gender is relevant while race is not is that sexual attractions are limited by gender preferences in most humans.
Sexual violence tendencies are correlated with racial status in most humans. Why treat it differently?
Given that most sexually violent people attack one gender but not the other, and given that our gender ratio is very seriously skewed, gender is a critical component of this sexual violence risk estimate.
And given that sexually violent people are disproportionately represented across racial categories, and given that our ...
Well, that's true, depending on how many unscrupulous people you think there are on the EA forum :) Though you don't necessarily need to include all possible adjustments at once to avoid biased updates; you just need to select adjustments via an unbiased process.
Demographics is one of the more obvious and robust things to adjust for, though. It's a very common topic in criminology and social science, with accurate statistics available both for EA and for outside groups. It's a reasonable thing to think about as an easy initial thing to adjust for. You already included adjustment for gender statistics, so racial statistics should go along with that.
It mentions them, but does it make any points based on the assumption that there are too few of them?
Again - I'm not making any demand about putting a lot of effort into the research. I think it's totally okay to make simple, off-the-cuff estimates, as long as better information isn't easy to find.
On this particular question though, we can definitely do better than calculating as if the figure is 100%. I mean, just think about it, think about how many of EAs' social and sexual interactions involve people outside of EA. So of course it's going to be less than 100%, significantly less. Maybe 50%, maybe 75%, we can't come up with a great estimate, but at lea...
Are most acts of sexual violence committed by a select particularly egregious few or by the presumably more common 'casual rapist'? Answering this question is relevant for picking the strategies to focus on.
Lisak and Miller (link repeated for convenience: http://www.davidlisak.com/wp-content/uploads/pdf/RepeatRapeinUndetectedRapists.pdf) give decent data on the distribution. 91% of rapes/attempted rapes are from repeat offenders.
Of course that would be suboptimal, hundreds of hours calculating base rates would certainly not be worthwhile. I'm not offering to do it and I'm not demanding that anyone do it. Hundreds of hours directly studying EA would surely be more worthwhile, I agree on that. All I'm saying is that this information we have now is better than that information which we had an hour ago.
I did not see that note. But for the calculations on the productivity impact, it seemed like one might read them with the assumption that the 80,000 hours in a career are EA career hours. If we don't have enough information to estimate this proportion, that's fine, but it definitely doesn't mean we should implicitly treat it as if it is 100%; after all, it is certainly less than that. What I read of the calculations just didn't make it clear, so I wanted to clarify.
Yes, I saw that part. But first, just because there are lots of unknown factors doesn't mean we should ignore the ones that we do know. Suppose we're too busy to look at anything besides demographics, that's fine, but it doesn't mean that we should deliberately ignore the information that we have about demographics. We'll have an inaccurate estimate, but it's still less inaccurate than the estimate we had before. If you don't/didn't have time to originally do this adjustment, that's fine, like I said you already did a lot of work getting a good statistical...
The second point is irrelevant - what statistic is changed by the prevalence of false rape accusations? The Lisak and Miller study cited for the 6% figure does a survey of self-reports among men on campus.
Are you assuming that crimes committed by people in EA will be towards other people in EA? According to RAINN, 34% of the time the sex offender is a family member. And most EAs have social circles which mostly comprise people who are not in EA, I would think. (This is certainly the case if you take the whole Facebook group to be the EA movement.)
I think that for all intents and purposes we should just use the survey responses as the template for the size of the EA movement, because if someone is on Facebook but is not even involved enough that we can get ...
You can find the stats by going to the right of the page in moderation tools and clicking "traffic stats". They only go back a year though. Redditmetrics.com should show you subscriber counts from before that, but not activity.
The effective altruism subreddit is growing in traffic: https://i.imgur.com/3BSLlgC.png (August figures are 2.5k and 9.5k)
The EA Wikipedia page is not changing much in pageviews: https://tools.wmflabs.org/pageviews/?project=en.wikipedia.org&platform=all-access&agent=user&start=2015-07&end=2017-08&pages=Effective_altruism
I don't think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with "well that research was silly anyway.")
The parenthetical is probably true, e.g. for most of MIRI's traditional agenda. If agents don't quickly gain decisive strategic advantages, then you don't have to get AI design right the first time; you can make many agents and weed out the bad ones. So the basic design desiderata are probably important, but it's ju...
Amateur question: would it help to also include back-of-the-envelope calculations to make your arguments more concrete?
I don't think so. It's too broad and speculative, with ill-defined values. It just boils down to (a) whether my scenarios are more likely than the AI-Foom scenario, and (b) whether my scenarios are more neglected. There aren't many other factors that a complicated calculation could add.
Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that's not the reason the research is being conducted right now.
Yes, but I mean they're not trying to figure out how to do it safely and ethically. The ethics/safety worries are 90% focused around what we have today, and 10% focused on superintelligence.
given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds
This is wholly speculative. I've seen no evidence that consequentialists "feel bad" in any emotionally meaningful sense for having made donations to the wrong cause.
This is the same sort of effect people get from looking at this sort of advertising, but more subtle
Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would eve...
Thanks for the comments.
Evolution doesn't really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.
Evolution favors replication. But patience and resource acquisition aren't obviously correlated with any sort of value; if anything, better resource-acquirers are destructive and competitive. The claim isn't that evolution is intrinsically "against" any particular value, it's t...
Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed.
Only if you assume that there are high thresholds for achievements.
The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum complexity memetic payload.
I do not understand what you are saying.
Edit: do y...
This is odd. Personally my reaction is that I want to get to a project before other people do. Does bad research really make it harder to find good research? This doesn't seem like a likely phenomenon to me.
I think we need more reading lists. There have already been one or two for AI safety, but I've not seen similar ones for poverty, animal welfare, social movements, or other topics.
We all know how many problems there are with reputation and status seeking. You would lower epistemic standards, cement power users, and make it harder for outsiders and newcomers to get any traction for their ideas.
If we do something like this it should be for very specific capabilities, like reliability, skill or knowledge in a particular domain, rather than generic reputation. That would make it more useful and avoid some of the problems.
We do not plan to continue the Pareto Fellowship in its current form this year. While we thought that it was a valuable experiment, the cost per participant was too high relative to the magnitude of plan changes made by the fellows. We might consider running a much shorter version of the program, without the project period, in the future. The Pareto Fellowship did, however, make us more excited about doing other high-touch mentoring and training with promising members of the effective altruism community.
Has anyone thought about retiring in a foreign country where the cost of living is low? That seems like a great idea to me - all the benefits of saving money, without worrying about work opportunities.
A lot of baggage goes into the selection of a threshold for "highly accurate" or "ensured safe" or statements of that sort. The idea is that early safety work helps even though it won't get you a guarantee. I don't see any good reason to believe AI safety to be any more or less tractable than preemptive safety for any other technology; it just happens to have higher stakes. You're right that the track record doesn't look great; however, I really haven't seen any strong reason to believe that preemptive safety is generally ineffective - it seems like it just isn't tried much.