All of Zeke_Sherman's Comments + Replies

A lot of baggage goes into the selection of a threshold for "highly accurate" or "ensured safe" or statements of that sort. The idea is that early safety work helps even though it won't get you a guarantee. I don't see any good reason to believe AI safety to be any more or less tractable than preemptive safety for any other technology; it just happens to have greater stakes. You're right that the track record doesn't look great; however, I really haven't seen any strong reason to believe that preemptive safety is generally ineffective - it seems like it just isn't tried much.

1
Fods12
5y
Hi Zeke, I give some reasons here why I think that such work won't be very effective, namely that I don't see how one can achieve sufficient understanding to control a technology without also attaining sufficient understanding to build that technology. Of course that isn't a decisive argument so there's room for disagreement here.

For low probability of other civilizations, see https://arxiv.org/abs/1806.02404.

Humans don't have obviously formalized goals. But you can formalize human motivation, in which case our final goal is going to be abstract and multifaceted, and it is probably going to include a very broad sense of well-being. The model applies just fine.

Because it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute. The proof is trivial: it comes from the very stipulation of what a goal is in t...

1
Fods12
5y
Hi Zeke! Thanks for the link about the Fermi paradox. Obviously I could not hope to address all arguments about this issue in my critique here. All I meant to establish is that Bostrom's argument does rely on particular views about the resolution of that paradox.

You say 'it is tautologically true that agents are motivated against changing their final goals, this is just not possible to dispute'. Respectfully, I just don't agree. It all hinges on what is meant by 'motivation' and 'final goal'. You also say "it just seems clear that you can program an AI with a particular goal function and that will be all there is to it", and again I disagree. A narrow AI sure, or even a highly competent AI, but not an AI with human-level competence in all cognitive activities. Such an AI would have the ability to reflect on its own goals and motivations, because humans have that ability, and therefore it would not be 'all there is to it'.

Regarding your last point, what I was getting at is that you can change a goal by explicitly rejecting it and choosing a new one, or by changing one's interpretation of an existing goal. This latter method is an alternative path by which an AI could change its goals in practice, even if it still regarded itself as following the same goals it was programmed with. My point isn't that this makes goal alignment not a problem. My point is that it makes 'the AI will never change its goals' an implausible position.

The Pascal's Mugging thing has been discussed a lot around here. There isn't an equivalence between all causes and muggings because the probabilities and outcomes are distinct and still matter. It's not the case that every religion and every cause and every technology has the same tiny probability of the same large consequences, and you cannot satisfy every one of them because they have major opportunity costs. If you apply EV reasoning to cases like this then you just end up with a strong focus on one or a few of the highest impact issues (...

2
Fods12
5y
Thanks for the comment! I agree that the probabilities matter, but then it comes to a question of how these are assessed and weighed against each other. On this basis, I don't think it has been established that AGI safety research has strong claims to higher overall EV than other such potential mugging causes. Regarding the Dutch book issue, I don't really agree with the argument that 'we may as well go with' EV because it avoids these cases. Many people would argue that the limitations of the EV approach, such as having to give a precise probability for all beliefs and not being able to suspend judgement, also do not fit with our picture of 'rational'. It's not obvious why hypothetical better behaviours are more important than these considerations. I am not pretending to resolve this argument; I am just trying to raise the issue as relevant for assessing high-impact, low-probability events - EV is potentially problematic in such cases and we need to talk about this seriously.

I think the same sheltering happens if you talk about ignoring small probabilities, even if the probability of the x-risk is in fact extremely small.

The probability that $3000 to AMF saves a life is significant. But the probability that it saves the life of any one particular individual is extremely low. We can divide up the possibility space any number of ways. To me it seems like this is a pretty damning problem for the idea of ignoring small probabilities.

We can say that the outcome of the AMF donation has lower variance than the outcome of an x-risk do...

2
kokotajlod
5y
Hmmm, good point: if we carve up the space of possibilities finely enough, then every possibility will have a too-low probability. So to make an "ignore small probabilities" solution work, we'd need to include some sort of rule for how to carve up the possibilities. And yeah, this seems like an unpromising way to go... I think the best way to do it would be to say "we lump together all possibilities that have the same utility." The resulting profile of dots would be like a hollow bullet or funnel. If we combined that with an "ignore all possibilities below probability p" rule, it would work. It would still have problems, of course.
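
A minimal sketch of that lump-then-ignore rule, in Python with made-up numbers (the function name, bucket size, and cutoff are illustrative assumptions, not anything specified in the thread):

```python
from collections import defaultdict

def truncated_ev(outcomes, cutoff=1e-6, bucket=1.0):
    # outcomes: list of (probability, utility) pairs.
    # Lump together outcomes whose utilities land in the same bucket,
    # then drop any lump whose *total* probability falls below the cutoff.
    lumps = defaultdict(float)
    for prob, utility in outcomes:
        lumps[round(utility / bucket)] += prob
    return sum(prob * key * bucket
               for key, prob in lumps.items() if prob >= cutoff)

# The AMF point above: each individual recipient's chance of being the
# life saved is far below the cutoff, but lumped by utility the
# possibilities survive instead of all being ignored.
outcomes = [(1e-7, 25.0)] * 5_000_000   # 5M possible beneficiaries
print(truncated_ev(outcomes))           # ~12.5 expected QALYs, not 0
```

The lumping move dodges the carving-up objection because the partition is fixed by the utility function rather than chosen freely.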

Only if this project is assumed to be the best available use of funds. Other things may be better.

>Zeke estimates the direct financial upside of a successful replication to be about 33B$/year. This is a 66000:1 ratio (33B/500K = 66000).

This is not directly relevant, because the money is being saved by other people and governments, who are not normally using their money very well. EAs' money is much more valuable because it is used much more efficiently than Western people and governments usually manage. NB: this is also the reason why EAs should generally be considered funders of last resort.

If the study has a 0.5% (??? I have no idea) chance of leadi...

1
martlau
5y
Hi Zeke, Thanks for the clarification and the estimate for Y. If I understand correctly:
(1) Minimum success probability for project viability is ~0.5% (Y = 0.5%).
(2) Upside following success is 33B$ * 10 years = 330B$ (per your earlier estimate; this needs to be adjusted for many different reasons, both up and down, but those adjustments are beyond my capabilities).
(3) Cost is 500K$.
(4) Expected ROI = (330B$ * 0.5%) / 500K$ = 3,300.
So this means if you find a 100$ bill on the sidewalk and giving it away to someone else statistically gives them ~300K$, you will keep it, but if it statistically gives them 400K$ you will give it away. Is that right?
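
A quick check of this arithmetic in Python (every input is one of the commenters' own rough estimates, not an established figure):

```python
upside_per_year = 33e9   # Zeke's ~$33B/year saved if replication succeeds
years = 10               # assumed duration of the benefit
cost = 500e3             # ~$500K study cost
p_success = 0.005        # the illustrative 0.5% success probability

print(upside_per_year / cost)                      # 66,000:1 raw ratio
print(upside_per_year * years * p_success / cost)  # expected ROI ~3,300
```

At that ROI, a marginal 100$ put toward the project is worth ~330K$ in expectation, which is why the hypothetical giveaway threshold falls between 300K$ and 400K$.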

Last thread you said the problem with the funnel is that it makes the decision arbitrarily dependent upon how far you go. But to stop evaluating possibilities violates the regularity assumption. It seems like you are giving an argument against people who follow solution 1 and reject regularity; it's those people whose decisions depend hugely and arbitrarily on where they define the threshold, especially when a hard limit for p is selected. Meanwhile, the standard view in the premises here has no cutoff.

> One needs a very extreme probability functio...

2
kokotajlod
5y
On solution #6: Yeah, it only works if the profiles really do cancel out. But I classified it as involving the decision rule because if your rule is simply to sum up the utility × probability of all the possibilities, it doesn't matter if they are perfectly symmetric around 0; your sum will still be undefined. Yep, solution #1 involves biting the bullet and rejecting regularity. It has problems, but maybe they are acceptable problems. Solution #2 would be great if it works, but I don't think it will--I regret pushing that to the appendix, sorry! Thanks again for all the comments, btw!

I know. It's ten years of savings, because curing is accelerated by ten years.

Going from moderate disease to remission seems to be an increase of about 0.25 QALY/year (https://academic.oup.com/ecco-jcc/article-pdf/9/12/1138/984265/jjv167.pdf). If this research accelerates treatment for sufferers by an average of 10 years then that's an impact of 5 million QALY.

Crohn's also costs $33B per year in the US + major European countries (https://www.ncbi.nlm.nih.gov/pubmed/25258034). If we convert that at a typical Western cost-per-statistical-life-saved of $7M, and the average life saved is +25 QALY, that's another 1.2 mill...
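
Reproducing the arithmetic behind these estimates in Python (all figures are Zeke's rough inputs; the reply below disputes the last one, and the gap is exactly whether the 10-year acceleration factor is applied to the cost-savings term too):

```python
qaly_gain = 0.25                    # QALY/year, moderate disease -> remission
years = 10                          # assumed acceleration of treatment

# the 5 million QALY figure implies roughly 2 million sufferers benefiting
print(5e6 / (qaly_gain * years))    # 2,000,000

lives_per_year = 33e9 / 7e6         # ~4,714 statistical lives per year
print(lives_per_year * 25)          # ~118K QALY/year (RyanCarey's 120k)
print(lives_per_year * 25 * years)  # ~1.2M QALY over the 10 years
```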

3
RyanCarey
5y
33000/7 * 25 is 120k, not 1.2M.
1
martlau
5y
Hi Zeke, Thank you very much for the detailed analysis! Wow, great work!

We need to factor in QALY or WALY benefits of health improvement in addition to the money saved by users, but we also need to discount for how many people won't get the new treatment.

>The shape of your action profiles depends on your probability function

Are you saying that there is no expected utility just because people have different expectations?

>and your utility function

Well, of course. That doesn't mean there is no expected utility! It's just different for different agents.

>I'm arguing that even if you ignore infinitely valuable outcomes, there's still a big problem having to do with infinitely many possible finite outcomes,

That in itself is not a problem; imagine a uniform distribution from 0 to...

>Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won't change the EV much. But occasionally you'll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV.

But the probability of those rare things will be super low. It's not obvious that they'll change the EV as much as nearer-term impacts.

This would benefit from an exercise in modeling the utilities and probabilities of a certain intervention to see what the distributi...

1
kokotajlod
5y
Yes, if the profiles are not funnel-shaped then this whole thing is moot. I argue that they are funnel-shaped, at least for many utility functions currently in use (e.g. utility functions that are linear in QALYs). I'm afraid my argument isn't up yet--it's in the appendix, sorry--but it will be up in a few days!

If the profiles are funnel-shaped, expected utility is not a thing. The shape of your action profiles depends on your probability function and your utility function. Yes, infinitely valuable outcomes are a problem--but I'm arguing that even if you ignore infinitely valuable outcomes, there's still a big problem having to do with infinitely many possible finite outcomes. Moreover, even if you only consider finitely many outcomes of finite value, if the profiles are funnel-shaped then what you end up doing will be highly arbitrary, determined mostly by whatever is happening at the place where you happened to draw the cutoff.

That's what I'd like to think, and that's what I do think. But this argument challenges that; this argument says that the low-hanging fruit metaphor is inappropriate here: there is no lowest-hanging fruit or anything close; there is an infinite series of fruit hanging lower and lower, such that for any fruit you pick, if only you had thought about it a little longer you would have found an even lower-hanging fruit that would have been so much easier to pick that it would easily justify the cost in extra thinking time needed to identify it... Moreover, you never really "pick" these fruit, in that the fruit are gambles, not outcomes; they aren't actually what you want, they are just tickets that have some chance of getting what you want. And the lower the fruit, the lower the chance...

Good post, but we shouldn't assume the "funnel" distribution to be symmetric about the line of 0 utility. We can expect that unlikely outcomes are good in expectation, just as we expect that likely outcomes are good in expectation. Your last two images show actions which have an immediate expected utility of 0. But if we are talking about an action with generally good effects, we can expect the funnel (or bullet) to start at a positive number. We also might expect it to follow an upward-sloping line, rather than equally diverging to positive a...

5
kokotajlod
5y
I'm not assuming it's symmetric. It probably isn't symmetric, in fact. Nevertheless, it's still true that the expected utility of every action is undefined, and that if we consider increasingly large sets of possible outcomes, the partial sums will oscillate wildly the more we consider. Yes, at any level of probability there should be a higher density of outcomes towards the center. That doesn't change the result, as far as I can tell.

Imagine you are adding new possible outcomes to consideration, one by one. Most of the outcomes you add won't change the EV much. But occasionally you'll hit one that makes everything that came before look like a rounding error, and it might flip the sign of the EV. And this occasional occurrence will never cease; it'll always be true that if you keep considering more possibilities, the old possibilities will continue to be dwarfed and the sign will continue to flip. You can never rest easy and say "This is good enough"; there will always be more crucial considerations to uncover.

So this is a problem in theory--it means we are approximating an ideal which is both stupid and incoherent--but is it a problem in practice? Well, I'm going to argue in later posts in this series that it isn't. My argument is basically that there are a bunch of reasonably plausible ways to solve this theoretical problem without undermining long-termism.

That said, I don't think we should dismiss this problem lightly. One thing that troubles me is how superficially similar the failure mode I describe here is to the actual history of the EA movement: People say "Hey, let's actually do some expected value calculations" and they start off by finding better global poverty interventions, then they start doing this stuff with animals, then they start talking about the end of the world, then they start talking about evil robots... and some of them talk about simulations and alternate universes... Arguably this behavior is the predictable result of consi...
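
A toy simulation of the dynamic described above, with assumed numbers chosen only to make the funnel vivid: each successive outcome considered is half as likely but three times higher-stakes, so its probability-weighted contribution dwarfs everything before it and can flip the sign.

```python
import random

random.seed(0)
ev = 0.0
for k in range(1, 21):
    prob = 2.0 ** -k                             # half as likely each step...
    utility = random.choice([-1, 1]) * 3.0 ** k  # ...but 3x higher stakes
    ev += prob * utility
    print(k, round(ev, 2))

# Each term has magnitude (3/2)^k, which grows without bound, so the
# running "expected utility" oscillates ever more wildly and never converges.
```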

> Considering that most people would be unhappy to be told that they're more likely to be a rapist because of their race, we should have a strong prior that many Effective Altruists would feel the same way.

Well, I saw statistics that suggest that I'm more likely to be a rapist since I'm a man; the post explicitly said that I have a 6% chance of being a rapist as a man in EA, and that didn't make me unhappy. And I haven't seen anyone who has actually expressed any personal discomfort at the OP nor any of my posts, leaving aside the secondhand outrage expre...

It's nice to imagine things. But I'll wait for actual EAs to tell me about what does or doesn't upset them before drawing conclusions about what they think.

-2
Liam_Donovan
6y
Considering that most people would be unhappy to be told that they're more likely to be a rapist because of their race, we should have a strong prior that many Effective Altruists would feel the same way. What strong evidence do you have that, in fact, minorities in EA are just fine with being told their race makes them more likely to be rapists? Seems like a very strange assumption.

Apart from Lila's argument, this "non-white people are more likely to be rapists" claim is a terrible line of thinking because (IMO) it's likely to build racist modes of thought: assigning negative characteristics to minorities based on dubious evidence seems very likely to strengthen bad cognitive patterns and weaken good judgement around related issues. If the evidence were incontrovertible, this might be acceptable, but it's nowhere near the required standard of proof to overcome the strong prior that humans are equally likely to commit crimes regardless of race (among other reasons, because race is largely a social construct).

Additionally, the long history of using false statistics and "science" to bolster white supremacy should make one more skeptical of numbers like this.

I think it's pretty odd of you to try to tell me about what upsets EAs or how we feel, given that you have already left the movement. To speak as if you have some kind of personal stake or connection to this matter is rather dishonest.

> I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.

Racial profiling is something that is conducted by law enforcement and criminal investigation, and EA does neither of those things. I would be much more bothered if EA started trying to hunt for crim...

0
Lila
6y
It's often useful to be able to imagine what will be upsetting to other people and why, even if it's not upsetting to you. Maybe you'll decide that it's worth hurting people, but at least make your decisions with an accurate model of the world. (By the way, "because they're oversensitive" doesn't count as an explanation.) So let's try to think about why someone might be upset if you told them that they're more likely to be a rapist because of their race. I can think of a few reasons:

* They feel afraid for their personal safety.
* They feel it's unfair to be judged for something they have no control over.
* They feel self-conscious and humiliated.

Emotional Turing tests might be a good habit in general.

There are an estimated 276,000 annual cases of female suicide in the entire world (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3367275/). If, say, half of them are associated with sexual violence (guess), and you throw males in as well, then the eventual lifesaving potential is maybe 150,000 people per year.

Most of these suicides are in SE Asia and the Western Pacific, where I believe healthcare and medication provision are not as comprehensive as they are in the West.

1
Kathy_Forth
6y
The per-year incidence is a totally different type of number from the numbers I used. The numbers I used cover a much longer time span. Comparing 276,000 annual cases to the number 3,653,846 is comparing apples to oranges. It is not clear that your intent was to disagree with me. If you are throwing in an additional reference, I can't incorporate that because the other research I referred to wasn't using annual figures. I suppose it's interesting as something to check against. For an outrageously crude way to do that, you can multiply 276,000 by 80, the number of years in the average female lifespan (for one country), and compare a hacked-together lifetime rate to my hacked-together 3,653,846.
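
Making that "outrageously crude" cross-check explicit (both figures are rough numbers from the thread, not established statistics):

```python
annual_female_suicides = 276_000
avg_female_lifespan = 80   # years, for one country

# hacked-together lifetime rate vs. the 3,653,846 figure above
print(annual_female_suicides * avg_female_lifespan)  # 22,080,000
```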

> How long do you think it would take you to upgrade every single estimate to the maximum quality level?

Um, I don't know. I just said I would estimate this one number. I think I was clear that I was talking about "this particular question".

Assuming 2,300 people in EA per the survey, for every 100 rape victims:

Out of the 25 rape victims who are spouses or partners of the perpetrator (https://www.rainn.org/statistics/perpetrators-sexual-violence), 20 will be outside of EA when the offender is in EA.

Out of the 45 rape victims who are acquaintances...

I think you'd get better results if you spent your time simply including things that can easily be included, rather than sparking meta-level arguments about which things are or aren't worth including. You could have accepted the race correlations and then found one or two countervailing considerations to counter the alleged bias for a more comprehensive overall view. That still would have been more productive than this.

> The way in which gender is relevant while race is not is that sexual attractions are limited by gender preferences in most humans.

Sexual violence tendencies are correlated with racial status in most humans. Why treat it differently?

> Given that most sexually violent people attack one gender but not the other, and given that our gender ratio is very seriously skewed, gender is a critical component of this sexual violence risk estimate.

And given that sexually violent people are disproportionately represented across racial categories, and given that our ...

1
Lila
6y
I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA. This unpleasant tangent is a great example of why applying aggregate statistics to actual people isn't a good strategy. It should be clear why people find the following statements upsetting:

* Statistically, there are X rapists in the EA community.
* Statistically, as a man/black person/Mexican/non-college grad/Muslim, there is X probability you're a rapist.

Let's please not go down this path.
-1
Kathy_Forth
6y
I am already aware of a pretty large number of correlations between sexual violence and a lot of different things. I'm telling you that there are a bunch of other things on that list I provided which would significantly alter the result of the estimate. I'm definitely not going to alter the estimate to incorporate just race. I am definitely not going to alter the estimate to incorporate the entire list. I think the most worthwhile way of getting a better estimate is to do a study, so I will not put further time into this discussion.

Well, that's true, depending on how many unscrupulous people you think there are on the EA forum :) Though you don't necessarily need to include all possible adjustments at once to avoid biased updates; you just need to select adjustments via an unbiased process.

Demographics is one of the more obvious and robust things to adjust for, though. It's a very common topic in criminology and social science, with accurate statistics available both for EA and for outside groups. It's a reasonable thing to think about as an easy initial thing to adjust for. You already included adjustment for gender statistics, so racial statistics should go along with that.

1
Kathy_Forth
6y
The way in which gender is relevant while race is not is that sexual attractions are limited by gender preferences in most humans. Given that most sexually violent people attack one gender but not the other, and given that our gender ratio is very seriously skewed, gender is a critical component of this sexual violence risk estimate. Given that you believe a race adjustment should go with gender adjustment, I don't see why you are not also advocating for all of the following:

* age
* marital status
* literacy
* education
* employment status
* occupation
* geographical location
* place of birth
* previous residence
* language
* religion
* nationality
* ethnicity
* citizenship

It mentions them, but does it make any points based on the assumption that there are too few of them?

3
Alex_Barry
6y
The 'Strike a balance between dismissing accusations and witch-hunting people' section is about how to act given that accusations have some small (but non-negligible) chance of being false. If we instead learned that the true rate differed strongly from this, it seems reasonable that the advice would also change. (E.g. if it turned out that there were never any false accusations, we could act much more strongly on the basis of accusations.) I also think that, generally, if a piece of writing states a fact you think is false, it is reasonable (and should be encouraged) to bring it up in the comments, even if it is not central to the argument's conclusion.

Again - I'm not making any demand about putting a lot of effort into the research. I think it's totally okay to make simple, off-the-cuff estimates, as long as better information isn't easy to find.

On this particular question though, we can definitely do better than calculating as if the figure is 100%. I mean, just think about it: think about how many of EAs' social and sexual interactions involve people outside of EA. So of course it's going to be less than 100%, significantly less. Maybe 50%, maybe 75%; we can't come up with a great estimate, but at lea...

2
Kathy_Forth
6y
I'm glad to hear you would find that easy, Zeke. I made dozens of estimations in this article, and decided that instead of upgrading every single one of them to the maximum level of quality, I should focus on higher value things like raising awareness and persuading people to test methods of sexual violence reduction and doing in-depth evaluations of the two scalable sexual violence reduction methods. Unfortunately, I don't have time to upgrade all these estimations to the maximum level myself.

How long do you think it would take you to upgrade every single estimate to the maximum quality level? (I'll just let you count the number of estimations in the article since they're right there.) Would you be up for meeting my quality standards if I laid them out as a set of criteria? Please provide your estimate as the number of hours you will require to upgrade every single estimate in the article to the absolute maximum level of quality. Also, would you be able to do this for free? I'm in the middle of a career change. (I normally wouldn't ask, but you said you would find it easy and asking can't hurt!) Thanks.

> Are most acts of sexual violence committed by a select particularly egregious few or by the presumably more common 'casual rapist'? Answering this question is relevant for picking the strategies to focus on.

Lisak and Miller (link repeated for convenience: http://www.davidlisak.com/wp-content/uploads/pdf/RepeatRapeinUndetectedRapists.pdf) give decent data on the distribution. 91% of rapes/attempted rapes are from repeat offenders.

0
Kathy_Forth
6y
Does this contain detailed enough information on the different kinds of perps that you can actually use it to target the worst type? That's the part I'm concerned will be missing.

Of course that would be suboptimal, hundreds of hours calculating base rates would certainly not be worthwhile. I'm not offering to do it and I'm not demanding that anyone do it. Hundreds of hours directly studying EA would surely be more worthwhile, I agree on that. All I'm saying is that this information we have now is better than that information which we had an hour ago.

1
Kathy_Forth
6y
Actually, to avoid bias when adjusting a prior, we really need to include as many adjustments as possible all at once. Otherwise, unscrupulous people can just come along and say "Let's adjust these three things!", which all make the risk look smaller, thereby misleading people into thinking that the risk is negligible. Or an ordinary biased human being could come along and accidentally ask for ten things to be adjusted which all just so happen to make the risk look super exaggerated. We'll have a lot of vulnerability to various biases if we adjust stuff without careful consideration.

Also, if we think it is always better to chuck in arbitrary adjustments, then this creates an incentive for people to come along with a pet political belief and try to have everyone include it everywhere all the time, just for the sake of promoting their pet belief constantly. One arbitrarily selected adjustment is not better.

I did not see that note. But for the calculations on the productivity impact, it seemed like one might read it with the assumption that the 80,000 hours in a career are EA career hours. If we don't have enough information to make an estimate on this proportion, that's fine, but it definitely doesn't mean that we should implicitly treat it as if it is 100%; after all it is certainly less than that. What I read of the calculations just didn't make it clear, so I wanted to clarify.

2
Kathy_Forth
6y
I am using estimates to make other estimates. I clearly labelled each estimate as an estimate. It would be nice to have high-quality data, such as from doing our own studies. First, someone needs to do an estimate to show why the research questions are interesting enough to invest in studies. I am doing the sorts of estimates that show why certain research questions are interesting. These estimates might inspire someone to fund a study.

Yes, I saw that part. But first, just because there are lots of unknown factors doesn't mean we should ignore the ones that we do know. Suppose we're too busy to look at anything besides demographics; that's fine, but it doesn't mean that we should deliberately ignore the information that we have about demographics. We'll have an inaccurate estimate, but it's still less inaccurate than the estimate we had before. If you don't/didn't have time to originally do this adjustment, that's fine; like I said, you already did a lot of work getting a good statistical...

-1
Kathy_Forth
6y
It's not clear that spending hundreds of hours updating this estimate to include dozens of factors is worthwhile. We could instead do our own undetected rapist study on the EA population with that kind of time. Do you have a few hundred hours for this, and the research background needed? Do you want to fund a researcher to do it?

The second point is irrelevant - what statistic is changed by the prevalence of false rape accusations? The Lisak and Miller study cited for the 6% figure does a survey of self-reports among men on campus.

2
Alex_Barry
6y
The post explicitly talks about false rape accusations, so his second point does not seem irrelevant to me? (Although it is clearly irrelevant to the 6% figure).

Are you assuming that crimes committed by people in EA will be towards other people in EA? According to RAINN, 34% of the time the sex offender is a family member. And most EAs have social circles which mostly comprise people who are not in EA, I would think. (This is certainly the case if you take the whole Facebook group to be the EA movement.)

I think that for all intents and purposes we should just use the survey responses as the template for the size of the EA movement, because if someone is on Facebook but is not even involved enough that we can get ...

-1
Kathy_Forth
6y
"Note 3: We cannot assume that EA rapists target only other EAs. Sometimes, they might target people outside the social network. We cannot assume that EAs are targeted only by EA rapists. Sometimes they might be targeted by people outside the social network. Depending on how much of an EA’s social life consists of contact with other EAs and also depending on how sociable they are, their individual risk will vary. There is not enough lifestyle information available on EAs for me to include numbers on this into the estimate." I am beginning to wonder if you read carefully because it looks like you missed multiple things that were already addressed.
1
Kathy_Forth
6y
While some people are so uninvolved that they would not take the EA survey, others are so very busy that they might not take the EA survey either, even though they should be counted. Unless research is done to determine what percentage of EA takes the EA survey, we cannot assume that it is accurate. For that reason, I am using the total number of EAs from the survey as the low estimate. For the high estimate, I am using the EA Facebook group. The exact number of EAs is unknown but probably lies between these two figures. So, as an estimate, there are probably between 2,352 and 13,861 people in the effective altruism movement, like I mentioned.
0
Kathy_Forth
6y
The study on the left will say race A commits more crimes while the study on the right will say it's race B. Do people of a particular race commit more crimes, or are they just more likely to be convicted due to prejudice? As I said, incorporating all these other factors would be very complicated. "It could easily require an article of the same length as this one, just to create an estimate which takes all known relevant factors into account. To ensure enough time for the other parts of this article, a simple rough estimate has been created based on information about the overall population. Please remember that this is an estimate." I feel like you didn't read the quoted part there.

You can find the stats by going to the right of the page in moderation tools and clicking "traffic stats". They only go back a year though. Redditmetrics.com should show you subscriber counts from before that, but not activity.

1
Peter Wildeford
7y
Thanks, I added the Reddit stats to the article!
0
vipulnaik
7y
The subreddit stats used to be public (or rather, moderators could choose to make them public) but that option was removed by Reddit a few months ago. https://www.reddit.com/r/ModSupport/comments/6atvgi/upcoming_changes_view_counts_users_here_now_and/ I discussed Reddit stats a little bit in this article: https://www.wikihow.com/Understand-Your-Website-Traffic-Variation-with-Time
0
Peter Wildeford
7y
Thanks. Good idea to include the subreddit. How do you get those stats? We can add traffic from the r/smartgiving subreddit too (r/effectivealtruism precursor) to go back prior to 2016. I have EA Wikipedia pageview data in my post already. :)

> I don't think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with "well that research was silly anyway.")

The parenthetical is probably true, e.g. for most of MIRI's traditional agenda. If agents don't quickly gain decisive strategic advantages then you don't have to get AI design right the first time; you can make many agents and weed out the bad ones. So the basic design desiderata are probably important, but it's ju...

Amateur question: would it help to also include back-of-the-envelope calculations to make your arguments more concrete?

Don't think so. It's too broad and speculative, with ill-defined values. It just boils down to (a) whether my scenarios are more likely than the AI-Foom scenario, and (b) whether my scenarios are more neglected. There are not many other factors that a complicated calculation could add.

Very little research is done on advancing AI towards AGI, while a large portion of neuroscience research and also a decent amount of nanotechnology research (billions of dollars per year between the two) are clearly pushing us towards the ability to do WBE, even if that's not the reason that research is being conducted right now.

Yes, but I mean they're not trying to figure out how to do it safely and ethically. The ethics/safety worries are 90% focused around what we have today, and 10% focused on superintelligence.

> given how popular consequentialism is around here, would be likely to make certain sorts of people feel bad for not donating to EA Funds

This is wholly speculative. I've seen no evidence that consequentialists "feel bad" in any emotionally meaningful sense for having made donations to the wrong cause.

> This is the same sort of effect people get from looking at this sort of advertising, but more subtle

Looking at that advertising slightly dulled my emotional state. Then I went on about my day. And you are worried about something that would eve...

Thanks for the comments.

> Evolution doesn't really select against what we value, it just selects for agents that want to acquire resources and are patient. This may cut away some of our selfish values, but mostly leaves unchanged our preferences about distant generations.

Evolution favors replication. But patience and resource acquisition aren't obviously correlated with any sort of value; if anything, better resource-acquirers are destructive and competitive. The claim isn't that evolution is intrinsically "against" any particular value, it's t...

4
Paul_Christiano
7y
I don't think this is true in very many interesting cases. Do you have examples of what you have in mind? (I might be pulling a no-true-scotsman here, and I could imagine responding to your examples with "well that research was silly anyway.") Whether or not your system is rebuilding the universe, you want it to be doing what you want it to be doing. Which "multi-agent dynamics" do you think change the technical situation?

If evolution isn't optimizing for anything, then you are left with the agents' optimization, which is precisely what we wanted. I thought you were telling a story about why a community of agents would fail to get what they collectively want. (For example, a failure to solve AI alignment is such a story, as is a situation where "anyone who wants to destroy the world has the option," as is the security dilemma, and so forth.)

We are probably on the same page here. We should figure out how to build AI systems so that they do what we want, and we should start implementing those ideas ASAP (and they should be the kind of ideas for which that makes sense). When trying to figure out whether a system will "do what we want" we should imagine it operating in a world filled with massive numbers of interacting AI systems all built by people with different interests (much like the world is today, but more).

You're right. Unsurprisingly, I have a similar view about the security dilemma (e.g. think about automated arms inspections and treaty enforcement; I don't think the effects of technological progress are at all symmetrical in general). But if someone has a proposed intervention to improve international relations, I'm all for evaluating it on its merits. So maybe we are in agreement here.

> Optimizing for a narrower set of criteria allows more optimization power to be put behind each member of the set. I think it is plausible that those who wish to do the most good should put their optimization power behind a single criterion, as that gives it some chance to actually succeed.

Only if you assume that there are high thresholds for achievements.

> The best candidate afaik is right to exit, as it eliminates the largest possible number of failure modes in the minimum complexity memetic payload.

I do not understand what you are saying.

Edit: do y...

2
RomeoStevens
7y
Right to exit means the right to suicide, the right to exit geographically, the right to not participate in a process politically, etc.

This is odd. Personally my reaction is that I want to get to a project before other people do. Does bad research really make it harder to find good research? This doesn't seem like a likely phenomenon to me.

1
Raemon
7y
How could bad research not make it harder to find good research? When you're looking for the research, you have to look through additional things before you find the good research, and good research is fairly costly to ascertain in the first place.

I think we need more reading lists. There have already been one or two for AI safety, but I've not seen similar ones for poverty, animal welfare, social movements, or other topics.

0
Benjamin_Todd
7y
Here's a general purpose one: https://80000hours.org/articles/further-reading/

We all know how many problems there are with reputation and status seeking. You would lower epistemic standards, cement power users, and make it harder for outsiders and newcomers to get any traction for their ideas.

If we do something like this it should be for very specific capabilities, like reliability, skill or knowledge in a particular domain, rather than generic reputation. That would make it more useful and avoid some of the problems.

1
mako yass
2y
That was probably the most load-bearing thought in my web-of-trust-based social network project. The lack of specificity about what endorsements mean is the reason twitter doesn't work (but would if it allowed and encouraged having a lot more alts), and I believe that once you've distinguished the kinds of trust, you'll have a very different, much more useful kind of thing.

Pareto Fellowship was shut down? When? What happened?

> We do not plan to continue the Pareto Fellowship in its current form this year. While we thought that it was a valuable experiment, the cost per participant was too high relative to the magnitude of plan changes made by the fellows. We might consider running a much shorter version of the program, without the project period, in the future. The Pareto Fellowship did, however, make us more excited about doing other high-touch mentoring and training with promising members of the effective altruism community.

From CEA's 2017 Fundraising Report.

Has anyone thought about retiring in a foreign country where the cost of living is low? That seems like a great idea to me - all the benefits of saving money, without worrying about work opportunities.

1
david_reinstein
7y
Moving to a low-income foreign country could indirectly help the people in that country, if you buy goods and pay taxes there, create jobs, etc.