All of Flodorner's Comments + Replies

Good news on climate change

Do you have thoughts on how potentially rising inflation could affect emission pathways and the relative cost of renewables? I have heard the argument that the associated rise in the cost of capital could be pretty bad, because most costs associated with renewables are capital costs, while fuel costs dominate for fossil energy. 
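To make the mechanism concrete, here's a toy levelized-cost sketch (all numbers and the simplified formula are my own illustrative assumptions, not from any source):

```python
# Toy sketch of the cost-of-capital mechanism; all numbers are illustrative.
def lcoe(capex, annual_fuel_cost, lifetime_years, discount_rate):
    """Rough per-year levelized cost: annualized capex plus fuel."""
    r, n = discount_rate, lifetime_years
    capital_recovery_factor = r * (1 + r) ** n / ((1 + r) ** n - 1)
    return capex * capital_recovery_factor + annual_fuel_cost

for r in (0.03, 0.08):  # low vs. high cost of capital
    renewable_like = lcoe(capex=1000, annual_fuel_cost=10, lifetime_years=25, discount_rate=r)
    fossil_like = lcoe(capex=300, annual_fuel_cost=60, lifetime_years=25, discount_rate=r)
    print(r, round(renewable_like, 1), round(fossil_like, 1))
# At r=0.03 the capital-heavy option is cheaper (67.4 vs 77.2);
# at r=0.08 the ordering flips (103.7 vs 88.1).
```

So under these toy numbers, a rise in the cost of capital alone is enough to flip the cost ranking between a capital-heavy and a fuel-heavy technology.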

UK's new 10-year "National AI Strategy," released today

Huh? I did not like the double-page style for the non-mobile pdf, as it required some manual rescaling on my PC.

And the mobile version has the main table cut between two pages in a pretty horrible way. I think I would have much preferred a single pdf in the mobile/single page style that is actually optimized for that style, rather than this.

Maybe I should have used the HTML version instead?

JP Addison (2mo): I admittedly used the html version.
UK's new 10-year "National AI Strategy," released today

More detailed action points on safety from page 32: 

The Office for AI will coordinate cross-government processes to accurately assess long term AI safety and risks, which will include activities such as evaluating technical expertise in government and the value of research infrastructure. Given the speed at which AI developments are impacting our world, it is also critical that the government takes a more precise and timely approach to monitoring progress on AI, and the government will work to do so. 


The government will support the safe and ethic... (read more)

When pooling forecasts, use the geometric mean of odds

I don't think I get your argument for why the approximation should not depend on the downstream task. Could you elaborate? 

I am also a bit confused about the relationship between spread and resiliency: a larger spread of forecasts does not necessarily imply weaker evidence. For a relatively rare event about which some forecasters could acquire insider information, a large spread might actually give you stronger evidence. 

Imagine E is about the future enactment of a quite unusual government policy, and one of your forecaster... (read more)

Jsevillamol (2mo): Your best approximation of the summary distribution p̂ = P(E | p_1, ..., p_N) is already "as good as it can get". You think we should be cautious and treat this probability as if it could be higher for precautionary reasons? Then I argue that you should treat it as higher, regardless of how you arrived at the estimate. In the end this circles back to basic Bayesian / utility theory - in the idealized framework your credences about an event should be represented as a single probability. Departing from this idealization requires further justification. You are right that "weaker evidence" is not exactly correct - this is more about the expected variance introduced by hypothetical additional predictions. I've realized I am confused about what is the best way to think about this in formal terms, so I wonder if my intuition was right after all.
When pooling forecasts, use the geometric mean of odds

This seems to connect to the concept of f-means: If the utility for an option is proportional to f(p), then the expected utility of your mixture model is equal to the expected utility using the f-mean of the experts' probabilities p_1 and p_2, defined as f^{-1}((f(p_1) + f(p_2))/2), as the f in the utility calculation cancels out the f^{-1}. If I recall correctly, all aggregation functions that fulfill some technical conditions on a generalized mean can be written as an f-mean.  
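As a quick sketch of the idea (my own illustration; taking f = log-odds as an assumption, since that choice recovers the geometric mean of odds):

```python
import math

# Sketch: quasi-arithmetic (f-)mean pooling of expert probabilities.
def f_mean(probs, f, f_inv):
    return f_inv(sum(f(p) for p in probs) / len(probs))

log_odds = lambda p: math.log(p / (1 - p))
sigmoid = lambda x: 1 / (1 + math.exp(-x))

probs = [0.1, 0.5, 0.8]
print(f_mean(probs, log_odds, sigmoid))         # geometric mean of odds, as a probability
print(f_mean(probs, math.log, math.exp))        # geometric mean of probabilities
print(f_mean(probs, lambda p: p, lambda p: p))  # arithmetic mean of probabilities
```

Each pooling rule in the post then just corresponds to a different choice of f.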

In the first example... (read more)

Toby_Ord (3mo): Thanks — I hadn't heard of f-means before and it is a useful concept, and relevant here.
More undergraduate or just-graduated students should consider getting jobs as research techs in academic labs

I wanted to flag that many PhD programs in Europe might require you to have a Master's degree, or to essentially complete the coursework for a Master's degree during your PhD (as seems to be the case in the US), depending on the kind of undergraduate degree you hold. Obviously, the arguments regarding funding might still partially hold in that case. 

vbelenky (3mo): I think this is pretty much the case for many (especially non-STEM) fields in the US, too--my sense is that it's a consequence of funding/competition.
What are the EA movement's most notable accomplishments?

Do you have a specific definition of AI Safety in mind? From my (biased) point of view, it looks like a large fraction of the work that is explicitly branded "AI Safety" is done by people who are at least somewhat adjacent to the EA community. But this becomes a lot less true if you widen the definition to include all work that could be called "AI Safety" (so anything that could conceivably help with avoiding any kind of dangerous malfunction of AI systems, including small scale and easily fixable problems).

Mauricio (3mo): Thanks for flagging this! I didn't have a very specific definition in mind. I was roughly thinking of this cluster of traits:
* calls itself "AI Safety"
* is concerned with the alignment problem
* is concerned with making AI systems safe in the long term
Using a narrower definition of the field at least seems consistent with how fields are usually defined. For example, the field that calls itself "Economics" is much smaller than all work that could conceivably be relevant to economics (which could include much of psychology, political science, sociology, history, statistics, math...).
AMA: The new Open Philanthropy Technology Policy Fellowship

Relatedly, what is the likelihood that future iterations of the fellowship might be less US-centric, or include visa sponsorship?

A large portion of the value from programs like this comes from boosting fellows into career paths where they spend at least some time working in the US government, and many of the most impactful government roles require US citizenship. We are therefore mainly focused on people who have (a plausible pathway to) citizenship and are interested in US government work. Legal and organizational constraints mean it is unlikely that we will be able to sponsor visas even if we run future rounds.

This program is US-based because the US government is especially impor... (read more)

Apply to the new Open Philanthropy Technology Policy Fellowship!

The job posting states: 

"All participants must be eligible to work in the United States and willing to live in Washington, DC, for the duration of their fellowship. We are not able to sponsor US employment visas for participants; US permanent residents (green card holders) are eligible to apply, but fellows who are not US citizens may be ineligible for placements that require a security clearance."

So my impression would be that it would be pretty difficult for non-US citizens who do not already live in the US to participate. 

Mauricio (4mo): More detail from the footnote on the posting, for those interested:
What previous work has been done on factors that affect the pace of technological development?

https://en.wikipedia.org/wiki/Technological_transitions might be relevant.

The Geels book cited in the article (Geels, F.W., 2005. Technological Transitions and System Innovations. Cheltenham: Edward Elgar Publishing.) has a bunch of interesting case studies I read a while ago, and a (I think popular) framework for technological change, but I am not sure the framework is sufficiently precise to be very predictive (and thus empirically testable). 

I don't have any particular sources on this, but the economic literature on the effects of regulation migh... (read more)

Is there evidence that recommender systems are changing users' preferences?

Facebook has at least experimented with using deep reinforcement learning to adjust its notifications, according to https://arxiv.org/pdf/1811.00260.pdf . Depending on which exact features they used for the state space (i.e. whether they are causally connected to preferences), the trained agent would at least theoretically have an incentive to change users' preferences. 
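To make that incentive concrete, here's a toy value-iteration sketch (entirely my own construction, not anything from the paper): a far-sighted agent will accept a short-term engagement cost to move the user into a "shifted preferences" state, while a myopic bandit would not.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's system): a two-state MDP where
# "nudging" shifts the user into a higher-engagement preference state at a
# short-term cost. A myopic bandit (gamma = 0) never nudges.
gamma = 0.95
# states: 0 = original preferences, 1 = shifted preferences
# actions: 0 = serve content as usual, 1 = nudge preferences
reward = np.array([[1.0, 0.2],   # state 0: as-usual pays 1.0, nudging pays 0.2
                   [2.0, 2.0]])  # state 1: engagement is higher either way
next_state = np.array([[0, 1],   # nudging moves the user to state 1
                       [1, 1]])

V = np.zeros(2)
for _ in range(1000):            # value iteration
    V = (reward + gamma * V[next_state]).max(axis=1)
policy = (reward + gamma * V[next_state]).argmax(axis=1)
print(policy[0])  # 1: with gamma = 0.95 the agent chooses to shift preferences
```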

The fact that they use DQN rather than a bandit algorithm seems to suggest that what they are doing involves at least some short term planning, but the paper does not seem to analyze the exp... (read more)

Objectives of longtermist policy making

Interesting writeup!

Depending on your intended audience, it might make sense to add more details for some of the proposals. For example, why is scenario planning a good idea compared to other methods of decision making? Is there a compelling story, or strong empirical evidence for its efficacy? 

Some small nitpicks: 

There seems to be a mistake here: 

"Bostrom argues in The Fragile World Hypothesis that continuous technological development will increase systemic fragility, which can be a source of catastrophic or existential risk. In the Precip... (read more)

Andreas_Massey (10mo): Thank you for your feedback, Flodorner!

First, we certainly agree that a more detailed description could be productive for some of the topics in this piece, including your example on scenario planning and other decision making methods. At more than 6000 words this is already a long piece, so we were aiming to limit the level of detail to what we felt was necessary to explain the proposed framework, without necessarily justifying all nuances. Depending on what the community believes is most useful, we are happy to write follow-up pieces with either a higher level of detail for a selected few topics of particular interest (for a more technical discussion on e.g. decision making methods), or a summary piece covering all topics with a lower level of detail (to explain the same framework to non-experts).

As for your second issue, you are completely correct; it has been corrected. Regarding your last point, we also agree that the repugnant conclusion is not an example of cluelessness in itself. However, the lack of consensus about how to solve the repugnant conclusion is one example of how we still have things to figure out in terms of population ethics (i.e., we are morally clueless in this area).
HaydnBelfield (10mo): Great catch, thanks - fixed!
Even Allocation Strategy under High Model Ambiguity

So for the maximin we are minimizing over all joint distributions that are κ-close to our initial guess?

"One intuitive way to think about this might be considering circles of radius  centered around fixed points, representing your first guesses for your options, in the plane. As  becomes very large, the intersection of the interiors of these circles will approach 100% of their interiors. The distance between the centres becomes small relative to their radii. Basically, you can't tell the options apart anymore for h... (read more)

MichaelStJules (1y): Yes. That's more accurate than what I said (originally), since you use a single joint distribution for all of the options, basically a distribution over R^N, for N options, and you look at distributions κ-close to that joint distribution. Hmm, good point. I was just thinking about this, too. It's worth noting that in Proposition 3, they aren't just saying that the 1/N distribution is optimal, but actually that in the limit as κ→∞, it's the only distribution that's optimal. I think it might be variance reduction, and it might require risk-aversion, since they require the risk functionals/measures to be convex (I assume strictly), and one of the two examples they use of risk measures explicitly penalizes the variance of the allocation (and I think it's the case for the other [https://en.wikipedia.org/wiki/Expected_shortfall]). When you increase κ, the radius of the neighbourhood around the joint distribution, you can end up with options which are less correlated or even inversely correlated with one another, and diversification is more useful in those cases. They also allow negative allocations, too, so because the optimal allocation is positive for each, I expect that it's primarily because of variance reduction from diversification across (roughly) uncorrelated options. I made some edits. For donations, maybe decreasing marginal returns could replace risk-aversion for those who aren't actually risk-averse with respect to states of the world, but I don't think it will follow from their result, which assumes constant marginal returns.
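A minimal numerical sketch of the variance-reduction reading (my own toy construction, not the paper's setup): once the worst case makes the options statistically indistinguishable, a mean-variance risk functional is minimized by the even allocation, because for uncorrelated identical options Var(Σ w_i X_i) = σ² Σ w_i², and Σ w_i² subject to Σ w_i = 1 is minimized at w_i = 1/N.

```python
import numpy as np

# Toy stand-in for a convex risk functional: negative expected return plus a
# variance penalty, for N indistinguishable (i.i.d.) options with mean mu and
# variance sigma2. Then risk(w) = -mu * sum(w) + lam * sigma2 * sum(w**2).
def risk(w, mu=1.0, sigma2=1.0, lam=1.0):
    w = np.asarray(w)
    return -mu * w.sum() + lam * sigma2 * (w ** 2).sum()

print(risk(np.ones(4) / 4))        # -0.75: the even 1/N allocation
print(risk([0.7, 0.1, 0.1, 0.1]))  # -0.48: any tilt strictly increases the risk
```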
A case against strong longtermism

I wrote up my understanding of Popper's argument on the impossibility of predicting one's own knowledge (Chapter 22 of The Open Universe) that came up in one of the comment threads. I am still a bit confused about it and would appreciate people pointing out my misunderstandings.

Consider a predictor:

A1: Given a sufficiently explicit prediction task, the predictor predicts correctly

A2: Given any such prediction task, the predictor takes time to predict and issue its reply (the task is only completed once the reply is issued).

T1: A1,A2=> Given a self-predi... (read more)

vadmas (1y): Haha just gonna keep pointing you to places where Popper writes about this stuff b/c it's far more comprehensive than anything I could write here :) This question (and the questions re. climate change Max asked in another thread) are the focus of Popper's book The Poverty of Historicism, where "historicism" here means "any philosophy that tries to make long-term predictions about human society" (i.e. marxism, fascism, malthusianism, etc). I've attached a screenshot for proof-of-relevance: (Ben and I discuss historicism here [https://increments.buzzsprout.com/1100666/6044605-14-prediction-prophecy-and-fascism] fwiw.) I have a pdf of this one, dm me if you want a copy :)
vadmas (1y): Impressive write up! Fun historical note - in a footnote Popper says he got the idea of formulating the proof using prediction machines from personal communication with the "late Dr A. M. Turing".
Max_Daniel (1y): Yeah, I was also vaguely reminded of e.g. logical induction when I read the summary of Popper's argument in the text Vaden linked elsewhere in this discussion.
A case against strong longtermism

They are, but I don't think that the correlation is strong enough to invalidate my statement. P(sun will exist|AI risk is a big deal) seems quite large to me. Obviously, this is not operationalized very well...

A case against strong longtermism

It seems like the proof critically hinges on assertion 2) which is not proven in your link. Can you point me to the pages of the book that contain the proof?

I agree that proofs are logical, but since we're talking about probabilistic predictions, I'd be very skeptical of the relevance of a proof that does not involve mathematical reasoning.

vadmas (1y): Yep it's Chapter 22 of The Open Universe [https://books.google.ca/books/about/The_Open_Universe.html?id=oj1W-Mbc3H0C&redir_esc=y] (don't have a pdf copy unfortunately)
A case against strong longtermism

I don't think I buy the impossibility proof, as predicting future knowledge in a probabilistic manner is possible (most simply, I can predict that if I flip a coin now, there's a 50/50 chance I'll know the coin landed on heads/tails in a minute). I think there is some important true point behind your intuition about how knowledge (especially of more complex form than about a coin flip) is hard to predict, but I am almost certain you  won't be able to find any rigorous mathematical proof for  this intuition because reality is very fuzzy (in a ... (read more)

vadmas (1y): In this example you aren't predicting future knowledge, you're predicting that you'll have knowledge in the future - that is, in one minute, you will know the outcome of the coin flip. I too think we'll gain knowledge in the future, but that's very different from predicting the content of that future knowledge today. It's the difference between saying "sometime in the future we will have a theory that unifies quantum mechanics and general relativity" and describing the details of the future theory itself. The proof is here: https://vmasrani.github.io/assets/pdf/poverty_historicism_quote.pdf (And who said proofs have to be mathematical? Proofs have to be logical - that is, concerned with deducing true conclusions from true premises - not mathematical, although they often take mathematical form.)
A case against strong longtermism

Ok, makes sense. I think that our ability to make predictions about the future steeply declines with increasing time horizons, but find it somewhat implausible that it would become entirely uncorrelated with what is actually going to happen in finite time. And it does not seem to be the case that data supporting long term predictions is impossible to come by: while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in 1000 years; in part due to a lot of data collection and hypothesis testing done by physicists. 

Greg_Colbourn (1y): "while it might be pretty hard to predict whether AI risk is going to be a big deal by whatever measure, I can still be fairly certain that the sun will exist in 1000 years" These two things are correlated.
vadmas (1y): Yes, there are certain rare cases where longterm prediction is possible. Usually these involve astronomical systems, which are unique because they are cyclical in nature and unusually unperturbed by the outside environment. Human society doesn't share any of these properties unfortunately, and long term historical prediction runs into the impossibility proof in epistemology anyway.
A case against strong longtermism

"The "immeasurability" of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. "

This claim seems confused, as every nonempty set allows for the definition of a probability measure on it, and measures on function spaces exist ( https://en.wikipedia.org/wiki/Dir... (read more)

vadmas (1y): See discussion below w/ Flodorner on this point :) You are Flodorner!
A case against strong longtermism

I am confused about the precise claim made regarding the Hilbert Hotel and measure theory.  When you say "we have no  measure over the set of all possible futures",  do you mean that no such measures exist (which would be incorrect without further requirements:  https://en.wikipedia.org/wiki/Dirac_measure , https://encyclopediaofmath.org/wiki/Wiener_measure ), or that we don't have a way of choosing the right measure?  If it is the latter,  I agree that this is an important challenge, but I'd like to highlight that the situati... (read more)

Max_Daniel (1y): (I was also confused by this, and wrote a couple of comments [https://forum.effectivealtruism.org/posts/7MPTzAnPtu5HKesMX/a-case-against-strong-longtermism?commentId=DGz7fcq93SFGkqqRT] in response. I actually think they don't add much to the overall discussion, especially now that Vaden has clarified below what kind of argument they were trying to make. But maybe you're interested given we've had similar initial confusions.)
vadmas (1y): Yup, the latter. This is why the lack-of-data problem is the other core part of my argument. Once data is in the picture, now we can start to get traction. There is something to fit the measure to, something to be wrong about, and a means of adjudicating between which choice of measure is better than which other choice. Without data, all this probability talk is just idle speculation painted with a quantitative veneer.
Challenges in evaluating forecaster performance

I'm also not sure I follow your exact argument here. But frequency clearly matters whenever the forecast is essentially resolved before the official resolution date, or when the best forecast based on evidence at time t behaves monotonically (think of questions of the type "will event x, which (approximately) has a small fixed probability of happening each day, happen before day y?", where each day passing without x happening should reduce your credence).
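To make the monotone case concrete, a small sketch (toy numbers and the constant-hazard setup are my own assumptions):

```python
import numpy as np

# Toy sketch: "will x (daily hazard h) happen before day T?", in a world where
# x never happens. The well-calibrated forecast decays each day; under
# time-averaged Brier scoring, re-forecasting daily beats carrying a single
# day-0 forecast forward.
T, h = 100, 0.01
days = np.arange(T)
p = 1 - (1 - h) ** (T - days)          # prob. x still occurs by day T, given not yet
brier_daily = ((p - 0) ** 2).mean()    # forecaster who updates every day
brier_carried = (p[0] - 0) ** 2        # day-0 forecast scored on every day
print(brier_daily, brier_carried)      # the daily updater scores lower (better)
```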

Misha_Yagudin (1y): I mildly disagree. I think the intuition to use here is that the sample mean is an unbiased estimator of the expectation (this doesn't depend on frequency/number of samples). One complication here is that we are weighing samples potentially unequally, but if we expect each forecast to be active for an equal number of days this doesn't matter. ETA: I think the assumption of "forecasts have an equal expected number of active days" breaks around the closing date, which impacts things in the monotonical example (this effect is linear in the expected number of active days and could be quite big in extremes).
Challenges in evaluating forecaster performance

I guess you're right (I read this before and interpreted "active forecast" as "forecast made very recently").

If they also used this way of scoring things for the results in Superforecasting, this seems like an important caveat for forecasting advice derived from the book: for example, the efficacy of updating your beliefs might mostly be explained by this. I previously thought that the results meant that a person who forecasts a question daily will make better forecasts on Sundays than a person who only forecasts on Sundays.

Challenges in evaluating forecaster performance

Do you have a source for the "carrying forward" on gjopen? I usually don't take the time to update my forecasts if I don't think I'd be able to beat the current median, but might want to adjust my strategy in light of this.

Misha_Yagudin (1y): Also, because the Median score is the median of all Brier scores (and not the Brier score of the median forecast), it might still be good for your Accuracy score to forecast something close to the community's median.
Misha_Yagudin (1y): https://www.gjopen.com/faq says:
EA considerations regarding increasing political polarization

Claims that people are "unabashed racists and sexists" should at least be backed up with actual examples. As it stands, I cannot know whether you have good reasons for that belief that I don't see (at the very least not in all of the cases), or whether we have the same information but fundamentally disagree about what constitutes "unabashed racism".

I agree with the feeling that the post undersells concerns about the right wing, but I don't think you will convince anybody without any arguments except for a weakly supported claim... (read more)

EA considerations regarding increasing political polarization

"While Trump’s policies are in some ways more moderate than the traditional Republican platform". I do not find this claim self-evident (potentially due to biased media reporting affecting my views) and find it strange that no source or evidence for it is provided, especially given the commendable general amount of links and sources in the text.

Relatedly, I noticed a gut feeling that the text seems more charitable to the right-wing perspective than to the left (specific "evidence" included the statement from the previous paragra... (read more)

EA considerations regarding increasing political polarization

If you go by GDP per capita, most of Europe is behind the US but ahead of most of Asia: https://en.wikipedia.org/wiki/List_of_countries_by_GDP_(nominal)_per_capita (growth rates in Asia are higher though, so this might change at some point in the future).

In terms of the Human Development Index https://en.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index (which seems like a better measure of "success" than GDP alone), some countries (including large ones like Germany and the UK) score above the US but others score lower. Most of Asi... (read more)

Critical Review of 'The Precipice': A Reassessment of the Risks of AI and Pandemics

While I am unsure about how good of an idea it is to map out more plausible scenarios for existential risk from pathogens, I agree with the sentiment that the top level post seems to focus too narrowly on a specific scenario.

Biases in our estimates of Scale, Neglectedness and Solvability?

Re bonus section: Note that we are (hopefully) taking expectations over our estimates for importance, neglectedness and tractability, such that general correlations between the factors across causes do not necessarily cause a problem. However, it seems quite plausible that our estimation errors are often correlated because of things like the halo effect.

Edit: I do not fully endorse this comment any more, but still believe that the way we model the estimation procedure matters here. Will edit again once I am less confused.
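One way to see the halo-effect worry in a sketch (error sizes are illustrative, and I'm assuming unbiased-in-log multiplicative errors on each factor): holding per-factor error variance fixed, a shared error component inflates the mean and fattens the upper tail of the estimated product, which is exactly the tail that cause selection then picks from.

```python
import numpy as np

# Sketch: multiplicative errors on the I, T, N estimates, mean-zero in log
# space. Per-factor log-error variance is identical in both cases (2 * s**2);
# only the correlation structure differs.
rng = np.random.default_rng(0)
n, s = 500_000, 0.3
indep = np.exp(rng.normal(0, s * np.sqrt(2), (n, 3))).prod(axis=1)
halo = rng.normal(0, s, n)[:, None]                 # shared "halo" component
corr = np.exp(halo + rng.normal(0, s, (n, 3))).prod(axis=1)
print(indep.mean(), corr.mean())  # correlated errors give the larger mean product
```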

MichaelStJules (2y): Maybe one example could be that we don't know the exact Scale of wild animal suffering, in part because we aren't sure which animals are actually sentient, and if it does turn out that many more animals are sentient than expected, that might mean that relative progress on the problem is harder. It could actually be the opposite, though; if we think we could get more cost-effective methods to address wild invertebrate suffering than for wild vertebrate suffering (invertebrates are generally believed to be less (likely to be) sentient than vertebrates, with a few exceptions), then the Scale and Solvability might be positively correlated. Similarly, there could be a relationship between the Scale of a global catastrophic risk or x-risk and its Solvability. If advanced AI can cause value lock-in, how long the effects last might be related to how difficult it is to make relative progress on aligning AI, and more generally, how powerful AI will be is probably related to both the Scale and Solvability of the problem. How bad climate change or a nuclear war could be might be related to its Solvability, too, if worse risks are relatively more or less difficult to make progress on.
Implications of Quantum Computing for Artificial Intelligence alignment research (ABRIDGED)

Maybe having a good understanding of quantum computing and how it could be leveraged in different paradigms of ML might help with forecasting AI timelines, as well as dominant paradigms, to some extent?

If that were true, then while not necessarily helpful for any single agenda, knowledge about quantum computing would help with the correct prioritization of different agendas.

Jsevillamol (2y): I do agree with your assessment, and I would be medium excited about somebody informally researching what algorithms can be quantized, to see if there is low hanging fruit in terms of simplifying assumptions that could be made in a world where advanced AI is quantum-powered. However my current intuition is that there is not much sense in digging into this unless we were sort of confident that 1) we will have access to QC before TAI and that 2) QC will be a core component of AI. To give a bit more context to the article, Pablo and I originally wrote it because we disagreed on whether current research in AI Alignment would still be useful if quantum computing was a core component of advanced AI systems. Had we concluded that quantum obfuscation threatened to invalidate some assumptions made by current research, we would have been more emphatic about the necessity of having quantum computing experts working on "safeguarding our research" on AI Alignment.
Three Biases That Made Me Believe in AI Risk

"The combination of these vastly different expressions of scale together with anchoring makes that we should expect people to over-estimate the probability of unlikely risks and hence to over-estimate the expected utility of x-risk prevention measures. "

I am not entirely sure whether I understand this point. Is the argument that the anchoring effect would cause an overestimation because the "perceived distance" from an anchor grows faster per added zero than per increase of one in the exponent?

Critique of Superintelligence Part 2

Directly relevant quotes from the articles for easier reference:

Paul Christiano:

"This story seems consistent with the historical record. Things are usually preceded by worse versions, even in cases where there are weak reasons to expect a discontinuous jump. The best counterexample is probably nuclear weapons. But in that case there were several very strong reasons for discontinuity: physics has an inherent gap between chemical and nuclear energy density, nuclear chain reactions require a large minimum scale, and the dynamics of war are very ... (read more)

Fods12 (3y): Thanks for these links, this is very useful material!
Critique of Superintelligence Part 2

Another point against the content overhang argument: While more data is definitely useful, it is not clear whether raw data about a world without a particular agent in it will be as useful to this agent as data obtained from its own interaction with the world (or that of sufficiently similar agents). Depending on the actual implementation of a possible superintelligence, this raw data might be marginally helpful but far from being the most relevant bottleneck.

"Bostrom is simply making an assumption that such rapid rates of progress could occur... (read more)

rohinmshah (3y): https://sideways-view.com/2018/02/24/takeoff-speeds/ https://aiimpacts.org/likelihood-of-discontinuous-progress-around-the-development-of-agi/
Critique of Superintelligence Part 1

Thanks for writing this!

I think you are pointing out some important imprecisions, but I think that some of your arguments aren't as conclusive as you seem to present them to be:

"Bostrom therefore faces a dilemma. If intelligence is a mix of a wide range of distinct abilities as in Intelligence(1), there is no reason to think it can be ‘increased’ in the rapidly self-reinforcing way Bostrom speaks about (in mathematical terms, there is no single variable  which we can differentiate and plug into the differential equation, as Bostrom does in hi... (read more)

Fods12 (3y): Thanks for your thoughts. Regarding your first point, I agree that the situation you posit is a possibility, but it isn't something Bostrom talks about (and remember I only focused on what he argued, not other possible expansions of the argument). Also, when we consider the possibility of numerous distinct cognitive abilities, it is just as possible that there could be complex interactions which inhibit the growth of particular abilities. There could easily be dozens of separate abilities and the full matrix of interactions becomes very complex. The original force of the 'rate of growth of intelligence is proportional to current intelligence leading to exponential growth' argument is, in my view, substantively blunted. Regarding your second point, it seems unlikely to me because if an agent had all these abilities, I believe they would use them to uncover reasons to reject highly reductionistic goals like tiling the universe with paperclips. They might end up with goals that are still in opposition to human values, but I just don't see how an agent with these abilities would not become dissatisfied with extremely narrow goals.
When should EAs allocate funding randomly? An inconclusive literature review.

Yes, exactly. When first reading your summary I interpreted it as the "for all" claim.

Max_Daniel (3y): Ok, thanks, I now say "Prove that a certain nonrandom, non-Bayesian ...".
When should EAs allocate funding randomly? An inconclusive literature review.

Very interesting!

In your literature review you summarize the Smith and Winkler (2006) paper as "Prove that nonrandom, non-Bayesian decision strategies systematically overestimate the value of the selected option."

On first sight, this claim seems like it might be stronger than the claim I have taken away from the paper (which is similar to what you write later in the text): if your decision strategy is to just choose the option you (naively) expect to be best, you will systematically overestimate the value of the selected option.
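For what it's worth, the weaker claim is easy to see in simulation (a sketch of the selection effect, with my own toy numbers):

```python
import numpy as np

# Sketch: five equally good options (true value 0) with unbiased, noisy value
# estimates. Picking the option with the highest estimate systematically
# overestimates the selected option's value, even though each individual
# estimate is unbiased.
rng = np.random.default_rng(0)
estimates = rng.normal(0.0, 1.0, size=(100_000, 5))
best = estimates.max(axis=1)   # the estimate of the option we select
print(estimates.mean())        # ~0: unbiased before selection
print(best.mean())             # ~1.16: biased upward after selection
```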

If you think t... (read more)

Max_Daniel (3y): On your first point: I agree that the paper just shows that, as you wrote, "if your decision strategy is to just choose the option you (naively) expect to be best, you will systematically overestimate the value of the selected option". I also think that "just choose the option you (naively) expect to be best" is an example of a "nonrandom, non-Bayesian decision strategy". Now, the first sentence you quoted might reasonably be read to make the stronger claim that all nonrandom, non-Bayesian decision strategies have a certain property. However, the paper actually just shows that one of them does. Is this what you were pointing to? If so, I'll edit the quoted sentence accordingly, but I first wanted to check if I understood you correctly. In any case, thank you for your comment!
Max_Daniel (3y): On your second point: I think you're right, and that's a great example. I've added a link to your comment to the post.
Tiny Probabilities of Vast Utilities: Defusing the Initial Worry and Steelmanning the Problem

I think that the assumption of the existence of a funnel-shaped distribution with undefined expected value over things we care about is quite a bit stronger than assuming that there are infinitely many possible outcomes.

But even if we restrict ourselves to distributions with finite expected value, our estimates can still fluctuate wildly until we have gathered huge amounts of evidence.
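A quick sketch of that point (the particular distribution is my own illustration): a Pareto tail with α = 1.1 has a finite mean of α/(α−1) = 11 but infinite variance, and the running sample mean is still unstable after a million observations.

```python
import numpy as np

# Sketch: running mean of Pareto(alpha=1.1) samples (finite mean 11, infinite
# variance). numpy's pareto() draws Lomax, so add 1 for the classical Pareto.
rng = np.random.default_rng(0)
x = 1 + rng.pareto(1.1, size=1_000_000)
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)
print(running_mean[[999, 99_999, 999_999]])  # typically still well off 11, moving in jumps
```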

So while I am sceptical of the assumption that there exists a sequence of world states with utilities tending to infinity, and even more sceptical of extremely high/low utility ... (read more)

Is Neglectedness a Strong Predictor of Marginal Impact?

I think the argument is that additional information showing that a cause has high marginal impact might divert resources towards it from causes with less marginal impact. And getting this kind of information does seem more likely for causes without a track record that allows for a somewhat robust estimation of their (marginal) impact.

Aaron Gertler (3y): This is essentially what I was thinking. If we're to discover that the "best" intervention is something that we aren't funding much now, we'll need to look closer at interventions which are currently neglected. I agree with the author that neglectedness isn't a perfect measure, since others may already have examined them and been unimpressed, but I don't know how often that "previous examination" actually happens (probably not too often, given the low number of organizations within EA that conduct in-depth research on causes). I'd still think that many neglected causes have received very little serious attention, especially attention toward the most up-to-date research (maybe GiveWell said no five years ago, but five years is a lot of time for new evidence to emerge). (As I mentioned in another comment, I wish we knew more about which interventions EA orgs had considered but decided not to fund; that knowledge is the easiest way I can think of to figure out whether or not an idea really is "neglected".)
Is Neglectedness a Strong Predictor of Marginal Impact?

For clarification: (PIT_i + u_i) is the "real" tractability and importance?

The text seems to make more sense that way, but reading "u_i is the unknown (to you) importance and tractability of the cause", I at first interpreted u_i as being the "real" tractability and importance instead of just a noise term.

sbehmer (3y): Yes, PIT_i + u_i is supposed to be the real importance and tractability. If we knew PIT_i + u_i, then we would know a cause area's marginal impact exactly. But instead we only know PIT_i.
Debate and Effective Altruism: Friends or Foes?

Relatedly, the impromptu nature of some debating formats could also help with getting comfortable formulating answers to nontrivial questions under (time) pressure. Apart from being generally helpful, this might be especially valuable in some types of job interviews.

I've been considering investing some time into competitive debating, mostly in order to improve that skill, so if someone has data (even anecdotal) on that, please share :)

Tiny Probabilities of Vast Utilities: A Problem for Long-Termism?

Interesting post!

I am quite interested in your other arguments for why EV calculations won't work for Pascal's mugging and why they might extend to x-risks. I would probably have preferred a post already including all the arguments for your case.

About the argument from hypothetical updates: My intuition is that if you assign a probability of a lot more than 0.1^10^10^10 to the mugger actually being able to follow through, this might create other problems (like probabilities of distinct events adding to something higher than 1, or priors inconsisten... (read more)

kokotajlod (3y): Thanks! Yeah, sorry--I was thinking about putting it up all at once but decided against because that would make for a very long post. Maybe I should have anyway, so it's all in one place. Well, I don't share your intuition, but I'd love to see it explored more. Maybe you can get an argument out of it. One way to try would be to try to find a class of at least 10^10^10^10 hypotheses that are at least as plausible as the Mugger's story.
Reducing existential risks or wild animal suffering?

What exactly do you mean by utility here? The quasi-negative utilitarian framework seems to correspond to a shift of everyone's personal utility, such that the shifted utility for each person is 0 whenever this person's life is neither worth living, nor not worth living.

It seems to me like a reasonable notion of utility would have this property anyway (but I might just use the word differently than other people; please tell me if there is some widely used definition contradicting this!). This reframes the discussion into one about where the zero poin... (read more)
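A tiny sketch of why the zero point matters once population sizes differ (all numbers are illustrative):

```python
# Sketch: shifting everyone's utility by a constant "critical level" c can
# flip total-utilitarian rankings between populations of different sizes.
def total(utilities, c=0.0):
    return sum(u - c for u in utilities)

small_happy = [10.0] * 2     # 2 people, high welfare
large_modest = [1.0] * 30    # 30 people, low but positive welfare
for c in (0.0, 0.5):
    print(c, total(small_happy, c), total(large_modest, c))
# c=0.0: 20 vs 30 (large population wins); c=0.5: 19 vs 15 (ranking flips)
```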

[anonymous] (3y): I would say the utility of a person in a situation S measures how strongly a person prefers that given situation, independently from other possible situations that we could have chosen. But in the end the thing that matters is someone’s relative utility, which can be written as the utility minus a personal critical level. This indeed reframes the discussion into one about where the zero point of utility should lie. In particular, when it comes to interpersonal comparisons of utility or well-being, the utilities are only defined up to an affine transformation, i.e. up to multiplication with a scalar and addition with a constant term. This possible addition of a term basically sets the zero point utility level. I have written more about it here: https://stijnbruers.wordpress.com/2018/07/03/on-the-interpersonal-comparability-of-well-being/
Additional plans for the new EA Forum

Are any ways of making content easier to filter (for example, tags) planned?

I am rather new to the community, and there have been multiple occasions where I randomly stumbled upon old articles I hadn't read, concerned with topics I was interested in and had previously made an effort to find articles about. This seems rather inefficient.

kbog (3y): Yes, I second this - tag system please, if possible
MichaelDickens (3y): Another feature that could help people find old posts is to display a few random old posts on a sidebar. For example, on any of Jeff Kaufman's blog posts [https://www.jefftk.com/p/dual-bagpipes], five old posts display on the sidebar. I've found lots of interesting old posts on Jeff's blog via this feature.
Informational hazards and the cost-effectiveness of open discussion of catastrophic risks

"to prove this argument I would have to present general information which may be regarded as having informational hazard"

Is there any way to assess the credibility of statements like this (or whether this is actually an argument worth considering in a given specific context)? It seems like you could use this as a general purpose argument for almost everything.

turchin (3y): It was in fact a link to the article about how to kill everybody using multiple simultaneous pandemics - this idea may be regarded by someone as an informational hazard, but it was already suggested by some terrorists from the Voluntary Human Extinction Movement. I also discussed it with some biologists and other x-risks researchers and we concluded that it is not an infohazard. I can send you a draft.
MichaelPlant (3y): I agree statements of this kind are very annoying, whether or not they're true.
Doning with the devil

I am not sure whether your usage of economies of scale already covers this, but it seems worth highlighting that what matters is the marginal difference the money makes for you and your adversary. If doing evil is a lot more efficient at low scales (think of distributing highly addictive drugs among vulnerable populations vs. distributing malaria nets), your adversary could be hitting diminishing returns already, while your marginal returns increase, and the lottery might still not be worth it.

cole_haus (3y): Yup, I hope the examples make that clear, but the other descriptions could do more to highlight that we're interested in the margin.
Animal Equality showed that advocating for diet change works. But is it cost-effective?

Are you talking about the individual level, or the mean? My estimate would be that for the median individual, the effect will have faded out after at most 6 months. However, the mean might be influenced by the tails quite strongly.

Thinking about it for a bit longer, a mean effect of 12 years does seem quite implausible, though. In the limiting case where only the tails matter, this would be equivalent to convincing around 25% of the initially influenced students to stop eating pork for the rest of their lives.
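The 25% figure comes from arithmetic like the following (the remaining-lifespan number is my own assumption):

```python
# Sketch: if the effect is ~0 for most viewers and lifelong for a fraction q,
# then mean duration = q * remaining_years. Assuming students have roughly 48
# years of meat-eating ahead of them:
remaining_years = 48
mean_duration = 12
q = mean_duration / remaining_years
print(q)  # 0.25 -> ~25% lifelong abstainers needed for a 12-year mean
```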

The upper bound for my 90% confidence interval for the mean seems to be around 3 years, while the lower bound is at 3 months. The probability mass within the interval is mostly centered to the left.

Animal Equality showed that advocating for diet change works. But is it cost-effective?

The claim does not seem to be exactly that there is a 10% chance of an animal advocacy video affecting consumption decisions after 12 years for a given individual.

I'd interpret it as: there is a 5% chance that the mean duration of reduction, conditional on participants reporting that they changed their behaviour based on the video, is higher than 12 years.

This could, for example, also be achieved by having a very long term impact on very few participants. This interpretation seems a lot more plausible, although I am not certain at all whether that claim is correct. Long term follow-up data would certainly be very helpful.

John G. Halstead (3y): Yes, I was speaking somewhat loosely. It is nevertheless in my view very implausible that the intervention would sustain its effect for that long - we're talking about the effect of one video here. Do you think the chance of fade-out within a year is less than 10%? What is your median estimate?
The counterfactual impact of agents acting in concert

At this point, I think that to analyze the $1bn case correctly, you'd have to subtract everyone's opportunity cost in the calculation of the Shapley value (if you want to use it here). This way, the example should yield what we expect.

I might do a more general writeup about Shapley values, their advantages, disadvantages, and when it makes sense to use them, if I find the time to read a bit more about the topic first.
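For readers who want the mechanics, here is a generic Shapley computation with opportunity costs folded into the coalition value; the two-funder game and all numbers are illustrative, not the post's actual $1bn case:

```python
from itertools import combinations
from math import factorial

# Generic Shapley value sketch: v maps a coalition (frozenset of players) to
# the value it creates, here net of its members' opportunity costs.
def shapley(players, v):
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for k in range(len(others) + 1):
            for coal in combinations(others, k):
                s = frozenset(coal)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(s | {p}) - v(s))  # p's marginal contribution
        phi[p] = total
    return phi

# Toy game: two funders jointly unlock a grant worth 100, and each forgoes an
# outside option worth 30 by participating.
def v(coalition):
    gross = 100.0 if len(coalition) == 2 else 0.0
    return gross - 30.0 * len(coalition)

print(shapley(["A", "B"], v))  # {'A': 20.0, 'B': 20.0}: net value split evenly
```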

Expected cost per life saved of the TAME trial

I think it might be best to just report confidence intervals for your final estimates (Guesstimate should give you those). Then everyone can combine your estimates with their own priors on the general intervention's effectiveness and thereby potentially correct for the high levels of uncertainty (at least in a crude way, by estimating the variance from the confidence intervals).

The variance of X can be defined as E[X^2] - E[X]^2, which should not be hard to implement in Guesstimate. However, I am not sure whether or not having the variance yields more accur... (read more)
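A sketch of that identity on Monte Carlo samples (the lognormal stand-in for a model output is my own assumption):

```python
import numpy as np

# Sketch: estimate Var(X) = E[X^2] - E[X]^2 from samples, as one would with a
# sampled cell in a Guesstimate-style model.
rng = np.random.default_rng(0)
x = rng.lognormal(0.0, 1.0, size=100_000)  # stand-in for a cost-effectiveness output
print((x ** 2).mean() - x.mean() ** 2)     # E[X^2] - E[X]^2 on the samples
print(x.var())                             # matches (up to floating point)
```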

Emanuele_Ascani (4y): Thank you, I applied your suggestion by modifying the text. I just noticed that Guesstimate gives you the standard deviation. I guess I had to familiarise myself with the tool.