All Posts

Sorted by Magic (New & Upvoted)

Saturday, September 26th 2020

4 · Prabhat Soni · 13h: Among rationalists and altruists, which group is more likely, on average, to be attracted to effective altruism? This has practical uses: if one type of person is significantly more likely to be attracted to EA, on average, then it makes sense to target them in outreach efforts (e.g. at university fairs). I understand that this is a general question, and I'm only looking for a general answer :P (but specifics are welcome if you can provide them!)

Friday, September 25th 2020

25 · Nathan Young · 19h: Sam Harris takes the Giving What We Can pledge for himself and for his meditation company "Waking Up". Harris references MacAskill and Ord as having been central to his thinking, and talks about effective altruism and existential risk. He publicly pledges 10% of his own income and 10% of the profit from Waking Up. He will also create a series of lessons on altruism and effectiveness for his meditation and education app. Harris has 1.4M Twitter followers and is a famed humanist and New Atheist. The Waking Up app has over 500k downloads on Android, so I'd guess over 1 million overall. I like letting personal thoughts be up- or downvoted, so I've put them in the comments.
15 · Ramiro · 2d: Maybe I didn't understand it properly, but I suspect something is wrong when the total welfare score of chimps is 47 while, for humans in lower-middle-income countries, it's 32. Depending on your population ethics, one may think "we should improve the prospects in poor countries", but others could say "we should have more chimps." Or the scale has serious problems for comparisons between different species.
7 · evelynciara · 2d: NYC is adopting ranked-choice voting for the 2021 City Council election. One challenge will be explaining the new voting system, though.
4 · MagnusVinding · 1d: An argument in favor of (fanatical) short-termism? [Warning: potentially crazy-making idea.] Section 5 in Guth, 2007 presents an interesting, if unsettling, idea: on some inflationary models, new universes continuously emerge at an enormous rate, which in turn means (maybe?) that the grander ensemble of pocket universes consists disproportionately of young universes. More precisely, Guth writes that "in each second the number of pocket universes that exist is multiplied by a factor of exp{10^37}." Thus, naively, we should expect earlier points in a given pocket universe's timeline to vastly outnumber later points, by a factor of exp{10^37} per second! (A potentially useful way to visualize the picture Guth draws is in terms of a branching tree, where for each older branch there are many more young ones, and this keeps being true as the new, young branches grow and spawn new branches.) If this were true, or even if there were a far weaker universe-generation process to this effect (say, one that multiplied the number of pocket universes by two each year or decade), it would seem that we should, for acausal reasons, mostly prioritize the short-term future (perhaps even the very short-term future). Guth tentatively speculates whether this could be a resolution of sorts to the Fermi paradox, though he also notes that he is skeptical of the framework that motivates his discussion. I'm not claiming that the picture Guth outlines is likely to be correct. It's highly speculative, as he himself hints, and there are potentially many ways to avoid it; for example, contra Guth's preferred model, it may be that inflation eventually stops, cf. Hawking & Hertog, 2018.

Thursday, September 24th 2020

33 · Linch · 2d: Here are some things I've learned from spending the better part of the last 6 months either forecasting or thinking about forecasting, with an eye towards beliefs that I expect to be fairly generalizable to other endeavors. Note that I assume that anybody reading this already has familiarity with Philip Tetlock's work on (super)forecasting, particularly Tetlock's 10 commandments for aspiring superforecasters. 1. Forming (good) outside views is often hard but not impossible. I think there is a common belief/framing in EA and rationalist circles that coming up with outside views is easy, and the real difficulty is a) originality in inside views, and b) a debate over how much to trust outside views vs. inside views. I think this is directionally true (original thought is harder than synthesizing existing views), but it hides a lot of the details. It's often quite difficult to come up with and balance good outside views that are applicable to a situation. See Manheim and Muehlhauser for some discussions of this. 2. For novel out-of-distribution situations, "normal" people often trust centralized data/ontologies more than is warranted. See here for a discussion. I believe something similar is true for trust of domain experts, though this is more debatable. 3. The EA community overrates the predictive validity and epistemic superiority of forecasters/forecasting. (Note that I think this is an improvement over the status quo in the broad…
14 · MichaelDickens · 3d: "Are Ideas Getting Harder to Find?" (Bloom et al.) seems to me to suggest that ideas are actually surprisingly easy to find. The paper looks at the difficulty of finding new ideas in a variety of fields. It finds that in all cases, effort on finding new ideas is growing exponentially over time, while new ideas are growing exponentially but at a lower rate. (For a summary, see Table 7 on page 31.) This is framed as a surprising and bad thing. But it actually seems surprisingly good to me. My intuition is that the number of ideas should grow logarithmically with effort, or possibly even sub-logarithmically. If effort is growing exponentially, we'd expect to see linear or sub-linear growth in ideas. But instead we see exponential growth in ideas. I don't have a great understanding of the math used in this paper, so I might be misinterpreting something.
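The growth-rate comparison in the post above can be made concrete with a minimal sketch (the rates here are toy numbers I'm assuming for illustration, not the paper's estimates): if effort grows exponentially and ideas also grow exponentially but more slowly, then ideas are a power law in effort with exponent below 1, which still vastly outpaces the logarithmic relationship the author's intuition predicts.

```python
import math

# Assumed toy growth rates: effort grows at g = 5%/year, ideas at h = 2%/year.
g, h = 0.05, 0.02

def effort(t):
    """Exponentially growing research effort."""
    return math.exp(g * t)

def ideas(t):
    """Ideas also grow exponentially, but at a lower rate."""
    return math.exp(h * t)

# Algebraically, ideas(t) = effort(t) ** (h / g): a power law in effort
# with exponent h/g = 0.4 < 1.
for t in (10, 50, 100):
    assert math.isclose(ideas(t), effort(t) ** (h / g))

# The logarithmic intuition would instead give log(effort(t)) = g * t,
# i.e. ideas growing only linearly in time despite exponential effort.
```

Under these assumed rates, effort^0.4 eventually dwarfs log(effort), which is one way to see why exponential-but-slower idea growth is better news than the logarithmic baseline the post describes.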

Wednesday, September 23rd 2020

26 · Thomas Kwa · 3d: I'm worried about EA values being wrong because EAs are unrepresentative of humanity and reasoning from first principles is likely to go wrong somewhere. But naively deferring to "conventional" human values seems worse, for a variety of reasons: * There is no single "conventional morality"; it seems very difficult to compile a list of what every human culture thinks of as good, and it's not obvious how one would form a "weighted average" between these. * Most people don't think about morality much, so their beliefs are likely to contradict known empirical facts (e.g. the cost of saving lives in the developing world) or be absurd (placing higher moral weight on beings that are physically closer to you). * Human cultures have gone through millennia of cultural evolution, such that the values of existing people are skewed to be adaptive, leading to greed, tribalism, etc.; Ian Morris says "each age gets the thought it needs". However, these problems all seem surmountable with a lot of effort. The idea is a team of EA anthropologists who would look at existing knowledge about what different cultures value (possibly doing additional research) and work with philosophers to cross-reference between these while fixing inconsistencies and removing values that seem to have an "unfair" competitive edge in the battle between ideas (whatever that means!). The potential payoff seems huge, as it would expand the basis of EA moral reasoning from the intuitions of a tiny fraction of humanity to those of thousands of human cultures, and allow us to be more confident about our actions. Is there a reason this isn't being done? Is it just too expensive?
9 · Michael_Wiebe · 4d: Will says: Is this not laughable? How could anyone think that "looking at the 1000+ year effects of an action" is workable?
6 · MichaelA · 3d: Here I list all the EA-relevant books I've read (well, mainly listened to as audiobooks) since learning about EA, in roughly descending order of how useful I perceive/remember them being to me. I share this in case others might find it useful, as a supplement to other book recommendation lists. (I found Rob Wiblin's, Nick Beckstead's, and/or Luke Muehlhauser's lists very useful.) That said, this isn't exactly a recommendation list, because some of the factors making these books more/less useful to me won't generalise to most other people, and because I'm including all relevant books I've read (not just the top picks). Google Doc version here. Let me know if you want more info on why I found something useful or not so useful, where you can find the book, etc. See also this list of EA-related podcasts and this list of sources of EA-related videos. 1. The Precipice * Superintelligence may have influenced me more, but that's just because I read it very soon after getting into EA, whereas I read The Precipice after already learning a lot. I'd now recommend The Precipice first. 2. Superforecasting 3. How to Measure Anything 4. Rationality: From AI to Zombies * I.e., "the sequences" 5. Superintelligence * Maybe this would've been a little further down the list if I'd already read The Precipice. 6. Expert Political Judgment * I read this a…
4 · Halffull · 3d: Is there much EA work on tail risk from GMOs ruining crops or ecosystems? If not, why not?

Tuesday, September 22nd 2020

15 · Ozzie Gooen · 4d: EA seems to have been doing a pretty great job attracting top talent from the most prestigious universities. While we attract a minority of the total pool, I imagine we get some of the most altruistic+rational+agentic individuals. If this continues, it could be worth noting that this could have significant repercussions for areas outside of EA: the fields we divert this talent from. We may be diverting a significant fraction of the future "best and brightest" away from non-EA fields. If this seems possible, it's especially important that we do a really, really good job making sure that we are giving them good advice.
1 · Markus_Woltjer · 5d: My name is Markus Woltjer. I'm a computer scientist living in Portland, Oregon. I have an interest in developing a blue carbon capture-and-storage project. This project is still in its inception, but I am already looking for expertise in the following areas, starting mostly with remote research roles. * Botany and plant decomposition * Materials science * Environmental engineering Please contact me here or at [] if you're interested, and I will be happy to fill in more details and discuss whether your background and interests are aligned with the roles available.

Monday, September 21st 2020

4 · Mati_Roy · 5d: Is there a name for a moral framework where someone cares more about the moral harm they directly cause than about other moral harm? I feel like a consequentialist would care about the harm itself, whether or not it was caused by them. And a deontologist wouldn't act in a certain way even if it meant they would act that way less in the future. Here's an example (it's just a toy example; let's not argue whether it's true or not). A consequentialist might eat meat if they can use the saved resources to make 10 other people vegans. A deontologist wouldn't eat honey even if they knew they would crack in the future and start eating meat. If you care much more about the harm caused by you, you might act differently from both of them: you wouldn't eat meat to make 10 other people vegan, but you might eat honey to avoid later cracking and starting to eat meat. A deontologist is like someone adopting that framework, but with an empty individualist approach. A consequentialist is like someone adopting that framework, but with an open individualist approach. I wonder if most self-labelled deontologists would actually prefer this framework I'm proposing. ETA: I'm not sure how well "directly caused" can be cashed out. Does anyone have a model for that? x-post: [] (post currently pending)
3 · aysu · 6d: I am relatively new to the community and am still getting acquainted with the shared knowledge and resources. I have been wondering what the prevailing thoughts are regarding growing the EA community or growing the use of EA-style thought frameworks. The latter is a bit imprecise, but, at a glance, it appears to me that having more organizations and media outlets communicate in a more impact-aware way may have a very high expected value. What are people's thoughts on this problem? It likely fits into a meta-category of EA work, but lately I have been feeling that EA messaging and spread is an underserved area for improvement. It's possible I'm simply unaware of some difficulties or existing related efforts.

Saturday, September 19th 2020

50 · Buck · 8d: I've recently been thinking about medieval alchemy as a metaphor for longtermist EA. I think there's a sense in which it was an extremely reasonable choice to study alchemy. The basic hope of alchemy was that by fiddling around in various ways with substances you had, you'd be able to turn them into other things which had various helpful properties. It would be a really big deal if humans were able to do this. And it seems a priori pretty reasonable to expect that humanity could get way better at manipulating substances, because there was an established history of people figuring out ways that you could do useful things by fiddling around with substances in weird ways, for example metallurgy or glassmaking, and we have lots of examples of materials having different and useful properties. If you had been particularly forward-thinking, you might even have noted that it seems plausible that we'll eventually be able to do the full range of manipulations of materials that life is able to do. So I think that alchemists deserve a lot of points for spotting a really big and important consideration about the future. (I actually have no idea if any alchemists were thinking about it this way; that's why I billed this as a metaphor rather than an analogy.) But they weren't really very correct about how anything worked, and so most of their work before 1650 was pretty useless. It's interesting to think about whether EA is in a similar spot. I think EA has done a great job of identifying crucial and underrated considerations about how to do good and what the future will be like, e.g. x-risk and AI alignment. But I think our ideas for acting on these considerations seem much more tenuous. And it wouldn't be super shocking to find out that later generations of longtermists think that our plans and ideas about the world are similarly inaccurate.
So what should you have done if you were an alchemist in the 1500s who agreed with this argument that you had some really underrated con…
13 · Denise_Melchin · 7d: [status: mostly sharing long-held feelings & intuitions, but I have not exposed them to scrutiny before] I feel disappointed in the focus on longtermism in the EA community. This is not because of empirical views about e.g. the value of x-risk reduction, but because we seem to be doing cause prioritisation based on a fairly rare set of moral beliefs (that people in the far future matter as much as people today), at the expense of cause prioritisation models based on other moral beliefs. The way I see the potential of the EA community is in helping people to understand their values and then actually try to optimize for them, whatever they are. What the EA community brings to the table is the idea that we should prioritise between causes, that triaging is worth it. If we focus the community on longtermism, we lose out on lots of other people with different moral views who could really benefit from the 'effectiveness' idea in EA. This has some limits; there are some views I consider morally atrocious, and I prefer not giving those people the tools to more effectively pursue their goals. But overall, I would much prefer for more people to have access to cause prioritisation tools, not just people who find longtermism appealing. What underlies this view is possibly that I think the world would be a better place if most people had better tools to do the most good, whatever they consider good to be (if you want to use SSC jargon, you could say I favour mistake theory over conflict theory). I appreciate this might not necessarily be true from a longtermist perspective, especially if you take the arguments around cluelessness seriously. If you don't even know what is best to do from a longtermist perspective, you can hardly say the world would be better off if more people tried to pursue their moral views more effectively.
4 · Stefan_Schubert · 7d: On encountering global priorities research (from my blog). People who are new to a field usually listen to experienced experts. Of course, they don't uncritically accept whatever they're told. But they tend to feel that they need fairly strong reasons to dismiss the existing consensus. But people who encounter global priorities research (the study of what actions would improve the world the most) often take a different approach. Many disagree with global priorities researchers' rankings of causes, preferring a ranking of their own. This can happen for many reasons, and there's some merit to several of them. First, as global priorities researchers themselves acknowledge, there is much more uncertainty in global priorities research than in most other fields. Second, global priorities research is a young and not very well-established field. But there are other factors that may make people defer less to existing global priorities research than is warranted. I think I did, when I first encountered the field. First, people often have unusually strong feelings about global priorities. We often feel strongly for particular causes or particular ways of improving the world, and don't like to hear that they are ineffective. So we may not listen to rankings of causes that we disagree with. Second, most intellectually curious people have usually put some thought into the questions that global priorities research studies, even if they've never heard of the field itself. This is especially so since most academic disciplines have some relation to global priorities research. So people typically have a fair amount of relevant knowledge. That's good in some ways, but can also make them overconfident of their ability to judge existing global priorities research. Identifying the most effective ways of improving the world…
3 · antimonyanthony · 7d: The Repugnant Conclusion is worse than I thought. At the risk of belaboring the obvious to anyone who has considered this point before: the RC glosses over the exact content of the happiness and suffering that are summed up into the quantities of "welfare" defining world A and world Z. In world A, each life with welfare 1,000,000 could, on one extreme, consist purely of (a) good experiences that sum in intensity to a level of 1,000,000, or on the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 999,000,000. Similarly, each of the lives of welfare 1 in world Z could be (a) purely level-1 good experiences, or (b) level-1,000,001 good experiences minus level-1,000,000 bad experiences. To my intuitions, it's pretty easy to accept the RC if our conception of worlds A and Z is the pair (a, a) from the (of course non-exhaustive) possibilities above, even more so for (b, a). However, the RC is extremely unpalatable if we consider the pair (a, b). This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony. To drive home how counterintuitive that is, we can apply the same reasoning often applied against NU views: suppose the level-1,000,001 happiness in each being in world Z is compressed into one millisecond of some super-bliss, contained within a life of otherwise unremitting misery. There doesn't appear to be any temporal ordering of the experiences of each life in world Z such that this conclusion isn't morally absurd to me. (Going out with a bang sounds nice, but not nice enough to make the preceding pure misery worth it; remember this is a millisecond!)
This is even accounting for the possible scope neglect involved in considering the massive number of lives in world Z. Indeed, mult…
3 · Michael_Wiebe · 7d: What are the comparative statics for how uncertainty affects decision-making? How does a decision-maker's behavior differ under some uncertainty compared to no uncertainty? Consider a social planner problem where we make transfers to maximize total utility, given idiosyncratic shocks to endowments. There are two agents, A and B, with endowments e_A = 5 (with probability 1) and e_B = 0 with probability p, 10 with probability 1 - p. So B either gets nothing or twice as much as A. We choose a transfer T to solve: max_T u(5 - T) + p·u(0 + T) + (1 - p)·u(10 + T) s.t. 0 ≤ T ≤ 5. For a baseline, consider p = 0.5 and u = ln. Then we get an optimal transfer of T* ≈ 1.8. Intuitively, as p → 0, T* → 0 (if B gets 10 for sure, don't make any transfer from A to B), and as p → 1, T* → 2.5 (if B gets 0 for sure, split A's endowment equally). So that's a scenario with risk (known probabilities), but not uncertainty (unknown probabilities). What if we're uncertain about the value of p? Suppose we think p ~ F, for some distribution F over [0, 1]. If we maximize expected utility, the problem becomes: max_T E[u(5 - T) + p·u(0 + T) + (1 - p)·u(10 + T)] s.t. 0 ≤ T ≤ 5. Since the objective function is linear in probabilities, we end up with the same problem as before, except with E[p] instead of p. If we know the mean of F, we plug it in and solve as before. So it turns out that this form of uncertainty doesn't change the problem very much. Questions: - If we don't know the mean of F, is the problem simply intractable? Should we resort to maxmin utility? - What if we have a hyperprior over the mean of F? Do we just take another level of expectations, and end up with the same solution? - How does a stochastic dominance decision theory work here?
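The baseline planner problem above can be checked numerically. This is a minimal sketch of my own (the function names and the grid-search approach are illustrative assumptions, not from the post): maximize ln(5 - T) + p·ln(T) + (1 - p)·ln(10 + T) over 0 ≤ T ≤ 5 and verify the limiting behavior as p → 0 and p → 1.

```python
import math

def planner_objective(T, p):
    """Expected log-utility of transfer T when e_B = 0 w.p. p and e_B = 10 w.p. 1 - p."""
    return math.log(5 - T) + p * math.log(T) + (1 - p) * math.log(10 + T)

def optimal_transfer(p, steps=100_000):
    """Grid search over 0 < T < 5 (endpoints excluded, since ln(0) is undefined)."""
    grid = (i * 5 / steps for i in range(1, steps))
    return max(grid, key=lambda T: planner_objective(T, p))

print(round(optimal_transfer(0.5), 2))    # 1.83, matching the post's T* ≈ 1.8
print(round(optimal_transfer(0.001), 2))  # near 0: B almost surely gets 10, so barely transfer
print(round(optimal_transfer(0.999), 2))  # near 2.5: B almost surely gets 0, so split A's endowment
```

Solving the first-order condition -1/(5 - T) + p/T + (1 - p)/(10 + T) = 0 at p = 0.5 gives 2T² + 10T - 25 = 0, i.e. T* = (√300 - 10)/4 ≈ 1.83, so the grid search agrees with the post's figure.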

Friday, September 18th 2020

7 · Nathan Young · 8d: EA short story competition? Has anyone ever run a competition for EA-related short stories? Why would this be a good idea? * Narratives resonate with people and have been used to convey ideas for thousands of years. * It would be low-cost and fun. * Using voting on this forum, there is the same risk of "bad posts" as for any other post. How could it work? * Stories submitted under a tag on the EA Forum. * Rated by upvotes. * Max 5,000 words (I made this up; dispute it in the comments). * If someone wants to give a reward, there could be a prize for the highest-rated story. * If there is a lot of interest/quality, they could be collated and even published. * Since it would be measured by upvotes, it seems unlikely a destructive story would be highly rated (or no more likely than any other destructive post on the forum). Upvote if you think it's a good idea. If it gets more than 40 karma, I'll write one.
6 · Nathan Young · 8d: This perception-gap site would be a good format for learning and could be used in altruism. It reframes correcting biases as a fun prediction game. It's a site which gets you to guess what other political groups (Republicans and Democrats) think about issues. Why is it good? 1) It gets people thinking and predicting. They are asked a clear question about other groups and have to answer it. 2) It updates views in a non-patronising way: it turns out Democrats and Republicans are much less polarised than most people think (the stat they give is that people predict 50% of Republicans hold extreme views, when actually it's 30%). But rather than yelling this, or an annoying listicle, it gets people's consent and teaches them something. 3) It builds consensus. If we are actually closer to those we disagree with than we think, perhaps we could work with them. 4) It gives quick feedback. People learn best when given feedback close to the action. In this case, people are rapidly rewarded for thoughts like "probably most of group X are more similar to me than I first thought". Imagine: What percentage of neocons want institutional reform? What % of libertarians want an end to factory farming? What % of socialists want an increase in foreign direct aid? Conclusion: If you want to change people's minds, don't tell them stuff; get them to guess trustworthy values as a cutesy game.

Thursday, September 17th 2020

10 · Denise_Melchin · 9d: [epistemic status: musing] When I consider one part of AI risk as 'things go really badly if you optimise straightforwardly for one goal', I occasionally think about the similarity to criticisms of market economies (aka critiques of 'capitalism'). I am a bit confused why this does not come up explicitly, but possibly I have just missed it, or am conceptually confused. Some critiques of market economies hold that this is exactly the problem with market economies: they should maximize for what people want, but instead they maximize for profit, and these two goals are not as aligned as one might hope. You could just call it the market economy alignment problem. A paperclip maximizer might create all the paperclips, no matter what it costs and no matter what the programmers' intentions were. The Netflix recommender system recommends movies to people which glue them to Netflix, whether they endorse this or not, to maximize profit for Netflix. Some random company invents a product and uses marketing that makes having the product socially desirable, even though people would not actually have wanted it on reflection. These problems seem very alike to me. I am not sure where I am going with this; it does kind of feel to me like there is something interesting hiding here, but I don't know what. EA feels culturally opposed to 'capitalism critiques' to me, but they at least share this one line of argument. Maybe we are even missing out on a group of recruits. Some 'late-stage capitalism' memes seem very similar to Paul Christiano's "What Failure Looks Like" to me. Edit: Actually, I might be using the terms market economy and capitalism wrongly here and drawing the differences in the wrong places, but it's probably not important.
7 · evelynciara · 10d: I just listened to Andrew Critch's interview about "AI Research Considerations for Human Existential Safety" (ARCHES). I took some notes on the podcast episode, which I'll share here. I won't attempt to summarize the entire episode; instead, please see this summary of the ARCHES paper in the Alignment Newsletter. * We need to explicitly distinguish between "AI existential safety" and "AI safety" writ large. Saying "AI safety" without qualification is confusing for both people who focus on near-term AI safety problems and those who focus on AI existential safety problems; it creates a bait-and-switch for both groups. * Although existential risk can refer to any event that permanently and drastically reduces humanity's potential for future development (paraphrasing Bostrom 2013), ARCHES only deals with the risk of human extinction, because it's easier to reason about and because it's not clear which other non-extinction outcomes are existential events. * ARCHES frames AI alignment in terms of delegation from m ≥ 1 human stakeholders (such as individuals or organizations) to n ≥ 1 AI systems. Most alignment literature to date focuses on the single-single setting (one principal, one agent), but such settings in the real world are likely to evolve into multi-principal, multi-agent settings. Computer scientists interested in AI existential safety should pay more attention to the multi-multi setting relative to the single-single one, for the following reasons: * There are commercial incentives to develop AI systems that are aligned with respect to the single-single setting, but not to make sure they won't break down in the multi-multi setting. A group of AI systems that are "align…
1 · Mati_Roy · 9d: Policy suggestion for countries with government-funded health insurance or healthcare: people using death with dignity could receive part of the money thereby saved by the government, if applicable, which could be used to pay for cryonics, among other things.
