All Posts

Sorted by Magic (New & Upvoted)

October 2019

Shortform [Beta]
42 · Stefan_Schubert · 5d
The Nobel Prize in Economics [https://www.nobelprize.org/prizes/economic-sciences/2019/summary/] was awarded to Abhijit Banerjee, Esther Duflo and Michael Kremer "for their experimental approach to alleviating global poverty".
13 · jpaddison · 9d
Thus starts the most embarrassing post-mortem I've ever written. The EA Forum went down for 5 minutes today. My sincere apologies to anyone whose Forum activity was interrupted. I was first alerted by Pingdom [https://www.pingdom.com/], which I am very glad we set up. I immediately knew what was wrong. I had just hit "Stop" on the (long unused and just archived) CEA Staff Forum, which we built as a test of the technology. Except I actually hit stop on the EA Forum itself. I turned it back on and it took a long minute or two, but it was soon back up. ...

Lessons learned:
* I've seen sites that, after you press the big red button that says "Delete", make you enter the name of the service / repository / etc. you want to delete. I like those, but did not think of porting the idea to sites without that feature. I think I should install a TAP [https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps] so that whenever I hit a big red button, I confirm the name of the service I am stopping (a sketch of this pattern follows below).
* The speed of the fix leaned heavily on the fact that Pingdom was set up. But it doesn't catch everything. In case it misses something, I just changed my settings so that anyone can email me with "urgent" in the subject line and I will get notified on my phone, even if it is on silent. My email is jp at organizationwebsite [https://www.centreforeffectivealtruism.org].
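[Editor's sketch] A minimal illustration of the type-the-name-to-confirm pattern the first lesson describes, assuming a simple command-line context; the service name and the stop call are placeholders, not CEA's actual tooling:

```python
def confirm_stop(service_name: str) -> bool:
    """Require the operator to retype the exact service name before a destructive action."""
    typed = input(f'Type "{service_name}" to confirm stopping it: ')
    if typed != service_name:
        print("Name does not match; aborting.")
        return False
    return True

# Hypothetical usage; "cea-staff-forum" is a placeholder name.
if confirm_stop("cea-staff-forum"):
    print("Stopping cea-staff-forum...")  # the real stop/archive call would go here
```

The point of the pattern is that the confirmation requires recalling and typing the specific target, so hitting "Stop" on the wrong service fails loudly instead of silently succeeding.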
11 · Stefan_Schubert · 7d
Of possible interest regarding the efficiency of science: a paper [https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0223116&fbclid=IwAR0fvF3obK8i1hRd8sVKwYd5HAJGbnqbSeyrtEwhTU9xywIRFQb3py7jZiY] finds that scientists on average spend 52 hours per year formatting papers. (Times Higher Education write-up [https://www.timeshighereducation.com/news/academics-lose-aweek-ayear-formatting-journal-papers]; extensive excerpts here [https://www.facebook.com/stefan.schubert.3954/posts/1218205841713137] if you don't have access.)
5 · Stefan_Schubert · 15d
Hostile review [https://www.nature.com/articles/d41586-019-02939-0?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews&sf220740036=1&fbclid=IwAR2aZ4t7k7yQ1ImBTlzFqd2yAuiLtbGmD0v_Z1VDxrGgzzsgQ1NxtsBQQuo] of Stuart Russell's new book Human Compatible [https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS] in Nature. (I disagree with the review.) From the review:

Russell, however, fails to convince that we will ever see the arrival of a "second intelligent species". What he presents instead is a dizzyingly inconsistent account of "intelligence" that will leave careful readers scratching their heads. His definition of AI reduces this quality to instrumental rationality. Rational agents act intelligently, he tells us, to the degree that their actions aim to achieve their objectives, hence maximizing expected utility. This is likely to please hoary behavioural economists, with proclivities for formalization, and AI technologists squeaking reward functions onto whiteboards. But it is a blinkered characterization, and it leads Russell into absurdity when he applies it to what he calls "overly intelligent" AI. Russell's examples of human purpose gone awry in goal-directed superintelligent machines are bemusing. He offers scenarios such as a domestic robot that roasts the pet cat to feed a hungry child, an AI system that induces tumours in every human to quickly find an optimal cure for cancer, and a geoengineering robot that asphyxiates humanity to deacidify the oceans. One struggles to identify any intelligence here.
5 · jpaddison · 18d
We're planning Q4 goals for the Forum. Do you use the Forum? (Probably, considering.) Do you have feelings about the Forum? If you send me a PM, one of the CEA staffers running the Forum (myself or Aaron [https://forum.effectivealtruism.org/users/aarongertler]) will set up a call where you can tell us all the things you think we should do.

September 2019

Shortform [Beta]
48 · jpaddison · 1mo
Appreciation post for Saulius

I realized recently that the same author [https://forum.effectivealtruism.org/users/saulius] that made the corporate commitments [https://forum.effectivealtruism.org/posts/XdekdWJWkkhur9gvr/will-companies-meet-their-animal-welfare-commitments] post and the misleading cost-effectiveness post [https://forum.effectivealtruism.org/posts/zdAst6ezi45cChRi6/list-of-ways-in-which-cost-effectiveness-estimates-can-be] also made all three of these excellent posts on neglected animal welfare concerns that I remembered reading:
* Fish used as live bait by recreational fishermen [https://forum.effectivealtruism.org/posts/gGiiktK69R2YY7FfG/fish-used-as-live-bait-by-recreational-fishermen]
* Rodents farmed for pet snake food [https://forum.effectivealtruism.org/posts/pGwR2xc39PMSPa6qv/rodents-farmed-for-pet-snake-food]
* 35-150 billion fish are raised in captivity to be released into the wild every year [https://forum.effectivealtruism.org/posts/4FSANaX3GvKHnTgbw/35-150-billion-fish-are-raised-in-captivity-to-be-released]
For the first, he received this notable comment [https://forum.effectivealtruism.org/posts/gGiiktK69R2YY7FfG/fish-used-as-live-bait-by-recreational-fishermen#FfySjSzLL8YFZpih5] from OpenPhil's Lewis Bollard. An honorable mention goes to this post [https://forum.effectivealtruism.org/posts/SMRHnGXirRNpvB8LJ/fact-checking-comparison-between-trachoma-surgeries-and], which I also remembered, for doing good epistemic work fact-checking a commonly cited comparison.
28 · Linch · 1mo
Cross-posted from Facebook [https://www.facebook.com/linchuan.zhang/posts/2407342496023187]. Sometimes I hear people who caution humility say something like "this question has stumped the best philosophers for centuries/millennia. How could you possibly hope to make any progress on it?". While I concur that humility is frequently warranted and that in many specific cases that injunction is reasonable [1], I think the framing is broadly wrong. In particular, using geologic time rather than anthropological time hides the fact that there probably weren't that many people actively thinking about these issues, especially carefully, in a sustained way, and making sure to build on the work of the past. For background, 7% of all humans who have ever lived are alive today, and living people compose 15% of total human experience [2] so far!!! It will not surprise me if there are about as many living philosophers today as there were dead philosophers in all of written history. For some specific questions that particularly interest me (e.g. population ethics, moral uncertainty), the total research work done on these questions is generously less than five philosopher-lifetimes. Even for classical age-old philosophical dilemmas/"grand projects" (like the hard problem of consciousness), total work spent on them is probably less than 500 philosopher-lifetimes, and quite possibly less than 100. There are also solid outside-view reasons to believe that the best philosophers today are just much more competent [3] than the best philosophers in history, and have access to many more resources [4]. Finally, philosophy can build on progress in the natural and social sciences (e.g. computers, game theory). Speculating further, it would not surprise me if, say, a particularly thorny and deeply important philosophical problem could effectively be solved in 100 more philosopher-lifetimes. Assuming 40 years of work and $200,000/year per philosopher, including overhead, this is ~$800 million.
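[Editor's sketch] A quick check of that final figure, using only the assumptions the post itself states (100 philosopher-lifetimes, 40 years of work each, $200,000 per year including overhead):

```python
lifetimes = 100          # philosopher-lifetimes (the post's speculation)
years = 40               # years of work per philosopher
cost_per_year = 200_000  # USD per philosopher-year, including overhead

total = lifetimes * years * cost_per_year
print(f"${total:,}")  # $800,000,000, i.e. ~$800 million
```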
14 · jpaddison · 1mo
Posting this on shortform rather than as a comment because I feel like it's more personal musings than a contribution to the audience of the original post. Things I'm confused about after reading Will's post, Are we living at the most influential time in history? [https://forum.effectivealtruism.org/posts/XXLf6FmWujkxna3E6/are-we-living-at-the-most-influential-time-in-history-1]:
* What should my prior be about the likelihood of being at the hinge of history (HoH)? I feel really interested in this question, but haven't even fully read the comments on the subject. TODO.
* How much evidence do I have for the Yudkowsky-Bostrom framework? I'd like to get better at comparing the strength of an argument to the power of a study.
* Suppose I think that this argument holds. Then it seems like I can make claims about AI occurring because I've thought about the prior that I have a lot of influence. I keep going back and forth about whether this is a valid move. I think it just is, but I assign some credence that I'd reject it if I thought more about it.
* What should my estimate of the likelihood we're at the HoH be if I'm 90% confident in the arguments presented in the post? (See the sketch below.)
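[Editor's sketch] One way to make that last question concrete is a simple mixture over whether the post's arguments hold. Both conditional probabilities below are placeholders for illustration; neither comes from Will's post:

```python
p_args = 0.90           # credence that the arguments in the post hold (from the question)
p_hoh_if_args = 0.001   # placeholder: P(HoH) if the post's arguments go through
p_hoh_otherwise = 0.05  # placeholder: P(HoH) under one's previous view

# Law of total probability over "the arguments hold" vs. "they don't"
p_hoh = p_args * p_hoh_if_args + (1 - p_args) * p_hoh_otherwise
print(f"P(HoH) = {p_hoh:.4f}")
```

The structure shows why even 90% confidence in the arguments leaves the answer dominated by the residual 10% if one's prior estimate is much larger than the conditional one.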
13 · Kerry_Vaughan · 1mo
The scaffolding problem in early stage science

Part of the success of science comes from the creation and use of scientific instruments. Yet, before you can make good use of any new scientific instrument, you have to first solve what I'm going to call the "scaffolding problem." A scientific instrument is, broadly speaking, any device or tool that you can use to study the world. At the most abstract level, the way a scientific instrument works is that it interacts with the world in some way, resulting in a change in its state. You then study the change in the instrument's state as a way of learning about the world. For example, imagine you want to use a thermometer to learn the temperature of a cup of water. Instead of studying the water directly, the thermometer lets you study the thermometer itself to learn the temperature. For a device as well-calibrated as a modern thermometer, this works extremely well. Now imagine you've invented some new scientific instrument and you want to figure out whether it works. How would you go about doing that? This is a surprisingly difficult problem. Here's an abstract way of stating it:
1. We want to learn about some phenomenon, X.
2. X is not directly observable, so we infer it from some other phenomenon, Y.
3. If we want to know whether Y tells us about X, we cannot use Y itself; we must use some other phenomenon, Z.
4. If Z is supposed to tell us about X, then either:
4a) there's no need to infer X from Y; we should just infer it from Z, OR
4b) we have to explain why we can infer X from Z, which repeats this problem.
To understand the problem, take the case of the thermometer. If we have the world's first thermometer, what we want to know is whether the thermometer tells us about the temperature. But to do that we need to know the temperature. And if we knew the temperature, there wouldn't be a need to invent a thermometer in the first place. Given that we have sc
12 · casebash · 1mo
If we run any more anonymous surveys, we should encourage people to pause and consider whether they are contributing productively or just venting. I'd still be in favour of sharing all the responses, but I have enough faith in my fellow EAs to believe that some would take this to heart.

August 2019

Shortform [Beta]
25 · BenMillwood · 2mo
Lead with the punchline when writing to inform

The convention in a lot of public writing is to mirror the style of writing for profit, optimized for attention. In a co-operative environment, you instead want to optimize to convey your point quickly, to only the people who benefit from hearing it. We should identify ways in which these goals conflict; the most valuable pieces might look different from what we think of when we think of successful writing.
* Consider who doesn't benefit from your article, and whether you can help them filter themselves out.
* Consider how people might skim-read your article, and how to help them derive value from it.
* Lead with the punchline – see if you can make the most important sentence in your article the first one.
* Some information might be clearer in a non-discursive structure (like… bullet points, I guess). Writing to persuade might still be best done discursively, but if you anticipate your audience already being sold on the value of your information, just present the information as you would if you were presenting it to a colleague on a project you're both working on.
21 · Khorton · 2mo
What is the global burden of menopause?

Symptoms include hot flushes, difficulty sleeping, vaginal irritation or pain, headaches, and low mood or anxiety. These symptoms normally last around five years, although 10% of women experience them for up to 12 years. I couldn't find a Disability-Adjusted Life Year (DALY) weight for menopause. I'd imagine that it might have a similar impact to mild depression, which in 2004 was rated as 0.140. Currently, about 200 million people are going through menopause, 80% of whom are experiencing symptoms. I'd expect this to increase to 300 million by 2050. A leading menopause charity in the UK has an annual budget of less than £500k, despite the 4 million British women going through menopause, so I think menopause treatment in the UK could be improved with relatively little money.* I'm not sure that would create very helpful spillovers to countries where Hormone Replacement Therapy isn't cheaply accessible. On the other hand, online Cognitive Behavioral Therapy is starting to be used to treat some symptoms, and that could probably be scaled up more easily.

*Improving diagnosis and doctor awareness of treatment options seems tractable, but there are some supply chain problems right now which seem less tractable. https://www.bbc.co.uk/news/health-49308083
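[Editor's sketch] A rough back-of-the-envelope using only the post's own numbers: 200 million people currently in menopause, 80% symptomatic, and the guessed disability weight of 0.140 borrowed by analogy from mild depression (the weight is the post's guess, not an established DALY figure):

```python
people = 200_000_000       # currently going through menopause (post's estimate)
symptomatic = 0.80         # share experiencing symptoms (post's estimate)
disability_weight = 0.140  # guessed by analogy with mild depression (2004)

burden = people * symptomatic * disability_weight
print(f"~{burden / 1e6:.1f} million DALYs per year")  # ~22.4 million
```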
14 · jpaddison · 2mo
This first shortform comment on the EA Forum will be both a seed for the page and a description. Shortform is an experimental feature brought in from LessWrong to give posters a place to put down quickly written thoughts, with less pressure to reach the length / quality of a full post.
13 · jpaddison · 2mo
On the incentives of climate science

Alright, the title sounds super conspiratorial, but I hope the content is just boring. Epistemic status: speculating, somewhat confident in the dynamic existing. Climate science as published by the IPCC tends to 1) be pretty rigorous, and 2) not spend much effort on the tail risks. I have a model that they do this because of their incentives for what they're trying to accomplish. They're in a politicized field, where the methodology is combed over and mistakes are harshly criticized. Also, they want to show enough damage from climate change to make it clear that it's a good idea to institute policies reducing greenhouse gas emissions. Thus they only need to show some significant damage, not a globally catastrophic one. And they want to maintain as much rigor as possible to prevent the discovery of mistakes, and it's easier to be rigorous about things that are likely than about tail risks. Yet I think longtermist EAs should be more interested in the tail risks. If I'm right, then the questions we're most interested in are underrepresented in the literature.
12 · saulius · 2mo
I sometimes meet people who claim to be vegetarians (they don't eat meat but consume milk and eggs) out of a desire to help animals. If appropriate, I show them the http://ethical.diet/ website and explain that the production of eggs likely requires more suffering per calorie than most of the commonly consumed meat products. Hence, if they care about animals, avoiding eggs should be a priority. If they say that this is too many food products to give up, I suggest that perhaps instead of eating eggs, they could occasionally consume some beef (although that is bad for the environment). I think that the production of beef requires less suffering per calorie, even though I'm unsure how to compare suffering between different animals. In general, I'm skeptical about dietary change advocacy, but my intuition is that talking about this with vegetarians in situations where it feels appropriate is worth the effort. But I'm uncertain, and either way, I don't think this is very important.
