All Posts

Sorted by Magic (New & Upvoted)

Week Of Sunday, October 13th 2019

Shortform [Beta]
42 · Stefan_Schubert · 5d
The Nobel Prize in Economics [https://www.nobelprize.org/prizes/economic-sciences/2019/summary/] awarded to Abhijit Banerjee, Esther Duflo and Michael Kremer "for their experimental approach to alleviating global poverty".
4 · Stefan_Schubert · 4d
Andrew Gelman argues [https://statmodeling.stat.columbia.edu/2019/10/15/think-scientifically-scientists-proposals-fixing-science-2/] that scientists' proposals for fixing science are themselves not always very scientific:

If you've gone to the trouble to pick up (or click on) this volume in the first place, you've probably already seen, somewhere or another, most of the ideas I could possibly propose on how science should be fixed. My focus here will not be on the suggestions themselves but rather on what are our reasons for thinking these proposed innovations might be good ideas. The unfortunate paradox is that the very aspects of "junk science" that we so properly criticize—the reliance on indirect, highly variable measurements from nonrepresentative samples, open-ended data analysis, followed up by grandiose conclusions and emphatic policy recommendations drawn from questionable data—all seem to occur when we suggest our own improvements to the system. All our carefully-held principles seem to evaporate when our emotions get engaged.
3 · Ramiro · 2d
Why don't we have an "Effective App"? See, e.g., Ribon [https://home.ribon.io/english/] - an app that gives you points ("ribons") for reading positive news (e.g. "handicapped walks again thanks to exoskeleton") sponsored by corporations; you then choose one of the TLYCS charities, and your points are converted into a donation. Ribon is a Brazilian for-profit; they claim to donate 70% [http://blog.ribon.io/2019/09/03/conheca-o-caminho-do-dinheiro-na-ribon/] of what they receive from sponsors, but I haven't found precise stats. It has skyrocketed [http://blog.ribon.io/2019/08/19/comprovante-de-doacoes-%ef%bd%9c-abril-e-maio-de-2019/] this year: from their reported impact, I estimate they have donated about US$33k to TLYCS - which is a lot by Brazilian standards. They intend to expand (they raised more than R$1 million - roughly US$250k - from investors [https://www.startse.com/noticia/startups/60773/startup-de-doacoes-ribon-abre-nova-captacao-apos-aporte-de-r-1-milhao] this year) and will soon launch an ICO. Perhaps an EA non-profit could do even more good?
2 · evelynciara · 5d
A series of polls by the Chicago Council on Global Affairs [https://www.thechicagocouncil.org/publication/record-number-americans-say-international-trade-good-us-economy] shows that Americans increasingly support free trade and believe that free trade is good for the U.S. economy (87%, up from 59% in 2016). This is probably a reaction to the negative effects and press coverage of President Trump's trade wars - anecdotally, I have seen a lot of progressives who would otherwise not care about or support free trade criticize policies such as Trump's steel tariffs as reckless. I believe this presents a unique window of opportunity to educate the American public about the benefits of globalization. Kimberly Clausing is doing this in her book, Open: The Progressive Case for Free Trade, Immigration, and Global Capital [https://smile.amazon.com/Open-Progressive-Immigration-Global-Capital/dp/0674919335/], in which she defends free trade and immigration to the U.S. from the standpoint of American workers.
1 · Khorton · 6d
AI policy is probably less neglected than you think it is. There are more than 50 AI policy jobs in the UK government; when one is advertised, it gets 50-100 applicants. The Social Sciences and Humanities Research Council of Canada is really excited about funding AI policy research: http://www.sshrc-crsh.gc.ca/funding-financement/programs-programmes/fellowships/doctoral-doctorat-eng.aspx AI policy is very important, but at this point it's also very mainstream.

Week Of Sunday, October 6th 2019

Shortform [Beta]
13 · jpaddison · 9d
Thus starts the most embarrassing post-mortem I've ever written. The EA Forum went down for 5 minutes today. My sincere apologies to anyone whose Forum activity was interrupted.

I was first alerted by Pingdom [https://www.pingdom.com/], which I am very glad we set up. I immediately knew what was wrong. I had just hit "Stop" on the (long unused and just archived) CEA Staff Forum, which we built as a test of the technology. Except I actually hit stop on the EA Forum itself. I turned it back on and it took a long minute or two, but was soon back up.

...

Lessons learned:
* I've seen sites that, after you press the big red button that says "Delete", make you enter the name of the service / repository / etc. you want to delete. I like those, but hadn't thought of porting the idea to sites without that feature. I think I should install a TAP [https://www.lesswrong.com/posts/wJutA2czyFg6HbYoW/what-are-trigger-action-plans-taps] so that whenever I hit a big red button, I confirm the name of the service I am stopping.
* The speed of the fix leaned heavily on the fact that Pingdom was set up. But it doesn't catch everything. In case it misses something, I just changed my settings so that anyone can email me with "urgent" in the subject line and I will get notified on my phone, even if it is on silent. My email is jp at organizationwebsite [https://www.centreforeffectivealtruism.org].
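For concreteness, here is a minimal sketch of the "type the name to confirm" guard described in the first lesson above. This is hypothetical illustrative code, not the Forum's actual admin tooling; the function and service names are made up:

```python
# Hypothetical sketch of a confirm-by-name guard for destructive actions.
# Not the Forum's actual tooling; all names here are illustrative.

def confirm_by_name(service_name: str) -> bool:
    """Require the operator to retype the exact service name before proceeding."""
    typed = input(f'Type "{service_name}" to confirm stopping it: ')
    return typed.strip() == service_name

def stop_service(service_name: str) -> None:
    if not confirm_by_name(service_name):
        print("Aborted: name did not match.")
        return
    print(f"Stopping {service_name}...")  # the actual dangerous call would go here

if __name__ == "__main__":
    stop_service("EA Forum")
```

The point of the pattern is that the confirmation cost scales with specificity: a generic "Are you sure?" dialog is clicked through by habit, while retyping "EA Forum" forces the operator to notice which service they are about to stop.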
11 · Stefan_Schubert · 7d
Of possible interest regarding the efficiency of science: a paper [https://journals.plos.org/plosone/article?id=10.1371%2Fjournal.pone.0223116&fbclid=IwAR0fvF3obK8i1hRd8sVKwYd5HAJGbnqbSeyrtEwhTU9xywIRFQb3py7jZiY] finds that scientists on average spend 52 hours per year formatting papers. (Times Higher Education write-up [https://www.timeshighereducation.com/news/academics-lose-aweek-ayear-formatting-journal-papers]; extensive excerpts here [https://www.facebook.com/stefan.schubert.3954/posts/1218205841713137] if you don't have access.)
3 · edoarad · 10d
https://collapseos.org [https://collapseos.org]: an operating system that should work from scrap materials in the case of civilizational collapse. Very interesting. It turns out that there is an active subreddit on civilizational collapse, r/collapse. It seems that WE ARE ALL GOING TO DIEEE!
1 · edoarad · 9d
Reading Multiagent Models of Mind [https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip/p/x4n4jcoDP7xh5LWLq] and considering the moral patienthood of different cognitive processes: A trolley is headed toward a healthy individual lying carelessly on the track. You are next to a lever, and can switch the trolley to a second track, but on that track there is an individual with a split brain [https://en.wikipedia.org/wiki/Split-brain]. What do you do?

Week Of Sunday, September 29th 2019

Shortform [Beta]
5 · Stefan_Schubert · 15d
Hostile review [https://www.nature.com/articles/d41586-019-02939-0?utm_source=twt_nnc&utm_medium=social&utm_campaign=naturenews&sf220740036=1&fbclid=IwAR2aZ4t7k7yQ1ImBTlzFqd2yAuiLtbGmD0v_Z1VDxrGgzzsgQ1NxtsBQQuo] of Stuart Russell's new book Human Compatible [https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem-ebook/dp/B07N5J5FTS] in Nature. (I disagree with the review.)

Russell, however, fails to convince that we will ever see the arrival of a "second intelligent species". What he presents instead is a dizzyingly inconsistent account of "intelligence" that will leave careful readers scratching their heads. His definition of AI reduces this quality to instrumental rationality. Rational agents act intelligently, he tells us, to the degree that their actions aim to achieve their objectives, hence maximizing expected utility. This is likely to please hoary behavioural economists, with proclivities for formalization, and AI technologists squeaking reward functions onto whiteboards. But it is a blinkered characterization, and it leads Russell into absurdity when he applies it to what he calls "overly intelligent" AI.

Russell's examples of human purpose gone awry in goal-directed superintelligent machines are bemusing. He offers scenarios such as a domestic robot that roasts the pet cat to feed a hungry child, an AI system that induces tumours in every human to quickly find an optimal cure for cancer, and a geoengineering robot that asphyxiates humanity to deacidify the oceans. One struggles to identify any intelligence here.
5 · jpaddison · 18d
We're planning Q4 goals for the Forum. Do you use the Forum? (Probably, considering.) Do you have feelings about the Forum? If you send me a PM, one of the CEA staffers running the Forum (myself or Aaron [https://forum.effectivealtruism.org/users/aarongertler]) will set up a call where you can tell us all the things you think we should do.
3 · Stefan_Schubert · 15d
Philosopher Eric Schwitzgebel argues [https://schwitzsplinters.blogspot.com/2019/10/what-makes-for-good-philosophical.html?fbclid=IwAR0qIIlX3kJEqoSfmyoG4T8sJiKh-TVhY34rlOPtKxo_182H9i5SMlql_m8] that good philosophical arguments should be such that the target audience ought to be moved by them, but that such arguments are difficult to make regarding animal consciousness, since there is no common ground:

The Common Ground Problem is this. To get an argument going, you need some common ground with your intended audience. Ideally, you start with some shared common ground, and then maybe you also introduce factual considerations from science or elsewhere that you expect they will (or ought to) accept, and then you deliver the conclusion that moves them your direction. But on the question of animal consciousness specifically, people start so far apart that finding enough common ground to reach most of the intended audience becomes a substantial problem, maybe even an insurmountable problem.

Cf. his paper Is There Something It's Like to Be a Garden Snail? [https://faculty.ucr.edu/~eschwitz/SchwitzPapers/Snails-181025.pdf]:

The question "are garden snails phenomenally conscious?" or equivalently "is there something it's like to be a garden snail?" admits of three possible answers: yes, no, and denial that the question admits of a yes-or-no answer. All three answers have some antecedent plausibility, prior to the application of theories of consciousness. All three answers retain their plausibility also after the application of theories of consciousness. This is because theories of consciousness, when applied to such a different species, are inevitably question-begging and rely partly on dubious extrapolation from the introspections and verbal reports of a single species.

Week Of Sunday, September 22nd 2019

Shortform [Beta]
13 · Kerry_Vaughan · 1mo
The scaffolding problem in early stage science

Part of the success of science comes from the creation and use of scientific instruments. Yet, before you can make good use of any new scientific instrument, you have to first solve what I'm going to call the "scaffolding problem."

A scientific instrument is, broadly speaking, any device or tool that you can use to study the world. At the most abstract level, the way a scientific instrument works is that it interacts with the world in some way, resulting in a change in its state. You then study the change in the instrument's state as a way of learning about the world. For example, imagine you want to use a thermometer to learn the temperature of a cup of water. Instead of studying the water directly, you study the thermometer itself to learn the temperature. For a device as well-calibrated as a modern thermometer, this works extremely well.

Now imagine you've invented some new scientific instrument and you want to figure out whether it works. How would you go about doing that? This is a surprisingly difficult problem. Here's an abstract way of stating it:
1. We want to learn about some phenomenon, X.
2. X is not directly observable, so we infer it from some other phenomenon, Y.
3. If we want to know whether Y tells us about X, we cannot use Y itself; we must use some other phenomenon, Z.
4. If Z is supposed to tell us about X, then either:
4a) There's no need to infer X from Y; we should just infer it from Z, OR
4b) We have to explain why we can infer X from Z, which repeats this problem.
To understand the problem, take the case of the thermometer. If we have the world's first thermometer, what we want to know is whether the thermometer tells us about the temperature. But to do that we need to know the temperature. And if we knew the temperature, there wouldn't be a need to invent a thermometer in the first place. Given that we have sc…
9 · Ruby · 25d
Just a thought: there's the common advice that fighting all out with the utmost desperation makes sense for very brief periods, a few weeks or months, but doing so for longer leads to burnout. So you get sayings like "it's a marathon, not a sprint." But I wonder if the length of the "fight"/"war" isn't the only variable in sustainable effort. Other key ones might be the degree of ongoing feedback and certainty about the cause.

Though I expect a multiyear war which is an existential threat to your home and family to be extremely taxing, I imagine soldiers experiencing less burnout than people investing similar effort in a far-mode cause - say, global warming, which might be happening, but is slow, and your contributions to preventing it unclear. (Actual soldiers may correct me on this, and I can believe war is very traumatizing, though I will still ask how much they believed in the war they were fighting.) (Perhaps the relevant variables here are something like Hanson's Near vs. Far mode thinking, where hard effort for a far-mode cause more readily leads to burnout than for a near-mode one, even when sustained for long periods.)

Then of course there's EA and X-risk generally, where burnout [https://forum.effectivealtruism.org/posts/NDszJWMsdLCB4MNoy/burnout-what-is-it-and-how-to-treat-it] is common. Is this just because of the time scales involved, or is it because trying to work on x-risk is subject to so much uncertainty and paucity of feedback? Who knows if you're making a positive difference? Contrast with a Mario character toiling for years to rescue the princess he is certain is locked in a castle waiting [https://www.lesswrong.com/posts/SGR4GxFK7KmW7ckCB/something-to-protect]. Fighting enemy after enemy, sleeping on cold stone night after night, eating scraps. I suspect Mario, with his certainty and much more concrete sense of progress, might be able to expend much more effort and endure much more hardship for much longer than is sustainable in the EA/X-risk space.
4 · RobBensinger · 1mo
Rolf Degen, summarizing part of Barbara Finlay's "The neuroscience of vision and pain [https://royalsocietypublishing.org/doi/abs/10.1098/rstb.2019.0292]": Humans may have evolved to experience far greater pain, malaise and suffering than the rest of the animal kingdom, due to their intense sociality giving them a reasonable chance of receiving help.

From the paper:

Several years ago, we proposed the idea that pain and sickness behaviour had become systematically increased in humans compared with our primate relatives, because human intense sociality allowed that we could ask for help and have a reasonable chance of receiving it. We called this hypothesis 'the pain of altruism' [68]. This idea derives from, but is a substantive extension of, Wall's account of the placebo response [43]. Starting from human childbirth as an example (but applying the idea to all kinds of trauma and illness), we hypothesized that labour pains are more painful in humans so that we might get help, an 'obligatory midwifery' which most other primates avoid and which improves survival in human childbirth substantially ([67]; see also [69]). Additionally, labour pains do not arise from tissue damage, but rather predict possible tissue damage and a considerable chance of death. Pain and the duration of recovery after trauma are extended, because humans may expect to be provisioned and protected during such periods. The vigour and duration of immune responses after infection, with attendant malaise, are also increased. Noisy expression of pain and malaise, coupled with an unusual responsivity to such requests, was thought to be an adaptation.

We noted that similar effects might have been established in domesticated animals and pets, and addressed issues of 'honest signalling' that this kind of petition for help raised. No implication that no other primate ever supplied or asked for help from any other was intended, nor any claim that animals do not feel pain. Rather, animals would experience pa…
