All Posts

Sorted by Magic (New & Upvoted)

Week Of Sunday, January 19th 2020

Shortform [Beta]
11 · Khorton · 6d
Who should pay the cost of Googling studies on the EA Forum?
1. Many EA Forum posts engage only minimally with the relevant academic literature.
2. If you see a Forum post that doesn't engage with literature you think is relevant, you could make a claim from memory without looking up a citation, but there's a reasonable chance you'll be wrong.
3. Many people say they'd rather see an imperfect post or comment than no post or comment at all.
4. But people tend to remember an original claim, even if it's later debunked.
5. Maybe the best option is to phrase my comment as a question: "Have you looked at the literature on X?"
5 · Ramiro · 5d
Shouldn't we have more EA editors in Philpapers categories? Philpapers [https://philpapers.org/] is a huge index/community of academic philosophers and texts, and a good place to start researching a topic. Part of the work is done by volunteer editors and assistants, who take on the responsibility of categorizing and including relevant bibliography; in exchange, they stay constantly in touch with the corresponding subject. Some EAs are already responsible for their fields; however, I noticed that some relevant EA-related categories currently have no editor (e.g. Impact of Artificial Intelligence). I wonder: wouldn't it be useful if EAs assumed these positions?

Week Of Sunday, January 12th 2020

Shortform [Beta]
10 · Khorton · 12d
There are some pretty good reasons to keep your identity small (http://www.paulgraham.com/identity.html). But I see people using that as an excuse not to identify as... anything. As in, they avoid affiliating themselves with any social movements, sports teams, schools, nation-states, professions, etc.

It can be annoying and confusing when you ask someone "are you an EA?" or "are you a Christian?" or "are you British?" and they won't give you a straight answer. It's partly annoying because I'm very rationally trying to make some shortcut assumptions about them (if they're an EA, they've probably heard of malaria) and they're preventing me from doing that. But I also sometimes get the sense that they're trying to protect themselves by not affiliating with a movement, and I find that a bit annoying. I feel like they're free riders.

What are they trying to protect themselves from? Effectively, they're protecting their reputation. This could be from an existing negative legacy of the group, e.g. if they don't identify as British (even though they're a British citizen), maybe they can dodge questions about the ongoing negative effects of the British empire. They could also be hedging against future negative reputation, e.g. if I call myself an EA but then someone attempts a military coup in the name of EA, I would look bad. By avoiding declaring yourself a group member, you can sometimes avoid your reputation sinking when your chosen group makes bad choices.

Unfortunately, that means that those of us with our reputations on the line are the ones with the most skin in the game to keep people from doing stupid unilateralist things that make everyone in the community look bad. I would prefer it if people would take that big scary step of saying they're an EA or a Christian or a Brit or whatever, and then put in the work to improve their community's reputation. Obviously I'm open to hearing reasons why people shouldn't identify as members of groups.
9 · Linch · 10d
I find the unilateralist's curse [https://www.nickbostrom.com/papers/unilateralist.pdf] a particularly valuable concept to think about. However, I now worry that "unilateralist" is an easy label to tack on, and that whether a particular action counts as unilateralist is sensitive to small changes in framing. Consider the following hypothetical situations:

1. Company policy vs. team discretion. Alice is a researcher on a team of scientists at a large biomedical company. While working on the development of an HIV vaccine, the team accidentally creates an air-transmissible variant of HIV. The scientists must decide whether to share their discovery with the rest of the company, knowing that leaks may exist, and that the knowledge may be used to create a devastating biological weapon, but also that it could help those who hope to develop defenses against such weapons, including other teams within the same company. Most of the team thinks they should keep it quiet, but company policy is strict that such information must be shared with the rest of the company to maintain the culture of open collaboration. Alice thinks the rest of the team should either share this information or quit. Eventually, she tells her skip manager her concerns, who relays them to the rest of the company in a company-open document. Alice does not know if this information ever leaked past the company.

2. Stan and the bomb. Stan is an officer in charge of overseeing a new early-warning system intended to detect (nuclear) intercontinental ballistic missiles from an enemy country. The warning system appears to have detected five missiles heading towards his homeland, quickly passing through 30 layers of verification. Stan suspects this is a false alarm, but is not sure. Military instructions are clear that such warnings must immediately be relayed upwards. Stan decides not to relay the message to his superiors, on the grounds that it is likely a false alarm.
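To make the curse itself concrete before turning to the framing problem, here is a minimal simulation (my own sketch, not from the post or from Bostrom's paper): N agents each independently form a noisy estimate of a risky action's value, and the action happens if any one of them judges it positive. All parameter values are illustrative.

```python
# Minimal sketch of the unilateralist's curse: the action is taken if ANY
# agent's noisy estimate of its value is positive. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def p_action(true_value: float, n_agents: int, noise_sd: float = 1.0,
             trials: int = 100_000) -> float:
    """Estimated probability that at least one agent takes the action."""
    estimates = true_value + noise_sd * rng.standard_normal((trials, n_agents))
    return (estimates > 0).any(axis=1).mean()

# Even for a genuinely harmful action (true value -1), the probability that
# someone acts unilaterally rises sharply with the number of agents:
for n in (1, 5, 25):
    print(f"{n:>2} agents: {p_action(-1.0, n):.3f}")  # ~0.16, ~0.58, ~0.99
```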
9 · Khorton · 12d
I'm 60% sure that LessWrong people use the term "Moloch" in almost exactly the same way as social justice people use the term "kyriarchy" (or "capitalist cis-hetero patriarchy"). I might program my browser to replace "Moloch" with "kyriarchy". Might make Christian Twitter confusing, though.
7 · EdoArad · 14d
Basic research vs. applied research:
1. If we are at the Hinge of History, it is less reasonable to focus on long-term knowledge-building via basic research; the further we are from the hinge, the more reasonable basic research becomes.
2. If we have identified the most promising causes well, then targeted applied research is promising.
4 · Ramiro · 10d
Philosophers and economists seem to disagree about the marginalist/arbitrage argument [https://www.sciencedirect.com/topics/economics-econometrics-and-finance/social-discount-rate] that a social discount rate should equal (or at least be strongly influenced by) the marginal social opportunity cost of capital. I wonder whether there's any discussion of this topic in the context of negative interest rates. For example, would defenders of that argument accept that, as those opportunity costs decline, so should the SDR?
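For reference, the two standard positions in that disagreement can each be written in one line. This is textbook material with conventionally defined symbols, not something from the post itself:

```latex
% Arbitrage/opportunity-cost view: the SDR should track the marginal rate
% of return on capital r.
\mathrm{SDR} = r
% Ramsey (consumption-side) view: pure time preference \delta, elasticity
% of marginal utility \eta, consumption growth rate g.
\mathrm{SDR} = \delta + \eta g
```

On a strict arbitrage reading, r < 0 drags the SDR toward or below zero, which is exactly the pressure the question points at.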

Week Of Sunday, January 5th 2020

Shortform [Beta]
23 · Pablo_Stafforini · 18d
I don't know if there is a designated place to leave comments about the EA Forum, so for the time being I'm posting them here. I think the current homepage [https://forum.effectivealtruism.org/] has a number of problems:

* The 'Community Favorites' section keeps listing the same posts over and over again. I don't see the point of having a prominent list of favorite posts on the homepage that changes so little. I suggest expanding the list considerably so that regular visitors can expect to see novel posts every time they visit the homepage.

* [Note: in light of Oli's comment [https://forum.effectivealtruism.org/posts/HfSfZ2ekXaryadBrB/pablo_stafforini-s-shortform#JwhC4dJGvAaiz2ZSB] below, I'm retracting this bullet point.] The 'Latest Posts' section sorts posts neither by karma nor by date; rather, it seems to rely on a hybrid sorting algorithm. I don't think this is useful: as someone who checks the homepage regularly, I want to be able to easily see what the latest posts are, so that when I go down the list and eventually come across a post I have already seen, I can conclude that I have seen all the posts after it as well. With the current sorting algorithm, there's no way for me to ensure that my browsing session has exhausted all the posts published since the previous session.

* I find it hard to understand the meaning of the 'Community' category. The description says that it consists of "posts with topical content or which relate to the EA community itself". But that description also draws a contrast with "Frontpage posts, which are selected by moderators as especially interesting or useful to people with interest in doing good effectively." This contrast suggests that 'community' posts are simply posts that haven't been curated, as opposed to posts with a focus on the EA community. In other words, there are two separate distinctions here: that between curated and non-curated posts, and that between posts about the EA community and other posts.
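For readers wondering what such a hybrid ranking typically looks like: the Forum's exact formula isn't given here, but a common karma-plus-recency scheme in the style of Hacker News is sketched below. The `gravity` exponent and the `+2` age offset are illustrative assumptions, not the Forum's actual parameters.

```python
# Hedged sketch of a karma-plus-recency ("magic") ranking, Hacker News
# style; NOT the EA Forum's actual formula.
from datetime import datetime

def magic_score(karma: int, posted_at: datetime, now: datetime,
                gravity: float = 1.8) -> float:
    """Karma pushes a post up the list; age steadily drags it down."""
    age_hours = (now - posted_at).total_seconds() / 3600.0
    return karma / (age_hours + 2.0) ** gravity
```

Sorting by such a score interleaves fresh low-karma posts with older high-karma ones, which is why the resulting list follows neither pure karma order nor pure date order.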
7 · Max_Daniel · 19d
[Some of my tentative and uncertain views on AI governance, and different ways of having impact in that area. Excerpts, not in order, from things I wrote in a recent email discussion, so not a coherent text.]

1. In scenarios where OpenAI, DeepMind, etc. become key actors because they develop TAI capabilities, our theory of impact will rely on a combination of affecting (a) 'structure' and (b) 'content'. By (a) I roughly mean what the relevant decision-making mechanisms look like, irrespective of the specific goals and resources of the actors the mechanism consists of; e.g., whether some key AI lab is a nonprofit or a publicly traded company; who would decide, by what rules or voting scheme, how windfall profits would be redistributed; etc. By (b) I mean something like how much the CEO of a key firm, or their advisors, care about the long-term future. I can see why relying mostly on (b) is attractive; e.g., it's arguably more tractable. However, some EA thinking (mostly from the Bay Area / the rationalist community, to be honest) strikes me as focusing on (b) for reasons that seem ahistoric or otherwise dubious to me. So I don't feel convinced that what I perceive to be a very stark focus on (b) is warranted. I think that figuring out whether there are viable strategies that rely more on (a) is better done from within institutions that have no ties with key TAI actors, and might be best done by people who don't quite match the profile of the typical new EA who got excited about Superintelligence or HPMOR. Overall, I think that making more academic research happen in broadly "policy-relevant" fields would be a decent strategy if one ultimately wanted to increase the amount of thinking on type-(a) theories of impact.

2. What's the theory of impact if TAI happens in more than 20 years? More than 50 years? I think it's not obvious whether it's worth spending any current resources on influencing such scenarios (I think they are more likely, but we have much less leverage). Ho
3 · EdoArad · 18d
I think that some causes may have increasing marginal utility. Specifically, I think this may be true of some types of research that are expected to generate insights about their own domain. Testing another idea for a cancer treatment probably has decreasing marginal utility (because the low-hanging fruit gets picked first), but basic research in genetics may have increasing marginal utility (because even if others are already working on the best approaches, you could still improve their productivity by giving them further insights). This is not true if progress in a field relies on progressing along a single "dimension" (say, a specific research direction that everyone attempts), or if researchers in that field can easily and productively change their projects and expertise. It is true if there are multiple dimensions available, and progress along a different dimension yields insights for others to use.
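A toy illustration of how such spillovers can produce increasing marginal utility (my own formalization, not from the post): suppose each new basic-research insight multiplies every other researcher's productivity by a fixed factor.

```latex
% Assume each insight raises the productivity of all subsequent work by a
% factor (1 + g), with g > 0. Then the value unlocked by the k-th insight is
V_k \propto (1 + g)^{\,k - 1},
% which is increasing in k: each additional insight is worth more than the
% last, the opposite of the usual low-hanging-fruit dynamic.
```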
2 · Misha_Yagudin · 19d
Morgan Kelly, The Standard Errors of Persistence [http://dx.doi.org/10.2139/ssrn.3398303]:

> A large literature on persistence finds that many modern outcomes strongly reflect characteristics of the same places in the distant past. However, alongside unusually high t statistics, these regressions display severe spatial autocorrelation in residuals, and the purpose of this paper is to examine whether these two properties might be connected. We start by running artificial regressions where both variables are spatial noise and find that, even for modest ranges of spatial correlation between points, t statistics become severely inflated leading to significance levels that are in error by several orders of magnitude. We analyse 27 persistence studies in leading journals and find that in most cases if we replace the main explanatory variable with spatial noise the fit of the regression commonly improves; and if we replace the dependent variable with spatial noise, the persistence variable can still explain it at high significance levels. We can predict in advance which persistence results might be the outcome of fitting spatial noise from the degree of spatial autocorrelation in their residuals measured by a standard Moran statistic. Our findings suggest that the results of persistence studies, and of spatial regressions more generally, might be treated with some caution in the absence of reported Moran statistics and noise simulations.
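The paper's placebo exercise is easy to reproduce in spirit. Below is a one-dimensional analogue (a sketch under simplifying assumptions, not the authors' code): two independently generated autocorrelated noise series, regressed on each other, come out "significant" far more often than the nominal 5%.

```python
# One-dimensional analogue of Kelly's placebo exercise: regress two
# independent AR(1) noise series on each other and count spuriously
# "significant" slopes. A sketch, not the paper's actual method or code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def smooth_noise(n: int, rho: float) -> np.ndarray:
    """AR(1) noise; rho plays the role of the spatial correlation range."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + rng.standard_normal()
    return x

n_obs, rho, n_sims = 500, 0.95, 200
significant = 0
for _ in range(n_sims):
    y, x = smooth_noise(n_obs, rho), smooth_noise(n_obs, rho)  # unrelated
    fit = stats.linregress(x, y)
    if abs(fit.slope / fit.stderr) > 1.96:  # nominal 5% threshold
        significant += 1

print(f"spurious significance rate: {significant / n_sims:.2f}")  # >> 0.05
```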
2 · alexrjl · 21d
Discounting the future consequences of welfare-producing actions:
* There's almost unanimous agreement among moral philosophers that welfare itself should not be discounted in the future.
* However, many systems in the world are chaotic, and it's very uncontroversial that in consequentialist theories the value of an action should depend on the expected utility it produces.
* Is it possible that the rational conclusion is to exponentially discount future welfare as a way of accounting for the exponential sensitivity to initial conditions exhibited by the long-term consequences of one's actions?
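One way to formalize that suggestion (my own sketch, not the author's): in a chaotic system, perturbations grow at a rate set by the largest Lyapunov exponent λ, so the portion of future welfare predictably attributable to an action may decay at a matching exponential rate.

```latex
% Sensitivity to initial conditions: a perturbation \delta x(0) grows as
|\delta x(t)| \approx e^{\lambda t}\, |\delta x(0)|,
% so predictability of an action's consequences decays like e^{-\lambda t}.
% The expected welfare attributable to the action then shrinks roughly as
\mathbb{E}[\Delta U(t)] \propto e^{-\lambda t},
% which is formally equivalent to discounting future welfare at rate \lambda
% while placing no intrinsic discount on welfare itself.
```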
