All of BenMillwood's Comments + Replies

What is meant by 'infrastructure' in EA?

I think EA uses the word in a basically standard way. I imagine there being helpful things to say about "what do we mean by funding infrastructure" or "what kind of infrastructure is the EA Infrastructure Fund meaning to support", but I don't know that there's anything to say in a more general context than that.

Launching SoGive Grants

Why do you think it's valuable? I don't think we have this norm already, and it's not immediately obvious to me how it would change my behaviour.

2Tony.Sena4d
I agree that this norm does not largely exist at the moment; however, I would argue that there is a trend towards greater transparency in philanthropy (see Charity Navigator [https://www.charitynavigator.org/index.cfm?bay=content.view&cpid=1093] evaluations of charities and many other charity evaluator criteria that include transparency). I think part of this trend is driven by the EA movement itself and the radical transparency exhibited by GiveWell and Open Philanthropy, particularly when compared to older funders. I would argue that transparency is fundamentally and intrinsically important in a number of ways. I will share a few thoughts here, but I don’t believe this is an exhaustive list, and it would require much more time to flesh out fully:
  • Power Asymmetries: I would argue that the redistribution of funds is essentially the redistribution of power. Resources (money) provide implicit and explicit power, and consequently, the distribution of resources through philanthropy or any type of spending is power distribution. For this reason, philanthropic spending can exacerbate power asymmetries, and it can create new power asymmetries. This power may manifest through which voices and ideas are given a platform, which cause-priority areas are perpetuated or otherwise, and who is given the power to make these determinations. In the Global Health and Development space, we have seen a very large push towards greater transparency in the movement towards ‘decolonizing development’. A great example of this can be seen in India [https://idronline.org/article/fundraising-and-communications/fcra-the-background-the-myths-and-the-facts/], where the Supreme Court of India upheld laws that limit the ability of foreign donors to give in India. This is a tangible example of so-called ‘beneficiaries’ pushing back against the power involved with philanthropic giving and seeking to exercise greater control over the source
Bad Omens in Current Community Building

I don't think we have a single "landing page" for all the needs of the community, but I'd recommend applying for relevant jobs, getting career advice, going to an EA Global conference, or figuring out which local community groups are near you and asking them for advice.

Bad Omens in Current Community Building

I agree with paragraphs 1 and 2 and disagree with paragraph 3 :)

That is: I agree longtermism and x-risk are much more difficult to introduce to the general population. They're substantially farther from the status quo and have weirder and more counterintuitive implications.

However, we don't choose what to talk about by how palatable it is. We must be guided by what's true, and what's most important. Unfortunately, we live in a world where what's palatable and what's true need not align.

To be clear, if you think global development is more important than x-ri... (read more)

Against immortality?

I don't buy the asymmetry of your scope argument. It feels very possible that totalitarian lock-in could have billions of lives at stake too, and cause a similar quantity of premature deaths.

2AllAmericanBreakfast21d
I think Matt’s on the right track here. Treating “immortal dictators” as a separate scenario from “billions of lives lost to an immortal dictator” smacks of double-counting. Really, we’re asking if immortality will tend to save or lose lives on net, or to improve or worsen QoL on net. We can then compare the possible causes of lives lost/worsened vs gained/bettered: immortal dictators, or perhaps immortal saints; saved lives from life extension; lives less tainted by fear of death and mourning; lives more free to pursue many paths; alignment of individual self-interest with the outcome of the long-term future; the persistent challenge of hyperbolic discounting; the question of how to provide child rearing experiences in a crowded world with a death rate close to zero; the possible need to colonize the stars to make more room for an immortal civilization; the attendant strife that such a diaspora may experience. When I just make a list of stuff in this manner, no individual item jumps out at me as particularly salient, but the collection seems to point in the direction of immortality being good when confined to Earth, and then being submerged into the larger question of whether a very large and interplanetary human presence would be good. I think that this argument sort of favors a more near-term reach for immortality. The smaller and more geographically concentrated the human population is by the time it’s immortal, the better able it is to coordinate and plan for interplanetary growth. If humanity spreads to the stars, then coordination ability declines. If immortality is bad in conjunction with interplanetary civilization, the horse is out of the barn.
5Matt Brooks21d
Of course, it would, but if you're reducing the risk of totalitarian lock-in from 0.4% to 0.39% (obviously made up numbers) by waiting 200 years I would think that's a mistake that costs billions of lives.
Free-spending EA might be a big problem for optics and epistemics

apologies if this was obvious from the responses in some other way, but did you consider that the person who gave a 9 might have had the scale backwards, i.e. been thinking of 1 as the maximally uncomfortable score?

1Trevor Levin1mo
Hmm, this does seem possible, and maybe more than 50% likely. Reasons to think it might not be the case are that I know this person was fairly new to EA and not a longtermist, and that somebody asked a clarifying question about this question which I think I answered in a clarifying way, though that may not have clarified the direction of the scale. I don't know!
Critique of OpenPhil's macroeconomic policy advocacy

I don't understand what you think Holden / OpenPhil's bias is. I can see why they might have happened to be wrong, but I don't see what in their process makes them systematically wrong in a particular way.

I also think it's generally reasonable to form expectations about who in an expert disagreement is correct using heuristics that don't directly engage with the content of the arguments. Such heuristics, again, can go wrong, but I think they still carry information, and I think we often have to ultimately rely on them when there are just too many issues to investigate them all.

8artifex2mo
It’s not the kind of bias you’re thinking of; not a cognitive or epistemic bias, that is. It’s dovish bias, as in a bias to favor expansionary policy. The non-biased alternative would be a nondiscretionary target that does not systematically favor either expansionary or contractionary policy. (If we want to talk about epistemic bias, and if I allow myself to be more provocative, there could also be a different kind of bias, social desirability: “you kind of value all people equally and you care a lot about how the working class is doing and what their bargaining power is” sounds good and is the kind of language you expect to find in a political party platform. This was in an interview and in a response prompted by Ezra Klein, but just seeing language like that used could be a red flag.) Yes, but:
  1. Not when making high-risk grants, where the value comes from your inside-view evaluations of the arguments for each grant (or category of grants, if you’re funding multiple people working on the same or similar things but you have evaluated these things for yourself in sufficient detail to be confident that the grants are overall worth doing).
  2. Not as a substitute for directly engaging with the content of the arguments, but in addition to doing that and as a way to guide your engagement with the arguments (to help you see the context and know what arguments to look at). Unless you really don’t have the time to engage with the arguments, but there are a lot of hours in a year and this is kind of Open Philanthropy’s job.
  3. Never while framing as a good thing the fact that you’re deferring to experts instead of engaging with the arguments yourself, never while implying that there would be something wrong about engaging with the arguments yourself instead of (or in addition to) deferring to experts.
Announcing Alvea—An EA COVID Vaccine Project

(in case anyone else was confused, this was a reply to a now-deleted comment)

Simplify EA Pitches to "Holy Shit, X-Risk"

I don't know. Partly I think that some of those people are working on something that's also important and neglected, and they should keep working on it, and need not switch.

Simplify EA Pitches to "Holy Shit, X-Risk"

I think to the extent you are trying to draw the focus away from longtermist philosophical arguments when advocating for people to work on extinction risk reduction, that seems like a perfectly reasonable thing to suggest (though I'm unsure which side of the fence I'm on).

But I don't want people casually equivocating between x-risk reduction and EA, relegating the rest of the community to a footnote.

  • I think it's a misleading depiction of the in-practice composition of the community,
  • I think it's unfair to the people who aren't convinced by x-risk arguments,
  • I think it could actually just make us worse at finding the right answers to cause prioritization questions.
Simplify EA Pitches to "Holy Shit, X-Risk"

It's not enough to have an important problem: you need to be reasonably persuaded that there's a good plan for actually making the problem better, for making that 1% lower. It's not a universal point of view among people in the field that all or even most research that purports to be AI alignment or safety research is actually decreasing the probability of bad outcomes. Indeed, in both AI and bio it's even worse than that: many people believe that incautious action will make things substantially worse, and there's no easy road to identifying which routes are both safe... (read more)

2Neel Nanda3mo
Fair point re tractability. What argument do you think works on people who already think they're working on important and neglected problems? I can't think of any argument that doesn't just boil down to one of those
Simplify EA Pitches to "Holy Shit, X-Risk"

My main criticism of this post is that it seems to implicitly suggest that "the core action relevant points of EA" are "work on AI or bio", and doesn't seem to acknowledge that a lot of people don't have that as their bottom line. I think it's reasonable to believe that they're wrong and you're right, but:

  • I think there's a lot that goes into deciding which people are correct on this, and only saying "AI x-risk and bio x-risk are really important" is missing a bunch of stuff that feels pretty essential to my beliefs that x-risk is the best thing to work o
... (read more)
2Neel Nanda3mo
Can you say more about what you mean by this? To me, 'there's a 1% chance of extinction in my lifetime from a problem that fewer than 500 people worldwide are working on' feels totally sufficient
9Neel Nanda3mo
This is a fair criticism! My short answer is that, as I perceive it, most people writing new EA pitches, designing fellowship curricula, giving EA career advice, etc., are longtermists, and give pitches optimised for producing more people working on important longtermist stuff. And this post was a reaction to what I perceive as a failure in such pitches by focusing on moral philosophy. And I'm not really trying to engage with the broader question of whether this is a problem in the EA movement. Now that OpenPhil is planning on funding neartermist EA movement building, maybe this'll change? Personally, I'm not really a longtermist, but I think it's way more important to get people working on AI/bio stuff from a neartermist lens, so I'm pretty OK with optimising my outreach for producing more AI and bio people. Though I'd be fine with low-cost ways to also mention 'and by the way, global health and animal welfare are also things some EAs care about, here's how to find the relevant people and communities'.
Long-Term Future Fund: May 2021 grant recommendations

It's been about 7 months since this writeup. Did the Survival and Flourishing Fund make a decision on funding NOVID?

2Jonas Vollmer5mo
Based on the publicly available information on the SFF website [https://survivalandflourishing.fund/], I guess the answer is 'no', but not sure.
Longtermism in 1888: fermi estimate of heaven’s size.

Pointing out more weirdnesses may by now be unnecessary to make the point, but I can't resist: the estimate also seems to equivocate between "number of people alive at any moment" and "number of people in each generation", as if the 900 million population were a single generation that fully replaced itself every 31.125 years. Numerically this only impacts the result by a factor of 3 or so, but it's perhaps another reason not to take it as a serious attempt :)

Is EA compatible with technopessimism?

Can you give examples of technopessimists "in the wild"? I'm sure there are plenty of examples of "folk technopessimism" but if you mean something more fleshed-out than that I don't think I've seen it expressed or argued for a lot. (That said, I'm not very widely-read, so I'm sure there's lots of stuff out there I don't hear about.)

"if AI has moral status, then AI helping its replicas grow or share pleasant experiences is morally valuable stuff". Sure, but I think the claim is that "most" AI won't be interested in doing that, and will pursue some other goal instead that doesn't really involve helping anyone.

1acylhalide7mo
Very interesting point, I have some thoughts on it, let me try. First some (sorta obvious) background: Animals were programmed to help each other because species that did not have this trait were more likely to die. Then came humans, who not only were programmed to help each other but also used high-level thinking to find ways to help each other. Humans are likely to continue helping each other even once this trait stops being essential to our survival. This is not guaranteed though.* It is possible that in early stages, the AI will find its best strategy for survival is to create a lot of clones. This could be on the very same machine, on different machines across the world, on newly built machines in physically secure locations, or even on newly invented forms of hardware (such as those described in https://en.wikipedia.org/wiki/Natural_computing). It is possible that this learned behaviour persists even after it is not essential to survival. Although there is also a more important meta-question imo. Do we care about "beings who help their clones" or do we care about "beings who care about their type surviving, and hence help their clones"? If the AI for instance decides that perfect independent clones are not the best strategy for survival, and instead it should grow like a hydra or a coral - shouldn't we be happy for it to thrive in this manner? A coral-like structure could be one where all the subprocesses run mostly independently, but yet are not fully disconnected from a backbone compute process. This is in some ways how humans grow. Even though each human individual is physically disconnected from other individuals (we don't share body parts), we do share social and learning environments that critically shape our growth. Which makes us a lot closer to a single collective organism than say bacteria. *The reason this will be stable even when it is not essential to survival is simply because there is n
The Cost of Rejection

It's a little aside from your point, but good feedback is not only useful for emotionally managing the rejection -- it's also incredibly valuable information! Consider especially that someone who is applying for a job at your organization may well apply for jobs at other organizations. Telling them what is good or bad with their application will help them improve that process, and make them more likely to find something that is the right fit for them. It could be vital in helping them understand what they need to do to position themselves to be more useful... (read more)

Agree!

In our current hiring round for EA Germany, I'm offering all 26 applicants "personal feedback on request if time allows", and I think it's probably worth my time at least trying to answer as many feedback requests as I can.

I'd encourage other EA recruiters to do the same, especially for candidates who have already done work tests. If you ask someone to spend 2h on an unpaid work test, it seems fair to set aside at least 5 minutes for feedback.

How would you run the Petrov Day game?

Like Sanjay's answer, I think this is a correct diagnosis of a problem, but I think the advertising solution is worse than the problem.

  • A month of harm seems too long to me,
  • I can't think of anything we'd want to advertise on LW that we wouldn't already want to advertise on EAF, and we've chosen "no ads" in that case.
How would you run the Petrov Day game?

I'd like to push the opt-in / opt-out suggestion further, and say that the button should only affect people who have opted in (that is, the button bans all the opted-in players for a day, rather than taking the website down for a day). Or you could imagine running it on another venue than the Forum entirely, that was more focused on these kinds of collaborative social experiments.

I can see an argument that this takes away too much from the game, but in that case I'd lean towards just not running it at all. I think it's a cute idea but I don't think it feel... (read more)

How would you run the Petrov Day game?

I think this correctly identifies a problem (not only is it a bad model for reality, it's also confusing for users IMO). I don't think extra karma points is the right fix, though, since I imagine a lot of people only care about karma insofar as it's a proxy for other people's opinions of their posts, which you can't just give 30 more of :)

(also it's weird inasmuch as karma is a proxy for social trust, whereas nuking people probably lowers your social trust)

Honoring Petrov Day on the EA Forum: 2021

Sure, precommitments are not certain, but they're a way of raising the stakes for yourself (putting more of your reputation on the line) to make it more likely that you'll follow through, and more convincing to other people that this is likely.

In other words: of course you don't have any way to reach probability 0, but you can form intentions and make promises that reduce the probability (I guess technically this is "restructuring your brain"?)

1EricHerboso8mo
This is not how I understand the term. What you're describing is how I would describe the word "commitment". But a "precommitment" is more strict; the idea is that you have to follow through in order to ensure that you can get through a Newcomb's paradox situation. You can use precommitments to take advantage of time-travel shenanigans, to successfully one-box Newcomb, or to ensure that near-copies of you (in the multiverse sense) can work together to achieve things that you otherwise wouldn't. With that said, it may make sense to say that we humans can't really precommit in these kinds of ways. But to the extent that we might be able to, we may want to try, so that if any of these scifi scenarios ever do come up, we'd be able to take advantage of them.
Honoring Petrov Day on the EA Forum: 2021

Yeah, that did occur to me. I think it's more likely that he's telling the truth, and even if he's lying, I think it's worth engaging as if he's sincere, since other people might sincerely believe the same things.

Honoring Petrov Day on the EA Forum: 2021

I downvoted this. I'm not sure if that was an appropriate way to express my views about your comment, but I think you should lift your pledge to second strike, and I think it's bad that you pledged to do so in the first place.

I think one important disanalogy between real nuclear strategy and this game is that there's kind of no reason to press the button, which means that for someone pressing the button, we don't really understand their motives, which makes it less clear that this kind of comment addresses their motives.

Consider that last time LessWrong wa... (read more)

4Arepo8mo
All of this seems consistent with Peter's pledge to second strike being +EV, as long as he's lying.
2Linch8mo
(I've also downvoted Peter's comment when I first read it, for similar reasons).
Cultured meat predictions were overly optimistic

While I think it's useful to have concrete records like this, I would caution against drawing conclusions about the cultured meat community specifically unless we draw a comparison with other fields and find that forecast accuracy is better elsewhere. I'd expect that overoptimistic forecasts are just very common when people evaluate their own work in any field.

The motivated reasoning critique of effective altruism

Another two examples off the top of my head:

Three charitable recommendations for COVID-19 in India

GiveIndia says donations from India or the US are tax-deductible.

Milaap says they have tax benefits for donations, but I couldn't find a more specific statement, so I guess it's just in India?

Anyone know a way to donate with tax deduction from other jurisdictions? If 0.75x - 2x is accurate, it seems like for some donors that could make the difference.

(Siobhan's comment elsewhere here suggests that Canadian donors might want to talk to RCForward about this).

1Tejas Subramaniam1y
Hi! So the Swasti Oxygen for All fundraiser does not offer a tax deduction for the United States (I asked them recently). Swasth’s Oxygen for India fundraiser offers tax deductions for donations above $1,000 from the United States (the details are specified in the link). We are happy to check about other countries!
1manyag1y
Hello! Unfortunately I don't think they have a list for all the countries where they're tax exempt, but if you have a specific country in mind, I can try and check for you!
AMA: Toby Ord @ EA Global: Reconnect

You've previously spoken about the need to reach "existential security" -- in order to believe the future is long and large, we need to believe that existential risk per year will eventually drop very close to zero. What are the best reasons for believing this can happen, and how convincing do they seem to you? Do you think that working on existential risk reduction or longtermist ideas would still be worthwhile for someone who believed existential security was very unlikely?

5Denise_Melchin1y
+1, very interested in this. I didn't find the reasons in The Precipice that compelling (they weren't detailed enough), so I'd be curious for more.
Why EA groups should not use “Effective Altruism” in their name.

It seems plausible that reasonable people might disagree on whether student groups on the whole would benefit from being more or less conforming to the EA consensus on things. One person's "value drift" might be another person's "conceptual innovation / development".

On balance I find it more likely that an EA group would be co-opted in the way you describe than that an EA group would feel limited from doing something effective because they were worried it was too "off-brand", but it seems worth mentioning the latter as a possibility.

Why EA groups should not use “Effective Altruism” in their name.

I think this post doesn't explicitly recognize a (to me) important upside of doing this, which applies to doing anything that other people aren't doing: potential information value.

This post exists because people tried something different and were thoughtful about the results, and now potentially many other people in similar situations can benefit from the knowledge of how it went. On the other hand, if you try it and it's bad, you can write a post about what difficulties you encountered so that other people can anticipate and avoid them better.

By contrast, naming your group Effective Altruism Erasmus wouldn't have led to any new insights about group naming.

Deference for Bayesians

Bluntly I think a prior of 98% is extremely unreasonable. I think that someone who had thoroughly studied the theory, all credible counterarguments against it, had long discussions about it with experts who disagreed, etc. could reasonably come to a belief that strong. An amateur who has undertaken a simplistic study of the basic elements of the situation can't IMO reasonably conclude that all the rest of that thought and debate would have a <2% chance of changing their mind.

Even in an extremely empirically grounded and verifiable theory like physics, f... (read more)

4John G. Halstead1y
This is maybe getting too bogged down in the object-level. The general point is that if you have a confident prior, you are not going to update on uncertain observational evidence very much. My argument in the main post is that ignoring your prior entirely is clearly not correct, and that is driving a lot of the mistaken opinions I outline. Tangentially, I stand by my position on the object-level - I actually think that 98% is too low! For any randomly selected good I can think of, I would expect a price floor to reduce demand for it in >99% of cases. Common sense aside... The only theoretical reason this might not be true is if the market for labour is monopsonistic. That is just obviously not the case. There is also evidence from the immigration literature which suggests that native wages are barely affected by a massive influx of low-skilled labour, which implies a near-horizontal demand curve. There is also the point that if you are slightly Keynesian, you think that involuntary unemployment is caused by the failure of wages to adjust downward; legally forbidding them from doing this must cause unemployment.
Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected

I agree with Halstead that this post seems to ignore the upsides of creating more humans. If you, like me, subscribe to a totalist population ethics, then each additional person who enjoys life, lives richly, loves, expresses themselves creatively, etc. -- all of these things make for a better world. (That said, I think that improving the lives of existing people is currently a better way to achieve that than creating more -- but I wouldn't say that creating more is wrong).

Moreover, I think this post misses the instrumental value of people, too. To underst... (read more)

3RafaelF1y
It is indeed a very tricky question. Of course, there is a chance that each newly born child becomes a climate researcher, politician, social worker etc. - but what are the odds? And do they outweigh, as you mentioned, all the "bad" (for lack of a better word that encompasses suffering and future issues) points? My personal view on this, as I mentioned in my reply to Larks, is that, until a certain point (namely the point where there is neither conflict about scarce resources within human society nor intense suffering caused by human society to other sentient beings), each additional individual probably has a net-positive happiness/suffering balance sheet. Once the population has reached a certain size, however, this may tip. Total carbon emissions are a direct function of (number of emitters) * (emission levels). Total factory-farmed animal suffering is a direct function of (number of consumers) * (amount of factory-farmed meat consumed). The average consumer nowadays consumes more than 40 kg of meat p.a. - that means that several sentient beings spend almost all their lives in intense suffering because of one average consumer. Is that a net-positive balance sheet? Either way, my main point is about voluntary pregnancy avoidance and not about forced population reduction through whatever coercive means. If providing access to family planning counselling, women's education and empowerment means, and to contraceptives is so cheap - and at the same time links free choice with significant net-positive effects on several EA-aligned cause areas - what reasons would support not helping close that unmet need?
Population Size/Growth & Reproductive Choice: Highly effective, synergetic & neglected

The only place where births per woman are not close to 2 is sub-Saharan Africa. Thus, the only place where family planning could reduce emissions is sub-Saharan Africa, which is currently a tiny fraction of emissions.

This is not literally true: family planning can reduce emissions in the developed world if the desired births per woman is even lower than the actual births per woman. But I don't dispute the substance of the argument: it seems relatively difficult to claim that there's a big unmet need for contraceptives elsewhere, and that should determine what estimates we use for emissions.

At least in the US women have been having fewer children than they want for many decades:

As a result, the gap between the number of children that women say they want to have (2.7) and the number of children they will probably actually have (1.8) has risen to the highest level in 40 years.

Deference for Bayesians

I buy two of your examples: in the case of masks, it seems clear now that the experts were wrong before, and in "First doses first", you present some new evidence that the priors were right.

On nutrition and lockdowns, you haven't convinced me that the point of view you're defending isn't the one that deference would arrive at anyway: it seems to me like the expert consensus is that lockdowns work and most nutritional fads are ignorable.

On minimum wage and alcohol during pregnancy, you've presented a conflict between evidence and priors, but I don't feel li... (read more)

1Matthew Tromp1y
I think economics is especially prone to this kind of "It's more complicated than that" issue. The idea that firms will reduce employment in response to higher minimum wage places a great deal of faith in the efficiency of markets. There are plenty of ways that an increased cost of labor wouldn't lead to lower demand: there might be resistance to firing employees because of social pressures; institutions may simply remain stuck in thinking that a certain number of people are necessary to do all the work that needs to be done, and be resistant to changing those attitudes. Consider the phenomenon of "bullshit jobs". If you've ever worked in an office, in the public or private sector, you've probably noticed that a huge number of employees seem to do little of substance. Even as someone who worked a minimum wage job in a department store, many of my co-workers seemed to do little if anything of use, and yet no effort was made to ensure that everyone was being productive. If anything, I would argue that the idea that markets are, by default, perfectly efficient (or close to perfectly efficient) goes against the lived experience of me and the people I know, and my prior is to disbelieve arguments predicated on it, unless there is some specific evidence or particularly good reason to think that the market would be very efficient in a specific case (such as securities trading, where there are a huge number of smart, well-qualified people working very hard to exploit any inefficiencies to make absurd amounts of money)
2John G. Halstead1y
Hello, my argument was that there are certain groups of experts you can ignore or put less weight on because they have the wrong epistemology. I agree that the median expert might have got some of these cases right. (I'm not sure that's true in the case of nutrition, however.) The point in all these cases re priors is that one should have a very strong prior, which will not be shifted much by flawed empirical research. One should have a strong prior that the efficacy of the vaccine won't drop off massively for the over-65s even before this is studied. One can see the priors vs evidence case for the minimum wage more formally using Bayes' theorem. Suppose my prior that minimum wages reduce demand for labour is 98%, which is reasonable. I then learn that one observational study has found that they have no effect on demand for labour. Given the flaws in empirical research, let's say there is a 30% chance of a study finding no effect conditional on there being an effect. Given this, we might put a symmetrical probability on a study finding no effect conditional on there being no effect - a 70% chance of a null result if minimum wages in fact have no effect. Then my posterior is (.3*.98)/(.3*.98+.7*.02) = 95.5%. So I am still very sure that minimum wages reduce demand for labour even if there is one study showing the contrary. FWIW, my reading of the evidence is that most studies do find an effect on demand for labour, so after assimilating it all, one would probably end up where one's prior was. This is why the value of information of research into the minimum wage is so low. On drinking in pregnancy, I don't think this is driven by people's view of acceptable risk, but rather by a myopic empiricist view of the world. Oster's book is the go-to for data-driven parents and she claims that small amounts of alcohol have no effect, not that it has a small effect but is worth the risk. (Incidentally, the latter claim is also clearly false - it obviously isn't worth the risk.) On your
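(Spelling out the arithmetic above: writing "effect" for "minimum wages reduce demand for labour" and "null" for "a study finds no effect", Bayes' theorem with the stated numbers gives

$$P(\text{effect}\mid\text{null}) = \frac{0.3 \times 0.98}{0.3 \times 0.98 + 0.7 \times 0.02} = \frac{0.294}{0.308} \approx 95.5\%$$

which matches the posterior quoted in the reply.)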
Where I Am Donating in 2016

I don't know if this meets all the details, but it seems like it might get there: Singapore restaurant will be the first ever to serve lab-grown chicken (for $23)

BenMillwood's Shortform

Hmm, I was going to mention mission hedging as the flipside of this, but then noticed the first reference I found was written by you :P

For other interested readers, mission hedging is where you do the opposite of this and invest in the thing you're trying to prevent -- invest in tobacco companies as an anti-smoking campaigner, invest in coal industry as a climate change campaigner, etc. The idea being that if those industries start doing really well for whatever reason, your investment will rise, giving you extra money to fund your countermeasures.

I'm sure... (read more)
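To make the payoff logic concrete, here's a minimal sketch (not from the original comment; the numbers and the portfolio_value helper are hypothetical) showing how a mission hedger's available funding moves with the industry they oppose:

```python
# Minimal sketch of mission hedging payoffs; all figures are hypothetical.
# The campaigner holds a stake in the industry they are trying to curtail,
# so their war chest is largest exactly when the threat grows.

def portfolio_value(initial: float, industry_return: float) -> float:
    """Value of a stake in the opposed industry after one period."""
    return initial * (1 + industry_return)

scenarios = {
    "industry booms (countermeasures most needed)": 0.50,
    "industry flat": 0.00,
    "industry declines (mission already succeeding)": -0.30,
}

for name, annual_return in scenarios.items():
    funds = portfolio_value(100_000, annual_return)
    print(f"{name}: ${funds:,.0f} available to fund countermeasures")
```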

2Hauke Hillebrandt2y
I think these strategies can actually be combined: A patient philanthropist sets up their endowment according to mission hedging principles. For instance, someone wanting to hedge against AI risks could invest in a (leveraged) AI FAANG+ ETF (https://c5f7b13c-075d-4d98-a100-59dd831bd417.filesusr.com/ugd/c95fca_c71a831d5c7643a7b28a7ba7367a3ab3.pdf), then when AI seems more capable and risky and the market is up, they sell and buy shorts, then donate the appreciated assets to fund advocacy to regulate AI. I think this might work better for bigger donors. Like, this got me thinking: https://www.vox.com/recode/2020/10/20/21523492/future-forward-super-pac-dustin-moskovitz-silicon-valley “We can push the odds of victory up significantly—from 23% to 35-55%—by blitzing the airwaves in the final two weeks.” https://www.predictit.org/markets/detail/6788/Which-party-will-win-the-US-Senate-election-in-Texas-in-2020
BenMillwood's Shortform

I don't buy your counterargument exactly. The market is broadly efficient with respect to public information. If you have private information (e.g. that you plan to mount a lobbying campaign in the near future; or private information about your own effectiveness at lobbying) then you have a material advantage, so I think it's possible to make money this way. (Trading based on private information is sometimes illegal, but sometimes not, depending on what the information is and why you have it, and which jurisdiction you're in. Trading based on a belief that... (read more)

2Hauke Hillebrandt2y
Agreed, but I don't think there's a big market inefficiency here with risk-adjusted above-market-rate returns. Of course, if you do research to create private information then there should be a return to that research. True, but I've heard that, normally, if I lobby in the US for an outcome and I short the stock about which I am lobbying, I have not violated any law unless I am a fiduciary or agent of the company in question. Also see https://www.forbes.com/sites/realspin/2014/04/24/its-perfectly-fine-for-herbalife-short-sellers-to-lobby-the-government/#95b274610256 I really like this, but... This seems to be why people have a knee-jerk reaction against it.
Objections to Value-Alignment between Effective Altruists

Here are a couple of interpretations of value alignment:

  • A pretty tame interpretation of "value-aligned" is "also wants to do good using reason and evidence". In this sense, distinguishing between value-aligned and non-aligned hires is basically distinguishing between people who are motivated by the cause and people who are motivated by the salary or the prestige or similar. It seems relatively uncontroversial that you'd want to care about this kind of alignment, and I don't think it reduces our capacity for dissent: indeed peo
... (read more)
I think your claim is not that "all value-alignment is bad" but rather "when EAs talk about value-alignment, they're talking about something much more specific and constraining than this tame interpretation".

To attempt an answer on behalf of the author: the author says "an increasingly narrow definition of value-alignment", and I think the idea is that seeking "value-alignment" has got narrower and narrower over time, and further from the goal of wanting to do good.

In my time in EA value alignment has, among some... (read more)

BenMillwood's Shortform

Though betting money is a useful way to make epistemics concrete, sometimes it introduces considerations that tease apart the bet from the outcome and probabilities you actually wanted to discuss. Here's some circumstances when it can be a lot more difficult to get the outcomes you want from a bet:

  • When the value of money changes depending on the different outcomes,
  • When the likelihood of people being able or willing to pay out on bets changes under the different outcomes.

As an example, I saw someone claim that the US was facing civil war. Someone else ... (read more)

5Hauke Hillebrandt2y
Also see: https://marginalrevolution.com/marginalrevolution/2017/08/can-short-apocalypse.html
Ramiro's Shortform

I don't think this is a big concern. When people say "timing the market" they mean acting before the market does. But donating countercyclically means acting after the market does, which is obviously much easier :)

Slate Star Codex, EA, and self-reflection

While I think it's important to understand what Scott means when Scott says eugenics, I think:

a. I'm not certain clarifying that you mean "liberal eugenics" will actually pacify the critics, depending on why they think eugenics is wrong,

b. if there's really two kinds of thing called "eugenics", and one of them has a long history of being practiced by horrible, racist people coercively to further their horrible, racist views, and the other one is just fine, I think Scott is reckless in using the word here. I've never ... (read more)

My response to (b): the word is probably beyond rehabilitation now, but I also think that people ought to be able to have discussions about bioethics without having to clarify their terms every ten seconds. I actually think it is unreasonable of someone to skim someone’s post on something, see a word that looks objectionable, and cast aspersions over their whole worldview as a result.

Reminds me of when I saw a recipe which called for palm sugar. The comments were full of people who were outraged at the inclusion of such an exploitative, unsustainable ingre

... (read more)
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

I'm very motivated to make accurate decisions about when it will be safe for me to see the people I love again. I'm in Hong Kong and they're in the UK, though I'm sure readers will prefer generalizable stuff. Do you have any recommendations about how I can accurately make this judgement, and who or what I should follow to keep it up to date?

2Linch2y
For your second question, within our community, Owain Evans [https://twitter.com/OwainEvans_UK] seems to have good thoughts on the UK. alexrj (on this forum) and Vidur Kapur [https://twitter.com/vidur_kapur] are based in the UK and they both do forecasting pretty actively, so they presumably have reasonable thoughts/internal models about different covid-19 related issues for the UK. To know more, you probably want to follow UK-based domain experts too. I don't know who are the best epidemiologists to follow in the UK, though you can probably figure this out pretty quickly from who Owain/Alex/Vidur listen to. For your first question, I have neither a really good generalizable model or object-level insights to convey at this moment, sorry. I'll update you if something comes up!
I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Do you think people who are bad at forecasting or related skills (e.g. calibration) should try to become mediocre at it? (Do you think people who are mediocre should try to become decent but not great? etc.)

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

As someone with some fuzzy reasons to believe in their own judgement, but little explicit evidence of whether I would be good at forecasting or not, what advice do you have for figuring out if I would be good at it, and how much do you think it's worth focusing on?

How to Fix Private Prisons and Immigration

No one is going to run a prison for free--there has to be some exchange of money (even in public prisons, you must pay the employees). Whether that exchange is moral or not depends on whether it is facilitated by a system that has good consequences.

In the predominant popular consciousness, this is not sufficient for the exchange to be moral. Buying a slave and treating them well is not moral, even if they end up with a happier life than they otherwise would have had. Personally, I'm consequentialist, so in some sense I agree with you, but even the... (read more)

1FCCC2y
No, this consequence was one of my intentions. It was not an afterthought. Not every goal needs to be stated, they can be implied. ...by the convict's own free will. And just because that's the only thing being measured, doesn't mean I'm disregarding everything else. Societal contribution and a person's value are different things: A person who lives separately from society has value. But I don't know how to construct a system that incorporates that value. This is a misunderstanding of the policy. Crimes that occur within prison must be paid for, so the prisons want to protect their inmates. This is a good point. Maybe they should be put in a public prison.
How to Fix Private Prisons and Immigration

As my other comment promised, here's a couple of criticisms of your model on its own terms:

  • "If the best two prisons are equally capable, the profit is zero. I.e. criterion 3 is satisfied." I don't see why we should assume the best two prisons are equally capable? Relatedly, if the profit really is zero, I don't see why any prison would want to participate. But perhaps this is what your remark about zero economic profit is meant to address. I didn't understand that; perhaps you can elaborate.
  • Predicting the total present value o
... (read more)
1FCCC2y
That's correct. Profit=Revenue−Costs. The profit that most people think about is the accounting profit. Accounting profit ignores opportunity costs, which is what you give up by doing what you're doing (bear with me a moment). Economic profit, on the other hand, includes these opportunity costs in the calculation. For example, let's say Tom Cruise quits acting and decides to bake cakes for a living. Even if his cake shop earns him $1M in accounting profit, he's giving up all the money he could earn acting instead. So his economic profit is actually negative. I think you could actually just fix this in the model and still reach the same conclusion (though you'd need extra assumptions to make it work). I really just wanted to introduce my idea for the prison system, rather than make an airtight argument to justify it. ... It is very difficult, but that's exactly what the financial markets do. Yep. If someone is great at running prisons, you want them to do so, regardless of how good they are at predicting the future. Ideally, you would have a system that allows any good expert to thrive, even if they know little about anything outside of their expertise. But companies deal with this all the time. When they're developing a new product, they have to predict which research ventures will be fruitful and which won't be. They have to predict how well products will sell. They have to predict product breakage rates. They have to predict what advertising will work the best. All these things are hard, which is why companies fail. But they are replaced by ones who better succeed at solving all the issues. ... Well, yeah. That's why I say to not measure those things. Only measure the big things. The reason why I mention that later in my post, rather than including it in the core argument, is because you need to "smooth things out" with simplifying assumptions to make logical arguments work. You could actually use my proposal as a secondary, opt-in public education system
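(To put the reply's cake-shop example in one line, assuming a hypothetical $5M in foregone acting income:

$$\text{economic profit} = \text{accounting profit} - \text{opportunity cost} = \$1\text{M} - \$5\text{M} = -\$4\text{M}$$

so the venture destroys value even while its books show a profit.)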
How to Fix Private Prisons and Immigration

My instinctive emotional reaction to this post is that it worries me, because it feels a bit like "purchasing a person", or purchasing their membership in civil society. I think that a common reaction to this kind of idea would be that it contributes to, or at least continues, the commodification and dehumanization of prison inmates, the reduction of people to their financial worth / bottom line (indeed, parts of your analysis explicitly ignore non-monetary aspects of people's interactions with society and the state; as far as I can tell, al... (read more)

5FCCC2y
No one is going to run a prison for free--there has to be some money exchanged (even in public prisons, you must pay the employees). Whether that exchange is moral or not depends on whether it is facilitated by a system that has good consequences. I think a worthy goal is maximizing the societal contribution of any given set of inmates without restricting their freedom after release. This goal is achieved by the system I proposed (a claim supported by my argument in the post). Under this system, I think prisons will treat their inmates far better than they currently do: allowing inmates to get raped probably doesn't help maximize societal contribution. "Commodification" and "dehumanization" don't mean anything unless you can point to their concrete effects. If I've missed some avoidable concrete effect, I will concede it as a good criticism. Not every desirable thing needs to be explicitly stated in the goal of the system: good consequences can be implied. As I mentioned, inmates will probably be treated much better under my system. Another good implicit consequence of satisfying the stated goal is that prisons will pursue a rehabilitative measure if and only if it is in the interests of society (again, you wouldn't want to prevent the theft of a candy bar for a million dollars). I account for the nonmonetary aspects of the crimes. But yes, the rest is ignored. If this ignored amount correlates with the measured factors, this is not really an issue.
How to Fix Private Prisons and Immigration

As an offtopic aside, I'm never sure how to vote on comments like this. I'm glad the comment was made and want to encourage people to make comments like this in future. But, having served its purpose, it's not useful for future readers, so I don't want to sort it to the top of the conversation.

Will protests lead to thousands of coronavirus deaths?

The number of possible pairs of people in a room of n people is about n^2/2, not n factorial. 10^2 is many orders of magnitude smaller than 10! :)

(I think you are making the mistake of multiplying together the contacts from each individual, rather than adding them together)
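(For concreteness, the pair count is the binomial coefficient, not the factorial:

$$\binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^2}{2}, \qquad \binom{10}{2} = 45 \quad\text{vs.}\quad 10! = 3{,}628{,}800$$

so mistaking one for the other overstates the number of contacts by a factor of tens of thousands for a room of 10.)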

2Linch2y
lol I thought that 10! was a surprise, rather than a factorial...