bwr

bwr's Comments

Are there good EA projects for helping with COVID-19?

This is super interesting! Some of the most interesting-sounding links seem broken, though. [Edit: fixed]

Are there good EA projects for helping with COVID-19?
Answer by bwr · Mar 04, 2020

I've seen concern that hospitals will run out of ventilators. Potential intervention: design a cheap machine to pump bag valve masks (which are ubiquitous and apparently do much of the same job as a ventilator, but currently require a human operator). I'd guess you could build something to perform this job for <$50; possibly very quickly if you had a team of competent engineers.

I don't know how you'd get them distributed, though, and I'm skeptical that the FDA would make it easy to sell them to US hospitals. I'd be interested to hear from anyone with experience in the medical device space, or with the constraints on what devices hospitals are allowed to use, on that question.

How effective is household recycling?
Answer by bwr · Aug 29, 2019

Rob Wiblin wrote a post about recycling and garbage disposal last month; you might find what you're looking for there or in the references at the bottom.

Why do you reject negative utilitarianism?
Answer by bwr · Feb 12, 2019

What have you read about it that has caused you to stop considering it, or to overlook it from the start?

This response seems unlikely to be a crux for you, but I don't often see it written explicitly, so I'll mention it anyway in case someone reading hasn't thought of it:

Negative utilitarianism implies that you would prefer to destroy a universe with an unbounded amount of certain positive experience, if that would prevent an infinitesimal chance of one speck of dust getting in someone's eye.

This means that a negative utilitarian will basically always prefer that the universe is destroyed, since there will always (I suspect) be some uncertainty about whether things suffer (1 is not a probability).
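The comparison above can be made concrete with a toy expected-value calculation. This is my own hedged sketch, not anything from the original post: all the numbers (`p`, `speck`, `positive`) are arbitrary stand-ins, and `nu_value` encodes the strict-NU assumption that positive experience carries zero weight.

```python
# Toy sketch of the strict negative-utilitarian comparison.
# All quantities are hypothetical stand-ins for illustration.

def nu_value(positive, suffering):
    """Strict NU: positive experience contributes nothing; only suffering counts."""
    return -suffering

p = 1e-30          # arbitrarily small probability of the dust speck
speck = 1e-9       # tiny amount of suffering from one dust speck
positive = 1e100   # stands in for an "unbounded" amount of positive experience

# Option A: keep the universe. With probability p, a speck of suffering occurs.
ev_keep = p * nu_value(positive, speck) + (1 - p) * nu_value(positive, 0)

# Option B: destroy the universe. No experience of any kind.
ev_destroy = nu_value(0, 0)

print(ev_keep < ev_destroy)  # True: strict NU prefers destruction
```

However small `p` and `speck` are, `ev_keep` is strictly negative while `ev_destroy` is exactly zero, which is the point of the argument: no amount of positive experience can tip the balance back.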

Debate and Effective Altruism: Friends or Foes?

[This comment previously consisted of an objection that misunderstood the point of this post, and was mostly deleted]

This is an interesting topic that I hadn't heard discussed before, and I appreciate learning about these benefits!

While I understand that your goal here was to list arguments in favor of competitive debate, and leave any counterarguments out of the scope, I also think that in doing so you might have fallen short of the stated promise to

do so in the spirit of anti-debate – pointing out the limitations of my arguments where I notice them, and leaving open the possibility that anti-debate could be a superior alternative.

Overall, I think that this aim is incompatible with your decision that

[the disadvantages of competitive debate] – and therefore any all-things-considered conclusions – fall outside of the scope of this post.

unless you plan to write further posts following up on those disadvantages.

In particular, it seems like this post naturally raises the question "and what are the negative impacts of competitive debate on the debaters, if any?", to which it seems like there are some obvious answers, and probably some less obvious ones.

I think that listing benefits on its own is a fine basis for a post; it just doesn't seem to me like "the spirit of anti-debate".

Even non-theists should act as if theism is true
There are no obvious structural connections between knowing correct moral facts and evolutionary benefit.

...

There do not seem to be many candidates for types of mechanism that would guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two species of mechanism stand out.

I haven't read Lukas Gloor's post, so I'm not sure whether this counts as "subjectivism" and therefore is implausible to you, but:

Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit. There might be objective facts about exactly which moral systems provide this benefit, and believing in a useful moral system could help you to enact that moral system.

For example, it could be the case that what is "good" is what benefits your genes without benefiting you personally. People could thus correctly believe that there are some actions that are good, in the same way they believe that some actions are "helpful". I think, and have been told, that there are mathematical reasons to think this particular instantiation is not the case, but I haven't fully understood them yet.