Overall this is a good point, but I have one nit:
If this were true, there would be only one chef in the world (the person who is best at being a chef), only one baker, one software engineer, etc.
I don't think this follows; in particular, the policy "everyone does the thing they are best at in the world" doesn't actually make a prescription for most people, since most people are not the best in the world at anything (unless you take a weirdly granular view of things, like "the best orthodontist named P. Sherman, with an office at 42 Wallaby Way, Sydney", at which point the reductio stops seeming obviously absurdum).
This seems really spot-on to me. My two years of startup experience (at Cruise, a different self-driving car company broadly similar to Aurora) feel like the most important period thus far for my personal growth. In fact, I think it's likely that I should have continued in that role for another year, rather than shifting into direct work when I did.
This is super interesting! Some of the most promising-sounding links seem broken, though [edit: fixed]
I've seen concern that hospitals will run out of ventilators. Potential intervention: design a cheap machine to pump bag valve masks (which are ubiquitous and apparently do much of the same job as a ventilator, but currently require a human operator). I'd guess you could build something to perform this job for <$50; possibly very quickly if you had a team of competent engineers.
I don't know how you'd get them distributed though, and I'm skeptical that the FDA would make it easy to sell them to US hospitals. I'm interested in anyone with experience in the medical device space, or experience in the constraints on what devices hospitals are allowed to use, weighing in on that question.
Rob Wiblin wrote a post about recycling and garbage disposal last month; you might find what you're looking for there or in the references at the bottom.
What have you read about it that has caused you to stop considering it, or to overlook it from the start?
This response seems unlikely to be a crux for you, but I don't often see it written explicitly, so I'll mention it anyway in case someone reading hasn't thought of it:
Negative utilitarianism implies that you would prefer to destroy a universe with an unbounded amount of certain positive experience, if that would prevent an infinitesimal chance of one speck of dust getting in someone's eye.
This means that a negative utilitarian will basically always prefer that the universe be destroyed, since there will always (I suspect) be some uncertainty about which things suffer (1 is not a probability).
[This comment previously consisted of an objection that misunderstood the point of this post, and was mostly deleted]
This is an interesting topic that I hadn't heard discussed before, and I appreciate learning about these benefits!
While I understand that your goal here was to list arguments in favor of competitive debate, and leave any counterarguments out of the scope, I also think that in doing so you might have fallen short of the stated promise to
do so in the spirit of anti-debate – pointing out the limitations of my arguments where I notice them, and leaving open the possibility that anti-debate could be a superior alternative.
Overall, I think that this aim is incompatible with your decision that
[the disadvantages of competitive debate] – and therefore any all-things-considered conclusions – fall outside of the scope of this post.
unless you plan to write further posts following up on those disadvantages.
In particular, it seems like this post naturally raises the question "and what are the negative impacts of competitive debate on the debaters, if any?", to which it seems like there are some obvious answers, and probably some less obvious ones.
I think that listing benefits on its own is a fine basis for a post; it just doesn't seem to me like "the spirit of anti-debate".
There are no obvious structural connections between knowing correct moral facts and evolutionary benefit.
There do not seem to be many candidates for types of mechanism that would guide evolution to deliver humans with reliable beliefs about moral reasons for action. Two species of mechanism stand out.
I haven't read Lukas Gloor's post, so I'm not sure whether this counts as "subjectivism" and therefore is implausible to you, but:
Another way to end up with reliable moral beliefs would be if they do provide an evolutionary benefit. There might be objective facts about exactly which moral systems provide this benefit, and believing in a useful moral system could help you to enact that moral system.
For example, it could be the case that what is "good" is what benefits your genes without benefiting you personally. People could thus correctly believe that there are some actions that are good, in the same way they believe that some actions are "helpful". I think, and have been told, that there are mathematical reasons to think this particular instantiation is not the case, but I haven't fully understood them yet.