
Ian Turner

604 karma

Comments (156)

If the problem is an employee rebellion, the obvious alternative would be to organize the company in a jurisdiction that allows noncompete agreements?

These things are not generally enforced in court. It’s the threat that has the effect, which means the non-disparagement agreement works even if it’s of questionable enforceability and even if it is, in fact, never enforced.

@Zvi has a blog post about all the safety folks leaving OpenAI. It’s not a great picture.

If Tina were to advertise that 100% of the profits generated by her store were going to a specific charity, in the current economic arrangement, this would not be a real Profit for Good business.

How much does the ability of companies to muddy the waters affect your analysis? It seems to me that even today, regular for-profit companies find ways to imply that they are socially beneficial, even when the opposite is true.

Oh sure, I'll readily agree that most startups don't have a safety culture. The part I was disagreeing with was this:

I think it’s hard to have a safety-focused culture just by “wanting it” hard enough in the abstract

Regarding finance, I don't think this is about 2008, because there are plenty of trading firms that were careful from the outset and were founded well before the financial crisis. I do think there is a strong selection effect happening, where we don't really observe the firms that weren't careful (because they blew up eventually, even if they were lucky in the beginning).

How do careful startups happen? Basically I think it just takes safety-minded founders. That's why the quote above didn't seem quite right to me. Why are most startups not safety-minded? Because most founders are not safety-minded, which in turn is probably due in part to a combination of incentives and selection effects.

Not disagreeing with your thesis necessarily, but I disagree that a startup can't have a safety-focused culture. Most mainstream (i.e., not crypto) financial trading firms started out as very risk-conscious startups. This can be hard to evaluate from the outside, though, and definitely depends on committed executives.

Regarding the actual companies we have, though, my sense is that OpenAI is not careful and I'm not feeling great about Anthropic either.

(I didn’t read the whole post)

Is deep honesty different from candor? I was surprised not to see that word anywhere in this post.

I am not that knowledgeable myself. But regarding the vaccines, my understanding is that they are not that effective and that distributing them is very expensive. The vaccines require a cold chain and multiple doses spread well apart, and they are delivered by injection. These are all major obstacles to cost-effective distribution in a developing-country setting, so while some might say that "progress is slower than it should be", personally I have pretty low expectations.

My impression is that most CO2 offsets are bogus, basically the climate change version of “just 25 cents will help save a child’s life”. If you subject them to a GiveWell-style analysis, I would guess most of these offset programs fall apart, or at least deliver way less than the promised counterfactual impact.

Also, logically I think it would make sense to lump offsets in with other charitable giving and subject them to the same scrutiny, and when you do that it just doesn’t make sense to buy offsets. Even within the climate cause area, I really doubt that buying offsets would be cost-effective, and I also doubt that climate is the most cost-effective cause area right now.
