I don't know exactly what markets you're referring to, but have you considered that they could be right?
And maybe it's worth the trade-off, but if you're consistently applying the principle that "more information is always good", you should want to know when people are annoyed or angry with you (though on reflection you might conclude that this principle has limits).
Maybe I'm missing something, but I think it's a negative sign that mirror bacteria seem to have gone almost entirely undiscussed within the EA community until now (that said, what really matters is the percentage of biosecurity folks in the community who have heard of this issue).
Does scaling make sense with a principles-first strategy? My intuition is that such a strategy favors quality over quantity.
I think a key crux here is whether you expect AI timelines to be short or long. If they're short, there's more pressure to focus on immediately applicable work; if they're long, there's more benefit to having philosophers develop ideas that gradually trickle down.