Concerns about AI safety, even legitimate ones, could increase the cost of AI research to the point that relatively attainable and extremely wealth-generating AI technologies simply don't get developed because of the barriers erected in front of them. Even if such technologies do eventually get developed, safety concerns can certainly slow that development. Whether this is a good thing depends on both the potential dangers of AI and its potential benefits.
A related issue is that while AI presents risks, it can also help us deal with other risks. To the extent that AI safety research slows the development of AI at all, it worsens the risks that AI could otherwise help us mitigate. For example, if AI can help us develop vaccines to prevent the next pandemic, failing to develop AI before that pandemic arrives leaves us in greater danger.
In other words: opportunity costs.