At his Fake Nous blog, the philosopher Michael Huemer has an old post on why Bayesian statistics is better than traditional p-value-based statistics. The post also discusses the problem of forming original priors for beliefs, and why that problem doesn't undermine Bayesianism.
I think the post is particularly good at explaining the case for Bayesianism.
I think most frequentists would agree that Bayesian inference is more intuitive. Bayesian inference is much more computationally demanding, though, and you usually get the same answer anyways. (Bayesian estimators are typically asymptotically equivalent to classical estimators!)
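A minimal sketch of the asymptotic agreement this comment appeals to, using a conjugate coin-flip model (the 70%-heads data is illustrative, not from the post): under a flat Beta(1, 1) prior, the posterior mean and the frequentist MLE differ by O(1/n).

```python
# Compare the frequentist MLE of a coin's bias with the Bayesian
# posterior mean under a flat Beta(1, 1) prior, as sample size grows.
for n in (10, 100, 10_000):
    k = int(0.7 * n)               # illustrative data: 70% heads observed
    mle = k / n                    # frequentist point estimate
    post_mean = (k + 1) / (n + 2)  # mean of the Beta(1 + k, 1 + n - k) posterior
    print(n, mle, post_mean, abs(mle - post_mean))
```

At n = 10 the two estimates visibly disagree (0.7 vs. 8/12); by n = 10,000 the gap is on the order of 10⁻⁵, which is the asymptotic-equivalence point in miniature.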
> and you usually get the same answer anyways
I don't agree with this! In reality we don't get asymptotic properties, we get finite-sample properties, and these can vary greatly. For example, MLE often won't even converge for hierarchical models without a fair amount of data. Also, for bespoke models there often isn't a published frequentist estimator available, and attempting to derive one would be a much bigger hurdle for most people than the computational resources required for MCMC or variational inference.
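A toy illustration of the finite-sample failure mode described above, for a balanced one-way random-effects model with known sampling variance (all numbers are made up for illustration; the group means are deliberately chosen so their spread is smaller than the sampling noise): the MLE of the between-group scale τ collapses to the zero boundary, while a one-dimensional grid posterior under a weakly-informative half-normal prior stays strictly positive.

```python
import numpy as np

# Marginal model for per-group sample means (balanced design):
#   ybar_j ~ Normal(mu, tau^2 + se2),  se2 = known sampling variance.
ybar = np.array([0.10, -0.20, 0.15, -0.05, 0.00])  # 5 illustrative group means
se2 = 1.0                                          # their spread is << se2

mu_hat = ybar.mean()                 # profile estimate of mu (balanced design)
taus = np.linspace(0.0, 5.0, 2001)   # 1-D grid over tau >= 0
v = taus[:, None] ** 2 + se2         # marginal variance at each grid point
loglik = -0.5 * np.sum(np.log(v) + (ybar - mu_hat) ** 2 / v, axis=1)

# MLE sits at the tau = 0 boundary here, because the plug-in variance
# estimate sum((ybar - mu_hat)^2) / J is smaller than se2.
tau_mle = taus[np.argmax(loglik)]

# Bayesian alternative: half-normal(1) prior on tau, posterior computed
# on the same grid (no MCMC needed in one dimension).
log_post = loglik - 0.5 * taus**2
w = np.exp(log_post - log_post.max())
w /= w.sum()
tau_post_mean = float((taus * w).sum())  # strictly positive

print(tau_mle, tau_post_mean)
```

The degenerate τ = 0 estimate is exactly the kind of finite-sample pathology that disappears asymptotically but bites with five groups; the prior acts as regularization rather than changing the answer in the large-data limit.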