If you're reading this and thinking about messaging me, you should, and if that stops being true I will edit this.
My guess would be yes. I too would really like to see data on this, although I don't know how I'd even start on getting it.
I imagine it would also be fairly worthwhile just to quantify how much is being lost to people burning out and how hard it would be to intervene - maybe we could do better, and maybe the effort would pay off.
I certainly agree that outside EA, consequentialism just means the moral philosophy. But inside I feel like I keep seeing people use it to mean this process of decision-making, enough that I want to plant this flag.
I agree that the criterion of rightness / decision procedure distinction roughly maps to what I'm pointing at, but I think it's important to note that Act Consequentialism doesn't actually give a full decision procedure. It doesn't come with free answers to things like 'how long should you spend on making a decision' or 'what kinds of decisions should you be doing this for', nor answers to questions like 'how many layers of meta should you go up'. And I am concerned that in the absence of clear answers to these questions, people will often naively opt for bad answers.
I get the impression many orgs set up to support EA groups have some version of this. Here are some I found on the internet:
Global Challenges Project has a "ready-to-go EA intro talk transcript, which you can use to run your own intro talk" here: https://handbook.globalchallengesproject.org/packaged-programs/intro-talks
EA Groups has "slides and a suggested script for an EA talk" here: https://resources.eagroups.org/events-program-ideas/single-day-events/introductory-presentations
To be fair, in both cases there is also some encouragement to adapt the talks, although I am not persuaded that this happens much in practice, and I suspect that even when it does, it may still be obvious that you're seeing a variant on a prepared script.
My understanding is that some philosophers do actually think 'consequentialism' should only refer to agent-neutral theories. I agree it's confusing - I couldn't think of a better way to phrase it.