My name is Tom Jump, and I want to share a moral framework I’ve been working on and get genuine philosophical criticism. I’m not trying to convert anyone or claim this is “the answer.” I want to know where it fails, where it’s redundant, or where it produces unacceptable conclusions. (I’ve published a book on it and have talked about it for years on my YouTube channel, just as evidence that the model is not AI-generated.)
Core idea
The entire framework is built on one claim:
All involuntary imposition on the will of a conscious being is immoral.
All voluntary assistance of the will of a conscious being is moral.
There are no other foundational rules.
No outcome maximization.
No divine commands.
No moral authority.
No virtue rankings.
Everything reduces to one question:
Was a conscious being forced against their will, or not?
---
What “imposition” means here
An involuntary imposition is any state or action that overrides, constrains, or frustrates the will of a conscious agent without their consent.
That includes obvious cases like assault, theft, coercion, or using someone’s body without permission.
It also includes cases people don’t usually treat as moral matters at all, like a rock falling on someone. That sounds strange, but it leads to an important distinction.
---
Moral badness vs moral blame
This framework separates two things that are often mixed together.
Moral badness is about whether a state of affairs violates a will.
Moral blame is about whether an agent is responsible for that violation.
If a rock falls on someone, their will is violated. That is morally bad in this framework.
But no agent chose it, so no one is morally blameworthy.
This allows the model to say something many systems struggle with:
Something can be morally bad without anyone being morally guilty.
---
Why outcomes don’t justify coercion
Under this framework, killing one non-consenting person to save five others is still immoral.
Even if total suffering is reduced.
Even if the intention is good.
Even if the outcome looks better overall.
The reason is simple: someone’s will was overridden and they were used as a means.
Reducing harm can matter when comparing unavoidable tragedies, but it doesn’t magically turn coercion into moral action. “Less bad” does not become “good.”
---
What this framework seems to handle well
Supporters (and critics) often point out that it cleanly explains things like:
Why consent feels morally fundamental
Why good intentions don’t excuse violations
Why tragic outcomes can be morally bad without implying moral failure
Why nature can cause morally bad states without being evil
Why many moral disagreements collapse into disputes about coercion versus permission
It also avoids a lot of internal tension around exceptions, rule-breaking, aggregation problems, or appeals to authority.
---
What this framework is not
It is not utilitarian.
It is not deontological.
It is not virtue ethics.
It is not religious.
It is not nihilistic.
It also does not say the world can be perfected, that tragedy can always be avoided, or that anyone is morally obligated to optimize outcomes.
It describes what moral wrongness is, not what must be enforced.
---
What I want criticism on
I’m looking for serious objections, not agreement:
Where does this framework break?
What counterexamples make it collapse?
Is it secretly smuggling in assumptions it claims to reject?
Does it reduce to an existing theory under closer analysis?
Are there cases where its conclusions are clearly unacceptable?
If you think the core axiom is wrong, I want to know exactly where and why.
There is a lot more to it; this is just a summary. One of the things I did was copy it into an untrained ChatGPT and ask it to rate my complete model against other religions and models of morality, and mine rated higher than any other.
Here you can find all the parts and everything you need to copy it into ChatGPT:
https://www.churchofthebestpossibleworld.org/askchatgpt
Thanks for reading, and I’m happy to engage with criticism in the comments.

This seems pretty deontological to me.
It seems like the core claim implies that genocide is no worse than having a phone stolen.
I don't know how you got the idea that all immoral actions are equivalent based on anything I said there.
I'm taking what you said literally under the core idea, that there are no other foundational rules. But perhaps comparison of different actions is not considered foundational. How would the framework compare/rank different outcomes or actions?
For example, how would it compare two actions involving involuntary imposition?
Or if an action involves one involuntary imposition (immoral) but also one voluntary assistance (moral), what does that imply overall? Is it always considered immoral, or does it depend on the extent of the imposition relative to the assistance?
That's just a brief overview (as it says in the last paragraph). There is a significant amount more to it on the site linked at the bottom, which answers all of those types of surface-level questions.