I often have trouble finishing one of your pieces because they always make me think of digressions. But I appreciate that.
“One should either apply the principle consistently to all sentient life (and then, presumably, humans must mainly be engaged in “correcting” the disadvantages of other animals) or drop it completely.”
Are these really the only choices? The argument takes for granted that sentience is the criterion for deciding the status of moral patients. Given the point of the argument being made, other criteria would probably fare even worse for redistributionists. But perhaps there is a twisty argument for a different criterion, one that would exclude animals (or enough of them to seem plausible). The specific one that I prefer would not help the redistributionists, but maybe they might be able to find a more sympathetic parallel.
I assume that if one is “unlucky” enough to be a non-sentient being, then one cannot plausibly be compensated for that. So, I’m not sure what “other criteria” than sentience could be used by redistributionists. And I can’t see how to exclude non-human animals in a non-arbitrary way. But, of course, I might be overlooking something. You do not say which criterion you prefer.
I think I’ve mentioned it before, but maybe not. I think “sentience” doesn’t work for two reasons. First, very strictly speaking, “sentience” refers to having senses rather than being rational or whatever people usually mean when they use it loosely. Second, even on a less strict reading, human infants, adult coma victims, etc. are arguably not sentient (in the colloquial sense) but are still usually considered moral patients: entities with rights that moral agents should observe.
A full moral agent should be able and willing to observe norms, fulfill duties. No one expects a broader category to work. And so that delineates the largest possible scope of moral agency. I think this means that moral agents should have rights that correspond to their obligations. (Other positions might be adopted, but I won’t examine those here.) So moral agents should have the same rights as moral patients as a default.
To extend the category of moral patient to include non-humans, non-adults, or incapacitated adults, I suggest that it requires a guardian or proxy who is a moral agent and is willing to fulfill the obligations on behalf of the moral patient. This is basically a person who is willing to take responsibility for the moral patient. So animals that have a guardian/owner have moral status derived from that relationship. Other moral agents should respect the guardian's responsibilities, but the guardian also has additional duties. I cannot arbitrarily harm your dog, but I can defend myself if it attacks me, and you would be liable. Similarly, a parent (natural or adoptive) has responsibility for a child, and relatives have responsibility for incapacitated persons.
Currently, the state acts as the guardian of last resort, so that all children and incapacitated adults have a de facto guardian. But given the nature of the state, it does not really bear any liability, so wards of the state have no recourse. In a society without a state, this role could be played by religious institutions or charities. The extreme result is the same: someone in sufficiently dire circumstances has no recourse other than the kindness of bystanders, which carries no obligation or liability. Perhaps this is acceptable because indigent persons often do not wish to receive the “help” offered by a state or institution.
I've put this in terms of moral agency and moral patient status. This is perhaps an error, and the categories should be legal or have to do with social interaction or whatever. I'm going through a sophomoric stage where I am not sure how to distinguish morality from prudence or justice.
As I said, I don’t think the redistributionists would prefer my account of moral patient status, as it provides even less cover for their approach. My original point was just that sentience is not the only possible criterion, and they would perhaps like to come up with a better one. That is not to say that they could.
I can’t recall your having mentioned it before. Etymologically, “sentience” refers to experiences via the senses. But philosophers often use it to mean something like “having some level of consciousness”. Presumably, because it is useful to replace an awkward expression with one word. And because “having some level of consciousness” is what really matters, whether it occurs via the senses or not (such as when we are asleep and dreaming).
Human infants are surely normally conscious and sentient. Coma victims are usually understood not to be either, or barely so. Of course, I would understand the rights regarding infants as belonging to their guardians. And the rights of coma victims only persist as long as they might recover (as though waking from a dreamless sleep) or until any contracts to support them lapse.
A full moral agent need not be willing to “observe norms, fulfill duties”. Other animals are not moral agents, but they are moral patients. They cannot behave morally or immorally, but they can be treated morally or immorally—just because they are sentient.
I am inclined to say that moral agents have rights and duties, but moral patients do not. It is immoral to gratuitously inflict great suffering on other animals, but not because this flouts their “rights”.
“sentience is not the only possible criterion” for what? It seems to be the (unintentionally) implied criterion for having “unfair” advantages or disadvantages that can be “corrected”.
>“having some level of consciousness” is what really matters,
That is vague. Philosophers aren't willing to agree even that other humans have consciousness. Animals and bacteria and insects might or might not. Panpsychists hypothesize that everything does. It's unhelpful.
> the rights of coma victims only persist as long as they might recover (as though waking from a dreamless sleep) or until any contracts to support them lapse.
Or until whoever is paying the bill decides to stop. What kind of a right is that? One that depends on someone accepting responsibility.
>A full moral agent need not be willing to “observe norms, fulfill duties”.
Moral agents that are unwilling to observe norms or fulfill duties are not very good moral agents. Why does consciousness entitle them to rights if they are unwilling to refrain from violating others' rights? If they deny the existence of rights, why should rights apply to them? What does moral agency mean if not at least the ability to fulfill duties? We can argue over willingness, I suppose.
>Other animals are not moral agents,
On your account, wouldn't this depend on how conscious they are?
>but they are moral patients. They cannot behave morally or immorally, but they can be treated morally or immorally—just because they are sentient.
This seems different from what you wrote previously. If moral patient status is derived from consciousness, what does moral agency depend on? Still consciousness, but more of it? Or the ability to behave morally or immorally? Isn't that a question of how other people evaluate their actions, rather than something intrinsic to the actions themselves?
>I am inclined to say that moral agents have rights and duties, but moral patients do not.
Certainly animals have no duties.
>It is immoral to gratuitously inflict great suffering on other animals, but not because this flouts their “rights”.
I suppose we can come up with a distinct term for that. But it doesn't seem very parsimonious when it's the same concept. What is the difference between X having a moral right not to have great suffering inflicted on it and it being immoral to inflict great suffering on it? Perhaps a thing with a right will have recourse when the right is violated, but a thing that it is immoral to inflict suffering on might not have recourse?
>“sentience is not the only possible criterion” for what?
Moral patient status.
>It seems to be the (unintentionally) implied criterion for having “unfair” advantages or disadvantages that can be “corrected”.
Different terms for the same thing? But what counts as fair is always controversial. Before we can know how to play fair, we need to know the rules of the game and who is playing.
We need to separate metaphysical problems from epistemological ones. If it is correct, then the thesis that consciousness is what ultimately matters appears to be fairly clear and helpful. How it can be decided who, or what, has consciousness is a completely separate problem. Your response is like saying to a theory of what truth is (e.g., accurate depiction) that the theory is vague and unhelpful because we don’t know which theories are accurate depictions. That is a separate problem.
Libertarian rights don’t depend on other people accepting responsibility. We always have the right to liberty (to not have impositions initiated on us unnecessarily). But what counts as our liberty can depend on circumstances, including any valid contracts.
Moral agents who are unwilling to observe norms or fulfil duties are morally bad but still fully moral agents (hence we hold them morally responsible).
Being a person is what entitles one to rights and obliges one to fulfil duties. To violate some of someone else’s rights is not thereby to lose all of one’s own rights but to be liable for full restitution (any part of which may be taken as retribution).
I assume other animals are not persons (roughly, beings capable of having conscious theories about theories that extend beyond guessing their truth value).
Moral agency depends on the ability to comprehend rights and duties. Persons have this. But people can disagree about which rights and duties exist.
Rights are interpersonal moral rules that only persons can understand, claim, and respect. But we may have no practical recourse if they are violated.
I can’t see how a non-sentient thing can in itself be treated immorally. If I spitefully destroy your flowers, then I behave immorally towards you and not the flowers.
What are “different terms” for the “same” what?
We can even have a theory of fairness that is prior to any game. But we first need to know what problem we are trying to solve with that theory.
> Your response is like saying to a theory of what truth is (e.g., accurate depiction) that the theory is vague and unhelpful because we don’t know which theories are accurate depictions. That is a separate problem.
If we had no way of determining which theories are more accurate than their rivals, accurate depiction would be useless as a theory of truth. Then whatever theory of truth you used would be as good as another. We have no final, infallible method of comparing theories, but we have something useful.
This doesn't really address your objection, though. Consciousness only seems to work because we feel like we can spot it. Okay, let's go with that. Why does consciousness work as the criterion for moral patient status? Because the entity can suffer. Why is this relevant? Perhaps morality demands that we never cause others to suffer. Or, stronger, morality demands that we prevent suffering. Or some form of compassion is a prerequisite for social cooperation?
The strong version is way too strong: we would spend our entire lives doing nothing but trying to help wild animals, which suffer quite a bit even if we aren't adding to it.
Even the weak one seems impossible to comply with. Perhaps the Jains approximate it, and that seems to be their ideal.
Compassion seems ambiguous: compassionate how, and to whom? I am not convinced.
Is there a weaker version, where we should avoid causing terrible suffering but can do some sort of cost/benefit analysis for lesser suffering?
Maybe virtue ethics gives the answer. The virtuous person can cause certain unimportant sorts of suffering, but not other sorts?
If we assume liberty, does that help us answer?
My answer seems better. People who are willing and able are in it together to protect themselves from things that are not.
>Libertarian rights don’t depend on other people accepting responsibility.
Perhaps not by definition, but pragmatically they do. If no one takes responsibility, no libertarian rights will be respected. Assuming liberty works as a theoretical tool, but not as a pragmatic one. In practice, we need something more.
>We always have the right to liberty (to not have impositions initiated on us unnecessarily).
Because we are conscious, or ...?
Usually you argue about what liberty means, and leave out claims that we ought to adopt liberty (though clearly you advocate it). Are you expanding the scope of your argument? Is that a hypothesis or a conclusion?
>Moral agents who are unwilling to observe norms or fulfil duties are morally bad but still fully moral agents (hence we hold them morally responsible).
Do we hold them responsible if they are clearly insane? We would treat them marginally differently from a wild animal, by holding open the possibility that they might change.
Are we describing something that happens to them or to us when we say we hold them responsible? We can imprison a violent nihilist and cage a tiger, or kill them. Can we hold either of them responsible?
>Being a person is what entitles one to rights and obliges one to fulfil duties.
That just changes the terms of the question without answering it. What is a person? We have discussed this before and I was not convinced that this adds anything.
>To violate some of someone else’s rights is not thereby to lose all of one’s own rights but to be liable for full restitution (any part of which may be taken as retribution).
Sure. Violating rights is not the same as denying that they restrain you. There is a difference between admitting that one cheated and paying restitution and denying that one is playing the game.
>persons (roughly, beings capable of having conscious theories about theories that extend beyond guessing their truth value).
Ah!
>Moral agency depends on the ability to comprehend rights and duties.
This sounds a lot like being able to observe norms and fulfill duties. Is comprehending enough if you can't fulfill? Perhaps we only disagree about the necessity of willingness? I should clarify: I do not consider an ordinary thief to be unable or unwilling to observe norms or fulfill duties. I just consider them to be cheating. The difference is that on some level the cheater must admit that punishment is just. They want to continue playing the game.
>Rights are interpersonal moral rules that only persons can understand, claim, and respect. But we may have no practical recourse if they are violated.
I was speaking of the ideal. When a right is violated, ideally the victim would have recourse, though in reality they may not. We can also imagine the possibility that there are other wrongs where recourse is not possible or appropriate, though at least at first glance that seems unlikely.
>I can’t see how a non-sentient thing can in itself be treated immorally. If I spitefully destroy your flowers, then I behave immorally towards you and not the flowers.
I agree, but can I steelman it?
What is our theory of morality? Religious persons might consider it immoral to burn a holy text, even when done by its owner. The Ten Commandments proscribe the making of graven images. We might not consider these acts immoral, but others have. Perhaps they would consider them to be offenses against themselves, or against their divinity. Or they might consider all sacred items to have shared ownership of a sort, or an easement. For me at least, it is easy to reframe restrictions on treatment of objects as property violations. That requires an owner or something similar, but then who is disputing the action if not the owners or their proxy? If no one is objecting, and everyone has the relevant information, was there an immoral act?
>What are “different terms” for the “same” what?
"immoral to gratuitously inflict great suffering on another animals, but not because this flouts their “rights”."
If it is immoral to gratuitously inflict great suffering on animals, this is the same as saying it flouts their rights, or flouts the rights of their owners/guardians/responsible persons. For me, these are isomorphic if not fully synonymous. I admit it is possible to think that one can do something immoral that no one else objects to or has a right to object to. But even if such acts exist, what is the recourse? One might try to change oneself, but what is the consequence if one does not? One cannot take oneself to court and pay restitution to oneself. I guess we gather a burden of sin.
>We can even have a theory of fairness that is prior to any game. But we first need to know what problem we are trying to solve with that theory.
Or rules of the meta-game. People who want to solve a different, incompatible problem won't agree.