Free will, manipulation, and operant conditioning

The typical argument against compatibilism these days centers on manipulation cases: contrived instances in which a person’s behavior is altered or directed by some external agent. For instance:

Imagine someone is manipulated by direct electronic stimulation of her brain, which causes her every decision and action, moment to moment, in such a way that she still feels she is making free decisions according to her own will, when in fact she is controlled by the machine stimulating her brain. Is she morally responsible for her actions?

I think we all agree that the answer is “no”; someone attached to such a machine is not responsible for her actions. Then, so the argument goes, we are not responsible for our actions in real life either, for causality is just such a machine.

Yet when I contemplated the above scenario, one question immediately came to mind: Who attached her to that machine? If she attached herself, there is no threat to her free will (indeed, it could well expand her freedom, by removing her drug addiction or some such). If someone else attached her (a mad scientist, as the scenario seems to imply), then that person holds moral responsibility for her actions. In either of these cases, the almost universal intuition is that free will remains intact.

But what if, as is closest to the real-life situation, the laws of nature attached her to that machine? It’s hard to imagine that scenario, but let’s do our best. Suppose there is a law of nature that binds us all to brain-stimulating machines; it’s just how the world works. Everything else stays the same: we still have the same feelings, desires, and experiences we do now. Would we thereby conclude that we have no free will?

I don’t think we would! I think we would conclude that the brain-stimulating machines, and whatever purposes they are designed to serve, are part of our free will, or more properly, part of the apparatus by which our free will is achieved. We might even fear that if we turned off the machines, our free will would thereby be destroyed.

And while you may disagree—my own intuitions are a little fuzzy on this weird scenario—nonetheless it seems to me that you are only begging the question if you insist that beings attached to such machines have no free will. The reason that the scenario seems so compelling at first is that someone else is controlling us—another conscious being, whose will is morally relevant. The responsibility is therefore transferred to the other person. When you take that away, and leave nature itself to control our behavior, the responsibility can no longer be transferred; it must either remain or be destroyed, and at least my intuition is that it remains. Maybe yours differs; but that’s not a rational argument, it’s a difference of intuition.

The more interesting question, for me at least, is how we can properly characterize behaviors that appear to be volitional but not rational—actions that, at first glance, we would say are conducted out of our own free will but nonetheless display a weakness of character. It is tempting to say that free will consists in the capacity to behave rationally; but if that is right, then people who don’t act rationally (which includes all of us some of the time and some of us nearly all of the time) are lacking in free will, and we cannot hold them responsible for their actions.

In some cases this seems right: A serial killer so insane that he does not understand why chopping people into pieces is unacceptable does in fact seem to have something broken inside his brain, something that may well render him beyond all rational argument. Such a being seems more like a shark than a human; an animal with raw desire and no capacity for empathy or reflection.

But in cases of more minor irrationality, this kind of reasoning doesn’t seem to work. First, there are those who say that morality amounts merely to being substantively rational; I’m not sure this is right, but if it is, then not only serial killers but every other person who has ever acted immorally has acted out of irrationality. Anyone who lies, anyone who steals, anyone who rapes: they have all had their rationality compromised. And so we can hold no one responsible for their actions in any deep moral sense; there are only brains that work (behave rationally, and hence morally) and brains that don’t work (behave irrationally, and hence immorally).

Even if we reject that idea, there are clear examples of people behaving irrationally—i.e. against their own objectives—that nonetheless seem to be within the realm of volitional action. Suppose you are in New York and want to board a train for Washington, D.C.; but you misread the sign and instead get on a train headed for Philadelphia. It seems profoundly strange to say that you acted against your will, but nonetheless it seems apparent that you acted irrationally—your behavior did not serve your own interests. It was a mistake; an honest mistake, but a mistake—something went wrong in your decision process.

Now, it may be possible to narrow the definition of “volitional” and/or expand the definition of “rational” to make the two sets coextensive; but this seems quite counter-intuitive and a little ad hoc. Mistakes and (sane) crimes just don’t feel to me like cases where free will is compromised. They are attributable (presumably) to some sort of cognitive malfunction; but the malfunction does not appear to be in the volition system.

Indeed, I think I can give this intuition a little more rational warrant. What is really at stake, when we ask whether a person has free will? It seems to me that what is really at stake is the question, “Are we morally justified in rewarding or punishing this person?” If you were to conclude, “No, they do not have free will, but we are justified in punishing them”, I would think either that you meant something different than I do by free will or that you were simply talking nonsense. Conversely, if your ruling were “Yes, they have free will, but we may not reward or punish them”, I would be similarly confused. Moreover, the concern that without free will our moral and legal discourse collapses seems to be founded upon this general notion—that reward and punishment, crucial as they are to ethics and law, depend upon free will. Hence, we can explain why we would say that mistakes and crimes qualify as exercises of “free will”: it would make sense to reward and punish them, given the right circumstances. (We punish mistakes only when they are negligent; this also seems to be precisely the circumstance under which punishment is likely to have an effect. If you were doing your best, what good does it do to punish you for failing?)

Yet, consider this as a scientific question, as I like to do in ethics generally. What kind of organism can respond to reward and punishment? What sort of thing will change its behavior based upon rewards, punishments, and the prospect thereof? Certainly you must agree that there is no point punishing a thing that will not be affected by the punishment in any way—banging your fist on the rocks will not make the rocks less likely to crush your loved ones. Conversely, I think you’d be hard-pressed to say it’s pointless to punish if the punishment would result in some useful effect. Maybe it’s not morally relevant—but then, why not? If you can make the world better by some action, does that not ceteris paribus give you moral reason to perform that action?

We know exactly what sort of thing responds to reward and punishment: Animals. Specifically, animals that are operant-conditionable, for operant conditioning consists precisely in the orchestrated use of reward and punishment. Humans are of course supremely operant-conditionable; indeed, we can be trained to do incredibly complex things—like play a piano, pilot a space shuttle, hit a fastball, or write a philosophy paper—and, even more impressively, we can learn to train ourselves to do such things. In fact, clearly something more than operant conditioning is at work here, because certain human behaviors (like language) are combinatorially complex and therefore impossible to learn by simple reward and punishment. There is a lot of innate cognition going on in the human brain, which parcels the world into comprehensible systems—but over that layer of innate cognition we can add a virtually endless range of possible learned behaviors, far wider-ranging than those of any other organism.

That is to say, learning as a quantitative trait—the capacity to change future behavior in a goal-directed way based upon the outcomes of past behavior—aligns precisely with our common intuitions about free will: humans have the most, animals have somewhat less, computers might possibly have some, and rocks have none at all. Yes, there are staunch anthropocentrist dualists who would insist that animals and computers have no “free will”. But if you ask someone, “Did that dog dig that hole on purpose?” their immediate response will not include such theological considerations; it will attribute some degree of free choice, perhaps less than our own, to Canis lupus familiaris. Indeed, I think if you ask, “Did the chess program make that move on purpose?” the natural answer attributes some sort of will even to the machine. (Perhaps it is a mere simulation of will, or else its will is a reflection of the will of the designer; but nonetheless it’s hard to deny that the queen took your bishop intentionally. It certainly didn’t do it by accident.)
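To make that concrete, here is a minimal sketch in Python of what “responding to reward and punishment” amounts to as a mechanism. It is purely illustrative: the class name, the action labels, the reward values, and the learning rate are all invented for the example, and no real organism is this simple. The point is only that there is a definite update rule by which past outcomes reshape future behavior, and that a rock implements no such rule.

```python
import random

class OperantLearner:
    """Minimal law-of-effect learner (hypothetical, for illustration only):
    actions followed by reward become more likely; actions followed by
    punishment become less likely."""

    def __init__(self, actions, learning_rate=0.1, explore=0.1):
        self.values = {a: 0.0 for a in actions}  # learned value of each action
        self.learning_rate = learning_rate
        self.explore = explore

    def act(self):
        # Mostly pick the best-valued action; occasionally try something else.
        if random.random() < self.explore:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def reinforce(self, action, outcome):
        # Positive outcome = reward, negative = punishment.
        # Nudge the action's learned value toward the outcome that followed it.
        self.values[action] += self.learning_rate * (outcome - self.values[action])


# A made-up contingency: pressing the lever is rewarded, ignoring it is mildly punished.
learner = OperantLearner(["press_lever", "ignore_lever"])
for _ in range(300):
    action = learner.act()
    learner.reinforce(action, 1.0 if action == "press_lever" else -0.2)

print(learner.values)  # press_lever ends up valued far above ignore_lever
```

The particular numbers are arbitrary; what matters is that the trained behavior is a function of the rewards and punishments that preceded it, which is exactly the property a rock lacks.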

Yet, if the capacity to respond to reward and punishment is all we need to justify reward and punishment, then the problem of free will collapses. We should punish criminals if, and only if, punishing them will reform them to better behavior (or set an example to deter others from similar crimes). Did we lose some deep sense of moral desert and retribution? Maybe, but I think we can probably work it back in (say, define “just desert” as the precise amount of reward or punishment necessary to motivate behavioral change), and if we can’t, it doesn’t seem like much to lose. Either way, we can still have a justice system and moral discourse. Even better, we can justify quite naturally the idea that humans have more free will than animals, who have more free will than inanimate objects.

Indeed, we can do better than that; we can now determine empirically whether a given entity is a proper agent of moral discourse. The insane serial killer who does not seem to understand the meaning of pain may indeed fail to qualify, in which case we should kill him and be done with it, the same way we would kill a virus or destroy an oncoming asteroid. Or he may turn out to qualify—to respond to rewards and punishments—in which case we should punish him as we would other moral agents. The point is, this is a decidable question, at least in principle; all we need are a few behavioral and psychological experiments to determine the answer.
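To show that the question really does have an empirical shape, here is the crudest possible sketch of such an experiment, with everything hypothetical (the interface, the class names, the trial counts, the threshold): observe the entity with no consequences, then reward one target action and punish everything else, and ask whether the target becomes markedly more frequent. A real behavioral experiment would be vastly more careful; this is only a toy illustration of decidability in principle.

```python
import random

def responds_to_reinforcement(entity, target, trials=500):
    """Phase 1: observe with no consequences. Phase 2: reward the target
    action, punish everything else. The entity "responds" if the target
    becomes markedly more frequent under reinforcement. (Hypothetical test.)"""
    baseline = sum(entity.act() == target for _ in range(trials))
    trained = 0
    for _ in range(trials):
        action = entity.act()
        entity.receive(action, 1.0 if action == target else -1.0)
        trained += (action == target)
    return trained > 1.3 * baseline

class Rock:
    """Punish it all you like; its behavior never changes."""
    def act(self):
        return "sit_there"
    def receive(self, action, outcome):
        pass

class Pigeon:
    """A bare-bones operant learner, like the earlier sketch."""
    def __init__(self):
        self.values = {"peck": 0.0, "ignore": 0.0}
    def act(self):
        tied = len(set(self.values.values())) == 1
        if tied or random.random() < 0.2:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)
    def receive(self, action, outcome):
        self.values[action] += 0.1 * (outcome - self.values[action])

print(responds_to_reinforcement(Rock(), "move_aside"))  # False: nothing changes, so nothing to punish
print(responds_to_reinforcement(Pigeon(), "peck"))      # True: conditionable, hence a candidate for reward and punishment
```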

Honestly, the biggest problem with my theory is that it seems so obvious: If that’s all there is to free will, what are we even arguing about?
