Skinner's main target in Science and Human Behavior (New York:
Free Press, 1953), and also elsewhere, e.g. in Beyond Freedom and Dignity (New York: Alfred A. Knopf, 1971), is what
he calls "mentalism," namely the appeal to inner psychological phenomena in the explanation of human behavior. What is wrong
with mental notions in the explanation of behavior? Many things, according to Skinner, among them these:
1. We cannot directly observe mental phenomena.
As a result they are "inferential." On Skinner's view, this disqualifies them as scientific explanations of behavior. Skinner
appeals to this point at several places in our reading. In a discussion of psychoanalytic theory, he writes: "any mental event
which is unconscious is necessarily inferential, and the explanation [which makes use of it] is therefore not based upon independent
observations of a valid cause" (30; parenthetical page references are to the excerpt from Science and Human Behavior
in Ned Block, ed., Readings in the Philosophy of Psychology, Cambridge: Harvard Univ. Press, 1980). Later he makes
a similar criticism of commonsense psychological explanation, e.g. explaining why someone drinks water by saying he is thirsty.
Skinner writes, of this "explanation," that: "if it means that he drinks because of a state of thirst, an inner causal event
is invoked. If this state is purely inferential--if no dimensions are assigned to it which would make direct observation possible--it
cannot serve as an explanation" (33). At one point, Skinner even seems to identify "inferential" with "fictional" (28).
Now, there is surely something valuable and important about this. Invoking
phenomena which we cannot directly observe in explanations of things we can observe is always risky. I do not mean risky in
the sense that we may turn out to be wrong; virtually any scientific claim is risky in that sense, including claims about
things we can directly observe. The more serious risk is that we will make claims which are really not testable at all, which
empirical evidence can never show to be mistaken because we can always fudge the theory a bit to explain why the evidence
was to be expected after all. To use Karl Popper's term, the danger of "inferential" states is that theories making use of
them may not be falsifiable. (Popper himself wrote of psychoanalysis: "those 'clinical observations' which analysts
naively believe confirm their theory cannot do this any more than the daily confirmations which astrologers find in their
practice." Popper, Conjectures and Refutations: The Growth of Scientific Knowledge (New York: Basic Books, 1962), pp.
37-38.) So we can read Skinner as making the important point that when we invoke theoretical entities or phenomena we need
to do so in such a way that the theory making use of them makes predictions about observable phenomena which can be falsified.
But Skinner seems to take himself to have shown something
much stronger than this, namely that a scientific theory should not make use of inferred entities or phenomena at all.
And this seems much too strong a claim. If we restricted physics, or even archeology or paleontology, to making use only of
things that can be directly observed, we would deprive ourselves of most of their most interesting results--and also of a
good deal of their predictive power. It often happens that the best theory which accounts for observed phenomena and makes
predictions about unobserved but observable phenomena makes use of a good deal of theoretical apparatus for which our only
evidence is inferential. An analogy may be helpful in seeing this point. Imagine typing things into the keyboard of a computer,
observing the computer's responses, and trying to formulate hypotheses about how the machine will respond to various future
stimuli. Conceivably we could do this without appealing to any hypotheses about how the machine is programmed, so that
our theory simply took the form of correlations between inputs and outputs. But it seems quite clear that it will be far more
useful to hypothesize about the machine's (internal, not directly observable) program, using hypotheses about the program
together with information about inputs to formulate predictions about the machine's output. Now we may not be quite like computers,
but presumably the principles which govern our behavior are at least as complex as those that govern a computer, so we may
reasonably expect that formulating hypotheses about our own internal states and processes will turn out to be the most effective
way of explaining and predicting our behavior. At the very least, it seems clear that it would be a mistake to rule out a
priori any theory which made use of such hypotheses.
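The keyboard analogy above can be made concrete with a toy sketch (all names here are invented for illustration, not drawn from Skinner or Dennett): a machine whose response to the very same input depends on a hidden internal state. A theory restricted to input-output correlations cannot even be stated as a function, while a theory that hypothesizes the internal state predicts every response.

```python
# A machine whose response to the same stimulus differs depending on an
# internal state that cannot be directly observed from outside.

class ToggleMachine:
    """Answers 'yes' or 'no' to the input 'ping', alternating each time."""
    def __init__(self):
        self._state = False  # hidden internal state

    def respond(self, stimulus):
        if stimulus == "ping":
            self._state = not self._state
            return "yes" if self._state else "no"
        return "?"

machine = ToggleMachine()
history = [(s, machine.respond(s)) for s in ["ping", "ping", "ping", "ping"]]

# A theory stated purely as stimulus -> response correlations is not even a
# function: the single stimulus 'ping' is paired with both 'yes' and 'no'.
outputs_for_ping = {out for (stim, out) in history if stim == "ping"}
assert outputs_for_ping == {"yes", "no"}

# A theory that posits the internal state predicts every response correctly.
def predict(state, stimulus):
    """Return (next_state, predicted_response) for the hypothesized program."""
    if stimulus == "ping":
        return (not state), ("yes" if not state else "no")
    return state, "?"

state = False
for stim, observed in history:
    state, predicted = predict(state, stim)
    assert predicted == observed
```

The point of the sketch is just the one made in the text: the hypothesis about the machine's inner program is supported only inferentially (we never open the box), yet it is what gives the theory its predictive power.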
2. Mentalistic accounts are not genuinely explanatory.
Skinner argues that many supposed explanations are really just made up on the spot and do not provide a genuine account of
one's behavior. After giving a number of examples (e.g. one is confused because his mind is failing; one is disorganized because
his ideas are confused), he writes: "in all this it is obvious that the mind and the ideas . . . are being invented on the
spot to provide spurious explanations" (30).
Again, Skinner seems to be providing a useful warning: it would be a
mistake to take such offhand remarks as having much explanatory power. (But, for a defense of the view that commonsense psychology
does provide a fairly powerful explanatory account of a good deal of our behavior, see the writings of Jerry Fodor, e.g. his
Psychosemantics (MIT Press, 1987), Chapter One.) On the other hand, most such remarks are not even supposed
to be explanations of behavior; often they are just casual ways of describing it. The explanatory emptiness of much
of our ordinary talk about mental events is not evidence that mentalistic notions can find no place in a genuinely scientific
account of human behavior.
3. Mentalistic explanations are typically redundant.
Skinner claims that mentalistic explanations really just restate the facts of behavior in more obscure language. He writes,
for example, that: "A single set of facts is described by the two statements: 'He eats' and 'He is hungry.' . . . A single
set of facts is described by the two statements: 'He plays well' and 'He has musical ability.'" (31). Here there seems
to be at least a trace of the linguistic thesis of philosophical behaviorism as exemplified by Carnap and, at one time, Hempel.
The idea seems to be that the mentalistic statements have the same meaning as the behaviors that count as evidence for them.
But 'He eats' and 'He is hungry' don't mean quite the same thing (either could be true without the other being true), and
in cases where mentalistic notions are doing more theoretical work it will be even clearer that there is no straightforward
translation from mentalistic talk into behavioristic talk.
4. The "middle link" argument. Skinner suggests
that, since the inner mental states which are supposed to explain behavior are themselves determined by external stimuli,
they can safely be ignored: we can leave out the middleman and simply study the relations between stimuli and behavior. "Unless
there is a weak spot in our causal chain so that the second link is not lawfully determined by the first, or the third by
the second, then the first and third links must be lawfully related. If we must always go back beyond the second link for
prediction and control, we may avoid many tiresome and exhausting digressions by examining the third link as a function of
the first" (35).
At first sight, this looks very reasonable. If S determines M and M
determines R, then S indirectly determines R: why not just consider the relationship between S and R, ignoring M? Dennett's
computer analogy, which I mentioned above, is helpful here. It may be that the most effective way of explaining the relationship
between S and R is by way of hypotheses about the nature of M. Setting aside what is "hard-wired" (which is comparable to human
genetic makeup), how the machine is programmed is determined by inputs to the machine and, together with current inputs, determines
the machine's output: but trying to predict output on the basis of input alone, without hypotheses about the machine's internal
states and processes, is likely to be a disaster. It is worth mentioning Noam Chomsky's discussion of this point early in
his review of Skinner's Verbal Behavior (in Language vol. 35 no. 1, 1959; reprinted in Ned Block, ed., Readings
in the Philosophy of Psychology Volume 1 (Harvard, 1980)): "Anyone who sets himself the problem of analyzing the causation
of behavior will . . . concern himself with . . . the record of inputs to the organism and the organism's present response,
and will try to describe the function specifying the response in terms of the history of inputs. . . . The differences that
arise between those who affirm and those who deny the importance of the specific 'contribution of the organism' to learning
and performance concern the particular character and complexity of this function" (49).
5. Mentalistic explanations are homuncular.
Skinner in a number of places objects to mentalistic explanations that they in effect invoke a little person or homunculus
with all the same abilities that the ordinary person has. "The inner man is regarded as driving the body very much as the
man at the steering wheel drives a car" (29). Explaining the behavior of a person by appealing to a little person inside
the head, "driving" the body, clearly does not accomplish anything, since the actions of the homunculus are just as much in
need of explanation as the actions of the person were originally. This is the criticism Dennett takes most seriously; Dennett's
version is: "Since psychology's task is to account for the intelligence or rationality of men and animals, it cannot fulfill
its task if anywhere along the line it presupposes intelligence or rationality" (Dennett, "Skinner Skinned," in Dennett, Brainstorms,
Cambridge: MIT Press, 1978, p. 58).
All right. There's clearly something to this. But notice two things.
(1) It clearly doesn't accomplish anything to "explain" someone's behavior by reference to a "homunculus" just as smart as
the original person. But it doesn't follow that homunculi are useless. They may nevertheless accomplish something if they
are dumber than the original person. We might be able to understand the capacities of a person in terms of the interactions
of a number of agents each of which has simpler capacities than the original person; we might then explain each of these dumber
agents in terms of a system of still dumber agents, and so on until at the very bottom level we have something so simple it
can be understood in terms of neurons firing or something of the sort. This kind of explanation is familiar from computer
science: a big complicated program may have a number of subroutines which can be thought of as agents dumber than the original
program; these subroutines may themselves be decomposed into more basic routines, and so on, until at the bottom we reach
circuits opening and closing. For the view that something like this is the best way to understand the human mind, see e.g.
Marvin Minsky, The Society of Mind; see also some of Dennett's essays in Brainstorms, especially "Artificial
Intelligence as Psychology and as Philosophy."
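The decomposition strategy can be sketched in miniature (a hypothetical example of the subroutine picture, not anything from Skinner, Dennett, or Minsky): a "smart" agent that adds integers, built from a dumber agent that handles one bit position, which in turn bottoms out in gate operations that presuppose no intelligence at all.

```python
# Homuncular decomposition in miniature: each level is "dumber" than the one
# above it, until the bottom level is just circuits opening and closing.

def and_gate(a, b):
    # bottom level: a trivial operation presupposing no intelligence
    return a & b

def xor_gate(a, b):
    return a ^ b

def full_adder(a, b, carry):
    """A dumber agent than the adder: it only handles one bit position."""
    s = xor_gate(xor_gate(a, b), carry)
    c = and_gate(a, b) | and_gate(carry, xor_gate(a, b))
    return s, c

def add(x, y, width=8):
    """The 'smart' agent: integer addition, decomposed into full adders."""
    result, carry = 0, 0
    for i in range(width):
        bit_sum, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit_sum << i
    return result

assert add(19, 23) == 42
```

Nothing at the bottom level "knows how to add"; the capacity of the top-level agent is fully explained by the interaction of components each simpler than itself, which is just the pattern of explanation the text describes.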
(2) The second thing to notice is that even if we must ultimately explain
intelligence or rationality in terms that don't presuppose intelligence or rationality, it doesn't necessarily follow that
intelligence and rationality should not be appealed to at all, or that they are ultimately unreal. Rather than showing that
we aren't really rational, such an explanation might instead show what rationality consists in, might show what it
is to be rational. This is Dennett's main point in "Skinner Skinned." Dennett argues that there is a crucial difference between
explaining and explaining away (65). If our explanation of apparently rational behavior turns out to be extremely
simple, we may want to say that the behavior was not really rational after all. But if the explanation is very complex and
intricate, we may want to say not that the behavior is not rational, but that we now have a better understanding of what rationality
consists in. (Compare: if we find out how a computer program solves problems in linear algebra, we don't say it's not really
solving them, we just say we know how it does it. On the other hand, in cases like Weizenbaum's ELIZA program, the explanation
of how the computer carries on a conversation is so simple that the right thing to say seems to be that the machine isn't
really carrying on a conversation, it's just a trick.)
Professor Brown's summation and critical analysis of Skinner's criticisms of mentalism has the slant of a believer in cognitive
processes with their internal events. Skinner viewed such internal events as epiphenomena. My own approach (though I am at this
point too far removed in time from my last detailed study of Skinner to say whether he made these same distinctions) is to admit
rational analysis, much like mathematical analysis, as a skill which humans have. Thus it is profitable to talk of concepts, of
plans, of values, and the like in analyzing behavior. How accurate the verbal account is as to the behavior is another question.
This issue becomes muddled with the question of mentalism. For rational analysis does not presuppose a mind: a computer can do
both. Thus all that is needed for the process is a brain, and we are left with the issue of whether a set of neurons has
volitions, desires, and such, or whether we are simply a complex pigeon with the ability to do mathematics, talk, and do
rational analysis. Skinner and I take the latter view.