Block invites us to consider the following two cases. In both cases, let’s suppose we have an appropriate specification of inputs and outputs:
In each case, functionalism is committed to saying that the whole system has whatever mental states you have. But there seems to be a prima facie doubt whether it has mental states, especially whether it has qualitative mental states (e.g. whether there is anything it is like for that system to see a red apple, or to taste an apricot).
Relying on intuitions alone to make substantive philosophical points is bad methodology, but Block adds that we have some arguments supporting this intuition:
In his paper “The nature of mental states,” Putnam claims that in order for something to have mentality, it cannot have parts that themselves have mentality. One problem with this proposal is that it is ad hoc. A second problem is that it is too strong. Suppose for a moment that there are very small beings that build spaceships the size of subatomic particles, and that those spaceships behave exactly the way physics says subatomic particles behave. We can tell a story on which a person comes to be composed entirely of those little spaceships, yet we have no reason to suppose that such a person would be deprived of mentality. Block claims that one important difference between this case and the homunculi-headed robot is that in the former, being composed of tiny spaceships makes a difference only to the person’s microphysics, not to her psychology. Not so with the latter.
Most of the problems raised against commonsense functionalism point out that it seems to choose the wrong kind of theory to define mental states:
Perhaps the day will come when our brains will be periodically removed for cleaning. Imagine that this is done initially by treating neurons attaching the brain to the body with a chemical that allows them to stretch like rubber bands, so that no connections are disrupted. As technology advances, in order to avoid the inconvenience of one’s body being immobilized while one’s brain is serviced, brains are removed, the connections between brain and body being maintained by radio, while one goes about one’s business. After a few days, the customer returns and has the brain reinserted. Sometimes, however, people’s bodies are destroyed by accidents while their brains are being cleaned. If hooked up to input sense organs (but not output organs) these brains would exhibit none of the usual platitudinous connections between behavior and clusters of inputs and mental states. If, as seems plausible, these brains could have almost all the same mental states as we have, Functionalism is wrong. (p. 298)
We can suppose that there are two people, A and B, such that the objects they both call green look to A the way objects we call green look to us, but look to B the way objects we call red look to us. We can further assume that those sensations in A and B play exactly the same causal roles, in which case functionalism would claim that they are in exactly the same mental state. However, their mental states are different: A has an experience of green and B has an experience of red. In other words, whenever they see something they both call green, they have different qualia.
Psychofunctionalism claims that in order for something to have a certain mental state, it must stand in the appropriate causal relations to whatever psychological events, states, processes, and other entities actually obtain in us, in whatever way such entities are actually causally related to one another. However, this entails that a lot of things that intuitively do have mental states would in fact lack them. Block uses the following example:
Suppose we meet Martians and find that they are roughly Functionally [that is, functionally like us as determined by commonsense psychology] (but not Psychofunctionally) equivalent to us. When we get to know Martians, we find them about as different from us as humans we know. We develop extensive cultural and commercial intercourse with [the Martians]. We study each other’s science and philosophy journals, go to each other’s movies, read each other’s novels, etc. Then Martian and Earthian psychologists compare notes, only to find that in underlying psychology, Martians and Earthians are very different... Imagine that what Martian and Earthian psychologists find when they compare notes is that Martians and Earthians differ as if they were the end products of maximally different design choices (compatible with rough functional equivalence in adults). Should we reject our assumption that Martians can enjoy our films, believe their own apparent scientific results, etc.?... Surely there are many ways of filling in the Martian/Earthian difference I sketched on which it would be perfectly clear that even if Martians behave differently from us on subtle psychological experiments, they nonetheless think, desire, enjoy, etc. To suppose otherwise would be crude human chauvinism. (Remember theories are chauvinist insofar as they falsely deny that systems have mental properties and liberal insofar as they falsely attribute mental properties.) (pp. 310-311)
If Psychofunctionalism is true, then the Martians wouldn’t have the kinds of mental states that we do, and maybe they wouldn’t even have mental states! This seems to be the wrong consequence.
The commonsense functionalist specifies inputs and outputs in the same way the behaviorist does, that is, in terms of sensory inputs and things like hand movements, utterances, and the like. This is chauvinist: it has the consequence that anything without hands, or without the ability to make utterances, would not be able to have mental states.
On the other hand, psychofunctionalism defines inputs and outputs in terms of neural activity. But then the only creatures capable of having such inputs and outputs will be those that are neurologically like us, or that at least have neurons in the first place. What about creatures with no neurons, or with different neural structures? According to psychofunctionalism, they wouldn't have mental states either, so this way of specifying inputs and outputs is also chauvinist.
One way to solve this problem would be to define the inputs and outputs themselves in a functionalist fashion. That is, to give functional specifications for them just like we gave functional specifications for mental states. However, there is an obvious problem with this strategy:
Economic systems have inputs and outputs, e.g., influx and outflux of credits and debits. And economic systems also have a rich variety of internal states, e.g., having a rate of increase of GNP equal to double the Prime Rate. It does not seem impossible that a wealthy sheik could gain control of the economy of a small country, e.g., Bolivia, and manipulate its financial system to make it functionally equivalent to a person, e.g., himself. If this seems implausible, remember that the economic states, inputs, and outputs designated by the sheik to correspond to his mental states, inputs, and outputs, need not be "natural" economic magnitudes [...] The mapping from psychological magnitudes to economic magnitudes could be as bizarre as the sheik requires. (p. 315)
But it is just very implausible that anything the sheik does could make the economy of Bolivia have a mental life. So this new way of specifying inputs and outputs is too liberal.