PHIL2206 Philosophy of Mind
Week 6: Machine Functionalism

Roles and realizers

  1. There are such things as roles (or functions, or jobs).
  2. To be a company director, for example, is not to be a certain way, but to do a certain thing (play a certain role). So too for what it is to be a scrum half, a school principal, and so on.
  3. Something which plays a certain role (performs that function, does that job) is said to realize that role. Mr Brown currently realizes the role of Principal of Kooringal Primary School.
  4. Many roles are multiply realizable. This is not merely to say that many different things can play the role, but that many different kinds of thing can play it.

Functionalism

  1. According to functionalism in the philosophy of mind, mental phenomena are roles. Pain, for example, is a role. To be a pain is to play that role; to have a pain or to be in pain is to have something which is playing that role.
  2. What is the pain role? More about this soon, but for now let's say: detecting tissue damage.
  3. It turns out that in humans the pain role is played by the firing of c-fibres.

    We might say that, in humans, pain is the firing of c-fibres. This is not to say that pain itself is the firing of c-fibres, just that it is the firing of c-fibres that plays the pain role.

    Compare this with the following: The Principal of Kooringal Primary School is Mr Brown; this is not to say that to be The Principal of Kooringal Primary School is to be Mr Brown, just that it is Mr Brown who is playing that role.

  4. There seems to be nothing in our concept of pain that requires it to be realized by the firing of c-fibres, nor even by anything physical (the concept seems to allow that non-physical ghosts and angels can be in pain). Thus our concept of pain allows that it is a multiply realizable role.
  5. If it is multiply realizable, then other things could have played the pain role in humans, and perhaps in some organisms (e.g. worms) other things do play the pain role.
  6. If we were to discover that pain is not multiply realizable (e.g. that it can only be realized by physical things), then that would be an a posteriori discovery, not something we can know a priori.

    Contrast this with the following: We know that bachelors must be male, and that triangles must have exactly three angles. We did not discover these things by a posteriori investigation, but know them a priori - anyone who has the concepts BACHELOR and TRIANGLE is in a position to know these things, without further investigation.

Comparisons with Cartesian dualism, behaviourism, and the identity theory

  1. How does functionalism compare to Cartesian Dualism?
  2. How does it compare to behaviourism?
  3. How does it compare to the identity theory?

Machine functionalism

  1. According to machine functionalism, to have a mind (or to be a mind?) is to be a computer that is running a sufficiently complex program, and mental phenomena are computational roles within such computers.
  2. To make this more precise, we can take a computer to be a Turing machine. So to have a mind is to be a Turing machine with a sufficiently complex machine table, and the mental states of a mind are the internal states of the Turing machine.
  3. A Turing machine is anything, x, of which the following are true:
    1. x has a finite 'alphabet' of symbols (which we can take to be just '0' and '1').
    2. x has a 'tape', divided into 'squares', unbounded in both directions, on which symbols from the alphabet can be written.
    3. x can read and write symbols on the squares of the tape.
    4. x has a finite set of 'internal states', and it is in one of these states at any given time.
    5. x has a 'machine table', which determines what it is to do at any time, given its internal state and the symbol on the tape (we can think of this as its program).
    6. x can do the following things: (a) write symbols onto the tape, (b) move the tape one square to the left or to the right, (c) enter a new internal state, (d) halt.
  4. Here is a Turing machine (or a kind of Turing machine?) that computes the successor of a number. It has two symbols in its alphabet, '0' and '1', two internal states, A and B, and the following machine table (a runnable sketch of this machine is given just after this list):
         Reading '1'              Reading '0'
    A    Change to state B        Stay in state A
         Write a '1'              Write a '0'
         Move tape to right       Move tape to right
    B    Stay in state B          Change to state A
         Write a '1'              Write a '1'
         Move tape to right       Halt
  5. There are Turing machines that are different from this one (e.g. ones with a different machine table), and yet produce the same input-output behaviour.
  6. If we think of a Turing machine's machine table as including a specification of the alphabet and internal states of the Turing machine, then we can say that Turing machines are individuated by their machine tables. That is, Turing machines x and y are the same Turing machine (or the same kind of Turing machine), iff x and y have the same machine table.
  7. A very interesting fact: a Turing machine's machine table can be encoded into a string of '0's and '1's, which can then be printed on the tape and fed into another Turing machine. We can thus do away with all but one Turing machine, a universal Turing machine, which can mimic all others.
  8. We can see why it is tempting to think that anything with a mind is a Turing machine, with inputs, outputs, and internal states. And also why it is tempting to think that any Turing machine with a sufficiently complex machine table (and thus sufficiently complex input-output behaviour) counts as having a mind.
  9. Note that an internal state of a Turing machine that counts as having a mind can only be the total mental state of the Turing machine. To get an account of partial mental states we need a more fine-grained notion of the internal state of a Turing machine.
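
  To make this machinery concrete, here is a minimal sketch, in Python, of a Turing machine simulator running the successor machine from item 4. The representation choices (a dict for the machine table, a dict for the tape, the names run and successor_table) are illustrative assumptions, not anything these notes commit to; '0' is treated as the blank symbol, and a number n is represented as a block of n '1's.

    HALT = 'halt'

    # Machine table: (state, symbol read) -> (symbol to write, head move, next state).
    successor_table = {
        ('A', '1'): ('1', 'R', 'B'),   # found the start of the number: move into state B
        ('A', '0'): ('0', 'R', 'A'),   # still scanning blank squares: keep moving right
        ('B', '1'): ('1', 'R', 'B'),   # inside the block of '1's: keep moving right
        ('B', '0'): ('1', None, HALT), # end of the block: append a '1' and halt
    }

    def run(table, tape, state='A', head=0, max_steps=1000):
        # tape is a dict from square positions to symbols; absent squares read '0'.
        for _ in range(max_steps):
            write, move, state = table[(state, tape.get(head, '0'))]
            tape[head] = write
            if state == HALT:
                return tape
            head += 1 if move == 'R' else -1
        raise RuntimeError('machine did not halt')

    # Compute the successor of 3: three '1's in, four '1's out.
    tape = run(successor_table, {2: '1', 3: '1', 4: '1'})
    print(''.join(tape.get(i, '0') for i in range(8)))  # prints 00111100

  Note that successor_table is just data: writing that data out as a string of '0's and '1's is the encoding step that the universal Turing machine of item 7 relies on.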

A concern

  1. Functionalists complain that the identity theory does not allow for there to be pain in things that do not have a brain of a certain kind - that it is too restrictive of what kinds of thing can be in pain.
  2. But is machine functionalism also too restrictive? For something to be in pain, it must be a Turing machine with a particular internal state. But to have a particular internal state it must be a particular kind of Turing machine: it does not make sense to compare internal states across different kinds of Turing machine. So there is a particular kind of Turing machine such that something can only be in pain if it is a Turing machine of that kind. But it seems implausible that worms and humans are the same kind of Turing machine.
  3. Here is one possible response: the one thing can simultaneously be a Turing machine of many different kinds. There may be a kind of Turing machine, k1, such that worms are but humans are not Turing machines of kind k1, a kind of Turing machine, k2, such that humans are but worms are not Turing machines of kind k2, and a kind of Turing machine, k, such that worms and humans are both Turing machines of kind k. Pain could be an internal state of that common kind of Turing machine.

The Turing test

  1. When is a Turing machine's machine table complex enough for the Turing machine to count as having a mind?
  2. Turing proposed the following sufficient condition: If a Turing machine can pass the Turing test, then it is a mind.
  3. The Turing test: A tester is connected to a human and a Turing machine, each by a keyboard and monitor. The task of the tester is to determine which of the two is the human and which is the Turing machine. The Turing machine has to try to fool the tester. If it can, at least if it can more than 50% of the time, then it passes the test (a schematic scoring of this criterion is sketched just after this list).
  4. Why not take this to be a necessary condition as well? Because that would make it too hard for something to be a mind: there are plenty of things that have minds (so we think) that would probably fail the Turing test: sufficiently feeble minds, animals without language, non-humans (even if they have language), and so on.
  5. But can a machine functionalist accept this test? It seems very behaviouristic, and unconcerned about whether or not something has internal states.
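
  Here is a schematic rendering, in Python, of the scoring criterion in item 3, assuming the test is repeated over many independent rounds and that 'fooling' means the tester misidentifies which channel is the machine. The fool_probability parameter is an invented stand-in for the machine's actual conversational performance, not part of Turing's proposal.

    import random

    def run_trials(fool_probability, rounds=1000):
        # In each round the machine fools the tester with the given probability.
        fooled = sum(random.random() < fool_probability for _ in range(rounds))
        return fooled / rounds

    # The notes' criterion: pass iff the tester is fooled more than 50% of the time.
    rate = run_trials(fool_probability=0.6)
    print(rate, 'pass' if rate > 0.5 else 'fail')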

Searle's Chinese room argument

  1. Searle used the following thought experiment to argue that minds cannot be computers.
  2. There is a room. Inside is someone who understands no Chinese. He is fed questions written in Chinese. He has a set of rules which tell him, for any given sequence of symbols that he receives, what sequence of symbols he is to return. The rules do not require that he understand the symbols. The sequences that he returns are appropriate responses to the questions, in Chinese. (A toy sketch of such a rule-following setup is given at the end of these notes.)

    Searle claims that this case shows that machine functionalism is wrong: The room (or the man in the room) is playing the appropriate computational role, but the room (and the man in the room) does not understand Chinese. So there is more to understanding Chinese than playing the appropriate computational role.

  3. The problem, Searle suggests, is that computation is the mere manipulation of symbols, and mere symbol manipulation can never be enough for understanding.
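
  Finally, a toy sketch, in Python, of the rule book in Searle's room, assuming the rules can be modelled as a lookup table from input symbol sequences to output symbol sequences. The entries are invented placeholders, not genuine question-answer pairs from Searle's paper.

    # The rule book: input symbol sequences mapped to output symbol sequences.
    RULE_BOOK = {
        '你好吗': '我很好',        # placeholder 'question' -> 'answer'
        '今天星期几': '星期二',
    }

    def room(symbols):
        # Match the shape of the input and return the shape the rules dictate;
        # the meaning of the symbols is never consulted.
        return RULE_BOOK.get(symbols, '请再说一遍')  # default: 'please say that again'

    print(room('你好吗'))  # prints the rule book's canned reply

  Nothing in room consults the meaning of the symbols: it matches shapes and returns shapes. That, in miniature, is Searle's claim that symbol manipulation alone does not amount to understanding.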