A blog about problems in the field of psychology and attempts to fix them.

Sunday, January 15, 2012

Thinking, Behaving, and Monkeys with Joysticks

I hope that the "thought" experiment I suggested last week stands on its own. The point was, at the least, to make people wonder how strong the causal relationship is between 'thinking about moving' and 'moving', and wonder if they might be very different phenomena. Here I hope to demonstrate how the standard assumptions about the relationship between thinking and behaving can lead to some pretty awkward descriptions of phenomena, and how a more embodied approach might do better. (Full disclosure: I'm still struggling with this, and will do a good, but definitely not great, job.) Our case study comes from the work of Dr. Miguel Nicolelis, who does ridiculously cool 'neuro-engineering' work down at Duke University. Our awkward descriptions of that work come from an interview on the Diane Rehm Show, aired on National Public Radio. There, Dr. Nicolelis describes research that culminates in a monkey moving a cursor on a computer screen via implants that detect neuronal activity in its brain. Here, roughly, is how the study works:

  1.  A monkey is taught to play a joystick-controlled game. The monkey is reinforced for using the joystick to get an on-screen cursor onto a randomly placed dot. 
  2. While the monkey is playing, we record neural activity in the motor cortex through direct implants.
  3. The neural signals are sent to a computer in the next room, which is itself attached to a robotic arm in another room that is holding a joystick. 
  4. Because the lab has some savvy programmers, the computer starts to replicate the monkey's movements pretty quickly, based on a pretty small number of inputs. To double check the computer, you have it run live with the monkey, and see that the monkey-held joystick and the robot-held joystick are moving the same, in real time, on novel trials.
  5. You remove the joystick from the monkey's cage, but leave the implants, and let the monkey see the cursor movements generated by the robot arm... which the monkey does not know is in the other room.
  6. Pretty quickly the monkey starts doing the task just as well as before.
  7. Write a Science article.  
These experiments have been cranked up quite a bit (e.g., making a robot walk on another continent, etc.), but the basic outline of these fancier studies is identical.
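The "savvy programming" in step 4 amounts to learning a mapping from recorded neural activity to joystick position, then validating it live on novel trials. A minimal sketch of that train-then-check loop, where the simple linear-decoder form, the sizes, and all variable names are my illustrative assumptions, not the lab's actual algorithm:

```python
import numpy as np

# Illustrative stand-in for the lab's decoder: learn a linear map from
# binned spike counts (n_trials x n_features) to joystick position (x, y).
rng = np.random.default_rng(0)

n_trials, n_features = 500, 100          # hypothetical sizes
true_w = rng.normal(size=(n_features, 2))
spikes = rng.poisson(3.0, size=(n_trials, n_features)).astype(float)
joystick = spikes @ true_w               # toy "monkey-held joystick" data

# "Training": least-squares fit on the recorded trials.
w_hat, *_ = np.linalg.lstsq(spikes, joystick, rcond=None)

# "Double check": on novel trials, the robot-held joystick should track
# the monkey-held one, but only so long as the recorded neurons keep
# behaving as they did during training.
new_spikes = rng.poisson(3.0, size=(10, n_features)).astype(float)
robot_pos = new_spikes @ w_hat
monkey_pos = new_spikes @ true_w
assert np.allclose(robot_pos, monkey_pos, atol=1e-6)
```

Note that nothing in this sketch requires 'thinking' or 'imagining' anywhere; the decoder only cares that the measured activity stays consistent with its training data, a point that matters again below.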

The Easy Way to Sell This Work

Nicolelis explains this work well in the interview. He is engaging, communicates the medical and social importance of the work convincingly, and gets a surprising amount of the science into a general-audience-friendly form. Where he comes up short, I think, is in providing a more sophisticated way for listeners to understand the relationship between the monkey and the apparatus at the end of the study. In fairness to Nicolelis, that might not have been one of his goals in this context. I understand that things must be simplified for general audiences, and I really do not mind when that happens. However, I have seen talks in academic contexts where people use the same language, and have read a few of Nicolelis's articles just to confirm that they are not much better. Nicolelis presents his research in the context of modern brain-body dualism, with strong overtones of mind-body dualism. The succinct version of this is his statement that:
The monkey could enact its voluntary motor will just by thinking. 
Other questionable phrasings included:
she didn't need to move her body at all, she just had to imagine the movements that she had to make.
we can record the electrical brainstorms that are generated in the brain, in the cortex, the motor cortex, you know, where the plans for us to move are generated half a second before we even start moving.... [The brain] plans the future of our motion and during that window, half a second or so
 The Problem with the Easy Way

"Voluntary motor will"? "Imagine the movements"? "Plans for us to move"?

Is this really the best way to talk about what is happening here?

Of course, most people are dualists, so when they hear this type of talk, they nod their heads as if they understand the implications of this experimental work. But they don't. This can be seen plain as day when you leave the relative safety and sanity of NPR. For example, in this news report on similar work, the newscaster actually leads in by saying that, thanks to new research, telekinesis is possible! Nobody without a hell of a lot of background could have heard the NPR interview and thought:
And by 'decide' Nicolelis means something neurons do, and by 'motor will' he means something neurons do, and by 'imagine' he means something neurons do, and by 'plans' he means something neurons do.
Rather, this dualistic language is appealing exactly because it misleads the listener. It allows the listener to stay safely dualistic, while also allowing the researcher to pretend they are not promoting dualism. This last part would be evident if someone cornered the researcher within hearing distance of his academic colleagues. The researcher would quickly claim that he intended to use those terms in a totally non-dualistic way, placing the blame for any 'misunderstanding' on the part of the listener... while also claiming that their goal was to educate the listener. Alas, few will tell our researcher that if he doesn't want to encourage his listeners to think dualistically, he should stop using dualistic locutions. 

Even if we take the researcher at their word that they didn't intend anything dualistic... you are still left with the problem that there is absolutely no evidence that the monkey is 'thinking about moving', 'imagining moving', 'planning to move', or anything else along those lines. This criticism still stands if we put aside any reductionist or eliminativist arguments and grant a standard cognitive-neuroscience 'brain = mind' model: Did the monkey use areas of the brain that are involved in thinking, imagining, and planning, or did it only use areas involved in moving? Certainly, in these experiments, we only have evidence that it is using the areas involved in moving. 

A Better Way of Presenting These Results

To better explain what is happening in these studies, we need to stick closely to the language of embodiment. What we have at the end of the study is 'a monkey moving a joystick'.

"But wait!", comes the knee-jerk reaction, "you just skipped everything interesting. The whole point is that the monkey is not really moving the joystick, the computer and the robot arm are moving the joystick!"

Nope. The monkey is moving the joystick. The monkey, the computer, and the mechanical arm have become a single system that interacts with the environment, and that system moves the joystick. The monkey-brain, or at least the part of interest, is doing exactly what it did when its arm was moving, and that is the point - if that part of the brain were not doing the same thing, the whole setup would not work. Thus, the new relationship between monkey-brain and the mechanical-arm is the same type of relationship that there was originally between the monkey-brain and the monkey-arm. If we were willing to say that 'the monkey' was moving the joystick in the first case, we have been given no reason to change that. Returning to the interview quotes above:
The monkey could enact its voluntary motor will just by thinking
she didn't need to move her body at all, she just ~~had to imagine the movements that she had to make~~ had to move the joystick.
This creates a problem. The problem is that most people want to talk about the relationship between the monkey and the robot-arm in a different way than we talked about the relationship between the monkey and the monkey-arm. Given that the old cognitive language does nothing but continue to invoke dualistic notions and mislead people about what is happening, it needs to go away. This leaves us with two possible options: 1) Get over our discomfort and agree that the two relationships are the same and that all the same terms should be used to describe them. Given this option, we would still have the difficult task of creating conventions for deciding when a formerly-separate part of the world has become part of the organism. It should be noted that this is a problem biologists have been worrying about for a while. 2) Come up with a better way of talking about systems-level phenomena like these that distinguishes the two situations. We need a better way to talk about systems-level effects that involve the brain either way, and I'm not sure how that language would distinguish the robot-arm movements vs. the monkey-arm movements.

Concluding Thoughts

Just to re-emphasize the point, this is amazing research; my only complaint is with the way it is being discussed, because the way it is being discussed leads people to think inaccurately about broad aspects of psychology. I should mention that Dr. Nicolelis, in the same interview, does a great job of talking about how still-popular ideas about brain-area-specialization are increasingly suspect, both because of evidence for distributed influences throughout the brain and because of evidence for much neural plasticity. Also, he does good educational things with kids in Brazil, which he talks about briefly towards the end (with other discussion here and here).

At any rate, to connect back to my demo from last time, I want to offer two concrete predictions. The first is phenomenological: When these types of 'mind-control' devices are attached to humans who can self-report, we will find out that controlling the robotic arm is experienced as very much like 'moving', and not at all like 'thinking about moving.' There are already similar-enough devices in humans that this could be checked easily.

The second prediction is neuroscientific: If you built a robot, and you want it to move the way a person is moving (or the way they would be moving if not paralyzed), you would connect the robot to different parts of the brain than if you wanted the robot to move the way the same person was imagining a robot moving.

What do you think? Am I full of it? Will those predictions pan out?


  1. Hi Eric -

    Before listening to the DR interview, based on your quotes I feared it would be so watered down and pop science laden as to be pointless. But I'd say Nicolelis did a decent job of describing the experimental setup - although I recently read his book, so I can't judge the interview from the perspective of first time exposure to the concepts.

    As to the quotes, describing the brain as "planning" a movement seems pretty innocuous notwithstanding its slight implication of a homunculus. I guess one could go off into the world of readiness potentials et al, but that would seem an even worse approach. I agree that attributing "thinking" and "imagining" to the monkey is most unfortunate. The meaning of such words is controversial enough without throwing them around carelessly. And out of context, movement caused "just by thinking" does sound like telekinesis - although reporting the results that way and attributing them to an "implanted microchip" were pure invention.

    I don't think "the monkey-brain, or at least the part of interest, is doing exactly what it did when its arm was moving" is strictly correct. One of the interesting results of the experiment was that at some point in the training, the concurrent movement of the monkey's arm ceased and only the robot arm continued to move. So, although some relevant parts of the brain continued to function to a large degree - perhaps "exactly" - as before, others did not.

    Re extended mind/cognition, the book has some interesting material about neurological changes accompanying the use of tools that seems to support that concept.

    I'm all for dumping quasi-dualist language in formal contexts, but I don't think even a relatively well informed public is ready for the vocabulary of neurophysiology. So, maybe we need to give scientists speaking to the public some slack until there's a consensus vocabulary that avoids both the casual - but misleading - vocabulary of dualism and the formal - but lay person unfriendly - one of neurology.

  2. I don't think "the monkey-brain, or at least the part of interest, is doing exactly what it did when its arm was moving" is strictly correct.

    Well, here is part of the rub, isn't it? There is only one aspect of the monkey-brain we know about for certain, the part hooked up to the direct recording electrodes. If that part of the brain was not doing more or less exactly the same thing, then the system wouldn't be functioning properly! And yet, all the language being used seems designed to convince us that something different is happening.

    To be fair, I suppose we also know that something is different, because the monkey-arm isn't moving any more. But we do not know the nature of that difference... except that it can't be a difference in the neurons we are recording from, because those ones are doing the same thing.

    Also, I'm all for giving some slack to those trying to communicate with the public. In response to your comment, I have been wondering why I have such conflicted feelings in this case. My best guess is this: When I hear talks on shows like this from chemists, or physicists, or biologists, or astronomers, or engineers, or medical doctors, they seem to be legitimately struggling to get their complex ideas across to the public. They seem to see it as part of their job as a public representative of science to help listeners think about the work they are describing in at least a slightly more sophisticated manner. This, I think, is how they earn their leeway, by genuinely struggling to communicate complex ideas as best they can. I rarely get this feeling from the public representatives of psychology. They seem exactly not to be struggling for a better, more accurate way of explaining their findings, they seem perfectly happy playing fast and loose with misleading language. Thus, I am not sure how much slack I am willing to give them.

    Does that make sense?

  3. Let's separate two issues. First, the issue of the language used in describing the experiment and its results.

    There is admittedly some showmanship in both the book and the interview. Eg, Aurora's performance was repeated using a robot arm at MIT that was moved by signals sent from Duke via the Internet. That seems intended purely for dramatic effect since transferring those signals over long distances with tolerable delays was a straightforward communication engineering problem. Nicolelis clearly considers his results important (I'm not qualified to judge whether they are, but they certainly seem impressive), so I'm inclined to give him the benefit of the doubt and assume that behind the blatant marketing are honorable intentions. (After all, he's not one of those disreputable psychologists!)

    Since my first comment, I've had to revisit my objection to describing the robot arm's movement as driven by Aurora's "thinking". In the transcript of the DR interview there is a large gap roughly between minutes 18 and 40. My impression based on that untranscribed segment is that Nicolelis is including in "thinking" not only brain activity that might typically be described in casual conversation as "conscious" but also brain activity that in contrast would be described as "unconscious". I've recently encountered other writers who seem to support that inclusive use of "thinking" and so am inclined to accede to the intent to capture both activities with a common descriptor. But I continue to agree that in formal discourse that specific terminology should be avoided.

    BTW, that segment also describes the experiments that address tools as effectively extensions of the body.

    1. Agreed....But if the appeal of the language is due to something the researcher explicitly disavows, then there is a (potentially) unacceptable sleight of hand taking place. I'll admit from the start that I don't know about his book's target market. That aside...

      My intuition (and I could be wrong) is that 1) a typical listener to NPR does not understand "thinking" to include "any neural activity of the brain", and 2) a typical listener to NPR finds the report far more interesting than they otherwise would specifically because the word "thinking" is being used.

      If I am correct about those points, then people concerned with the public understanding of psychology should be concerned about the language used in reports like this. Maybe they should be concerned a lot, maybe only a little.

  4. Now to the issue of the relationship between the brain activity with and without the joystick. First, a caveat: the following is based on my recollection of the book's description of the experiment. Unfortunately, I had to return the book to the library and so can't check the accuracy of my recall. Anyway, FWIW I think there may be even more difference between the brain activity in the two cases than I was previously willing to argue for.

    The experiment comprises three phases. In phase 1, Aurora learns to use a joystick, which moves a visible (to Aurora) robot arm, which in turn controls a cursor on a visible display. Analogous to what Nicolelis calls the "receptive field" comprising all neurons which respond to sensory stimulus at a specific point, I'll call the set of motor cortex neurons that directly participate in moving Aurora's arm when using the joystick the "emissive field" for that action (EFarm). Clearly, the array of probes embedded in Aurora's motor cortex must be positioned so that many of the probes (preferably all) receive inputs from members of EFarm.

    After Aurora has learned to move the joystick so as to win her reward, in phase 2 the electrical activity from the probe array is processed by an adaptive algorithm to produce signals that cause a second robot arm (not visible to Aurora) to approximate the visible arm's movement as Aurora moves it using the joystick. This creates a direct relationship between Aurora's brain activity while using the joystick and watching the visible robot arm and the resulting movement of that robot arm. A key observation here is that the neurons in EFarm will in general be responsive to brain activity other than that which controls Aurora's arm; ie, they can belong to the emissive fields of actions other than moving the joystick. This suggests that movement of Aurora's arm and movement of the robot arm could in principle be decoupled.

    In phase 3, Aurora learns to move the cursor even though the joystick is disengaged from the robot arm, and ultimately she stops moving the joystick. This demonstrates decoupling of the brain activity involved in moving the joystick and that involved in moving the robot arm.

    So, it may be true that the brain activity involved in moving the robot arm is the same whether or not Aurora is moving the joystick. But the activity involved in moving the joystick obviously isn't. In phase 3, Aurora has to learn to reproduce the brain activity that caused the invisible robot arm's movement in phase 2, and in the process of doing so also learns that the brain activity that produced movement of her arm is unnecessary. Another way of saying this is that in phase 3 the probe array's being in EFarm becomes irrelevant.

    At least that's my take on what's going on.

  5. Agreed again! To (I hope) just re-phrase what you said:

    The part of Aurora's brain that was previously involved in moving both Aurora's arm and the robot arm (via the computer interface) is doing the same thing in all parts of the study. Other parts of Aurora's brain are surely doing quite different things in different parts of the study.

    No contest there.

    My bigger problem is in how we present the material. My intuition is that the average listener who hears these announcements is led to believe that Aurora is 'thinking about' moving the robot arm in the same sense that I, sitting here on my chair, might 'think about' dancing the cha-cha. That is simply misleading (at least with regards to what the researcher actually measured), and I don't know how we can be anything but misleading when we keep using the language of the old paradigms.

  6. Issue 1:

    Presenting complex information on any subject to a lay audience presents a real dilemma. I think we agree that a good start would be for pros to be especially careful about the vocabulary used within the relevant community. In psychology/phil of mind, as a non-professional I may be wrong, but it seems to me that dropping "mentalese" would be a good step. In which case problematic words like "thinking" would just disappear from formal discourse, and hopefully from informal presentations to lay audiences as well.

    Issue 2:

    CW: the brain activity involved in moving the robot arm is the same whether or not Aurora is moving the joystick

    EC: The part of Aurora's brain that was previously involved in moving both Aurora's arm and the robot arm (via the computer interface) is doing the same thing in all parts of the study.

    Here I'm at - probably way beyond - the limits of my understanding. But right or wrong, my view is that for any motor action there will be a region of the motor cortex that plays a role in that action (what I call the emissive field for the action - EF). The EFs for distinct actions may intersect, in which case the firing pattern of neurons in the intersection will be a composite of the individual firing patterns. Aurora's moving her natural arm and moving her "extended" arm are distinct actions. The fact that firings due to both are sensed by the probe array suggests that the respective EFs do overlap - which in turn suggests that when Aurora's natural arm stops moving, the firing of the neurons in the overlap no longer includes a contribution due to that action.

    Successful or not, the intent of my wording was to capture the idea that the contribution to firings of neurons in the overlap that is due to controlling the robot arm doesn't change, although the composite firings do. So, I don't think I agree with your rewording. But I would of course have to abandon that position if convinced that my understanding of the process is wrong - which is quite likely.

  7. Issue 1: Agreed! I'm not sure that the terms function any better in a professional context, but I'm happy with just the conclusion that they do not work well in interaction with non-professionals.

    Issue 2: My intuition is that your description does correctly describe what is happening in some part of the brain. But I'm still not convinced it could be what is happening in the exact part of the brain where the sensors are.

    The experiment begins by measuring 1) neural activity from specific neurons in the brain, while the monkey moved a joystick, and 2) how Aurora moved the joystick. That data was fed into a computer attached to a robot-arm. Over a few weeks, some cool "learning" algorithms churned away until --- given a set of monkey-neural-inputs --- the robot-arm moved its joystick in the same way as Aurora moved her joystick. Right?

    If that is the case, then the set-up of the experiment ensures that: For the robot-arm to continue functioning properly, the exact neurons being measured must continue to do (pretty-much) whatever they did during training. If those neurons started doing something (too) different, then the algorithm transforming neural signals into robot arm movements would no longer be properly calibrated. That is, if the neurons-being-measured started doing something very different than what happened during the training trials, then the robot-arm would necessarily also be doing something very different from during the training trials. There is no reason to think the algorithm would be robust if the function of those neurons differed dramatically between training and test.


    P.S. Charles: Thank you for staying with me on this! These are conceptually difficult ideas, and it is great to be able to go back and forth.

  8. Re "staying with you", ditto - only more so. I have no one with whom to have exchanges on such topics other than generous souls like you that I "meet" on the web, so I'm all the more appreciative.

    We're now at the point beyond which I can't go w/o getting the book back (which I intend to do, but there's a waiting list). I have a vague recollection that the algorithm for driving the robot arm had to be reconstituted for different phases of the experiment, but I'm definitely not sure about that. If not, then your argument seems correct.

  9. I have the book again, and find that I had many details of the experiment wrong. Here is a complete rewrite of my Jan 18, 1:18 PM description and conclusions.

    The experiment comprises three phases. In phase 1, Aurora learns to play a game by using a joystick to control a cursor on a display. She wins by moving the cursor to a randomly positioned circle on the display. In phase 2, electrical activity from the probe array during multiple plays is processed to produce optimal regression coefficients which in subsequent games can be used by an adaptive algorithm to generate - from the probe array outputs - a signal that drives a robot arm so as to closely approximate the movement of Aurora's arm.

    In phase 3, control of the cursor is reassigned to the robot arm so that Aurora must learn to move the cursor without using the joystick. She does, and ultimately even stops moving her arm, thereby demonstrating decoupling of brain activity involved in moving her arm and that involved in moving the robot arm.

    The regression coefficients produced in phase 2 are also used in phase 3. But Aurora must nevertheless again learn how to move the cursor. Although it's possible that the brain activity required for her to win a specific game in phase 3 is the same as the brain activity required to win the same game in phase 2, that seems unlikely since the required activity apparently has to be learned anew.

  10. Interesting, thanks!

    The problem of "relearning" is particularly intriguing. The challenge, surely, is to get a particular part of your brain to do what it was doing when you were moving your arm, but without moving your arm. That is, the monkey has to "relearn," but the computer's regression is the same, and so within a small range of error, the monkey must give the regression the same inputs given during training.

    Of course, you then have to say all that without a "you", because there is no inner monkey telling the brain what to do. How to banish the dualism?

    Perhaps the needed neutrality could be achieved with claims about the decoupling of previously coupled brain activities? In any case, we still need a way of describing the new monkey-computer-robot system - it is still the case that the monkey is "moving the cursor on the screen" equally whether doing so with its own hand or with the robotic hand. The embodiment people (including me) can only complain about the inappropriateness of cog-neuro terminology so much, we need to offer an alternative language that makes more sense.

  11. It might help to describe in more detail how the "test subject+probe array+computers+robot arm" system worked as best I can tell from the book's somewhat sketchy description.

    During phase 2, for each play the electrical activity detected by the 100 element probe array was recorded, as was Aurora's arm position. For each identifiable recorded arm movement (see Note 1 below), the researchers extracted probe array data for the one second time interval that preceded the start of the movement. This data was further divided into ten 100 msec segments, and for each segment-probe combination (10 X 100 = 1000 total), the number of action potential spikes was counted. Weighted sums of these 1000 values were treated as predictions of Aurora's recorded arm movement. A regression analysis based on the correlation between the predicted and the recorded arm positions yielded a set of optimal regression coefficients for each play. Over the course of this "training" phase, the regression coefficients derived for multiple plays converged to "stable" values. The resulting sequence of estimated positions was then transformed into continuous control signals to the robot arm, which was able to closely track Aurora's actual arm trajectories. (See Note 2).
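    For concreteness, the spike-counting half of that procedure can be sketched in code. Only the sizes (a 100-element probe array, ten 100 msec bins over the one second before movement onset, 1000 predictors) come from the description above; the function name, the toy spike trains, and the stand-in coefficients are my illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_bins = 100, 10               # sizes from the book's description

def binned_counts(spike_times_ms):
    """Spike counts per (probe, 100 msec bin) for the one-second window
    before movement onset, flattened to 100 x 10 = 1000 predictors."""
    counts = np.empty((n_probes, n_bins))
    for p, times in enumerate(spike_times_ms):
        # times are in msec relative to movement onset, in [-1000, 0)
        counts[p], _ = np.histogram(times, bins=n_bins, range=(-1000, 0))
    return counts.ravel()

# Toy data: one movement's worth of spike trains on 100 probes.
spikes = [np.sort(rng.uniform(-1000, 0, rng.integers(5, 40)))
          for _ in range(n_probes)]
x = binned_counts(spikes)

# Weighted sums of the 1000 counts serve as the position prediction;
# in the experiment the weights would be the regression coefficients
# fit during the training plays (random stand-ins here).
coef = rng.normal(size=(1000, 2))
predicted_xy = x @ coef                  # predicted (x, y) arm position
```

    The regression step then amounts to choosing `coef` so that `predicted_xy` best matches the recorded arm positions across the training plays, which is how the "stable" coefficients carried into phase 3 would arise.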

    In phase 3, these stable regression coefficients were used, but since control of the cursor had been transferred from Aurora's arm to the robot arm, Aurora had to learn how to make the cursor move without moving her arm.

    The game requires adaptive behavior. The initial and target cursor location pair changes from game to game, so Aurora has to find a new winning sequence of moves for each new game. And since I assume that any physical action is context-dependent, I see no obvious reason that the winning sequence of movements for the same cursor location pair in two different games would necessarily be the same. Eg, in general there will be subtle differences in the subject's internal physical state even if the external context is carefully controlled. For the same reason, I doubt that even if a cursor location pair from phase 2 is repeated in phase 3 Aurora would find the same winning sequence of moves.

    Re terminology, although I agree that most "mentalese" should be avoided, there are some words - like "learn" - that seem OK if clearly defined. One might distinguish between "learn-that" and "learn-how", the latter signifying skill development. (I tend not to make that distinction since I think of everything as "learning how", but just by chance I did use "learn how" above.) Also, in drafting my last comment, I too initially described Aurora as "relearning", but dropped the "re" since although the observable behavior is the same, IMO the relevant brain activity most likely isn't - which "relearn" may suggest.

    Note 1: Precisely what constitutes an "arm movement" event was not clearly specified in the book, but for our purposes I don't think it matters.

    Note 2: Despite having background in relevant topics (statistics, stochastic processes, digital filtering), I found the description of this "optimization" process very confusing. So, my account may not be accurate in the details. Again, I don't think it matters as long as it's not totally off base.