Noah put me on to some gratifying articles on the Dynamical Systems approach to cognition appearing in the latest edition of Topics in Cognitive Science. They all share a skepticism of Dynamical Systems as a scientific movement. Sure, it’s a useful approach that should be pursued; it tells us things about cognition we would otherwise have difficulty modeling, but is it really necessary to make a religion out of it? Or, as Matthew Botvinick puts it in “Why I Am Not a Dynamicist:”
The message is that one must choose: One may either use differential equations to explain phenomena, or one may appeal to representation. This strikes me as a false dilemma.
Right. And it strikes me as a lot more than that besides.
One way it strikes me is as reminiscent of a lot of pointless discussions about programming languages in Computer Science-y fields. Everyone loves defending their favorite programming language. And this isn’t completely nonsensical – some languages are indeed crap, and among those that aren’t, some are more appropriate to certain tasks than others. You know you’ve matured as a programmer when you find yourself reaching for different tools for different jobs. And you know you’re an expert when you find yourself always reaching for the tool you’re familiar with and applying it differently as the job requires.
The main thing that programming language debates revolve around is how abstract the thing should be. It’s circles and circles about whether you want something that is semantically transparent, or something that is close to the machine. This often gets phrased as a question of programmer cycles vs. machine cycles, but that’s actually a bit of a red herring. A lot of “high-level” languages (especially the more functional ones like Scheme, Haskell, and ML) compile down to highly efficient machine code, and there are even cases where writing for the machine will lead you into efficiency traps. But we’ll keep talking about it this way for the foreseeable future because, in the general case, C++ strikes the right tradeoff between readability and speed when speed is mission-critical.
But the real reason people stick to their guns on these issues, truth be told, has nothing to do with a hypothetical efficiency/clarity tradeoff, though it’s true enough that one often exists and is relevant. The real issue is one of productivity vs. thoroughness. Productivity junkies like to get the thing up and running as quickly and tersely as possible. Likewise, for maintenance, they like to quickly isolate the source of the problem and then make the minimum number of changes necessary to get things back on track. All of this is much easier if the programming language itself is closer to their level than the computer’s. Worrying as little as possible about how the computer actually operates is critical here.
Thoroughness junkies are the type of people who are nervous if they don’t understand something thoroughly. It isn’t enough that the program works as advertised; they don’t like operating on it if they can’t answer very finely-detailed questions about how it works. These people are less interested in quick repairs and semantic transparency. There is less of a need for the program to implement the algorithm transparently because they’re willing to go in and read it line-by-line anyway.
Now, if you’re seeing these things as directly opposed, you’re indulging in a false dilemma. It’s possible to have it both ways or neither way. Which is why I think a lot of programming discussions go nowhere – each “side” actually agrees with the other’s principles. The discussion ends up styling itself as a discussion of principle when it’s really about priority.
To a large degree, I think something similar is going on in the recent wars over how “embodied” our models of cognition need to be. Both Symbolists and anti-Symbolists agree that semantic transparency is a good thing – we want our models to describe as directly as possible what they represent – and both Symbolists and anti-Symbolists agree that it’s important to know exactly how it is that the brain does what it does. But neither side seems to appreciate that they share these goals; in short, they mistake an argument over priorities for an argument over principle.
One of the first problems that you get in a class about connectionist networks is the classic XOR gate problem. You can go over to Wikipedia and read more about it if you like; the gist, though, is that XOR is a function that requires a so-called hidden layer to work in a neural net – its true and false cases aren’t linearly separable, so no single-layer network can compute it. XOR, briefly, is exclusive or. So, 1 and 1 is false, 0 and 0 is false, 1 and 0 is true, and 0 and 1 is true. Either one OR the other BUT NOT BOTH. Now, I’ve just given you a very symbolic representation of this function, right?
| Input One | Input Two | Output |
|-----------|-----------|--------|
| 0         | 0         | false  |
| 0         | 1         | true   |
| 1         | 0         | true   |
| 1         | 1         | false  |
So it obviously CAN be represented as a symbol system. Actually, what do we gain by representing it any other way? Why is it actually interesting to implement this in a neural net?
From where I stand, it’s interesting because we want to know how the brain, which is made of interconnected unifunctional neurons, implements this inherently symbolic function. But when I was learning about connectionism in class, it wasn’t really presented to me that way. Rather, the professor in question seemed to see it as a slam dunk refutation of the idea that we’d ever need to represent mental functions with symbols.
This strikes me as completely missing the point.
I think XOR is an excellent function to use in introducing connectionist networks precisely because it is simple enough that I can trace all the paths through the network and convince myself that it works. But tracing all the paths through the network gets progressively harder for harder functions, and it is practically impossible for extremely complicated functions – like human grammar. And, more to the point, even for a simple function like XOR, which it IS possible to trace through a network, I’d still prefer to explain it to someone using symbols, just because that’s the path of least resistance. That shows immediately what is going on, no extra effort required.
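To make the tracing concrete, here’s a minimal sketch in Python – the weights are hand-picked for illustration (the usual OR/NAND decomposition), not the output of any particular training run:

```python
# A two-layer threshold network computing XOR. The weights are hand-picked
# for readability (the classic OR/NAND decomposition); a trained network
# would land on different numbers.

def step(x):
    """Threshold unit: fires (1) iff its net input is positive."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)        # hidden unit 1: OR (on if either input is on)
    h2 = step(-a - b + 1.5)       # hidden unit 2: NAND (off only when both are on)
    return step(h1 + h2 - 1.5)    # output unit: AND of the hidden units = XOR

# The network agrees with the one-character symbolic description, a ^ b:
for a in (0, 1):
    for b in (0, 1):
        assert xor_net(a, b) == (a ^ b)
```

A few lines of arithmetic you can trace by hand – and notice that the punchline of the trace is just an equality with the symbolic description a ^ b. For anything much bigger than XOR, the trace stops being feasible, which is exactly the point.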
So what I think it really comes down to is whether you are more interested in understanding what XOR is, or how XOR might be implemented in something like the brain. If it’s the latter, you pick up the neural network tool. If it’s the former, you draw the symbol table. As with the programming languages debate, what the “right” answer is depends on what your priorities are.
Which is why I think it’s pointless for “dynamicists” to paint themselves in opposition to “symbolists.” First of all, not everything about cognition that we’re interested in knowing happens, or is best understood, with reference to time. Second, just because we know that the brain is the locus of human thought and that it exists in time, we have no assurance that explaining thought algorithms in terms of time won’t bring unnecessary baggage to the table. It’s just like with XOR. Of course it’s important to know HOW XOR is implemented in the brain – no model of cognition will ever be complete without that knowledge. At the same time, if you’re just trying to isolate XOR as a function and study its properties, it ceases to matter whether it’s implemented in a brain or a computer or on a sheet of paper as a truth table.
That, to me, is what symbol systems are for. Whether or not the thing is physically real is an annoying distraction. We KNOW that underlyingly non-symbolic strata can implement symbol systems – this is confirmed by any neural network demonstration that does exactly that. I really don’t care that we get the same results in two networks with slightly different weights on their connections. That is unimportant to getting XOR up and running in such a network. It works, or doesn’t work, on the symbolic level. Period. Whether the symbols are “really there” or not is a nonsensical thing to ask.
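To see why the particular weights don’t matter: nudge them all a little (numbers mine, purely illustrative) and you get a physically different network implementing the very same symbolic function:

```python
# Same threshold unit as before; only the weights and biases differ.
def step(x):
    return 1 if x > 0 else 0

def xor_net_b(a, b):
    h1 = step(1.1 * a + 0.9 * b - 0.6)      # still an OR unit
    h2 = step(-0.8 * a - 1.2 * b + 1.4)     # still a NAND unit
    return step(0.9 * h1 + 1.1 * h2 - 1.6)  # still AND, so still XOR

# Different weights, same function:
assert all(xor_net_b(a, b) == (a ^ b) for a in (0, 1) for b in (0, 1))
```

At the symbolic level the two networks are indistinguishable, and for most questions about XOR, that’s the level that matters.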
The reason that connectionists ask it anyway is, I suspect, because they are the C++ programmers of cognitive science. Their models are necessarily simpler than the real neurons in the real brain, but they’re close enough to human “machine code” to count. They do what they do NOT because they’re really obsessed, as they claim to be, with complete accuracy in representation. It’s more that they’re nervous about not understanding things at the machine level. Given a bunch of symbols, they immediately need reassurance that it really does compile down to neurons. THAT’s what it’s really all about.
And they have a point. Symbolic approaches to cognition are dangerous – because it’s too easy to build in your hidden assumptions before you know you have them.
But symbolic approaches to cognition are also more productive – because they worry about what’s actually being implemented more than the details of the implementation.
There is very obviously a place for both. We know that symbolic approaches will not be the final answer to any cognitive questions, because the brain itself is not a clean implementation. We absolutely will need to know details of the implementation to get at the complete description. We also know that for any sufficiently complex system, whether or not we represent it in terms of symbols, we’ll need some dynamic models for it, because the symbol tables are so large and interconnected that it’s practically impossible to imagine all the effects that each component has on the others. Dynamic approaches are in this sense what multiplication is to addition: a compact way of expressing what would otherwise be an unmanageable enumeration.
But other times dynamic models are just a distraction, and this is what their more vocal adherents need to learn. They need to understand that we symbolists are no more convinced of the physical reality of symbols than they are; we know very well that symbols are a method of abstract representation. Really, there is no argument here. There’s just a question of whether you are, as an individual and at a given moment, more interested in understanding things to fine precision, or in understanding them in a semantically transparent way. The task, coupled with what kind of person you are, determines the answer. No need to draw boundaries, and no need to take sides.
So, I wholeheartedly agree with what I understood to be the point of these papers. Dynamical Systems as an approach to understanding cognition gets a big, hearty thumbs up. Dynamical Systems as a faction fighting for prominence in the study of cognition is a stupid waste of time. So, if you’re just generally more comfortable working in dynamical approaches and/or you feel more satisfied when an explanation has a dynamic component, then you should make that your primary tool, and the world will thank you for your hard work and dedication. What you shouldn’t do is go around telling everyone else that it’s the only useful tool for any cognitive problem, because it plainly isn’t.