
Blog: The Science of Your Mind, Part 3


Let’s Get Physical (Symbol System)

Hello again, world, and welcome back to this series of cognitive science articles covering Michael Dawson’s book Mind, Body, World. As always, I’ll be selecting some important and interesting concepts from the book, but I encourage you to read the book if you want the full nuance and history. In case you missed them, here are links to the other articles in this series:

This time around, we’re going to use the four-tiered analysis framework we outlined in the previous article to dive deeper into classical cognitive science. Version 1.0. The CogSci OG.

But this is more than just a history lesson. Like all OGs, the influence of classical cognitive science persists to this day. Many classical theories still guide current research. Luckily, science doesn’t follow the iPhone release schedule, so we won’t need to throw this article away next September when the new model comes out with twice the axioms.

Here we go!

To infinity and beyond

Recall from Part 1 that classical cognitive science relies heavily on the metaphor of the digital computer. “Why is that?” you might ask. The short answer is that the digital computer — a real-life universal Turing machine, not just a thought experiment, capable of computing anything that can be computed — both seemed capable of modeling the mind and was the only such tool on hand. To understand how that conclusion was reached, you’ll need a bit of history.

Early philosophers of mind (think Descartes) were dualists, which means they believed that the mind and body were separate and fundamentally different. For example, you can divide the body (e.g. you can lose an arm), but can you divide the mind? And while the body is finite, the mind seems capable of infinite complexity. It’s not entirely unreasonable to then conclude that while the body is physical, the mind is not. It’s made of different “stuff.” Oh, and it controls the body through the pineal gland, because obviously.

Materialists, on the other hand, believe that mind is the product of physical processes. Dualists and materialists don’t see eye to (third) eye. The debate was long, fierce, and (I like to imagine) fraught with excellently choreographed street battles of musical prowess a la West Side Story.

Trust me, this is what science was like back then!

Also, you definitely can divide the mind.

Now, if you’re an aspiring cognitive scientist in the time of dualism, you have a problem: you have some really interesting theories, but how do you test them? The mind might not be physical, but all your equipment is. Given a theory, it’s not like you can peer into a mind to see if what’s going on in there lines up with your predictions. Even today, tools like EEG and MRI measure the physical consequences of thought, but not thought itself.

So the next best thing you can do is to build a model. If you can make something that acts exactly like the mind, then it would be reasonable to conclude that the model and the subject (mind) are performing the same function. According to the concept of multiple realization from Part 2, that should be true even if they’re made of different material.

You see a glimmer of hope when Alan Turing strolls along and shows that a machine of finite matter can, in theory at least, exhibit an infinite variety of behavior. Such a machine is known as a universal Turing machine, and it should be able to compute any function that can be computed.

You’re not satisfied, though. Doing things “in theory” has been the whole damn problem. You want something IRL! In particular, you want a physical symbol system, which is essentially a universal Turing machine that “is also realizable within our physical universe.” (Newell, 1980, p. 136). But the dualists say that there can’t be a physical machine that acts like the mind, because infinity and all that, and you just can’t even anymore.

The Digital Age

So now you’ve traded all your beans for science tools made of boring old matter, your nights are filled with feverish dreams about symbols, and your roommate Jeff is mad at you for turning the spare room into a lab and he was never really sure about this whole “science” thing anyway. If only someone could just build one of these Turing machines.

And then the first digital computer is born. And you lose. Your. Shit. We built one. We actually built one! A real-life physical symbol system, with all the infinities and all that jazz.

But is that enough? We knew from studying language and grammar (see the book) that the mind is at least as powerful as a physical symbol system. But is a physical symbol system powerful enough to produce cognition?

At this point, it’s only one more step to classical cognitive science. Logicism is the assumption that “cognition is a rule-governed symbol manipulation of the sort that a physical symbol system is designed to carry out.” If you accept that assumption, you have the physical symbol system hypothesis: “the necessary and sufficient condition for a physical system to exhibit general intelligent action is that it be a physical symbol system” (Newell, 1980, p. 170).

This is where the magic happens. You see, there’s this cool thing about physical symbol systems. Since they can each perform any computable function, that means that any one physical symbol system can simulate any other. So, if the mind is a physical symbol system, and so is a digital computer, that means a digital computer can simulate a mind.

On Symbols

To recap, the digital computer was proof positive of the existence of a universal Turing machine. Add in the physical symbol system hypothesis, and classical cognitive science is born. In particular, it’s founded on the idea that “cognition is computation, the brain implements a universal machine, and the products of human cognition belong to the class of computable functions.” And so cognitive scientists set off developing models of cognitive processes based on symbol manipulation.

If you’ve been reading this series since Part 1, I imagine there’s been one as-yet-unanswered question burning a hole through your consciousness: just what in the darkest depths of undying night is symbol manipulation?

First off, watch your tongue! But also, good question! Imagine you’re holding a Rubik’s cube. The face facing you is completely blue, the back is completely green, and the yellow, red, white, and orange faces are also almost complete. I say almost, because there’s a row of red in the yellow face on top, a row of white in the red, and so on… Whoever solved the cube didn’t bother to make the final turn, that layabout!

The urge to complete the puzzle is overwhelming. You try to ignore it, but you know the itch is going to keep you up nights. So you take that front face and turn it clockwise 90 degrees.

Ahh, much better.

So that Rubik’s cube you conjured in your mind was a symbol. It’s a mental construct that you use to reason about the real, physical cube out there on the coffee table. You can interact with the symbol, turn it around, manipulate it, all without touching the real cube.

A symbol can have any properties that are useful for solving the problem at hand. For example, you might construct a geographical symbol to navigate a familiar neighborhood (in two minutes I’ll be at the intersection, take a left, then I’m at Chipotle) or more complex symbols to play a game like chess (I swear I’m going to beat you this time Jeff, you cheater!).

So a physical symbol system is a machine that uses this type of symbol-based logic.
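
To make that a bit more concrete, here’s a minimal sketch of what that Rubik’s cube symbol might look like as a data structure. This is my own toy example, not anything from the book, and the only property I bother to represent is the one that matters for this particular problem: how far the front face is from solved.

```python
from dataclasses import dataclass

# A toy "symbol" for the imagined Rubik's cube (hypothetical example).
# We track only the one attribute relevant to the problem at hand:
# how many clockwise quarter turns of the front face remain until solved.

@dataclass
class CubeSymbol:
    clockwise_turns_needed: int = 1  # the layabout left it one turn short

    def turn_front_clockwise(self) -> None:
        # Manipulating the symbol changes its attributes;
        # the real cube on the coffee table is untouched.
        self.clockwise_turns_needed = (self.clockwise_turns_needed - 1) % 4

    def is_solved(self) -> bool:
        return self.clockwise_turns_needed == 0

cube = CubeSymbol()
print(cube.is_solved())      # False -- the itch that keeps you up at night
cube.turn_front_clockwise()  # mentally rotate the front face 90 degrees
print(cube.is_solved())      # True -- ahh, much better
```

The point is that the symbol only needs whatever properties are useful for the problem at hand, just like the geographical and chess symbols above.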

Ramping up Production

One example of a physical symbol system from classical cognitive science is the production system. A production system has a collection of rules or productions that are used to manipulate its symbols. A production has pre-conditions or requirements that a symbol must meet in order for the rule to apply. Applying the rule changes the symbol from one state to another by modifying its attributes, as described by the production. Given a symbol in a certain state, the system searches for all productions whose requirements are met, and then applies those productions to produce a symbol in a new state.

Production systems illustrate some key principles of classical cognitive science. For example, they have clearly separated structure, process, and control. The structure (the symbols) is stored in working memory. The symbols are separate (both in nature and in storage location) from the process (the productions), which are stored in long-term memory and recalled only as needed. And both the structure and the process are separate from the control, which is the code that decides which productions are chosen and applied.

Production systems also illustrate that classical sandwich or sense-think-act cycle we discussed in Part 1. First, the system senses the environment to construct the symbol. Then it thinks about the symbol and chooses the productions to apply. Then it acts out its chosen productions on the symbol (and potentially the real object). Rinse and repeat.
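
Here’s a rough sketch of how those pieces might fit together in code. The names and the toy “puzzle” are invented for illustration; real production-system architectures (like EPIC, which comes up below) are far more sophisticated.

```python
# A rough sketch of a production system; names and the toy "puzzle" are
# invented for illustration, and real architectures are far richer.

# Structure: the symbol sits in working memory as a simple attribute dict.
# (A full system would construct it by sensing the environment.)
working_memory = {"front_turns_needed": 1, "solved": False}

# Process: productions live in long-term memory as (name, requirements, action).
def needs_turn(sym):
    return sym["front_turns_needed"] > 0

def do_turn(sym):
    sym["front_turns_needed"] -= 1
    return sym

def is_done(sym):
    return sym["front_turns_needed"] == 0 and not sym["solved"]

def mark_solved(sym):
    sym["solved"] = True
    return sym

long_term_memory = [
    ("turn the front face", needs_turn, do_turn),
    ("declare victory", is_done, mark_solved),
]

# Control: decide which of the matching productions actually fires this cycle.
def control(matches):
    return matches[0]  # trivially: just take the first match

# The sense-think-act loop.
while not working_memory["solved"]:
    # Think: find every production whose requirements the current symbol meets.
    matches = [(name, act) for name, cond, act in long_term_memory
               if cond(working_memory)]
    name, act = control(matches)
    # Act: apply the chosen production, moving the symbol to a new state.
    working_memory = act(working_memory)
    print(f"fired: {name} -> {working_memory}")
```

Even in this toy version you can see the separation described above: the working-memory dict is the structure, the productions are the process, and the little control function decides among competing matches.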

If you’d like to learn more about production systems and other models from classical cognitive science, check out this free course. Of course, there’s also a lot more detail in the book that I haven’t covered here, so go give it a read.

Prove it!

All this talk about symbols is well and good, but can we be sure the simulations are actually modeling human cognition? You can build a model of anything. How do we know the production system isn’t just modeling the assumptions of its inventor, Dr. Production (I assume all things in science are named after their inventor)?

As we discussed above, proving the validity of a model involves showing that the model and the subject are equivalent, or that they have the same behavior under the same conditions. This is where our multi-tiered analysis framework from Part 2 comes in handy, because there are multiple levels of detail at which we are interested in describing “behavior.”

A model and subject are weakly equivalent if they are solving the same problem. This is the highest level of analysis, the computational level. This is also called Turing equivalence, in reference to the age-old machine intelligence test described by Turing and made badass as fuck in the film Ex Machina.

Weak equivalence is not really all that exciting. After all, we’re not just interested in understanding what problems we solve, but how we solve them. That requires descending down to the algorithmic and architectural levels of analysis and showing strong equivalence. A strongly equivalent model is solving the same problem as its subject using the same procedures that are composed of the same primitives.

When you get down to strong equivalence, it can get a bit eerie. For example, EPIC is the name of an implementation of a production system. Studies have shown that this computer model actually reproduces certain psychological phenomena, like the psychological refractory period! This is an example of error evidence, and it’s motivation to consider EPIC strongly equivalent with the human cognitive processes it is modeling.

Functional Analysis

It’s worth reflecting on this top-down approach of cognitive science. Why start from behavior and proceed downwards? Why don’t we instead collect a bunch of data from neurons and try to build our models up from there? Well, we do: it’s called computational neuroscience, and if you like that approach you’re still a good person (in fact, you’re probably awesome and I want to be your friend).

Cognitive science, on the other hand, tends to rely on functional analysis to reverse engineer models from subject behavior. We start with the problem, collect artifactual evidence (remember complexity, error, and intermediate-state evidence from Part 2?) to deduce procedures, and then do our best impression of a five-year-old relentlessly asking “why?” until we’ve broken the procedure down into primitives. Remember that primitives are cognitively impenetrable (like Magneto) and can be explained by physical mechanisms, which belong to the implementational level of analysis.

Example: Think-Out-Loud Protocol

One example of a functional analysis technique is the think-out-loud protocol. Experiments that use this technique will have subjects narrate what they’re thinking while solving a problem. We can then use the subject’s transcript to reconstruct the steps they took, giving us an idea of the algorithm that the subject might have been using. This is an example of gathering intermediate-state evidence.

Consider converting each step of each subject’s algorithm into a belief state, which represents the subject’s mental contents at that step. If we were talking about Rubik’s cube, each state might be a configuration of the cube. You can then imagine the transcript of each subject’s solution describing a path through this graph of belief states, from start to goal state.

Now we can test our theories. Given an algorithm that we think subjects might be using to solve the problem, we can build a model that uses that algorithm. If the model takes a completely different path through the graph than any of the subjects, it becomes less likely that the subjects are using that algorithm.
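
As a purely hypothetical illustration (toy data, and a crude made-up overlap score rather than anything from an actual study), here’s one way you might compare a model’s path through the belief-state graph against the paths recovered from subjects’ transcripts:

```python
# Hypothetical sketch: compare a model's path through the belief-state graph
# against subjects' think-out-loud transcripts (toy data, invented here).

def path_overlap(model_path, subject_path):
    """Fraction of the subject's belief states that the model also visits --
    a crude stand-in for a real intermediate-state comparison."""
    shared = set(model_path) & set(subject_path)
    return len(shared) / len(set(subject_path))

# Belief states coded from transcripts (e.g. cube configurations, chess positions).
subject_paths = {
    "subject_1": ["start", "A", "B", "goal"],
    "subject_2": ["start", "A", "C", "goal"],
}

model_path = ["start", "A", "B", "goal"]  # the algorithm we think people use

for name, path in subject_paths.items():
    print(name, path_overlap(model_path, path))
# High overlap for most subjects supports the candidate algorithm;
# a model that takes a completely different path counts against it.
```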

Example: Mental Imagery

There’s a bit of contention in the field when it comes to mental imagery. Depictive theory proposes that while visual information is stored in long-term memory as propositional statements, it is turned into depictive representations in working memory and acted on using spatial primitives. Propositional theory proposes that the information continues to be stored as propositional statements even in working memory. Here’s a hilarious slide I found to illustrate the difference:

The question here is one of architecture. Does our cognitive architecture provide spatial primitives for processing visual information? When we’re engaged in mental imagery, are we actually working with a mental “image”?

There’s some evidence for depictive theory. For example, an experiment trained subjects to visualize a map with several landmarks on it. Each subject was then asked to mentally scan across the map from a starting landmark to a goal landmark. The experiment found that the time it took to perform this mental scan increased linearly with the distance between the starting and ending locations, as if the subject were scanning their finger across a physical map. This is an example of complexity evidence.

Think about that for a second. There is no physical map, it’s only in your head. It doesn’t have physical dimensions of width, length, or distance. Yet it seems like we can interact with this imagined map as if it has spatial properties!
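
If you wanted to check that kind of complexity-evidence claim yourself, the analysis is essentially a straight-line fit of scan time against distance. Here’s a toy version with invented numbers (the real experiment is described in the book):

```python
import numpy as np

# Toy illustration of complexity evidence (all numbers invented): does the
# time to mentally scan the map grow linearly with the distance scanned?

distance_cm = np.array([2.0, 4.0, 6.0, 8.0, 10.0])      # distance between landmarks
scan_time_s = np.array([0.55, 0.72, 0.90, 1.11, 1.27])  # measured response times

slope, intercept = np.polyfit(distance_cm, scan_time_s, deg=1)
predicted = slope * distance_cm + intercept
r = np.corrcoef(scan_time_s, predicted)[0, 1]

print(f"time = {slope:.3f} * distance + {intercept:.3f}  (r = {r:.3f})")
# A tight linear fit is what the depictive account predicts: scanning a
# bigger stretch of the mental map takes proportionally longer.
```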

However, a subsequent experiment cast some doubt on this result. In this version of the experiment, the subjects were asked to shift their attention from a starting landmark to another, then provide the compass direction back to the original landmark. With this prompt, the response time no longer increased linearly with distance between the landmarks. This shows that “scanning” is cognitively penetrable, because it can be affected by whether or not the subject thinks about it as scanning. If it’s cognitively penetrable, then it can’t be a primitive of the cognitive architecture.

Implementations and Modularity

The above examples show how you can go from problem to process to primitives. To get down to the physics of things, the implementation, you need neuroscience. If you have an idea of what a primitive of the cognitive architecture might be, you need to be able to back it up with some brain data. What regions or networks in the brain are responsible for performing that primitive?

Modularity is the idea that a general information processor is a collection of specialized processors. A cognitive process is broken down into sub-processes, each one is handled separately by a different module, and then the results of the modules are recombined. For example, studies of language processing in stroke patients have shown that it’s possible to be speaking complete gibberish while under the impression that you’re as eloquent as the Bard himself. You can understand what’s being said to you, you know the words you want to say, but something goes wrong when you try to speak. And you’ll get really frustrated, too: I said “pass the peas,” why are you staring at me like a madman? There are analogous results in vision, where a subject can see shapes but not motion.

Modules would be cognitively impenetrable because they have a fixed neural architecture. Remember that cognitive impenetrability was also a requirement for our cognitive primitives. So if you can find a module and show that it’s performing some function that maps well to your candidate primitive, that’s good evidence for it actually being a primitive. However, it’s important to note that brain imaging studies generally show that many areas of the brain are involved with practically every task. This means that a module doesn’t have to be localized in one region of the brain, but can be a distributed network across the brain.

One last cool thing…

If you haven’t heard of it before, evolutionary psychology is the study of how human psychological processes would have evolved and where our cognitive architectures might have branched off from our evolutionary cousins.

Theory of mind is the ability to simulate the contents of someone else’s mind, even if it disagrees with your own. For example, when you see me playing the air drums as I walk down the street, you understand that I probably have Rush playing on full blast in my head. You probably do not assume that my arms are being possessed by an epileptic ghost.

Some research has suggested that dogs seem to be capable of theory of mind whereas monkeys, a closer genetic relative to humans, are not! One reason for this might be that humans and dogs have spent the past 20,000+ years in co-evolution, ever since we began our sadistic project of turning the noble wolf into whatever the hell this thing is:

You did this
