Advancing the ideas in my previous post, I argue that a coordinated process architecture is a viable basis for cognitive models…
Digital computers provide a pseudo-positivistic means of studying human information processing. Since any Turing-equivalent computer can be implemented as a virtual machine built on a von Neumann architecture, human cognition can be modeled on such a machine, assuming cognition is Turing-equivalent in the first place.
According to Pylyshyn (1984), two programs can be thought of as strongly equivalent, or as different realizations of the same algorithm or the same cognitive process, if they can be represented by the same program in some theoretically specified virtual machine. The formal structure of the virtual machine, or what Pylyshyn calls its functional architecture, represents
“the theoretical definition of, for example, the right level of specificity (or level of aggregation) at which to view mental processes, the sort of functional resources the brain makes available — what operations are primitive, how memory is organized and accessed, what sequences are allowed, what limitations exist on the passing of arguments and on the capacities of various buffers, and so on” (92).
Mental algorithms are viewed as being executed by this functional architecture.
The idea of functional architecture as the implementation-independent interpretation and control mechanism of symbols is the methodological key to cognitive science. A model can be considered valid if a computer simulation is strongly equivalent to the human functional architecture, i.e. it uses the same primitive procedures in its problem solving as people do.
Traditional computational models such as Turing machines, register machines and the lambda calculus are concerned with reading from and writing to a storage medium (a tape or registers), or with invoking a parametric procedure, but they fall short of describing interactional behavior. Computer algorithms derived from the lambda calculus are based on a single thread of execution, or on a set of parallel but non-interacting tasks. Such algorithms are procedural, sequential, goal-oriented, hierarchical and deterministic. Arguably, cognitive models based on these algorithms inherit the same limitations.
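To make this style concrete, here is a minimal Go sketch of such an algorithm: one deterministic thread composing pure functions, with nothing to interact with and nothing able to interrupt it. The stage names `perceive` and `decide` are illustrative placeholders, not taken from any cited model.

```go
package main

import "fmt"

// perceive and decide are hypothetical processing stages, named only
// for illustration; each is a pure function of its input.
func perceive(stimulus int) int { return stimulus * 2 }
func decide(percept int) int    { return percept + 1 }

func main() {
	// One thread, one fixed order of evaluation: decide(perceive(3)).
	// Nothing can interact with the computation while it runs.
	fmt.Println(decide(perceive(3))) // always prints 7
}
```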
Milner (1999) introduces the pi-calculus for “analysing properties of concurrent communicating processes, which may grow and shrink and move about”. In pi-calculus, the focus is on systems that interact and interrupt one another. There are many deeply nested, independent but coordinated, interacting threads of execution.
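Go's channels, which descend from this process-calculus tradition (via Hoare's CSP), make the contrast easy to sketch. In the rough example below, independent processes coordinate only by message passing, and a channel is itself sent over another channel, so the communication topology can grow and move about, much like name passing in the pi-calculus. The worker process and its protocol are invented for illustration.

```go
package main

import "fmt"

// worker waits for requests; each request carries a fresh reply
// channel, which the worker uses once and then forgets.
func worker(requests chan chan string) {
	for reply := range requests {
		reply <- "done"
	}
}

func main() {
	requests := make(chan chan string)
	go worker(requests)

	for i := 0; i < 3; i++ {
		reply := make(chan string) // a new name, in pi-calculus terms
		requests <- reply          // pass the name to another process
		fmt.Println(i, <-reply)    // interact; the name then goes out of scope
	}
	close(requests)
}
```

The point is not the language but the shape of the computation: interaction, rather than function application, is the primitive.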
In conventional computer languages, types such as strings and integers represent values, which can further be aggregated into objects or records. Conventional languages focus on computation with these values and records. By contrast, types in languages derived from pi-calculus represent behavioral patterns. Primitives would include high-level behaviors such as “signing a new customer” as well as low-level tasks such as adding two integers.
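A language built on such behavioral types would enforce interaction protocols natively; in an ordinary language one can only approximate the idea. The sketch below fakes a behavioral type in Go by encoding each step of a hypothetical “sign a new customer” protocol as its own type, so that the compiler rejects interactions performed out of order. All names are invented.

```go
package main

import "fmt"

// Each type stands for a behavioral state: what interaction must
// happen next. This crudely mimics a session type.
type awaitName struct{}    // behavior: a name must be received next
type awaitConsent struct{} // behavior: consent must be received next

func (awaitName) ReceiveName(name string) awaitConsent {
	fmt.Println("received name:", name)
	return awaitConsent{}
}

func (awaitConsent) ReceiveConsent(ok bool) {
	fmt.Println("received consent:", ok)
}

func main() {
	// The types force the interaction order: name first, then consent.
	start := awaitName{}
	next := start.ReceiveName("Ada")
	next.ReceiveConsent(true)
	// next.ReceiveName("Bob") would not compile: the behavior forbids it.
}
```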
In the early days of computer science, research revolved around sequential programs running on a single machine and carrying out calculations. As computing becomes increasingly parallel and distributed, the role of an individual computer is more like that of a computing node than that of a central computing unit. The legacy of the von Neumann architecture is fading in the face of algorithms and standards that operate on a network of computers rather than on a single CPU.
In cognitive modeling, the idea of computation as communication has not yet been embraced. Pi-calculus would provide a plausible avenue towards cognitive models of strong equivalence. With the advent of networked computing, it also becomes possible, in practice, to construct virtual machines of unprecedented scale, with a functional architecture closer to human cognition than ever before.