
Wayne Christensen, John Sutton, and Doris McIlwain

Skilled action is a fundamental explanatory target for the cognitive sciences, but despite an abundance of data there is relatively little high-level, integrative skill theory. Here we present a synthetic theory, Mesh, which incorporates diverse skill phenomena in an integrated account that emphasizes action complexity and task difficulty. In contrast with dominant non-cognitive views of skill, Mesh proposes that cognitive processes make an important contribution to almost all skilled action, with cognitive control being focused on strategic aspects of performance and playing a greater role as difficulty increases. In support of this account we provide analyses of skill experience, experimental research on skill automaticity, and a body of prior theory. Mesh is shown to accommodate forms of skill experience suggestive of both automaticity and cognitive control, as well as experimental evidence for skill automaticity, including evidence that experts have reduced memory for performance of sensorimotor skills, and that performance is impaired when it is attended. We then develop a fundamental explanation for why skilled action control should have the attributes proposed by Mesh, based on analysis of a collection of influential skill theories. In essence, automatic control has limited ability to cope with variability, while cognitive processes provide powerful forms of flexibility and have fewer limitations than often thought. Finally, we suggest that Mesh provides grounds for substantially revising dual-process views of cognitive architecture.

Download the pdf here.


Wayne Christensen and John Sutton

One of the most basic questions about skill is whether (and if so how and in what circumstances) cognitive processes make a contribution. Non-cognitive or automatic views are intuitively appealing and have been elaborated in several influential theories of skill learning and performance, including those of Anderson (1982) and Dreyfus et al. (1986). Some strands of recent empirical research have provided support for the non-cognitive position, indicating that experts have reduced memory for performance of sensorimotor skills (Beilock and Carr 2001), and that performance is impaired when it is attended (e.g. Wulf 2007). However, there are also conflicting findings for skills thought of as ‘sensorimotor’ (e.g. Rosenbaum et al. 2007) and ‘cognitive’ (e.g. Holding and Reynolds 1982). Here we argue that cognitive processes make an important contribution to almost all skilled action, including sensorimotor skills. They tend to be focused on strategic aspects of task performance, and are most prominent in challenging conditions.

References
Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89(4), 369–406.
Beilock, S., & Carr, T. (2001). On the fragility of skilled performance: What governs choking under pressure? Journal of Experimental Psychology: General, 130(4), 701–725.
Dreyfus, H. L., Dreyfus, S. E., & Athanasiou, T. (1986). Mind over machine: The power of human intuition and expertise in the era of the computer. New York: Free Press.
Holding, D. H., & Reynolds, R. I. (1982). Recall or evaluation of chess positions as determinants of chess skill. Memory & Cognition, 10(3), 237–242.
Rosenbaum, D. A., Cohen, R. G., Jax, S. A., Weiss, D. J., & van der Wel, R. (2007). The problem of serial order in behavior: Lashley’s legacy. Human Movement Science, 26, 525–554.
Wulf, G. (2007). Attention and motor skill learning. Human Kinetics.

Presentation to the Philosophy Department, University of Tasmania, August 2012.

Wayne Christensen and John Michael

Embodied cognition approaches tend to marginalize the role of representation in cognitive abilities or argue that it can be dispensed with entirely. In contrast, we adopt an explanatory approach to embodied cognition that uses embodied interaction as a basis for understanding representational capacities. Key aspects of this approach include analysis of the interaction conditions that require representation, and analysis of the role of representations in the control of interaction. Here we apply this approach to social cognition. Recently, Apperly and Butterfill have argued for a two-systems view of social cognition which postulates a flexible, late-developing system that supports sophisticated adult-like theory of mind abilities, and an early-developing, inflexible system that uses limited, non-propositional forms of representation and supports rapid social cognitive abilities in both children and adults. Hutto has criticized Apperly and Butterfill’s account, arguing that social cognitive abilities in young children should not be viewed as representational at all (the children are ‘mind minding’ rather than ‘mind reading’). Drawing on an analysis of the role of mental models in the cognitive control of skilled action, we argue that both positions are flawed. Contra Hutto, young children are very likely to be representing (rather than merely discriminating) agency-relevant attributes of others, even if they aren’t directly representing mental states. Contra Apperly and Butterfill, fast, efficient, online social cognition may be representationally sophisticated. Experts in fields such as aviation and military command show the ability to make rapid judgments that are informed by very complex representations of the current situation, and there are reasons to think that this capacity manifests in social cognition. We discuss some of the properties of mental models that may underlie this type of phenomenon.

Presentation for Varieties of Representation: Kazimierz Naturalist Workshop 2011.

Many approaches to the evolution of cognition take it that the explanatory task is to identify key representational abilities that support behavioral flexibility of some kind, and the selection pressures that will favor these representational abilities. Using Sterelny’s (2003) decoupled representation account as an exemplar of this kind of theory, I argue for a more encompassing strategy that explicitly treats representational issues as embedded within larger issues concerning sensorimotor organization and agency. I present a theory of the evolution of advanced cognition that addresses each of these kinds of factors in an integrated fashion, including (i) a model of ‘self-directed’ agency that captures key aspects of instrumental goal-directedness in relation to the agent as a whole and across time, (ii) a ‘distributed hierarchically organized control’ (DHOC) model of sensorimotor architecture that describes the relations between multiple neural systems that contribute to behavior control, and the role of executive control within this architecture, and (iii) an account of generalized relational representation capacities that support flexible goal-directedness and problem solving. These proposals are supported with a range of animal and human evidence, including ‘incentive revaluation’ experiments which assay goal-directedness, cognitive neuroscience research on neural architecture and executive function, and research on the relational representational capacities of the hippocampus that are thought to support declarative memory. When we take a ‘systems approach’ to the evolution of cognition that addresses these kinds of agent-level and cognitive architecture issues, evolutionary questions become questions about the origination and transformation of architectures, and I conclude by discussing some of the kinds of selection pressures that may have been involved in the elaboration of the DHOC architecture.

Talk for ISHPSSB 2011.

Early in December I attended a workshop in Budapest for a project called the Comparative Mind Database, a component of the CompCog European Research Networking Programme. The objective of CompCog is to promote the development of “real” comparative cognition, seen to involve a coherent theoretical background, unified terminology, and standard methods. The role of the Comparative Mind Database is to develop applications of advanced information technologies and methods to support comparative cognition research. The CMD project is at an early stage of development, and the purpose of the workshop was to present the initial ideas and some pilot studies, and get feedback from relevant researchers with a view to shaping future directions.

From the perspective of a philosopher with an interest in the evolution of cognition and multidisciplinary integration, the CMD project is a fascinating venture, not just because it may become a valuable resource, but also conceptually, because it raises interesting questions about the design of mechanisms to promote scientific integration. What follows are some ideas on the design of a comparative mind database that have arisen during and since the workshop. I’m not a member of the CMD team, so this doesn’t reflect internal thinking, and I’m interested primarily in conceptual design rather than technical nuts and bolts.

Why a comparative mind database might be a good thing

Comparative cognition research faces some extremely difficult problems: it aims to investigate and compare the cognitive abilities of different species in circumstances where the conceptualization of the cognitive abilities is uncertain and changing, methods are evolving, and the major physical and behavioral differences between species make it necessary to modify methods even when attempting to measure the same cognitive ability. A database that integrates comparative cognition research could help in a variety of ways. XML-based data codification schemes and software tools might serve as a good way to promote standardization of methods, and at the same time facilitate sophisticated large-scale analyses. Advanced data mining and visualization techniques could help to detect subtle patterns that are largely invisible at the level of the individual study. “Community” tools like wikis have the potential to help researchers interact in richer and more dynamic ways, further promoting conceptual integration.
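To make the data codification idea a little more concrete, here is a minimal sketch of what an XML-coded study record could look like, built with Python’s standard library. The element names, attribute names, and values are hypothetical placeholders, not an actual or proposed CMD schema.

```python
# A minimal sketch of an XML-coded study record. The element and attribute
# names are hypothetical placeholders, not an actual CMD schema.
import xml.etree.ElementTree as ET

study = ET.Element("study", attrib={"doi": "10.xxxx/example"})
ET.SubElement(study, "species").text = "Corvus moneduloides"
ET.SubElement(study, "paradigm").text = "trap-tube task"
ET.SubElement(study, "cognitive_ability").text = "causal reasoning"
ET.SubElement(study, "sample_size").text = "8"
ET.SubElement(study, "outcome").text = "above-chance performance after training"

# Serialize the record so it could be stored, shared, or validated against
# a schema agreed on by the community.
print(ET.tostring(study, encoding="unicode"))
```

Even a handful of fields like these, if coded consistently across labs, would be enough to support the kind of large-scale analyses mentioned above.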

Design questions

But although there are some interesting possibilities, it’s not obvious what specific shape a comparative mind database might have. What I’ll do now is pose some design questions. Many of these are in an “X vs Y” form, but since a likely answer is often “both”, the point is usually to highlight a distinction rather than to suggest that the distinction corresponds to a discrete choice.

Value adding service vs repository

Not all databases are storehouses for original data; some harvest data from existing sources and provide value-adding services. ISI Web of Knowledge is an example of the latter approach, as is the just-launched PhilPapers. An advantage of harvesting is that it is a relatively easy way to obtain a large amount of data in a short amount of time, which means that the database can be up and running quickly. On the other hand, the services provided need to be reasonably compelling. If the database is a repository for a unique kind of data it has a more obvious value as a resource, but it could take a while to acquire enough data to be useful.

Clearly a mixed approach is possible. Harvesting could enable the database to begin providing a service relatively quickly, and the development of value-adding services might be a way to explore what functions the comparative cognition community will find most useful. In the meantime, the repository could be developed.

Type of data

If the database is to be a repository then there is the question of what kind of data to store. Options include metadata structured according to a scheme crafted for the specific domain and provided directly by the researchers in some way, full-text papers along the lines of a preprint archive, experimental data, or some combination.

Rich vs sparse data coding

Whatever kind of data is stored, it needs to be coded in some way. Here a basic choice is between rich and sparse coding schemes. A rich coding scheme formally codes many attributes of the data, whilst a sparse scheme captures only a few attributes. A rich coding scheme can provide more power and therefore support more informative analyses. For example, Poldrack (2006) conducted a meta-analysis to evaluate the strength of ‘reverse inferences’ in fMRI research using neuroimaging data held by the BrainMap database. These reverse inferences involve taking activation of a particular brain area during a task to indicate the involvement of a particular cognitive process, on the basis that other research has found an association between that cognitive process and the brain area in question. Poldrack found that reverse inferences are relatively weak, but noted that stronger inferences could be drawn if imaging databases used more fine-grained cognitive coding. The databases code imaging data using very broad cognitive categories, whereas the researcher will usually be interested in much more specific cognitive processes.
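The logic can be put in rough Bayesian terms: the broader the cognitive category used to code activations, the more tasks will produce activation in the area, and the less diagnostic the activation is of the specific process a researcher cares about. The toy calculation below illustrates just this point with invented probabilities; it is not a reconstruction of Poldrack’s actual analysis.

```python
# Toy illustration of why coarse cognitive coding weakens reverse inference.
# All probabilities are invented; this is not Poldrack's analysis.

def posterior(p_act_given_proc, p_act_given_other, prior):
    """P(process engaged | area active), via Bayes' rule."""
    p_act = p_act_given_proc * prior + p_act_given_other * (1 - prior)
    return p_act_given_proc * prior / p_act

# Broad category: many other tasks also activate the area, so activation
# is only weakly diagnostic of the process of interest.
print(posterior(p_act_given_proc=0.8, p_act_given_other=0.6, prior=0.5))  # ~0.57

# Fine-grained category: the area is activated far more selectively, so the
# same activation supports a much stronger reverse inference.
print(posterior(p_act_given_proc=0.8, p_act_given_other=0.1, prior=0.5))  # ~0.89
```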

So rich coding can make a database more valuable as a resource, but it also raises problems. Just by providing many more decision points at formulation and during the coding process, a rich scheme provides more opportunities for error to creep in. The more complex the coding scheme is, the harder it will be to gain community acceptance, and if the coding categories reflect the conceptual distinctions being drawn in current research they will also almost inevitably tend to be somewhat controversial. These kinds of factors can reduce the value of the database as a resource: scientists may be reluctant to base research on a disputed coding scheme, and may face resistance during peer review if they do, while errors in the data can seriously taint the database. This last problem has arisen as an issue for DNA databases. Sparse coding reduces the exposure to error and controversy, but at the expense of representational power.

Effectively, the rich vs sparse coding choice faces a kind of type I/type II error tradeoff. As such, there are factors pushing in both directions. However, there are some reasons favoring a conservative approach. If the database is to be used for publishable research, its coding scheme and data will need to be robust against a wide range of challenges. If the database becomes widely used, then any problems that arise have the potential to compromise large swathes of the literature.

Controlled vs open coding schemes

Another key choice is whether the coding scheme should be controlled or open. A controlled vocabulary is a centrally managed coding scheme, whereas open schemes have a more folksonomic character, allowing individuals to add new terms as they see fit. Controlled coding schemes can have the advantages of being consistent and well-organized, but they impose a significant management burden because mechanisms for formulation and revision are needed. The more ambitious the coding scheme is, the more onerous the management requirements will be. The DSM is a well-known example of a controlled coding scheme that illustrates both the value a controlled scheme can have and just how demanding the management process can be. Because the revision cycle can be long, a controlled classification scheme may lag well behind the categories used in current research.

An open coding scheme can respond rapidly to current developments, but at the expense of the consistency of terms and coherent organization of the scheme. To some extent tools like social tagging systems can ameliorate the chaos by creating a central record of terms and suggesting terms during the tagging process (Delicious is an example of how this kind of thing can work). These methods are unlikely to produce the consistency of a controlled scheme, however.

Again, though, this doesn’t have to be a strictly either/or choice. One kind of mixed strategy would be to use both a sparse controlled vocabulary and a rich and flexible open scheme. From the point of view of data quality, the controlled scheme would be “gold standard” and the open scheme would be “use with caution”, but when used with appropriate caution the open scheme might still be very valuable. Moreover, information derived from trends in the open coding scheme could be used to inform revisions of the controlled scheme.
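As a rough sketch of how this two-tier arrangement might operate, controlled terms could be validated against the managed vocabulary while open tags accumulate freely, with frequently used open tags flagged as candidates for the next revision of the controlled scheme. The vocabulary, record format, and field names below are all invented for illustration.

```python
# Sketch of a two-tier coding scheme: a small controlled vocabulary
# ("gold standard") alongside free-form open tags ("use with caution").
# The terms and record format are hypothetical.
from collections import Counter

CONTROLLED_VOCAB = {"social learning", "tool use", "causal reasoning"}

records = [
    {"id": 1, "controlled": ["tool use"], "open": ["metatool use", "sequential tool use"]},
    {"id": 2, "controlled": ["social learning"], "open": ["emulation", "metatool use"]},
    {"id": 3, "controlled": ["tool use"], "open": ["metatool use"]},
]

def validate(record):
    """Reject controlled tags that are not in the managed vocabulary."""
    bad = [t for t in record["controlled"] if t not in CONTROLLED_VOCAB]
    if bad:
        raise ValueError(f"Unrecognized controlled terms: {bad}")

for r in records:
    validate(r)

# Frequent open tags become candidates for promotion into the next
# revision of the controlled vocabulary.
open_tag_counts = Counter(tag for r in records for tag in r["open"])
print(open_tag_counts.most_common(3))
```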

Minimalist vs maximalist software tools for data coding

The kind of coding system chosen has an impact on the mechanisms needed to code the data and get it into the database. At one end of the spectrum, authors could provide simple metadata to the database using a basic web form. At the other end of the spectrum, a complete software suite would take the author from the first stages of experiment design to the final paper, smoothly adding a multitude of codes along the way. Somewhere in the middle, plugin software, something like EndNote, could work with existing word processing and statistics programs.
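At the minimalist end, the ‘basic web form’ option could be little more than an endpoint that accepts a handful of metadata fields. The sketch below uses Flask purely for illustration; the field names and endpoint are hypothetical, not a proposed CMD interface.

```python
# Minimal sketch of the "basic web form" end of the spectrum, using Flask.
# The endpoint and field names are hypothetical, not a proposed CMD schema.
from flask import Flask, request, jsonify

app = Flask(__name__)
SUBMISSIONS = []  # stand-in for a real database backend

@app.route("/submit", methods=["POST"])
def submit_metadata():
    record = {
        "species": request.form.get("species"),
        "paradigm": request.form.get("paradigm"),
        "cognitive_ability": request.form.get("cognitive_ability"),
        "doi": request.form.get("doi"),
    }
    # A real system would validate the record against the coding scheme here.
    SUBMISSIONS.append(record)
    return jsonify({"status": "received", "record": record})

if __name__ == "__main__":
    app.run()
```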

Open vs focused functional objectives

A further type of question concerns the kinds of uses envisaged for the data. The most agnostic approach is to simply leave this open; ‘low-level’ data is made available to researchers to do with as they will. At the other extreme, the database is built around a very specific high-level purpose. An internet-based taxonomy database, as envisaged by Godfray (2002), is an example of the latter possibility.

Taking the taxonomy example as a model, a comparative mind database might adopt a high-level representational framework designed to efficiently capture the information of most interest to comparative cognition research. For instance, a species-oriented ‘view’ showing phylogenetic relations mapped with cognitive abilities might be treated as a core function. A cognitive ability-oriented view might display all the species in which a particular ability has been demonstrated, together with key variations. A method-oriented view might show variations in the application of a particular paradigm within and across species.
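Whatever the underlying schema, these ‘views’ could simply be different groupings over the same set of records. A rough sketch follows, with invented records and field names (and ignoring phylogenetic structure for simplicity).

```python
# Sketch of species-, ability-, and method-oriented "views" as groupings
# over the same records. The records and field names are invented.
from collections import defaultdict

records = [
    {"species": "Corvus moneduloides", "ability": "causal reasoning", "paradigm": "trap-tube"},
    {"species": "Pan troglodytes", "ability": "causal reasoning", "paradigm": "trap-tube"},
    {"species": "Pan troglodytes", "ability": "social learning", "paradigm": "two-action task"},
]

def group_by(items, key):
    view = defaultdict(list)
    for item in items:
        view[item[key]].append(item)
    return dict(view)

species_view = group_by(records, "species")   # abilities and paradigms per species
ability_view = group_by(records, "ability")   # species in which an ability has been probed
method_view = group_by(records, "paradigm")   # variations of a paradigm across species

print([r["species"] for r in ability_view["causal reasoning"]])
```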

Warehouse vs knowledge environment

The database might function as a warehouse, being primarily oriented to storing information and providing only a simple interface for accessing the data. On the other hand, it might be more like a ‘knowledge environment’, providing textual resources like annotation, reference information, and conceptual and methodological discussion.

Closed vs ‘crowd-sourced’ content creation

If the database is to be something like a knowledge environment, the source of content could be closed, e.g. using an editor/solicited contribution model, or it could be ‘crowd-sourced’ by the community. The latter option would have a strong ‘Web 2.0’ flavor.

One interesting possibility is that users could add tags and annotations to existing items in the database. For instance, Reader and Laland (2002) conducted a meta-analysis examining relations between brain size, behavioral innovation, social learning and tool use. They examined more than 1000 articles, and part of the analysis effectively involved re-coding papers, such that behavioral descriptions using keywords such as “novel” and “never seen before” were counted as instances of behavioral innovation. If these papers had been held in a CMD database, Reader and Laland could have performed their recoding within the database, with those tags being associated with the original papers and available to other researchers for further use or critical scrutiny. In this case the recoding was quite straightforward, but in other cases it can involve more complex interpretation (with an important class of interpretations being “not really a case of x after all”). Appending such interpretations to papers would in effect be a modern recreation of the classical commentary.
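In database terms, such recoding could be stored as annotations attached to existing records, carrying provenance so that later researchers can reuse or contest them. A minimal sketch, with a hypothetical record structure:

```python
# Sketch of crowd-sourced recoding: annotations attached to existing records,
# with provenance so they can be reused or challenged. The structure and
# field names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    tag: str            # e.g. "behavioral_innovation"
    annotator: str      # who applied the recoding
    rationale: str      # e.g. 'behavior described with the keyword "novel"'

@dataclass
class PaperRecord:
    doi: str
    title: str
    annotations: List[Annotation] = field(default_factory=list)

paper = PaperRecord(doi="10.xxxx/example", title="Example field report")
paper.annotations.append(
    Annotation(tag="behavioral_innovation",
               annotator="meta-analysis recoding",
               rationale='behavior described as "never seen before"')
)

# Other researchers can query, reuse, or dispute the recoding.
print([a.tag for a in paper.annotations])
```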

This kind of possibility connects back to the issue of the controlled coding scheme. Zsófia Virányi has pointed out in conversation that conducting meta-analyses would be an effective way of zeroing in on the kind of information that should be included in a controlled coding scheme. More generally, as noted earlier, an open coding system could serve as a useful source of information guiding the formulation and revision of a controlled vocabulary. A system that allowed appended recoding and commentary would provide a direct form of ongoing evaluation for the controlled scheme.

Conclusion: two paths to integration

Returning to the larger objectives, in principle a CMD could help bring coherence to comparative cognition research in a variety of ways: it can be a vehicle for the standardization of terminology and methods; by collating data it can facilitate research that takes into account a wider range of the available information; and by means such as wikis it can provide a forum for conceptual and theoretical integration. A final set of questions concerns the pros and cons of each of these kinds of goals; I’ll focus on the first and third.

The standardization of terminology and methods is a worthy general objective, but an issue that came up in several of the talks at the workshop is that methods need to be adapted and revised, so enforcing standardization too strictly can be counterproductive. This debate has happened before: Cassman and colleagues gave a caustic assessment of the disorganization of systems biology, together with a recommendation for the creation of:

…a central organization that would serve both as a software repository and as a mechanism for validating and documenting each program, including standardizing of the data input/output formats. …

This repository would serve as a central coordinator to help develop uniform standards, to direct users to appropriate online resources, and to identify — through user feedback — problems with the software. The repository should be organized through consultation with the community, and will require the support of an international consortium of funding agencies.

This call to action prompted a stern response from Quackenbush, who argued that such standardized systems are appropriate for mature research fields, but not for emerging fields, where innovation and diversity are essential. Quackenbush concludes:

We believe that the centralized approach proposed by Cassman and colleagues would not fare well compared with more democratic, community-based approaches that understand and include research-driven development efforts. Creating a rigid standard before a field has matured can result in a failed and unused standard, in the best of circumstances, and, in the worst, can have the effect of stifling innovation.

The point is important to consider for a CMD since comparative cognition is still a relatively immature field. But it need not count against any kind of attempt at centralized integration. It does count in favor of a sparse, cautious approach to controlled coding and method standardization, but open coding and wiki systems are compatible with innovation and diversity, whilst still promoting overall integration.

One way to think about it is in terms of two paths to the integration of a field: a ‘low path’ centered on standardized terminology and methods, and a ‘high path’ centered on concepts and theory. In an immature field both paths have many difficulties, but somewhat counterintuitively the high path may be more feasible and important in the earlier phases of development. More feasible because some degree of qualitative conceptual integration is still possible even without precisely defined terms and methods. More important because a reasonably coherent high-level understanding of the field is needed in order to decide how to standardize basic terms and methods. For example, Cajal’s neuron doctrine provided a basic conceptual framework that profoundly shaped modern neuroscience, but it flowed from a keen attention to the ‘big picture’, including global brain organization and evolutionary and ecological context (Llinás 2003). It is hard to imagine that he could have developed such a productive conceptualization without this larger understanding. Seen as a vehicle for promoting change, low path integration is the more obvious strategy for a database to pursue, but current technologies make it possible for a database to also or instead aim at facilitating high path integration.