Epistemology & Cognition Lab
Department of Brain & Cognitive Sciences
Faculty of Humanities & Social Sciences
The research in the lab combines philosophical and experimental work. The experimental work is driven by philosophical questions (primarily in epistemology) that are decomposed into smaller, more specific, behaviourally testable questions.
Purely philosophical questions include:
- What is the nature of physical computation as it is conceived in cognitive explanations?
- How are physical computational processes individuated?
- What is the explanatory role of computation in cognitive explanations?
- How should 'information' be understood for it to play its central explanatory role in the cognitive sciences?
- If 'information' is understood functionally (i.e., as being receiver-dependent), can it still be scientifically legitimate and objective?
- What are the limitations of the ambitious project to explain behaviour, perception, and cognition using the predictive processing framework alone (i.e., the brain as a Bayesian hypothesis tester)?
- In what respects are knowledge-that (roughly, knowledge of facts) and knowledge-how (roughly, procedural knowledge) similar and different?
- Does knowledge-how amount to skillful knowledge?
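One of the questions above concerns the predictive processing framework, on which the brain is modelled as a Bayesian hypothesis tester. As a toy sketch only (the hypotheses, likelihoods, and numbers below are invented for illustration and are not part of the lab's work), Bayesian hypothesis testing amounts to reweighting hypotheses by how well they predict incoming evidence:

```python
# Toy illustration of Bayesian hypothesis testing, the inferential scheme
# at the core of the predictive processing framework. All hypotheses,
# likelihoods, and numbers are invented for illustration.

def bayes_update(priors, likelihoods, observation):
    """Return the posterior P(hypothesis | observation) via Bayes' rule."""
    unnormalised = {h: priors[h] * likelihoods[h][observation] for h in priors}
    total = sum(unnormalised.values())
    return {h: p / total for h, p in unnormalised.items()}

# Two hypotheses about an ambiguous letter string, with a prior favouring 'word'.
priors = {"word": 0.7, "nonword": 0.3}

# Likelihood of the observed feature under each hypothesis (made-up values).
likelihoods = {
    "word":    {"vowel_rich": 0.8, "vowel_poor": 0.2},
    "nonword": {"vowel_rich": 0.3, "vowel_poor": 0.7},
}

posterior = bayes_update(priors, likelihoods, "vowel_rich")
# The evidence raises the probability of 'word' above its prior of 0.7.
```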
Currently active experimental projects focus on the relations among learning, skill acquisition, and automaticity:
- Is cognitive control amenable to automatisation? Does it improve (i.e., become faster and more accurate) with practice?
- Does the mere performance of automatic cognitive processes (e.g., reading words or numbers) improve performance on other, cognitively controlled tasks (e.g., classification tasks)?
- Is such improvement confined to same-domain processing (reading words -> concept classification) or not (reading numbers -> concept classification)?
- Can the Stroop effect (as a paradigmatic case of automatic processing) be suppressed over time (e.g., after practicing the same task for several weeks)?
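The Stroop effect mentioned above is standardly quantified as the difference in mean reaction time between incongruent trials (e.g., "RED" printed in blue) and congruent trials (e.g., "RED" printed in red). A minimal sketch of that computation, with all reaction-time values invented for illustration:

```python
# Toy illustration of how the Stroop effect is measured: mean reaction time
# (in ms) on incongruent trials minus mean RT on congruent trials.
# All RT values below are invented; real studies use per-trial data.

from statistics import mean

congruent_rts = [520, 495, 510, 530, 505]    # e.g., "RED" printed in red
incongruent_rts = [640, 615, 660, 625, 635]  # e.g., "RED" printed in blue

stroop_effect = mean(incongruent_rts) - mean(congruent_rts)

# A stroop_effect that shrinks across practice sessions would indicate that
# the automatic reading response is being suppressed over time.
```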
Most recent publications:
Abstract. Automaticity is still ill-understood, and its relation to habit formation and skill acquisition is highly debated. Recently, the principle of caching has been advanced as a potentially promising avenue for studying automaticity. It is roughly understood as a means of storing direct input-output associations in a manner that supports instant lookup. We raise various concerns that should be addressed before the theoretical progress afforded by this principle can be evaluated. Is caching merely a metaphor for computer caching or is it a computational model that can be used to derive testable predictions? How do the short-term and long-term effects of automaticity relate to the distinction between working memory and long-term memory? Does caching apply to stimulus-response associations—as already suggested by Logan’s instance theory—or to algorithms, too? How much experience is required for caching and how does caching depend on the task’s type? What is the relation between control processes and caching as these pertain to the possible suppression of automatic processes? Dealing with these questions will arguably also advance our understanding of automaticity.
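The abstract above glosses caching as "storing direct input-output associations in a manner that supports instant lookup". As a general illustration of that principle only (not the authors' model, and the multiplication task is an arbitrary stand-in), the idea can be sketched as memoisation: a slow, step-by-step "controlled" route computes a result on first encounter, after which the stored association is retrieved directly:

```python
# A minimal sketch of the caching principle: once a result has been computed
# by a slow, controlled procedure, the direct input-output association is
# stored so that repetitions are handled by instant lookup.
# The task (single-digit multiplication by repeated addition) is a stand-in.

cache = {}

def multiply(a, b):
    """Return a * b, caching the stimulus-response association."""
    key = (a, b)
    if key in cache:        # "automatic" route: direct retrieval
        return cache[key]
    result = 0              # "controlled" route: compute step by step
    for _ in range(b):      # multiplication as repeated addition
        result += a
    cache[key] = result     # store the association for next time
    return result

multiply(7, 8)  # first encounter: computed by the slow route
multiply(7, 8)  # repetition: retrieved directly from the cache
```

On this sketch the open questions in the abstract become concrete: whether what is cached is the stimulus-response pair (as here) or the procedure itself, and how much repetition is needed before the lookup route dominates.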
Abstract. Computational physical systems may exhibit indeterminacy of computation (IC). Their identified physical dynamics may not suffice to select a unique computational profile. We consider this phenomenon from the point of view of cognitive science and examine how computational profiles of cognitive systems are identified and justified in practice, in the light of IC. To that end, we look at the literature on the underdetermination of theory by evidence (UTE) and argue that the same devices that can be successfully employed to confirm physical hypotheses can also be used to rationally single out computational profiles, notwithstanding IC.