Epistemology & Cognition Lab

 

Department of Brain & Cognitive Sciences
Faculty of Humanities & Social Sciences

Research in the lab combines philosophical and experimental work. The experimental work starts from philosophical questions (primarily in epistemology) that are decomposed into smaller, more specific, behaviourally testable questions.

Purely philosophical questions include:

  1. What is the nature of physical computation as it is conceived in cognitive explanations?
  2. How are physical computational processes individuated?
  3. What is the explanatory role of computation in cognitive explanations?
  4. How should 'information' be understood for it to play its central explanatory role in the cognitive sciences?
  5. If 'information' is understood functionally (i.e., as being receiver-dependent), can it still be scientifically legitimate and objective?
  6. What are the limitations of the ambitious project to explain behaviour, perception, and cognition using the predictive processing framework alone (i.e., the brain as a Bayesian hypothesis tester)?
  7. In what respects are knowledge-that (roughly, knowledge of facts) and knowledge-how (roughly, procedural knowledge) similar and different?
  8. Does knowledge-how amount to skillful knowledge?

Currently active experimental projects focus on the relations amongst learning, skill acquisition, and automaticity.

  1. Is cognitive control amenable to automatisation? Does it improve (i.e., become faster and more accurately executed) with practice?
  2. Does the mere performance of automatic cognitive processes (e.g., reading words or numbers) improve the performance of other cognitively controlled tasks (e.g., classification tasks)?
  3. Is such improvement confined to same-domain processing (reading words -> concept classification), or does it extend across domains (reading numbers -> concept classification)?
  4. Can the Stroop effect (as a paradigmatic case of automatic processing) be suppressed over time (e.g., after practising the same task for several weeks)? A minimal sketch of such a design follows this list.
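
To make question 4 concrete, here is a minimal sketch of a Stroop-style trial structure, written in Python. It is illustrative only: the colour set, the 50/50 congruency split, and the session length are our assumptions, not the lab's actual protocol.

import random

COLOURS = ["red", "green", "blue", "yellow"]

def make_trial():
    """One Stroop trial: a colour word displayed in an ink colour.
    On congruent trials the word names its own ink colour; on
    incongruent trials it names a different one. The Stroop effect is
    the response-time cost of incongruent relative to congruent trials."""
    word = random.choice(COLOURS)
    if random.random() < 0.5:  # assumed 50/50 congruency split
        ink = word
    else:
        ink = random.choice([c for c in COLOURS if c != word])
    return {"word": word, "ink": ink, "congruent": word == ink}

def make_session(n_trials=96):  # hypothetical session length
    """One practice session; weeks of such sessions would test whether
    the incongruent-trial cost shrinks with practice (question 4)."""
    return [make_trial() for _ in range(n_trials)]

session = make_session()
n_incongruent = sum(not t["congruent"] for t in session)
print(f"{len(session)} trials, {n_incongruent} incongruent")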


Most recent publications:

Colombo, M. and Fresco, N. (2024). Why Perceptual Experiences Cannot Be Probabilistic.

Abstract. Perceptual Confidence is the thesis that perceptual experiences can be probabilistic. This thesis has been defended and criticised based on a variety of phenomenological, epistemological, and explanatory arguments. One gap in these arguments is that they neglect the question of whether perceptual experiences satisfy the formal conditions that define the notion of probability to which Perceptual Confidence is committed. Here, we focus on this underexplored question and argue that perceptual experiences do not satisfy such conditions. But if they do not, then ascriptions of perceptual confidence are undefined; and so, Perceptual Confidence cannot be true.
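
The abstract leaves the relevant formal conditions unstated; for orientation only, the standard Kolmogorov axioms defining a probability measure \(P\) over a space of outcomes \(\Omega\) are:

\[
P(A) \ge 0 \ \text{for every event } A, \qquad P(\Omega) = 1, \qquad P(A \cup B) = P(A) + P(B) \ \text{whenever } A \cap B = \emptyset .
\]

Whatever notion of probability Perceptual Confidence is committed to, conditions of at least this kind are what perceptual experiences would have to satisfy.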

Fresco, N. and Elber-Dorozko, L. (2024). Scientists Invent New Hypotheses, Do Brains? Cognitive Science.

Abstract. How are new Bayesian hypotheses generated within the framework of predictive processing? This explanatory framework purports to provide a unified, systematic explanation of cognition by appealing to Bayes' rule and hierarchical Bayesian machinery alone. Given that the generation of new hypotheses is fundamental to Bayesian inference, the predictive processing framework faces an important challenge in this regard. By examining several cognitive-level and neurobiological architecture-inspired models of hypothesis generation, we argue that there is an essential difference between the two types of models. Cognitive-level models do not specify how they can be implemented in brains and include structures and assumptions that are external to the predictive processing framework. By contrast, neurobiological architecture-inspired models, which aim to resemble brain processes better, fail to explain important capacities of cognition, such as categorisation and few-shot learning. The (‘scaling-up’) challenge for proponents of predictive processing is to explain the relationship between these two types of models using only the theoretical and conceptual machinery of Bayesian inference.
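
To see why hypothesis generation poses this challenge, note (in our notation, not the paper's) that Bayes' rule only redistributes probability over a hypothesis space \(H\) that is fixed in advance:

\[
P(h \mid e) = \frac{P(e \mid h)\, P(h)}{\sum_{h' \in H} P(e \mid h')\, P(h')}, \qquad h \in H .
\]

Nothing in the update rule itself introduces a hypothesis outside \(H\); if new hypotheses are to enter \(H\), some further machinery is needed, and the question is whether that machinery can itself be Bayesian.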