Abstract
Computational explanations focus on the information processing required by specific cognitive capacities, such as perception, reasoning or decision-making. These explanations specify the nature of the information processing task, what information needs to be represented, and why it should be operated on in a particular manner. In this article, the focus is on three questions concerning the nature of computational explanations: (1) what type of explanations they are, (2) in what sense they are explanatory, and (3) to what extent they involve a special, “independent” or “autonomous” level of explanation. We defend the view that computational explanations are genuine explanations, which track non-causal/formal dependencies. Specifically, we argue that they do not provide mere sketches for explanation, in contrast to what, for example, Piccinini and Craver (Synthese 183(3):283–311, 2011) suggest. This view of computational explanations implies some degree of “autonomy” for the computational level. However, as we will demonstrate, this does not make the view “computationally chauvinistic” in the way that Piccinini (Synthese 153:343–353, 2006b) or Kaplan (Synthese 183(3):339–373, 2011) have charged it to be.
| Original language | English |
| --- | --- |
| Journal | Synthese |
| Volume | 193 |
| Issue number | 12 |
| Pages (from-to) | 3931–3949 |
| Number of pages | 19 |
| ISSN | 0039-7857 |
| DOIs | |
| Publication status | Published - 27 May 2016 |
| MoE publication type | A1 Journal article-refereed |
Fields of Science
- 515 Psychology
- 611 Philosophy
- 6162 Cognitive science