Explaining explanation using new developments in logic and formal semantics.
Ever since Hempel introduced his influential Deductive-Nomological model of explanation, it has been clear that logic plays an important role in explanation. Nevertheless, one faces serious problems when trying to apply traditional logical ideas to explanation.
A first problem is that explanation is arguably a hyperintensional notion, i.e. the relata of the explanation relation do not permit substitution of strict equivalents. The fact that A (possibly together with some other sentences) explains B, and that A and A’ are necessarily equivalent (or even logically equivalent), does not entitle us to conclude that A’ explains B. Neither can we conclude that A explains B’, for an arbitrary B’ necessarily equivalent to B. One reason is that every sentence C that may provide insight is strictly equivalent to an infinity of entirely opaque sentences: for instance, C is strictly equivalent to the disjunction of C and the negation of a very complicated necessity, such as an advanced theorem of mathematics or even of logic. Another reason is that people successfully come up with explanations of necessary truths and use necessary truths as elements of explanations; substitution of strict equivalents would trivialize these cases of explanation. Finally, the relation needs to be hyperintensional because the usual troubles with logical omniscience arise as soon as substitution of equivalents is accepted.
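As a minimal sketch of the first reason (the particular choice of sentences is of course only illustrative): take any necessary truth $N$, say an advanced theorem of mathematics. In any normal modal logic,
\[
\Box N \;\Rightarrow\; \Box\bigl(C \leftrightarrow (C \lor \neg N)\bigr),
\]
so $C$ and $C \lor \neg N$ are strictly equivalent. Yet substituting $C \lor \neg N$ for $C$ inside an explanation typically destroys whatever insight $C$ provided, which is precisely the kind of difference a hyperintensional account of explanation must register.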
A second problem is caused by the very nature of classical logical implication (be it material, formal or even strict implication). This relation is obviously very different from the relation of explanation. Contradictions logically imply everything, but they explain very little. Moreover, unlike explanation, logical implication has no notion of usefulness, relevance or economy built in: if some principles imply something, one can always enlarge the set of principles with arbitrary irrelevant sentences without breaking the relation of implication. This is not the case for explanation. One looks for a minimal explanation or, perhaps more accurately, an explanation all of whose elements contribute in some way to explaining the explanandum.
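Two standard properties of classical consequence make the contrast explicit (stated here only as a sketch, in any of the usual formulations):
\[
A, \neg A \vdash B \quad \text{(explosion)}, \qquad \Gamma \vdash B \;\Rightarrow\; \Gamma \cup \{C\} \vdash B \quad \text{(monotonicity)}.
\]
Explanation plausibly has neither feature: a contradictory set of premises explains next to nothing, and adding an irrelevant premise $C$ can turn an acceptable explanation into a defective one.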
So traditional logical systems cannot readily be used for explaining explanation. However, alternative formal tools have recently been developed that may, on the contrary, turn out to be very helpful. Examples are, on the one hand,
(a) theories that aim to explicate hyperintensionality, such as Neighborhood semantics, Transparent Intensional Logic, Leitgeb’s HYPE, or Truthmaker Semantics, and on the other hand
(b) non-classical logics that aim to solve problems raised by classical logic, such as Relevance Logics, Connexive Logic, Paraconsistent Logics, or Multivalued Logics.
At the same time, recent research has made considerable progress in developing formal models for important notions useful in understanding explanations, such as counterfactual conditionals (conditional logics, probabilistic semantics, Lewis semantics), degrees of belief (Bayesian nets), abduction (e.g. (Adaptive) logics of abduction), belief revision (AGM and many others), grounding (Fine’s and Poggiolesi’s proposals), aboutness (Yablo’s proposals), justification (justification logic, intuitionistic logic), and proof (provability logic, reverse mathematics, cut elimination, etc.).
This conference aims to investigate what the state of the art in all of these domains may contribute to our understanding of causal or non-causal explanation in empirical science, in mathematics, and even in philosophy.