
Works in Progress

Decoherence, Branching, and the Born Rule for a Mixed-State Everettian Multiverse (joint work with Eddy Keming Chen) (draft available upon request!) 

Everettians have tried to justify the Born rule by appealing to self-locating uncertainty (e.g. Sebens and Carroll 2014, Vaidman and McQueen 2018) and decision theory (Deutsch 1999, Wallace 2012). However, these approaches focus exclusively on a pure-state Everettian multiverse, in which the quantum state of the multiverse is represented by a wave-function. Recent work in quantum foundations (e.g. Chen 2018) suggests that it is viable to consider a mixed-state Everettian multiverse, in which the quantum state of the multiverse is represented by a (mixed-state) density matrix. We develop the conceptual foundations for decoherence in a mixed-state multiverse, and show that the standard justifications of the Born rule in the pure-state case can be extended to the mixed-state case. Furthermore, the extended justifications unify 'classical' and 'quantum' probability in terms of self-locating and epistemic uncertainty. The resultant theory is arguably simpler than the pure-state version of Everettian quantum mechanics.

Check, Please: De-idealizing De-idealization 

Scientific inquiry doubtless and inextricably involves the use of idealizations: for ease of calculation and representation, we assume the absence of friction, the perfect sphericity of cows, or the impeccable rationality of human beings. Idealizations are, strictly speaking, false. Yet they play a crucial role in many of our best sciences, representing all sorts of phenomena from coffee cups to black holes. Much ink has thus been spilled over how to justify them. A predominant view justifies idealizations via de-idealization procedures. A more recent cluster of views pushes back: Potochnik (2017) argues that idealizations stand on their own and need no explicit de-idealization, while Knuuttila & Morgan (2019) argue that de-idealization is too demanding. I’ll argue that this disagreement hinges on a strong philosopher’s construct of de-idealization, an idea that itself needs to be de-idealized. I explicate two weaker senses in which idealizations can be de-idealized.

A Problem of Indeterminism for the Humean Nomological Approach to Bohmian Mechanics (draft available upon request!)

On the one hand, (1) Bohmian particles have deterministic trajectories. On the other hand, (2) Bohmian mechanics is time-reversal invariant. However, given (3) the Humean nomological approach – on which wave-functions are interpreted as (part of) the laws, and the laws are interpreted as Humean laws – I argue that a contradiction arises. At least one of (1)–(3) must go. I consider and reject several options: giving up time-reversal invariance, giving up deterministic trajectories, and revising our definition of determinism. I conclude that we should instead abandon the Humean nomological approach.

The Time in Thermal Time (draft available upon request!)

Attempts to quantize gravity in the Hamiltonian approach lead to the 'problem of time': the resultant formalism is often said to be 'frozen', non-dynamical, and fundamentally timeless. To resolve this problem, Connes & Rovelli (1994) propose the 'thermal time hypothesis': the flow of time emerges thermodynamically from a fundamentally timeless ontology. While statistical states are typically defined to be in equilibrium with respect to some background time, Connes & Rovelli propose that we instead define time in terms of these statistical states: statistical states define a time according to which they are in equilibrium. To avoid circularity, we had better have a good conceptual grasp on notions such as 'equilibrium' and 'statistical state' that is independent of time. I argue, however, that these concepts either implicitly presuppose some notion of time, or cannot yet be justifiably applied to the fundamentally timeless context.

Quasi-Stationarity: The Impossible Process (draft available upon request!)

A ubiquitously used idealization in physics, e.g. in black hole physics, is quasi-stationarity. Prominently, Hawking (1975) employed this idealization in his argument for black hole evaporation. The core idea is to assume that a system evolves 'so slowly' that it can be modeled dynamically as a sequence of stationary systems. I argue that quasi-stationary processes, taken literally, are impossible, in the same vein as Norton's (2016) evaluation of quasi-static processes. Furthermore, while Norton vindicates the widespread use of quasi-staticity by showing how this 'impossible' idealization can be readily 'de-idealized' for use in quotidian thermodynamic reasoning, I argue that Hawking's argument for black hole evaporation cannot yet be justified in a similar fashion. One would need a procedure for finding an approximately globally conserved energy via approximate Killing fields, but this remains an open problem in general relativity.

Egalitarianism, Equipossibility, Equiprobability: An Old Problem for Pettigrew’s New Argument for the Principle of Indifference (draft available upon request!)

In philosophical folklore, Laplace is commonly construed as claiming that equal possibilities entail equal probabilities. However, 'equally possible' had better not mean 'equally probable', since this renders the claim circular. Yet there does not seem to be a plausible 'possibility-probability link': attempts to justify the principle of indifference by appealing to equipossibility typically risk either circularity or a lack of justification. Recently, Pettigrew (2016a/2016b) has given a novel argument for the principle of indifference by adapting Joyce's well-known arguments from accuracy (1998/2009). I argue that Pettigrew's argument, too, relies on some notion of equipossibility via the assumption of egalitarianism: his argument is likewise either circular or as yet unjustified. I conclude on a positive note, however: Pettigrew's argument can be seen as an explication of the Laplacean argument, and hence of the principle of indifference.
