Philosophers have long worried that certain systems, e.g., black holes or self-gravitating
systems, do not ‘really’ behave thermodynamically. These worries typically hold fixed a thick,
classical thermodynamic notion of equilibrium and then note its apparent failure in new domains. This paper instead articulates a thin, historically rooted concept of equilibrium that is explicitly scale-relative. On this minimal picture, equilibrium applies whenever (i) some properties appear stationary relative to appropriate spatial and temporal scales; (ii) these stationary properties can be represented as balances between counteracting tendencies; and (iii) we can systematically model how the system responds when the balance is perturbed.
We show how this three-part concept underlies familiar uses of equilibrium from classical mechanics to thermodynamics, but also fragments differently in different contexts. We then apply it to two contemporary case studies. In classical and relativistic gravitational thermodynamics, we argue that self-gravitating systems and dynamical black holes admit non-trivial minimal equilibrium regimes, once we attend to scale and to quasi-local horizon structures, even if they lack global thermodynamic equilibria. In quantum statistical mechanics, we argue that prethermal states and generalized Gibbs ensembles support equilibrium reasoning—despite failures of thermalization in the usual Gibbsian sense—because they realize the minimal pattern of balance and response at suitable scales and for restricted classes of observables. Methodologically, we suggest that classical thermodynamics is best viewed as but one historically salient instance of this broader practice of identifying and exploiting minimal equilibrium regimes, rather than as a fixed package to be either fully recovered or abandoned in new domains.
Hume formulated his original problem of induction in 1739, before there were cars, phones, satellites, outer-space missions, moon landings, semiconductors, or computers. In 2025, we have these and more, all fruits of labor from the trees of inductive practices deemed incapable of producing genuine knowledge. Yet the problems of induction remain unresolved today, and philosophers continue to debate their solutions. Indeed, since Hume, the number of problems has only grown. This stark contrast raises a question: without justification of the sort that could ‘solve’ the problems of induction, how have we managed to carry on with our inductive practices, and by what standards are we assessing them? In my view, these questions can only be answered by focusing on inductive practices as practiced: the contexts – and norms – relative to which induction occurs.
Starting with the global problems, this Element will introduce readers to the problems of induction, while broadening the standard discourse about induction – largely involving general, formal, and rule-based approaches to induction in purely epistemic domains – to motivate a contextualist middle-range theory of induction. This draws attention to what I’ll call local problems of induction: the question of how to describe and evaluate inductive practices ‘in the wild’, given the lack of a solution to the global problems. To do so, the Element will consider four questions: How are we to understand the global problems of induction? (Ch. 1) To what extent do extant formal, rule-based theories of induction address these problems? (Ch. 2) To what extent does a prominent new theory – the material theory of induction – succeed, and in what sense does it differ from its formal opponents? (Ch. 3) Finally, is the debate overly idealized, and confined to traditionally ‘epistemic’ contexts? De-idealizing this discussion to include new contexts in which non-epistemic norms feature, what other problems deserve more attention from philosophers, and is there an account of induction conducive to unearthing these problems? (Ch. 4) I aim to sketch preliminary answers to these questions.
Artificial Intelligence is reshaping biomedical science, offering unprecedented opportunities for innovation in personalized medicine, clinical diagnostics, and drug discovery. However, the increasing complexity of AI systems, including large language, multimodal, reasoning, and agentic models, brings significant ethical and technical challenges. There is a need for developers and data scientists to engage more deeply and meaningfully with AI and data governance activities, transitioning from “builders” to “stewards”. To achieve this, we – a multi-national team of data scientists, clinicians, engineers, and ethicists – developed, through deliberate discussion and by drawing on prior roundtables and personal reflections, a set of seven principles to help developers expand their expertise beyond the technical to encompass liability, societal impact, and human-centered design. These range from mindset shifts towards shared accountability to the fostering of diverse, multidisciplinary teams that mitigate discrimination and bias.
Much work in the foundations of statistical mechanics has begun with the following assumption: the arena of inquiry is a classical phase space, over which probability distributions are appropriately defined. That is to say, the domain of inquiry is classical statistical mechanics. Given that we no longer take classical mechanics to be the fundamental theory of physics, a natural question arises: what changes in the quantum domain? In this article, I do two things. First, I survey the reformulation of old debates – the thermodynamic arrow of time, emergence and reduction, and the Gibbs vs. Boltzmann debate – in the quantum domain. Second, I highlight a set of new issues in the foundations of quantum thermodynamics and statistical mechanics: the question of what, if anything, the appropriate quantum analogues of the concepts of classical statistical mechanics – work, energy, entropy, and equilibrium/equilibration – might be.
Recent foundational work in quantum thermodynamics (QTD) reveals a ‘falling apart’ of various classical thermodynamic concepts in this new domain, akin to the situation in relativistic thermodynamics discussed by Chua (2023). This paper focuses on one such cluster of conceptual issues, concerning thermodynamic work. I first diagnose a widely discussed no-go theorem due to Perarnau-Llobet et al. (2017), which asserts that no ‘universal’ thermodynamic work can be defined in QTD, given some intuitive and widely accepted desiderata. I argue that, contrary to the consilient classical context, these desiderata encode physically distinct scenarios in quantum mechanics, for reasons to do with the quantum measurement problem. Because different procedures for defining work, which agreed in the classical regime, no longer agree in the quantum regime, the classical work concept falls apart there. I discuss how we might restore consilience by appealing to work defined in terms of quantum forces over trajectories, but argue that this breaks consilience in other respects: the bump in the rug is merely moved, not removed. This raises questions about the goals of quantum thermodynamics, and about when ‘universality’ is universal enough for the purposes of generalizing thermodynamics to the quantum domain.
There is a longstanding view in the philosophy of physics that the variant quantities of a physical theory are in some ways defective compared to invariant quantities. Many have attributed this defect to the thought that variant quantities are non-objective, unmeasurable, or even unreal. We argue against this view by scrutinizing examples in physics in which variant quantities are not only measurable but crucial for capturing the physical content of many theories. Furthermore, we diagnose why many physical laws are variant under simple transformations, e.g., spatial translations or boosts: these equations track relational quantities that encode information about how physical systems behave relative to particular observers or instruments, e.g., instruments at rest relative to a system, or systems in thermodynamic equilibrium relative to an observer. We end by responding to the objection that the search for invariance was crucial to the development of special and general relativity.
There is a question of whether de-idealization is needed for the justified use – for the ‘checking’ – of idealizations. We argue that the standard philosophical account of de-idealization has itself become too idealized, but that this does not preclude justificatory practices that show how models can adequately represent the world. In turn, motivated by examples in physics, we de-idealize de-idealization by relaxing the standards for approximation and realism, identifying at least three procedures for de-idealization: intra-model, inter-model, and measurement de-idealizations. These highlight ways in which idealizations can be – and have been! – checked in physics without appealing to the philosopher’s idealized notion of de-idealization.
Much of the century-old debate surrounding the status of thermodynamics in relativity has centered on the search for a suitably relativistic temperature; recent work by Chua (2023) has suggested that the classical temperature concept – consilient as it is in classical settings – ‘falls apart’ in relativity. However, these discussions have still tended to assume an unproblematic Lorentz transformation for – specifically, the Lorentz invariance of – the pressure concept. Here I argue that, just like the classical temperature, the classical concept of pressure breaks down in relativistic settings. This situation suggests a new thermodynamic limit – a ‘rest frame limit’ – without which an unambiguous thermodynamic description of systems does not emerge. I end by briefly discussing how thermodynamics, in requiring preferred frames, bears on the idea of so-called symmetry-to-reality inferences.
Physical Coherence and Time's Emergence
Under review, draft available upon request!
It is often said that time vanishes in quantum gravity. One general approach to quantum gravity accepts this fundamental timelessness but seeks to derive time's emergence at a non-fundamental level. To better assess such approaches, I develop the criterion of physical coherence and situate it in context by applying it to two programs for time's emergence, drawing from recent works by Chua and Callender (2021) and Chua (forthcoming): semiclassical time and thermal time. Unlike some recent arguments for the metaphysical incoherence of time's emergence, which rule out all claims of time’s emergence ‘from on high’ once we’ve fixed a definition of metaphysical emergence, my criterion of physical coherence leaves open the possibility that some programs in quantum gravity may succeed on their own terms in providing a physically coherent derivation of time from no-time. This sets a challenge for proponents of time's emergence to clarify the conceptual foundations of their program, while at the same time acting as a litmus test for a program's success.