Publications
(please cite as "Chua, E. Y. S." or "Chua, Eugene Y. S.")
Peer-reviewed writings
The Problem of Atypicality in LLM-Powered Psychiatry
Accepted at the Journal of Medical Ethics.
Joint work with Bosco Garcia and Harman Brah.
Large language models (LLMs) are being considered for use in the outpatient psychiatric context for a variety of reasons related to public health and access. However, given the target audience (patients with possible psychiatric disorders), the application of LLMs warrants caution. Here we develop a particular problem for LLM-powered psychiatry, which we call the problem of atypicality, and distinguish it from the well-known problem of hallucinations. While the latter concerns the truth or falsehood of LLM outputs, the problem of atypicality concerns their appropriateness. The problem arises from a mismatch: LLMs aim to generate outputs appropriate for typical users, but these users may be atypical relative to the general population. We examine the extent to which technical solutions such as fine-tuning can reduce this problem, and propose recommendations for accommodating it.
The Tolman effect is well-known in relativistic cosmology but rarely discussed outside it. That is surprising because the effect -- that systems extended over a varying gravitational potential exhibit temperature gradients while in thermal equilibrium -- conflicts with ordinary classical thermodynamics. In this paper we try to better understand this effect from a foundational perspective. We make five claims. First, as Tolman knew, it was Einstein who first discovered the effect, and furthermore, Einstein's derivation helps us appreciate how robust it is. Second, the standard interpretation of the effect in terms of 'local temperature' leads to the breakdown of much of classical thermodynamics. Third, one can rescue thermodynamics by using Einstein's preferred interpretation in terms of the 'wahre Temperatur' -- what we'll call global temperature -- but it too has some costs. Fourth, the effect is perhaps best understood in terms of clocks as opposed to energy loss. Fifth, inspired by a proposal of Einstein's elsewhere, we sketch an interpretation of the effect in terms of a third novel temperature, which we call the 'wahre-local temperature'. On this view, temperature -- and thermodynamics -- is defined only in relation to local clocks. In sum, we view the fragmentation of temperature in thermodynamics as a natural and expected result of the fragmentation of time in general relativity.
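For context, a standard textbook statement of the Tolman-Ehrenfest relation for a static spacetime (with time-time metric component g_00) is

\[ T(x)\,\sqrt{g_{00}(x)} = \text{const}, \]

so the locally measured temperature T(x) varies across a gravitational potential even when the system is in thermal equilibrium. (This is the usual formulation, offered only as background; it is not the paper's own derivation.)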
A family of arguments for black hole evaporation relies on conservation laws, defined through symmetries represented by Killing vector fields which exist globally or asymptotically. However, these symmetries often rely on the idealizations of stationarity and asymptotic flatness, respectively. In non-stationary or non-asymptotically-flat spacetimes where realistic black holes evaporate, the requisite Killing fields typically do not exist. Can we `de-idealize' these idealizations, and, with them, the associated arguments for black hole evaporation? Here, I critically examine the strategy of using `approximately Killing' fields to de-idealize black hole spacetimes and approximately extend conservation laws to non-idealized cases. I argue that this approach encounters significant challenges, undermining the use of these idealizations to justify the evaporation of realistic -- rather than idealized -- black holes, and raising questions about the justified use of such idealizations.
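As background on the construction at issue (standard notation, not specific to this paper): given a conserved, symmetric stress-energy tensor, \nabla_\mu T^{\mu\nu} = 0, and a Killing field \xi satisfying \nabla_{(\mu}\xi_{\nu)} = 0, the current J^\mu = T^{\mu\nu}\xi_\nu is conserved:

\[ \nabla_\mu J^\mu = (\nabla_\mu T^{\mu\nu})\,\xi_\nu + T^{\mu\nu}\,\nabla_{(\mu}\xi_{\nu)} = 0. \]

It is this step that lacks an exact analogue when the spacetime admits no exact Killing field, which is where the `approximately Killing' strategy enters.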
Journal for General Philosophy of Science, 2025.
Invited contribution to Special Issue "On Time in the Foundations of Physics", eds. Andrea Oldofredi and Cristian Lopez
Preparing general relativity for quantization in the Hamiltonian approach leads to the `problem of time,' rendering the world fundamentally timeless. One proposed solution is the `thermal time hypothesis,' which defines time in terms of states representing systems in thermal equilibrium. On this view, time is supposed to emerge thermodynamically even in a fundamentally timeless context. Here, I develop the worry that the thermal time hypothesis requires dynamics -- and hence time -- to get off the ground, and so risks circularity.
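Schematically, and only as background in the finite-dimensional case: the thermal time hypothesis reads a flow off an equilibrium state. For a state given by a density matrix \rho, one sets

\[ K = -\ln\rho, \qquad \alpha_t(A) = e^{iKt}\,A\,e^{-iKt}, \]

so that \rho is automatically a Gibbs state with respect to the flow it generates; the question pressed in the paper is what justifies treating the flow parameter t as physical time.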
Decoherence, Branching, and the Born Rule for a Mixed-State Everettian Multiverse
Joint work with Eddy Keming Chen.
In Everettian quantum mechanics, justifications for the Born rule appeal to self-locating uncertainty or decision theory. Such justifications have focused exclusively on a pure-state Everettian multiverse, represented by a wavefunction. Recent works in quantum foundations suggest that it is viable to consider a mixed-state Everettian multiverse, represented by a (mixed-state) density matrix. Here, we discuss the conceptual foundations for decoherence and branching in a mixed-state multiverse, and extend arguments for the Born rule to this setting. This extended framework provides a unification of `classical' and `quantum' probabilities, and additional theoretical benefits, for the Everettian picture.
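As a sketch of the setting in standard density-matrix notation (background only, not the paper's specific arguments): with the multiverse represented by a density matrix \rho, decoherence selects a set of (approximately) non-interfering branches, picked out by projectors P_i, and the branch weights take the form

\[ \rho \;\approx\; \sum_i P_i\,\rho\,P_i, \qquad \Pr(i) = \mathrm{Tr}(P_i\,\rho), \]

which reduces to the familiar \Pr(i) = \lVert P_i|\Psi\rangle\rVert^2 when \rho = |\Psi\rangle\langle\Psi| is pure.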
T Falls Apart: On the Status of Classical Temperature in Relativity
Philosophy of Science 90(5), 2023.
Winner of the 2022 Mary B. Hesse essay award and the 18th Robert K. Clifton Prize in Philosophy of Physics
Taking the formal analogies between black holes and classical thermodynamics seriously seems to first require that classical thermodynamics applies in relativistic regimes. Yet, by scrutinizing how classical temperature is extended into special relativity, I argue that it falls apart. I examine four consilient procedures for establishing the classical temperature: the Carnot process, the thermometer, kinetic theory, and black-body radiation. I show how their relativistic counterparts demonstrate no such consilience in defining relativistic temperature. As such, classical temperature doesn’t appear to survive a relativistic extension. I suggest two interpretations for this situation: eliminativism akin to simultaneity, or pluralism akin to rotation.
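For context, the lack of consilience is already visible in the long-standing disagreement over how temperature should transform under a Lorentz boost with factor \gamma; the classic proposals in the literature include

\[ T' = T/\gamma \ \ (\text{Planck, Einstein}), \qquad T' = \gamma T \ \ (\text{Ott}), \qquad T' = T \ \ (\text{Landsberg}). \]

(This is a rough summary of the historical options, not the paper's own taxonomy.)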
Kriterion: Journal of Philosophy 36(2), 2022.
Invited contribution to Special Issue "Lakatos's Undone Work: The Practical Turn and the Division of Philosophy of Mathematics and Philosophy of Science", eds. S. Nagler, H. Pilin, D. Sarikaya
Lakatos’s analysis of progress and degeneration in the Methodology of Scientific Research Programmes is well-known. Less known, however, are his thoughts on degeneration in Proofs and Refutations. I propose and motivate two new criteria for degeneration based on the discussion in Proofs and Refutations – superfluity and authoritarianism. I show how these criteria augment the account in Methodology of Scientific Research Programmes, providing a generalized Lakatosian account of progress and degeneration. I then apply this generalized account to a key transition point in the history of entropy – the transition to an information-theoretic interpretation of entropy – by assessing Jaynes’s 1957 paper on information theory and statistical mechanics.
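For background on the transition being assessed (standard formulas, not the paper's notation): Jaynes's information-theoretic reading identifies equilibrium distributions with those maximizing the Shannon entropy subject to constraints,

\[ S = -k\sum_i p_i \ln p_i, \qquad \text{subject to } \sum_i p_i = 1,\ \ \sum_i p_i E_i = \langle E\rangle, \]

which yields the canonical distribution p_i \propto e^{-\beta E_i}.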
Programs in quantum gravity often claim that time emerges from fundamentally timeless physics. In the semiclassical time program, time arises only after certain approximations are taken. Here we ask what justifies taking these approximations, and show that time seems to sneak in when answering this question. This raises the worry that the approach is either unjustified or circular in deriving time from no-time.
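Very schematically, as background on the derivation this literature discusses: the Wheeler-DeWitt constraint \hat{H}\Psi = 0 contains no time parameter; under a Born-Oppenheimer/WKB ansatz

\[ \Psi[h,\phi] \;\approx\; e^{iS[h]/\hbar}\,\psi[h,\phi], \]

with S a solution of the gravitational Hamilton-Jacobi equation, the matter state approximately obeys a time-dependent Schrödinger equation, i\hbar\,\partial_\tau\psi \approx \hat{H}_{\mathrm{m}}\psi, where the `WKB time' \tau is defined along the classical trajectories determined by S. The worry here concerns what justifies the approximations behind this ansatz in the first place.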
Conventional wisdom holds that the von Neumann entropy corresponds to thermodynamic entropy, but Hemmo and Shenker (2006) have recently argued against this view by attacking von Neumann's (1955) argument. I argue that Hemmo and Shenker's arguments fail due to several misunderstandings: about statistical mechanical and thermodynamic domains of applicability, about the nature of mixed states, and about the role of approximations in physics. As a result, their arguments fail in all cases: in the single-particle case, the finite-particles case, and the infinite-particles case.
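For reference, the quantities at issue (standard definitions only): the von Neumann entropy of a state \rho and the Clausius (thermodynamic) entropy change are

\[ S_{\mathrm{vN}}(\rho) = -k\,\mathrm{Tr}(\rho\ln\rho), \qquad \Delta S_{\mathrm{th}} = \int \frac{\delta Q_{\mathrm{rev}}}{T}, \]

and the dispute concerns the conditions under which the two coincide.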
Improving LIME Robustness with Smarter Locality Sampling
Accepted after peer review at AdvML '20: Workshop on Adversarial Learning Methods for Machine Learning and Data Mining, KDD2020, August 24, 2020, San Diego, CA.
Video of talk available here.
Joint work with Sean Saito, Rocco Hu and Nicholas Capel.
Explainability algorithms such as LIME have helped machine learning systems achieve transparency and fairness, qualities that are important in commercial use cases. However, recent work has shown that LIME's naive sampling strategy can be exploited by an adversary to conceal biased, harmful behavior. We propose to make LIME more robust by training a generative adversarial network to sample more realistic synthetic data, which the explainer then uses to generate explanations. Our experiments show that, compared to vanilla LIME, the proposed method improves accuracy in detecting biased, adversarial behavior across three real-world datasets, reaching up to 99.94% top-1 accuracy in some cases, while maintaining comparable explanation quality.
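The following is a minimal sketch of the LIME-style pipeline the paper modifies, written to make the role of the locality sampler explicit. Everything here is illustrative: the function names, the Gaussian sampler, and the toy black box are assumptions for the example, not the paper's implementation; the paper's contribution corresponds to swapping the naive sampler for a GAN trained to produce realistic synthetic neighbours.

```python
import numpy as np
from sklearn.linear_model import Ridge

def gaussian_sampler(x, n_samples=500, scale=0.5, rng=None):
    """Naive locality sampler: Gaussian perturbations around x (the part an adversary can exploit)."""
    rng = rng or np.random.default_rng(0)
    return x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))

def explain_instance(x, predict_fn, sample_fn=gaussian_sampler, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate to the black box near x."""
    Z = sample_fn(x)                           # synthetic neighbours of x
    y = predict_fn(Z)                          # black-box outputs on the neighbours
    d = np.linalg.norm(Z - x, axis=1)          # distance of each neighbour to x
    w = np.exp(-(d ** 2) / kernel_width ** 2)  # exponential proximity kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_                     # local feature attributions

if __name__ == "__main__":
    # Toy black box: a logistic model dominated by the first feature.
    predict_fn = lambda Z: 1.0 / (1.0 + np.exp(-(3.0 * Z[:, 0] - Z[:, 1])))
    print(explain_instance(np.array([0.2, -0.1]), predict_fn))
```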
The laws of classical logic are taken to be logical truths, which in turn are taken to hold objectively. However, we might question our faith in these truths: why are they true? One general approach, proposed by Putnam and, more recently, Dickson and Maddy, is to adopt empiricism about logic. On this view, logical truths are true in virtue of the world alone – this gives logical truths an air of objectivity. Putnam and Dickson both take logical truths to be true in virtue of the world’s structure, given by our best empirical theory, quantum mechanics. This assumes a determinate logical structure of the world given by quantum mechanics. Here, I argue that this assumption is false, and that the world’s logical structure, and hence the related ‘true’ logic, are underdetermined. This leads to what I call empirical conventionalism.
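For background, the textbook spin-1/2 illustration of the structural fact in play (offered for orientation, not as part of the paper's argument): in the lattice of Hilbert-space subspaces, distributivity fails. With a = `spin-z is up', b = `spin-x is up', c = `spin-x is down',

\[ a \wedge (b \vee c) = a \wedge \mathbf{1} = a, \qquad (a \wedge b) \vee (a \wedge c) = \mathbf{0}, \]

since b \vee c spans the whole state space while a shares only the zero subspace with b and with c. Which logical structure such facts are taken to fix is the kind of question argued here to be underdetermined.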
Public writings, non-peer-reviewed articles, book reviews, etc.
LLM-Powered Psychiatry from Back to Front
The Blog of the Linde Center for Science, Society, and Policy, Caltech.
Non-peer-reviewed, shortened version. Full paper accepted at the Journal of Medical Ethics.
The increasing demand for mental healthcare and the persistent shortfall in the supply of mental healthcare providers create a problem of access, one that large language models (LLMs) seem poised to ameliorate. However, we sketch some potential ethical risks that may arise for LLM-powered psychiatry, from LLMs' stochastic back end to hyperparameter tuning, fine-tuning, and prompting. We end with some questions about whether LLMs can really resolve the problem of access.
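As a small illustration of the stochastic back end and hyperparameter tuning mentioned above: the sampling temperature is one standard knob controlling how sharply an LLM concentrates probability on its most typical continuation. The sketch below is illustrative only; the scores are invented and the function is not any particular model's API.

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Turn raw token scores into sampling probabilities at a given temperature."""
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                # subtract the max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

# Hypothetical scores for three candidate replies.
logits = [4.0, 2.0, 1.0]
for t in (0.2, 1.0, 2.0):
    print(t, softmax_with_temperature(logits, t).round(3))
# Low temperature concentrates probability on the single most typical reply;
# higher temperature spreads probability over less likely alternatives.
```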