T Falls Apart: On the Status of Classical Temperature in Relativity (Forthcoming, Philosophy of Science 90(5), winner of the 2022 Mary B. Hesse essay award and the 18th Robert K. Clifton Prize in Philosophy of Physics)
Taking the formal analogies between black holes and classical thermodynamics seriously seems to first require that classical thermodynamics applies in relativistic regimes. Yet, by scrutinizing how classical temperature is extended into special relativity, I argue that it falls apart. I examine four consilient procedures for establishing the classical temperature: the Carnot process, the thermometer, kinetic theory, and black-body radiation. I show how their relativistic counterparts demonstrate no such consilience in defining relativistic temperature. As such, classical temperature doesn’t appear to survive a relativistic extension. I suggest two interpretations for this situation: eliminativism akin to simultaneity, or pluralism akin to rotation.
No Time for Time from No-Time (with Craig Callender, Philosophy of Science 88(5), 2021)
Programs in quantum gravity often claim that time emerges from fundamentally timeless physics. In the semiclassical time program, time arises only after approximations are taken. Here we ask what justifies taking these approximations and show that time seems to sneak in when answering this question. This raises the worry that the approach is either unjustified or circular in deriving time from no-time.
Improving LIME Robustness with Smarter Locality Sampling (with Sean Saito, Rocco Hu and Nicholas Capel, AdvML '20: Workshop on Adversarial Learning Methods for Machine Learning and Data Mining, KDD2020, August 24, 2020, San Diego, CA.)
(Video of talk available in gallery.)
Explainability algorithms such as LIME have enabled machine learning systems to exhibit transparency and fairness, qualities that are important in commercial use cases. However, recent work has shown that LIME's naive sampling strategy can be exploited by an adversary to conceal biased, harmful behavior. We propose to make LIME more robust by training a generative adversarial network to sample more realistic synthetic data, which the explainer uses to generate explanations. Our experiments show that the proposed method detects biased, adversarial behavior more accurately than vanilla LIME across three real-world datasets, reaching up to 99.94% top-1 accuracy in some cases, while maintaining comparable explanation quality.
Degeneration and Entropy (in Lakatos’s Undone Work: The Practical Turn and the Division of Philosophy of Mathematics and Philosophy of Science, special issue of Kriterion: Journal of Philosophy, edited by S. Nagler, H. Pilin, and D. Sarikaya, 2022.)
Lakatos’s analysis of progress and degeneration in the Methodology of Scientific Research Programmes is well-known. Less known, however, are his thoughts on degeneration in Proofs and Refutations. I propose and motivate two new criteria for degeneration based on the discussion in Proofs and Refutations – superfluity and authoritarianism. I show how these criteria augment the account in Methodology of Scientific Research Programmes, providing a generalized Lakatosian account of progress and degeneration. I then apply this generalized account to a key transition point in the history of entropy – the transition to an information-theoretic interpretation of entropy – by assessing Jaynes’s 1957 paper on information theory and statistical mechanics.
Does Von Neumann Entropy Correspond to Thermodynamic Entropy? (Philosophy of Science 88(1), 2021)
Conventional wisdom holds that the von Neumann entropy corresponds to thermodynamic entropy, but Hemmo and Shenker (2006) have argued against this view by attacking von Neumann's (1955) argument. I argue that Hemmo and Shenker's arguments fail due to several misunderstandings: about the statistical mechanical and thermodynamic domains of applicability, about the nature of mixed states, and about the role of approximations in physics. As a result, their arguments fail in all cases: the single-particle case, the finite-particles case, and the infinite-particles case.
An Empirical Route to Logical ‘Conventionalism’
(In: Baltag A., Seligman J., Yamada T. (eds) Logic, Rationality, and Interaction. LORI 2017. Lecture Notes in Computer Science, vol 10455. Springer, Berlin, Heidelberg.)
The laws of classical logic are taken to be logical truths, which in turn are taken to hold objectively. However, we might question our faith in these truths: why are they true? One general approach, proposed by Putnam and more recently Dickson or Maddy, is to adopt empiricism about logic. On this view, logical truths are true because they are true of the world alone – this gives logical truths an air of objectivity. Putnam and Dickson both take logical truths to be true in virtue of the world’s structure, given by our best empirical theory, quantum mechanics. This assumes a determinate logical structure of the world given by quantum mechanics. Here, I argue that this assumption is false, and that the world’s logical structure, and hence the related ‘true’ logic, are underdetermined. This leads to what I call empirical conventionalism.