Papers

Published:

  • Rushing, B. “The Habitual Horizon: Ramsey on Cognition and Forecasts” (2024) – Journal for the History of Analytical Philosophy.

    Abstract: At the end of “General Propositions and Causality”, Frank Ramsey offers an enigmatic footnote that briefly describes his philosophy of science as a “forecasting theory”. What he means by this, and by a “forecast”, is unclear. However, he uses the term sporadically elsewhere in his unpublished notes, and an examination of those notes reveals the skeleton of a theory of cognition. Ramsey held that all actions are at root driven by the sum total of a person’s dispositions or habits. These habits operate in an unconscious process that produces psychological expectations about the realization of desires. When those expectations are frustrated, the violation registers consciously for the individual as a proposition, and the offending habit is identified. Humans can then regulate and change those habits by the conscious application of logic through deliberation. The applicable logic is Ramsey’s decision theory, which aims to make beliefs probabilistically coherent by adopting the laws and chances that signify the habits people might use to guide behavior. The outcome of this deliberation is to refashion psychological expectations as mathematical expectations on laws and chances. These mathematical expectations are forecasts, and a forecasting theory of science takes scientific theories to provide forecasts.

  • Rushing, B. and Gomez-Lavin, J. “Is the Scaling Hypothesis Falsifiable?” (2024) – 2024 Biennial Meeting of the Philosophy of Science Association.

    Abstract: The scaling hypothesis in artificial intelligence claims that a model’s cognitive ability scales with increased compute. This hypothesis has two interpretations: a weak version, on which model error rates decrease as a power-law function of compute, and a strong version, on which new cognitive abilities unexpectedly emerge as error rates decrease. We argue that the weak version is falsifiable but the strong version is not, because it fails to make exact predictions about which abilities emerge and when. This points to the difficulty of measuring cognitive abilities in algorithms, since we lack good, ecologically valid measurements of those abilities.
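
    A minimal sketch of the checkable prediction the weak version makes (illustrative only; the compute budgets and error rates below are made-up numbers, not data from the paper): fit error as a power law in compute and extrapolate.

      import numpy as np

      # Hypothetical compute budgets (FLOPs) and test error rates -- made-up numbers.
      compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])
      error = np.array([0.30, 0.19, 0.12, 0.075, 0.048])

      # Weak version: error ~ a * compute**(-b). A straight line in log-log space
      # recovers the exponent; systematic curvature would falsify the claim.
      slope, intercept = np.polyfit(np.log(compute), np.log(error), 1)
      a, b = np.exp(intercept), -slope
      print(f"fitted power law: error ~ {a:.3g} * C^(-{b:.3f})")

      # The out-of-sample prediction the weak version stakes itself on:
      print(f"predicted error at C = 1e23 FLOPs: {a * 1e23 ** (-b):.4f}")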

  • Rushing, B. “Putting the ‘Decision’ in Ramsey’s ‘Theories’” (2023) – Studies in History and Philosophy of Science.

    Abstract: Frank Ramsey’s philosophy of science is considered abstruse due to the incompleteness and difficulty of his paper “Theories”. This has not prevented various authors from arguing that Ramsey is committed to meaning holism for scientific theories, and that his philosophy of science is anti-realist but anti-reductionist. However, it is unclear exactly how meaning holism works for Ramsey, and how he can be both anti-realist and anti-reductionist. I argue that clarity can be gained on both issues by examining Ramsey’s philosophy of science through a reconstruction of his decision theory compatible with his later philosophical beliefs. I develop an account of how credences can be formed over singular, theoretical propositions despite those propositions being fictions. Credences are ultimately measured by preferences over conditionals whose antecedents are the verification conditions of theoretical propositions and whose outcomes are elements of a privileged partition on an agent’s possibility space induced by the language of the theory. Those verification conditions are the observational elements formed from unions of this induced partition. Meaning holism is explained as the sensitivity of theoretical propositions to their verification conditions. Anti-realism and anti-reductionism can both be maintained because theoretical propositions form a finer partition of possibility space than observational propositions, which prevents the former from being truth-functions of the latter.
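
    The closing claim can be seen in a toy model (my illustration, not the paper’s formalism): if the theoretical language induces a strictly finer partition of possibility space than the observational language, then some theoretical propositions are not unions of observational cells, and so are not truth-functions of observational propositions.

      from itertools import combinations

      # Toy possibility space with four worlds.
      # The observational language induces a coarse partition of the space...
      observational_cells = [{1, 2}, {3, 4}]
      # ...while the theoretical language induces a strictly finer one.
      theoretical_cells = [{1}, {2}, {3, 4}]

      def expressible(cells):
          """All propositions obtainable as unions of the given partition cells."""
          props = {frozenset()}
          for r in range(1, len(cells) + 1):
              for combo in combinations(cells, r):
                  props.add(frozenset().union(*combo))
          return props

      # {1} is a theoretical proposition but no union of observational cells,
      # so it is not a truth-function of observational propositions...
      print(frozenset({1}) in expressible(observational_cells))   # False
      # ...while every observational proposition is recoverable from the finer cells.
      print(frozenset({1, 2}) in expressible(theoretical_cells))  # True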

  • Rushing, B. “No Free Theory Choice from Machine Learning” (2022) – Synthese.

    Abstract: Ravit Dotan argues that a No Free Lunch theorem (NFL) from machine learning shows epistemic values are insufficient for deciding the truth of scientific hypotheses. She argues that NFL shows the best-case accuracy of scientific hypotheses to be no better than chance. Since accuracy underpins every epistemic value, non-epistemic values are needed to assess the truth of scientific hypotheses. However, NFL cannot be coherently applied to the problem of theory choice. The NFL theorem Dotan’s argument relies upon belongs to a family of theorems in search, optimization, and machine learning, all of which claim to show that if no assumptions are made about a search or optimization problem or learning situation, then the best-case performance of an algorithm is that of random search or random guessing. A closer inspection shows that these theorems all rely upon assigning uniform probabilities over problems or learning situations, which is just the Principle of Indifference. A counterexample can be crafted showing that NFL cannot be coherently applied across different descriptions of the same learning situation. To avoid this counterexample, Dotan needs to privilege some description of the learning situation faced by scientists. However, this means that NFL cannot be applied, since an important assumption about the problem is being made. So Dotan faces a dilemma: either NFL leads to incoherent best-case partial beliefs or it is inapplicable to the problem of theory choice. This negative result has implications for the larger debate over theory choice.
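
    A standard toy illustration of the kind of incoherence at issue (my example, not the paper’s counterexample): the Principle of Indifference, applied under two descriptions of the same situation, assigns two different credences to one and the same event.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 1_000_000

      # Description 1: indifference over a cube's side length in [0, 2].
      side = rng.uniform(0.0, 2.0, n)
      p1 = (side <= 1.0).mean()      # credence that side <= 1

      # Description 2: indifference over the same cube's volume in [0, 8].
      volume = rng.uniform(0.0, 8.0, n)
      p2 = (volume <= 1.0).mean()    # the very same event, since side <= 1 iff volume <= 1

      print(p1, p2)  # ~0.50 vs ~0.125: one event, two "indifferent" credences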

Preprints:

  • Rushing, B. “Peirce in the Machine: How Mixture of Experts Models Perform Hypothesis Construction” (2024).

    Abstract: Mixture of experts is a method in machine learning that aggregates the predictions of specialized expert models. This method often outperforms Bayesian methods despite Bayesian methods having stronger inductive guarantees. We argue that this is due to the greater functional capacity of mixture of experts. We prove that, in a limiting case, mixture of experts has greater capacity than equivalent Bayesian methods, and we corroborate this through experiments on non-limiting cases. Finally, we conclude that mixture of experts is a type of abductive reasoning in the Peircean sense of hypothesis construction.
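
    A toy contrast behind the capacity claim (my sketch, not the paper’s proof): Bayesian model averaging weights experts with one fixed posterior, while a mixture-of-experts gate re-weights them per input, so the mixture can represent functions no fixed average of the same experts can.

      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      x = np.linspace(-3.0, 3.0, 7)

      # Two fixed experts with constant predictions.
      expert_a = np.ones_like(x)     # always predicts +1
      expert_b = -np.ones_like(x)    # always predicts -1

      # Bayesian model averaging: one posterior weight for every input,
      # so the aggregate is itself a constant function.
      posterior_a = 0.7
      bma = posterior_a * expert_a + (1 - posterior_a) * expert_b

      # Mixture of experts: a gate re-weights the experts per input,
      # so the aggregate varies with x even though neither expert does.
      gate = sigmoid(2.0 * x)        # input-dependent weight on expert_a
      moe = gate * expert_a + (1 - gate) * expert_b

      print("BMA:", bma)   # constant 0.4 everywhere
      print("MoE:", moe)   # sweeps from about -1 to about +1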

  • Rushing, B. “AI Safety Collides with the Overattribution Bias” (2024).

    Abstract: The field of Artificial Intelligence (AI) safety evaluations aims to test AI behavior for problematic capabilities like deception. However, some scientists have cautioned against using behavior to infer general cognitive abilities, because of the human tendency to overattribute cognition to everything. To avoid these errors, they recommend adopting a heuristic which states that behavior provides no evidence for cognitive capabilities unless some theoretical feature is present to justify that inference. We make that heuristic precise in terms of conditional independencies in our credences between behavior, cognitive capabilities, and the presence or absence of theoretical features. When made precise, the heuristic absurdly entails that failure at a behavioral task supports the presence of a theoretical feature. This is because the heuristic imposes inductive dependencies that conflict with our best causal models of cognition. Weakening the heuristic to allow only weak evidence between behavior and cognitive abilities leads to similar problems. Consequently, we suggest abandoning the heuristic and updating those causal models in light of the behavior observed when testing AIs for troublesome cognitive abilities.
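
    A minimal sketch of what “made precise” might look like (made-up credences, my reconstruction rather than the paper’s model): the heuristic becomes a screening-off condition on which behavior carries no evidence about capability when the theoretical feature is absent; the paper’s reductio proceeds from constraints of this kind.

      from itertools import product

      # Made-up credences over F (theoretical feature), C (cognitive capability),
      # and B (passing the behavioral test).
      P_F = 0.5
      P_C_given_F = {True: 0.8, False: 0.3}            # P(C | F)
      P_B_given_CF = {                                 # P(B | C, F)
          (True, True): 0.9, (False, True): 0.2,       # feature present: B tracks C
          (True, False): 0.5, (False, False): 0.5,     # feature absent: B ignores C
      }

      joint = {}
      for f, c, b in product([True, False], repeat=3):
          p = P_F if f else 1 - P_F
          p *= P_C_given_F[f] if c else 1 - P_C_given_F[f]
          p *= P_B_given_CF[(c, f)] if b else 1 - P_B_given_CF[(c, f)]
          joint[(f, c, b)] = p

      def prob(event):
          return sum(p for world, p in joint.items() if event(*world))

      # The heuristic as screening-off: given not-F, behavior is no evidence
      # for capability, i.e. P(C | B, not-F) equals P(C | not-F).
      lhs = prob(lambda f, c, b: not f and c and b) / prob(lambda f, c, b: not f and b)
      rhs = prob(lambda f, c, b: not f and c) / prob(lambda f, c, b: not f)
      print(lhs, rhs)  # both 0.3 under these credences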

Please email me if you would like a copy of any of these manuscripts.