Zina B. Ward

Much of science is aimed at producing homogeneity: experiments are designed to isolate uniform causal factors, data are expected to be observer-independent, and replications test whether findings are stable across laboratories. Moreover, induction itself is often thought to require an assumption that nature is regular or uniform. The things we study, however, are often inconveniently heterogeneous. So too are the people studying them: different scientists adopt different assumptions, approaches, and values. My research explores what the ubiquity of such variation means for our philosophical understanding of science and for science itself.

Variation and Natural Kinds

My book project, From Clusters to Kinds: A Naturalized Epistemology of Classification, develops a novel cluster theory of kinds that aims to capture the role of kinds in our cognitive economy. (In a recent paper on William Whewell I offer a general characterization of the family of cluster theories of kinds.) Compared to existing cluster theories, my account is minimal in the sense that it holds only that kinds are clusters of objects in an abstract property space, i.e., a multidimensional space whose axes represent significant properties objects can have. Such property clusters are hybrid entities shaped by both the structure of the world and what we find interesting in it. I show that this view of kinds has important normative implications for the practice of classification.

Philosophy of Neuroscience and Psychology

We think, feel, and perceive differently from one another. My doctoral research examined the scientific and philosophical consequences of such individual differences in cognitive science. In published work, I formulate a theory of what it takes to explain variation, discuss the different ways we characterize individual differences in cognitive science, and reconsider how neuroscientific data should be aggregated in the face of variation between brains. (The latter paper won the 2022 Popper Prize.)

There is also variability in the representational ascriptions made to minds and brains. Although philosophers have focused their attention on sensory representation, neuroscientists and psychologists ascribe content to a diverse array of non-sensory states, including motor states. In a recent paper I trace the history of the concept of representation through an early debate about what the motor cortex “represents.” I’m now writing a paper on what it means for a motor state to be a representation. In future work I plan to explore other classic puzzles about representation in the motor domain.

Values in Science

Scientists routinely draw different conclusions based on their differing values, but philosophers disagree about whether this is legitimate. I aim to put these discussions of values in science on firmer footing by examining the nature of values and the role of scientific claims in public discourse. In one paper I propose a taxonomy that disambiguates the ways values can relate to scientific choices. I then use this taxonomy to critically assess the view that non-epistemic values must play a role in scientific justification. Elsewhere I argue that although scientists apply epistemic values in different ways, this does not necessitate the involvement of non-epistemic values in science. I am now thinking about the role of scientific claims in public justification and how debates about values in science link up with discussions of pragmatic encroachment and the ethics of belief.

Data Ethics

Technological advances have made tools for exercising influence over people’s behavior increasingly powerful. I am interested in moral and political questions raised by such automated influence. I am currently working on a collaborative project on recommender systems (such as those used by Netflix, Spotify, Instagram, and Google Search). We propose a taxonomy of recommender systems that captures how specific design features make such systems more or less threatening to users’ agency. I am also thinking about what makes an automated decision-making system fair and what philosophers can add to technical discussions of algorithmic fairness.