Simons Cluster on Algorithmic Fairness
Over the summer, the Simons Institute ran a short program (a “cluster”) on Algorithmic Fairness, which I briefly discuss here. This was only one instance of a flurry of recent programs and events on Algorithmic Fairness with substantial representation of TOC researchers (often as part of a multidisciplinary collaboration). The growing interest within the theory community in Algorithmic Fairness and, more generally, in the societal implications of computation is well motivated and timely, given how prevalent computation is in every aspect of our lives (see also here).
Several upcoming posts (by myself and by others) will be devoted to the (beautiful) “emerging theory of algorithmic fairness.” Most of these posts will be more technical, but I’d like to devote today’s post to a short discussion of what theoreticians can contribute to this multidisciplinary effort.
My own belief is that computer scientists cannot solve Algorithmic Fairness (or privacy in data analysis, or any other issue of this sort) on their own. On the other hand, these issues, in their current computation-driven, large-scale incarnation, cannot be seriously addressed without major involvement of computer scientists. Furthermore, what is needed (as I will try to demonstrate in future posts) is a true collaboration, rather than a division of labor in which one community subcontracts another for specific expertise.
All-or-nothing-ism
One of the reasons the Theory of Computing is particularly suited to this challenge is our basic optimism in the face of complexities and even impossibilities. The topic of Algorithmic Fairness seems to be particularly entangled with such complexities. This is the source of a line of criticism regarding the inherent limitations of the “tech solutionist” approach to Algorithmic Fairness. For example: “discrimination is the result of biases in the data and cannot be addressed at the level of machine learning.” Another example: “unless we understand the causal structure we are analyzing, fairness cannot be obtained.” These criticisms (while not as devastating as they are sometimes presented) are not without merit, and they deserve a much more technical discussion (which will hopefully come in future posts). At this point I’d like to make two comments:
- The computational lens has served us well in the study of Cryptography, Game Theory, Learning, Privacy, and beyond. There is already evidence that it is serving us well in the study of Algorithmic Fairness. I believe that the pessimistic view of what I would call “all-or-nothing-ism” ignores the incredible track record of the Theory of Computing in addressing complicated, human-involving subject areas, and ignores the progress already made on Algorithmic Fairness.
- Furthermore, no one is planning to stop analyzing data (for example, in medical research) because our data is imperfect or because we haven’t figured out causality. Algorithmic Fairness requires both the best solutions we can come up with right now and a concerted research effort to guarantee better fairness in the future.