Web dev at the end of the world, from Hveragerði, Iceland

Links (5 August 2024)

This week’s highlight: a paper on how machine learning research is filled with pseudoscience

“The reanimation of pseudoscience in machine learning and its ethical repercussions” (Patterns)

This paper is excellent. Some highlights:

However, the mythology surrounding ML presents it—and justifies its usage in said contexts over the status quo of human decision-making—as paradigmatically objective in the sense of being free from the influence of human values. This enables the “laundering” of uninterrogated values into the outputs of such ML-based decision-making and decision-support systems, where they are then reified as objective empirical truth.

Perhaps unsurprisingly, it is precisely this same logic that motivates the usage of ML methods in resurrecting physiognomic research programs in the 21st century. The method of training a model to detect correlations from raw data is justified in being more reliable, objective, or free from (human) bias.

The gatekeeping methods present in scientific disciplines that typically prevent pseudoscientific research practices from getting through are not present for applied ML in either industry or academic research settings. The same lack of domain expertise and subject-matter-specific methodological training characteristic of those undertaking applied ML projects is typically also lacking in corporate oversight mechanisms as well as among reviewers at generalist ML conferences. ML has largely shrugged off the yoke of traditional peer-review mechanisms, opting instead to disseminate research via online archive platforms. ML scholars do not submit their work to refereed academic journals. Research in ML receives visibility and acclaim when it is accepted for presentation at a prestigious conference. However, it is typically shared and cited, and its methods built upon and extended, without first having gone through a peer-review process. This changes the function of refereeing scholarship. The peer-review process that does exist for ML conferences does not exist for the purpose of selecting which work is suitable for public consumption but, rather, as a kind of merit-awarding mechanism. The process awards (the appearance of) novelty and clear quantitative results. Even relative to the modified functional role of refereeing in ML, however, peer-reviewing procedures in the field are widely acknowledged to be ineffective and unprincipled.78,79 Reviewers are often overburdened and ill-equipped to the task. What is more, they are neither trained nor incentivized to review fairly or to prioritize meaningful measures of success and adequacy in the work they are reviewing.

This brings us to the matter of perverse incentives in ML engineering and scholarship. Both ML qua academic field and ML qua software engineering profession possess a culture that pushes to maximize output and quantitative gains at the cost of appropriate training and quality control.

Honestly, it’s difficult to choose which bit of this paper to quote.

The rest

You can also find me on Mastodon and Bluesky