Links (5 August 2024)
This week’s highlight: a paper on how machine learning research is filled with pseudoscience #
“The reanimation of pseudoscience in machine learning and its ethical repercussions” (Patterns)
This paper is excellent. Some highlights:
However, the mythology surrounding ML presents it—and justifies its usage in said contexts over the status quo of human decision-making—as paradigmatically objective in the sense of being free from the influence of human values. This enables the “laundering” of uninterrogated values into the outputs of such ML-based decision-making and decision-support systems, where they are then reified as objective empirical truth.
…
Perhaps unsurprisingly, it is precisely this same logic that motivates the usage of ML methods in resurrecting physiognomic research programs in the 21st century. The method of training a model to detect correlations from raw data is justified in being more reliable, objective, or free from (human) bias.
…
The gatekeeping methods present in scientific disciplines that typically prevent pseudoscientific research practices from getting through are not present for applied ML in either industry or academic research settings. The same lack of domain expertise and subject-matter-specific methodological training characteristic of those undertaking applied ML projects is typically also lacking in corporate oversight mechanisms as well as among reviewers at generalist ML conferences. ML has largely shrugged off the yoke of traditional peer-review mechanisms, opting instead to disseminate research via online archive platforms. ML scholars do not submit their work to refereed academic journals. Research in ML receives visibility and acclaim when it is accepted for presentation at a prestigious conference. However, it is typically shared and cited, and its methods built upon and extended, without first having gone through a peer-review process. This changes the function of refereeing scholarship. The peer-review process that does exist for ML conferences does not exist for the purpose of selecting which work is suitable for public consumption but, rather, as a kind of merit-awarding mechanism. The process awards (the appearance of) novelty and clear quantitative results. Even relative to the modified functional role of refereeing in ML, however, peer-reviewing procedures in the field are widely acknowledged to be ineffective and unprincipled. Reviewers are often overburdened and ill-equipped to the task. What is more, they are neither trained nor incentivized to review fairly or to prioritize meaningful measures of success and adequacy in the work they are reviewing.
…
This brings us to the matter of perverse incentives in ML engineering and scholarship. Both ML qua academic field and ML qua software engineering profession possess a culture that pushes to maximize output and quantitative gains at the cost of appropriate training and quality control.
Honestly difficult to choose which bit of this paper to quote.
The rest #
- “I watched Nvidia’s Computex 2024 keynote and it made my blood run cold | TechRadar”. “For everyone else, however, all I saw was the end of the last few glaciers on Earth and the mass displacement of people that will result from the lack of drinking water.”
- “But there is something off with the Deno ecosystem”. I noted this a while back and I think Deno’s ecosystem has only gotten worse since. Focusing on node compatibility might look on paper like a good strategy for adoption, but it both sabotages your own ecosystem and condemns you to forever follow, not lead, node (see the sketch after this list for what that compatibility looks like in practice).
- “Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025”. On the one hand, everything from Gartner is unadulterated bullshit; they’re especially prone to made-up just-so storytelling, as with their utterly fact-free “hype cycle”. On the other, they’re bullish on “AI” and still come up with this, so the truth is likely to be worse.
- “A handful of reasons JavaScript won’t be available - Piccalilli”. “It’s always safe to assume JavaScript will not be available, so here’s a quick list of very realistic reasons it won’t be.”
- “Opinion: What’s behind the AI boom? Exploited humans”. “In Iceland, we visited data center workers who documented the energy-intensive nature of these centers, which consume more electricity than Icelandic households combined.” One of the reasons why this shit makes me so angry. This is directly threatening Iceland’s energy transition as it soaks up all excess capacity.
- “Consenting to decisions | everything changes”. “it isn’t necessarily the case that consensus decision-making processes require more time.”
- ‘Adverse impacts of revealing the presence of “Artificial Intelligence (AI)”’. ‘The findings of the study indicated that the inclusion of the “Artificial Intelligence” term in descriptions of products and services decreases purchase intention.’ Like I’ve been saying, we don’t need the term “slop”. Consumers have decided that “AI” in its entirety is bullshit.
- ‘Nike’s $25B blunder shows us the limits of “data-driven” | by Pavel Samsonov | Jul, 2024 | Medium’
- “Study Finds Consumers Are Actively Turned Off by Products That Use AI”. “When AI is mentioned, it tends to lower emotional trust, which in turn decreases purchase intentions”
- “Everything Is Broken. Once upon a time, a friend of mine… | by Quinn Norton | The Message | Medium”. Software, and computing by extension, is so incredibly broken.
- “TBM 304: Losing a Day a Week to Inefficiencies?”. “However, for companies to truly address the challenge, they must figure out how to remove the layers of fear, blame, and apathy.”
- “Teaching to the Test. Why IT Security Audits Aren’t Making Stuff Safer”. “We all know this crap doesn’t work and the sooner we can stop pretending it makes a difference.”
- “How Extreme Heat Harms Planes, Trains, Water Mains and Other Crucial Infrastructure | Scientific American”. “Even without the climate crisis, the country’s aging infrastructure is struggling because of insufficient maintenance and heavy demand.”
- “Tech stocks take a pounding as hedge fund Elliott warns AI trades like Nvidia are in ‘bubble land’”. “What began with Tesla and Google has now gathered steam with selloffs spreading across all major cloud computing and semiconductor stocks, led by Intel.” It’ll be interesting to see what happens next. And interesting to note that this dip was triggered by Intel’s mismanagement.
- “NVIDIA’s next-gen Blackwell AI GPUs delayed, rumor has it ‘design flaws’ are to blame”. The longer this goes on, the bigger the “Osborne effect” on current-gen chips will be. Also, between Nvidia, AMD, and Intel, US chip companies don’t seem to have their act together.
- “Google Gemini AI Ad Backlash: ‘Dear Sydney’ Pulled From Olympics on NBC”. “At the same time, Google continued to defend the spot, which was created by its in-house creative team.” I don’t think it’s a coincidence that both this and Apple’s Crush ad were made by in-house teams. These companies are disconnected bubbles, which renders them incapable of understanding people outside the bubble. Now think about how their software and product design is done inside the bubble as well.
- “Two months of feed reader behavior analysis”
- “Elliott says Nvidia is in a ‘bubble’ and AI is ‘overhyped’”. “However, the chipmaker is still up about 120 per cent this year and more than 600 per cent since the start of last year.” When the bubble actually pops, stock price drops won’t be limited to low double-digit percentages. Nvidia was probably overpriced before the 600 per cent rise.
- “Why the collapse of the Generative AI bubble may be imminent”
- “Cascade Layers are useless* - Manuel Matuzovic”. “*if you don’t understand the problems they solve”
- “inessential: NetNewsWire and Conditional GET Issues”. For the unfamiliar, there’s a sketch of how conditional GET works after this list.
- “The Deno Package Paradox – David Bushell – Freelance Web Design (UK)”. “Last week I noted that six months after public launch JSR has less than 5000 packages. Checking today that number is 3655 — it fell by over 1000”. There just isn’t a good reason for anybody to use JSR.
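Since the Deno item above hinges on what “node compatibility” means in practice, here’s a minimal sketch, assuming nothing beyond Deno’s documented node: and npm: specifiers. The tiny server itself is illustrative, not from the linked post.

```typescript
// Minimal sketch of Deno's node-compatibility surface (illustrative).
// node: specifiers map to node's built-in modules:
import { createServer } from "node:http";

// npm: specifiers pull packages straight from the npm registry, e.g.:
//   import express from "npm:express@4";

// A node-style HTTP server, unchanged, running under Deno.
const server = createServer((_req, res) => {
  res.end("node API served by Deno\n");
});

server.listen(8000, () => {
  console.log("Listening on http://localhost:8000/");
});

// Run with: deno run --allow-net server.ts
```

This is exactly the double-edged sword: it makes adoption trivially easy, and it makes node’s APIs, not Deno’s, the centre of gravity.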
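And because the NetNewsWire item concerns conditional GET, here’s a rough sketch of the mechanism itself: a polite feed reader remembers the ETag and Last-Modified headers from one fetch and echoes them back as If-None-Match and If-Modified-Since on the next, so an unchanged feed costs a small 304 response instead of a full download. The in-memory cache and the feed URL below are illustrative assumptions, not NetNewsWire’s actual code.

```typescript
// Rough sketch of HTTP conditional GET for a feed reader (illustrative).

interface CacheEntry {
  etag?: string;
  lastModified?: string;
  body?: string;
}

// Hypothetical in-memory cache; a real reader would persist this.
const cache = new Map<string, CacheEntry>();

async function fetchFeed(url: string): Promise<string | undefined> {
  const entry = cache.get(url) ?? {};
  const headers = new Headers();

  // Echo the validators from the previous response, if we have them.
  if (entry.etag) headers.set("If-None-Match", entry.etag);
  if (entry.lastModified) headers.set("If-Modified-Since", entry.lastModified);

  const res = await fetch(url, { headers });

  if (res.status === 304) {
    // Feed unchanged: the server sent no body, so reuse the cached one.
    return entry.body;
  }

  // Full response: store the new body and validators for next time.
  const body = await res.text();
  cache.set(url, {
    etag: res.headers.get("ETag") ?? undefined,
    lastModified: res.headers.get("Last-Modified") ?? undefined,
    body,
  });
  return body;
}

// Against a well-behaved server, the second call gets a 304.
await fetchFeed("https://example.com/feed.xml");
await fetchFeed("https://example.com/feed.xml");
```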