Comment

John Oliver on the "Unwinding" of Medicaid

22
The Ghost of a Flea4/18/2024 11:52:26 am PDT

re: #18 Axolotl

From what I have seen, AI is no more than another tool for repetitive tasks or maybe data analysis. Even in those areas it is very limited and light-years away from where people imagine it is.

Yeah, the hard thing about talking about AI is that we're really talking about big algorithms, and there's a whole taxonomy of big algorithms that gets lumped under one term…largely because the people who own those machines are incentivized to hype their product as the cusp of sci-fi general intelligence.

Like…really big algorithms for research are useful and good, as long as they're not black boxes. LLMs are productivity tools. Image makers are…interesting…but ultimately also productivity tools.

The part that triggers the Luddite in me is that the people controlling these systems have no incentive (in a hype-based tech economy) to ever discuss their limitations. The immediate leap from LLM production to "we must be assigned power now to align the general intelligence we will create later" is…scammy at best, deeply sinister when contextualized by the general Palo Alto/Silicon Valley worldview. It also matters that AI owners try to conceal the human labor required to make these systems function (constant labeling and pruning done in sweatshop conditions), and that the way they sell these models to large businesses shows this technology exists primarily to serve capital-holders. While it is entirely possible these systems could be used for pro-social ends, most of the existing value propositions involve squeezing more value out of more and more desperate people.

[And a specific concern we should all have is that the veneer of objectivity granted to AI, by both existing tropes and the ongoing promotional campaign, has value even if the AIs themselves have no objectivity. These are black boxes that generate pretexts, and that technology in the hands of people making life-and-death decisions should be questioned constantly. What would powerful people want: a machine that can challenge their preferred conclusions, or a machine constrained to justify them?]