Comment

John Oliver on the "Unwinding" of Medicaid

48
Axolotl
4/18/2024 12:43:25 pm PDT

re: #22 The Ghost of a Flea

Yeah, the hard thing about talking about AI is that really we’re talking about big algorithms, and there’s a whole taxonomy to big algorithms that’s lumped under one term…largely because the people who own those machines are incentivized to hype their product as the cusp of scifi general intelligence.

Like…really big algorithms for research are useful and good, as long as they’re not black boxes. LLMs are productivity tools. Image makers are…interesting…but ultimately also a productivity tool.

The part which triggers the Luddite in me is that the people controlling these systems have no incentive (in a hype-based tech economy) to ever discuss their limitations. The immediate leap from LLM production to “we must be assigned power now to align the general intelligence we will create later” is…scammy at best, deeply sinister when contextualized by the general Palo Alto/Silicon Valley worldview. It also matters that AI owners try to conceal the human labor required to make these systems function (constant labeling and pruning done in sweatshop conditions), and the way they are selling these models to large businesses speaks to how this technology exists primarily to convenience capital-holders. While it is entirely possible these systems could be used for pro-social ends, most of the existing value propositions involve facilitating squeezing more value out of more and more desperate people.

[And a specific concern we should all have is that the veneer of objectivity granted to AI by both existing tropes and the ongoing promotional campaign has value even if AIs have no objectivity. These are black boxes that create pretexts, and that technology in the hands of people making life-and-death decisions should be questioned constantly. What would powerful people want? A machine that can challenge their preferred conclusions or a machine that is constrained to justify their conclusions?]

I tried to avoid the use of the word “scam” but I am glad you used it. I work with some of these tools and I just don’t understand the hype. As of now, and as far as I can tell, these tools do not demonstrate volition or come up with their own ideas, nor is that on the immediate horizon.

Until then they are indistinguishable from a calculator or a shovel in a day-to-day business situation.

However, I am concerned about multiple nefarious purposes. For example, one of these tools could analyse data for an insurance company in order to determine your rates (the black box you mentioned).

It could be used to analyse all available data to determine whether a company should hire you, or a school should accept you. All using an unassailable black box.

It could (and will) be used to analyse your social media to determine your political proclivities. This could be used in the future for warfare purposes. Do you think a roving drone with facial recognition and a mounted gun is that far around the corner? I think we already have all the requisite components for that.

I know the first armed roving drones with facial recognition will be deployed for an “honorable” purpose. Wouldn’t it be great if Israel could only target the bad guys? Then it will be used all over, and it will be cheap as hell.

It can do these things today, or will be able to in the near future.