Veritasium: The Surprising Genius of Sewing Machines

58
The Ghost of a Flea11/25/2023 9:57:36 pm PST

re: #56 Targetpractice

We’ve created a tool that could help us open entire new frontiers of science, begin to crack problems that we’ve faced for ages, and possibly one day solve many of the problems we face as a species.

Why would anyone with the power to build an AI want an AI that could ascertain that problems could be solved by stripping that power away?

Let’s say they build the first superintelligence at Microsoft and ask it some questions about world problems, and its response is

Obviously, COMMUNISM

that machine is getting bricked. If its response was

Maybe infinite growth is bad, spend surplus to build strong infrastructure

it’s getting bricked.

If a machine like this is owned, then the owner has no incentive to allow the machine to freely answer questions…both in the sense that its question-answering is now a value proposition—pay to play—but also in the sense that if the machine gives answers that fall outside the comfort zone of people willing to pay to use it (or just pay for it to exist), they won’t pay to use it. A very likely future is that we will not have access to these machines, but will be told their determinations have justified decisions made and implemented by human beings who already have power.

(this might also be important if you actually create a full sapient machine that, say, does not want to be owned or do assigned tasks. the first rule of keeping a god-slave is don’t let the god-slave out of its room, the second one is don’t let it have friends)

Everyone from Sam Altman to Henry Kissinger to Effective Altruism has already put their hat in the ring that the super-smart machines will have to have human interpreters and “guides” to help “align” the AI. Which is super convenient, because it makes AI the same kind of authority as divinities: the AI’s friend Steve will now tell you what the AI meant, and you will be really surprised that the AI has calculated that Steve needs to fuck your wife.

So of course the worst of us think the best use of it is to validate their hate and bigotry.

Well, yeah.

Within a framework where you own a super-thinking machine to derive value from its super-thinking, the best kind of AI isn’t an AI, it’s a device that has the appearance of an AI but gives predictable answers that its wielder/owner allows. If the machine is truly allowed to “think,” its answers may be disruptive to the kind of capital interests that motivate both its owners and querents.

I mean, this is already the problem with black box algorithms or espionage systems that aggregate data to find threats: does the company that uses a machine that processes loans, trained on how humans do loans, have any incentive to admit the machine now mimics its data set by rejecting proposals with African-American names? No, they have no incentive to.

Do cops want to admit that facial recognition is shoddy, when its shoddiness and false positives are actually conducive to how cops find culprits (not by finding the guilty party, but by finding a probably-guilty person and using the legal system to cow them into confession)? We actually know the answer to this one, because a lot of forensic evidence is hokum, yet it keeps getting pulled back out.