That was a well-written opinion, Nervensaegen. Still, if I may, I will offer a different perspective.
Nervensaegen: Could we please stop calling it "AI"? That abbreviation "AI" is a marketing term.
The abbreviation "AI" is quite old. What I dislike is the marketing abuse around "Artificial Intelligence". I think they are overselling it. And I, too, expect to see a major waste of resources, with machines talking to machines as if they were human, and models training on the output of other models. I foresee a bad spiral ahead and another "AI winter" because of all the hype the marketing people sold us.
Are NFTs still a thing?
Nervensaegen: It is, and always was, a "pattern match algorithm".
I would argue that is pretty much what human intelligence is. That is why we "see" faces in objects and other places (e.g. the Cydonia photo), and why you react when you hear a sound similar to your name, your alarm clock, or your phone.
Even those IQ tests are just pattern matching challenges!
While there are many facets to intelligence, we can agree that pattern matching is a major part of it.
Nervensaegen: But, ask it anything it hasn't been trained on, like "draw me an elephant", and you get (surprise) another dog.
I don't expect any person to do better if asked to draw a picture of something based on a name they have never heard. Besides, names are often arbitrary anyway.
And have you seen how infants name things? They reuse the few names they already know. That is the behavior you see with a model that lives in a world with just cats and dogs.
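That "cats and dogs only" behavior can be sketched with a toy classifier. This is purely illustrative: the feature vectors and centroid values below are invented, and real models are far more complex, but the point stands that any classifier with a fixed label set is forced to pick one of its known labels, no matter how alien the input.

```python
# Toy nearest-centroid classifier living in a world of just cats and dogs.
# The 2-D features (size, ear_pointiness) are made up for illustration.

def classify(features, centroids):
    """Return the known label whose centroid is closest to `features`."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: sq_dist(features, centroids[label]))

centroids = {
    "cat": (0.3, 0.9),
    "dog": (0.6, 0.4),
}

# An "elephant" is far from both centroids, but the model must still answer
# from its known vocabulary -- just like the infant reusing known names.
elephant = (1.0, 0.2)
print(classify(elephant, centroids))  # prints "dog"
```

Asked to label something it has never seen, the model does not say "I don't know"; it returns the nearest thing it does know.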
Nervensaegen: This cannot be fixed. Because of the very nature of PMAs as such, they only deal in probabilities based on how often they have seen a certain input during training.
I think that is pretty much how we do things, also. The difference being that we have been doing it for much longer.
For example: back in 2019, people went to the doctor with flu-like symptoms, and since the flu was quite common, it took some time for doctors to recognize that they were dealing with a new virus. And when someone first presented that hypothesis, it was criticized.
If you put a boxer and a kickboxer in a ring, how do you think things will go down?
The garbage plant example is funny, but there are multiple factors to consider. One is the abstract concept of "Artificial Intelligence"; then there are the theoretical approaches to AI, each specific to a particular class of problem (no "General AI" for now), such as Neural Networks, Bayesian Models, etc.; and finally, the implementation or practical use of these models.
As an analogy: if I see a car crash, was it driver error, a defect in the car's design, or yet another indication that trains are the superior form of land transportation?
Nervensaegen: What makes them dangerous though is the security risks they pose.
I agree. But it cuts both ways: machines have no common sense, so they can be tricked, and it will take some time to fix that problem; but they also should not be over-trusted, and that, IMHO, is a user problem.