An AI-generated, quite impressive song.
https://www.youtube.com/watch?v=U8JIx0YfNts
https://www.fbi.gov/contact-us/field-offices/sanfrancisco/news/fbi-warns-of-increasing-threat-of-cyber-criminals-utilizing-artificial-intelligence
But of course, criminals would use AI-driven chatbots and fakes to scale up scams and phishing attacks.

As an author, I get contacted by scammers two to three times a week. About 20% of those scams are now executed by AI chatbots. At least one of the recent scams very obviously used ChatGPT.

Almost every single attack during the last three months attempted to impersonate trustworthy individuals.

And if it isn't one thing, it's the other. A while back, a scammer tried to convince me to write a series of 50 articles (one article per week for a year). It turned out to be a lousy attempt to get me to train their AI model.

Unfortunately, that crap has already become business as usual.

The issue becomes more urgent every couple of months, when scammers come up with a new scheme you don't immediately recognize.

As a private individual, you might choose to ignore all such messages, which is sad enough as it is. However, when you are a professional trying to pay the bills by building your brand and sales, you can't afford not to reply to inquiries from alleged customers or friendly authors. The problem is that these days, the majority of those inquiries are scams, and even when they don't manage to extort money, you still end up wasting substantial amounts of time and resources just talking to these idiots.

I dread the day when we end up with company chatbots talking to scam chatbots. A day that may very well already be upon us.
Geromino: AI has existed since the 1950s, and ever since, people, even some experts, have had completely unrealistic expectations about it. *shrug*
rtcvb32: It's programming, give it complex enough programming and it can seem alive enough, within reason.
Uh, I don't understand quite a bit of your post. At least not really.

Either way, AI needs:

- A lack of a traditional function that solves the problem in the mathematically optimal way, so that we are forced to use heuristics (approximations) instead. If we already know a way to solve a problem with traditional programming, AI simply can't be better than that. AI by its very nature tends to be wasteful, i.e. it does a lot of unnecessary operations, wasting processing time and energy. So using AI can never be more efficient than traditional programming if we already have a clean solution.

- Test data

- A test function

AI is NOT programming. You may need quite a bit of programming around AI, for instance to provide an interface and an interpreter, but AI itself is just an enormous mass of data that describes a network of logical functions.

AI is trained, not programmed.
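To make the "mass of data" point concrete, here is a toy sketch (not from any real system; the weights are made up) of what such a network boils down to once trained: arrays of numbers plus one fixed rule for combining them.

```python
# Illustration of "an enormous mass of data describing a network of
# logical functions": once trained, a network is nothing but numbers
# like these plus one fixed combination rule. Weights here are made up.

WEIGHTS = [[0.5, -0.3], [0.8, 0.1]]  # the "data" part of the model
BIASES = [0.1, -0.2]

def forward(inputs):
    # One layer: weighted sum per neuron, then a simple non-linearity.
    outputs = []
    for w_row, b in zip(WEIGHTS, BIASES):
        total = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(max(0.0, total))  # ReLU: negative sums become 0
    return outputs

print(forward([1.0, 2.0]))
```

Training never touches the function `forward`; it only adjusts the numbers in `WEIGHTS` and `BIASES`, which is exactly why "trained, not programmed" fits.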

You mention chess. Chess is an excellent example of a problem that is best solved with AI. The strongest chess engines available, such as Stockfish, use AI in crucial parts of their programming, where we previously had to rely on hand-written heuristics.

- The missing traditional function is evaluating the position.

- The test data can be generated by simply running the AI against another chess program.

- The test function is simply the question of who wins, or whether it was a draw.
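To illustrate the first bullet: the classic workaround for the missing evaluation function is a hand-written heuristic, most famously a plain material count. A rough sketch (the piece values and names are my own illustration, not actual Stockfish code):

```python
# Hand-written heuristic evaluation: score a position by material only.
# Uppercase letters are White's pieces, lowercase are Black's (as in FEN
# notation); a positive score favours White.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9, "k": 0}

def material_eval(pieces):
    score = 0
    for piece in pieces:
        value = PIECE_VALUES[piece.lower()]
        score += value if piece.isupper() else -value
    return score

# White has king, queen and rook; Black only king and queen:
print(material_eval("KQRkq"))  # -> 5, i.e. White is a rook up
```

A heuristic like this ignores everything positional, which is exactly the gap that the trained evaluation networks in modern engines fill.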

A different example is an LLM (large language model) like ChatGPT. Here the test function has to be human beings, which is why developing an LLM is so extremely expensive (around 100 million US$).
Geromino: Either way, AI needs:

- A lack of a traditional function that solves the problem in the mathematically optimal way, so that we are forced to use heuristics (approximations) instead. If we already know a way to solve a problem with traditional programming, AI simply can't be better than that. AI by its very nature tends to be wasteful, i.e. it does a lot of unnecessary operations, wasting processing time and energy. So using AI can never be more efficient than traditional programming if we already have a clean solution.

- Test data

- A test function
Preach it!

Unfortunately, CEOs don't seem to have gotten that memo just yet.

They still think that "AI" is a shortcut to get around expensive programming by throwing somebody else's hardware and computing power at it, while training the software is supposedly done with a wave of a magic wand, by a code monkey paid in bananas.

To make matters worse, the problems they are trying to solve have usually long since been solved by traditional programming. Their test data is incoherent in format and insufficient in quantity. And the "test function" exists only in the form of some dude's gut feeling.

Geromino: A different example is an LLM (large language model) like ChatGPT. Here the test function has to be human beings, which is why developing an LLM is so extremely expensive (around 100 million US$).
The issue we have with CEOs is that they look at LLMs, get told by some marketing people that "it's super easy", then pretend to understand what is going on, and explain to the tech folks that we "HAVE to be able to leverage this technology" to generate money.

Their plan is literally:
- "throw some AI at it"
- ....something-something-something
- start shoveling money
Post edited August 05, 2024 by Nervensaegen
rtcvb32: It's programming, give it complex enough programming and it can seem alive enough, within reason.
Geromino: Uh, I don't understand quite a bit of your post. At least not really.
'Training' an AI is just having it lazily programmed. I compare it to throwing spaghetti at the wall and seeing what sticks: variants that improve or are slightly better are kept for another round. Repeat a million times.
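That spaghetti loop can be sketched in a few lines. This is a toy: the "model" is just three numbers, the "test function" is distance to a hidden target, and all names are made up for illustration.

```python
import random

# Toy "throw spaghetti, keep what sticks" training loop: randomly
# perturb the current best candidate and keep the change only if the
# test function says it improved. Repeat many times.

random.seed(42)
TARGET = [0.3, -1.2, 0.8]  # stands in for the training/test data

def score(candidate):
    # The "test function": squared distance to the target, lower is better.
    return sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def train(rounds=10_000):
    best = [0.0, 0.0, 0.0]
    for _ in range(rounds):
        trial = [c + random.gauss(0, 0.1) for c in best]  # throw spaghetti
        if score(trial) < score(best):  # did it stick?
            best = trial
    return best

model = train()
print(score(model))  # very close to 0 after enough rounds
```

Real training (gradient descent and friends) is far less wasteful than pure random search, but the shape is the same: candidate, test function, keep the improvement, repeat.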

Manually programming something like C Robots would involve putting in an infinite loop and checking for criteria. So it might do something like:
angle = 0
loop (true) {
    angle = angle + 10  // assume 360 degrees around
    if (detect_enemy(angle)) {
        fire(detect_enemy(angle));  // detect_enemy returns an estimated distance
    }
}
That would in essence have the robot turning in circles, looking for an enemy and then firing at it. Very basic, but you could add rules for movement, following a path, or general stuff. Writing guards to follow a path and then react in certain ways, depending on state or time of day, can give them a level of complexity that looks a little bit lifelike.

Final Fantasy 12 was rather nice, as you had the Gambit system. Gambits were a list of commands you could give to party members; the higher on the list, the higher the priority, and the first match wins. So, say, you could do:
If Health <= 50% Use Health Potion on Self
Attack Nearest Enemy
This would result in them attacking any enemy they see, but if their health got low they'd immediately use a potion, as the potion rule has higher priority. If you reverse the order, they'd only use a potion if there were no enemies around to attack AND their health was under 50%. And in the event there's nothing else to do, there's probably a hidden option where they always follow the leader.

FF12 was nice in that you could easily automate characters so that, acting on said gambits, they seemed to have a life of their own. Usually, applying buffs, removing debuffs, and going after select weaknesses would be divided up among the party, while healing and attacking remained as defaults. Though there were only around 10-14 gambit slots per person.
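For what it's worth, that first-match-wins priority list is trivial to sketch in code. Something like this (all names invented for illustration, obviously not Square Enix's implementation):

```python
# Gambit-style decision making: rules are checked top to bottom and the
# first rule whose condition matches decides the action for this turn.

def choose_action(gambits, state):
    for condition, action in gambits:  # higher in the list = higher priority
        if condition(state):
            return action
    return "Wait"  # hidden default when no gambit matches

gambits = [
    (lambda s: s["hp"] <= 0.5 * s["max_hp"], "Use Health Potion on Self"),
    (lambda s: s["enemy_nearby"], "Attack Nearest Enemy"),
]

print(choose_action(gambits, {"hp": 30, "max_hp": 100, "enemy_nearby": True}))
# -> Use Health Potion on Self (healing outranks attacking here)
```

Swap the two rules and the character only heals when no enemy is in sight, which is exactly the behavior difference described above.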
Yeah, yeah, that's all great, kid. I'm glad for you, glad that you're excited to post literally inhuman garbage masquerading as content and creativity.

Massive arm sweeping gesture to clear a table.

I prefer what the Procreate CEO has to say on the matter.
Post edited August 20, 2024 by dnovraD
I just wish there was a way to stop YT from recommending me AI "content". No matter how many times I click "not interested" or "don't recommend channel" I'll get another AI "art" slideshow or some other AI "movie" the next day. Hell, there's not even an option to block this crap for 30 days, like those wretched "YT shorts".
Breja: I just wish there was a way to stop YT from recommending me AI "content". No matter how many times I click "not interested" or "don't recommend channel" I'll get another AI "art" slideshow or some other AI "movie" the next day. Hell, there's not even an option to block this crap for 30 days, like those wretched "YT shorts".
Have you tried using any of the alternative methods for viewing YouTube videos (and/or browsing for them)?

On Android, there is NewPipe, which is quite a nice front-end (until Google/YouTube makes periodic adjustments/alterations that temporarily break it).

As for desktop operating systems (including the BSD family, Linux, macOS, and Windows), there is yt-dlp, which can enable playback of YouTube videos on your media player (such as mpv).
Palestine: Have you tried using any of the alternative methods for viewing YouTube videos (and/or browsing for them)?

On Android, there is NewPipe, which is quite a nice front-end (until Google/YouTube makes periodic adjustments/alterations that temporarily break it).

As for desktop operating systems (including the BSD family, Linux, macOS, and Windows), there is yt-dlp, which can enable playback of YouTube videos on your media player (such as mpv).
And VLC can stream the videos, when Alphabet/Google isn't actively breaking that feature. (Currently broken.)
https://www.youtube.com/watch?app=desktop&v=eJtm6SnNVek
Palestine: On Android, there is NewPipe, which is quite a nice front-end (until Google/YouTube makes periodic adjustments/alterations that temporarily break it).
For what it's worth, NewPipe also has Windows and Linux desktop versions.
Also, I hope that humanoid machines won't be able to create human memories, which might be done in a so-called passive-lifestyle way, like it was presented in The Animatrix. It should rather have to be done in an active way, like it was presented in The X-Files, etc.
https://www.youtube.com/watch?v=-MUEXGaxFDA
Post edited August 28, 2024 by TheHalf-Life3
TheHalf-Life3: I remember how people were enraged by Articles 11 and 13, the so-called ACTA 2.0, where Artificial Intelligence would filter everything: complete digital censorship.
Hello Arsenal Gear!!!!!
That was a well-written opinion, Nervensaegen. Still, if I may, I will offer a different perspective.

Nervensaegen: Could we please stop calling it "AI"? That abbreviation "AI" is a marketing term.
The abbreviation "AI" is quite old. What I dislike is the marketing abuse around "Artificial Intelligence"; I think they are overselling it. And I, too, expect to see a major waste of resources, with machines talking to machines as if they were human, and training on the output of other models. I foresee a bad spiral ahead, and another "AI winter" because of all the hype marketing people sold us.
Are NFT still a thing?

Nervensaegen: It is, and always was, a "pattern match algorithm".
I would argue that is pretty much what human intelligence is. That is why we "see" faces in objects and other places (e.g. the Cydonia photo), and why you react when you hear a sound similar to your name, your alarm clock, or your phone.
Even those IQ tests are just pattern matching challenges!
While there are many facets to intelligence, we can agree that pattern matching is a major part of it.

Nervensaegen: But, ask it anything it hasn't been trained on, like "draw me an elephant", and you get (surprise) another dog.
I don't expect any person to do better if asked to draw a picture of something based on a name they have never heard. Besides, names are quite often arbitrary anyway.

And have you seen how infants name things? They reuse the few names they already know. That is the behavior you see with a model that lives in a world with just cats and dogs.

Nervensaegen: This cannot be fixed. Because of the very nature of PMAs as such, they only deal in probabilities based on how often they have seen a certain input during training.
I think that is pretty much how we do things, too. The difference is that we have been doing it for much longer.
For example: back in 2019, people went to the doctor with flu-like symptoms, and since the flu was quite common, it took some time to recognize that the doctors were dealing with a new virus. And when someone presented that hypothesis, it was criticized.

If you put a boxer and a kickboxer in a ring, how do you think things will go down?

The garbage plant example is funny, but there are multiple factors to consider. One is the abstract concept of "Artificial Intelligence"; then there are the theoretical approaches to AI that are specific to a particular class of problem (no "General AI" for now), such as Neural Networks, Bayesian Models, etc.; and finally, the implementation or practical use of these models.

As an analogy: if I see a car crash, was it a driver error, a defect in the car's design, or yet another indication that trains are the superior form of land transportation?

Nervensaegen: What makes them dangerous though is the security risks they pose.
I agree. But it goes both ways: machines have no common sense, so they can be tricked, and it will take some time to fix that problem. But they also shouldn't be over-trusted, and that, IMHO, is a user problem.