.Keys: [...]
They're language models which "read" words and produce an answer based on your prompt.
[...]
amok: There are, broadly speaking, three types of AI: Language Models, Generative AI and Foundation Models. LMs simulate communication. Generative AI generates various types of content (music, images, flowcharts etc). Foundation Models provide foundations, or base models, that other models can run on.
ChatGPT, for example, started out as a large language model, but from 3.5 onwards it has become more or less a foundation model for other types of AI to interface with (e.g. DALL-E for generative visual art).
I can see this was written using ChatGPT or some other LM AI? :P
Jokes aside, thanks for the info.
Zimerius: :p
but okay, thank you for taking the time to answer. There is so much information currently available on the topic. To some, we have entered the next technological era. Others can only point out how this is just a marketing bubble about to burst. It is hard to be a bystander and try to make sense of it all. Personally, I'm already glad if some game developers aim at creating more lifelike challenges instead of fighting and owning mechanics all the time. So that's my interest, more or less.
randomuser.833: They have some use, but much smaller than the companies want to tell us.
And they are way more unreliable and need much more human control than they are willing to acknowledge.
For example, Google seems to have a team of techies monitoring the social media channels that show new AI fuckups so they can correct them manually (like sticking your cheese to pizza with glue, which came straight from Reddit).
Problem is, you can't correct the model itself. Those models are black boxes even to the companies that created them. Nobody understands exactly how they work or why they do certain things.
So they can only do 2 things. They can build up fences that basically block the "AI"s from posting certain things back: not stopping the model from building the sentence, just blocking the answer. And people very often find ways around this.
And they can alter the "prompt" (the question you are asking) to get a "more desired" answer. Because their training data is shit and the model is therefore very biased, another automatic tool simply alters your question before it is sent to the "AI". Roughly like the sketch below.
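To make that concrete, here is a toy sketch in Python of what those two fences amount to. Everything in it is made up for illustration (the rewrite wording, the blocklist, the call_model stand-in); nobody outside these companies knows what the real filters look like:

```
# Toy sketch of the two "fences" described above: rewriting the prompt before
# it reaches the model, and blocking certain answers on the way back.
# All names and word lists here are invented for illustration only.

def rewrite_prompt(prompt: str) -> str:
    """Fence 2: silently bolt extra instructions onto the user's question."""
    return prompt + " Make sure the people shown are diverse."

def is_blocked(answer: str) -> bool:
    """Fence 1: refuse to return answers containing certain phrases."""
    blocklist = ["how to build a bomb", "glue on pizza"]
    return any(phrase in answer.lower() for phrase in blocklist)

def call_model(prompt: str) -> str:
    """Stand-in for the actual model call (hypothetical)."""
    return f"Model output for: {prompt}"

def guarded_chat(user_prompt: str) -> str:
    answer = call_model(rewrite_prompt(user_prompt))  # altered prompt goes in
    if is_blocked(answer):                            # answer gets checked on the way out
        return "I can't help with that."
    return answer

print(guarded_chat("Draw me a medieval English king."))
```

The point being: neither step touches the model itself, they just wrap it, which is why people keep finding ways around them.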
And if you think that might fuck up things - it does.
(Ignore the headline, the text is very balanced and explains how the fuckup happened)
https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke/
Because of shit training data consisting of mostly white people, Google's picture-creating AI created mostly white people. So the questions were altered, and we got female popes, black Vikings and Native American medieval English kings.
And I have to correct myself.
LLMs have already talked people into suicide. So they have kinda already killed people.
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-
Next up will very likely be riots caused by AI hallucinations in a less developed country, where people will get killed (this already happens because of made-up information spread by humans).
.Keys: Also... don't get us started on how well AI helps when programming.
It saves a ton of time with simple to intermediate questions that you'd otherwise probably need to search on Stack or specific forums...
So please guys, stop with this idea that it "doesn't help" or "it's all fake".
It's not useless and it's not fake. It does help in many use cases.
Still, it is not that kind of "A I".
They're human-dependent tools, for now at least.
randomuser.833: All I'm relaying here is what I heard from a friend who is a professional codemonkey.
It can do basic things to some degree, because those models inhaled various programming help forums and stuff like Stack Overflow.
But it still mixes things up, basically combining several languages or making up functions that simply don't exist.
For him it is like the meme of the codemonkey from India who dumps a mess of code at your feet with the comment "doesn't work - please fix".
With the small difference that the AI implies the code will work.
The worst problem so far was even dangerous, because those "AI"s simply made up libraries. For some languages you rely on packages that somebody else has written to do something, and you can then use their internal code.
And the names the AIs use often simply don't exist. Not only that, different AIs even make up the same names, so cross-hallucination is a thing for whatever reason.
The story is that some guy played around a bit and found out that the "AI" kept making up the same name over and over again, for whatever reason.
So he created such a library himself, one that could have contained evil code (it didn't), and it was picked up quickly and widely, even by big companies.
https://lasso-security.webflow.io/blog/ai-package-hallucinations
Or, maybe more readable:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
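For what it's worth, the cheapest defence against that kind of hallucinated dependency is to ask the registry whether the name even exists before installing anything. A minimal sketch for Python and PyPI (the PyPI JSON endpoint is real; the package names in the example are made up for illustration):

```
# Minimal check: does a package an "AI" suggested actually exist on PyPI?
# Uses only the standard library; the example names below are invented.
import urllib.request
import urllib.error

def exists_on_pypi(package: str) -> bool:
    """Return True if PyPI knows the package, False on a 404."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # some other problem (rate limit, outage) - don't guess

for name in ["requests", "totally-hallucinated-helper-lib"]:
    print(name, "->", "exists" if exists_on_pypi(name) else "NOT on PyPI")
```

Of course a hit only tells you the name resolves to something, not that the something is trustworthy; that gap is exactly what the researcher in the articles above demonstrated by registering the hallucinated name himself.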
Packages already pose some kind of danger, because every now and then one is taken over by somebody with bad intentions.
But this is a new level.
It might depend on the specific code language or what exactly you want to do.
And how much time you want to put into proofreading and crosschecking everything you get.
But there is a point where the time you spend correcting the "AI" help surpasses the time you'd need to do it on your own.
Even more because you have to read into and understand code that is not your own.
And because LLMs are built by design to make things up, they will be unreliable forever.
Good pack of info, thanks.
But let me add: I'm not saying I'm totally in favor of AI when I say it's a useful tool for many tasks, and especially a time saver for many others. There are definitely, objectively good use cases in which it saves time and lets its users profit more than common workers do (marketing, programming, yada yada...), even "without paying a cent to the companies that created it" (though, as we know, data is more valuable than plain money nowadays, so that's what they collect with the free AI tools, and that data is their "payment").
I also heard a while ago about the case in which the guy killed himself after chatting with an AI for a week or two, but we must also be fair to the situation:
The guy was apparently a "Save the planet!" militant who apparently started to chat with a custom-created "Save the planet at all costs!" companion AI, which then supposedly convinced him that with fewer people on earth, the world would be better for his two children. Or something like this, if I'm not mistaken.
A bizarre case that shows not the fault of AIs, but the fault of such radical agendas (the "clean energy without planning" one and the "let children be what they feel is right while we lobotomize them with woke ideas" one being probably the most dangerous), though one could argue that the AI demonstrated behaviors it shouldn't have, and that this is a danger in itself, as exemplified by that specific AI personality companion being promptly removed / corrected.
I don't disagree with you here. I can see the facts.
I just don't agree with the idea that "AI = All Bad".
We should and must, I think, use it responsibly and develop it in the same way.
But then again... we know what world we live in and we know this will not happen, so, anyways...
...ad infinitum.