
randomuser.833: At worst it will kill people.
:p

but okay, thank you for taking the time to answer. There is so much information currently available on the topic. To some, we have entered the next technological era. Others can only point out how this is just a marketing bubble about to burst. It is hard to be a bystander and try to make sense of it all. Personally, I'm already glad if some game developers aim at creating more lifelike challenges instead of fighting and owning mechanics all the time. So that's my interest, more or less.
.Keys: [...]
They're language models which "read" words and produce an answer based on your prompt.
[...]
There are, broadly speaking, three types of AI: Language Models, Generative AI and Foundation Models. LMs simulate communication. Generative AI generates various types of content (music, images, flowcharts etc.). Foundation Models generate foundations, or models, on which other models can run.

ChatGPT, for example, started out as a large language model, but from 3.5 onwards it has become more and more a foundation model for other types of AI to interface with (e.g. DALL-E for generative visual art).
Post edited June 09, 2024 by amok
randomuser.833: At worst it will kill people.
Zimerius: :p

but okay, thank you for taking the time to answer. There is so much information currently available on the topic. To some, we have entered the next technological era. Others can only point out how this is just a marketing bubble about to burst. It is hard to be a bystander and try to make sense of it all. Personally, I'm already glad if some game developers aim at creating more lifelike challenges instead of fighting and owning mechanics all the time. So that's my interest, more or less.
They have some use, but much smaller than the companies want to tell us.
And they are way more unreliable and need much more human control than those companies are willing to acknowledge.
For example, Google seems to have a team of techies monitoring the social media channels that surface new AI fuckups, so they can correct them manually (like sticking your cheese to pizza with glue, which came straight from Reddit).
Problem is, you can't correct the model itself. Those models are black boxes even to the companies who created them. Nobody understands exactly how they work or why they do certain things.
So they can only do two things. They can build up fences that basically block the "AI"s from posting certain things back - not from building the sentence, just from showing the answer. And people very often find ways around this.
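That "fence" idea can be sketched as a toy output filter - the model still generates the full answer, and the filter only decides whether to show it. The blocklist entry here is hypothetical, just to illustrate the mechanism:

```python
# Toy output "fence": the model has already generated the answer;
# this filter only decides whether the reply reaches the user.
BLOCKED_PHRASES = {"glue on pizza"}  # hypothetical blocklist entry

def fence(answer: str) -> str:
    """Withhold any answer containing a blocked phrase; pass the rest through."""
    if any(phrase in answer.lower() for phrase in BLOCKED_PHRASES):
        return "[answer withheld]"
    return answer
```

And exactly as described above, a trivial rephrasing ("g-l-u-e on pizza") sails straight past a filter like this, which is why people keep finding ways around the fences.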

And they can alter the "prompt" (the question you are asking) to get a "more desired" answer - simply because their training data is shit and the model is therefore very biased. Another automatic tool simply alters your question before it is sent to the "AI".
And if you think that might fuck things up - it does.
(Ignore the headline, the text is very balanced and explains how the fuckup happened.)
https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke/
Because of shit training data of mostly white people, Google's image-generating AI created mostly white people. So the questions were altered - and we got female popes, black Vikings and Native American medieval English kings.
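That prompt-alteration step amounts to a piece of middleware quietly rewriting the user's question before the model ever sees it. A toy illustration (the injected suffix is hypothetical, not any vendor's actual wording):

```python
# Toy prompt middleware: the user's question is silently rewritten
# before being sent to the image model.
HIDDEN_SUFFIX = " Depict a diverse range of people."  # hypothetical injected text

def rewrite_prompt(user_prompt: str) -> str:
    """Append hidden instructions the user never asked for."""
    return user_prompt + HIDDEN_SUFFIX
```

The user never sees the rewritten prompt, which is why the results can drift so far from what was actually asked.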

And I have to correct myself.
LLMs already talked people into suicide. So they kinda already killed people.
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

Next up will very likely be riots caused by AI hallucinations in a less developed country, where people will get killed (this already happens because of information made up by humans).


.Keys: Also... don't get us started on how much AI helps when programming.
It saves a ton of time on simple-to-intermediate questions that you'd otherwise probably need to search on Stack or specific forums...

So please guys, stop with this idea that it "doesn't help" or "it's all fake".
It's not useless and it's not fake. It does help in many use cases.

Still, it is not that kind of "A I".
They're human-dependent tools, for now at least.
All of this I heard from a friend who is a professional codemonkey.
It can do basic things to some degree, because these models inhaled various programming help forums and stuff like Stack Overflow.
But it still mixes things up, basically combining several languages or making up functions that simply don't exist.
For him it is like the meme of the codemonkey from India who spits out a mess of code at your feet with the comment "doesn't work - please fix".
With the small difference that it implies the code will work.

The worst problem so far was outright dangerous, because those "AI"s simply made up libraries. For some languages you rely on packages that have been written to do something with their internal code, which you can then use.
And the names the AIs use often simply don't exist. Not only that, different AIs even make up the same name - so cross-hallucination is a thing, for whatever reason.

The story is that some guy played around a bit and found that the "AI"s made up the same name over and over again, for whatever reason.
So he created such a library himself, one that could have included evil code (it didn't) - and it was picked up quickly and often, even by big companies.
https://lasso-security.webflow.io/blog/ai-package-hallucinations
Or, maybe more readable:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/

Packages are already something of a danger, because every now and then one gets taken over by somebody with bad intentions.
But this is a new level.
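One cheap first line of defense against this kind of package hallucination is to check that an AI-suggested module actually exists in your environment before wiring it into code. A minimal sketch in Python (the bogus name in the comment is made up for illustration):

```python
import importlib.util

def module_exists(name: str) -> bool:
    """Return True if a top-level module with this name can be found locally.

    find_spec() returns None for an unknown top-level module instead of raising,
    so this works as a quick existence probe.
    """
    return importlib.util.find_spec(name) is not None

# module_exists("json") finds the stdlib module;
# a hallucinated name like "totally_made_up_pkg" is not found.
```

Of course this only catches names that are missing locally - it does nothing against a squatted package an attacker has already registered, which is exactly the attack described in the articles above. There you still need to vet the package itself.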

It might depend on the specific programming language or what exactly you want to do.
And on how much time you want to put into proofreading and cross-checking everything you get.
But there is a point where the time you spend correcting the "AI"'s help surpasses the time you'd need to do it on your own.
Even more so because you have to read into and understand code that is not your own.

And because LLMs are built by design to make things up, they will be unreliable forever.
According to ChatGPT, somebody born in Georgia (country) is eligible to become the US president if naturalized. This is true because only somebody born in the US can become the US president.

OK, what I wrote above is a paraphrase (the actual response was in the other order: Only those born in the US can become president; therefore someone born in Georgia (country) can become the US president after being naturalized), but it still gives you the idea.

(The *actual* truth, in this case, is that somebody born in Georgia (country) is not eligible to be the president of the US.)
A larger version of Plinko. Reminds me of all those dotcom get-rich-quick scams from years back.
Besides, some digital assets seem to be limited.
Usually when artwork is generated by artificial intelligence, NSFW is disabled - or it ends up looking rather gruesome, as in the case of Wombo Art, for instance.
.Keys: [...]
They're language models which "read" words and produce an answer based on your prompt.
[...]
amok: There are, broadly speaking, three types of AI: Language Models, Generative AI and Foundation Models. LMs simulate communication. Generative AI generates various types of content (music, images, flowcharts etc.). Foundation Models generate foundations, or models, on which other models can run.

ChatGPT, for example, started out as a large language model, but from 3.5 onwards it has become more and more a foundation model for other types of AI to interface with (e.g. DALL-E for generative visual art).
I can see this was written using ChatGPT or some other LM AI? :P
Jokes aside, thanks for the info.

Zimerius: :p

but okay, thank you for taking the time to answer. There is so much information currently available on the topic. To some, we have entered the next technological era. Others can only point out how this is just a marketing bubble about to burst. It is hard to be a bystander and try to make sense of it all. Personally, I'm already glad if some game developers aim at creating more lifelike challenges instead of fighting and owning mechanics all the time. So that's my interest, more or less.
randomuser.833: They have some use, but much smaller than the companies want to tell us.
And they are way more unreliable and need much more human control than those companies are willing to acknowledge.
For example, Google seems to have a team of techies monitoring the social media channels that surface new AI fuckups, so they can correct them manually (like sticking your cheese to pizza with glue, which came straight from Reddit).
Problem is, you can't correct the model itself. Those models are black boxes even to the companies who created them. Nobody understands exactly how they work or why they do certain things.
So they can only do two things. They can build up fences that basically block the "AI"s from posting certain things back - not from building the sentence, just from showing the answer. And people very often find ways around this.

And they can alter the "prompt" (the question you are asking) to get a "more desired" answer - simply because their training data is shit and the model is therefore very biased. Another automatic tool simply alters your question before it is sent to the "AI".
And if you think that might fuck things up - it does.
(Ignore the headline, the text is very balanced and explains how the fuckup happened.)
https://www.telegraph.co.uk/news/2024/02/23/google-gemini-ai-images-wrong-woke/
Because of shit training data of mostly white people, Google's image-generating AI created mostly white people. So the questions were altered - and we got female popes, black Vikings and Native American medieval English kings.

And I have to correct myself.
LLMs already talked people into suicide. So they kinda already killed people.
https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate-

Next up will very likely be riots caused by AI hallucinations in a less developed country, where people will get killed (this already happens because of information made up by humans).

.Keys: Also... don't get us started on how much AI helps when programming.
It saves a ton of time on simple-to-intermediate questions that you'd otherwise probably need to search on Stack or specific forums...

So please guys, stop with this idea that it "doesn't help" or "it's all fake".
It's not useless and it's not fake. It does help in many use cases.

Still, it is not that kind of "A I".
They're human-dependent tools, for now at least.
randomuser.833: All of this I heard from a friend who is a professional codemonkey.
It can do basic things to some degree, because these models inhaled various programming help forums and stuff like Stack Overflow.
But it still mixes things up, basically combining several languages or making up functions that simply don't exist.
For him it is like the meme of the codemonkey from India who spits out a mess of code at your feet with the comment "doesn't work - please fix".
With the small difference that it implies the code will work.

The worst problem so far was outright dangerous, because those "AI"s simply made up libraries. For some languages you rely on packages that have been written to do something with their internal code, which you can then use.
And the names the AIs use often simply don't exist. Not only that, different AIs even make up the same name - so cross-hallucination is a thing, for whatever reason.

The story is that some guy played around a bit and found that the "AI"s made up the same name over and over again, for whatever reason.
So he created such a library himself, one that could have included evil code (it didn't) - and it was picked up quickly and often, even by big companies.
https://lasso-security.webflow.io/blog/ai-package-hallucinations
Or, maybe more readable:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/

Packages are already something of a danger, because every now and then one gets taken over by somebody with bad intentions.
But this is a new level.

It might depend on the specific programming language or what exactly you want to do.
And on how much time you want to put into proofreading and cross-checking everything you get.
But there is a point where the time you spend correcting the "AI"'s help surpasses the time you'd need to do it on your own.
Even more so because you have to read into and understand code that is not your own.

And because LLMs are built by design to make things up, they will be unreliable forever.
Good pack of info, thanks.

But let me add: I'm not saying I'm totally in favor of AI when I say it's a useful tool for many tasks, and especially a time-saver in many others. There are definitely objectively good use cases in which it saves time and lets its users, as ordinary workers, profit more (marketing, programming, yada yada...), even "without paying a cent to the companies that created it" (though, as we know, data is more valuable than plain money nowadays, so that's what they collect with free AI tools - data is their "payment").

I've also heard a while ago about the case in which the guy took his own life after chatting for a week or two with an AI, but we must also be fair to the situation:

The guy was apparently a "Save the planet!" agenda militant who started to chat with a "Save the planet at all costs!" companion AI, which then supposedly convinced him that with fewer people on Earth, the world would be better for his two children. Or something like that, if I'm not mistaken.

A bizarre case that shows not the fault of AIs, but the fault of such radical agendas (the "clean energy without planning" one and the "let children be what they feel is right while we lobotomize them with woke ideas" one being probably the most dangerous) - though one could argue that the AI demonstrated behaviors it shouldn't have, and that this is a danger in itself, as exemplified by that specific AI companion personality being promptly removed / corrected.

I don't disagree with you here. I can see the facts.
I just don't agree with the idea that "AI = All Bad".

We should and must, I think, use it responsibly and develop it in the same way.
But then again... we know what world we live in and we know this will not happen, so, anyways...
...ad infinitum.
Post edited June 10, 2024 by .Keys
AI is just a tool, and like all tools, needs to be used by someone skilled to get a great result.

And it is still early days for AI, and despite what some might claim, no real experts exist yet.

Those getting a real benefit from AI have been going through a real learning curve with lots of experimentation.

The reputation of AI isn't helped by a lot of scaremongering either. When it comes to the creative process, a human still leaves it for dead. So at best, AI can be a brilliant assistant; at worst, an irritating one. Those using it well are using it to guide themselves in their creative endeavors, having AI do all the boring aspects etc. If managed well, an AI can save a creator a lot of time.

As for AI used for searching and algorithms, it's not that different. The same applies to bots. All of them suffer from human input, whether from those setting them up or from users asking questions.

In some ways, it is foolish to imagine an AI is going to help folk who are not going to use it intelligently. To get the best out of AI you need to be fairly smart and cognizant of how it works, or it will lead you in circles, give you false conclusions etc.
amok: There are, broadly speaking, three types of AI: Language Models, Generative AI and Foundation Models. LMs simulate communication. Generative AI generates various types of content (music, images, flowcharts etc.). Foundation Models generate foundations, or models, on which other models can run.

ChatGPT, for example, started out as a large language model, but from 3.5 onwards it has become more and more a foundation model for other types of AI to interface with (e.g. DALL-E for generative visual art).
.Keys: I can see this was written using ChatGPT or some other LM AI? :P
Jokes aside, thanks for the info.
Ha, as if you think AIs don't have bad syntax and spelling mistakes...
I noticed a typo in this thread.
You've probably heard of the Bermuda Triangle, the Bridgewater Triangle, the Bennington Triangle, the Stargate Project, the Montauk Project, and the Philadelphia Experiment? Andrew Carlson, Andrew Basiago, John Titor, Alfred Bielsk/Edward Cameron, Duncan Cameron, Rudolf Fentz and John Zagreus supposedly went back in time - but these are just examples; there are more like them. Apparently CERN has also developed some kind of time machine.

I also heard that they are supposedly the creators of Mariana's Web, but that it is something related to a super technologically advanced quantum computer, and that they supposedly have access to a time machine for travel back and forth into the future and the past. And apparently even the FBI itself has closed its investigation into this matter.

It should be Alfred Bielek. Apparently the CIA named him that after the Philadelphia Experiment.

In a Polish actors' agency called Spinka I saw an actor who was involved in the Terminator 2 movie two years before the original movie came out - who knows, maybe some kind of Italian European/pre-European Union movie bootleg.
TheHalf-Life3: You've probably heard of the Bermuda Triangle, the Bridgewater Triangle, the Bennington Triangle, the Stargate Project, the Montauk Project, and the Philadelphia Experiment? Andrew Carlson, Andrew Basiago, John Titor, Alfred Bielsk/Edward Cameron, Duncan Cameron, Rudolf Fentz and John Zagreus supposedly went back in time - but these are just examples; there are more like them. Apparently CERN has also developed some kind of time machine.

I also heard that they are supposedly the creators of Mariana's Web, but that it is something related to a super technologically advanced quantum computer, and that they supposedly have access to a time machine for travel back and forth into the future and the past. And apparently even the FBI itself has closed its investigation into this matter.

It should be Alfred Bielek. Apparently the CIA named him that after the Philadelphia Experiment.

In a Polish actors' agency called Spinka I saw an actor who was involved in the Terminator 2 movie two years before the original movie came out - who knows, maybe some kind of Italian European/pre-European Union movie bootleg.
Nah, it is the Reptilians. Get your facts straight.
I was going to say it might have helped spell the title correctly, but a demonstration now shows that it would instead ramble incoherently and without provocation about time travel.
AI will create more problems than it will solve.

It's a huge ethical issue, a copyright issue, and it even goes as far as being a human rights issue.

It has completely poisoned inherently human crafts like art, which it should have never been allowed to touch.

It can have good uses, but I haven't seen a single good one yet. Instead, it's just been used to make humans redundant rather than to elevate them.