https://www.bloomberg.com/graphics/2024-ai-data-centers-power-grids/
AI is like pouring lighter fluid on our already smoldering planet, overtaking the last, now out-of-fashion scam (blockchain) in energy usage.
(That's why I took the introduction from this page about how great Web3 is doing: https://www.web3isgoinggreat.com - feel invited to have a laugh.)

The real fun starts now, because the music industry has sued two AI startups that trained their AIs on music from various labels, and the startups now claim it is fair use.
This might be the first time I root for the music industry in a lawsuit, I think...

And for even more laughs, a great rant about AI from somebody who knows the field well:
https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
Figured I'd ask ChatGPT and see their response:

AI can greatly benefit individuals and organizations by automating tasks, providing data analysis, making predictions, and more. It can help improve efficiency, accuracy, and productivity in various industries. However, there are also concerns about the potential negative impacts of AI, such as job displacement and privacy issues. It is important to carefully consider the ethical implications and potential risks associated with AI technology.
Unfortunately, this answer neglects to mention that AI is sometimes wrong, and in fact confidently wrong (so that our usual ways of sensing that someone is lying don't work).

(The question I asked is this topic's subject, except that I fixed the typo.)
dtgreene: snip
And I thought ChatGPT would know about a scientific article on bullshitting LLMs:
https://link.springer.com/article/10.1007/s10676-024-09775-5

Don't say anything bad about philosophy PhDs. They can come up with discussions like this :D
dtgreene: snip
randomuser.833: snip
After reading your previous posts and the articles about AI hallucination, I did "a bit" of research on it.
I'm not trying to defend AI here, but to continue the argument that this is a new technology we must learn to use, because it won't go away.

AI "Hallucination" (I don't like that term, but its what is used technically to explain the technical behavior of LLMs, I'd prefer to call it AI Delusions).

From what I've researched, this weird behavior of LLMs is not only known - AI researchers and prompt engineers try to adapt it and use it to their own benefit.

It's still considered a bad thing nonetheless, something that should be mitigated ("fixed", I think) with each LLM update. But while it isn't, people are using it on purpose to learn how LLMs behave and to craft clever inputs that get better outputs.
Not only to advance the research, but also to make the LLM more precise in its outputs.

Basically, "if we only get AI hallucinations when we input something, it probably means we don't understand how to use the newborn LLMs yet."

To test the above argument about the 'newborn AIs' being educated by our inputs in each session, I tried narrowing my inputs to ChatGPT until it gave me the answer I needed and confirmed it. And it did.
What I ended up learning is that you can get precise answers if you craft a good enough question.

So the topic is not as simple as "it hallucinates, therefore AI is always bad".
It's more like: "I need to create better questions, so the LLM will understand them better and not come up with things that do not exist."
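To make the "narrowing" idea concrete, here is a minimal sketch in Python - the question and the constraints are invented examples on my part, and no real API is called:

# Sketch: "narrowing" a question by adding explicit constraints,
# so the model has less room to invent. Wording is made up for illustration.
def refine(question: str, constraints: list[str]) -> str:
    return question + "\n" + "\n".join(f"- {c}" for c in constraints)

prompt = refine(
    "Which article discusses LLMs and bullshit?",
    [
        "Name the exact journal, title and authors.",
        "If you are not sure, answer 'I don't know' instead of guessing.",
    ],
)
print(prompt)  # this refined text is what would be sent to the LLM

Each added constraint cuts down the space of plausible-sounding answers, which is why narrowed questions tend to produce fewer made-up things.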
dtgreene: Unfortunately, this answer neglects to mention that AI is sometimes wrong, and in fact confidently wrong (so that our usual ways of sensing that someone is lying don't work).
This. It can't even do pretty simple math correctly.

It will try to correct itself when you point out that it is wrong (sometimes correctly, sometimes not), but that is only because it takes whatever criticism you gave it as additional input. It does not truly realize it is wrong; it just takes your corrective input as additional feed for its heuristic, much like a search engine takes a second, more precise query (once you realize the first one was too broad) to give you more targeted results.
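To illustrate, a minimal sketch with plain data and no real API (the message format just mimics common chat APIs, it is not any specific vendor's):

# Chat models are typically re-run on the whole conversation, so your
# "correction" is just more context, not a belief update in the model.
messages = [
    {"role": "user", "content": "What is 37 * 43?"},
    {"role": "assistant", "content": "37 * 43 = 1601."},            # wrong
    {"role": "user", "content": "That is wrong, check it again."},  # your criticism
]

# The next reply is generated from ALL of the above as one big input;
# nothing "realized" an error - the prompt simply changed.
prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
print(prompt)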

It is useful for exploration, much like a search engine. However, unlike with a search engine, it is a lot harder to trace the sources of the information (given the amount of processing that was done on them). So if you are not very knowledgeable about the domain of inquiry (enough to properly criticize the answer yourself) and you will use the answer for something important, then you really need to do additional research and analysis.

You cannot take what it gives you at face value.
Post edited June 27, 2024 by Magnitus
randomuser.833: snip
.Keys: snip
The thing is, nobody knows how these LLMs work internally.
It just shows that feeding more information into them won't stop the bullshitting. In fact, we will soon be at the point where there is nothing left to feed.

AI bullshitting (an even better term, because they do what humans do when they bullshit - just make something up) is an inherent feature of how LLMs work.
There is no way around it. No "ask better", no output alteration. LLMs, by their core design, remix the input information they were given to produce an answer. And because those models have no understanding of anything, but just remix old human answers, they will bullshit.
And if somebody tries to sell you the good old "you're holding it wrong" when their "AI" starts to bullshit, they are either delusional or blatantly lying to your face.
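To show what I mean by remixing, here is a toy next-token step in Python - the numbers are invented, a real LLM computes scores like these over a huge vocabulary:

import math, random

# Toy "model": plausibility scores (logits) for continuing the text
# "The capital of Australia is". Nothing here checks facts.
logits = {"Canberra": 2.1, "Sydney": 1.9, "Melbourne": 0.7}

# Softmax turns scores into probabilities, then one token is sampled.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}
token = random.choices(list(probs), weights=list(probs.values()))[0]

# "Sydney" is wrong but almost as probable as "Canberra", because it
# co-occurs with the prompt in the training text nearly as often.
print(probs, "->", token)

That is the whole trick: pick what sounds plausible. Truth never enters the computation, which is why the bullshitting cannot be patched away.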
randomuser.833: And if somebody tries to sell you the good old "you're holding it wrong" when their "AI" starts to bullshit, they are either delusional or blatantly lying to your face.
No one tried to do that. I understand your points, though this won't make the technology disappear.
So why not learn how it works, how it is being developed and prepare for what is to come?

By the way, I recommend this interview of Brian Roemmele by Jordan Peterson, which has good insights about what AI / LLMs will probably become in the future.
AI developers know of the limitations you point out. That doesn't change the fact that this will keep being developed.

Peterson starts by pointing out the AI hallucination problem: he asked specific questions and, after checking academic sources, found that some had been made up by ChatGPT.
Brian then explains this limitation of LLMs - a good explanation, imo.

"ChatGPT: The Dawn of Artificial Super-Intelligence | Brian Roemmele | EP 357" - (1:59:16 duration)
https://youtu.be/S_E4t7tWHUY

There's even a good point in the interview where Brian talks about the black box / "the hidden layers" of LLMs: the internal decision-making that nobody can actually "see", whatever happens inside the model that makes it decide which answer is better. What he explains there might be interesting for us.
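For anyone curious what a "hidden layer" literally is, here is a minimal sketch in Python - toy numbers, not any real model:

import random

random.seed(0)
inputs  = [0.2, -1.3, 0.7]  # e.g. a token already turned into numbers
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]

# One hidden layer: weighted sums followed by a nonlinearity (ReLU).
hidden = [max(0.0, sum(w * x for w, x in zip(row, inputs))) for row in weights]

# These activations steer what the model "prefers" to output next, but
# reading them tells a human nothing - that is the black box.
print(hidden)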

Magnitus: It is useful for exploration, much like a search engine. However, unlike with a search engine, it is a lot harder to trace the sources of the information (given the amount of processing that was done on them). So if you are not very knowledgeable about the domain of inquiry (enough to properly criticize the answer yourself) and you will use the answer for something important, then you really need to do additional research and analysis.
That's a very good point. The fact that you cannot know which sources the LLM used to give you the specific answer you received is indeed troublesome behavior that also annoys me. Jordan points that out in the interview as well.

On the topic of trying to get to the source: a while ago I asked ChatGPT where the answer it gave me came from - and not just once; I tried to force it to give me the specifics, and it couldn't. It always came back with the "I'm sorry about the confusion, I'm an LLM that... yada yada" explanation: that its sources are many and that it only gives generic responses based on those sources.

Is it coded to never give the sources it was fed with? I don't know. Maybe it was just the specific topic I asked about.
Post edited June 27, 2024 by .Keys
The LLMs don't give you a source because the companies try to hide how heavily they used intellectual property they would actually have to pay for.
Typical tech bro: the law doesn't apply to you.

That "I'm sorry, I'm an LLM..." answer is artificially altered output.

And when people managed to break out of it - well, it showed exactly this.

randomuser.833: snip
.Keys: snip
If you are told you got bullshit because you didn't know how to ask your question right, that is "you're holding it wrong".
And there are enough cases where it is not even MY input question that gets sent to the LLM, but a butchered question that has already been altered by the company behind the LLM to produce more desired output.
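A minimal sketch of that wrapping - the instructions below are invented on my part, since real vendors' system prompts are proprietary and not public:

# The model never sees your raw words alone; a hidden preamble is
# prepended (and your text may be trimmed or rephrased on top of that).
SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse to discuss topic X. "
    "Always answer in an upbeat tone."
)

def build_model_input(user_question: str) -> str:
    return f"{SYSTEM_PROMPT}\n\nUser: {user_question}\nAssistant:"

print(build_model_input("Why did your last answer contradict itself?"))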

I mean, don't get me wrong.
I'm not saying anywhere that the tech will go away.
I can see those models helping you write a text by suggesting better alternatives (like Word already does, but at a higher level).

But we will see a big collapse of this LLM bubble. The money that is being poured into the LLM pot can't be made back with what LLMs can actually do. That is why LLM marketing guys and LLM CEOs are lying to your face about what LLMs will be able to do.
They are already preparing for the survival of the fittest (the one that gets the most money).
The first wave will most likely die from ignoring intellectual property. And ChatGPT is on that chopping block already.

But people will notice that you can't trust any output of an LLM in the long run. And my bet is they will learn it the hard way.
I mean, somewhere I saw an LLM being pitched as something that can do the taxes for big companies.
That will work right up to the point when all the artificial guardrails added by the company behind the LLM break down, the software starts to bullshit, and you have the tax authorities in front of your desk.

LLMs will be helpful as an advanced search engine and spellchecker.
But you will always have to recheck their output, and you won't be able to trust it beyond your personal knowledge.

I mean, there is the option of having an LLM "read" a longer article for you and give you the short version. Even that does not work - even there they start to bullshit. And they do so with the largest part of human knowledge already inhaled. More data won't fix this.

Nobody can tell me that bullshitting on "give me an abstract of this text [insert text]" can be dismissed with "your question was bad".
Post edited June 27, 2024 by randomuser.833
ChatGPT is like a more complicated search engine, with all the dangerous virus-inducing links that modern search engines contain lol

Nothing wrong with algorithms; they are based on mathematics, and math has been around for thousands of years to help us build roads, bridges and buildings, and to do the calculations for medicine, chemistry and physics.

One thing is that people delegate too much coding and math to these ChatGPT things, when it would actually do their brains good to learn logical thinking, which can be a great aid in artistic thinking, for example. Too much math can of course take away time from being an artist lol

I think the internet, throughout its history, has always brought out extremist views, and it's easy to pick and choose answers like a mama bird feeding the baby birds worms that are already all chewed up; sometimes we accidentally pick the extremist views, and we end up with both "Hate AI!" and "I use AI in everything I do!" lol

Personally, I was surprised to see ChatGPT even mentioned; to me it was a bit like moon boots in the 90s... a kinda funny invention, but practically mostly useless lol
randomuser.833: snip
Agreed on that. What you say about the AI bubble and the exaggerated marketing behind it makes a lot of sense to me.
I've heard of a case where a lawyer used ChatGPT to build his case and, when the judge and the whole team behind the case reviewed it, it contained many 'made-up precedents' created by ChatGPT delusions/hallucinations.

I still recommend the interview nonetheless because, as explained - and you seem to concur - it's not going away, so we might as well make good use of it, with responsibility. At least in our own realm of use, of course. Not that we have any chance of actually demanding that LLM developers/companies put ethics above all else when developing such technologies... We know how this topic works.
One issue is this:
* People aren't used to this new technology.
* The LLM's output looks, at least at first, like it's written by a human.
* Therefore, people expect the LLM to behave like a human.
* But the LLM does not actually behave like a human, particularly when it gives incorrect answers.
* So, the usual ways that people tell if someone isn't being entirely correct or truthful don't work with LLMs.
dtgreene: snip
(Sorry for the long post ahead)

Agreed.
I'd like to add that it would be good if we separated things a bit.
Allow me to explain:

As with practically all technology niches, LLMs will have their general-public users, who are convinced of all sorts of common claims about AI that may be (or indeed are) false, and who will use such AIs/LLMs as a human companion that has answers to all their questions (like some people use Google today, although we know LLMs are not search engines).
But there is also a niche of people who not only develop this technology but understand much more about it and push the technological advancements forward, be it ethically or not.

I think that, as with the FLOSS community, we have a good opportunity here.
Here's an example:

The general public will of course buy their iPhones and Google Androids and use them without ever learning about data collection or that they can change privacy settings; they will probably never know that something like DivestOS or F-Droid exists.
But there is a niche of people, albeit small - developers and ordinary users - who not only know about the alternatives but also push the FLOSS agenda online, at their companies, on their sites, and among friends and family.

This will absolutely happen with AI, machine learning, LLMs, and so on.

A good example of this is this site:

https://opening-up-chatgpt.github.io/

Here's a TL;DR from the site:

Our paper makes the following contributions:

We review the risks of relying on proprietary software
We review best practices for open, transparent and accountable 'AI'
We find over 40 ChatGPT alternatives at varying degrees of openness, development and documentation
We argue that tech is never a fait accompli unless we make it so, and that openness enables critical computational literacy

We find the following recurrent patterns:

Many projects inherit data of dubious legality
Few projects share the all-important instruction-tuning
Preprints are rare, peer-reviewed papers even rarer
Synthetic instruction-tuning data is on the rise, with unknown consequences that are in need of research
ChatGPT is not the only viable LLM in existence, although, because of such absurd marketing, it is possibly the best known.

We are possibly about to see the rise of "Free and Open Source Large Language Models" - FLOSSLLMs, if you will. And I do think that a reality where you can install a functional, fully offline AI on your PC, fed with ethically selected datasets of public knowledge, is not distant.
(I don't know of any personally, but they might already exist - I'm still researching and studying the subject.)

I personally think that we, as a DRM-free community - therefore a niche one - could learn from our experiences in the gaming community and apply those ideas to this area of technological advancement as we form our own opinions about it.

Well... at least let's hope we will one day be able to use "DRM-free FLOSSLLMs", especially with personalized privacy.
According to Alfred Bielek/Edward Cameron, a former US Navy man, Artificial Intelligence is going to replace mankind. This person was involved, for instance, in both the Philadelphia Experiment and Project Montauk.
https://www.quora.com/Will-ChatGPT-replace-programmers/answer/Brian-Smith-5956
.Keys: snip
An update to this topic:

I just found out about AI2's OLMo.
OLMo stands for Open Language Model - basically a free and open-source large language model.

The Allen Institute was founded by Microsoft co-founder Paul Allen as a non-profit organization focused on open AI research and development.

Here's their repository for those interested:
https://github.com/allenai/OLMo

And a video lecture explaining what AI2's OLMo is and how it compares to other Closed Source Large Language Models:

AI Marketplace - "AI2's OLMo (Open Language Model): Overview and Fine-Tuning" - (1:00:22 duration)
https://www.youtube.com/watch?v=LvYGK4-1J58

Also, more research resources for those interested:

https://huggingface.co/

https://blog.allenai.org/olmo-open-language-model-87ccfc95f580?gi=8cf8ea565a54
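For those who want to try an open model locally, here is a minimal sketch using the Hugging Face transformers library - the exact model ID ("allenai/OLMo-1B-hf") and whether it fits on your hardware are assumptions on my part, so check the OLMo repository for current names and requirements:

# Assumes: pip install transformers torch
from transformers import pipeline

# Downloads the open checkpoint once; after that everything runs on
# your own machine, no vendor API involved.
generator = pipeline("text-generation", model="allenai/OLMo-1B-hf")

result = generator("Open language models are", max_new_tokens=30)
print(result[0]["generated_text"])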
Post edited June 30, 2024 by .Keys