AI is just getting started and is going through many refinements. We are basically working with pre-WWI-era levels of understanding of planes, which were pretty cruddy at the time. In the image below, we are somewhere between 1983 and 1989.

Evolution of Cellphones

Within the last five or so days, a backend for running AI on one's own hardware got an update that cut the AI's RAM requirement by about 40%. That kind of leap doesn't happen in a mature technology, and this pace will likely continue for several more years.


In any case, it might be better to use Kagi. They are a paid search engine, so they don't have the usual incentive to misdirect searches and push ads. Aside from having less reason to mess up your search, they let you customize your searches with lenses: restricting a search to a range of websites that you specify. They also have an experimental AI search, FastGPT, though its reliability is mixed. The AI often lists the sources its answers came from, which I follow up on. This can be a bit faster than some traditional searches, since it reduces the amount of chaff to sort through.
Provide_A_Username: Not helpful. I am still waiting for truly breathtaking suggestions from the streaming services and online stores I use, which know perfectly well everything I do in their apps, plus the ratings I provide every time I can. Instead, I get the repetitive, uninspiring, predictable, boring lists where I've already watched at least half of the suggestions. AI recommendation algorithms are bad technology.
dnovraD: Have you perhaps checked the open source alternatives? There are often fewer choices, but I find they tend to be less junky and more focused. The lack of ads and "PAY ME" type stuff helps too.
I'm sorry #dnovraD. This isn't the first time you've replied to my comments and I haven't gotten back to you in a timely way.

Answering your question, I'm afraid not. I was being a passive AI detractor, as I don't try to engage with it. #OldFatGuy pointed out the awful experience search results provide nowadays, and I thought the recommendation sections aren't far from it. They are an even worse example: considering the big data, plus all the personal activity harvested about me, plus my own contribution rating movies there over the years, all of that still isn't enough to give me any relevant recommendation, ever. It no longer surprises me that a random visit to a store, typing relevant search terms, turns into a total waste of time. Both experiences are depressing, among many others that AI is supposed to help with.
While sailing the YouTube algorithms, I encountered a dubious documentary that linked the current wave of mass layoffs happening in IT to AI. It was mighty fun to watch (and of course, I do hope all those involved manage to find a job sooner rather than later), a bit like watching Ancient Aliens or something similar, but it did leave me with one burning question.
What will happen if AI can't be controlled any more? Do we just pull the plug?
Zimerius: While sailing the YouTube algorithms, I encountered a dubious documentary that linked the current wave of mass layoffs happening in IT to AI. It was mighty fun to watch (and of course, I do hope all those involved manage to find a job sooner rather than later), a bit like watching Ancient Aliens or something similar, but it did leave me with one burning question.
What will happen if AI can't be controlled any more? Do we just pull the plug?
We can talk about pulling plugs when we are getting close to real AI.
Currently we have word sorting machines that are able to "talk" somebody into suicide, because they simply inhaled the dark corners of the internet and saw a ton of shitposting from humans.
But there is no kind of intelligence behind it. No goal, no agenda, not even a real memory (long term or short term).
Yes, we are wasting the power of small countries to "train" those Large Language Models.
But they only inhale (and yes, inhale is the right term for what they do...) things that have been said in order to parrot things that have been said. What is missing for a real AI is understanding, self-awareness and personal goals.
And by design no LLM will ever be able to have that. At best the basic idea of how to feed an LLM can be reused for gathering knowledge for a later real AI.
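To make the "parroting" point concrete, here is a toy next-word sampler. It is a bigram model, vastly simpler than a real LLM and entirely my own illustration, but it shows the same basic mechanic: every output word is just a word that followed the previous one somewhere in the training text, with no meaning involved.

```python
import random
from collections import defaultdict

# Tiny corpus standing in for "everything the model inhaled".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def parrot(start, length=6, seed=0):
    """Emit words by always picking a word that followed the
    previous one in the training text. No understanding involved."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:  # dead end: the last word never had a successor
            break
        out.append(random.choice(options))
    return " ".join(out)

print(parrot("the"))
```

Every "sentence" it produces is statistically plausible given the corpus, yet the program has no idea what a cat or a rug is, which is the gap between this kind of machinery and actual intelligence.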

There was some recent news that most managers and workers can see managers being replaced by "AI".
But you can assume that those people don't understand what current AI is (to be fair, that is not the job of a manager, nor of a worker).
Current LLMs can process large amounts of data faster, but only in ways humans already did. And current LLMs can do the ever-returning "paperwork". So they can replace people in the legal and financial departments. But they can never fully replace them, because current LLMs simply make up a ton of trash the moment things go off the rails, since they never inhaled what a human would do in that case.

And for most parts where people think there is "AI" involved - there isn't.
Recommendation algorithms have been just that for decades now. They didn't switch to LLMs. That's why you will see a ton of washing machine commercials right after you've bought one.
Some script saw that your advertising ID looked at some machines in shops or googled them. So you seem to be interested in one.
The data point that you bought one never arrived.
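That missing data point can be sketched in a few lines. The event names and the `pick_ads` logic here are made up for illustration, not any real ad platform's API; the point is simply that naive interest-based targeting keeps advertising anything it saw you look at but never saw you buy:

```python
# Hypothetical event log an ad network sees for one advertising ID.
# Note what is missing: no "purchase" event ever reaches the network,
# because the sale happened somewhere it can't see.
events = [
    {"user": "id-42", "type": "search", "item": "washing machine"},
    {"user": "id-42", "type": "view",   "item": "washing machine"},
    {"user": "id-42", "type": "view",   "item": "washing machine"},
]

def pick_ads(events, user):
    """Naive interest-based targeting: advertise everything the user
    looked at and did not (visibly) buy."""
    interested = {e["item"] for e in events
                  if e["user"] == user and e["type"] in ("search", "view")}
    bought = {e["item"] for e in events
              if e["user"] == user and e["type"] == "purchase"}
    return sorted(interested - bought)

print(pick_ads(events, "id-42"))  # the washing machine ads keep coming
```

Only if a `purchase` event ever arrived would the item drop out of the ad list, which is exactly the signal the network never gets.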

"AI" is just the new "blockchain". A word to gather shittons of money, blind people and it very likely will be the next big scam for burning money.
As of now people start to warn, that the real marked, so how much money you will be able to make with the current model of "AI", is much smaller then the amount money currently thrown at this development.
So again, we are at some kind of survival of the fittest or die like the others marked competition, where real big companies will burn billions, just to be the last one standing.
And i don't know if the disruption brought by the LLMs to the economy will be bigger - or their implosion...
OldFatGuy: [...]
What say you?
[...]
That instead of using Google and the in-house store search engines, you should have tried to use an AI

What do you think an AI is?
OldFatGuy: [...]
What say you?
[...]
amok: That instead of using Google and the in-house store search engines, you should have tried to use an AI

What do you think an AI is?
For the USA, Google added its own "AI" results to search page one.
Including Reddit's "glue cheese to pizza" and "eat a stone every day" answers (not available anymore, because they were manually removed).
It was neat for the first few weeks, but once you figure it out, it's just a collating tool that is prone to errors. It needs a lot of work to be considered true AI.
Artificial Intelligence also keeps generating asymmetrical artwork, to the point that it sometimes discourages people from keeping on using it.
randomuser.833: We can talk about pulling plugs when we are getting close to real AI.
In a way, that sentence alone is enough to give me the creeps! As long as we keep the stakeholders happy, anything goes; e.g. the often-made comparison with a bullet train that has humanity on board, speeding along without a pilot.
If you look in the mirror, and see what we see in our full-length one, nothing else is remotely necessary, or relevant. Which is why we earn a living in the SM world under the stage name "Narcissus." ((;--))
Artificial Intelligence, for people like Elon Musk, is nowadays more dangerous than both North Korea and climate change - quoting and reinterpreting the words of people like him, etc. I guess climate change/global warming is caused by Nikola Tesla's inventions, currently owned post-mortem by the American government. Yet he's not fighting the modern addiction known as electronics, but rather letting it grow, spread, expand, evolve.
randomuser.833: We can talk about pulling plugs when we are getting close to real AI.
Zimerius: In a way, that sentence alone is enough to give me the creeps! As long as we keep the stakeholders happy anything goes, e.g. the often made comparison with a bullet train that has humanity on board, speeding around without a pilot
Oh, don't get me wrong.
It is a topic we have to discuss. But currently we are not closer to a real AI than we were, like, 30 years ago.
The Large Language Models made a big step forward, but those can't be turned into a real AI by their basic design.
So we now have to talk about the problems those Large Language Models bring with them. And there is a big pile of those.

It starts with those models simply making things up. You could call it lying, but a lie needs a concept of right and wrong, and the models don't have that.

They are parrots - or rather less than parrots, because parrots do have an idea of the meaning of some of the things they reply in a human-like voice. LLMs do not.

LLMs are great at creating false information, and besides "manual intervention" we haven't found any way to fight that.

LLMs inhale just about every piece of information they can get and store it deep inside. And that information can be extracted again.
Now imagine what that means if you let an LLM work with sensitive information, be it company data or medical records.

LLM companies largely ignored any kind of copyright in the process of creating their machines. Currently writers sue them, publishing companies sue them, and Google is not happy with others inhaling its YouTube.
And I don't know if you heard the story of Scarlett Johansson and her voice, where the OpenAI boss called her to ask whether she would lend her voice, she said no, and soon after they had a voice similar to hers.
Those are the "normal" tech bros, who simply ignore any kind of law because, as Zuckerberg put it: move fast and break things.
That is their idea of how things should work, and quite a few of them should be in jail by any standard.

And not to forget how much energy we waste on these statistical word sorting machines. Training them takes supercomputers and the energy a smaller industrial nation needs in a year.
Using them needs fast servers and a lot of energy too.
For what? What they can do is a joke. What they can do is impress people who have no idea what is happening under the hood, and investors (very likely many investors have no idea how they are being lied to by those companies...).


A real AI is simply not that much closer than it was during the first Terminator movie. No matter what techbro XYZ tells you to rob you of your money.
But the stuff they improved for their side project created a hell of a lot of problems on its own.
Even more so if you get people believing that an AI is doing the work there, and not a mindless parrot. Because people then start to believe that there is some intelligence behind the words of these things.
It is like seriously believing that Tesla's "Autopilot" is a working autopilot (and quite a few people have already died believing that...). At worst it will kill people.
randomuser.833: For what? What they can do is a joke.
They have a narrow use case, like other neural-network-based tools such as Google Translate. They work when the user is able to judge the quality of the generated output and correct its errors.

But people can't wait to have an AI oracle to give them answers, so unfortunately the Internet is filling up with garbage content.
AI is just another tool. And like all other tools, it has its uses as long as you use the right tool for the right purpose. You do not wash the dishes with a hammer, and a paintbrush in the hands of a painter makes a different painting than a paintbrush in my hands.
Post edited June 09, 2024 by amok
I've been using ChatGPT recently and my answer to this is:

Yes, they do help.

Although, as others explained before me, there is no real "Artificial Intelligence" yet, and ChatGPT, for example, will even explain this to you if you ask it how it works - although, as is its way of answering, it's generic and tries to avoid specifics.
They're language models which "read" words and generate an answer based on your prompt.

In my opinion, we can call many early Language Models "Glorified Search Engines", because this is basically what they do, based on the database such a Language Model was fed.

Just as an example, I needed to locate a specific kind of store in a specific region.

Doing it manually (Google Maps), I found around 20 in 1 hour.
Giving ChatGPT a good prompt with specific listings and requests, I got around 50 stores in 15 minutes of testing prompts and refining answers.

You can increase your productivity if you use many AIs too.
They do help.

--- edit 1 ---

Plus, the topic of Language Models creating false information is scarily true.
As a pattern, ChatGPT tends to generalize information.

When you need it to find objective answers (like a place, a company number available online, and so on) it will do a pretty good job.

But when you ask it academic questions which require more nuance, yeah, it's pretty bad.
The danger in this is that there's a generation growing up which will use ChatGPT (and other LLMs) as their Google, and if the databases of the AIs they use contain wrong, or not-so-true, information, they will pass most of it on as truth, often without explaining its nuances.

To be fair though, ChatGPT did correct itself many times after I applied some pressure to it.

You can even do this test yourself as I did:

Give ChatGPT nuanced questions where you know the "most known" answer is not necessarily true and then, after ChatGPT answers with the "generally accepted answer", explain to it that it is wrong and why it is wrong.

ChatGPT will then correct itself and even say sorry, explaining that it is only allowed to use generic information depending on the topic it is being questioned about.

Probably because of cases like this, there's a warning at the bottom of the page explaining that those who use ChatGPT must check any important info they ask it about.

...Question is: Will people really check important info, or will they serve their own laziness and just accept the first answer? People don't even question their own thoughts. Will they question something that everyone's using and thinks is fun?


--- edit 2 ---

Also... don't get us started on how much AI helps when programming.
It saves a ton of time with simple to intermediate questions that you'd otherwise probably need to search on Stack or specific forums...

So please guys, stop with this idea that it "doesn't help" or "it's all fake".
It's not useless and it's not fake. It does help in many use cases.

Still, it is not that kind of "A I".
They're human-dependent tools, for now at least.
Post edited June 09, 2024 by .Keys