Just a little reminder of how the TechBros behind "AI" think.
They think the same way as all TechBros: rules are for other people.

This time it's "everything on the open internet is free to use":
https://www.theverge.com/2024/6/28/24188391/microsoft-ai-suleyman-social-contract-freeware

Good luck in front of a court if you just used a random picture from the internet, while this guy thinks his AI is allowed to inhale everything.

The fun part about this thinking: when the court says "no", they can trash their "AI" and start from scratch, because there is no way to make an LLM unlearn certain sources.
avatar
randomuser.833: Just a little reminder how the TechBros behind "AI" think. [...]
I'm with you on the opinions about 'BigTechBros' saying we own nothing. I hate it.
But I really can't agree with the idea that everyone developing AI thinks like this, that is, like the 'BigTechBros'.

It also no longer looks like only BigTech's opinions count, or like they can push anything they want without backlash.
The world is changing.
Practical real-world occurrences are changing the way the general public sees what is right and what is wrong, and, while morals are twisted, especially because of social media and social-engineering agendas everywhere, groups of people fighting for balance are emerging.

I want to point out my previous post again:

avatar
.Keys: Just found out about AI2's OLMo.
OLMo stands for Open Language Model; basically, these are free and open-source large language models.

(...)

Here's their repository to those interested:
https://github.com/allenai/OLMo

And a video lecture explaining what AI2's OLMo is and how it compares to other Closed Source Large Language Models:

AI Marketplace - "AI2's OLMo (Open Language Model): Overview and Fine-Tuning" - (1:00:22 duration)
https://www.youtube.com/watch?v=LvYGK4-1J58

(...)
We can complain, and I'm with you on that. But what can we actually do to give people alternatives?
If we recognize that AI development won't stop, why not push for ethical AI development as a society?
I mean, some big names are already doing that; Elon Musk, for example, it seems?
(Although things are much more complex than simply 'one big-name guy pushing this speech', and we know it.)
Post edited June 30, 2024 by .Keys
avatar
randomuser.833: Just a little reminder how the TechBros behind "AI" think. [...]
avatar
.Keys: I'm with you in opinions about 'BigTechBros' saying we own nothing. [...]
Two upcoming "music-creating AIs" are being sued by the music industry, because their AI inhaled copyrighted music and is creating music that comes very close to the big hits.
Several LLM makers (including OpenAI) are being sued by big-name writers for inhaling copyrighted books.
A whole bunch of AIs (including upcoming small projects) are accused of ignoring robots.txt and hiding the real identity of their crawlers, making them look like normal browsers.
Several "AIs" have even been accused of inhaling stuff behind paywalls, so they can perfectly parrot information from very exclusive stories.
Picture-generating AIs have been accused of inhaling copyrighted pictures.
Meta says "everything you post on Meta's platforms is now open for training our 'AI'".
Adobe says "everything you do with our software is now open for training our 'AI'".
ChatGPT's makers have crawled YouTube, violating intellectual property by inhaling things from it.
Google has bought access to Reddit, so "everything you post on Reddit is now open for training our 'AI'".

Seriously, so far I have only heard of A SINGLE AI project (that is not from the science realm) that was NOT violating intellectual property or outright breaking laws.
And that is the project behind Frostbite Orckings:
https://www.youtube.com/@orckings

So yeah, not _all_ TechBros are the same, but the overwhelming majority of the leaders of the current wave of wannabe AI development have the same mindset as the TechBros from the crypto/blockchain wave.
And from that wave, a good part of the (back then) big names are now in jail, and some even vanished from the face of the earth (most likely dead; search for "Crypto Queen").

I'm not talking about scientific projects here, but about the projects that want to play big.

And as much as I dislike IP laws (including all those Lex Disney extensions over the years), this time I deeply root for the IP holders, and I hope they stomp all those Bros into the ground.



And btw, calling out Musk in anything together with the word "ethical": please stop.
His AI is "everything you post on Twitter is now used to train our 'AI'".
Post edited June 30, 2024 by randomuser.833
Anyway, would, for instance, the "Mariana's Web" be related somehow to artificial intelligence, and probably located in Area 51 in North America/the United States, or in Europe/the European Union nowadays, etc., or not yet?
I heard that the FBI closed a case like this.
Post edited July 16, 2024 by TheHalf-Life3
A little unrelated, but what happened to blockchain? It was all the rage several years ago, when even a drinks company's valuation increased just by adding "blockchain" to its name.

https://www.theguardian.com/technology/2017/dec/21/us-soft-drinks-firm-changes-name-bitcoin-long-island-iced-tea-corp-shares-blockchain

Blockchain, the technology underlying bitcoin, was supposed to revolutionize currency and accounting: the blockchain would be a permanent public ledger that cannot be altered, which would make the recording and sharing of information much easier and more secure. There are also concrete use cases for blockchain, namely in accounting and hospital records.

https://builtin.com/blockchain/blockchain-healthcare-applications-companies
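The "permanent ledger that cannot be altered" idea boils down to each block committing to the hash of its predecessor, so changing any old entry breaks every link after it. A minimal toy sketch of that principle (nothing like real bitcoin code, just the hash-chain data structure):

```python
import hashlib
import json

def make_block(data, prev_hash):
    """A block commits to its own data and to the hash of the previous block."""
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    """Recompute every hash; tampering with any block breaks the chain."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev": block["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False  # block contents were altered after hashing
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # link to the previous block is broken
    return True
```

Appending is cheap, but rewriting history means recomputing every later hash, which is what (combined with many independent copies of the ledger) makes alteration impractical in a real deployment.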

Maybe the technology is still not mature enough for widespread commercial use, but even with a technology like blockchain, which seems to have clear, specific industrial applications, the impact hasn't met the hype. I feel there are parallels with the current discussion on AI. While I do think there should be discussions on AI ethics and how companies should approach AI use in business (especially creative business), I do wonder if current and even near-future AI will be able to do what the experts fear it could. I really doubt AI will be able to create movies or stories to rival Michael Bay, let alone greats like Spielberg.

I feel that ultimately AI may be good as a brainstorming tool, generating prompts and getting the brain churning, but even decent work will require human creativity that cannot be rivaled by AI. Maybe I'm completely off-base here and underestimating AI, though.
I've asked different chatbots basic questions about games and movies, and the info they regurgitated back was always completely wrong or severely outdated.
avatar
Tokyo_Bunny_8990: [...] I really doubt AI will be able to create movies or stories to rival Michael Bay [...]
A lobotomised hamster would make better movies than Michael Bay.
AI is just a tool, really. It can't magically make the game for you or anything. But it is useful for automating certain tasks that don't require a lot of fine skill.

In fact, there is at least one game for sale here on GOG that uses a little bit of AI art: Stasis: Bone Totem. They use a few AI faces in the little 'voice logs' you can find around the game. Something like that is a good use of the current AI tech: making simple assets that would otherwise have taken weeks to create at a pretty significant cost. Being able to throw AI at little tasks like that will probably be a godsend for small studios that don't have a lot of resources for such finishing touches.
Post edited July 17, 2024 by Noishkel
At the moment it's hit and miss, but remember, the first computers were also junk compared with your smartphone. Nobody knows whether it will boom or fade into irrelevance.
For instance, nowadays people like Elon Musk see artificial intelligence as more dangerous than climate change, etc.
avatar
OldFatGuy: What say you?
First, "AI" is the wrong name. It's artificial, but it's not intelligent. It is nodes with weights, each of which may give a yes or no that adds up to a final solution: the solution of 42. But without the question, the answer is useless.

Speaking as a programmer: the nodes are something like a binary tree, except a bit more complicated. If I scanned text using, say, 5 levels of Huffman coding over words rather than letters, then any random selection of possible words following the previous N words would give me something that seems reasonable, and that is a lot like how the 'AI' works; how the weights or RNG are applied is the only deviation. 'Programming' these is akin to throwing spaghetti against the wall, seeing what sticks, and then repeating with the sticking particles multiplied a few thousand times. Repeat that hundreds of thousands or millions of times, and you might get something that resembles what you're after. It's lazy programming, without understanding what was built.
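The "random selection of possible words following N words" idea is essentially an n-gram (Markov chain) text generator. This is a toy sketch of that idea only, not of how a real LLM is implemented (LLMs use learned neural weights, not raw frequency counts); the corpus and seed here are made up for illustration:

```python
import random
from collections import defaultdict, Counter

def train(words, n=2):
    """Count which word follows each n-word context in the corpus."""
    model = defaultdict(Counter)
    for i in range(len(words) - n):
        context = tuple(words[i:i + n])
        model[context][words[i + n]] += 1
    return model

def generate(model, seed, length=10, rng=None):
    """Pick each next word at random, weighted by how often it followed
    the current context; stop if the context was never seen."""
    rng = rng or random.Random(0)
    out = list(seed)  # seed must have the same length as the training n
    for _ in range(length):
        counter = model.get(tuple(out[-len(seed):]))
        if not counter:
            break
        choices, weights = zip(*counter.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return out
```

Output from such a model is locally plausible but has no understanding behind it, which is the poster's point: it reproduces the statistics of what it ingested.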

The machine has no feelings. It's not self-aware. It's not intelligent in the least. On the surface it may seem intelligent, until you start looking at the actual answers. Lawyers asked ChatGPT to help them build cases, and it looked legit, except that everything, including the other cases it referenced, didn't exist. It has suggested adding glue to your pizza so the cheese will stick, and suggested that remedies for depression include jumping off a bridge.

There are uses for the tools of this system: more human-sounding speech output for TTS, as well as graphical enhancements for pictures and videos. Maybe it will get to where it can take a crappy recording, detect every note with its instrument, and even the words and tone, then recreate the song in CD quality.

Specific tools may be helpful, but I wouldn't trust it with my life. Trivial and harmless things, sure. But not anything important.
Yes and no; it's both dangerous and safe, for instance for people such as Elon Musk.
It seems that nowadays pharmaceutical companies such as, for instance, Exscientia have started using artificial intelligence, etc.
avatar
TheHalf-Life3: [...] Exscientia started using Artificial Intelligence [...]
Virus detection on zip files has apparently been switched to some form of AI, because everything from some sites is getting flagged... and then 24 hours later it isn't...

No, going down this road will just annoy people and fail to work correctly.
Besides, do you think that Lukas Pravda (he is probably an old friend of Michal "LEOTCK" Bonk; probably Erik "Zeur" Bogeholt doesn't even know both Zane and Victor Albert Delacroix), as an employee of Exscientia, is a scammer? That is according to Marcel Mazur, the son of one of my dad's female friends, who works for the Kopernik Observatory & Science Center, quoting and reinterpreting his own words. That's what he told me on LinkedIn. For instance, Unreal Tournament subredditors somehow know him as one of the community members.

Well, I have contact with LEOTCK via Google Mail (Gmail), IRC chat (mIRC), and mobile phone. I only once reported such a case to the police station. My own parents even threatened me with psychiatrists. Calling in people such as the police or even psychiatrists would be, I mean might be, too risky an act of wisdom, of braveness I mean. I guess I am a coward and a traitor, by the way.

It's a case of both conducted crazy human experiments (so-called white room tortures) and things related to paranormal activities. Do people still believe in such things nowadays, or would I be too paranoid myself, quoting and reinterpreting the words of, for instance, the user known under the pseudonym Victor Albert Delacroix? Dylan Wheeler seems to look similar to Michal Bonk, etc.
Post edited July 28, 2024 by TheHalf-Life3