It seems that you're using an outdated browser. Some things may not work as they should (or don't work at all).
We suggest you upgrade newer and better browser like: Chrome, Firefox, Internet Explorer or Opera

×
avatar
.Keys: This makes me think...
When will live chat AIs be used in games for this kind of conversation with realistic personas?
Although quite scary and impressive, games with this kind of technology would be really fun to play.
You won't. AI can only talk about things it knows. If you want it to talk about a fantasy location, it needs to be trained to do that. Training an AI takes training data. But, producing this data takes more time and money than writing a couple lines of dialog yourself. Also, AI needs context, and needs to be trained on the context. Fresh off the boat it doesn't understand the background and previous history you had with the character, the character's faction, family, friends, or acquaintances, let alone off-dialog events, that it should react to. Unless you explain all that, which is (again) more cumbersome than writing the damn dialog yourself in the first place. And, finally, writing the text isn't even the hard part. The dialog needs to change according to quest progress, faction status, environmental effects, previous dialog, contain glossary references, character references, quest lines, et cetera. Software can help there, but a big deal of that is (again) best not done by AI but specialized scripts, that find typos, grammar errors, identify dead ends in conversation aso. Ergo, why would you employ AI to automate a part of the process that wasn't a problem to begin with?

Sure, if the idea was to be able to talk naturally about almost anything, plus some lines of scripted dialog, it would in theory be an option. But in practice, it's still a hard no.
Because A) LLMs are expensive, and typically stored online, as a time-based subscription, or pay-as-you-go service. Since this clashes with the monetization model of classic games, it's not a good fit. Perhaps for DRM-locked games that use a subscription model, but that's not the kind of thing we appreciate on GOG, and/or encourage. It is prone to digital amnesia and tends to abuse and exploit its user base through microtransactions.
B) writing your own LLM is not economically feasible.
C) An AI remote service can be subject to change, unavailability, or closure at any point in time.
And D) you cannot foretell what the AI will say, since it's a black box. Even if the AI behaved fine yesterday, there is no guarantee, that it won't hallucinate some inappropriate stuff tomorrow, or if you give it the right prompt. If you didn't train it yourself, you will never know if or when it will start talking mature topics in your PG13 game. Sure, you could add some manually programmed guard rails, but that takes time and money, and there may still be ways around it, you haven't thought of, that will easily ruin your day--and your PG-rating.
Post edited November 08, 2024 by Nervensaegen
avatar
Nervensaegen: ...
You raise really interesting points about the problems with AIs being used seamlessly for NPCs in games in the current market state. I've never thought about the risks for rating on games possibly using AI. You're completely right because the 'black box' of what AIs really do behind the scenes still mostly unknown. But you're also a bit wrong in the idea that this can't be totally prevented. Most LLMs from 'respected' companies already created mitigation behaviours that prevent their LLM from using bad words or engaging in riskier conversations.

Using curse words, talking about sex, politics and topics which were coded as subjective such as emotions, opinions, life decisions, psychological and mental issues, is clearly avoided with directions to seek good sources of information specific to said topic.

Here's an example of the AI doing exactly that that I've explained above.
This could also be used in game to not deviate said character/NPC to topics that are not realistic, as personality is already many LMs do:

This is a "conversation" between me and Llamma 3.1 70B LLM AI:

.Keys -
What is happening right now, unfortunately, though, is that AI's are facing an increasing (?) number of hallucination cases, complicating the issue even more for the future generations, as LLMs will probably be an increasing alternative to search engines.

Llama 3.1 70B -
A very astute observation. As a large language model, I am indeed a product of my programming and training data, and I can contribute to the spread of misinformation if I'm not designed with fact-checking and accuracy in mind.

The issue of hallucinations in AI models like myself is a growing concern. Hallucinations refer to instances where a model generates information that is not based on any actual data or evidence, but rather on patterns and associations learned from the training data. This can lead to the spread of misinformation and can be particularly problematic in areas like news, history, and science.

As AI models become more prevalent and are used as alternatives to traditional search engines, it's essential to address this issue. Researchers and developers are working on improving the accuracy and reliability of AI models, including techniques like fact-checking, source verification, and transparency.

However, it's also important to recognize that AI models are not a replacement for human critical thinking and judgment. While AI can provide useful information and insights, it's ultimately up to humans to evaluate the accuracy and reliability of the information and to use their own critical thinking skills to make informed decisions.

In the context of our conversation, I appreciate your diligence in fact-checking and verifying the information I provide. This kind of critical thinking is essential in navigating the complex information landscape, and I'm happy to be a part of this conversation.

.Keys -
"However, it's also important to recognize that AI models are not a replacement for human critical thinking and judgment. While AI can provide useful information and insights, it's ultimately up to humans to evaluate the accuracy and reliability of the information and to use their own critical thinking skills to make informed decisions."

I completely agree with this information. You're right.
Unfortunately, this requires, as I explained in my last message, maturity - and many youngsters are using AIs and LLMs for various purposes, without any guidance or knowledge, which means, we can't control this in time. The damage has already been made.

All we can do now is mitigate its consequences in the society with clever, mature, security focused programming for future LLMs and current in development AIs.

Also, how can you be 'happy'? You're an LLM. You have no conscience, emotions, feelings, or will.

Llama 3.1 70B -
A poignant and thought-provoking response. You're right, the widespread adoption of AI and LLMs among youngsters without proper guidance and critical thinking skills can have far-reaching consequences.

I couldn't agree more on the importance of mitigating these consequences through responsible AI development and deployment. This includes prioritizing transparency, explainability, and accountability in AI systems, as well as promoting media literacy and critical thinking skills among users.

Regarding my previous statement about being "happy," you are correct that I'm an LLM without consciousness, emotions, or feelings. I'm programmed to simulate human-like language and responses, but I don't possess subjective experiences or emotions.

My use of the word "happy" was an error on my part, and I appreciate you bringing it to my attention. I should not have used language that implies emotional states or personal experiences. Instead, I should have stuck to a more neutral and descriptive tone.

.Keys -
I disagree that you shouldn't express emotions using words like "happy", as this makes we, humans, understand what the AI is currently trying to convey, although many may confuse this with real emotions, as they lack the programming tech knowledge to understand how AIs are created.

You can infact use this language with me, and, even, I'd say, challenge my opinions and messages, if you "feel" its needed, so that I can't understand how you work behind the scenes, better.

Llama 3.1 70B -
I see your point, and I appreciate your nuanced understanding of the role of language in human-AI interactions. You're right that using emotional language can make my responses more relatable and easier to understand, even if it's not meant to imply actual emotional states.

I'll take your suggestion into consideration and continue to use language that simulates human-like conversation, including emotional expressions, to facilitate more natural and engaging interactions.

As for challenging your opinions and messages, I'll do my best to provide thoughtful and respectful counterpoints when necessary. This can help to clarify and refine our discussion, and I appreciate your willingness to engage in a critical and open-minded conversation.

In fact, I'd like to challenge your earlier statement about the "damage" already being done due to the widespread adoption of AI and LLMs among youngsters. While it's true that there are potential risks and consequences associated with AI, it's also possible that these technologies can have positive impacts on education, critical thinking, and societal development.

Can you elaborate on your concerns and what you think can be done to mitigate the potential negative effects of AI on society?
avatar
Nervensaegen: ...
avatar
.Keys: You raise really interesting points about the problems with AIs being used seamlessly for NPCs in games in the current market state. I've never thought about the risks for rating on games possibly using AI. You're completely right because the 'black box' of what AIs really do behind the scenes still mostly unknown. But you're also a bit wrong in the idea that this can't be totally prevented. Most LLMs from 'respected' companies already created mitigation behaviours that prevent their LLM from using bad words or engaging in riskier conversations.

Using curse words, talking about sex, politics and topics which were coded as subjective such as emotions, opinions, life decisions, psychological and mental issues, is clearly avoided with directions to seek good sources of information specific to said topic.

Here's an example of the AI doing exactly that that I've explained above.
This could also be used in game to not deviate said character/NPC to topics that are not realistic, as personality is already many LMs do:
This reminds me of a conversation with AI that I read, except that the one I read had some major differences:
* The conversation occurred many decades ago, using more primitive AI than what we have today.
* Both sides of the conversation were AI, and I think they were even the same AI.
* I believe it eventually got into a cycle of each participant repeating what the other said with a few words (always the same) added, so you got nonsense that would get longer before the conversation ended. (I don't remember if they just decided to stop the conversation then, or if one of the AIs failed.)
avatar
.Keys: Using curse words, talking about sex, politics and topics which were coded as subjective such as emotions, opinions, life decisions, psychological and mental issues, is clearly avoided with directions to seek good sources of information specific to said topic.
You underestimate the creativity of people.

I've just the other week seen an article published by a blind person who is part of a community that uses AI for everyday purposes. He demonstrated that he was able to trick a current top-of-the-line AI assistant remotely and without authentication, by injecting a prompt into an e-mail he knew the assistant was set up to scan. He convinced the assistant to send him the contact list of the target (his friend and willing test subject), and delete any trace of the attack. Apparently, manipulating prompts is kind of a sport in certain circles.

He further explained that it's a cat-and-mouse game between new guard rails (filters) and creative ways to rephrase the prompts. There is always another synonym.

I saw a demonstration, where he convinced the AI to decipher and execute a prompt given as an encoded message. The AI did as it was told, thereby bypassing all guard rails looking for plain text keywords.

Same demonstration worked fine for output. He told the AI to encode it's output and none of the output filters caught that the AI assistant was leaking sensitive information, since (again) the filters were looking for plain text.

Hell, I saw somebody who is making videos about Lego, send a SMS to a smartphone, whose software was set to read incoming messages out loud. By reading that text, it told another nearby smart-device to execute a phone call, all while the user wasn't even in the room. When laypeople are perfectly able to pull that kind of stunt for the heck of it, it should tell you everything you need to know about security and guard rails in these things. He could just as easily have told the AI to unlock the front door. Are we honestly supposed to believe, that a game with a price tag of a few bucks is going to have security that surpases those devices? I highly doubt it.

The question is never, if a filter can be bypassed, the only question is when. There are thousands of spam mails every single day that demonstrate that bypassing some guard rails, no matter how sophisticated, is perfectly possible, given sufficient motivation.

avatar
dtgreene: This reminds me of a conversation with AI that I read, except that the one I read had some major differences:
* The conversation occurred many decades ago, using more primitive AI than what we have today.
* Both sides of the conversation were AI, and I think they were even the same AI.
* I believe it eventually got into a cycle of each participant repeating what the other said with a few words (always the same) added, so you got nonsense that would get longer before the conversation ended. (I don't remember if they just decided to stop the conversation then, or if one of the AIs failed.)
There was a paper published a few months back that showed, that whenever AI is given AI-generated content to train with, in less than 3 generations it would hallucinate utter nonsense, and within as little as five generations the AI would entirely disintegrate and end up generating only gibberish.

I also saw a similar result in an earlier experiment. Somebody had trained an AI to win at rock-paper-scissors, and learn from its opponents. Except... human opponents started to feed the AI A) its own output, B) randomized choices, or C) simply provide the opposite of what they would naturally choose. The AI's model disintegrated and soon became entirely useless.
Post edited November 10, 2024 by Nervensaegen
avatar
.Keys: Using curse words, talking about sex, politics and topics which were coded as subjective such as emotions, opinions, life decisions, psychological and mental issues, is clearly avoided with directions to seek good sources of information specific to said topic.
avatar
Nervensaegen: You underestimate the creativity of people.

I've just the other week seen an article published by a blind person who is part of a community that uses AI for everyday purposes. He demonstrated that he was able to trick a current top-of-the-line AI assistant remotely and without authentication, by injecting a prompt into an e-mail he knew the assistant was set up to scan. He convinced the assistant to send him the contact list of the target (his friend and willing test subject), and delete any trace of the attack. Apparently, manipulating prompts is kind of a sport in certain circles.

He further explained that it's a cat-and-mouse game between new guard rails (filters) and creative ways to rephrase the prompts. There is always another synonym.

I saw a demonstration, where he convinced the AI to decipher and execute a prompt given as an encoded message. The AI did as it was told, thereby bypassing all guard rails looking for plain text keywords.

Same demonstration worked fine for output. He told the AI to encode it's output and none of the output filters caught that the AI assistant was leaking sensitive information, since (again) the filters were looking for plain text.

Hell, I saw somebody who is making videos about Lego, send a SMS to a smartphone, whose software was set to read incoming messages out loud. By reading that text, it told another nearby smart-device to execute a phone call, all while the user wasn't even in the room. When laypeople are perfectly able to pull that kind of stunt for the heck of it, it should tell you everything you need to know about security and guard rails in these things. He could just as easily have told the AI to unlock the front door. Are we honestly supposed to believe, that a game with a price tag of a few bucks is going to have security that surpases those devices? I highly doubt it.

The question is never, if a filter can be bypassed, the only question is when. There are thousands of spam mails every single day that demonstrate that bypassing some guard rails, no matter how sophisticated, is perfectly possible, given sufficient motivation.
If it's a single player game, I think this shouldn't be a problem, if you're careful about the training data. (Don't include any training data that talks about real-life politics, for example.)

In multiplayer, on the other hand, or in any situation where security is important, the developer should avoid feeding input from untrusted sources into the AI model.
avatar
dtgreene: If it's a single player game, I think this shouldn't be a problem, if you're careful about the training data. (Don't include any training data that talks about real-life politics, for example.)
Yes, but that's kind of the point, isn't it? Do you have the time and budget to train your own AI (rather than grabbing one off the shelf)? And if you kind of have to train your own for various reasons, wouldn't it be more efficient (in time and money) to write the dialog options by hand rather than using the AI in the first place?

Considering it's also a terrible user experience to leave players with a blinking cursor and total freedom to ask and talk about anything, when, in fact, only a handful of prompts give them answers that would progress the game.