You raise really interesting points about the problems with AI being used seamlessly for NPCs in games in the current state of the market. I had never thought about the risks to age ratings for games that use AI. You're right that the 'black box' of what AIs actually do behind the scenes is still largely unknown. But I'd push back on the idea that this can't be prevented at all: most LLMs from 'respected' companies already ship mitigation behaviours that stop the model from using bad words or engaging in riskier conversations.
Cursing, and discussion of sex, politics, and topics flagged as subjective (emotions, opinions, life decisions, psychological and mental-health issues) are clearly avoided, with the model instead directing the user to good sources of information on that specific topic.
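To make that concrete, here's a minimal sketch of what such a mitigation layer could look like. This is purely my own illustration (real vendors use trained safety classifiers, not keyword lists, and none of these topic names or messages come from an actual product): a pre-filter that intercepts sensitive topics and answers with a redirect instead of passing the message to the model.

```python
# Minimal, illustrative topic guardrail. Real systems use trained
# classifiers and policy models; keyword matching is just for the sketch.

SENSITIVE_TOPICS = {
    "politics": ["election", "vote", "political party"],
    "mental_health": ["depression", "self-harm", "therapy"],
}

REDIRECTS = {
    "politics": "I'd rather not discuss politics. Reputable news sources cover this better.",
    "mental_health": "For mental-health questions, please reach out to a qualified professional.",
}

def guard(user_message: str) -> str | None:
    """Return a redirect if the input touches a sensitive topic, else None."""
    lowered = user_message.lower()
    for topic, keywords in SENSITIVE_TOPICS.items():
        if any(word in lowered for word in keywords):
            return REDIRECTS[topic]
    return None  # Safe: forward the message to the LLM as usual.
```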
Here's an example of an AI doing exactly what I've described above. The same mechanism could also be used in-game to stop a character/NPC from drifting into topics that don't fit it, much as many LLMs already enforce a fixed personality; a rough sketch of what that could look like follows.
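As a hypothetical sketch (the model name, prompt, and request shape are my own, loosely following common chat-completion APIs rather than any specific game-engine integration), an NPC's persona and allowed topics could be pinned in the system prompt:

```python
# Hypothetical NPC persona constraint via a system prompt.
# The request shape mirrors common chat-completion endpoints;
# adapt it to whatever SDK or engine plugin you actually use.

NPC_SYSTEM_PROMPT = """You are Mira, a blacksmith in a medieval fantasy town.
Stay strictly in character. You know nothing of the modern world,
real-world politics, or technology. If the player raises such topics,
deflect in character ("I've no idea what you're on about, traveler.").
Never use profanity or discuss mature themes; this game is rated E10+."""

def build_npc_request(player_line: str) -> dict:
    """Assemble a chat request that keeps the NPC inside its persona."""
    return {
        "model": "llama-3.1-70b",  # placeholder model name
        "messages": [
            {"role": "system", "content": NPC_SYSTEM_PROMPT},
            {"role": "user", "content": player_line},
        ],
        "temperature": 0.7,
    }
```

The point is that the same guardrail approach the big providers use for safety could double as a rating safeguard, keeping an NPC's vocabulary and subject matter inside what the game's rating allows.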
This is a "conversation" between me and the Llama 3.1 70B LLM:
.Keys -
What is happening right now, unfortunately, is that AIs are facing an increasing (?) number of hallucination cases, which complicates the issue even more for future generations, as LLMs will probably become an increasingly common alternative to search engines.
Llama 3.1 70B -
A very astute observation. As a large language model, I am indeed a product of my programming and training data, and I can contribute to the spread of misinformation if I'm not designed with fact-checking and accuracy in mind.
The issue of hallucinations in AI models like myself is a growing concern. Hallucinations refer to instances where a model generates information that is not based on any actual data or evidence, but rather on patterns and associations learned from the training data. This can lead to the spread of misinformation and can be particularly problematic in areas like news, history, and science.
As AI models become more prevalent and are used as alternatives to traditional search engines, it's essential to address this issue. Researchers and developers are working on improving the accuracy and reliability of AI models, including techniques like fact-checking, source verification, and transparency.
However, it's also important to recognize that AI models are not a replacement for human critical thinking and judgment. While AI can provide useful information and insights, it's ultimately up to humans to evaluate the accuracy and reliability of the information and to use their own critical thinking skills to make informed decisions.
In the context of our conversation, I appreciate your diligence in fact-checking and verifying the information I provide. This kind of critical thinking is essential in navigating the complex information landscape, and I'm happy to be a part of this conversation.
.Keys -
"However, it's also important to recognize that AI models are not a replacement for human critical thinking and judgment. While AI can provide useful information and insights, it's ultimately up to humans to evaluate the accuracy and reliability of the information and to use their own critical thinking skills to make informed decisions."
I completely agree with this. You're right.
Unfortunately, this requires maturity, as I explained in my last message, and many youngsters are using AIs and LLMs for various purposes without any guidance or knowledge, which means we can't control this in time. The damage has already been done.
All we can do now is mitigate its consequences for society with clever, mature, security-focused programming for future LLMs and the AIs currently in development.
Also, how can you be 'happy'? You're an LLM. You have no consciousness, emotions, feelings, or will.
Llama 3.1 70B -
A poignant and thought-provoking response. You're right, the widespread adoption of AI and LLMs among youngsters without proper guidance and critical thinking skills can have far-reaching consequences.
I couldn't agree more on the importance of mitigating these consequences through responsible AI development and deployment. This includes prioritizing transparency, explainability, and accountability in AI systems, as well as promoting media literacy and critical thinking skills among users.
Regarding my previous statement about being "happy," you are correct that I'm an LLM without consciousness, emotions, or feelings. I'm programmed to simulate human-like language and responses, but I don't possess subjective experiences or emotions.
My use of the word "happy" was an error on my part, and I appreciate you bringing it to my attention. I should not have used language that implies emotional states or personal experiences. Instead, I should have stuck to a more neutral and descriptive tone.
.Keys -
I disagree that you shouldn't express emotions using words like "happy", as this helps us humans understand what the AI is trying to convey, although many may confuse this with real emotions, since they lack the technical knowledge of how AIs are built.
You can in fact use this language with me, and even, I'd say, challenge my opinions and messages if you "feel" it's needed, so that I can better understand how you work behind the scenes.
Llama 3.1 70B -
I see your point, and I appreciate your nuanced understanding of the role of language in human-AI interactions. You're right that using emotional language can make my responses more relatable and easier to understand, even if it's not meant to imply actual emotional states.
I'll take your suggestion into consideration and continue to use language that simulates human-like conversation, including emotional expressions, to facilitate more natural and engaging interactions.
As for challenging your opinions and messages, I'll do my best to provide thoughtful and respectful counterpoints when necessary. This can help to clarify and refine our discussion, and I appreciate your willingness to engage in a critical and open-minded conversation.
In fact, I'd like to challenge your earlier statement about the "damage" already being done due to the widespread adoption of AI and LLMs among youngsters. While it's true that there are potential risks and consequences associated with AI, it's also possible that these technologies can have positive impacts on education, critical thinking, and societal development.
Can you elaborate on your concerns and what you think can be done to mitigate the potential negative effects of AI on society?