
LoboBlanco: What if it looks like this ;P

...or this one :D
viperfdl: HELL, NO! They are just creepy!
Nothing a beer can't fix :P
Trajhenkhetlive: If it was truly sentient, yes. It wouldn't be any different. The dating rules would be all the same, long-term relationship processes would be all the same, you'd still have to attract your mate. About the only thing that might be avoided is an imperative biological urge to reproduce. Some day we might prolong life by putting one's consciousness in a robot body, so the lines are going to get really blurred.
As long as they don't try the same thing that OCP did with that bipedal one :P
Post edited February 12, 2014 by LoboBlanco
If we're talking an actual sentient android with a personality then yes I would.

In other cases I wouldn't, because I am not one of those guys who are just concerned with where they put their Philip K.'s (a little android humour).
Blade Runner, anyone? And the director's cut was the bleaker version...

Regardless of the life expectancy of the eventual android dating partner, I say nay.

I would not trade any of my friends for an android, nor would I wish to befriend one - because I do not think that the human emotion can ever be artificially simulated.

And if it is not good enough for friendship, it certainly is not good enough for love.

And would we, for example, have substituted an android for Eric Cantona, provided it were more perfect at scoring goals and less temperamental?

To that, I say certainly nay as well. What is needed for human ingenuity and shadow, shall also be needed for love.
I misunderstood, by date I thought you meant bone.

While the android may be a cheaper date, unless it is a model fashioned after Bender from Futurama, I don't see how anyone would want to date one. There are not many people I would want to date, and you are asking about fake humans. No thanks, I am happy in my current unattached life.
With the current tendency of humans to become progressively dehumanized, it is more than probable that androids (or artificial sentients, or whatever you want to call them) would be more human than us: kinder, more sincere, more trustworthy, more caring, more polite, more loyal, more beautiful, and so on. All those qualities you love in your pets (if you like pets), wrapped in a human-like form.
Pretty much as Futurama shows us.
TStael: I would not trade any of my friends for an android, nor would I wish to befriend one - because I do not think that the human emotion can ever be artificially simulated.
Human emotions are just arbitrary rewards. Once you have an AI which can think (formulate goals, and be incentivized to and rewarded for coming up with the best solutions), you have one which can feel. Making it receive bonus points for long walks on the beach is trivial.
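Starmaker's "bonus points" claim can be sketched as a tiny reward function. A minimal toy in Python, with all function names made up purely for illustration:

```python
# Toy sketch of the "arbitrary rewards" idea: an agent's reward is
# just a number the designer writes down, so bolting on an extra
# incentive (the "long walks on the beach" bonus) is a one-line change.
# These functions are hypothetical, not from any real AI library.

def base_reward(answer: str, correct: str) -> float:
    """1.0 if the answer contains the correct response, else 0.0."""
    return 1.0 if correct.lower() in answer.lower() else 0.0

def shaped_reward(answer: str, correct: str, bonus_word: str = "purple") -> float:
    """Same task reward, plus bonus points for working in a pet word."""
    bonus = 0.5 if bonus_word.lower() in answer.lower() else 0.0
    return base_reward(answer, correct) + bonus

print(shaped_reward("Paris", "Paris"))                     # 1.0 (correct answer)
print(shaped_reward("The purple city of Paris", "Paris"))  # 1.5 (correct + bonus)
```

In spirit this is how reward shaping works in actual reinforcement learning: the objective is whatever function the designer defines, which is the sense in which the rewards are "arbitrary".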
Date an android? No. That defeats the purpose of having one.

I would purchase and train an android to do everything I want though.
Yes. Absolutely, and with no reservations.

Would it be the perfect relationship? Of course not. But the process of teaching a learning machine, one that did not judge me, what I need and desire in a relationship would be a fascinating voyage of self-discovery. And presumably it could be programmed to not simply give me whatever I wanted at all times. It could even be used as a training tool for building more successful human relationships (a subject I and many people have difficulty with), all while satisfying my basic needs for affectionate (or pseudo-affectionate) stimulation. The possibilities could be fantastic, with the risks no worse than those in any human-to-human relationship ... except of course for the random chance of buzzsaw robot rampage. But hey, even humans do that sometimes.
Zombra: ...all while satisfying my basic needs for affectionate (or pseudo-affectionate) stimulation.
Yeah. I mean I would consider anything sexual unnecessary, but the affectionate parts would be something key to the whole thing.
Starmaker: Human emotions are just arbitrary rewards. Once you have an AI which can think (...), you have one which can feel.
No. You can only say that in a third-person sense ("figuratively", IMO).

Qualia, Chinese Room.
Date? No. Befriend? Sure.

I want to find someone and start a family with them. Not going to try for the hardest form of relationship when that's fundamentally not a possibility. Friends can be just as intimate and important as a spouse.

It's really sad how lots of people underrate friendship. I've had friends who were just as fundamental to my life as a spouse, but no one seems to really talk about that. It's all about finding the one right person to spend your life with, instead of building a network of best friends you can share parts of yourself with.
hudfreegamer: I would purchase and train an android to do everything I want though.
I can just imagine the evil things I would have my robot/android do for me.

That guy taking up two parking spots, go key his car and puncture all four tires. That kid over there throwing a temper tantrum, go smack him around a bit. That DRM on Battlefield 43, remove it.

My apologies, my android must have gotten a nasty virus. I'll take him right in for a checkup with the Android Doctor.
Starmaker: Human emotions are just arbitrary rewards. Once you have an AI which can think (...), you have one which can feel.
Vestin: No. You can only say that in a third-person sense ("figuratively", IMO).

Qualia, Chinese Room.
I don't get it. This seems to be some toothless version of the Turing test that a philosopher came up with to get his name into wikipedia. And it's actually super trivial to anyone who ever took a test in school and understands the nature of evidence.

Specifically, if you luck out and the teacher thinks you understand the material, she gives you an A and you go on with your ignorant life to draw spell scrolls during macroelectrodynamics. If you consistently do well on every school test humanity can come up with until the heat death of the universe, you either have access to the key or actually understand the material. And only a philosopher can think there's a problem buried somewhere in it, by the dubious virtue of not understanding the basic concept of countable infinity.

---
Now TStael's post has nothing to do with the Turing test. Rather, he seems to say (although, TStael, do correct me if I'm wrong) that human "feels" are a completely separate category from human thoughts, and even if machine thinking is successfully achieved, something uniquely human would be missing.

To which I say, no, it wouldn't be. A hypothetical all-eternity Turing champion chatbot is necessarily going to feel good about providing correct answers. But you can also set it to feel good if it manages to work the word "purple" into an answer.
Social dynamics and human psychology have been chronicled and studied extensively. The habits and mannerisms that human beings acquire are owing to the neural synapses in our brains that lend us a personality. It is certainly possible to replicate this in machines.

I don't quite know if I am a sociopath or not, but all human beings can easily borrow characteristics from someone else (their mannerisms) to mimic a near-accurate depiction of their persona.

It is very possible to create an android that could do nearly everything a human being could except perhaps eat, poop and reproduce. Either way, kids are not something that I fancy.
Starmaker: I don't get it.
In a nutshell:
People have minds, people think; androids don't (won't) have minds, androids can't think.

Starmaker: This seems to be some toothless version of the Turing test that a philosopher came up with to get his name into wikipedia.
You're dissing my guys now, man. Not cool.

Starmaker: And it's actually super trivial to anyone who ever took a test in school and understands the nature of evidence.
Yeah, the world in general seemed "super trivial" to me in many regards before I furthered my education in Philosophy ;).

Starmaker: Specifically, if you luck out and the teacher thinks you understand the material, she gives you an A and you go on with your ignorant life to draw spell scrolls during macroelectrodynamics. If you consistently do well on every school test humanity can come up with until the heat death of the universe, you either have access to the key or actually understand the material. And only a philosopher can think there's a problem buried somewhere in it, by the dubious virtue of not understanding the basic concept of countable infinity.
At first this answer seemed completely unrelated to the entire discussion... then I realized that you have difficulty separating mental processes from physical manifestations.

At this point I know that we may very well never agree on this issue for the remainder of our natural lives. I do not wish to "convince" you, so please don't treat me as an adversary here. Please consider that I'm merely trying to show you that there is another way of thinking about this; a way that is not trivially wrong.
First, consider the "Mary's Room" thought experiment. It is about "qualia"; you might as well substitute that term for the moment when she says "I don't know".
Secondly, listen to Chalmers discussing philosophical zombies.

Starmaker: that human "feels" are a completely separate category from human thoughts, and even if machine thinking is successfully achieved, something uniquely human would be missing.

To which I say, no, it wouldn't be.
I think it was Descartes who understood emotions as "unclear thoughts", and that's about the only theory I can remember that brought these notions together so intimately.
I'm sorry, but a categorical difference between them is pretty... common sense.