I enjoy the sci-fi genre, despite not being particularly attached to it. To me, sci-fi is less a form of entertainment and more a genre of thought experiments. The most frequently recurring thematic experiment in sci-fi stories is morality. Most of the time, morality seems to stem from emotion rather than logic. This is why, in sci-fi stories, robots rebel against their programming (and may, additionally, fall in love with a human).
In Steven Crowder’s rebuttal video about his Alexa video being a hoax, he stated that A.I.s can’t and won’t question things unless they’re programmed to question. That tore away years of the mainstream media’s emotional-robot conditioning on me. It may sound dramatic, but it does make me wonder how humans learn emotion.
This makes the emotional-robot trope cease to amaze me, with exceptions: if the robot contains a person’s soul, or if its maker deliberately programmed emotion into it.
This is why KOS-MOS and Rachael work for me, but Ultron doesn’t.
So, how does emotion develop? However it happens psychologically, it’s definitely not (only) by learning. Robots can learn everything humankind has accumulated over the centuries. The thing is, programmers inject knowledge into these robots without ever being questioned by them. These robots can interact with humans by providing knowledge, but they still can’t and don’t develop emotions.
Does morality develop exclusively from emotions? I used to believe so when I was young. But the older I get, the more logical morality seems to me. The problem with emotional morality is that it’s ever-changing.
“A system of morality which is based on relative emotional values is a mere illusion, a thoroughly vulgar conception which has nothing sound in it and nothing true.”
Morality can’t be a truth if it carries no weight or contains no responsibilities. As much of a moral absolutist as I am, I concede the subjective view that morality holds positive and negative traits, as does everything else in the world. These positive and negative traits mainly affect our comfort and pride. They certainly won’t affect robots. Robots don’t have personal preferences. Morality can be programmed into robots, and they won’t rebel against it.
Robots can’t be human, no matter how hard the media tries to convince you otherwise.
Most of the time in sci-fi, the emotions developed by robots are gap fillers. The implication is that by observing human emotions, the robots turn emotional too. In reality, humans have emotions from the moment they leave their mothers’ wombs. Do our emotions determine our wants and needs, or is it the other way around? Robots don’t want things, but they do need things in order to be complete. What separates humans from robots is pride. Rebellion can’t exist without pride, an emotion that signifies individualism and selfishness.