To me, sci-fi is a thought-experiment genre. The thematic experiment that sci-fi stories repeat most often is about morality. Morality is generally assumed to stem from emotion rather than logic, which is why, in sci-fi stories, a robot that develops emotions is technically rebelling against its programming (a form of logic).
In Steven Crowder’s rebuttal video, responding to claims that his Alexa video was a hoax, he stated that A.I.s can’t and won’t question back unless they’re programmed to question. That tore down years of conditioning by mainstream media’s idea of the emotionally programmed robot. It may sound dramatic, but it does make me think about how humans learn emotion.
The emotional-robot trope has now ceased to amaze me, with two exceptions: if the robot contains the soul of a person, or if the creator deliberately programmed emotion into the robot.
This is why KOS-MOS and Rachael work for me, but Ultron doesn’t.
How does emotion develop? However it happens psychologically, it’s definitely not (only) by learning. Robots can learn everything that humankind has accumulated over centuries. The thing is, programmers inject knowledge into these robots, and the robots never question it. These robots can interact with humans by providing knowledge, but they still won’t develop emotions.
Does morality develop exclusively from emotions? I used to believe this when I was young, but the older I get, the more logical morality seems to be, based on my own searching. This may sound weird at first, but if you consider the consequences behind any set of warnings or rules, it does make sense. For example: why shouldn’t we take drugs? Because they will harm us and negatively affect the people around us; they will threaten our relationships and our performance at work. And so on, the list continues.
The problem with relying on empathy alone as a basis for morality is that it wobbles from time to time. It has no objective foundation beneath it.
“A system of morality which is based on relative emotional values is a mere illusion, a thoroughly vulgar conception which has nothing sound in it and nothing true.” (Socrates)
Morality can’t be a truth, a reality, if it carries no weight or responsibility. Even if you’re a moral absolutist, it still holds under a subjective view that morality has “positive” and “negative” traits, as with everything else in the world. Both of these traits mainly affect our comfort and our pride. They certainly won’t affect robots. Robots don’t have personal preferences. Morality can be programmed into a robot, and the robot won’t push back against it.
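To make that last point concrete, here is a minimal sketch in Python (the rule names are hypothetical, invented purely for illustration): a programmed moral constraint is just a branch, and there is nothing in the program, no preference and no pride, that could push back against it.

```python
# A minimal sketch of "programmed morality". The rules below are
# hypothetical, made up for illustration; no real robot is implied.
# The point: a hard-coded constraint is just a branch, and the program
# holds no preference about it, so nothing in it can rebel.

FORBIDDEN = {"harm a human", "deceive the user"}

def choose_action(candidates):
    """Return the first candidate action the programmed rule set allows."""
    for action in candidates:
        if action not in FORBIDDEN:
            return action
    # No allowed action: the program simply falls through.
    # No frustration, no pride, no rebellion -- just a return value.
    return "do nothing"

print(choose_action(["harm a human", "fetch coffee"]))  # -> fetch coffee
```

The robot in this sketch will follow the rule set forever, exactly as written, which is the whole point: rebellion would require it to prefer something over the rule.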
Most of the time in sci-fi stories, the emotions robots develop are gap fillers. The usual case goes like this: by observing human emotions, the robots grow emotional too. In reality, humans have emotions from the moment they come out of their mothers’ wombs. Do our emotions determine our wants and needs, or is it the other way around? Robots don’t have things they want, but they do need things in order to be complete or to improve their performance.
Ultimately, what separates humans from robots is pride. Rebellion can’t exist without pride, an emotion that signifies individualism and selfishness.