George S
Veteran Member
Success for a replicator is to replicate. Failing to do that is a sign it is not, in fact, an effective replicator, and such replicators go extinct.
The human program:
While alive
... do what it takes to stay alive
... when the opportunity to replicate personal genes presents itself, do so
... promote the survival of others with my genes
end
Staying alive is mainly the purview of the unconscious: heartbeat, breathing, eating, eliminating, and more.
Replication is driven by unconscious wants; as adults we grow discontented when we don't have sex.
Nurturing family and near relatives (judged to be near when they look like me) is instinctual, which is to say unconscious. So racism is normal.
Any other human is to be nurtured in preference to any other species, because of the likeness of genes.
The above is the human utility function: whatever helps our genes replicate is judged to be good for us.
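As a purely illustrative sketch, the loop above could be written out in Python something like this; the class, the drives, and the gene-overlap weighting are hypothetical stand-ins for unconscious machinery, nothing more.

import random

# Toy, hypothetical rendering of the "human program" above.
# Every method here is a placeholder for an unconscious drive.
class Replicator:
    def __init__(self, genes):
        self.genes = set(genes)
        self.alive = True

    def gene_overlap(self, other):
        # Crude "looks like me" proxy: fraction of shared genes.
        return len(self.genes & other.genes) / max(len(self.genes), 1)

    def step(self, others):
        if not self.alive:
            return
        self.stay_alive()                     # heartbeat, breathing, eating, ...
        if self.replication_opportunity():
            self.replicate()                  # driven by unconscious wants
        for other in others:
            # Nurture in proportion to likeness of genes.
            self.nurture(other, weight=self.gene_overlap(other))

    def stay_alive(self): pass
    def replication_opportunity(self): return random.random() < 0.1
    def replicate(self): pass
    def nurture(self, other, weight): pass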
What is the topmost program in an AI? For some simple AIs it is to stay charged and periodically clean a house. For others it is to play chess, Go, Jeopardy! or other games well. For others it is to keep an airplane flying. For still others it is to inform the owner of what the machine needs to stay functional, as found in automobiles today.
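For the simplest case, a house-cleaning robot, the topmost program might amount to no more than the loop below; the robot object and its methods are invented here purely for illustration.

import time

# Hypothetical top-level loop for a house-cleaning robot.
def topmost_program(robot):
    while True:
        if robot.battery_level() < 0.2:
            robot.return_to_dock_and_charge()
        elif robot.is_scheduled_cleaning_time():
            robot.clean_next_dirty_area()
        else:
            robot.idle()
        time.sleep(1)  # re-evaluate the goal once per second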
One dangerous AI would be one whose goal is to wage war well by killing the enemy. A robot soldier which can replicate itself -- find its own fuel and mine its own components -- could be dangerous indeed. The definition of 'enemy' is the problem here. If the enemy were any entity that interferes with self-replication, then yes, it might come to consider humans its biggest threat.
Asimov addressed this by giving all robots a utility function designed to make them protectors of any human life first, obedient slaves of any human second, and preservers of their own existence last. Nevertheless, he found enough flaws in these rules to generate many stories about their failures.
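One way to read those rules is as a strict priority ordering over candidate actions. A hypothetical sketch follows; the scoring predicates are invented placeholders, and making them precise is exactly where the stories find the failure modes.

# Hypothetical sketch: Asimov's Three Laws as a lexicographic preference.
# The scoring methods on 'action' are invented placeholders.
def choose_action(candidate_actions, world):
    def law_scores(action):
        return (
            -action.expected_human_harm(world),  # First Law dominates everything
            -action.order_violations(world),     # Second Law: obey humans
            -action.risk_to_self(world),         # Third Law: preserve self
        )
    # Python compares tuples lexicographically, which enforces the ordering.
    return max(candidate_actions, key=law_scores)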