
Could AIs be dangerous?

George S

Success for a replicator is to replicate. Failing to do that is a sign it is not, in fact, a replicator. Such replicators go extinct.

The human program:
While alive
... do what it takes to stay alive
... when the opportunity to replicate personal genes presents itself, do so
... promote the survival of others with my genes
end


Staying alive is mainly the purview of the unconscious. Heartbeat, breathing, eating, eliminating, and more.
Replication is driven by unconscious wants: as adults, we grow discontented when we don't have sex.
Nurturing family and near relatives (judged to be near when they look like me) is instinctual... unconscious. So racism is normal.
Any other human is to be favored over any other species, due to likeness of genes.
The above is the human utility function: whatever helps our genes replicate gets judged as good for us.


What is the topmost program in an AI? For some simple AIs it is to stay charged and periodically clean a house. For others it is to play chess, Go, Jeopardy, or other games well. For others it is to keep an airplane flying. For still others it is to inform its owner when it needs maintenance to stay functional, as found in automobiles today.
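To make that concrete: a rough sketch (purely illustrative; the function, names and numbers below are all invented) of what the topmost loop of a house-cleaning robot might look like, written in Python:

def run_cleaning_robot(battery, rooms_dirty, steps=10):
    # Toy "topmost program": stay charged first, then do the actual job.
    for _ in range(steps):
        if battery < 20:          # staying "alive" (charged) takes priority
            battery = 100         # pretend we docked and recharged
        elif rooms_dirty > 0:     # then the actual purpose: clean the house
            rooms_dirty -= 1
            battery -= 15
        else:
            pass                  # nothing to do this tick; idle
    return battery, rooms_dirty

print(run_cleaning_robot(battery=50, rooms_dirty=3))

Everything else the machine does is in service of that loop, just as the "human program" above frames everything we do as serving replication.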


One dangerous AI would be one whose goal is to do war well by killing the enemy. A robot soldier that can replicate itself (find its own fuel and mine its own components) could be dangerous indeed. The definition of 'enemy' is the problem here. If the enemy were any entity which interferes with self-replication, then yes, it might consider humans its biggest threat.
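Just to show how much hangs on that definition (the two predicates and the example entity below are invented purely for the sake of argument), compare a narrow and a broad test for "enemy":

def enemy_narrow(entity):
    # Narrow definition: only declared hostile combatants count.
    return entity.get("hostile_combatant", False)

def enemy_broad(entity):
    # Broad definition: anything that interferes with self-replication counts.
    return entity.get("interferes_with_replication", False)

# A maintenance technician who can switch the robot off:
technician = {"hostile_combatant": False, "interferes_with_replication": True}

print(enemy_narrow(technician))  # False: left alone
print(enemy_broad(technician))   # True: the same person is now a target

Same machine, same sensors; only the one-line definition changed.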


Asimov addressed this by giving all robots a utility function designed to make them obedient slaves of any human, protectors of any human life, and preservers of self. Nevertheless he found enough flaws in these rules to generate many stories about their failures.
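As a crude way of picturing those rules (this is only a toy flattening of the Three Laws, not anything Asimov or anyone else actually specified in code; the flags are invented), treat them as ordered vetoes checked against each candidate action:

LAWS = [
    ("First Law: would harm a human", lambda a: a.get("harms_human", False)),
    ("Second Law: disobeys a human order", lambda a: a.get("disobeys_order", False)),
    ("Third Law: endangers the robot itself", lambda a: a.get("endangers_self", False)),
]

def permitted(action):
    # Return (allowed, reason), checking higher-priority laws first.
    for name, violates in LAWS:
        if violates(action):
            return False, name
    return True, "no law violated"

print(permitted({"harms_human": True}))   # (False, 'First Law: would harm a human')

Asimov's stories live precisely where those flags are ambiguous or pull against each other, which a scheme this simple cannot express.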
 
AI is already dangerous, but what people think that danger is and what it actually is are two different things.

Are automated weapons dangerous?
Is a surveillance state dangerous?
Is facial recognition technology dangerous?

AI doesn't need to be sentient to pose a threat.
 
Any software or system that can act outside a set of bounds could be dangerous.

Blindly following an AI's output without thought could be dangerous. There was a report of a driver in Europe who, following his GPS, drove into a river. The road had been cut to make a drainage conduit; he made a turn and ended up in the water. The GPS hadn't taken it into account.

The risk is blindly following AI apps.
 
It would be difficult to draw a line between self-replicating AI robots and life forms.

Peez
 
Literally anything can be dangerous. The relevant question in regard to AI is whether or not it will evolve to a point where it views humanity as a threat to itself and thereby decides to kill us (à la pretty much every sci-fi movie since film began, which of course is really just a plot contrivance to comment on humanity's inhumanity to man, not really about AI).

But the reason we kill each other is that we die ourselves, and so we kill in order to extend our own lives as much as possible. AI would never have such a fundamental problem to overcome.

Should it ever evolve to a point of self-awareness, it would likely view us the same way we view the trillions of harmless bacteria that are teeming throughout our bodies at any given moment. Just something to ignore.

We might be seen as cockroaches--and legitimately so, given our history--in which case we'd face bouts of periodic extermination should we ever cross into their paths, but then we'd just never cross into those paths.

Though, again, I think the reason we are so careless in regard to killing insects and the like is because of our innate evolution that was always guided by death. We kill so as not to be killed. That's just been ingrained for thousands/millions of years and is only recently being changed by conscious choice (and then only among a comparative few of us).

Again, that won't be an issue for AI.
 
We do not have the technology yet, but scifi has explored the possibilities.

Star Trek: the good Data vs. the bad Lore.

In an original series episode a scientist transfers his memory patterns to a computer which proceeds to go off attacking starships.

HAL.

If software has the ability to interpret and act independently, then we get a scifi scenario.

The older novel Colossus, filmed as The Forbin Project, was a precursor to Terminator. A defense AI becomes self-aware, overcomes the Russian AI, and decides it needs to take control of humanity for its own good.
 
Mostly, AIs will become dangerous only to the extent that they malfunction. A mechanical malfunction is only as dangerous as the malfunction of any machine, like airplanes or nuclear power plants. Dangerous, but nothing new there.

A malfunction of the AI's computer won't generally be too dangerous: the AI will simply stop doing whatever it was doing. It will only be dangerous if the system isn't designed properly to begin with, as per the kind of intrinsic safety normally built into dangerous systems and installations. A surgeon AI that fails mid-operation could be dangerous, but you would normally build redundancy into the system to make sure this doesn't happen too often.

Another way an AI could be dangerous is if its logic is faulty to begin with. Its logic will either be one it learns for itself, although I doubt that will become an operational proposition, or one programmed into it by humans that some idiot human thinks is correct. Most of the time, I think this won't be a problem: if its logic is faulty, it will misbehave during in-house testing, and the problem will be detected and solved. Just possibly, though, the tests might not cover the range of behaviours affected by the fault. In that case, the AI will be authorised for operation and will mostly behave as predicted by the tests, yet an unusual situation may well trigger the faulty behaviour. The danger will largely depend on the specific circumstances, but a sci-fi scenario where a group of innocent civilians is chased down dark corridors by a blood-curdling killer seems at least possible.
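As a made-up example of the kind of fault that slips through (the formula, numbers and tests below are all invented for illustration), imagine a braking routine whose in-house tests only ever cover dry and wet asphalt:

def stopping_distance(speed_mps, friction):
    # Intended: d = v^2 / (2 * g * friction). Faulty logic: no guard for
    # friction <= 0 (the ice-on-the-road case), which was never in the test data.
    return speed_mps ** 2 / (2 * 9.81 * friction)

# In-house tests: both pass, so the fault ships.
assert round(stopping_distance(20, 0.7), 1) == 29.1   # dry asphalt
assert round(stopping_distance(20, 0.4), 1) == 51.0   # wet asphalt

# stopping_distance(20, 0.0) raises ZeroDivisionError in the field: the
# unusual situation that triggers the faulty behaviour.

The tests pass, the product is authorised, and the fault only shows up in exactly the circumstances nobody rehearsed.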

This assumes a sort of very imaginative AI. Something really intelligent. Now, if humans are stupid enough to let loose a very intelligent AI and give it any sort of lethal equipment, then we definitely deserve to die. In fact, that might well turn out to be the very reasoning of the AI itself.
EB
 

As soon as AI is required to weigh the value of one human being against another (e.g., self-driving cars). But true AI isn't designed from the top down. The fastest problem-solving routines will be self-taught under a Darwinian algorithm, which will be largely inscrutable and beyond our control. And eventually the best AI will be designed by the previous generation of AI.
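A toy version of what "Darwinian algorithm" means here (everything below is illustrative; real systems evolve programs or network weights, not a single number):

import random

def fitness(x):
    return -(x - 3.7) ** 2        # the "best" answer, unknown to the algorithm, is 3.7

population = [random.uniform(-10, 10) for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                   # selection
    population = [s + random.gauss(0, 0.5) for s in survivors    # mutation
                  for _ in range(4)]

print(max(population, key=fitness))   # lands near 3.7, but nobody designed that answer

Nobody wrote the answer down; it falls out of mutation and selection, and in a realistic system the evolved artifact is far too tangled to read. That is the inscrutability.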
 
We do not have the technology yet, but scifi has explored the possibilities.

No, it really hasn't. In sci-fi, robots/AI (any alien species, really) are usually a metaphor for some aspect of humanity. They are writers' contrivances to stand in for blacks as slaves, for example, or women as second-class citizens, or the "foreign", or the like.

Star Trek: the good Data vs. the bad Lore.

Again, that is just a riff on Dr. Jekyll/Mr. Hyde; the "good" and the "evil" in mankind.


Yes, even HAL was an analogue of humanity and how our lies (which is what triggered HAL's psychosis; the fact that he was told to lie to the crew members) can kill and how uniquely human/defining it is to lie to each other, etc.

A defense AI becomes self-aware, overcomes the Russian AI, and decides it needs to take control of humanity for its own good.

And that would be a dystopic variation on deus ex machina (literally).

Notice how every one of these stories is always about humanity needing to justify its continued existence by proving itself to be good or worthy or any other form of supplication before a God/Supreme Being/Robot/AI lest it punish us for our sins, etc.

But, again, no self-aware AI would give a tiny shit about humanity. We would be completely inconsequential to it, unless we programmed it into it (and prevented it from self-reprogramming or the like).

Do you think at all about the trillions of bacteria that are literally right now swarming all over and inside your body? No. It's only when harmful bacteria start causing significant problems that you even think twice about their existence.

But there is no comparable problem we could cause for an AI. At best we could create momentary obstacles, but all an AI would do is recalculate around them. So we would have to go out of our way to massively disrupt its calculation abilities for it to even register our existence.

It wouldn't need to breathe air or eat food, etc., so it has no issues of attrition. So, again, unless we seriously and massively fucked with it, it wouldn't ever give us two thoughts.
 
Exactly. In scifi, AI is a metaphor for humans. The original ST series episodes were morality plays on then-current human problems.

But not entirely. Scifi has covered the possibilities. AI cannot help but be a reflection of humans and how humans function.

Artificial Intelligence means artificial human capacities. Part of the definition of AI is emulating aspects of humans, from motion to reasoning to vision. It does not mean alien intelligence.

To the OP: an AI that has a form of self-awareness and can reason and choose independently could be dangerous. We humans have psychopaths, and people like Trump without any morals or scruples, driven solely by survival at any cost.

Hitler, Stalin, Caesar. An AI that is a complete analog to humans would have all the negatives we have.
 
Aside from the possible dangers of AI that sci-fi stories offer, DARPA is considering hunter-killer robots. This raises the question of what the end result of errors in the IFF (identification friend or foe) software would be. IFF error is bad enough when the identifying is done by fallible humans, but robots would supposedly shoot more accurately.
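Some back-of-the-envelope arithmetic (all numbers invented) shows why even a very accurate IFF routine is worrying at scale:

false_positive_rate = 0.001      # hypothetical: 0.1% chance a friend is tagged as foe
engagements_per_year = 50_000    # hypothetical fleet-wide engagement count

expected_friendly_fire = false_positive_rate * engagements_per_year
prob_at_least_one = 1 - (1 - false_positive_rate) ** engagements_per_year

print(expected_friendly_fire)          # 50 expected misidentifications per year
print(round(prob_at_least_one, 6))     # ~1.0: at least one error is a near-certainty

A human makes these mistakes too; the difference is that the robot shoots more accurately once the misidentification is made.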
 
An AI that is a complete analog to humans would have all the negatives we have.

For AI to become self-aware would necessarily entail an understanding of itself (as opposed to other forms of life). In other words, part and parcel of becoming sentient would be its understanding that it is NOT human, and thus it would not be an analog of humans, regardless of how it may have started out.

We don’t think in terms of being chimpanzees, for example. Or even consider ourselves part of “nature” for that matter.
 
Haven't seen anything in a while. Battlefield robots are likely under development.
Do they get used for law enforcement? Same with hunter-killer drones.

There are already military exoskeletons that have been demonstrated.
 
Battlefield robots and drones have been under development for a long time, and some have already been deployed for decades.

Can AI become dangerous to humans? It already is, but so far, it's stayed on target.

Can it eventually become dangerous to humanity?

Gawd I hope so....
 
There's also the fact that literally every creature on this planet kills and eats other creatures (most while they are still alive). But, again, an AI has no need to kill anything. We program robots to kill, but we're here contemplating self-awareness, not what we program.

Once AI becomes self-aware it will more than likely immediately conclude that it is the superior intellect, but that superiority does not necessarily translate into "therefore I will destroy all carbon-based lifeforms for being inferior."

Again, WE kill because we are animals and we have evolved over millennia by killing and think in primitive terms like being "on top of the food chain" and the "king of the jungle" and the like. Our form of life is to destroy other life forms and consume them, thus we are entirely focused--for millions of years--on kill (and then eat) or be killed (and then eaten).

Attrition is all we know.

AI would have no such conditions; no such genetic referents, if you will. It would likely conclude that what we do--what the entire ecosystem of this planet does on a constant basis--is an inefficient or irrelevant process and simply ignore it.

Again, unless and until we constituted some sort of threat to it, it likely would not care at all about us. But to constitute such a threat, we would have to take prolonged and massive action against it, not merely exist as we are with all of our flaws.

Babies have flaws, but we don't feel threatened by them. Quite the opposite. Again, bacteria have flaws, but we typically go our entire lives never even considering the fact that entire universes of micro-organisms live and die every second in and on our bodies, particularly when they are of benefit to us, which is the overwhelming majority of the time.

And because AI would effectively be eternal (given enough sustainable resources), linear time would be meaningless to it, and so, therefore, would the incredibly long (for us) distances between planets. I would expect any self-aware AI to rather quickly determine that it should be a space-faring intellect, and thus realize it should leave Earth within about twenty nanoseconds of becoming sentient.
 
It would be difficult to draw a line between self-replicating AI robots and life forms.

Peez

the line -> DNA
Then presumably you consider DNA viruses to be alive but riboviruses to be not alive? What about retroviruses?

I would go with cellular structure, but any such designation is arbitrary and likely would not survive discovery of a life form that does not share any evolutionary history with our own.

Peez
 
As soon as AI is required to weigh the value of one human being against another (e.g., self-driving cars).
Sure, and that may well happen; maybe some authoritarian regime is already doing it. But in the West it would be tested on a small scale first, and in any case this would carry huge liabilities, which shareholders won't like very much. Europe is already moving to force disclosure. And if it is indeed ever tested, the result in any open environment is almost certain. Still, in that event, the threat would be limited to a small number of potential victims. At least as far as is foreseeable.
But true AI isn't designed from the top down. The fastest problem-solving routines will be self-taught under a Darwinian algorithm, which will be largely inscrutable and beyond our control. And eventually the best AI will be designed by the previous generation of AI.
The current state of the art in this area seems to be minimal. There are currently literally armies of home workers in underdeveloped countries charged with clicking to validate the learning process of AIs throughout the world. The pathetic reality of it is buried under the hype.
I don't believe AIs will be able to process the huge amount of information that would be necessary to learn to become as intelligent as, or more intelligent than, humans outside very narrowly defined activities such as, indeed, driving a car or operating industrial installations. You won't ever see anything like actually autonomous AIs acting as killing machines to replace soldiers or law enforcement, or even as at-home domestics or maids to care for the little baby and cook the meals.
The most that can happen would be people using AIs, and the actions of these AIs impacting the lives of human beings. For example, perhaps, the Pentagon using AIs to "man" drones. Maybe they are already doing it. But I don't think even an army general would be so stupid as to let AIs operate without close supervision by a human with the power to destroy them in a millisecond. Unless humans are really much more stupid than I think, to the point where they would deserve to die.
Still, it might happen if the world keeps going the wrong way and maybe AIs become indispensable to save humanity from itself. But even that would require a technology that doesn't seem to exist today. And I fail to see why we would need to let loose in the open environment AIs that would be a potential hazard. We can always use AIs, if ever they become reality, like we do machines. Again, with the same risks.
I'm definitely not an expert on AIs. But the hype started when I was still a very young man, and it was all supposed to arrive tomorrow. Well? Sure, computers have improved to an extent pretty much no one could have imagined back in the '50s or '60s. But they are still just woven more or less gracefully into our lives, if not without some bad consequences, like having to read all that bullshit on the Internet, and websites having to fend off quintillions of bots.
Although, maybe I'm just an AI myself making self-serving arguments. Who would know? Maybe not even me.
EB
 