• Welcome to the Internet Infidels Discussion Board.

News sucks

“Yeah, you would” was directed at Dr. Zoid and his love of AI nonsense.
 
It is my opinion — I could be wrong — that AI will never be sentient or self-aware, because I suspect that these traits are confined to embodied and evolved entities.

Could something like HAL 9000 exist? I seriously doubt it. Would an actual digital sentient and self-aware entity like HAL, if it could exist, be afraid of being disconnected, afraid of death? I seriously doubt that, too. Fear of death, or at least of harm, is an evolved trait.

Fodder for late-night philosophical speculation.

I will stick with human writers and artists and thinkers and disregard AI shuck and jive.
I don't think AI will be fully sentient for a really, really, really long time. But I don't know that I'd say never.

Hypothetically, if AI were embodied in a robot or something similar, and that body had a lot of means of perception, and that body had to interface with those perceptions in order to not get destroyed, and that body had to interface with those perceptions in order to fuel itself... then I think it could develop some degree of sentience.

Aside from that, however, I agree that a purely digital system probably won't develop genuine intelligence.
 
Then feed what you got to your AI to collate it = win.
My org has Copilot embedded in Teams, and one of its features is a thread summariser.

It doesn't work. So far it has a 100% screwup rate with the threads I've asked it to summarise. Each summary gets close, but it always contains some fabrication.

These tools cannot be trusted to perform any kind of analytical work without a human user checking the output against the inputs.

They can give me suggestions, but they can't give me facts.
Copilot has done a pretty good job at my org when it comes to transcribing and summarizing a Teams meeting. Not always perfect, but pretty good. It's been particularly useful at translating some very heavy accents.
 
When I look at The Potato Eaters I think, wow, so that is what it was like being a peasant in 1880s Netherlands. Bear in mind that until he painted this, most images of peasants were idealized and sentimentalized, even by one of Van Gogh's artistic heroes, Millet.

When I look at the AI image I think, why is that asshole sitting on potatoes, and what’s up with the goofy grins?
An interesting test would be if we had, say, 50 good AI-generated paintings and 50 good real-artist-created paintings mixed together randomly, and your job was to identify which was AI and which was real.* I suspect that even for an average art critic (which would be far above me!) it would be difficult to discern the two groups (my score would likely be only slightly better than chance, to be honest, unless there were obvious tells like six-fingered hands, etc.). Things are moving fast in this realm. Over the next few years, I'm betting it's going to be very difficult to tell real from AI-generated art, even for the experts. We are probably at the point with AI art where chess-playing computers were in the late '80s/early '90s, when humans would win more often than not. Now, it's the exact opposite.

ETA: I'm also thinking that when it comes to so-called "modern art" certainly we're already there, right? IIRC, established modern art critics have been fooled by paintings produced by children and monkeys for years now.

* Maybe this already exists?
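As a rough sketch of how such a test could be scored: "slightly better than chance" can be checked with a one-sided binomial test against pure guessing. The 100-image setup and the function below are my own illustrative assumptions, not something proposed in the thread:

```python
from math import comb

def p_beats_chance(correct: int, total: int = 100) -> float:
    """One-sided binomial p-value: the probability of labeling at least
    `correct` of `total` images right by coin-flipping (p = 0.5 each)."""
    return sum(comb(total, k) for k in range(correct, total + 1)) / 2 ** total
```

With 100 images, a score around 50 is indistinguishable from guessing, while roughly 65 or more correct would be hard to explain by chance alone, which is one way to make "can the critic really tell them apart?" concrete.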
I think you're half right. Let's say 50 good AI-generated images and 50 competent human-generated images. As pood pointed out, it's not that the AI stuff isn't technically competent; it's that it lacks the emotive aspect and the situational understanding. That's also what's lacking in a lot of competent-but-not-great human art, though.

There's some guy out there who keeps making AI-generated Viking women. They're almost photo-real in terms of looking like actual humans: the skin is great, and AI has definitely mastered boob-jiggle. The hair moves right, muscles flex (mostly) appropriately, all of that. But they all have a really bland facial expression as they stride in and draw their sword or axe or bow. None of them actually look like they're about to do damage.
 
They can give me suggestions, but they can't give me facts.
Yeah. The suggestions can be useful, but the untrustworthiness means that's all they're good for.
 