Welcome to the Internet Infidels Discussion Board.

Eliminating Qualia

?

Change machines don't know when they are dealing with a one dollar bill and when they are dealing with a five?

Have you ever actually seen one?

Change machines don't know anything. That's the point.

I'll post the paper again too:

https://ase.tufts.edu/cogstud/dennet...rs/evolerr.htm

Because, just perhaps, people have been worrying about this for a very long time before you solved it all.

I'll try to read that paper later.

Just on Um's point, I'm ok with agreeing with it, because when he says 'recognise' he doesn't mean 'know' in the sense of consciousness. I think it could be ok to say that a machine or computer can recognise (and possibly 'know' if we are careful with what we mean by that word) and therefore process, non-consciously. My computer won't let me use it until it 'recognises' my password.
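That kind of non-conscious "recognition" can be sketched in a few lines. This is a minimal illustration, with an invented password and function name: the machine matches a stored credential without any awareness of what it is matching.

```python
import hashlib

# Hypothetical stored credential: the machine keeps only a hash,
# and "recognises" a password by comparing hashes. Nothing here
# "knows" anything in the conscious sense; it is pure processing.
STORED_HASH = hashlib.sha256(b"hunter2").hexdigest()

def recognises(attempt: str) -> bool:
    """Return True when the attempt matches the stored credential."""
    return hashlib.sha256(attempt.encode()).hexdigest() == STORED_HASH

print(recognises("hunter2"))   # matching input is "recognised"
print(recognises("password"))  # non-matching input is rejected
```

The point the sketch makes is only this: "recognise" here means a mechanical comparison succeeded, not that anything was experienced.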

But in his model, that's all a brain can do. As soon as there's consciousness, in his model, it has, for him, necessarily involved the creation of a mind, and the creation of qualia separately, with the former being the experiencer of the latter.

Now, I don't think there's an easy way to show that that's probably wrong, and I do think it may be impossible to disprove. I can't think of any way it directly contradicts any known or observable phenomenon, for instance.

We can say that it's an unusual model, in the sense of being uncommon nowadays, and we can say that there are other possible models, which are also not easy to show are probably wrong and also impossible (as far as I know) to prove or disprove.

When it comes to comparing models, we can, I think, also say that his is less parsimonious, though that of course is no guarantee of anything. We can also, I think, say that the objection he has raised so far to the plausibility of the more commonly accepted model (with brain as experiencer), namely that it would be a waste of precious energy for the brain to create qualia, fails on several counts. That is not to say there is not another, possibly more robust reason the brain can't be the experiencer.
 
Your "model" is not parsimonious.

Your "model" would be parsimonious only if no conversion to "red" were made.

Because your "model" doesn't need presentations. It just needs recognition. A "self", a mind, needs presentations.

But a "model" where the brain creates both that which can experience "red", the mind, and "red" is as parsimonious as possible.

Since transformations do take place.
 
It was a serious point. Think about it: a fully rational creature would need an internally consistent belief set, would never cut corners, and would get eaten by the first predator using suboptimal but fast heuristics to satisfice. We are, for want of a word, biorational; we loosely approximate rationality to whatever degree aids survival.

All of these:

https://en.m.wikipedia.org/wiki/Cognitive_bias

Are features, not bugs.

I’m not doing rationality here, because my tolerance for trolling has been reached, but Chris Cherniak’s lovely book ‘Minimal Rationality’ is a good primer.

Sure, we're subject to bias and all that, and none of us could possibly function as an entirely rational being, not by a long shot. But, equally, and contrary to what your serious point seems to suggest, I don't think you could possibly give a definition of rationality that wouldn't come down to an essentially macroscopic process, likely one selected for by evolution, i.e. not something possibly perfect like a Platonic Idea or some Fundamental Law of Nature. And I would take the rational thing to do here to be to think of rationality as a pragmatic process, meaning that you need to take into account what seem to be, broadly speaking, the essential facts of our nature. We're flesh and blood stomping the earth in a cloud of dust, not evanescent angels gracefully dancing in the sunset light.

And being rational doesn't mean being always, or even ever, right. And, yes, to be one hundred percent rational would be to be very soon very dead.

Maybe we could nonetheless go just a little bit further and agree that being rational makes a difference in practical terms, that it is very effective at least in specific contexts, and that the first idea about rationality is that you try to use rationality as much as possible as long as it is good for you.

But sure, some bias you have may mean you do too much of it for your own good.
EB

I don’t think we can really take account of our nature, for the simple fact that we still don’t know what it is. We can take account of our folk psychological assumptions, but they are highly suspect. The fact is that we have a neat little hermeneutic circle with beliefs, desires and rationality. We act rationally on our beliefs to bring about our desires. Or so we think. Now define a belief and go find one, post-Putnam, in the head.
 
There is, I think, one more flaw in his model (not that he will agree but no matter) and it is his own insistence that there can be no experience without both an experienced thing and an experiencer. As you, sub, have mentioned, there is a regress problem with this that does not affect the more commonly accepted model (where the brain is the experiencer).

So, if he is aware that he has a mind, or a 'me', where is the experiencer in that case? The 'I' can't be aware of itself, according to his model, where an experience needs something to be experienced and something to do the experiencing.

This does not affect the more commonly accepted model because in it, nothing experiences the brain, so there is no infinite regress.
 
Because your "model" doesn't need presentations. It just needs recognition. A "self", a mind, needs presentations.

No, the more commonly accepted model (it's not mine) does allow for there to be a need for 'presentations'. It's just that they are presentations to the brain.

I have been saying exactly this for several posts, even just today.

But a "model" where the brain creates both that which can experience "red", the mind, and "red" is as parsimonious as possible.

Since transformations do take place.

Whether 'transformations' do take place or not is not the main issue. I've already agreed to run with the idea that they do.

The point is that running with that suggestion, it's your model that has at least one extra transformation. Your brain has to create a mind, and create qualia to present to it, and there has to be a process of transaction/interaction between the two things. More separate 'things' means more transactions/interactions/transformations are needed. It's an almost inevitable consequence of a model that posits more separate interacting 'things'.
 
Your "model" is not parsimonious.

Your "model" would be parsimonious only if no conversion to "red" were made.

Because your "model" doesn't need presentations. It just needs recognition. A "self", a mind, needs presentations.

But a "model" where the brain creates both that which can experience "red", the mind, and "red" is as parsimonious as possible.

Since transformations do take place.

Prove it.

Objectively.
 
It would be interesting to ask a few further questions about this 'triunalistic' model.

First, does it say that there can be no conscious experience at all unless there is a mind created first? Or do some 'basic' ones go via a different, primitive route or process? What about pain? The raw, bare experience of accidentally stubbing one's toe does not seem to need to have thoughts about the pain associated with it before or during the experience (especially since the pain is happening in the head, not in the toe, but automatically feels like it's in the toe without a rerouting instruction from an 'I').

Second, and following on from that, what about other animals? Do all animals who may (it seems) experience pain have to have a sense of self? Most don't pass a self-recognition test, for example, though this is only a vague indicator.

It seems to me that these are not easy things for a triunalist model to explain.
 
Your "model" is not parsimonious.

Your "model" would be parsimonious only if no conversion to "red" were made.

Because your "model" doesn't need presentations. It just needs recognition. A "self", a mind, needs presentations.

But a "model" where the brain creates both that which can experience "red", the mind, and "red" is as parsimonious as possible.

Since transformations do take place.

Prove it.

Objectively.

You use that word loosely.

Logical arguments are not objective evidence but it takes logic to refute them.

Not name-dropping, and not claims that rational arguments exist when they don't.
 
It would be interesting to ask a few further questions about this 'triunalistic' model.

It's a bare-bones model that illustrates the concept of "experience". As parsimonious as possible.

That which can experience. Call it the "mind" or the "self" or whatever.

The mind says "I" will do that and "I" will do the other and it is not just referring to the brain.

And that which can be experienced. Call them sensations and thoughts and emotions, etc.

And some mechanism to make them both, since they can't just happen by some miracle.
 
It would be interesting to ask a few further questions about this 'triunalistic' model.

It's a bare-bones model that illustrates the concept of "experience". As parsimonious as possible.

No, any model with more separate 'things' almost inevitably involves more interactions, transactions and transformations. Yours has more 'things'. And an infinite regress issue.

That which can experience. Call it the "mind" or the "self" or whatever.

That which can be experienced. Call them sensations and thoughts and emotions, etc.

And some mechanism to make them both, since they can't just happen by some miracle.

No one is suggesting a miracle. It is not known how consciousness is created, but that is also true for the creations and interactions in your model. So on that front, it's even-stevens. Except that your model has more processes and interactions, because it has more 'things'.

Also, if the mind is an experiencer, what does it do the experiencing with? And what is it made of? More questions than the standard models.
 
No, any model with more separate 'things' almost inevitably involves more interactions, transactions and transformations. Yours has more 'things'. And an infinite regress issue.

To have something that can experience requires some kind of activity.

To have things it can experience also requires some kind of activity.

You cannot eliminate the need for some kind of activity and some thing to generate that activity.

You have no logic to refute this.

And infinite regress is absurd.

You have one mind experiencing red. One thing with many but not infinite variations.
 
The brain as experiencer has several things going for it that the mind doesn't. It is more of a 'thing' in that it can be observed, and it can be active, and its activity can be measured. Also, we know better what it's made of. Sure, of course there's no way to say for definite that it can experience, but why can't it?

Your objection about energy conservation doesn't nail it.
 
No, any model with more separate 'things' almost inevitably involves more interactions, transactions and transformations. Yours has more 'things'. And an infinite regress issue.

To have something that can experience requires some kind of activity.

Ok. That's suddenly different from 'an experience requires an experiencer', but no matter. It also doesn't address the points about parsimony, where a model with more interacting 'things' necessarily has more interactions.

To have things it can experience also requires some kind of activity.

Possibly. But the mind and qualia could be of the same nature, versions or different arrangements of the same 'thing'. Parsimony.

Plus, as far as I know, they've never been encountered separately. A bit like Daniel O'Donnell and Jeffrey Donaldson. Or, if you prefer, Borat and Sacha Baron Cohen. Hm. :)

You cannot eliminate the need for some kind of activity and some thing to generate that activity.

I'm not trying to eliminate the need for activity or something to generate it.

You have no logic to refute this.

I'm not trying to.

And infinite regress is absurd.

Well, if it is, it's your model which has the problem with absurdity, since it's only your model that has an infinite regress of 'experiencers'.

You didn't yet get around to answering my (repeated) question about how you know you have a mind, or an 'I'. What is the experiencer of the 'I'?

And, regress in 3....2.....1...

You have one mind experiencing red. One thing with many but not infinite variations.

However 'simple' that was meant to sound, it isn't a description of your model. Your model has brain (recognizer and generator), mind (created experiencer), and qualia (created experienced things).

And I don't have a mind experiencing. You do. I have a brain experiencing (something that, unlike a mind, we at least objectively know exists and can be measurably active in different modes, with very strong correlations to consciousness for some of them, and nothing experiences the brain, so no infinite regress 'absurdity'). So do the vast majority of relevant experts, who know a lot more about these things than I do, and certainly than you do, even if no one has all the answers. In fact, I don't know of anyone, other than you, who uses your model. Do you?

Please don't cite Alan Watts, because he's basically a woo-head who thinks he can prove you don't exist.
 
The brain as experiencer has several things going for it that the mind doesn't.

It has many problems.

The mind is not aware of many things the brain is doing.

The mind sleeps. But the brain is very active.

Give a person LSD and their brain does not change. Its activity changes.

Clearly a separation between mind and brain exists.
 
Your "model" is not parsimonious.

Your "model" would be parsimonious only if no conversion to "red" were made.

Because your "model" doesn't need presentations. It just needs recognition. A "self", a mind, needs presentations.

But a "model" where the brain creates both that which can experience "red", the mind, and "red" is as parsimonious as possible.

Since transformations do take place.

Prove it.

Objectively.

You use that word loosely.

Logical arguments are not objective evidence but it takes logic to refute them.

Not name-dropping, and not claims that rational arguments exist when they don't.

You made an explicit claim. I asked you to prove it. You have not. Insulting another person proves nothing.

All I see is a promising discussion dragged into the gutter. Again.
 
Colour is quite obviously a brain representation of wavelength (EMR): input via the eyes, optic nerve/ion flow/impulses, visual cortex, distribution, conscious experience, recognition, etc., etc.

Color is something brains create outright, something that has absolutely nothing to do with the stimulation. EM waves do not have a color. Objects do not have color as a property. They have reflectance as a property.

But I did not say that EM waves have colour. I said colour is the brain's representation of various wavelengths of EMR. If you can see the distinction.
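The distinction can be made concrete with a toy mapping from wavelength to colour label. The band boundaries below are rough conventional values, not authoritative: the point is that the wave only has a wavelength, while the label (the "representation") is something imposed on it.

```python
# Rough, conventional wavelength bands (nm) for the visible spectrum.
# The EM wave itself has only a wavelength; the colour label is
# a representation assigned to it, not a property of the wave.
BANDS = [
    ((380, 450), "violet"),
    ((450, 495), "blue"),
    ((495, 570), "green"),
    ((570, 590), "yellow"),
    ((590, 620), "orange"),
    ((620, 750), "red"),
]

def colour_label(wavelength_nm: float) -> str:
    """Map a wavelength onto a conventional colour label."""
    for (lo, hi), label in BANDS:
        if lo <= wavelength_nm < hi:
            return label
    return "outside visible range"

print(colour_label(650))  # a ~650 nm wave gets labelled "red"
print(colour_label(100))  # ultraviolet gets no colour label at all
```

Of course, the brain's actual representation is an experience rather than a string; the sketch only illustrates the "representation of wavelength" relation, not the hard problem.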
 
The brain as experiencer has several things going for it that the mind doesn't.

It has many problems.

The mind is not aware of many things the brain is doing.

The mind sleeps. But the brain is very active.

Give a person LSD and their brain does not change. Its activity changes.

Clearly a separation between mind and brain exists.



A mind is a brain's means of interacting with the world.

Evolutionary biology and evolutionary psychology:


Principle 1.

''The brain is a physical system whose operation is governed solely by the laws of chemistry and physics. What does this mean? It means that all of your thoughts and hopes and dreams and feelings are produced by chemical reactions going on in your head (a sobering thought). The brain's function is to process information. In other words, it is a computer that is made of organic (carbon-based) compounds rather than silicon chips. The brain is comprised of cells: primarily neurons and their supporting structures. Neurons are cells that are specialized for the transmission of information. Electrochemical reactions cause neurons to fire.

Neurons are connected to one another in a highly organized way. One can think of these connections as circuits -- just like a computer has circuits. These circuits determine how the brain processes information, just as the circuits in your computer determine how it processes information. Neural circuits in your brain are connected to sets of neurons that run throughout your body. Some of these neurons are connected to sensory receptors, such as the retina of your eye. Others are connected to your muscles. Sensory receptors are cells that are specialized for gathering information from the outer world and from other parts of the body. (You can feel your stomach churn because there are sensory receptors on it, but you cannot feel your spleen, which lacks them.) Sensory receptors are connected to neurons that transmit this information to your brain. Other neurons send information from your brain to motor neurons. Motor neurons are connected to your muscles; they cause your muscles to move. This movement is what we call behavior.

In other words, the reason we have one set of circuits rather than another is that the circuits that we have were better at solving problems that our ancestors faced during our species' evolutionary history than alternative circuits were. The brain is a naturally constructed computational system whose function is to solve adaptive information-processing problems (such as face recognition, threat interpretation, language acquisition, or navigation). Over evolutionary time, its circuits were cumulatively added because they "reasoned" or "processed information" in a way that enhanced the adaptive regulation of behavior and physiology.''
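The circuit picture in the quoted passage can be illustrated with a toy example: sensory input, weighted processing, motor output. All of the numbers and names below are invented for illustration; the weights merely stand in for the connection strengths the passage says were shaped by selection.

```python
# A toy "circuit": sensory receptors -> weighted processing -> motor output.
# A unit fires when the weighted sum of its inputs crosses a threshold,
# loosely echoing the passage's electrochemical firing description.

def neuron(inputs, weights, threshold=1.0):
    """Fire (return 1) when the weighted input sum reaches the threshold."""
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two sensory receptors feeding one hypothetical "threat detection" unit,
# whose output drives a motor response (behaviour).
sensory_input = [0.9, 0.6]   # e.g. sudden movement, looming shape
weights = [1.0, 0.8]         # hypothetical connection strengths

motor_response = neuron(sensory_input, weights)
print("flee" if motor_response else "stay")  # prints "flee"
```

Nothing in the sketch requires, or produces, an experiencer: it is information processing all the way through, which is exactly the passage's point about what the brain's circuits do.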
 
I don’t think we can really take account of our nature, for the simple fact that we still don’t know what it is. We can take account of our folk psychological assumptions, but they are highly suspect. The fact is that we have a neat little hermeneutic circle with beliefs, desires and rationality. We act rationally on our beliefs to bring about our desires. Or so we think. Now define a belief and go find one, post-Putnam, in the head.

Alright, we don't actually know our nature. I certainly don't. But being rational may be taken to be acting on the basis of whatever it is you do know. I don't see how you could possibly disagree with that even if that does not as such entirely characterise rationality.

Now, we may well all have very different appreciations of what it is we actually know, but I would say it certainly includes things in our mind like perceptions, sensations, memories, emotions, impressions and thoughts, and me, I would say nothing else; so, broadly, Descartes. And I would characterise all our basic beliefs as impressions, which as such are highly pernicious because we very easily mistake our impressions for something we come to think voluntarily. More elaborate things, like the idea that democracy works better than dictatorship for example, are not mere beliefs. They require some thinking, even if also founded on various beliefs.

So, in effect, I would say that you can act rationally on the basis of a hallucination that you don't know is a hallucination, because, essentially, that would be all you have to work with. It's not your rational mind arbitrarily inventing something. It's working with what you have in the moment, even if in this case what you have isn't going to be able to represent anything in the physical world. Just bad luck in this case.

A second level in rationality would be the use of basic logic.

And a third level, the use of language and the ability to exchange information with other minds, and perhaps the ability to debate with oneself.

Where I guess I would agree with you is that I think we are likely largely mistaken as to the extent to which we are indeed rational. I would broadly assess our rational mind to represent possibly 5 percent of our conscious mind but our rational thought processes seem to feature more prominently in our subjective appreciation. And I would guess this comes with, and may even be crucial to, our sense of free will.

Still, one kind of belief is easy enough to identify: I believe that my perceptions are real physical things.

And I actually can't shake off that kind of belief. Should I, and be rational? :rolleyes:
EB
 
Give a person LSD and their brain does not change. Its activity changes.

The brain does change with LSD. The chemicals in the brain (including the blood) are as much a part of the brain, when they are present, as what we like to think of as the 'fixed' structures.

The mind sleeps. But the brain is very active.

In your model though, the mind is present during dreaming sleep.

Clearly a separation between mind and brain exist.

The brain could be operating in different modes. The parts or processes involved in generating consciousness could be active or inactive at certain times.

By the way, none of this says anything about whether qualia are a 'presentation' for the mind to experience, or whether both, being of the same nature, are brain experiences (the currently more accepted model) albeit on a spectrum.

This does not rule out them being causal, of course.
 
The brain as experiencer has several things going for it that the mind doesn't. It is more of a 'thing' in that it can be observed, and it can be active, and its activity can be measured. Also, we know better what it's made of. Sure, of course there's no way to say for definite that it can experience, but why can't it?

Yet I would myself count my own subjective experience as observation of my own mind, provided I pay attention.

So, rather, we come back to the distinction between subjective and objective. We can only observe our own mind and for ourselves but we can observe other people's brains, and to some extent our own, and, crucially, compare our observations with that of other people. So we can only observe a mind subjectively while we can observe brains objectively.

I would say myself that both points of view have advantages and disadvantages.
EB
 