• Welcome to the Internet Infidels Discussion Board.

Eliminating Qualia

Yeah - I'm looking at s7 and s8 with deep suspicion. They are pulling a fast one (probably on themselves) with their training sets. Taking the training data and the test data from the same cohort is just asking for a decent probabilistic analysis to find non-obvious regularities between the two. I'd use two separate sets, with a freshly purchased or generated random sample for each.
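The cohort worry is essentially a data-leakage worry. A minimal sketch of the remedy (all subject IDs and "fMRI" features here are made up; numpy only): hold out whole subjects rather than random trials, so nothing from the test people ever touches training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up stand-ins: 100 trials from 10 subjects, 50 voxel features per trial.
subjects = np.repeat(np.arange(10), 10)   # trial -> subject ID
X = rng.normal(size=(100, 50))            # "fMRI" features (pure noise here)

# Leak-free split: hold out whole subjects, not random trials, so the
# decoder is only ever tested on people it was never trained on.
test_subjects = {8, 9}
test_mask = np.isin(subjects, list(test_subjects))
train_idx = np.where(~test_mask)[0]
test_idx = np.where(test_mask)[0]

# No subject contributes to both sides of the split.
assert set(subjects[train_idx]).isdisjoint(subjects[test_idx])
```

A random trial-wise split of the same array would put every subject on both sides, which is exactly the leak being complained about.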

I'll have to think about it, but it looks like the pattern isn't in the head. They are trying to pull a pattern out of noise so hard that they succeed when they almost certainly shouldn't. God knows this happened often enough back in the early days of connectionism, and this feels very similar.

That said, the retinotopic sheath does map isomorphically to the surface of the eye, and you could certainly pull this sort of 'Mexican hat' driven pattern out of the eye. I'm just unsure that voxels are a sharp enough tool to catch them, and BOLD would normally be way, way too slow. I suppose if you spent enough time training... However, that's a very, very specialist part of the brain that deals with early visual processing, and boy, they must have set it up carefully, so it's not telling us anything that we wouldn't expect.
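For reference, the 'Mexican hat' here is the classic centre-surround (difference-of-Gaussians) receptive-field profile. A minimal numpy sketch, with widths chosen purely for illustration:

```python
import numpy as np

def mexican_hat(x, sigma_c=1.0, sigma_s=2.0):
    """Centre-surround profile: a narrow excitatory centre Gaussian
    minus a broader inhibitory surround Gaussian."""
    centre = np.exp(-x**2 / (2 * sigma_c**2)) / sigma_c
    surround = np.exp(-x**2 / (2 * sigma_s**2)) / sigma_s
    return centre - surround

x = np.linspace(-5, 5, 101)
y = mexican_hat(x)   # positive at the centre, negative in the surround
```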

I haven't (yet at least) read the relevant paper(s) through, so I don't know what the set up was, and even if I did, I doubt I'd appreciate the subtleties in relation to what you say, about 60% of which is above my head.

I'm assuming that the 'decoding' (from fMRI data) is done on the brain activity of the same subject that the training was done on (ie they're not trying to decode my brain activity using yours etc) so I'm not sure what you mean about 'same cohort', partly because I'm unfamiliar with such methods generally.

As for pulling this sort of data out of the eye, that wouldn't, in principle, seem quite so...interesting. And here, I think, as you probably appreciated, the subjects were asked (I believe) to imagine/remember the letter, they weren't seeing it (in the decoding tests, I'm not sure about the training).

To me, this sort of thing goes into the broad category, 'Neural Correlates of Consciousness'. As far as I know, this is still quite a popular pursuit in science. I'm not sure how it's generally viewed by philosophy. I guess I can imagine at least some theoretical philosophers having reservations and being sceptical, as is often philosophy's job, while scientists often get on with applications.

On which note, I read (in an article, from 2012) that some researchers in The Netherlands are/were working on a way to develop a computer program that types text on a computer screen directly from the brain instead of via the fingers on a keyboard. The intention is partly to help people with certain impairments.

https://gizmodo.com/5922208/scientists-invent-mind-reading-system-that-lets-you-type-with-your-brain

There isn't even a link to a paper in that article though, unfortunately, so extra grains of salt added for that.
 
At least accept that there is a property of being observable and another property of being able to observe.

...the known/observed and the unknown/unobserved are reminders that there are at least 2 essential properties in nature. We start with the known/observed and we assume there exists the unknown/unobserved somewhere out there. I will tentatively say that the mind is the observed/known.

Just to chip in.....

I get the vague feeling that saying the above sorts of things has the effect of 'automatically' defining the world as if there is duality, so I'd be vaguely concerned about some sort of 'assuming conclusions' pitfall. I know you're adding caveats, and not yet positing this or that sort of duality.

I guess I'm not sure your two types of property aren't in some way...arbitrary. I guess this has something to do with what a 'property' is, or what a 'separate property' or an 'essential property' is.

Also, I'm not sure what the two are. I was a bit surprised to see you allocating 'mind' to the 'observed/known'.
 
Just before I go and get some much-needed work done, I'll drop this into the mix:

"In this study we developed a new encoding model that predicts BOLD signals in early visual areas with unprecedented accuracy. By using this model in a Bayesian framework we provide the first reconstructions of natural movies from human brain activity. This is a critical step toward the creation of brain reading devices that can reconstruct dynamic perceptual experiences."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3326357/
 
Something like this, I think:


[YOUTUBE]https://www.youtube.com/watch?v=nsjDnYxJ0bo[/YOUTUBE]


"The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:
[1] Record brain activity while the subject watches several hours of movie trailers.
[2] Build dictionaries (i.e., regression models) that translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points at which brain activity was measured.
(For experts: The real advance of this study was the construction of a movie-to-brain activity encoding model that accurately predicts brain activity evoked by arbitrary novel movies.)
[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.
[4] Build a random library of ~18,000,000 seconds (5000 hours) of video downloaded at random from YouTube. (Note these videos have no overlap with the movies that subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average these clips together. This is the reconstruction."
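Stripped of the neuroscience, steps [2]-[4] amount to: fit an encoding model, score a big library of candidate clips by how well their predicted activity matches the observed activity, and average the best scorers. A toy numpy sketch of that selection-and-average logic (every array here is a made-up stand-in, not real data):

```python
import numpy as np

rng = np.random.default_rng(1)

n_voxels, n_clips, n_feats = 200, 500, 64

# Made-up stand-ins for the pieces described above:
library = rng.normal(size=(n_clips, n_feats))        # step [4]: random clip library
encoder = rng.normal(size=(n_feats, n_voxels)) / 8   # step [2]: features -> activity
observed = library[42] @ encoder + 0.1 * rng.normal(size=n_voxels)  # step [3] data

# Predict activity for every library clip, rank by correlation with the
# observed pattern, and average the 100 best clips: the "reconstruction".
predicted = library @ encoder
corr = np.array([np.corrcoef(p, observed)[0, 1] for p in predicted])
top = np.argsort(corr)[-100:]
reconstruction = library[top].mean(axis=0)
```

The real study's encoder is of course a fitted regression model per voxel, not a random matrix; the point is only that the reconstruction step itself is a rank-and-average over a sampled prior.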
 
How can there be mental properties and no mind?

I have no idea. If I were making that claim then that would be a serious head scratcher. I'm not, and I have been repeatedly clear throughout this thread that Dennett is a linguistic behaviorist and I'm not. I think you are mistaking my beating UM over the head about his hypocritical inconsistency about objectivity for a disbelief in minds. Nothing could be further from the truth.

Here's a fast pass on my own position from earlier in the thread.


Me post 137 said:
My position is a variation of Anomalous Monism as championed by Donald Davidson and refined by Jaegwon Kim. I then, in theory if not in practice, champion a sharp divide between the conceptual and the non-conceptual aspects of mind. There's the stuff that goes on in language: logic, intentions, folk psychology, narrative and so on. Then there's stuff that is non-conceptual and smeared all over the brain. Here I take a different position, which is a more straightforward property dualism: in some areas of the brain, and in some circumstances, physical events are also mental events. They are the same thing seen from two perspectives. To use the old saw, 'c-fibres firing is pain': one is a physical and the other a mental description of the same event. Here, I'm leaning on Rudder Baker's constitution accounts.

Descartes may have 'settled' the issue with his substance dualism, but now that we don't need God to underpin our ontology and substance dualism has been systematically proven profoundly unhelpful whenever it turns up, it flabbergasts me that the parsimonious explanation of property dualism isn't more widely accepted in philosophy and science. If it were then most of the (non) problems simply evaporate.

WAB said:
IF we (meaning us collectively, not in the royal sense) are our brains, then WTF is this "illusion" of consciousness being presented to?????

Ask the wrong question and the world will cheerfully offer up the wrong answer. Information hits the brain from a wide range of sources and has to be bound to be much use. One fundamental problem the embodied brain has to solve is how to unify all of the perceptions in a way that actually allows the embodied brain to act effectively in the world.

In the brain, there is no place where it all comes together, there is no finish line at which afferent becomes efferent, and there is no self. What there is, as Chalmers (who I will talk about later) calls it, is the easy problem of consciousness: we measure, discriminate, respond and act, for a start. My personal experience, and my methodologically unscientific but entirely pragmatic assumption, is that all of this happens to feel like something to everyone who isn't me.

So let's start with a P-zombie. As it happens, I don't believe human P-zombies are possible because I think functionalism is bollocks and conscious cognition takes both the meat and the motion. However, if we imagine a p-zombie, then that zombie, which you seem happy to imagine, has somehow magically solved the problem of binding all the disparate aspects of its internal and external sensory manifest - that's the hard bit of the easy problem.

More than that, it's somehow developed the ability to talk about a sense of self it doesn't have. Me, I think it would only be able to think of itself in the third person, and this would be a bit of a giveaway as it would only be able to respond to its own behaviour. A zombie trying to be devious would, presumably, have to whisper very quietly and hear itself... However, it would be able to apply the intentional stance to the body it was, name it, decide what beliefs and desires it had and, not just use them for prediction and explanation, but also, cleverly, to work out what to do next, allowing it to use logic to make both tactical and strategic decisions. It would get interests.

In time, with practice, it might come to build up a bloody great set of settled beliefs and desires. In time, it could start to look a lot like it had a first person perspective. Hell, it could even mistake that cluster of beliefs, desires, folk psychological predictions, experience of the body's dispositions and so on for something more. Now imagine a Watsonian P-zombie. It's internalised that language use and predicts silently, with the brain clamping down on the muscular production of language and simply producing, then interpreting, language. That's broadly what we do when thinking in words, by the way. However, again, it's a P-zombie: its behaviour is on an internal feedback loop rather than an external one, but it still doesn't feel like anything. It's got a rich model of what it does that it can use to predict, explain, justify and produce behaviour, and it's perfectly capable of modelling others - as if they had intentions, desires and interests.

Obviously, this embodied p-zombie brain doesn't really have beliefs and it isn't really there. But it uses the intentional stance to predict and explain behaviour. It will be able to tell stories based on personal history and, living among non-zombies, would learn the grammatically correct use of personal pronouns like 'I'. The 'I', as Dennett puts it, would be the centre of narrative gravity. A convenient hook for the stories. A fictional character written by the p-zombie. Mind you, it would be a fictional character able to respond to its own stories and history. That's starting to feel like a pretty rich (non) mental life.

Now, holding that story in your head, just imagine what would happen if, actually, the unification of all of the perceptions in a way that actually allows the embodied brain to act effectively in the world happened to feel like something to have. Imagine if, before language happened, pain hurt just because sharing information in a rich manner across a brain happened to feel like something. Now you have two options here. Chalmers' option is panpsychism - all matter has a phenomenal character in the same way it has mass. Personally, I see Chalmers' option as incredibly excessive.

All you have to imagine is that, in the brain, some processes that promulgate and bind information across the brain just happen to feel like something when they happen. It doesn't have to be many, because as we saw with the zombie, even a zombie can get a third person sense of self. Most of our sense of self is, as it happens, third person, just like the zombie's. However, a little bit of it isn't. It just happens that, in us, it turns us from p-zombies into something a bit richer, with a spark of internal awareness of solving the easy problems that easy consciousness solves. That internal dashboard is all it takes, all it needs. Most of the heavy lifting is already done by third person folk psychology applied recursively.

So we have two user illusions - a really basic private one that is biology in action seen from the inside (because it feels like something to discriminate, perceive, nocicept and so on) and a public one, rather similar to the zombie one, that allows us to spin stories around this little kernel of biology experienced from the inside. Put the two together and you have something that looks mysterious from the biology (because of the language, intentional stuff and so on) and looks mysterious from the personal (because of the ill-understood biology). Obviously, the two are hopelessly intermingled, which just makes unpicking it near impossible. As I always say, psychology has not yet had its Newton.


I agree. But how can the aboutness be known/discovered without an observer to observe/know its existence?

That just looks like a strange sort of dualism to me. My position on this is stated above.

Sub said:
You'll end up being forced towards idealism and not wanting to be. It's an old and well beaten path. See Berkeley.

I have been through this too. I came to the need for a second kind of general property. I have been explaining them as the observable and the observer.

Which doesn't help you. All you have done is force the problem down a level - how do you explain the observer's ability to observe? Does it have an observer? Homuncular explanations never end well, unless you like turtles.

Ryan said:
I mean you aren't really addressing my claim in any way that furthers the discussion.

Sub said:
I am. I'm pointing out that it's making a rather old mistake.

Ryan said:
I like how Chalmers deals with "old mistakes". But I like to use my own analysis.

Really? As a rebuttal that needs a bit more detail - perhaps a quote or some argument?

Ryan said:
Sure they might both exist within a single substance, but they are not the same thing.

Sub said:
I don't see it, you'll have to explain.

Ryan said:

But I do. This is quite important to your position. You obviously think it for a reason.

Ryan said:
At least accept that there is a property of being observable and another property of being able to observe.

Sub said:
I'm happy to say that some physical states can be representational of other physical states. I don't see that as a particular property of minds though.

Ryan said:
In this instance I would say that the mind is needed to be aware of the representation. Without the mind, representation/intentionality is pretty meaningless.

Sub said:
Why? What makes minds so special? I just see this as a religious way of looking at the matter. Intentionality is a false prescientific theory of mental content and representation is ubiquitous throughout nature.

Ryan said:
First of all, you can't really know for sure that intentionality exists for any other system but your own.

Of course I can! Intentionality is a public phenomenon carried in language. It's phenomenology that is tricky and private, not intentionality.

You can't know this because the mind is limited and spotty.

As I see it intentions are unaffected by the mind's spottiness - they are an instrumentalist solid not a realist one.

Second, the known/observed and the unknown/unobserved are reminders that there are at least 2 essential properties in nature.



I'll say it again - you are making a metaphysical assumption that will end in tears. This commitment to primary and secondary qualities leads directly, as Berkeley demonstrated, to the sceptical conclusion that only the mental properties are knowable, and ultimately to the denial of the physical. Idealism. There's no way around it.

We start with the known/observed and we assume there exists the unknown/unobserved somewhere out there. I will tentatively say that the mind is the observed/known.

Just read Berkeley. You start here, you end up in idealism. I don't need to repeat the argument, it's all over the internet as it's a staple of Descartes to Kant courses everywhere.
 
Ryan said:
I came to the need for a second kind of general property. I have been explaining them as the observable and the observer.

Sounds similar-ish to the experiencer/experienced-thing taxonomy we've been hearing about a lot in the thread?
 
Ryan said:
I came to the need for a second kind of general property. I have been explaining them as the observable and the observer.

Sounds similar-ish to the experiencer/experienced-thing taxonomy we've been hearing about a lot in the thread?

A thing you have not dealt with, nor can deal with.

All you have ever had in your life are your subjective experiences. Nothing else.

You have not known any people. You have known your subjective experiences of them.

For there to be a subjective experience there must be both a subject and an experience.

To deny this is to live in a delusion denying experience.
 
I'll just go with this one:

As for pulling this sort of data out of the eye, that wouldn't, in principle, seem quite so...interesting. And here, I think, as you probably appreciated, the subjects were asked (I believe) to imagine/remember the letter, they weren't seeing it (in the decoding tests, I'm not sure about the training).

That I don't find so surprising - remember what I said about the brain 'leaning forward' into the incoming data. The point with a retinotopic sheath is that it is isomorphic with the surface of the eye. We've known this since shortly after penicillin allowed people to survive serious brain injuries long enough to discover the correlation. Don't, for God's sake, be fooled into thinking that this is a Cartesian screen; it's just a way of effectively moving the eye into the brain for ease of processing. The input layers of a sheath might be isomorphic, but by the time you are through seven or so layers of neurons you are not in Kansas any more.

The point is that one of the more promising ways of thinking about imagining or visualising is that it's a rather clever recycling of expectation phenomena, and we've known that imagining things can cause similar excitation.

So yes, I think it's possible that the claimed response is out there to be found, and I assume they know that too - a simple literature survey will do that. However, my worry is that voxels lack the resolution, and fMRI the agility, to actually catch the dance of processing. If it's a very blocky image that they are trained and prompted with (and it looks to be) and they are asked to really stabilise the visualised image, then maybe. However, I still think they'd get more noise than signal. My worry is that the strategy used for extracting signal from noise might be so good that it will produce signal where none is. I don't see any methodological controls for false positives, which would be the first thing I'd think about in design and the second thing I'd talk about - how have we protected against fooling ourselves with processing artefacts?
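For what it's worth, the standard control being asked for here is a permutation test: rerun the entire decoding pipeline on shuffled stimulus labels and see how often scrambled data 'decodes' as well as the real data. A toy sketch with made-up data and a deliberately simple nearest-centroid decoder (numpy only):

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up "voxel" data: 40 trials, 30 features, two stimulus classes,
# with a weak genuine signal added to class 1 on the first 5 features.
labels = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 30))
X[labels == 1, :5] += 1.0

def accuracy(X, y):
    """Leave-one-out nearest-centroid decoding accuracy."""
    hits = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        c0 = X[keep & (y == 0)].mean(axis=0)
        c1 = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        hits += pred == y[i]
    return hits / len(y)

real = accuracy(X, labels)

# The control: same data, same pipeline, shuffled labels. If the pipeline
# manufactures signal from noise, it will "decode" the shuffles too.
null = np.array([accuracy(X, rng.permutation(labels)) for _ in range(200)])
p_value = (1 + np.sum(null >= real)) / (1 + len(null))
```

If the real accuracy sits well outside the shuffled-label distribution, the decoding is unlikely to be a processing artefact; if it sits inside it, you've been decoding noise.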

It's cool research, though, and if it inspires higher resolution voxels and more agile scanning techniques, that's cool. But I still don't like using blood flow as an indicator of neural activity. It tells you that something is going on, but not what. It's like the old joke about searching for your keys under a streetlight because it's too dark in the alley where you lost them. It's fine as long as people bear in mind how blunt a tool it is.

They never do.
 
Ryan said:
I came to the need for a second kind of general property. I have been explaining them as the observable and the observer.

Sounds similar-ish to the experiencer/experienced-thing taxonomy we've been hearing about a lot in the thread?

A thing you have not dealt with, nor can deal with.

All you have ever had in your life are your subjective experiences. Nothing else.

You have not known any people. You have known your subjective experiences of them.

For there to be a subjective experience there must be both a subject and an experience.

To deny this is to live in a delusion denying experience.

Only a zombie wouldn't get bored of writing the same few assertions over and over again.
 
Only a zombie wouldn't get bored of writing the same few assertions over and over again.

Then you must be a zombie.

You have no answer for it.

You have your subjective experiences and what you subjectively make out of them.

You have nothing else.

You are a mind all alone trying to hide behind the works of other minds.
 
I'll just go with this one:

As for pulling this sort of data out of the eye, that wouldn't, in principle, seem quite so...interesting. And here, I think, as you probably appreciated, the subjects were asked (I believe) to imagine/remember the letter, they weren't seeing it (in the decoding tests, I'm not sure about the training).

That I don't find so surprising - remember what I said about the brain 'leaning forward' into the incoming data. The point with a retinotopic sheath is that it is isomorphic with the surface of the eye. We've known this since shortly after penicillin allowed people to survive serious brain injuries long enough to discover the correlation. Don't, for God's sake, be fooled into thinking that this is a Cartesian screen; it's just a way of effectively moving the eye into the brain for ease of processing. The input layers of a sheath might be isomorphic, but by the time you are through seven or so layers of neurons you are not in Kansas any more.

The point is that one of the more promising ways of thinking about imagining or visualising is that it's a rather clever recycling of expectation phenomena, and we've known that imagining things can cause similar excitation.

So yes, I think it's possible that the claimed response is out there to be found, and I assume they know that too - a simple literature survey will do that. However, my worry is that voxels lack the resolution, and fMRI the agility, to actually catch the dance of processing. If it's a very blocky image that they are trained and prompted with (and it looks to be) and they are asked to really stabilise the visualised image, then maybe. However, I still think they'd get more noise than signal. My worry is that the strategy used for extracting signal from noise might be so good that it will produce signal where none is. I don't see any methodological controls for false positives, which would be the first thing I'd think about in design and the second thing I'd talk about - how have we protected against fooling ourselves with processing artefacts?

It's cool research, though, and if it inspires higher resolution voxels and more agile scanning techniques, that's cool. But I still don't like using blood flow as an indicator of neural activity. It tells you that something is going on, but not what. It's like the old joke about searching for your keys under a streetlight because it's too dark in the alley where you lost them. It's fine as long as people bear in mind how blunt a tool it is.

They never do.

I would take all those caveats on board.

Now, if (if) it's true that some sort of 'typing via thinking' is either possible or in development (that article is 6 years old), and the system works, then wouldn't that be a sort of litmus test?

These days, of course, 'works' might mean leaning heavily on a spellchecker/predictive text algorithm, but if (a) most teenagers are illiterate, (b) that qualifies as an impairment, and (c) it works (even if they have to check and/or correct prompts), then whoopee. We might have something akin to mind reading. Albeit fMRI scanners would have to get quite a bit smaller.
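As a crude illustration of what 'leaning on predictive text' could mean, here is a toy corrector that snaps each noisily decoded word to the most similar entry in a tiny hypothetical vocabulary (a real system would use a frequency-weighted language model, not a flat word list):

```python
from difflib import SequenceMatcher

# Hypothetical vocabulary; everything in it is invented for illustration.
VOCAB = ["brain", "mind", "think", "type", "words", "scanner"]

def snap(decoded: str) -> str:
    """Replace a noisily decoded word with the most similar vocabulary word."""
    return max(VOCAB, key=lambda w: SequenceMatcher(None, decoded, w).ratio())

# Three deliberately garbled "decoded" words get snapped back into shape.
corrected = [snap(w) for w in ["brian", "mnid", "tyep"]]
```

Even a corrector this dumb hides a lot of decoder error, which is why 'it types what I think' claims need to say how much of the lifting the language model is doing.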

Not this:

[image: DJB-213.jpg]

or even this:

[image: Tim-Parker-WUSTL.png]

maybe this:

[image: ar-alt-portable-brain-scanner-01.jpg]

and


Facebook just revealed at its F8 conference that the company has had 60 engineers working on a brain-computer interface that will let you type words merely by thinking them. The technology won't eavesdrop on the thoughts you don't want to share, but will capture the words you think of speaking without speaking them out loud, much like sending a telepathic message in a science fiction movie.

Think this is impossible or decades in the future? Not so much. Technology has existed for years that lets paralyzed people type by thought in exactly this way. It requires a surgical brain implant, though, which most Facebook users probably don't want. Facebook thinks it can lick that problem by using optical imaging to scan your brain 100 times per second and detecting the words you want typed. The company is working with scientists from several large universities, including Johns Hopkins and UC Berkeley, to make this a reality.


https://www.inc.com/minda-zetlin/fa...hat-lets-you-type-words-by-thinking-them.html
 
If there is a machine that a person can influence with their mind why is it so hard to believe it can influence the brain?

What no machine can do is repair a damaged mind.
 
If there is a machine that a person can influence with their mind why is it so hard to believe it can influence the brain?

What no machine can do is repair a damaged mind.

Um, not for the first time, I have to say that I'm not sure you're bringing up something that is being disagreed with.

I don't think anyone is saying that mind does not exist (which you might have thought was the case before) and I don't think anyone is saying (I know I'm not for example) that mind doesn't influence brain. My current opinion is that it likely does. I'm not sure exactly how, or exactly when and in what ways or to what extent, but I'm ok with taking the general view that it is likely to play some causal role.

I do not see how it is even possible for the mind to exert autonomous control. That is not to say I rule it out completely. It goes in the folder labelled, "would like to hear a good case laid out for it in detail, ideally with some good clinical evidence". Ditto free will. Both of those are, I suspect, where we will disagree.
 
If there is a machine that a person can influence with their mind why is it so hard to believe it can influence the brain?

What no machine can do is repair a damaged mind.

Um, not for the first time, I have to say that I'm not sure you're bringing up something that is being disagreed with.

I don't think anyone is saying that mind does not exist (which you might have thought was the case before) and I don't think anyone is saying (I know I'm not for example) that mind doesn't influence brain. My current opinion is that it likely does. I'm not sure exactly how, or exactly when and in what ways or to what extent, but I'm ok with taking the general view that it is likely to play some causal role.

What does a mind experience?
 
If there is a machine that a person can influence with their mind why is it so hard to believe it can influence the brain?
Interestingly, the decoder does not literally read minds, it literally only reads brain activity. That is, in fact, what is most interesting, I think.

- - - Updated - - -

What does a mind experience?

You tell me. I'm reluctant to go around again. :)

- - - Updated - - -

ps Um, I added this:

I do not see how it is even possible for the mind to exert autonomous control. That is not to say I rule it out completely. It goes in the folder labelled, "would like to hear a good case laid out for it in detail, ideally with some good clinical evidence". Ditto free will. Both of those are, I suspect, where we will disagree.

I really must stop post editing!!! :(
 
Interestingly, the decoder does not literally read minds, it literally only reads brain activity. That is, in fact, what is most interesting, I think.

The brain activity it reads is actively and purposefully generated by a mind.

You tell me. I'm reluctant to go around again. :)

You think dodging every point I make is going around.

You went nowhere.

What does a mind experience?

Do you know?

Do you know of anything that is not an experience?
 
I’m absolutely certain that everyone here has a different thing in mind when they use the word mind.

I’m with Ryle. Once we have a multimodal catalogue of all mental and physical states, mind will not be one of them, any more than you can find 'society' or 'the university'.
 
Subsymbolic said:
I'll just go with this one:

As for pulling this sort of data out of the eye, that wouldn't, in principle, seem quite so...interesting. And here, I think, as you probably appreciated, the subjects were asked (I believe) to imagine/remember the letter, they weren't seeing it (in the decoding tests, I'm not sure about the training).

That I don't find so surprising - remember what I said about the brain 'leaning forward' into the incoming data. The point with a retinotopic sheath is that it is isomorphic with the surface of the eye. We've known this since shortly after penicillin allowed people to survive serious brain injuries long enough to discover the correlation. Don't, for God's sake, be fooled into thinking that this is a Cartesian screen; it's just a way of effectively moving the eye into the brain for ease of processing. The input layers of a sheath might be isomorphic, but by the time you are through seven or so layers of neurons you are not in Kansas any more.

The point is that one of the more promising ways of thinking about imagining or visualising is that it's a rather clever recycling of expectation phenomena, and we've known that imagining things can cause similar excitation.

So yes, I think it's possible that the claimed response is out there to be found, and I assume they know that too - a simple literature survey will do that. However, my worry is that voxels lack the resolution, and fMRI the agility, to actually catch the dance of processing. If it's a very blocky image that they are trained and prompted with (and it looks to be) and they are asked to really stabilise the visualised image, then maybe. However, I still think they'd get more noise than signal. My worry is that the strategy used for extracting signal from noise might be so good that it will produce signal where none is. I don't see any methodological controls for false positives, which would be the first thing I'd think about in design and the second thing I'd talk about - how have we protected against fooling ourselves with processing artefacts?

It's cool research, though, and if it inspires higher resolution voxels and more agile scanning techniques, that's cool. But I still don't like using blood flow as an indicator of neural activity. It tells you that something is going on, but not what. It's like the old joke about searching for your keys under a streetlight because it's too dark in the alley where you lost them. It's fine as long as people bear in mind how blunt a tool it is.

They never do.

I would take all those caveats on board.

Now, if (if) it's true that some sort of 'typing via thinking' is either possible or in development (that article is 6 years old), and the system works, then wouldn't that be a sort of litmus test?

These days, of course, 'works' might mean leaning heavily on a spellchecker/predictive text algorithm, but if (a) most teenagers are illiterate, (b) that qualifies as an impairment, and (c) it works (even if they have to check and/or correct prompts), then whoopee. We might have something akin to mind reading. Albeit fMRI scanners would have to get quite a bit smaller.

Not this:

[image attachment 15790]

or even this:

[image attachment 15788]

maybe this:

[image attachment 15789]

and


Facebook just revealed at its F8 conference that the company has had 60 engineers working on a brain-computer interface that will let you type words merely by thinking them. The technology won't eavesdrop on the thoughts you don't want to share, but will capture the words you think of speaking without speaking them out loud, much like sending a telepathic message in a science fiction movie.

Think this is impossible or decades in the future? Not so much. Technology has existed for years that lets paralyzed people type by thought in exactly this way. It requires a surgical brain implant, though, which most Facebook users probably don't want. Facebook thinks it can lick that problem by using optical imaging to scan your brain 100 times per second and detecting the words you want typed. The company is working with scientists from several large universities, including Johns Hopkins and UC Berkeley, to make this a reality.


https://www.inc.com/minda-zetlin/fa...hat-lets-you-type-words-by-thinking-them.html

That’s easy. Thinking in words really is just talking with the speech production out of gear. That’s a nice serial signal at certain places in the brain. Possible, but not close. The public side of the brain is easy; the private side practically impossible.
 
The brain activity it reads is actively and purposefully generated by a mind.

That is something you believe.
Not something you could ever demonstrate.

Facebook just revealed at its F8 conference that the company has had 60 engineers working on a brain-computer interface that will let you type words merely by thinking them.

The thinking is what pushes the machine.

The mind.

If one understands what is happening in front of them they know nothing can be proven in this forum.

All that can happen is people can present ideas and discuss them. Nothing can be proven by anyone. Not one thing.

Evidence (something a mind experiences) can be presented.

But it is insanity to ask for evidence that a mind experiences. It is asking for something to experience.
 