Topic: Artificial intelligence - Ideas producer

Artificial intelligence - Ideas producer

Is there any possibility that, someday, there will be a computer that can produce ideas?

Or is there currently any development along those lines?


Re: Artificial intelligence - Ideas producer

Reply #1
Right now, the closest one I know about is an IBM machine named "Watson". It seems to be able to come close to original thought (you have to be pretty good to do well at chess, for example), but beyond that, we're still talking science fiction.

Data, the Starfleet android from Star Trek, is not coming to a coffee-house near you any time soon.
What would happen if a large asteroid slammed into the Earth?
According to several tests involving a watermelon and a large hammer, it would be really bad!

Re: Artificial intelligence - Ideas producer

Reply #2
Good on you, mjsmsprt40, for being aware of that. Now get them to link it to John McCain's brain, if it hasn't shrunk too much!
"Quit you like men:be strong"

Re: Artificial intelligence - Ideas producer

Reply #3
Is there any possibility that, someday, there will be a computer that can produce ideas?

Never.
There's no such thing as artificial intelligence. The term has been used to describe what is in reality a mere automatism between a given input (or series of inputs) and a predefined output (or series of outputs).

In the nineteenth century, a Turkish man (I believe) astonished several European courts with a mechanical automaton that could "play" chess.

However, one could conceive of the idea of a machine more intelligent than any human being. Such a machine could build zillions of other machines, each of them more intelligent than any human being. That's the basis of the "Singularity", something worth giving a thought.
A matter of attitude.

Re: Artificial intelligence - Ideas producer

Reply #4
I guess the main problem of the human mind is its finiteness, while new inventions depend on the human mind.

AI can probably store a huge number of theories of everything, many memories, many files, more than a human can. But somehow it can't produce ideas or be creative. It can't create something new, even though it holds a huge amount of science.

That is because it is just hardware and programs. It can only understand what it has already been programmed with.

I don't think it can understand a "cookie recipe", or understand what we are talking about in this thread.

:coffee:

By the way, does anyone have ideas about how ideas happen, or a formula for ideas?
For example: hey, I have an idea to create a new kind of cookie. :chef:

Re: Artificial intelligence - Ideas producer

Reply #5
Oh, I know this will stretch the intelligence aspect of this thread, but on a lighter note, most forums also show examples of artificial intelligence too!  :devil:
"Quit you like men:be strong"


Re: Artificial intelligence - Ideas producer

Reply #7
Bing added some AI to its search engine.[1] So AI is finding its niche - it's an enhancement to search engines. The enhancement is that you do not always know if the answer comes directly from the database or has been touched up a bit by the algorithm for your pleasure.

When ChatGPT came out, I participated in a debate on another forum about how intelligent AI is. My stance is that AI has no intelligence whatsoever. And machine learning does not learn. That is, there is no similarity or analogy, much less identity, between how the human mind works and how AI works.

The opponents meanwhile hoped that progress in AI would help define intelligence, learning and consciousness "better" so as to be easily applicable to both humans and machines. And of course that there's a chance of AI becoming conscious, sentient, alive, aware etc.

Once upon a time the idea of a chess computer was that you'd feed the chess rules to the computer and voila, all humans would be beaten. It did not turn out like that. Instead, the whole history of chess games had to be fed into computers, along with what a win and a defeat look like, and only then did chess computers finally defeat humans. Similarly, in modern AI the heavy lifting is done by the human-produced database, and the "learning algorithm" does hardly anything worthy of note.

Computers do not work by abstract rules. Some aspects of programming languages may leave the impression that there are some quasi-philosophical principles being fed to the computer, such as addition, "processing", "simulation" etc., but actually the computer operates as follows. To do something, it needs a representation of "something", and then it can do something with the representation. And the representation must be trivially concrete, such as one or a few memory slots. Since this is so, a computer cannot (categorically so, as in never ever, and never will) even have a concept of addition. To add numbers, it can only take a literal memory slot and add it to another literal memory slot, and it can only add this way up to the physical limit when it runs out of memory slots.

So the work principle of a modern computer is exactly the same as on an abacus. When the abacus runs out of beads, it literally cannot compute any further. What to do in such a situation is up to the human user of the abacus - either bring in more abacuses or reassign the values of some beads to accommodate bigger numbers.

This is why the Y2K problem was a thing: Computers could not count time. Counting time should be easy: Just keep adding seconds. But computers' time addition operation hit a wall, unless humans manually interfered to make them count differently. Computers were not aware of the Y2K problem approaching and thought nothing of it. And they think nothing of it even now, after fixes and patches have been applied. They only do what the man-made program tells them to do; they do nothing beyond or apart from it.
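
To make the abacus and Y2K points concrete, here is a minimal, hypothetical sketch (my own illustration, not code from any real Y2K-era system) of a counter that stores the year in two decimal digits; the wrap from 99 back to 00 is the program doing exactly what it was told and nothing more.

Code:
# A hypothetical two-digit year counter, for illustration only.
# The program has no concept of centuries; it only manipulates the
# two decimal digits it was given, exactly like beads on an abacus.
def next_year(two_digit_year: int) -> int:
    return (two_digit_year + 1) % 100  # 99 + 1 wraps back to 0

year = 98
for _ in range(4):
    year = next_year(year)
    print(year)  # prints 99, 0, 1, 2 -- "1999" is followed by "1900" unless a human rewrites the code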

Similarly AI has no concept of anything. It has been made to simulate human responses and reactions, but this is not happening along some rational or logical lines. It is happening because a vast catalogue of good-quality man-made material has been fed to it and the patterns in the material have been analysed enough so as to "make sense" - make sense to humans who use the AI. When the AI starts misbehaving (from human point of view), it cannot self-correct. It needs manual interference. AI is software. It only does what software can do and has changed nothing about the essence of software.
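
As a toy illustration of what "analysing the patterns in man-made material" can amount to (my own sketch, not a description of how ChatGPT or Bing actually work internally), the trick can be reduced to counting which word followed which in a corpus and replaying the most common continuation.

Code:
# A toy next-word "predictor": pure counting over man-made text,
# with no concept of what any of the words mean.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(word):
    # Replay the continuation seen most often in the corpus;
    # if the word was never seen, there is nothing to replay.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else word

print(continue_text("the"))  # 'cat', only because 'the cat' is the most frequent pair above

Scale the corpus up by many orders of magnitude and make the counting statistical rather than literal, and you get output that "makes sense" to its human readers while still being nothing but replayed patterns.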

A genuinely good insight I gained in the debate was that programmers or people with plenty of non-trivial knowledge about the inner workings of computers can talk hopelessly past each other. It turns out there are different kinds of programmers. I thought programming meant creating software. However, there is another kind of programming that means making hardware part of a computer system by means of some machine-level coding. There is probably no necessary philosophical break between the two, but it seems that the latter type is more prone to assuming that by creating their systems they are bringing forth new life or such.
I have not found the AI on Bing though. Maybe it requires logging in.

Re: Artificial intelligence - Ideas producer

Reply #8
When ChatGPT came out, I participated in a debate on another forum about how intelligent AI is. My stance is that AI has no intelligence whatsoever. And machine learning does not learn. That is, there is no similarity or analogy, much less identity, between how the human mind works and how AI works.
That might be overstating the negative a bit. We are extremely good statistical pattern matchers, much better than closely related monkeys for example, so there may well be a certain conceptual similarity between how some part(s) of our brain work and how these algorithms work. That's also why the people who developed them named them "neural nets," because they were designed after a certain model of how the brain might work.
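
For what it's worth, the "neuron" in "neural nets" refers to a very simplified model unit: a weighted sum of inputs squashed through a nonlinearity. A minimal sketch of that abstraction (values made up, purely for illustration):

Code:
# One artificial "neuron": a weighted sum of its inputs passed through
# a squashing function, loosely modelled on a cell that fires more
# strongly the more its inputs excite it.
import math

def neuron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))  # logistic "firing rate" between 0 and 1

print(neuron([0.5, 0.2], [1.3, -0.6], 0.1))  # roughly 0.65 for these made-up numbers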

The fact that it outputs such natural-sounding prose certainly doesn't contradict the hypothesis exactly, but we do need orders of magnitude less training data than these algorithms. Crucially, there is nothing much intelligent about merely a simulacrum of our visual and auditory processing and production abilities. But it could be a step towards it.

Re: Artificial intelligence - Ideas producer

Reply #9
Matt Briggs has been arguing against this sort of foolishness, on various fronts, for quite some time!

Quote
Beware the Deadly Sin of Reification!, sayeth the Philosopher. Reification is the Snake of Science. It lurks. It sneaks. It insinuates. The Snake tells the scientist his thoughts are good, that they are better, even, than the scientist thought. Flattered, the scientist comes to believe his model not only describes his uncertainty in Reality, but that his model is Reality.

We’ve seen this sin, you and I dear reader, hundreds of times over the years. Yet the funniest instance is before us now, with Elon Musk, Steve Wozniak, Yuval Noah Harari, and even ex-presidential candidate Andrew Yang, signing a document declaring WE ARE FRIGHTENED UNTO DEATH OF THE COMPUTERS WE PROGRAMMED AND WE DON’T KNOW HOW TO STOP PROGRAMMING THEM. Or some such name.

It’s true. The lurid fantasies of those who believe AI is not only not artificial but ackshually intelligence, say that AI is gonna get us, and that it will soon surpass we puny brained men and think greater thoughts than can now be thunk (you heard me). The Nervous are sure these wondrous cogitations must include the thought that man has outlived his usefulness, and that, sad as it might be, he is too stupid to let live.

Well, there is much truth in that notion as any glance at the “news” confirms. The obviousness of the solution is surely a driving concern of the Nervous. But it isn’t the ultimate cause of it. That comes from believing a model, and concluding that model is Reality.
"Humor is emotional chaos remembered in tranquility." - James Thurber
"Science is the belief in the ignorance of experts!" - Richard Feynman
 (iBook G4 - Panther | Mac mini i5 - El Capitan)

Re: Artificial intelligence - Ideas producer

Reply #10
...there may well be a certain conceptual similarity between how some part(s) of our brain work and how these algorithms work. That's also why the people who developed them named them "neural nets," because they were designed after a certain model of how the brain might work.
You don't get to be a psychologist without acknowledging that the brain and the mind are different things. You may get to be a neuroscientist, but not a psychologist.

Assuming that the brain and the mind are the same or that the distinction is unnecessary is a typical physicalist fallacy. It's as serious a fallacy as assuming that grammar and syntax are the same or that words and meanings are the same. Or that the symptom and the pathology are the same.

In my debate, one good old-time programmer came up with the aphorism "The brain is what happens when you keep adding features and never rewrite from scratch." Unfortunately for him, it is not in the nature of brains to happen. Also, physically the brain is among the less complicated parts of the body - it's just a mass of rather uniform neurons, no other tissue, no detectable distinct features. And of course by "brain" he meant the mind, but that's a no-brainer.

The intelligence of AI has interesting connection points. For example, if AI is intelligent, what prevents one from saying it is sentient and probably alive? And if yes, then aren't current smartphones already smart? And if yes, doesn't this have ethical implications like every time you switch it off you are killing it? And when it ceases to function and you throw it away you are desecrating a dead body?

Also, are philosophical zombies a possibility? Physicalists, if they are consistent, should probably assume that a philosophical zombie is a human being, plain and simple, there is no difference between the two.

Matt Briggs has been arguing against this sort of foolishness, on various fronts, for quite some time!

Quote
Beware the Deadly Sin of Reification!, sayeth the Philosopher. Reification is the Snake of Science.
Those engineering masterminds cannot help but reify. They know how machines work and they think everything else works the same way, because if something works, then it must definitely work like the thing whose workings they know, namely the machine.

Re: Artificial intelligence - Ideas producer

Reply #11
Also, are philosophical zombies a possibility? Physicalists, if they are consistent, should probably assume that a philosophical zombie is a human being, plain and simple, there is no difference between the two.
Philosophical zombies are incoherent. Of course there's no difference. :)

Re: Artificial intelligence - Ideas producer

Reply #12
Oh dear. There is a difference between whether philosophical zombies are conceivable and whether they are ontologically and empirically possible.

They are definitely conceivable, therefore not incoherent. Both you and I deny their possibility for different reasons, but one would have to be the crudest type of materialist to say they are incoherent. The crudest type being one that would deny there is a difference between words and meanings.

Re: Artificial intelligence - Ideas producer

Reply #13
Philosophical zombies are self-refuting because thinking you experience qualia is simply the same thing as experiencing qualia. That's what qualia are. Unless something is lying about experiencing qualia, but the philosophical zombie isn't lying by definition. This is a logical impossibility. If they're not lying about it, that means they're experiencing qualia.

Of course you could conceive of magical qualia but then you're not saying anything meaningful about qualia.

Re: Artificial intelligence - Ideas producer

Reply #14
This is what you get from reading secondary interpretations. Should I try to explain how philosophical zombies really work? Here goes.

The thought experiment addresses the empiricist idea that to really know means to verify externally. In contrast, for dualists and idealists to really know means to comprehend by introspective conscious experience.

A philosophical zombie behaves as if having consciousness and intelligence and reacts for all external purposes as if experiencing qualia. In truth, he is a zombie, i.e. no consciousness, emotions and feelings whatsoever.

Lying or non-lying doesn't enter the picture at all. To lie or not to lie, the zombie would have to be aware of the distinction between its experience and consciousness of qualia on the one hand and its reactions and behaviour on the other, but the point is that all he has is reactions and behaviour, no consciousness and experience whatsoever. The zombie only has the reactions and behaviour. He doesn't have the corresponding internal experience that would enable him to ponder a la "Well, I really don't have those experiences, but life seems to go better as long as I pretend that I do, so I'll just keep on pretending." He is neither lying nor not lying. Truth is not part of his system.

He's a zombie, capisce? The thought process is not there. The sense-experience is not there. The thing to ponder with - the mind - is not there. All there is is the reactions and behaviour, which for empirical purposes are identical enough for the being to be deemed a conscious, rational, alive and well Homo sapiens.

As a flawed approximation, think of a human-shaped animal: The sufficient instincts are there in order to go through the motions of ordinary human life, including sophisticated speech and elaborate professional and social skills, but the mind and intellect are not there. The point of contention highlighted by the thought experiment is: Are the empirically verifiable biological processes, including the firing neurons as observable by neuroscientists, identical to internal experience or not? In other words, is phrenology true (i.e. is the brain identical to the mind)? Alternatively, is it permissible to presume or assume that external reactions are sufficient signs of some internal reflection, self-reflection, introspective consciousness and experience of qualia (by some other possible theory than phrenology, as phrenology is known to be false)?[1]

The thought experiment should be enlightening with regard to AI. AI talks (and writes and "creates") human-like enough so as to fool many. Consequently, since it "behaves" like a human being, doesn't it follow that it really is creative and intelligent? Doesn't it follow that we should treat it like a human being the full monty, along with human rights and the whole nine yards? If you say no, then why? Doesn't "yes" follow on the empiricist-physicalist theory?

There can be said to be two main schools of thought with regard to attitude towards AI. One is the Turing Test school of thought: If it seems intelligent enough to fool intelligent beings, then it must be said that it is intelligent. The other is Chinese Room: It may very well seem like it, but some of it may still lurk hidden in the shadows so we cannot reliably conclude that it is it.

I'm of a third school of thought. AI is software. We know all about software. Nothing is in the shadows. The essence of software is simulation (or modelling). And the essence of simulation is: NOT the real thing. It may very well seem intelligent and creative, but we know for absolute fact that it is not. Artificial intelligence is 100% artificial and 0% intelligent.
Quoting you, "Of course there's no difference." Do you see any wiggle room for yourself?

Re: Artificial intelligence - Ideas producer

Reply #15
Or maybe — just maybe — I've actually read the text. There's this little trick in reading texts, namely using your brain — or your mind, if you prefer, but the English idiom uses the word brain — to draw logical conclusions based on the premises. And sometimes you'll find you come to the conclusion they didn't actually take their own thought experiment very seriously.

Are the empirically verifiable biological processes, including the firing neurons as observable by neuroscientists, identical to internal experience or not? In other words, is phrenology true (i.e. is the brain identical to the mind)?
The mind is a process created by the brain. To call them identical is to say that flying is identical to an airplane.

Alternatively, is it permissible to presume or assume that external reactions are sufficient signs of some internal reflection, self-reflection, introspective consciousness and experience of qualia (by some other possible theory than phrenology, as phrenology is known to be false)?
To be a philosophical zombie you need to be identical to a non-zombie, yet somehow not experience qualia, yet somehow behave exactly the same as a non-zombie. This is incoherent. The zombie will have to behave differently, that is to say that it doesn't experience the smell of wood or colors, or lie. The latter is also behaving differently, but in a form that will be detectable by — yes indeed — neurons in the brain. The zombie's brain will be different.

"AI" is literally about as different as can be. AI is exactly the way in which a zombie could actually exist, contrary to the one described in the premises.

Re: Artificial intelligence - Ideas producer

Reply #16
The thought experiment should be enlightening with regard to AI. AI talks (and writes and "creates") human-like enough so as to fool many. Consequently, since it "behaves" like a human being, doesn't it follow that it really is creative and intelligent? Doesn't it follow that we should treat it like a human being the full monty, along with human rights and the whole nine yards? If you say no, then why? Doesn't "yes" follow on the empiricist-physicalist theory?
But of course the answer to that could be "yes". Turn it around. How do you know that's not what we're doing? ;)

I don't understand why you keep saying things like that as if they're some kind of gotcha. It's fundamentally quite similar to whether it is or isn't okay to keep cattle as livestock or humans as livestock.

Re: Artificial intelligence - Ideas producer

Reply #17
"AI" is literally about as different as can be. AI is exactly the way in which a zombie could actually exist, contrary to the one described in the premises.
I should note that I mean this in the logical sense by which I reject philosophical zombies as incoherent. In actual practice it may not be possible to be sufficiently complex while processing certain types of information without experiencing qualia as a side effect, or perhaps rather as an unavoidable consequence.

Re: Artificial intelligence - Ideas producer

Reply #18
Or maybe — just maybe — I've actually read the text. There's this little trick in reading texts, namely using your brain — or your mind, if you prefer, but the English idiom uses the word brain — to draw logical conclusions based on the premises. And sometimes you'll find you come to the conclusion they didn't actually take their own thought experiment very seriously.
You are still missing the point of thought experiments. They are a tool of philosophy. The philosophical zombie does not have to be empirically plausible to serve as a lesson. The lack of empirical plausibility does not make it any less serious.

E.g. truth is an abstract concept. It is not anything empirical at all. Yet it is absolutely stone-dead serious as a matter of philosophy and law. Same with rights and wrongs.

The mind is a process created by the brain. To call them identical is to say that flying is identical to an airplane.
Well, flying is not a process created by the airplane. Certainly there is absolutely nothing in the process that the airplane creates. Rather, it is what the engineers and pilots create based on the properties of air and aerodynamic components. To assume that the airplane creates the process is a horrendously false description of what is going on.

Now, knowing this, why would you assume your approach to how the brain and the mind relate has any resemblance to what is really going on? You just compared the airplane+flying to the brain+mind. Want to give it another shot?

To be a philosophical zombie you need to be identical to a non-zombie, yet somehow not experience qualia, yet somehow behave exactly the same as a non-zombie. This is incoherent.
Let's repeat: The philosophical zombie does not experience qualia. It acts *as if* it did, but it *does not*. Moreover, it does not pretend - there's just the behaviour, neither pretended nor honest, but behaviour nonetheless. Like a sleepwalker walking - he sure walks, but he knows nothing of it and there's no related intention, pretension, responsibility or whatever. That's the definition of zombie.

The philosophical zombie *is not* identical to a non-zombie. It is identical for empirical purposes, but the point of the thought experiment is to highlight that the empirical is not all there is.

Consider the following. When a healthy human being touches a hot stove, his hand gets burned AND he removes his hand *due to pain*. When a person with hypoesthesia (or whatever the loss of sensation is) touches a hot stove, his hand gets burned, but he does not feel the pain. Now, suppose there is someone who does not feel the pain when his hand gets burned, but since childhood he has learned that it is customary to remove the hand and wince when touching a hot stove, so that's what he does because everybody else does it. This is a baby-step towards the philosophical zombie.

Your statements here are not demonstrating the incoherence of the concept of the philosophical zombie. Your statements demonstrate that you either do not comprehend the concept (which I find hard to believe) or that you reject the concept. Rejection is not refutation. Rejection of the thought experiment due to your different presuppositions is just non-interaction with the thought experiment that is meant to challenge your very presuppositions. It's the fallacy of begging the question.

The zombie will have to behave differently, that is to say that it doesn't experience the smell of wood or colors, or lie. The latter is also behaving differently, but in a form that will be detectable by — yes indeed — neurons in the brain. The zombie's brain will be different.
Just refusal to follow through with the thought experiment. Come on, thought experiments are for people who are able to think, particularly people who are able to consider points of view that are not their own. It's a form of empathy.

"AI" is literally about as different as can be.
Yes, literally as in its body (the digital computer) is nothing like the human body. And this fact should be enlightening. Being materially as different as it is, how can it act so similar? For some people, this convincingly demonstrates or at least lucidly illustrates that the mind is NOT a process of the brain, but rather it can emerge from anything, such as a machine, a flowing rivulet if it accidentally hits the right resonance or whatever. I of course reject this, because I take AI to be simulation - radically and categorically NOT the real thing.

AI is exactly the way in which a zombie could actually exist, contrary to the one described in the premises.
In terms of a physical experiment in the current stage of civilisation, I agree with you. But in terms of a philosophical discussion, you are frankly not on board with what a thought experiment is.

"AI" is literally about as different as can be. AI is exactly the way in which a zombie could actually exist, contrary to the one described in the premises.
I should note that I mean this in the logical sense by which I reject philosophical zombies as incoherent. In actual practice it may not be possible to be sufficiently complex while processing certain types of information without experiencing qualia as a side effect, or perhaps rather as an unavoidable consequence.
Again, it so happens that the brain is among the physically and biologically less complicated organs. So, assuming that the brain is the mind (or any other fallacious version of the same, such as "the mind is a process of the brain" or "the mind is what the brain does"), complexity is the wrong description of what is going on even from the purely empirical point of view.

Re: Artificial intelligence - Ideas producer

Reply #19
I'm not talking about being empirically plausible. I'm talking about actually doing the thought experiment. You have to remain conceptually consistent. The Chinese Room you mentioned is another example. Within the confines of the thought experiment, the man using the book may be dumb but the book still has to be capable of correctly speaking Chinese, so can you actually say the room as a system isn't conscious as the thought experiment pretends? All you've actually said is that the man is like some parts of the body, which isn't particularly interesting. You haven't said anything about consciousness.

Now, knowing this, why would you assume your approach to how the brain and the mind relate has any resemblance to what is really going on? You just compared the airplane+flying to the brain+mind. Want to give it another shot?
I don't understand thought experiments, but clearly you understand analogies. ;) When you think about the context of the discussion it might become obvious why I purposefully picked a man-made machine, but swapping in a bird doesn't change anything about the analogy.

But to get back to what you said above it:
Well, flying is not a process created by the airplane. Certainly there is absolutely nothing in the process that the airplane creates. Rather, it is what the engineers and pilots create based on the properties of air and aerodynamic components. To assume that the airplane creates the process is a horrendously false description of what is going on.
Is this even an equivocation fallacy? It's the configuration of the wings and engines that creates the process of flight. That which caused the wings and engines to be isn't an active part of the process, but a prerequisite to it.

The philosophical zombie *is not* identical to a non-zombie. It is identical for empirical purposes, but the point of the thought experiment is to highlight that the empirical is not all there is.
Shocking, who'd have thought. :) Thought experiments can show the opposite of what they claim or they can be logically impossible. I rather doubt that's something you disagree with; you just think this one's decent.

Consider the following. When a healthy human being touches a hot stove, his hand gets burned AND he removes his hand *due to pain*. When a person with hypoesthesia (or whatever the loss of sensation is) touches a hot stove, his hand gets burned, but he does not feel the pain. Now, suppose there is someone who does not feel the pain when his hand gets burned, but since childhood he has learned that it is customary to remove the hand and wince when touching a hot stove, so that's what he does because everybody else does it. This is a baby-step towards the philosophical zombie.
Ah yes, the person who's completely identical on account of hypoesthesia clearly demonstrates… wait a second.

In terms of a physical experiment in the current stage of civilisation, I agree with you. But in terms of a philosophical discussion you are frankly not on board what a thought experiment is.
Of course thought experiments are useful, but you have to conduct them correctly.

Again, it so happens that the brain is among the physically and biologically less complicated organs. So, assuming that the brain is the mind (or any other fallacious version of the same, such as "the mind is a process of the brain" or "the mind is what the brain does"), complexity is the wrong description of what is going on even from the purely empirical point of view.
Saying the brain isn't complex is just downright silly.

Re: Artificial intelligence - Ideas producer

Reply #20
When you think about the context of the discussion it might become obvious why I purposefully picked a man-made machine, but swapping in a bird doesn't change anything about the analogy.
Of course there's a vital difference between the bird and the airplane. Vital in every sense. The difference between a living being and a machine is metaphysically categorical.

I'll leave you pondering the fact that you did not reject, refute or even problematise any of the corollaries that I pointed out to you, such as the ethical corollaries following from the assumption that AI is intelligent to whatever degree. And that you did not demonstrate the incoherence of the thought experiment, you only rejected it based on your own presuppositions. Demonstration of a position's incoherence, if you still want to try it, is to be done on the terms of the position. You definitely have elucidated its inconsistency with your concepts, but you have not demonstrated its internal inconsistency.

And you had nothing to say about the sleepwalker analogy to clarify the definition of zombie for you. So I guess that one works well. The example of hypoesthesia on the other hand was not an analogy, not intended to claim at all that such condition is empirically identical to a healthy human. Rather, it was one in a series of illustrations to get you closer to the concept of philosophical zombie, which you still have failed at.

In my opinion the mind and the brain relate more like time versus watches and clocks. I'd like to suppose that you don't think that time is a process created by a watch, but I'm not quite sure anymore.

Re: Artificial intelligence - Ideas producer

Reply #21
I'll leave you pondering the fact that you did not reject, refute or even problematise any of the corollaries that I pointed out to you, such as the ethical corollaries following from the assumption that AI is intelligent to whatever degree.
Why would I reject the logical conclusion to treat such a hypothetical AI appropriately as per their level of consciousness and intelligence? That wouldn't make much sense.

And you had nothing to say about the sleepwalker analogy to clarify the definition of zombie for you. So I guess that one works well. The example of hypoesthesia on the other hand was not an analogy, not intended to claim at all that such condition is empirically identical to a healthy human. Rather, it was one in a series of illustrations to get you closer to the concept of philosophical zombie, which you still have failed at.
P-zombies aren't hard to imagine. The incoherence comes from the fact that if they're identical in every way, the conclusion must be that qualia are identical to thinking you experience qualia.[1] You disagree, but naturally in my opinion it's you who's importing definitions of qualia from outside the thought experiment. In your example, it's not coherent to say that your hand is in pain from touching a hot stove but that you feel nothing. It's neither or both.

In any case, I doubt it's fruitful to continue further down that path. It's been trodden. :)

In my opinion the mind and the brain relate more like time versus watches and clocks. I'd like to suppose that you don't think that time is a process created by a watch, but I'm not quite sure anymore.
In some sense they do. Clocks fool us into thinking time can be divided into concrete little chunks. But that aside, I do provisionally conceive of time as a process created by all "clocks"[2] put together.
Or the reverse, that no one has any. Given that we all think we have qualia that seems a bit sillier, though given an incoherent or incomplete definition of qualia it's certainly possible. But in that case we still have qualia; they're just not the particular thing called qualia here, so that would be a very unhelpful line of thought. Even if qualia aren't what we think they are, there's still a phenomenon we call qualia.
I.e., all the atoms in the universe.

Re: Artificial intelligence - Ideas producer

Reply #22
Why would I reject the logical conclusion to treat such a hypothetical AI appropriately as per their level of consciousness and intelligence? That wouldn't make much sense.
In some way the point has been all along whether movies like Terminator and Ex Machina represent a possibility that we should take seriously or we can treat them as mere fiction. Based on my metaphysics, concluding that there is exactly zero chance of AI waking up and taking over, I can calmly treat them as mere fiction.

Whereas you have trouble even following the timeline of AI. It has been with us for at least half a century or so. It is not hypothetical.

P-zombies aren't hard to imagine. The incoherence comes from the fact that if they're identical in every way...
They are not identical to humans in every way, as I explained. Clearly, p-zombies are impossible for you to imagine despite explanations.

There are other thought experiments and examples to demonstrate that the physical is not all there is that came up in this discussion, but you have let them all slip.

In your example, it's not coherent to say that your hand is in pain from touching a hot stove but that you feel nothing. It's neither or both.
Here's what I actually said: "Now, suppose there is someone who does not feel the pain when his hand gets burned, but since childhood he has learned that it is customary to remove the hand and wince when touching a hot stove, so that's what he does because everybody else does it."

It should not be too hard to understand: Acts *as if* in pain, but does not feel anything. To understand "as if" seems to be an impossibility for you.

But that aside, I do provisionally conceive of time as a process created by all "clocks"[1] put together.
Okay, I was right when I assumed the crudest type of materialist.

Clearly you have reasons for your stance and to reject other stances. Likely among the reasons is something like that truth matters. Interestingly, the concept of truth as explained by any materialist is internally incoherent. For hardcore believers in evolution, survival should be the value that trumps truth any day. Evolutionarily, truth-pursuers are always a weak minority and extra rare in high places. The rulers are the powerful. In the animal world - and on consistent Darwinism there is no human world - truth-pursuers are not even a thing at all. On materialism, truth has no value and, for metaphysical consistency, must be construed as non-existent.
I.e., all the atoms in the universe.

Re: Artificial intelligence - Ideas producer

Reply #23
Regardless of the status of AI, its usage is not really controversial — only mostly misunderstood. It consists of tools for sorting queries to data sets. That's it. They're nothing more.

One can blame the fallacious Turing Test for much of the mischievous talk circulating in our commons: It in essence posits the "if it can fool us" criterion for detecting sentience. Would we apply a similar test to, say, our financial dealings? :) Madoff presumed so...

An instructive article was highlighted in a recent issue of the Manhattan Institute's magazine: Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems.

To ersi's point about Truth above, I'd mention the Pragmatist philosophers of two centuries ago, only to deride their positions — for much the same reason as he gives: Incoherency rears its ugly head in any serious application of their program.
Of course, that's what naturally comes of trying to elevate a maxim to a principle. :)

The difficulty and the problem of defining Truth is that it necessarily involves values... And, to date, we only have one example of creatures capable of acknowledging values, and acting upon them.
When we talk about truth (uncapitalized), we have to refer to particular domains (e.g., mathematical truth) for which the criteria can be specified. For Truth, with a capital T, we enter the realm of metaphysics — which persists despite our dependence on empiricism[1].
(The most bizarre and amusing philosophical system I've read is contained in A. N. Whitehead's "Process and Reality".)
For science.
"Humor is emotional chaos remembered in tranquility." - James Thurber
"Science is the belief in the ignorance of experts!" - Richard Feynman
 (iBook G4 - Panther | Mac mini i5 - El Capitan)

Re: Artificial intelligence - Ideas producer

Reply #24
In any case, I doubt it's fruitful to continue further down that path. It's been trodden. :)
Oh dear, how bad the arguments against the conceivability of zombies are, about half of them conflating conceivability with real-world possibility, which is the crudest form of missing the point. Have philosophers really lost their edge and wit?

To be clear, I firmly reject the real-world possibility of zombies. This does not make the thought experiment the least bit uncomfortable for me.

Different metaphysical assumptions, if followed through consistently, lead up to different conclusions about the possibility of zombies. Aristotelians think of sense of smell, colour, sound etc. as material, i.e. biological matter is vital and sentient, whereas there are immaterial aspects of thought. On this basis, it may be possible to manufacture a configuration of live matter resulting in sense-behaviour, but without intellect, which would constitute a zombie.

A point of contention remains due to the distinction of biological matter versus inert matter. If man-made artefacts can be produced using only inert matter, then zombies are not possible.

A Neoplatonist like myself thinks of sense of smell, colour, sound etc. as immaterial vis-a-vis biological organs. So there are more hurdles to zombies on Neoplatonism: first the manufacturing of a living being using biological matter, and second providing it with immaterial senses.

Another factor is theological assumptions: Can demons puppeteer, say, a freshly dead body? If yes, it would be somewhat close to a real-life zombie.

Whereas on materialism, as we have figured out now, we are all non-different from zombies, just as we are non-different from animals, just as we are non-different from machines. AI is intelligent, smartphones are smart, machines learn and teach, abacuses compute, airplanes create flying, dolls are babies, silicon breasts are breasts, plastic fruit is fruit and so on and so forth.

An instructive article was highlighted in a recent issue of the Manhattan Institute's magazine: Danger in the Machine: The Perils of Political and Demographic Biases Embedded in AI Systems.
Thanks, Oakdale, for an interesting article, and it's good that it is available as a PDF. I took a look at the conclusions. I agree that the main danger of AI is that it fools and misleads humans, and makes humans complacent as their tasks are automated. The threat is not that it would wake up, start behaving like the human species and aim to take over the world.