Topic: Artificial intelligent - Ideas producer

Re: Artificial intelligent - Ideas producer

Reply #25
Isn't the Philosophical Zombie defined as soulless? :)

Re: Artificial intelligent - Ideas producer

Reply #26
I remember some years ago when philosophers (students and professors), and a few legal scholars, began talking about zombies... But (if I may presume) isn't a zombie a being derived from a human being by subtraction? A single motivating need, gaining sustenance, from a single source: us! Certainly a frightening conception, but not, to my way of thinking, a very interesting one.
The OP wondered whether a machine could be or become creative... Not too long ago, a specialized AI did produce a new proof of a Euclidean theorem — but it showed no joy in doing so, nor recognition of its accomplishment: It was merely a program operating according to the provisions given by its programmers.
It was the programmers that were surprised, and gladdened.
The confusion seems to be between new and unexpected... In what sense could an AI have expectations?

The idea[1] that mind is an emergent property of certain kinds of matter's complexity is no better established than the Behaviorism of B. F. Skinner. We know so little true psychology that Freudian Analysis is making a comeback. (But I suppose it should have been expected: There are still believing Scientologists! Their scientific underpinnings are the same...) At least, Cognitive Behavioral Therapy knows its place.

Have you considered the phenomenon of hypnotism? Surely (and by definition) your Philosophical Zombie couldn't be hypnotized; and I defy anyone to cogently posit an AI that could... :)
Yeah, I know...

 

Re: Artificial intelligent - Ideas producer

Reply #27
In some way the point has been all along whether movies like Terminator and Ex Machina represent a possibility that we should take seriously or we can treat them as mere fiction. Based on my metaphysics, concluding that there is exactly zero chance of AI waking up and taking over, I can calmly treat them as mere fiction.
You can rest easy knowing that the autonomous self-duplicating machines, programmed for and crucially capable of destroying everything, aren't conscious? You might want to think that one through some more. The distinction between "gained consciousness and decided to kill all humans" and "didn't gain consciousness, was programmed to kill enemies and started identifying everybody as enemies due to an error" or "didn't gain consciousness, but some maniac decided to program it to kill everybody" is hardly the point there.

Whereas you have trouble even following the timeline of AI. It has been with us for at least half a century or so. It is not hypothetical.
They may buzzword label it AI, but it's just some statistics and algorithms. You were talking about an AI on the level of a human. Pay attention to yourself and take the things you say seriously please, otherwise what are we even talking about. :)

They are not identical to humans in every way, as I explained. Clearly, p-zombies are impossible for you to imagine despite explanations.
One can imagine perfectly incoherent things. But the purpose of thought experiments is not the same as the purpose of science fiction. The problem is inherent and it has absolutely nothing to do with physicalism.

If a consciousness is necessary for human behavior, you can't have something that acts like a human when you take it away. It's incoherent to claim otherwise. By logical necessity, there will be a detectable difference. Not none. This has nothing to do with physicalism; it's inherent in the thought experiment. You just have to actually do it instead of taking your assumptions as a given. As long as the p-zombie recognizes it has qualia, either qualia or p-zombies are incoherent.

Clearly you have reasons for your stance and for rejecting other stances. Likely among those reasons is something like the idea that truth matters. Interestingly, the concept of truth as explained by any materialist is internally incoherent. For hardcore believers in evolution, survival should be the value that trumps truth any day. Evolutionarily, truth-pursuers are always a weak minority and extra rare in high places. The rulers are the powerful. In the animal world (and on consistent Darwinism there is no human world) truth-pursuers are not even a thing at all. On materialism, truth has no value and, for metaphysical consistency, must be construed as non-existent.
That's akin to saying the abused child must cherish and perpetuate the abuse just because it happened. It's the kind of reasoning you might get if your epistemology is junk.

But it's not true even if we grant that survival is the greatest good. If you don't care about the truth you'll inevitably make bad choices. An argument that anything else is the fittest for survival in the long run is doomed to fail. Our instincts only go so far; they should be taken as valuable input, but they can't reason, prod and consider. What you describe is merely sufficient for survival, which is a far cry from being the fittest for survival.

The idea[1] that mind is an emergent property of certain kinds of matter's complexity is no better established than the Behaviorism of B. F. Skinner. We know so little true psychology that Freudian Analysis is making a comeback. (But I suppose it should have been expected: There are still believing Scientologists! Their scientific underpinnings are the same...) At least, Cognitive Behavioral Therapy knows its place.
I believe zombies were originally posited as a counterargument to Behaviorism, weren't they? Which makes intuitive sense because Behaviorism is somewhat crude at its core,[2] but in that case one also wouldn't be taking the thought experiment seriously. Because you're not talking about a Behaviorist zombie imported through the back door, which could never be anything but unconvincing, but an actually perfect zombie.
Yeah, I know...
Though we should distinguish the caricatures painted by both opponents and proponents; in an important sense Behaviorism is simply true, otherwise you couldn't train animals.

Re: Artificial intelligent - Ideas producer

Reply #28
The distinction between "gained consciousness and decided to kill all humans" and "didn't gain consciousness, was programmed to kill enemies and started identifying everybody as enemies due to an error" or "didn't gain consciousness, but some maniac decided to program it to kill everybody" is hardly the point there.
For me the distinction is important. It makes quite a difference whether military robots decide to take over the world, as in the Terminator movie, or a human pushes the button.
https://www.youtube.com/watch?v=bh71TnJ0O6g

Edit: Oh, I get it now. From a survivalist perspective it only matters whether one is dead or alive. But if one attempts to be a philosopher, i.e. metaphysics and epistemology matter, then it is important to attribute the source of threat and level of risk correctly, and there is a difference between dying by one's own accident and by someone else's.

They may buzzword label it AI, but it's just some statistics and algorithms. You were talking about an AI on the level of a human.
Knowing what we know about the functioning of computers, we can tell that "AI on the level of a human" is the buzzword label and marketingspeak: fiction. Labelled AI or something else, it remains just some statistics and algorithms. The kind of AI that actually attempts to simulate a human (chatbots and the like) outputs what's been fed to it as its knowledge/learning database, with some small algorithmic variation. Trivial once you get to know how it works.
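To make the "statistics and algorithms" description concrete, here is a deliberately crude toy in Python (the corpus and names are invented for illustration; real chatbots are vastly more elaborate, but the basic shape, learning co-occurrence statistics from a corpus and then sampling from them, is the idea being described):

Code:
import random
from collections import defaultdict

# Toy "chatbot": a word-level Markov chain. It can only recombine
# fragments of whatever text it was fed, with some random variation.

def train(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=20):
    word, out = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))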

I maintain that computers have no qualitative difference compared to abacuses. You can keep building bigger and faster abacuses and link them all together into a magnificent worldwide "neural network" of a system (as we have with the internet), but it remains just a pile of abacuses. Granted, it is an organised pile, but the chance of it gaining consciousness, intelligence or anything of the like has not changed in the slightest. For its operations, it remains 100% dependent on human input and the quality of its output remains 100% dependent on human interpretation, consideration and reconsideration. That which happens between the input and output is qualitatively no different from an electricity-powered spindle.

One can imagine perfectly incoherent things. But the purpose of thought experiments is not the same as the purpose of science fiction. The problem is inherent and it has absolutely nothing to do with physicalism.
You gave a link to a serious website that recorded conceivability objections to the philosophical zombie. I take it that the objections were as serious as their authors could muster, but unfortunately all of them did a sloppy job. Conceivability is not easy to demolish, particularly in professional philosophy. It is easier to engage with arguments head-on.

For example, everything that you see in dreams can be "perfectly incoherent" in some sense (namely in some idiosyncratic sense that would not fly in professional philosophy) but it is not inconceivable - you just conceived of it in your dreams! As to philosophical zombie, sleepwalker is a real-world example of someone in a zombie state, so it's not just conceivable, but there are also real-world examples that work as functionally close analogies. There was not a single successful objection to the philosophical zombie thought experiment.

If a consciousness is necessary for human behavior, you can't have something that acts like a human when you take it away. It's incoherent to claim otherwise. By logical necessity, there will be a detectable difference.... You just have to actually do it instead of taking your assumptions as a given.
It should have been clear a few posts ago by now that this is irrelevant. You are again talking about real-life possibility versus conceivability. As I have pointed out, conceivability is fully there. Even real-life approximations are there to illustrate the point. Nothing is incoherent in the thought experiment. And it's a thought experiment, not a physics/biology lab experiment.

That's akin to saying the abused child must cherish and perpetuate the abuse just because it happened. It's the kind of reasoning you might get if your epistemology is junk.
Yes, I keep sincerely wondering how materialist epistemology can be something else than junk. How do you define abuse? For example, when a male lion becomes the leader of the pack, he eats the cubs of the previous leader. Natural in the animal world. As a Darwinian, what objections do you have if humans behaved the same?

What you describe is merely sufficient for survival, which is a far cry from being the fittest for survival.
There's this fun little nuance that physical life span is brutish and short. Even the fittest for survival lives at most a century; that is the upper limit of the absolute fittest. Whereas philosophers tend to take truth as eternal and immortal, so it is qualitatively different from survival.

Re: Artificial intelligent - Ideas producer

Reply #29
Quote
For me the distinction is important. It makes quite a difference whether military robots decide to take over the world, as in the Terminator movie, or a human pushes the button.
The Terminator may well be a weak p-zombie though, and I mean that even within the confines of the movie.[1] Regardless of whether it gained consciousness and decided to kill all humans or whether it suffered from the proverbial Y2K bug and its programming decided to kill all humans, its actions will be extremely similar if not identical. You'll be in for a bad time.

I suppose you might potentially consider it a war crime to bomb a factory full of innocent Terminators if they are conscious and of human-level intelligence, although in the movie they're more like heavily armed pigeons at best/worst. In this context it might be worth pointing out that the Cylons negotiated a peace treaty with the humans.

Quote
It should have been clear a few posts ago by now that this is irrelevant. You are again talking about real-life possibility versus conceivability. As I have pointed out, conceivability is fully there. Even real-life approximations are there to illustrate the point. Nothing is incoherent in the thought experiment. And it's a thought experiment, not a physics/biology lab experiment.
It was obvious literally decades ago (I guess I'm getting old) that this is what the thought experiment claims. Repeating it over and over doesn't make it so. This has nothing to do with possibility. Something incoherent can easily be possible and something coherent can be impossible, unless you define the terms against reality.

Quote
For example, everything that you see in dreams can be "perfectly incoherent" in some sense (namely in some idiosyncratic sense that would not fly in professional philosophy) but it is not inconceivable - you just conceived of it in your dreams! As to philosophical zombie, sleepwalker is a real-world example of someone in a zombie state, so it's not just conceivable, but there are also real-world examples that work as functionally close analogies. There was not a single successful objection to the philosophical zombie thought experiment.
There are much closer real world analogies than sleep walkers. People who lack some qualia are a dime a dozen, and they usually just don't realize it.

But using the word conceivable that way is meaningless. Of course I can conceive of it in that sense. But then you're ignoring the definition of qualia from within the suppositions of the thought experiment. And once you stop dreaming you realize that qualia are apparently not the relevant aspect, but that there must be something else, let's dub it ersia, that actually relates to consciousness. Or in short, either qualia or p-zombies as defined in the experiment are incoherent.

Yes, I keep sincerely wondering how materialist epistemology can be something else than junk. How do you define abuse? For example, when a male lion becomes the leader of the pack, he eats the cubs of the previous leader. Natural in the animal world. As a Darwinian, what objections do you have if humans behaved the same?
At its most base level you would despise yourself as a hollow villainous shell of a human being, depriving yourself and others of our desire to live a fulfilling life in a safe environment. Any rational being would realize they're sabotaging their own chance at satisfaction states by living like that, and in this case it requires no thought at all because you'll be afraid for your life until someone manages to kill you. It's hardly subtle, is it. ;)
The credulous opinions regarding its consciousness from some protagonist carry little weight. All they've ever done is fight the thing.

Re: Artificial intelligent - Ideas producer

Reply #30
There are those who actually have believed for decades that "AI" is a threat, and that seems to be a prevalent fear in Silicon Valley. Some also wanted to know if we were someone else's simulation.

But for others this could be a useful distraction from the real threat from the owners of said systems. If we are afraid of artificially superintelligent, supermalevolent superpowers, the likes of Roko's basilisk, we might not pay attention to how much Big Data the Big Players have gathered on us.

I don't worry about the systems as such, and by looking for "intelligence" (or "malevolence") we are looking in the wrong places. These are potentially very useful tools to gain power, wealth and prominence at the cost of others. It isn't "intelligence" that makes them useful and/or dangerous, but their capability to take advantage of data collections for benevolent or malevolent uses.

This XKCD will have to go on repeat.

 



Re: Artificial intelligent - Ideas producer

Reply #31
Yup, exactly.

You don't even need any AI at all for a doom scenario along those lines; any sufficiently advanced robot will do the trick. You merely need the proverbial gray goo or paperclip factory. Consider a school of self-replicating robots that consume plastic in the ocean to clean it up, operating on only a few very simple directives. Try to stick to the school, don't get too close to someone else in the school, and process any plastic you come across. At first it's all great, but one day years, decades, centuries or perhaps millennia later they run out of plastic and what do they do? Worst case scenario they start consuming all life to keep going. Or even just all algae or something. Did someone design it to be capable of both? Did something go awry during replication? Or even in a slightly less disastrous scenario, maybe suddenly all plastic is worthless, instead of just the bit you wanted to get rid of.
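For what it's worth, here's how little code such directives take; a toy sketch in Python with every name, distance and threshold invented purely for illustration. The point is that nothing in the rules themselves says what counts as plastic or what to do when it runs out:

Code:
import random

def looks_like_plastic(item):
    # Hypothetical classifier; the whole doom scenario lives in here.
    return item["kind"] == "plastic"

class CleanerBot:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def is_near(self, item):
        return abs(item["x"] - self.x) + abs(item["y"] - self.y) < 1.0

    def step(self, school, debris):
        # Directive 1: stay with the school (drift toward its centre).
        cx = sum(b.x for b in school) / len(school)
        cy = sum(b.y for b in school) / len(school)
        self.x += 0.1 * (cx - self.x)
        self.y += 0.1 * (cy - self.y)
        # Directive 2: don't crowd the neighbours (jitter away if too close).
        if any(b is not self and abs(b.x - self.x) + abs(b.y - self.y) < 1.0 for b in school):
            self.x += random.uniform(-0.5, 0.5)
            self.y += random.uniform(-0.5, 0.5)
        # Directive 3: process any plastic you come across.
        debris[:] = [d for d in debris
                     if not (self.is_near(d) and looks_like_plastic(d))]

# Tiny usage example: ten bots, one piece of debris.
school = [CleanerBot(random.random(), random.random()) for _ in range(10)]
debris = [{"x": 0.5, "y": 0.5, "kind": "plastic"}]
for _ in range(100):
    for bot in school:
        bot.step(school, debris)
print(len(debris), "pieces of debris left")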

Re: Artificial intelligent - Ideas producer

Reply #32
Something incoherent can easily be possible and something coherent can be impossible, unless you define the terms against reality.
Okay, so that's your main rub, I guess, that the terms appear to be defined against reality. It is the wrong rub, fundamentally wrong. First, we are talking about a thought experiment, for cryssakes. Second, any argument or statement by a competing philosopher who does not share your own supposedly real-world-grounded common-sense presuppositions would more or less appear to be defining some or all terms against reality. Throwing whatever one thinks is reality out of the window for the purposes of entertaining an alternative train of thought is an everyday affair in philosophy.

There are much closer real world analogies than sleep walkers. People who lack some qualia are a dime a dozen, and they usually just don't realize it.
And you do not call them incoherent, do you?

Or in short, either qualia or p-zombies as defined in the experiment are incoherent.
It's true that in my own rendition I am not talking strictly about qualia as they were originally laid out. However, the original has it covered, as Chalmers defines the easy problem of consciousness versus the hard. Qualia and the philosophical zombies pertain to the easy form, sense-perception, illustrating its distinction from the hard: consciousness proper, including internal experience and self-awareness. But in my opinion, the so-called easy problem is as hard as the hard problem. Anyway, no incoherence either way.

At its most base level you would despise yourself as a hollow villainous shell of a human being, depriving yourself and others of our desire to live a fulfilling life in a safe environment. Any rational being would realize they're sabotaging their own chance at satisfaction states by living like that, and in this case it requires no thought at all because you'll be afraid for your life until someone manages to kill you. It's hardly subtle, is it. ;)
And I thought I was raised in a safe environment, sheltered from hardships. Your snowflakeness totally beats mine. Guess you have not seen times (not some occasional week or such, but at least a decade in a row) when shootings on the streets and car bombs and robberies are a constant everyday normalcy. And no, violent criminal street-order does not subside by some people in the gang realising, "Hey, aren't we depriving ourselves and others of a fulfilling life in a safe environment where we could live longer, unafraid of someone killing us?" The realisation may be extremely non-subtle and rational and the desire quite strong, but it is also ineffective. The way widespread organised crime stops is when the government wakes up to the problem and steps in with superior firepower, blasting away some more lives for a while.

Snowflakeness is inconsistent with Darwinianism. Straightforwardly so, nothing subtle about it. Ethics, empathy etc. may be compatible with Darwinianism in the confines of the in-group, but no further. On Darwinianism, there is no way to advocate for truth and decency as universal norms. But I'm not surprised. Every philosophical Darwinian is inconsistent. Darwinianism should have remained a theory in biology. It did not deserve to become a school of thought in philosophy in the first place.

Consider a school of self-replicating robots that consume plastic in the ocean to clean it up, operating on only a few very simple directives. Try to stick to the school, don't get too close to someone else in the school, and process any plastic you come across. At first it's all great, but one day years, decades, centuries or perhaps millennia later they run out of plastic and what do they do? Worst case scenario they start consuming all life to keep going. Or even just all algae or something. Did someone design it to be capable of both? Did something go awry during replication? Or even in a slightly less disastrous scenario, maybe suddenly all plastic is worthless, instead of just the bit you wanted to get rid of.
On the defined terms, shouldn't they react only to plastic and otherwise just idle in standby mode? They would start eating other things only if plastic is ambiguously defined in the system or there's also the priority of survival that would make them nibble at something else in the absence of plastic.

Re: Artificial intelligent - Ideas producer

Reply #33
Okay, so that's your main rub, I guess, that the terms appear to be defined against reality. It is the wrong rub, fundamentally wrong. First, we are talking about a thought experiment, for cryssakes. Second, any argument or statement by a competing philosopher who does not share your own supposedly real-world-grounded common-sense presuppositions would more or less appear to be defining some or all terms against reality. Throwing whatever one thinks is reality out of the window for the purposes of entertaining an alternative train of thought is an everyday affair in philosophy.
I said literally the exact opposite of what you somehow think I said and what you keep incorrectly claiming I base my argument on.

"Something incoherent can easily be possible and something coherent can be impossible, unless you define the terms against reality."

In other words, something being coherent or incoherent is affected by your presuppositions, not by reality.

And you do not call them incoherent, do you?
Weak zombies aren't incoherent.

Snowflakeness is inconsistent with Darwinianism. Straightforwardly so, nothing subtle about it. Ethics, empathy etc. may be compatible with Darwinianism in the confines of the in-group, but no further. On Darwinianism, there is no way to advocate for truth and decency as universal norms. But I'm not surprised. Every philosophical Darwinian is inconsistent. Darwinianism should have remained a theory in biology. It did not deserve to become a school of thought in philosophy in the first place.
To summarize, Darwinism should be what it is, and not what you came up with. Got it.

On the defined terms, shouldn't they react only to plastic and otherwise just idle in standby mode? They would start eating other things only if plastic is ambiguously defined in the system or there's also the priority of survival that would make them nibble at something else in the absence of plastic.
That's a thing that is obvious in words, but not in the actual construction and programming of robots. We all have microplastics in our body, so an obvious potential failure state is that in the absence of what we think of as plastic, it detects microplastics as plastic. And in any case it's very much not a thing to offhandedly dismiss while you create such a robot.
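A hypothetical illustration of that failure state (the threshold, numbers and field names are all invented; no claim about how a real detector would be built):

Code:
# Hypothetical plastic detector with a fixed concentration threshold.
PLASTIC_PPM_THRESHOLD = 5.0   # invented number

def is_plastic(sample):
    return sample["polymer_ppm"] > PLASTIC_PPM_THRESHOLD

bottle = {"polymer_ppm": 900.0}   # actual debris
fish   = {"polymer_ppm": 12.0}    # living tissue with microplastics in it

print(is_plastic(bottle))  # True, as intended
print(is_plastic(fish))    # also True: once the bottles are gone,
                           # the robots "process" whatever still matches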

Re: Artificial intelligent - Ideas producer

Reply #34
I said literally the exact opposite of what you somehow think I said and what you keep incorrectly claiming I base my argument on.

"Something incoherent can easily be possible and something coherent can be impossible, unless you define the terms against reality."

In other words, something being coherent or incoherent is affected by your presuppositions, not by reality.
Well, thanks for the clarification attempt. I'm rather thick on some things when in philosophy mode. I still cannot parse the original statement after many readings. In philosophy you in fact cannot define terms for or against reality, as if reality were something different than yet another term to be defined. In philosophy, all you can do is define the terms, "reality" among them, starting from premises and elaborating from there. Not joking, by the way.

To summarize, Darwinism should be what it is, and not what you came up with. Got it.
Darwinianism as a philosophical school of thought has been waning for a while now. I did not come up with it. In the Soviet Union it went rather strong, even though in my time there emerged a new, exciting semi-replacement for it: postmodernism, another theory that never deserved to enter the realm of philosophy. Darwinism is a theory in biology, postmodernism is a style of art and art critique. Both broke out of their specialised limits into philosophy and made philosophy look worse. Anyway, if you are not a whole-hearted Darwinian philosopher, all inconsistencies are forgivable.

That's a thing that is obvious in words, but not in the actual construction and programming of robots. We all have microplastics in our body, so an obvious potential failure state is that in the absence of what we think of as plastic, it detects microplastics as plastic. And in any case it's very much not a thing to offhandedly dismiss while you create such a robot.
Thanks, a good point on engineerial matters.

Re: Artificial intelligent - Ideas producer

Reply #35
Well, thanks for the clarification attempt. I'm rather thick on some things when in philosophy mode. I still cannot parse the original statement after many readings. In philosophy you in fact cannot define terms for or against reality, as if reality were something different than yet another term to be defined. In philosophy, all you can do is define the terms, "reality" among them, starting from premises and elaborating from there. Not joking, by the way.
I used it as shorthand for our model(s) of reality. It's not the same thing as reality of course, mea culpa.

Darwinianism as a philosophical school of thought has been waning for a while now. I did not come up with it. In the Soviet Union it went rather strong, even though in my time there emerged a new, exciting semi-replacement for it: postmodernism, another theory that never deserved to enter the realm of philosophy. Darwinism is a theory in biology, postmodernism is a style of art and art critique. Both broke out of their specialised limits into philosophy and made philosophy look worse. Anyway, if you are not a whole-hearted Darwinian philosopher, all inconsistencies are forgivable.
Is that Darwinism as in the rather peculiar Social "Darwinism"?

Re: Artificial intelligent - Ideas producer

Reply #36
Is that Darwinism as in the rather peculiar Social "Darwinism"?
That too, but for most of the last century you could not avoid it whatever you studied.

Universal Darwinism aims to formulate a generalized version of the mechanisms of variation, selection and heredity proposed by Charles Darwin, so that they can apply to explain evolution in a wide variety of other domains, including psychology, linguistics, economics, culture, medicine, computer science, and physics.
All these Darwinian theories in various fields were underpinned by one or another of Darwin's catchphrases. One of the latest instances of Darwinian ideas adopted for a whole different purpose was Richard Dawkins' theory about how ideas may spread on the analogy of genes. Dawkins' contribution to pop culture in this connection is the word "meme".

Anyway, the Four Horsemen of New Atheism were the last Darwinian push. Otherwise it was waning already prior to that.

Re: Artificial intelligent - Ideas producer

Reply #37
In whimsical news, in some sense ChatGPT may be better at French than I am.[1]

https://youtu.be/wHgkIhDiEU8
Though I've neither tried to nor am I interested in attaining this certificate, so it's also possible that it's not.

Re: Artificial intelligent - Ideas producer

Reply #38
You may refer to evolution as Darwinism if you like, but it is odd unless it is either in a historical context, as a way to contrast Darwin's ideas with someone else's, e.g. Lamarck, or, as in "Social Darwinism", ideas inspired by Darwin. It is a bit like referring to General Relativity as Einsteinism. Again, you could if you wanted to contrast to later (or earlier) theories of gravity, just like you could talk about Newtonian physics.

If you think evolution is "waning", that is your prerogative. It is evolving.

Re: Artificial intelligent - Ideas producer

Reply #39
You may refer to evolution as Darwinism if you like, but it is odd unless it is either in a historical context, as a way to contrast Darwin's ideas with someone else's...
Of course I was not referring to evolution, but engaging with ideas presented in the discussion. You may deny that Universal Darwinism is a thing; that's your prerogative. Every science last century obtained its own subdisciplines built around one or another Darwinian catchphrase. Probably all the subdisciplines and theories in various sciences that are called evolutionary, behavioral, and cognitive are applied Darwinianism, so to speak.

Originally my gripe was just with all those eager and enthusiastic Darwinians who applied the ideas externally all over the place, but never internally. But then I read Darwin's book too, and I saw that much can be blamed directly on Darwin. The book's title is false advertising. It is not about the origin, but about variation. When he gets to defining species, he says essentially that there are no species, it is just variation. So it's wishy-washy on definitional points. To illustrate descent by natural selection, he uses linguistics as an analogy, but in linguistics languages are quite clearly defined. Moreover, even though "descent" of languages can be studied and deduced, it does not result in the conclusion that all languages come from a single original one, which is Darwin's attempted conclusion about all species. The mainstream view in biology is that unrelated species do not exist, they are all related. 

Basically, my objection is to bad philosophy first, and bad logic and science second. Scientism (philosophical conclusions based on some scientific theory) is definitely a thing, and Darwinian derivations have been the worst offenders during my lifetime. Another offender is quantum physics, for example the assumption that the double slit experiment disproves the law of excluded middle in logic.

Re: Artificial intelligent - Ideas producer

Reply #40
the assumption that the double slit experiment disproves the law of excluded middle in logic
Ah, but you'll note that Reichenbach's three-valued logic was promptly consigned to history's ash can!

Re: Artificial intelligent - Ideas producer

Reply #41
Active countermeasures to combat AI-generated essays and dissertations are now routine in academia, I have heard. The way the countermeasures work: the AI logs user activity and interaction. Everything is stored with timestamps and IP addresses. The logs are shared with authorities and whoever else pays.

Did anyone expect privacy?

Re: Artificial intelligent - Ideas producer

Reply #42
Plagiarism detection stuff has been in use for quite a while btw.

Re: Artificial intelligent - Ideas producer

Reply #43
Plagiarism detection stuff has been in use for quite a while btw.
Earlier I had heard of AI that answers the question, "Is this thing generated by an AI?" But this time it is collecting and selling user data and logs, and it's not even black market.

Searching through user data is much more than plagiarism detection. Prior to AI-era, plagiarism detection was just searching through the database of pre-existing digitised texts of science and other publications.

Re: Artificial intelligent - Ideas producer

Reply #44
The existing digitized texts might come surprisingly close to covering most of the papers and such that students produce, or at the very least all bachelor's and master's theses. That constitutes a substantially bigger body of work than merely what is published in academic journals. I think many a university and college makes you agree that anything you upload might be submitted to one of those firms. It's possible that we're a bit more ethically conscious in (some parts of) Europe than in America of course, but basic pattern matching was a bit passé a decade ago. :-) I suspect the ethics are likely to be more about keeping things internal vs semi-public than about whether to do it at all, given big scandals like the ones over in Germany a decade ago.

But while I don't quite know how the modern commercial software presents its results, well over a decade ago there was already very effective style detection. As an example, I used stylo to analyze the authorship of the Twelve Virtues, a Middle Dutch text. There was a surprisingly clear distinction between various parts of the text, suggesting a different authorship situation than the single author traditionally credited, and, while it's not mentioned in the traditional literature, the latter part turns out to have been translation-copied almost straight from Eckhart when you analyze the contents.

These stylometric analyses work better with more data; on the level of a couple of sentences they're not as meaningful. But at the same time, it wouldn't surprise me if AI-generated sentences or paragraphs stood out like a sore thumb under 15- or 30-year-old statistical analysis. "Could this be AI?" is merely a subcategory of "could this be a different author?"

Of course the next step would be that you give the AI a sample of your text and tell it to write more in the same style. It might at least potentially be able to do a much better job than a person, because we tend to focus on the vocabulary, while what actually betrays us most consistently is our use of function words, as they're called in the requisite jargon.[1]
The likes of determiners and copulas; see Horton, Thomas Bolton (1987), "The effectiveness of the stylometry of function words in discriminating between Shakespeare and Fletcher", https://era.ed.ac.uk/handle/1842/6638, as a fairly random result showing how far back this goes, but more importantly, for example, Kestemont, Mike (2014), "Function Words in Authorship Attribution. From Black Magic to Theory?", https://www.researchgate.net/publication/301404098_Function_Words_in_Authorship_Attribution_From_Black_Magic_to_Theory.
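As a rough illustration of the function-word idea (not what stylo or the commercial tools actually do; the word list, file names and distance measure below are simplified stand-ins I made up for the sketch):

Code:
from collections import Counter
import math

FUNCTION_WORDS = ["the", "a", "an", "and", "but", "or", "of", "to",
                  "in", "on", "is", "was", "that", "it", "not"]

def profile(text):
    # Relative frequency of each function word in the text.
    words = text.lower().split()
    counts = Counter(w for w in words if w in FUNCTION_WORDS)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def distance(p, q):
    # Euclidean distance between two frequency profiles.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))

# Hypothetical input files: two parts of a disputed text and a comparison sample.
part_one = open("part_one.txt").read()
part_two = open("part_two.txt").read()
sample   = open("known_author.txt").read()

print("part one vs part two:", distance(profile(part_one), profile(part_two)))
print("part one vs known author:", distance(profile(part_one), profile(sample)))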

Re: Artificial intelligent - Ideas producer

Reply #45
The problem with using generative adversarial networks to detect generative adversarial networks is that they are trained on each other. That arms race could lead to "AI" being as useless at detecting/evading "AI" as the rest of us. (GANs have their limits; the question is where they are.)
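A toy sketch of that arms race (PyTorch on invented 1-D data; all layer sizes and numbers are made up). The detector and the generator are literally each other's training signal, so if training converges the detector's verdicts drift toward a coin flip:

Code:
import torch
import torch.nn as nn

# Toy GAN training loop: the detector (discriminator) is only ever as good
# as the generator it was trained against, and vice versa.

real_data = torch.randn(1000, 8) + 3.0   # stand-in for "genuine" samples

G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 8))   # generator
D = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # detector

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_data[torch.randint(0, 1000, (64,))]
    fake = G(torch.randn(64, 4))

    # Detector update: learn to tell real from generated.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: learn to fool the current detector.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Near equilibrium, the detector's average score on fresh fakes hovers
# around 0.5: it can no longer tell the difference.
print(torch.sigmoid(D(G(torch.randn(64, 4)))).mean().item())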

If it is done in realtime, timestamping would have limited utility (but should prevent retroactive rewriting). And we couldn't stop a "hitherto undiscovered Shakespeare play" from appearing anyway.

Re: Artificial intelligent - Ideas producer

Reply #46
AI in AML:[1] the AI blocked a customer's bank account because the account had a transfer labelled "Iran vakuutus" (Ira's insurance in Finnish; Ira is the customer's dog). A week later, the account was still blocked.

Source: https://yle.fi/a/74-20033476

Likely the deployers of AI for AML purposes strongly overestimated what AI is capable of. They assume that AI gets everything right, so whatever the AI blocks remains blocked, and bank employees' manual access to customers' bank accounts has been removed (and employees possibly fired). This is the way the world goes under: not because AI is intelligent, but because people attribute intelligence to it.
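We don't know what the bank's screening system actually does, but a naive sanctions-list match is enough to reproduce this kind of false positive. A purely hypothetical sketch (the list and the matching rule are invented):

Code:
# Purely hypothetical name-screening rule; list and matching are invented.
SANCTIONED_TERMS = ["iran", "north korea"]

def flag_transfer(reference: str) -> bool:
    text = reference.lower()
    return any(term in text for term in SANCTIONED_TERMS)

# "Iran" here is just the Finnish genitive of the dog's name "Ira",
# but a context-free match can't know that.
print(flag_transfer("Iran vakuutus"))    # True: account gets blocked
print(flag_transfer("Koiran vakuutus"))  # also True ("the dog's insurance"!)
print(flag_transfer("Kissan vakuutus"))  # False ("the cat's insurance")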

I propose that all the world's employees unite and fight this idiocy. Tell your bosses that they are fired and replaced by Management Decision Generator AI.
AML = anti-money laundering, i.e. transaction monitoring by banks to detect suspicious transactions.

Re: Artificial intelligent - Ideas producer

Reply #47
The new-found propensity for advanced AI to "hallucinate" is not too surprising. Lawyers are already finding out how useful these can be...
See Volokh's article in Reason!
ChatGPT would be expelled from its L1 institution... :)

UPDATE:
Quote
At the time I used ChatGPT for this case, I understood that it worked essentially like a highly sophisticated search engine where users could enter search queries and ChatGPT would provide answers in natural language based on publicly available information.

I realize now that my understanding of how ChatGPT worked was wrong. Had I understood what ChatGPT is or how it actually worked, I would have never used it to perform legal research.

The key point is that ChatGPT functions as an attempt to pass the Turing Test. But of course we know its successes have nothing to do with Artificial Intelligence! :)

Re: Artificial intelligent - Ideas producer

Reply #48
Civil servant robot ‘commits suicide’, deadly plunge under probe

A first-of-its-kind incident has shocked the world after a civil servant robot at Gumi City Council in South Korea was found unresponsive after what appears to be a deliberate plunge down a two-meter staircase.

Some experts have suggested that the robot may have experienced an emotional breakdown due to the stress of its workload, while others believe a technical malfunction could be to blame.
I guess it makes sense. If robots can provide emotional support, friendship and what not, then they can emotionally break down too.

Isn't it already overdue to start giving them human rights etc? Corporations are people, legally, so why not robots?