
Messages - Frenzie

77
Hobbies & Entertainment / Re: Travelling and such
It comes across as unremarkable to me, but it's possible that it's slightly tighter than even the smallest here. Here's a link to a streetview in Belgium for a crossing on a small street. Here's another one near it. And another one.

Here's a link to the Netherlands. And another showing a bike path next to the road.
78
DnD Central / Re: Grammatical Mutterings
School, at least in a form closer to its implementation in, for example, the Benelux, Estonia, Finland and Japan than in the United States (according to various rankings),[1] is clearly beneficial to society. All citizens should have certain basic skills; otherwise society will be much worse to live in.

In context I was talking about how we're first taught the intricacies of these distinctions at around seven or eight years old, right alongside learning how to read and write. The very basics of reading and writing come a year prior. Then we spend another decade working with various genres of texts and enhancing our understanding. What, in other words, did these kids learn in English class? But the thing is, I've seen lesson plans for English from America that were excellent. Nothing to suggest the kids going through that system wouldn't be able to articulate the difference between a novel and a book.
For whatever they're worth, see e.g. https://www.oecd.org/education/universal-basic-skills-9789264234833-en.htm

Though presumably if you split the US up by state you'd get a picture closer to Europe. I also realize that Finland and Japan may well take rather different approaches. The Finnish approach at least superficially comes across as much more appealing, because it seems to do well on average while people like you and me should be mostly fine there as well; but as I said, that's a superficial impression.
79
DnD Central / Re: Grammatical Mutterings
Quote
But the teacher should not rely on students' self-learning. It's part of the teacher's job to verify that the concepts have reached the student by giving tests that are different from the introductory examples.
I still remember how I was one of the few who managed to add something like 97+6, whereas many made up random answers or just gave up. They either didn't think a number over 100, which we'd never explicitly learned about, was allowed, or they somehow couldn't conceive of it despite knowing about 1–9 and about 10–99. I'm not sure why I particularly recall that example, but it's illustrative of pretty much every subject.

But granted, regardless of whether you'd already figured it out for yourself, such matters as books, novels, short stories, fiction and non-fiction are a standard part of learning how to read and use the library, and it's not like school didn't dot some i's and cross some t's for me. Going more in-depth into the differences between tabloids and quality newspapers was indeed illuminating, even though the differences as such were obvious. Conversely, we also learned all kinds of bizarre robotic methods for "understanding" and "analyzing" non-fiction texts.
81
DnD Central / Re: Grammatical Mutterings
Quote
But I mean the concepts. Why else teach vocabulary than to elucidate concepts?
To a very large extent you learn vocabulary to be able to talk about things with other people. The concepts are something you'll generally have figured out by yourself years prior. If you're lucky there'll be some deeper analysis, but the steps in school are slow and late. It's not like they tell you about picaresque novels or something when you're six years old.
82
DnD Central / Re: Grammatical Mutterings
I should hardly think the distinction between book and novel[1] is something one has to be purposefully taught. You have books like encyclopedias, books about history, dinosaurs, biology and physics, and then you have storybooks. At some point, hopefully well over a decade before you're eighteen years old, you osmotically learn that thicker storybooks are called novels, and that non-fiction books are not. It's the fancier vocabulary like fiction and non-fiction that you might have to learn, because it's ever so slightly more in the domain of jargon. Note that I explicitly mean the vocabulary, not the concept.

On the flip side, there are some supposed non-fiction books that might as well be novels. :)
Or rather, boek and roman, as @jax pointed out.
84
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
Well, thanks for the clarification attempt. I'm rather thick on some things when in philosophy mode. I still cannot parse the original statement after many readings. In philosophy you in fact cannot define terms for or against reality, as if reality were something different than yet another term to be defined. In philosophy, all you can do is define the terms, "reality" among them, starting from premises and elaborating from there. Not joking, by the way.
I used it as shorthand for our model(s) of reality. It's not the same thing as reality of course, mea culpa.

Quote
Darwinianism as a philosophical school of thought has been waning for a while now. I did not come up with it. In the Soviet Union it went rather strong, even though in my times there emerged a new exciting semi-replacement to it - postmodernism, another theory that never deserved to enter the realm of philosophy. Darwinism is a theory in biology, postmodernism is a style of art and art critique. Both broke out of their specialised limits into philosophy and made philosophy look worse. Anyway, if you are not a whole-hearted Darwinian philosopher, all inconsistencies are forgivable.
Is that Darwinism as in the rather peculiar Social "Darwinism"?
85
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
Okay, so that's your main rub, I guess, that the terms appear to be defined against reality. It is the wrong rub, fundamentally wrong. First, we are talking about a thought experiment, for cryssakes. Second, any argument or statement by a competing philosopher who does not share your own supposedly real-world-grounded common-sense presuppositions would more or less appear to be defining some or all terms against reality. Throwing whatever one thinks is reality out of the window for the purposes of entertaining an alternative train of thought is an everyday affair in philosophy.
I said literally the exact opposite of what you somehow think I said and what you keep incorrectly claiming I base my argument on.

"Something incoherent can easily be possible and something coherent can be impossible, unless you define the terms against reality."

In other words, something being coherent or incoherent is affected by your presuppositions, not by reality.

Quote
And you do not call them incoherent, do you?
Weak zombies aren't incoherent.

Quote
Snowflakeness is inconsistent with Darwinianism. Straightforwardly so, nothing subtle about it. Ethics, empathy etc. may be compatible with Darwinianism in the confines of the in-group, but no further. On Darwinianism, there is no way to advocate for truth and decency as universal norms. But I'm not surprised. Every philosophical Darwinian is inconsistent. Darwinianism should have remained a theory in biology. It did not deserve to become a school of thought in philosophy in the first place.
To summarize, Darwinism should be what it is, and not what you came up with. Got it.

Quote
On the defined terms, shouldn't they react only to plastic and otherwise just idle in standby mode? They would start eating other things only if plastic is ambiguously defined in the system or there's also the priority of survival that would make them nibble at something else in the absence of plastic.
That's obvious in words, but not in the actual construction and programming of robots. We all have microplastics in our bodies, so an obvious potential failure state is that, in the absence of what we think of as plastic, the robot detects microplastics as plastic. And in any case it's very much not a thing to dismiss offhandedly while you create such a robot.
86
DnD Central / Re: Artificial intelligent - Ideas producer
Yup, exactly.

You don't even need any AI at all for a doom scenario along those lines; any sufficiently advanced robot will do the trick. You merely need the proverbial gray goo or paperclip factory. Consider a school of self-replicating robots that consume plastic in the ocean to clean it up, operating on only a few very simple directives (sketched below): try to stick to the school, don't get too close to anyone else in the school, and process any plastic you come across. At first it's all great, but one day, years, decades, centuries or perhaps millennia later, they run out of plastic, and what do they do? Worst case scenario, they start consuming all life to keep going. Or even just all algae or something. Did someone design it to be capable of both? Did something go awry during replication? Or, in a slightly less disastrous scenario, maybe all plastic suddenly becomes worthless, instead of just the bit you wanted to get rid of.
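To make those directives concrete, here's a minimal toy sketch in Python. Everything in it, the names, the thresholds, the one-dimensional "ocean", is hypothetical and purely illustrative, but note where the failure mode lives: "plastic" is whatever the classifier accepts, so a fish carrying microplastics qualifies the moment the waste runs out.

```python
import math

COHESION_RADIUS = 50.0    # directive 1: try to stick to the school
SEPARATION_RADIUS = 0.5   # directive 2: don't get too close to a peer

def looks_like_plastic(obj):
    # The weak point of directive 3: "plastic" is whatever this
    # classifier accepts, regardless of the designer's intent.
    return "plastic" in obj["tags"]

def step(me, school, objects):
    # Directive 1: drift back toward the school's centre if too far out.
    centre = sum(p["x"] for p in school) / len(school)
    if abs(me["x"] - centre) > COHESION_RADIUS:
        me["x"] += math.copysign(1.0, centre - me["x"])

    # Directive 2: back away from the nearest peer if too close.
    peer = min((p for p in school if p is not me),
               key=lambda p: abs(p["x"] - me["x"]))
    if abs(peer["x"] - me["x"]) < SEPARATION_RADIUS:
        me["x"] += math.copysign(1.0, me["x"] - peer["x"])

    # Directive 3: head for the nearest "plastic" and process it.
    targets = [o for o in objects if looks_like_plastic(o)]
    if targets:
        nearest = min(targets, key=lambda o: abs(o["x"] - me["x"]))
        if abs(nearest["x"] - me["x"]) < 1.5:
            objects.remove(nearest)   # processed
        else:
            me["x"] += math.copysign(1.0, nearest["x"] - me["x"])

# One piece of waste and one fish carrying microplastics: the
# classifier accepts both, so once the waste is gone, the fish is next.
school = [{"x": 0.0}, {"x": 3.0}, {"x": 7.0}]
objects = [{"x": 5.0, "tags": {"plastic", "waste"}},
           {"x": 9.0, "tags": {"plastic", "alive"}}]
for _ in range(30):
    for robot in school:
        step(robot, school, objects)
print(len(objects), "objects left")   # 0: the fish got processed too
```

In words the robots "idle in standby" once the plastic is gone; in code their behaviour is exactly as broad as looks_like_plastic.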
87
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
For me the distinction is important. It is quite a difference whether military robots decide to take over the world as in the Terminator movie or a human pushes the button
The Terminator may well be a weak p-zombie though, and I mean that even within the confines of the movie.[1] Regardless of whether it gained consciousness and decided to kill all humans, or whether it suffered from the proverbial Y2K bug and its programming decided to kill all humans, its actions will be extremely similar if not identical. You'll be in for a bad time.

I suppose you might potentially consider it a war crime to bomb a factory full of innocent Terminators if they are conscious and of human-level intelligence, although in the movie they're more like heavily armed pigeons at best/worst. In this context it might be worth pointing out that the Cylons negotiated a peace treaty with the humans.

Quote
It should have been clear a few posts ago by now that this is irrelevant. You are again talking about real-life possibility versus conceivability. As I have pointed out, conceivability is fully there. Even real-life approximations are there to illustrate the point. Nothing is incoherent in the thought experiment. And it's a thought experiment, not a physics/biology lab experiment.
It was obvious literal decades ago (I guess I'm getting old) that this is what the thought experiment claims. Repeating it over and over doesn't make it so. This has nothing to do with possibility. Something incoherent can easily be possible and something coherent can be impossible, unless you define the terms against reality.

Quote
For example, everything that you see in dreams can be "perfectly incoherent" in some sense (namely in some idiosyncratic sense that would not fly in professional philosophy) but it is not inconceivable - you just conceived of it in your dreams! As to philosophical zombie, sleepwalker is a real-world example of someone in a zombie state, so it's not just conceivable, but there are also real-world examples that work as functionally close analogies. There was not a single successful objection to the philosophical zombie thought experiment.
There are much closer real-world analogies than sleepwalkers. People who lack some qualia are a dime a dozen, and they usually just don't realize it.

But using the word conceivable that way is meaningless. Of course I can conceive of it in that sense. But then you're ignoring the definition of qualia from within the suppositions of the thought experiment. And once you stop dreaming you realize that qualia are apparently not the relevant aspect, but that there must be something else, let's dub it ersia, that actually relates to consciousness. Or in short, either qualia or p-zombies as defined in the experiment are incoherent.

Quote
Yes, I keep sincerely wondering how materialist epistemology can be something else than junk. How do you define abuse? For example, when a male lion becomes the leader of the pack, he eats the cubs of the previous leader. Natural in the animal world. As a Darwinian, what objections do you have if humans behaved the same?
At the most basic level you would despise yourself as a hollow, villainous shell of a human being, depriving yourself and others of the desire to live a fulfilling life in a safe environment. Any rational being would realize they're sabotaging their own chance at satisfaction states by living like that, and in this case it requires no thought at all, because you'll be afraid for your life until someone manages to kill you. It's hardly subtle, is it? ;)
The credulous opinions regarding its consciousness from some protagonist carry little weight. All they've ever done is fight the thing.
88
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
In some way the point has been all along whether movies like Terminator and Ex Machina represent a possibility that we should take seriously or we can treat them as mere fiction. Based on my metaphysics, concluding that there is exactly zero chance of AI waking up and taking over, I can calmly treat them as mere fiction.
You can rest easy knowing that the autonomous self-duplicating machines programmed to and crucially capable of destroying everything aren't conscious? You might want to think that one through some more. The distinction between "gained consciousness and decided to kill all humans" and "didn't gain consciousness, was programmed to kill enemies and started identifying everybody as enemies due to an error" or "didn't gain consciousness, but some maniac decided to program it to kill everybody" is hardly the point there.

Quote
Whereas you have trouble even following the timeline of AI. It has been with us for at least half a century or so. It is not hypothetical.
They may slap the buzzword AI on it, but it's just some statistics and algorithms. You were talking about an AI on the level of a human. Pay attention to yourself and take the things you say seriously, please; otherwise what are we even talking about. :)

Quote
They are not identical to humans in every way, as I explained. Clearly, p-zombies are impossible for you to imagine despite explanations.
One can imagine perfectly incoherent things. But the purpose of thought experiments is not the same as the purpose of science fiction. The problem is inherent and it has absolutely nothing to do with physicalism.

If consciousness is necessary for human behavior, you can't have something that acts like a human when you take it away. It's incoherent to claim otherwise. By logical necessity, there will be a detectable difference. Not none. This has nothing to do with physicalism; it's inherent in the thought experiment. You just have to actually do it instead of taking your assumptions as a given. As long as the p-zombie recognizes it has qualia, either qualia or p-zombies are incoherent.

Quote
Clearly you have reasons for your stance and to reject other stances. Likely among the reasons is something like that truth matters. Interestingly, the concept of truth as explained by any materialist is internally incoherent. For hardcore believers of evolution, survival should be the value that trumps truth any day. Evolutionarily, truth-pursuers are always a weak minority and extra rare in high places. The rulers are the powerful. In the animal world - and on consistent Darwinism there is no human world - truth-pursuers are not even a thing at all. On materialism, truth has no value and, for metaphysical consistency, must be construed as non-existent.
That's akin to saying the abused child must cherish and perpetuate the abuse just because it happened. It's the kind of reasoning you might get if your epistemology is junk.

But it's not true even if we take survival being the greatest good as a given. If you don't care about the truth you'll inevitably make bad choices. An argument that anything else is fittest for survival in the long run is doomed to fail. Our instincts only go so far; they should be taken as valuable input, but they can't reason, prod and consider. What you describe is merely sufficient for survival, a grand difference from being the fittest for survival.

Quote
The idea[1] that mind is an emergent property of certain kinds of matter's complexity is no better established than the Behaviorism of B. F. Skinner. We know so little true psychology that Freudian Analysis is making a comeback. (But I suppose it should have been expected: There are still believing Scientologists! Their scientific underpinnings are the same...) At least, Cognitive Behavioral Therapy knows its place.
I believe zombies were originally posited as a counterargument to Behaviorism, weren't they? Which makes intuitive sense because Behaviorism is somewhat crude at its core,[2] but in that case one also wouldn't be taking the thought experiment seriously. Because you're not talking about a Behaviorist zombie imported through the back door, which could never be anything but unconvincing, but an actually perfect zombie.
Yeah, I know...
Though we should distinguish the caricatures painted by both opponents and proponents; in an important sense Behaviorism is simply true otherwise you couldn't train animals.
89
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
I'll leave you pondering the fact that you did not reject, refute or even problematise any of the corollaries that I pointed out to you, such as the ethical corollaries following from the assumption that AI is intelligent to whatever degree.
Why would I reject the logical conclusion to treat such a hypothetical AI appropriately as per their level of consciousness and intelligence? That wouldn't make much sense.

Quote
And you had nothing to say about the sleepwalker analogy to clarify the definition of zombie for you. So I guess that one works well. The example of hypoesthesia on the other hand was not an analogy, not intended to claim at all that such condition is empirically identical to a healthy human. Rather, it was one in a series of illustrations to get you closer to the concept of philosophical zombie, which you still have failed at.
P-zombies aren't hard to imagine. The incoherence comes from the fact that if they're identical in every way, the conclusion must be that qualia are identical to thinking you experience qualia.[1] You disagree, but naturally in my opinion it's you who's importing definitions of qualia from outside the thought experiment. In your example, it's not coherent to say that your hand is in pain from touching a hot stove but that you feel nothing. It's neither or both.

In any case, I doubt it's fruitful to continue further down that path. It's been trodden. :)

Quote
In my opinion the mind and the brain relate more like time versus watches and clocks. I'd like to suppose that you don't think that time is a process created by a watch, but I'm not quite sure anymore.
In some sense they do. Clocks fool us into thinking time can be divided into concrete little chunks. But that aside, I do provisionally conceive of time as a process created by all "clocks"[2] put together.
Or the reverse, that no one has any. Given that we all think we have qualia, that seems a bit sillier, though given an incoherent or incomplete definition of qualia it's certainly possible. But in that case we still have qualia; they're just not the particular thing called qualia here, so that would be a very unhelpful line of thought. Even if qualia aren't what we think they are, there's still a phenomenon we call qualia.
I.e., all the atoms in the universe.
90
DnD Central / Re: Artificial intelligent - Ideas producer
I'm not talking about being empirically plausible. I'm talking about actually doing the thought experiment. You have to remain conceptually consistent. The Chinese Room you mentioned is another example. Within the confines of the thought experiment, the man using the book may be dumb but the book still has to be capable of correctly speaking Chinese, so can you actually say the room as a system isn't conscious as the thought experiment pretends? All you've actually said is that the man is like some parts of the body, which isn't particularly interesting. You haven't said anything about consciousness.

Quote
Now, knowing this, why would you assume your approach to how the brain and the mind relate has any resemblance to what is really going on? You just compared the airplane+flying to the brain+mind. Want to give it another shot?
I don't understand thought experiments, but clearly you understand analogies. ;) When you think about the context of the discussion it might become obvious why I purposefully picked a man-made machine, but swapping in a bird doesn't change anything about the analogy.

But to get back to what you said above it:
Quote
Well, flying is not a process created by the airplane. Certainly there is absolutely nothing in the process that the airplane creates. Rather, it is what the engineers and pilots create based on the properties of air and aerodynamic components. To assume that the airplane creates the process is a horrendously false description of what is going on.
Is this even an equivocation fallacy? It's the configuration of the wings and engines that creates the process of flight. That which caused the wings and engines to be isn't an active part of the process, but a prerequisite to it.

Quote
The philosophical zombie *is not* identical to a non-zombie. It is identical for empirical purposes, but the point of the thought experiment is to highlight that the empirical is not all there is.
Shocking, who'd have thought. :) Thought experiments can show the opposite of what they claim or they can be logically impossible. I rather doubt that's something you disagree with; you just think this one's decent.

Quote
Consider the following. When a healthy human being touches a hot stove, his hand gets burned AND he removes his hand *due to pain*. When a person with hypoesthesia (or whatever the loss of sensation is) touches a hot stove, his hand gets burned, but he does not feel the pain. Now, suppose there is someone who does not feel the pain when his hand gets burned, but since childhood he has learned that it is customary to remove the hand and wince when touching a hot stove, so that's what he does because everybody else does it. This is a baby-step towards the philosophical zombie.
Ah yes, the person who's completely identical on account of hypoesthesia clearly demonstrates… wait a second.

Quote
In terms of a physical experiment in the current stage of civilisation, I agree with you. But in terms of a philosophical discussion you are frankly not on board with what a thought experiment is.
Of course thought experiments are useful, but you have to conduct them correctly.

Quote
Again, it so happens that the brain is among the physically and biologically less complicated organs. So, assuming that the brain is the mind (or any other fallacious version of the same, such as "the mind is a process of the brain" or "the mind is what the brain does"), complexity is the wrong description of what is going on even from the purely empirical point of view.
Saying the brain isn't complex is just downright silly.
91
DnD Central / Re: Artificial intelligent - Ideas producer
"AI" is literally about as different as can be. AI is exactly the way in which a zombie could actually exist, contrary to the one described in the premises.
I should note that I mean this in the logical sense by which I reject philosophical zombies as incoherent. In actual practice it may not be possible to be sufficiently complex while processing certain types of information without experiencing qualia as a side effect, or perhaps rather as an unavoidable consequence.
92
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
The thought experiment should be enlightening with regard to AI. AI talks (and writes and "creates") human-like enough so as to fool many. Consequently, since it "behaves" like a human being, doesn't it follow that it really is creative and intelligent? Doesn't it follow that we should treat it like a human being the full monty, along with human rights and the whole nine yards? If you say no, then why? Doesn't "yes" follow on the empiricist-physicalist theory?
But of course the answer to that could be "yes". Turn it around. How do you know that's not what we're doing? ;)

I don't understand why you keep saying things like that as if they're some kind of gotcha. It's fundamentally quite similar to whether it is or isn't okay to keep cattle as livestock or humans as livestock.
93
DnD Central / Re: Artificial intelligent - Ideas producer
Or maybe — just maybe — I've actually read the text. There's this little trick in reading texts, namely using your brain — or your mind, if you prefer, but the English idiom uses the word brain — to draw logical conclusions based on the premises. And sometimes you'll find you come to the conclusion they didn't actually take their own thought experiment very seriously.

Quote
Are the empirically verifiable biological processes, including the firing neurons as observable by neuroscientists, identical to internal experience or not? In other words, is phrenology true (i.e. is the brain identical to the mind)?
The mind is a process created by the brain. To call them identical is to say that flying is identical to an airplane.

Quote
Alternatively, is it permissible to presume or assume that external reactions are sufficient signs of some internal reflection, self-reflection, introspective consciousness and experience of qualia (by some other possible theory than phrenology, as phrenology is known to be false)?
To be a philosophical zombie you need to be identical to a non-zombie, yet somehow not experience qualia, yet somehow behave exactly the same as a non-zombie. This is incoherent. The zombie will have to behave differently, that is to say show that it doesn't experience the smell of wood or colors, or else lie. The latter is also behaving differently, but in a form that will be detectable by — yes indeed — neurons in the brain. The zombie's brain will be different.

"AI" is literally about as different as can be. AI is exactly the way in which a zombie could actually exist, contrary to the one described in the premises.
94
DnD Central / Re: Artificial intelligent - Ideas producer
Philosophical zombies are self-refuting because thinking you experience qualia is simply the same thing as experiencing qualia. That's what qualia are. Unless something is lying about experiencing qualia, but the philosophical zombie isn't lying by definition. This is a logical impossibility. If they're not lying about it, that means they're experiencing qualia.

Of course you could conceive of magical qualia but then you're not saying anything meaningful about qualia.
96
DnD Central / Re: Artificial intelligent - Ideas producer
Quote
When ChatGPT came out, I participated in a debate on another forum about how intelligent AI is. My stance is that AI has no intelligence whatsoever. And machine learning does not learn. That is, there is no similarity or analogy, much less identity, between how human mind works and how AI works.
That might be overstating the negative a bit. We are extremely good statistical pattern matchers, much better than closely related monkeys for example, so there may well be a certain conceptual similarity between how some part(s) of our brain work and how these algorithms work. That's also why the people who developed them called them "neural nets": they were designed after a certain model of how the brain might work.
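As a minimal illustration of that model (a generic textbook sketch, not a claim about any particular system): an artificial "neuron" is just a weighted sum of its inputs pushed through a nonlinearity, loosely analogous to synaptic weighting followed by a firing threshold.

```python
import numpy as np

# A single artificial "neuron", the building block the name "neural
# net" refers to: weighted inputs (synapses), a bias, and a squashing
# nonlinearity standing in for the firing threshold.
def neuron(inputs, weights, bias):
    return 1.0 / (1.0 + np.exp(-(inputs @ weights + bias)))

rng = np.random.default_rng(42)
x = rng.normal(size=4)      # four incoming signals ("dendrites")
w = rng.normal(size=4)      # learned synaptic weights
print(neuron(x, w, 0.1))    # activation strength between 0 and 1
```

Stack enough of these and fit the weights to data and you get the statistical pattern matcher in question; the brain analogy ends at roughly this level of detail.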

The fact that it outputs such natural-sounding prose certainly doesn't contradict the hypothesis, but we humans need orders of magnitude less training data than the algorithms do. Crucially, there is nothing much intelligent about a mere simulacrum of our visual and auditory processing and production abilities. But it could be a step toward it.
98
Browsers & Technology / Re: Best about wristwatches
In terms of "work well enough for my purposes", smartwatches can be awesome, counting steps and observing heartrate, sharing your location and what not. However, they are not the same technology as watches. Smartwatches are electronics and are accordingly guaranteed to lose at least half of their price along the years and then never recover from there, just like electronics in general.
I also have a Casio from the '90s, albeit currently without a working battery. It's the superior form of a smartwatch, and they sell them new as well. The battery lasts… two years? Maybe more.

The main selling point for "smart"watches seems to be that they bother you every time you receive a text or something. (And they spout some lies about being able to measure your heart rate and how you sleep.) I wouldn't want that unless they paid me at least a thousand a month to wear it.
99
DnD Central / Re: What's going on in Scandinavia, North Atlantic, Baltic States and Scotland?
Tartu is sending some propaganda out into the world about becoming the "capital of self-driving vehicles".

https://e-estonia.com/tartu-aims-to-become-the-european-capital-of-self-driving-vehicles/
Quote
To start with, Tartu decided to experiment with on-demand transport in the region. Organising a regular bus route in its sparsely populated surroundings is unreasonable. But according to Tambet Matiisen, Head of Technology at the ADL, this challenge was a perfect opportunity for self-driving cars: it is often easier to achieve driverless mobility on smaller highways than in dense urban settings. Combining these considerations, and with the participation of several technology companies, Tartu ran a widely popular experiment between 26 on-demand stops connected by 66 km of roads.

This pilot provided both the city and other participants with valuable information about future challenges before such a transport system could be applied more widely. Mr Matiisen recalls, for example, how they quickly realised that using traffic lights for navigation is suboptimal.

Something like 24-hour buses could certainly be a very good thing in theory.
100
Browsers & Technology / Re: Best about wristwatches
Quote
Below 100 e mechanical watch: probably not a good idea, unless it is Vostok from Russia, Luch from Belarus or Red Star from China. Until recently, also cheapest Seikos and Orients were below 100 e, but not anymore.
I have a couple of € 20 mechanical watches. They work well enough for my purposes, but of course they hold no value.