#469153
Count Lucanor wrote: October 23rd, 2024, 3:37 pm
Lagayscienza wrote: October 22nd, 2024, 11:31 pm
People underestimate just how much of human nature and behavior is determined. Humans are part of nature, a natural phenomenon like the rest of life, an outgrowth of the planet. If we want to change the way we interact with each other and the rest of life, then we would need to change ourselves. We would need to change the very determinants of our behavior. We could conceivably change our brains in such a way that they did not produce too much competitive, violent, destructive behavior. And we could try to free ourselves from the ouroboros by becoming technological autotrophs. But none of this will be happening any time soon, if at all. Unless/until it does, the best we can do is try consciously to give more voice to the better angels of our nature, to our cooperative, caring side. That side, too, is part of nature which we must accept. Hopefully, we can foster it at the expense of our darker side. But progress in that regard has been, at best, halting and piecemeal.
Surely, absolutely everything is natural in the sense that nature is the universe, unless one thought there is, or might be, a supernatural reality beyond the universe as we know it, which I believe is not your position. But is the International Space Station natural? Is Musk’s Cybercab? Calling them “natural” doesn’t say much about them, as these things, among the many produced by human culture, are essentially different from any product of any other species in nature, and it is precisely that difference that allows us to use a term invented for the arte factum, for the things made by human ingenuity: “artificial”. The label is applied to things such as intelligence, fibers, diamonds or rain, to distinguish them from those not produced by humans. Artificial intelligence, artificial fibers, artificial diamonds or artificial rain: they all point to man-made things, as opposed to natural intelligence, natural fibers, natural diamonds and plain old natural rain.
The existence of all these “artificial” things already says something about the determinants of human behavior. While our brains have not changed anatomically in any significant way for millennia, we have certainly changed ourselves from our hunter-gatherer ancestors. In other words, we have been able to use those determinants of behavior to transform our environment and our own societies, so that, unlike the rest of nature, our destiny is not predetermined by our innate faculties; they actually allow us to do the opposite. There’s no need to wait for a new artifact to modify our brains (minds) and behavior; we have already been doing it for as long as there has been a human species: arrows, fire, agriculture, art, script, books, TV and, of course, computers. We have been making better and better computers, and with them we will continue modifying our minds and behavior, but how far can we go with that? Again, our destiny is not predetermined, so we don’t know. We do know, however, that we can make a very strong case for this thing called “artificial intelligence” not being real intelligence at all, in the sense that we expect an artificial diamond to share key properties with a natural diamond while still being called a diamond. The current state of our technology, that which we use for modifying our minds and behavior, is not there yet, not in that field.
Sure, if we want to call everything made by humans artificial, then that is one way to carve up the world. But why stop at human structures? What about a dam made by a beaver or a bird's nest? And how about tools made by primates and corvids? Are all of these not also artificial in the same way that human artifacts like houses and cars are artificial? Or is the building and tool making behavior of these other animals entirely mindless and determined while similar human behavior is entirely mindful and undetermined? Can we really say one is natural and determined and the other not?

I agree that current AI is not the same as the biologically housed intelligence of humans. Computers are capable of some of the processes we normally associate with intelligence, but computers don't currently have consciousness or an inner mental life. But is that a reason to believe that intelligence, and perhaps sentience, cannot eventually be housed in a non-biological substrate? Do you not think that we have made the first steps down that road?
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469156
Sculptor1 wrote: October 23rd, 2024, 6:11 pm I was impressed, but this is not intelligence ...
Not yet. Then again, a foetus is not intelligent either. I would describe AI's abilities as proto-intelligence.

Time is key to this issue. The more time that AI develops, the closer it will come to actual intelligence, as we know it.
#469157
Lagayscienza wrote: October 23rd, 2024, 6:13 pm
Sure, if we want to call everything made by humans artificial, then that is one way to carve up the world. But why stop at human structures? What about a dam made by a beaver or a bird's nest? And how about tools made by primates and corvids? Are all of these not also artificial in the same way that human artifacts like houses and cars are artificial?
No, they are not artificial in the same way. It’s not a matter of any of them being functionally superior or more complex than the other. It’s a matter of how they come to be: as progressive technological advances that have a history, that are founded on social cooperation in the sphere of human culture, a sphere that simply does not exist in the beaver’s world, the bird’s world, or the bee’s. They will go on building the same dams, the same nests and the same hives forever. Secondly, those human developments surpass the innate faculties of the human species, so one isolated human would not have made a dam by himself, nor a bridge, nor a house, unlike a bird or a beaver, which will build by instinct; it’s within their nature. It takes what is within human nature: social cooperation, higher cognitive functions and learning abilities, to make ever more sophisticated tools and use the natural environment to produce things that do not respond to mere instinctive necessity. It is well documented that a feral child will barely survive and will not develop the characteristics associated with being a normal, functional human. There may be other such cases in nature, but they only show that early upbringing in a social setting is key to future development, which in the case of humans is particularly important.
Lagayscienza wrote: October 23rd, 2024, 6:13 pm
Or is the building and tool making behavior of these other animals entirely mindless and determined while similar human behavior is entirely mindful and undetermined? Can we really say one is natural and determined and the other not?
Unlike the work of beavers, birds or bees, humans make the plans in their heads before putting their hands to the task. Two different innate qualities are at work: one produces solutions fixed by the instinct of the species; the other is open to many more possibilities.
Lagayscienza wrote: October 23rd, 2024, 6:13 pm I agree that current AI is not the same as the biologically housed intelligence of humans. Computers are capable of some of the processes we normally associate with intelligence, but computers don't currently have consciousness or an inner mental life. But is that a reason to believe that intelligence, and perhaps sentience, cannot eventually be housed in a non-biological substrate? Do you not think that we have made the first steps down that road?
Having no evidence of such technical capabilities, one can only speculate, but likelihood always inclines to the side where there is evidence. Meanwhile, once it is acknowledged that current technology cannot fulfill its pretensions, for fundamental reasons in its conception and deployment, that is a fairly good reason to believe that this technology will not achieve it in the future. Perhaps another technology yet to be invented will, but not this one making headlines, heralded by the tech lords.
Favorite Philosopher: Umberto Eco Location: Panama
#469160
Sculptor1 wrote: October 23rd, 2024, 6:11 pm I've been quite impressed with ChatGPT, though it has been shown to make some really basic errors, like counting the number of repeated letters in words.

[…]
I was impressed, but this is not intelligence, this is a specialised language model
It’s impressive, but I’m even more impressed by what it takes to make it work:
”So here’s the big reveal—all modern chatbots are actually trained simply to predict the next word in a sequence of words. They generate text by repeatedly producing one word at a time. For technical reasons, they generate a “token” at a time, tokens being chunks of words that are shorter than words but longer than individual letters. They string these tokens together to generate text.
When a chatbot begins to respond to you, it has no coherent picture of the overall response it’s about to produce. It instead performs an absurdly large number of calculations to determine what the first word in the response should be. After it has output—say, a hundred words—it decides what word would make the most sense given your prompt together with the first hundred words that it has generated so far.
This is, of course, a way of producing text that’s utterly unlike human speech. Even when we understand perfectly well how and why a chatbot works, it can remain mind-boggling that it works at all. Again, we cannot stress enough how computationally expensive all this is. To generate a single token—part of a word—ChatGPT has to perform roughly a trillion arithmetic operations. If you asked it to generate a poem that ended up having about a thousand tokens (i.e., a few hundred words), it would have required about a quadrillion calculations—a million billion. To appreciate the magnitude of that number, if every single person in the world together performed arithmetic at the rate of one calculation per minute, eight hours a day, a quadrillion calculations would take about a year. All that to generate one single response.
It is this sledgehammer of an algorithm that’s the core of text generation. It is the G in ChatGPT, which stands for “generative.”

Excerpt from AI Snake Oil - What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
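To make the excerpt's token-by-token loop concrete, here is a minimal sketch of greedy autoregressive decoding. It is an illustration only, not the book's or any chatbot's actual code: the toy vocabulary and the next_token_logits function are invented stand-ins for the trillion-operation neural network.

```python
# Toy autoregressive generation: the "model" is just a function that maps the
# tokens so far to a score ("logit") for every candidate next token.
VOCAB = ["<end>", "the", "cat", "sat", "on", "mat"]

def next_token_logits(context):
    """Stand-in for the trillion-operation network: it favors a fixed phrase."""
    phrase = ["the", "cat", "sat", "on", "the", "mat", "<end>"]
    target = phrase[len(context)] if len(context) < len(phrase) else "<end>"
    return [5.0 if tok == target else 0.0 for tok in VOCAB]

def generate(max_tokens=10):
    context = []
    for _ in range(max_tokens):
        logits = next_token_logits(context)       # one full "forward pass" per token
        token = VOCAB[logits.index(max(logits))]  # greedy: take the highest score
        if token == "<end>":
            break
        context.append(token)                     # output is fed back in as input
    return context

print(" ".join(generate()))  # -> "the cat sat on the mat"

# Sanity check on the book's arithmetic: ~1e15 operations for a ~1000-token poem,
# shared among ~8e9 people each doing 1 operation/minute for 8 hours a day:
ops_per_day = 8e9 * 60 * 8
print(1e15 / ops_per_day, "days")  # ~260 days, i.e. about a year
```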
Favorite Philosopher: Umberto Eco Location: Panama
#469162
Count Lucanor wrote: October 23rd, 2024, 8:40 pm It’s a matter of how they come to be: as progressive technological advances that have a history, that are founded on social cooperation in the sphere of human culture, a sphere that simply does not exist in the beaver’s world, the bird’s world, or the bee’s. They will go on building the same dams, the same nests and the same hives forever.
This is simply wrong information. All of these will evolve and change as their environments change; the developments are simply exponentially slower. Human creations are a response to the environment too.

To claim that there is a qualitative difference - that humanity or AI are somehow divorced from the natural world - is simply incoherent, a distortion of logic due to anthropocentric bias. It's akin to deciding that minerals are part of the Earth but that biology, due to its qualitative differences, is not.

The Earth produces different structures over time. Now it's creating AI, with humans as the conduit. That is the situation.
#469170
Sy Borg wrote: October 23rd, 2024, 7:20 pm
Sculptor1 wrote: October 23rd, 2024, 6:11 pm I was impressed, but this is not intelligence ...
Not yet. Then again, a foetus is not intelligent either. I would describe AI's abilities as proto-intelligence.

Time is key to this issue. The more time that AI develops, the closer it will come to actual intelligence, as we know it.
No. I think there is an unavoidable matter of QUALITY rather than degree of difference. A machine "intelligence" will never be of the same type as an organic/biological intelligence.
If you can define "actual intelligence" then we might be able to progress with this debate.
#469171
Count Lucanor wrote: October 23rd, 2024, 11:20 pm
Sculptor1 wrote: October 23rd, 2024, 6:11 pm I've been quite impressed with ChatGPT […]
It’s impressive, but I’m even more impressed by what it takes to make it work:
[…]
Excerpt from AI Snake Oil - What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
This is interesting. Can you cite this source?
#469172
Count Lucanor wrote: October 23rd, 2024, 11:20 pm
Sculptor1 wrote: October 23rd, 2024, 6:11 pm I've been quite impressed with ChatGPT […]
It’s impressive, but I’m even more impressed by what it takes to make it work:
[…]
Excerpt from AI Snake Oil - What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
Thanks for that.
I asked ChatGPT to build on that answer. Here's what it said:
The core idea behind these models is that they are trained to predict the next word (or more precisely, the next token) in a sequence, based on the context of the words that have come before. However, this explanation leaves out important details about the complexities of training, the structure of language models, and the way they generate meaningful, coherent, and contextually relevant responses. Here’s a more comprehensive breakdown:

1. Token Prediction:
At the heart of models like ChatGPT, there's indeed a process called next-token prediction. A token can be a word, part of a word, or even punctuation, depending on the specific model's tokenization strategy. These tokens are the building blocks used for processing and generating text. For instance, the word “unbelievable” might be split into multiple tokens like “un,” “believ,” and “able” in a tokenization system.

During training, the model learns to predict the most likely next token based on the context provided by previous tokens in the sequence. This process of token-by-token prediction is repeated until the model completes a sentence, paragraph, or any other user-specified length.
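As an aside, the splitting can be inspected directly with OpenAI's open-source tiktoken library; the sample string below is arbitrary, and the exact split depends on the model's vocabulary.

```python
# pip install tiktoken  (OpenAI's open-source tokenizer library)
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # vocabulary used by GPT-3.5/GPT-4-era models

ids = enc.encode("Unbelievable, isn't it?")
print(ids)                             # a short list of integer token ids
print([enc.decode([i]) for i in ids])  # the text chunk each id stands for
```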

2. Transformer Architecture:
ChatGPT uses an underlying model architecture called a Transformer. Transformers are designed to handle sequential data (like text) and make use of mechanisms called self-attention to understand the relationships between different tokens in a sequence.

Self-Attention: This mechanism allows the model to assign different importance to different parts of the input when predicting the next token. For example, when generating a response, certain earlier parts of the conversation may be more relevant to predicting the next token than others.

Contextual Understanding: Thanks to self-attention, the model can understand context beyond just the nearest words. It can capture long-range dependencies between words, helping it generate coherent responses. For instance, if a conversation involves keeping track of different topics or names, the model uses context from the entire conversation to make better predictions.
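For readers who prefer the mechanism to the metaphor, here is a minimal single-head version of scaled dot-product self-attention in plain NumPy. It is a sketch: it omits the causal masking, multiple heads and learned layers of a real Transformer, and the random matrices merely stand in for trained weights.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(K.shape[-1])     # relevance of every token to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax -> attention weights
    return weights @ V                          # each output mixes all tokens' values

rng = np.random.default_rng(0)
seq_len, d = 5, 8                               # 5 tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)      # (5, 8): one mixed vector per token
```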

3. Training Process:
ChatGPT is trained on vast amounts of text data, such as books, articles, and websites. During training, the model goes through many iterations of trying to predict the next token in a sequence, comparing its predictions to the actual tokens in the training data, and then adjusting its internal parameters to improve its accuracy. This is done through a process called backpropagation using gradient descent.

Pre-training and Fine-tuning: The model is first pre-trained on a massive corpus of general data to learn about the structure and patterns of human language. After that, it may be fine-tuned on more specific data (such as dialogue datasets) to make it better at conversational tasks.
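A toy version of one such training step can be sketched in PyTorch. The two-layer stand-in model below is invented for illustration; a real system inserts a deep stack of Transformer blocks between the embedding and the output layer.

```python
import torch
import torch.nn as nn

# A deliberately tiny "language model": embed each token, map it to vocabulary logits.
vocab_size, d_model = 100, 32
model = nn.Sequential(nn.Embedding(vocab_size, d_model),
                      nn.Linear(d_model, vocab_size))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, vocab_size, (4, 16))   # a batch of 4 sequences, 16 tokens each
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

logits = model(inputs)                           # (4, 15, vocab_size)
loss = loss_fn(logits.reshape(-1, vocab_size),   # compare each prediction with the
               targets.reshape(-1))              # actual next token in the data
loss.backward()                                  # backpropagation
opt.step()                                       # gradient-descent parameter update
opt.zero_grad()
print(float(loss))
```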
4. Probabilistic Nature:
The model generates text probabilistically. For each token it generates, it assigns probabilities to many possible tokens that could come next. The token with the highest probability is usually chosen, but randomness can be introduced (controlled by a parameter called temperature) to allow for more creative or varied responses.
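The temperature mechanism itself is tiny: the logits are divided by the temperature before the softmax, which sharpens or flattens the resulting distribution. A minimal sketch, with made-up logits:

```python
import numpy as np

def sample_with_temperature(logits, temperature):
    """Convert logits to probabilities and draw one token id."""
    scaled = logits / temperature              # t < 1 sharpens, t > 1 flattens
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                       # softmax
    return int(np.random.choice(len(probs), p=probs))

logits = np.array([2.0, 1.0, 0.5, 0.1])
print(sample_with_temperature(logits, 0.7))    # usually token 0
print(sample_with_temperature(logits, 2.0))    # much more varied choices
```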

5. Sampling Strategies:
Several techniques are used to sample tokens during the text generation process:

Greedy Search: The model always selects the token with the highest probability at each step. While this can produce coherent results, it can sometimes lead to repetitive or uncreative text.

Beam Search: This is a more sophisticated technique where multiple potential sequences are generated simultaneously, and the one with the best overall probability is chosen. It balances exploring multiple possibilities with selecting the most likely one.

Top-k Sampling / Nucleus Sampling: Instead of always picking the most probable token, the model limits its choice to the top k most probable tokens or the smallest group of tokens that collectively represent a certain threshold of probability (e.g., 90%). This can introduce more diversity into the responses.
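Of these, nucleus (top-p) sampling is the least self-explanatory, so here is a minimal sketch of the idea; the probabilities are invented for illustration:

```python
import numpy as np

def nucleus_sample(probs, top_p=0.9):
    """Sample from the smallest set of tokens whose total probability >= top_p."""
    order = np.argsort(probs)[::-1]                # token ids from most to least likely
    cum = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cum, top_p)) + 1  # smallest prefix reaching top_p
    nucleus = order[:cutoff]
    renorm = probs[nucleus] / probs[nucleus].sum() # renormalize within the nucleus
    return int(np.random.choice(nucleus, p=renorm))

probs = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(nucleus_sample(probs, top_p=0.9))            # token 4 falls outside the nucleus
```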

6. Bias and Safety Measures:
During fine-tuning, additional steps are taken to align the model’s behavior with user expectations, avoid harmful or biased content, and encourage helpfulness. OpenAI and similar organizations also introduce reinforcement learning from human feedback (RLHF), where human reviewers provide feedback on model outputs, which helps refine and guide the model’s responses further. This method enhances the model's ability to provide useful, safe, and contextually appropriate answers.

7. Memory and State:
The model itself does not "remember" anything between interactions; each input is treated as an independent request. However, it can simulate memory within a single conversation by relying on the tokens that have already been exchanged. For example, if a user refers back to something mentioned earlier in a conversation, the model can keep track of that context based on the tokens in the input sequence.
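That statelessness is easy to picture: the "memory" is just the transcript being re-sent on every turn. In the sketch below, the complete() function and the prompt format are placeholders invented for illustration, standing in for one full stateless model call:

```python
def complete(prompt: str) -> str:
    # Placeholder for one stateless model call (the generation loop above).
    return "(model reply)"

history = []  # lives outside the model, in the client

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    prompt = "\n".join(history) + "\nAssistant:"  # the whole conversation so far
    reply = complete(prompt)                      # the model itself keeps no state
    history.append(f"Assistant: {reply}")
    return reply

chat("My name is Ada.")
chat("What is my name?")  # answerable only because the transcript is re-sent
```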

8. Handling Ambiguity and Multimodal Input:
In cases where there is ambiguity, ChatGPT may ask clarifying questions or provide multiple potential interpretations. While this isn’t a direct consequence of its predictive nature, the ability to handle ambiguity stems from the vast variety of language data it's trained on. Recently, GPT models have also started incorporating multimodal capabilities, allowing them to process and generate responses to images, but the underlying predictive mechanism remains token-based (in the case of text) or involves a similar process for interpreting images.

Conclusion:
While it’s true that at its core, ChatGPT predicts the next token in a sequence, the overall process involves much more than simply stringing words together. The Transformer architecture, self-attention mechanism, and various sampling strategies make it possible for the model to generate coherent, context-aware, and informative responses. It also undergoes fine-tuning and feedback processes to ensure that its outputs are safe, helpful, and aligned with user expectations.


Sorry it's a bit long for the forum, but interesting.
#469173
Lagayscienza wrote: Sure, if we want to call everything made by humans artificial, then that is one way to carve up the world. But why stop at human structures? What about a dam made by a beaver or a bird's nest? And how about tools made by primates and corvids? Are all of these not also artificial in the same way that human artifacts like houses and cars are artificial?
Count Lucanor wrote: No, they are not artificial in the same way. It’s not a matter of any of them being functionally superior or more complex than the other. It’s a matter of how they come to be: as progressive technological advances that have a history, that are founded on social cooperation in the sphere of human culture, a sphere that simply does not exist in the beaver’s world, the bird’s world, or the bee’s. They will go on building the same dams, the same nests and the same hives forever.
Some animal behaviours are novel and require thought. Dogs, corvids, cetaceans and octopuses all exhibit advanced cognitive abilities. For example, a chimp, never having seen it done before, will move an object on which it can stand to beneath some high-hanging food so that the food can be reached. To perform this operation a chimp must have been able to think thus: OK, there’s some food up there that I want but I can’t reach it. Well, what can I do that will enable me to reach it? Mmm... Well, there’s a thing over there on the other side of the room that I could move and stand on. If I bring it over here beneath the food then I should be able to reach the food.

This represents a deliberate change made by the chimp to its environment with a particular goal in mind, and it exhibits high-level cognition and planning. And if this cognition and planning is determined, pre-programmed by evolution, then so must human cognitive abilities be. Of course, a human may be able to perform this process more quickly and devise more complicated and efficient ways of doing the same job as the chimp (such as building a ladder), but here I think we are talking about a difference in degree and not in kind. Or is the building and tool making behavior of these other animals entirely mindless and determined while similar human behavior is entirely mindful and undetermined? Can we really say one is entirely determined and the other not at all? Why?


Count Lucanor wrote: Secondly, those human developments surpass the innate faculties of the human species, so one isolated human would not have made a dam by himself, nor a bridge, nor a house, unlike a bird or a beaver, which will build by instinct; it’s within their nature. It takes what is within human nature: social cooperation, higher cognitive functions and learning abilities, to make ever more sophisticated tools and use the natural environment to produce things that do not respond to mere instinctive necessity.
But we were once as they are. Go back a million years or so and we were just smart primates. And if modern humans went extinct, then perhaps in another million years chimps would evolve to be as smart as we are now. And other animals have cooperative behaviours. Some, like bees, are not mindful cooperators, but chimps plan and hunt cooperatively as our ancestors did. Again, the difference between them and us may be just a difference in degree and not in kind.

Count Lucanor wrote: It is well documented that a feral child will barely survive and will not develop the characteristics associated with being a normal, functional human. There may be other such cases in nature, but they only show that early upbringing in a social setting is key to future development, which in the case of humans is particularly important.
Agreed. But it’s the same with baby monkeys that are separated at birth from their mothers and not given a mother's nurturing, socialization and teaching. These monkeys function cognitively and socially at a lower level than monkeys raised normally, just as with human infants. I'm not seeing a lot of difference here.


Count Lucanor wrote: Unlike the work of beavers, birds or bees, humans make the plans in their heads before putting their hands to the task. Two different innate qualities are at work: one produces solutions fixed by the instinct of the species; the other is open to many more possibilities.
I agree that current AI is not the same as the biologically housed intelligence of humans. Computers are capable of some of the processes we normally associate with intelligence, but computers don't currently have consciousness or an inner mental life. But is the progress made thus far in AI not reason to think that intelligence, and perhaps sentience, could eventually be housed in a non-biological substrate? Do you not think that we have made the first steps down that road?

Count Lucanor wrote: Having no evidence of such technical capabilities, one can only speculate, but likelihood always inclines to the side where there is evidence. Meanwhile, once it is acknowledged that current technology cannot fulfill its pretensions, for fundamental reasons in its conception and deployment, that is a fairly good reason to believe that this technology will not achieve it in the future. Perhaps another technology yet to be invented will, but not this one making headlines, heralded by the tech lords.
Animals are one thing, computers are another. We are currently a very long way from creating machines that have non-organic brains which could host minds akin to even the minimal sentience of the simplest vertebrate. But I think we have made the first baby steps down the road to constructing such machines and this progress may indicate that it will be possible eventually to construct sentient, thinking, non-biological machines.

Computers process things differently from us, but some of the processes they carry out achieve outcomes we generally associate with intelligence. For example, adding two quantities is an operation that both biological brains and non-biological neural networks are capable of. Yes, computers currently do it by brute force but, once we learn more about how biological brains do what they do, we should be able to build smarter machines and, eventually, machines which might rival our own cognitive abilities.
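To make that concrete, here is a minimal sketch, invented for illustration, of a one-neuron network learning addition from examples by gradient descent, the same basic recipe that, at vastly larger scale, trains today's neural networks:

```python
import numpy as np

# A one-neuron "network" learning to add two numbers from examples.
rng = np.random.default_rng(0)
w = rng.normal(size=2)                    # two weights; perfect addition is w = [1, 1]

for _ in range(500):
    x = rng.normal(size=(32, 2))          # a batch of random number pairs
    y = x.sum(axis=1)                     # the "right answers": their sums
    pred = x @ w                          # the network's guesses
    grad = 2 * x.T @ (pred - y) / len(x)  # gradient of the mean squared error
    w -= 0.1 * grad                       # nudge the weights toward lower error

print(w)                                  # ~[1. 1.]: it has "learned" to add
print(np.array([3.0, 4.0]) @ w)           # ~7.0
```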
Favorite Philosopher: Hume Nietzsche Location: Antipodes
#469174
subatomic wrote: December 23rd, 2023, 3:10 pm This post is very relevant to this quote from The Imitation Game, the movie about Alan Turing:

"Of course machines can't think as people do. A machine is different from a person. Hence, they think differently. The interesting question is, just because something, uh... thinks differently from you, does that mean it's not thinking? Well, we allow for humans to have such divergences from one another. You like strawberries, I hate ice-skating, you cry at sad films, I am allergic to pollen. What is the point of... different tastes, different... preferences, if not, to say that our brains work differently, that we think differently? And if we can say that about one another, then why can't we say the same thing for brains... built of copper and wire, steel?"

What is your definition of intelligence? Because for me, honestly, if an AI can pass the Turing test... I consider it intelligent. I personally believe there is no big separation between a "conscious" human mind and a machine that is just really, really good at pattern recognition. After all, I think we are all just machines that are really, really good at pattern recognition. AI is simulated neural networks, and we are neural networks.
I doubt this is even close to anything Turing ever said, and even if he did say it, it would not really be helpful, since he never encountered an electronic computer in his lifetime. That is not to minimise his contribution.
My main objection here is that the script writer has a lack of appreciation for what is going on.
The fact that one person loves chocolate and another does not cannot be used to suggest that we think differently. It is evident that we have different life experiences. Since we all share 99+% of our DNA with one another, it is highly unlikely that the means by which we think differs from person to person; though we may come forth with different conclusions, that has more to do with the fact that we all have unique experiences. The metabolic and nervous processes that occur in the human brain are necessarily different in quality (not implying a hierarchy) from any process in copper and microchips. Given the basic structures of nature, we will always have more in common, in this respect, with chimps, dogs, and even lizards than with any machine "intelligence".
There are many accurate comparisons between lizard brains and human behaviour which reveal the truth of our evolution, and these are forever separate from the rather recent tools of our computational world.
The difference will always be qualitative and not simply quantitative; though we may count the speed of computation, the means is wholly different.
#469187
Lagayscienza wrote: October 24th, 2024, 5:14 am
Some animal behaviours are novel and require thought. Dogs, corvids, cetaceans and octopuses all exhibit advanced cognitive abilities. For example, a chimp, never having seen it done before, will move an object on which it can stand to beneath some high-hanging food so that the food can be reached. To perform this operation a chimp must have been able to think thus: OK, there’s some food up there that I want but I can’t reach it. Well, what can I do that will enable me to reach it? Mmm... Well, there’s a thing over there on the other side of the room that I could move and stand on. If I bring it over here beneath the food then I should be able to reach the food.
I've seen the experiments with chimps, rats, etc., and I don't deny any of this. Many species, especially mammals, exhibit complex behaviors, can solve problems, use tools, etc., and our closest relatives among primates can even talk with scientists in sign language (although in all the decades this has been going on, they have not asked a single question yet). That behavior is within their nature, and so is ours. But that's not the issue. We certainly have some extra features that allowed us to inaugurate a completely new way of dealing with the environment and, most importantly, a way of changing ourselves as a species by organizing our social experience, collectively and historically, in the domain of culture, a domain that simply does not exist in any other species, a domain that has its own complex dynamic and does not respond directly to the same pressures of biological evolution as the experience of any other species. There is no social construction, no cultural dynamic as such, in whales, birds or tigers, nor will they be able to produce specific subdomains such as art, technology, religion, philosophy, etc., nor develop in history a "whaleness" or "tigerness" distinctively different from that of thousands of years before. That's why the products of human activity, the artifacts of culture, can justifiably be set apart from the rest of natural production and be called "artificial". That's the issue.
Lagayscienza wrote: October 24th, 2024, 5:14 am Or is the building and tool making behavior of these other animals entirely mindless and determined while similar human behavior is entirely mindful and undetermined? Can we really say one is entirely determined and the other not at all? Why?
While one is determined in a way that allows very little variation, following innate mechanisms of response to the environment, the other is an emergent reality that works autonomously as an open field of undetermined human decisions and actions, with its contingencies and unpredictable historical developments. I'm not a subscriber to strong social constructionism, so I will not deny all of our innate mechanisms in play, but I'll highlight the importance of our "second nature", the field of human action, which sets us apart, for good or for bad, from the rest of nature.
Lagayscienza wrote: October 24th, 2024, 5:14 am
Count Lucanor wrote: Secondly, those human developments surpass the innate faculties of the human species, so one isolated human would not have made a dam by himself, nor a bridge, nor a house, unlike a bird or a beaver, which will build by instinct; it’s within their nature. It takes what is within human nature: social cooperation, higher cognitive functions and learning abilities, to make ever more sophisticated tools and use the natural environment to produce things that do not respond to mere instinctive necessity.
But we were once as they are. Go back a million years or so and we were just smart primates. And if modern humans went extinct, then perhaps in another million years chimps would evolve to be as smart as we are now. And other animals have cooperative behaviours. Some, like bees, are not mindful cooperators, but chimps plan and hunt cooperatively as our ancestors did. Again, the difference between them and us may be just a difference in degree and not in kind.
But we are talking about humans, Homo sapiens. Those other primates were not "we". We can justifiably see the connection as ancestors in the evolutionary process of hominids, but that's about it: there's something about the Homo sapiens species that made a significant difference in the way forward. Cooperation in eusocial animals? Sure, but such innate mechanisms of cooperation are not subject to modification by historical circumstances created by those animals themselves while consciously organizing their whole collective experience. There will be no intrigues and power struggles in the ant nest, none that have not been biologically predetermined. Nor will specialized labor be determined by contingent historical circumstances shaped by collective decisions and actions; it is determined by innate biological mechanisms.
Lagayscienza wrote: October 24th, 2024, 5:14 am
Count Lucanor wrote: It is well documented that a feral child will barely survive and will not develop the characteristics associated with being a normal, functional human. There may be other such cases in nature, but they only show that early upbringing in a social setting is key to future development, which in the case of humans is particularly important.
Agreed. But it’s the same with baby monkeys that are separated at birth from their mothers and not given a mother's nurturing, socialization and teaching. These monkeys function cognitively and socially at a lower level than monkeys raised normally, just as with human infants. I'm not seeing a lot of difference here.
Of course, chimps are close relatives, so we do share a lot of features, but there are also many key differences. One important, well-known difference is the relatively small brain size of newborn humans and its rapid postnatal enlargement, extraordinary compared to other animals, including primates. This has been attributed to the size of the human maternal pelvis as an evolutionary tradeoff for bipedal locomotion. The point is that the extra pressure on the survival chances of human neonates, which is countered with nurturing, also implies a bigger impact of that nurturing on brain development, which in turn highlights the importance of brain plasticity.
Lagayscienza wrote: October 24th, 2024, 5:14 am I agree that current AI is not the same as the biologically housed intelligence of humans. Computers are capable of some of the processes we normally associate with intelligence, but computers don't currently have consciousness or an inner mental life. But is the progress made thus far in AI not reason to think that intelligence, and perhaps sentience, could eventually be housed in a non-biological substrate? Do you not think that we have made the first steps down that road?
I disagree mostly with the implications of stating that "computers are capable of some of the processes we normally associate with intelligence". We could say that of math calculators, even the first rudimentary machines. If we implied by it that math calculators will eventually progress to become able to reason mathematically, we would be making a fundamental mistake. It is of no help to point out that current electronic calculators are the first steps on that path, because they obviously aren't. The counterargument to this objection, if we followed the common logic among advocates of "AI as real intelligence", would be: "it is just a different type of mathematical reasoning". The problem is: why then call it reasoning, intelligence, etc.? What advocates of "AI as real intelligence" actually aim to do is embrace the computational theory of mind, with all its philosophical implications, as the ruling model of intelligence and, even worse, as the root of agency, sentience, life, and so on. Hence the close relationship between AI and the singularity hypothesis.
Lagayscienza wrote: October 24th, 2024, 5:14 am
Count Lucanor wrote: Having no evidence of such technical capabilities, one can only speculate, but likelihood always inclines to the side where there is evidence. Meanwhile, once it is acknowledged that current technology cannot fulfill its pretensions, for fundamental reasons in its conception and deployment, that is a fairly good reason to believe that this technology will not achieve it in the future. Perhaps another technology yet to be invented will, but not this one making headlines, heralded by the tech lords.
Animals are one thing, computers are another. We are currently a very long way from creating machines that have non-organic brains which could host minds akin to even the minimal sentience of the simplest vertebrate. But I think we have made the first baby steps down the road to constructing such machines and this progress may indicate that it will be possible eventually to construct sentient, thinking, non-biological machines.
It is the second part of this statement that I disagree with, as explained in my previous response. No baby steps whatsoever, though you may think so if you embrace the computational theory of mind and believe that sentience and animal-like agency are emergent properties of computational minds, whether those are biological organs or man-made devices.
Lagayscienza wrote: October 24th, 2024, 5:14 am Computers process things differently from us, but some of the processes they carry out achieve outcomes we generally associate with intelligence. For example, adding two quantities is an operation that both biological brains and non-biological neural networks are capable of. Yes, computers currently do it by brute force but, once we learn more about how biological brains do what they do, we should be able to build smarter machines and, eventually, machines which might rival our own cognitive abilities.
But being capable of something does not imply doing it the same way. Outperforming intelligent organisms in completing tasks is not the same as outperforming organisms in cognitive abilities. It is without question that the "smartest" computers don't understand anything; they just perform fast, automated (mechanistic/digital) calculations. But the implication that "understanding" and "reasoning" are simply a factor of such calculations, so that the more calculations, the better the chances of reasoning and understanding emerging, is highly debatable. That is the debate over the computational theory of mind.
Favorite Philosopher: Umberto Eco Location: Panama
#469188
Sculptor1 wrote: October 24th, 2024, 4:53 am
Count Lucanor wrote: October 23rd, 2024, 11:20 pm
Sculptor1 wrote: October 23rd, 2024, 6:11 pm […]
It’s impressive, but I’m even more impressed by what it takes to make it work:
[…]
Excerpt from AI Snake Oil - What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference
This is interesting. Can you cite this source?
AI Snake Oil - What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, by Arvind Narayanan and Sayash Kapoor.
Favorite Philosopher: Umberto Eco Location: Panama
#469192
Sculptor1 wrote: October 24th, 2024, 4:49 am
Sy Borg wrote: October 23rd, 2024, 7:20 pm
Sculptor1 wrote: October 23rd, 2024, 6:11 pm I was impressed, but this is not intelligence ...
Not yet. Then again, a foetus is not intelligent either. I would describe AI's abilities as proto-intelligence.

Time is key to this issue. The more time that AI develops, the closer it will come to actual intelligence, as we know it.
No. I think there is an unavoidable matter of QUALITY rather than degree of difference. A machine "intelligence" will never be of the same type as an organic/biological intelligence.
If you can define "actual intelligence" then we might be able to progress with this debate.
Sure it won't be the same. How could it be? Human intelligence is not the same type as that of a bee either. But there is still intelligence in each.

Dictionary definition: The ability to acquire, understand, and use knowledge.

Acquisition and use already apply, but not understanding ... at this stage. It helps to consider why the understanding aspect of intelligence evolved and how it could apply to future intelligent(?) machines. I would say that understanding evolved as a means of extrapolating on, and thus extending, existing knowledge. I remember from school having difficulty remembering any concept that I did not understand but, once I understood the principles involved, I never forgot. If rote learning failed me (often) I could work from first principles and recall information, e.g. I aced Commerce in year ten with almost no study by simply applying two principles - supply and demand, and economies of scale - to each scenario.

This ability to extrapolate on other knowledge, to see analogies, will be useful to AI when it's sent off-world with 3D printers to build habitats and infrastructure. The further the units are from Earth, the less they can rely on human guidance. They will need to be able to anticipate potential issues and then respond to rapidly unfolding novel situations quickly, as there will not always be time to "phone home" for advice.

As with life, every event met/experienced by AI is part of its training. Like life, it starts with programming (our programming is DNA) and its capabilities are shaped by subsequent learning. These will not just be like chatbots of today. Anyone who thinks AI will not progress significantly from today's chatbots is not in touch with reality.
#469194
Count Lucanor wrote: But being capable of something does not imply doing it the same way. Outperforming intelligent organisms in completing tasks is not the same as outperforming organisms in cognitive abilities. It is without question that the "smartest" computers don't understand anything; they just perform fast, automated (mechanistic/digital) calculations. But the implication that "understanding" and "reasoning" are simply a factor of such calculations, so that the more calculations, the better the chances of reasoning and understanding emerging, is highly debatable. That is the debate over the computational theory of mind.
To my mind, the computational theory of mind has more going for it than any other. However, the brain, with its processes and emergent phenomena, is very difficult to study. The problem is that we are studying the thing we want to understand by using the thing we want to understand. This creates a feedback loop, or hall-of-mirrors effect, which in turn creates a lot of meaningless and confusing noise.

While I am far from an expert in this field, I have to say that, based on the reading I have done, I agree with the likes of Dennett, Fodor, Marr, Neisser, Pinker and Putnam. The brain appears to be a biological computer, and the computational theory makes the most sense to me. Of course, that does not make it true. However, I have looked at the criticisms of the computational theory of mind and, as far as I can see, they can be countered, whereas all the other theories have shortcomings which cannot be addressed. There is a simple account of the computational theory of mind on Wikipedia that is worth reading and which provides references to much of the relevant literature. Some of these I have not yet read. After I look at those unread references, I will revisit the other theories of mind. It will take some time, but I will then write a summary of how the theories compare.

Thanks for the interesting discussion thus far.
Favorite Philosopher: Hume Nietzsche Location: Antipodes


