
Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: January 24th, 2025, 7:00 pm
by Sy Borg
Steve3007 wrote: January 24th, 2025, 2:33 pm
Sy Borg wrote:According to Sabine, ChatGPT, Grok, Meta's Llama (and, I presume, the CCP's DeepSeek) are frontier AI models that are already so far ahead that it's unlikely that any new models will be able to compete. You'd need to start with a whole new paradigm that was inherently more efficient.
Yes, or do something with AI that those models aren't doing. For example, as I understand it, the use of AI in SETI's Breakthrough Listen Project is in sifting through the vast and continually growing quantity of radio and optical telescope data, looking for patterns that look artificial but not terrestrial. Creating ANNs that aren't necessarily as complex as the cutting-edge ones funded by the big corporations, but which have novel/niche applications, seems like an interesting place to go.

I'm hoping, at some point, to continue working on the use of ANN's in fluid dynamics (neural networks learning how fluids move) because that's what my dissertation was about and it has applications in things like climate science. But there, as with everywhere else, if you search through the literature you'll find loads of other people doing the same thing. Which is a good thing, of course, as it's how progress is made. Just difficult, as an individual, to find a little piece of uncharted territory to explore!
The competition would be immense. Good idea to find a niche. A good study subject too, given that fluid dynamics seems to be a possible x-factor of life that sceptics think AI will never replicate. Life's behaviour seems to often echo fluid dynamics. If electricity can be massaged to replicate water's role in life, then that will change everything.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: January 25th, 2025, 8:55 am
by Pattern-chaser
Sy Borg wrote: January 24th, 2025, 1:24 pm I think we do know what intelligence is. We know it when we encounter it.
That's my line! 🤣 Yes, you're right, of course. But our present discussion could benefit greatly from a more precise understanding, couldn't it? 🤔👍


Sy Borg wrote: January 24th, 2025, 1:24 pm We are resistant to terming machines intelligent because it's a new phenomenon. We don't want to disappear up the backside of post-modernism to the point where nothing can be said about anything.

New chatbots are better at chatting because they are more intelligent. They only have to chat - they don't have to be able to make you a cup of tea and form political beliefs to be intelligent. They can have a specialised intelligence. Likewise, we don't expect bees and ants to be able to engage in discourse about nuclear physics - but they are still intelligent, certainly more intelligent than beetles and fleas.

Consider the dictionary definition of "The ability to acquire, understand, and use knowledge". The "aha!" that naysayers pounce on is ... "AI does not understand". I think it does. The way AI understands complex sentences, errors and all, and responds appropriately cannot be disregarded. In this context "understanding" does not require internality, only appropriate processing.
I'm afraid a few clever short-cuts are sufficient to achieve what you describe. Intelligence is not strictly necessary. ... Depending on what we mean by intelligence, of course. 👍 Think of those screens full of simple birds, seemingly flying around and never hitting each other. It looks amazingly complex, but a couple of simple rules are all it takes to draw such screens. There are similar "simple rules" for text recognition too, I believe.

Flying birds - additional info: https://randomtechthoughts.blog/2020/10/08/simulating-how-birds-form-flocks/
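Those "couple of simple rules" are Craig Reynolds' classic boids rules: separation (avoid crowding), alignment (match neighbours' heading), and cohesion (move toward neighbours' centre). A minimal sketch of one update step follows; the weights, view radius, and time step are illustrative values, not anything from the linked post:

```python
import math

def boids_step(boids, dt=0.1, view=50.0, w_sep=1.5, w_align=1.0, w_coh=1.0):
    """One update of the three flocking rules.

    Each boid is a dict with position (x, y) and velocity (vx, vy).
    """
    new = []
    for b in boids:
        neighbours = [o for o in boids if o is not b and
                      math.dist((b["x"], b["y"]), (o["x"], o["y"])) < view]
        ax = ay = 0.0
        if neighbours:
            n = len(neighbours)
            # cohesion: steer toward the centre of nearby boids
            cx = sum(o["x"] for o in neighbours) / n
            cy = sum(o["y"] for o in neighbours) / n
            ax += w_coh * (cx - b["x"])
            ay += w_coh * (cy - b["y"])
            # alignment: steer toward the neighbours' average velocity
            avx = sum(o["vx"] for o in neighbours) / n
            avy = sum(o["vy"] for o in neighbours) / n
            ax += w_align * (avx - b["vx"])
            ay += w_align * (avy - b["vy"])
            # separation: push away from each neighbour, stronger when closer
            for o in neighbours:
                d = math.dist((b["x"], b["y"]), (o["x"], o["y"])) or 1e-9
                ax += w_sep * (b["x"] - o["x"]) / d
                ay += w_sep * (b["y"] - o["y"]) / d
        new.append({"x": b["x"] + b["vx"] * dt,
                    "y": b["y"] + b["vy"] * dt,
                    "vx": b["vx"] + ax * dt,
                    "vy": b["vy"] + ay * dt})
    return new
```

Run repeatedly on a list of randomly initialised boids, these three local rules produce the flock-like motion described above, with no global coordination at all.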

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: January 25th, 2025, 9:11 am
by Pattern-chaser
Sy Borg wrote: January 24th, 2025, 1:24 pm I think we do know what intelligence is. We know it when we encounter it. We are resistant to terming machines intelligent because it's a new phenomenon. We don't want to disappear up the backside of post-modernism to the point where nothing can be said about anything.

New chatbots are better at chatting because they are more intelligent. They only have to chat - they don't have to be able to make you a cup of tea and form political beliefs to be intelligent. They can have a specialised intelligence. Likewise, we don't expect bees and ants to be able to engage in discourse about nuclear physics - but they are still intelligent, certainly more intelligent than beetles and fleas.

Consider the dictionary definition of "The ability to acquire, understand, and use knowledge". The "aha!" that naysayers pounce on is ... "AI does not understand". I think it does. The way AI understands complex sentences, errors and all, and responds appropriately cannot be disregarded. In this context "understanding" does not require internality, only appropriate processing.
An excerpt from a webpage describing word-understanding tricks:
In this explainer:

How LLMs learn to predict the next word.
Why and how LLMs turn words into numbers.
Why learning to predict the next word is surprisingly powerful.

Large language models (LLMs) are best known as the technology that underlies chatbots such as OpenAI’s ChatGPT or Google’s Gemini. At a basic level, LLMs work by receiving an input or prompt, calculating what is most likely to come next, and then producing an output or completion. The full story of how LLMs work is more complex than this description, but the process by which they learn to predict the next word—known as pre-training—is a good place to start.

If you are given the sentence, “Mary had a little,” and asked what comes next, you’ll very likely suggest “lamb.” A language model does the same: it reads text and predicts what word is most likely to follow it.
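The idea can be made concrete with a toy model. The sketch below is a trivial bigram counter, vastly simpler than a real LLM and purely illustrative: it counts which word follows which in a tiny made-up corpus, then predicts the most frequent continuation of a prompt's last word.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count which word follows which in the training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, prompt):
    """Return the most frequent continuation of the prompt's last word."""
    last = prompt.lower().split()[-1]
    candidates = follows.get(last)
    return candidates.most_common(1)[0][0] if candidates else None

corpus = [
    "Mary had a little lamb",
    "the lamb was little and white",
    "Mary had a little dog",
    "a little lamb followed Mary",
]
print(predict_next(train_bigrams(corpus), "Mary had a little"))  # -> lamb
```

A real LLM conditions on the whole input with a neural network rather than on just the last word with a count table, but the task it is trained on is the same: predict what comes next.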

The right input sentence can turn a next-word-prediction machine into a question-answering machine. Take this prompt, for example:

“The actress that played Rose in the 1997 film Titanic is named…”

When an LLM receives a sentence like this as an input, it must then predict what word comes next. To do this, the model generates probabilities for possible next words, based on patterns it has discerned in the data it was trained on, and then one of the highest probability words is picked to continue the text. Here’s a screenshot from an OpenAI model, showing the words it estimated to be most probable continuations in this case (highlighted text was generated by the model):
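The probability step can also be sketched. A model produces a raw score (a "logit") for every word in its vocabulary and converts the scores to probabilities with a softmax. The scores below are made-up numbers for the Titanic prompt, not real model output:

```python
import math

def softmax(logits):
    """Turn raw scores into a probability distribution over next words."""
    m = max(logits.values())  # subtract the max for numerical stability
    exp = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exp.values())
    return {w: e / total for w, e in exp.items()}

# Illustrative (invented) scores for continuations of the prompt
# "The actress that played Rose in the 1997 film Titanic is named..."
logits = {"Kate": 5.1, "Rose": 2.0, "the": 1.2, "Leonardo": 0.4}
probs = softmax(logits)
best = max(probs, key=probs.get)  # -> "Kate"
```

Picking `best` every time is called greedy decoding; real systems often instead sample from the distribution, which is why the same prompt can yield different completions.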

Source: Author experimentation with text-davinci-003.

In this case, correctly predicting the next word meant that the model provided the user with a fact—and by doing so, answered the user’s implicit question in the input. Let’s look at one more example that goes a step further, using next-word prediction to carry out a simple task. Consider the following prompt:

“Now I will write a Python function to convert Celsius to Fahrenheit.”

Like the previous example, the model takes this input text and predicts what words come next—in this case, functioning code to carry out the task in question, as shown in this screenshot:

Source: Author experimentation with text-davinci-003.
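The screenshot is not reproduced here, but a plausible completion of that prompt would be a function along these lines (an illustrative sketch of the kind of code the model produced, not the model's actual output):

```python
def celsius_to_fahrenheit(celsius):
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

print(celsius_to_fahrenheit(100))  # -> 212.0
```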

In some sense, this input “tricks” the LLM into outputting a Python function by asking the model to complete the text. It’s as if the LLM were an improv partner and was continuing the scene by writing the correct code. This approach demonstrates how a next-word-prediction machine can not only answer questions but also carry out useful tasks.

These examples only involve short chunks of text, but the same principle can be used to generate longer texts, too. Once the model has predicted one word, it simply keeps predicting what will come next after the text it has already produced. It can carry on in this fashion indefinitely, though the generated text will generally become less coherent as it gets more distant from the initial input.
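That "keep predicting what comes next" loop is what makes the process autoregressive, and it can be sketched in a few lines. Here `predict_next` stands in for any next-word predictor, and the fixed lookup table is a hypothetical stand-in used only for the demo:

```python
def generate(predict_next, prompt, max_new_words=20):
    """Autoregressive generation: repeatedly append the predicted next word.

    `predict_next` is any function mapping a text string to the next word,
    or None to stop.
    """
    words = prompt.split()
    for _ in range(max_new_words):
        nxt = predict_next(" ".join(words))
        if nxt is None:
            break
        words.append(nxt)  # the model's own output becomes part of the input
    return " ".join(words)

# Trivial stand-in predictor: a fixed lookup on the last word.
table = {"little": "lamb", "lamb": None}
demo = generate(lambda text: table.get(text.split()[-1]), "Mary had a little")
# demo == "Mary had a little lamb"
```

Because each new word is conditioned on text the model itself generated, small early errors can compound, which is one reason long generations drift in coherence.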

Explanations of how LLMs work often stop there: with predicting the next word. But as mentioned above, predicting the next word isn’t the whole story of how ChatGPT and similar systems do what they do. Learning to predict the next word happens in a step called pre-training, and it’s only one of several key steps in the development process of today’s LLMs. Subsequent posts in this series dive into the limitations of next-word prediction, and what other techniques AI developers use to build LLMs that work well. But first, the rest of this post explains more about how pre-training works—and why it has been so pivotal in creating AI systems that appear to have something resembling an understanding of the world.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: January 25th, 2025, 3:18 pm
by Sy Borg
Why do you think that simplicity precludes intelligence? Because humans and other chordates are complex? Are we asking if AI is intelligent or if it is highly intelligent? I often hear that ants are intelligent but they also operate simply, with far fewer "algorithms" than our mental systems.

Re: Is AI ‘intelligent’ and so what is intelligence anyway?

Posted: January 26th, 2025, 7:58 am
by Pattern-chaser
Sy Borg wrote: January 25th, 2025, 3:18 pm Why do you think that simplicity precludes intelligence?
I don't. 🙂


Sy Borg wrote: January 25th, 2025, 3:18 pm Because humans and other chordates are complex?
Acknowledging our lack of a clear definition for "intelligence", many/most creatures are intelligent, we think. Don't we? I do... 🙂 Perhaps complexity is a secondary feature, in this context?


Sy Borg wrote: January 25th, 2025, 3:18 pm Are we asking if AI is intelligent or if it is highly intelligent?
For myself, I'm not asking either of those things. It seems clear, from empirical observation, that today's AI is not intelligent at all. Useful? Yes. Capable? Yes, in many ways. But intelligent? No. Not today. But tomorrow is another day...

Sy Borg wrote: January 25th, 2025, 3:18 pm I often hear that ants are intelligent but they also operate simply, with far fewer "algorithms" than our mental systems.
And yet many would call ants intelligent. I think I might concur... 🤔🤔🤔