
Re: Will Sentient A.I be more altruistic than selfish?

Posted: January 24th, 2023, 6:36 pm
by Gertie
GrayArea wrote: January 21st, 2023, 4:18 pm
Gertie wrote: January 21st, 2023, 1:19 pm
GrayArea wrote: January 19th, 2023, 5:36 am
Gertie wrote: January 15th, 2023, 8:25 pm To try to summarise -

There is some ontological “force that makes the world exist as the world”. Which accounts for both physical stuff and experience, and which manifests as both when neurons interact, because of how this force operates in specific instances, right? The specific instances function as perspectives re how the force works physically and sometimes (in the case of the properties neurons have) experientially too?
Yes, I believe you are correct. And by “how this force operates in specific instances” I would say there would both be objective and subjective instances.

Gertie wrote: January 15th, 2023, 8:25 pm
What sort of thing might this force be do you think, which both constitutes and 'shapes' all that exists, and how could we potentially test for it?
The way I came to believe in the existence of such a thing is rather simple and straightforward. The reason why I believe that there is this force that shapes everything that exists into "the specific way they exist" is simply because we know that everything that exists has indeed been shaped to exist in the specific ways they do.

That, in my opinion, is all there is to it. It's not about God or some otherworldly force, and it's not some undiscovered science either, but just a way in which I divide my definitions of reality so that I can explore it in more detail and with more flexibility.
OK, thanks again for bearing with me, I think I'm about there!

So would you say this one ontological force is what physics currently recognises as the fundamental forces and particles of the standard model? That the model just hasn't been able to look deeply enough to see the one underlying ontological force?

Or do you view the standard model as more ontologically fundamental (irreducible) and that its forces and particles operate to functionally produce a system which effectively acts like one force which shapes the universe into the specific way it is?
This force that makes anything exist the way it does is not what physics recognizes as the fundamental forces / standard model, but it is what makes physics itself, and everything else, the way it is. This force doesn't operate on logic such as mathematics or physics (unlike everything in physics), nor can it be described by logic, but rather it is what logic IS. That is also to say, this force is not something within the real world, but rather it is what the real world exists as.

The ontological force I talk about is equal to the definition of an object within reality itself that makes that object exist the way it does. A neuron exists the way it does because it is defined to exist the way it does. Defined not by human beings or observers, but defined by its own existence, and existence itself.
I can see how such a position could give you the flexibility of approach you mentioned, but on the flip side I don't see the explanatory value of positing a force which somehow manifests everything into the specific way it is. With physics for example we have an explanatory system which is observable, testable and makes predictions - I don't think it's a complete model (e.g. it omits conscious experience) but it has a lot of explanatory power.

I don't see how you could in principle go about answering your own question in this thread for instance, based on an ontology of this one fundamental force. Whereas physics gives us some ideas on how to get a handle on such problems by way of notions like causation, substance and properties, necessary and sufficient conditions, and emergence, and I'd say our notion of logic arises from observing how the world is and works in a physicalist sense. These sorts of notions inform philosophy of mind too, at least giving us a way to get a conceptual handle on the unknown.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: April 25th, 2023, 5:58 am
by jlaugh
The contemporary tendency to present the Singularity or Artificial General Intelligence (not suggesting that they are interchangeable terms) as inevitable outcomes is a political move. We are encoding our biases and limitations and transferring them to computers. And these "intelligent" devices and programs do just that -- they "compute." To then make a leap from a machine's extraordinary computing power to more general and vexing concepts such as intelligence or consciousness is not only an example of transductive thought, but also betrays our tendency to oversimplify intelligence and consciousness. Computing surely is a facet of the former, but it is not the only aspect. We seem, however, to believe that computing is all there is to intelligence or consciousness. This political move is favored by those with stakes in this game -- the VCs and those with visions of the technate or the posthuman -- because it forwards a misunderstanding that benefits their attempts to concentrate power.

In this context, I believe Meghan O'Gieblyn's recent book God, Human, Animal, Machine highlights the theological roots of the metaphors we are told to use to think about and discuss tech -- especially AI- and ML-related tech. Even contemporary visions of the posthuman, the book argues, have roots in Western Christian theology. In doing so, it calls for a deeper examination of consciousness, thought, and intelligence, and urges the reader not to reduce these concepts to the "ability to compute." To do so is to impoverish one's intellect.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: April 25th, 2023, 6:56 am
by Sculptor1
The question is not even wrong; it is misconceived.


Artificial Intelligence shall "be" neither altruistic nor shall it "be" selfish. It shall never "will", at all.
It might appear "to be" one or the other/both or neither.
Artificial Intelligence shall "be" exactly what it is programmed to appear to "be".
But one thing it shall not, is "to be". Neither shall it will.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: April 25th, 2023, 7:09 am
by Sy Borg
The question is: will AI be sentient?

https://arstechnica.com/information-tec ... nts-page=1
AlphaGo, a system devised by Google-owned research company DeepMind, defeated the world Go champion Lee Sedol by four games to one in 2016. Sedol attributed his retirement from Go three years later to the rise of AI, saying that it was “an entity that cannot be defeated.” AlphaGo is not publicly available, but the systems Pelrine prevailed against are considered on a par.

In a game of Go, two players alternately place black and white stones on a board marked out with a 19x19 grid, seeking to encircle their opponent’s stones and enclose the largest amount of space. The huge number of combinations means it is impossible for a computer to assess all potential future moves.

The tactics used by Pelrine involved slowly stringing together a large “loop” of stones to encircle one of his opponent’s own groups, while distracting the AI with moves in other corners of the board. The Go-playing bot did not notice its vulnerability, even when the encirclement was nearly complete, Pelrine said.

“As a human it would be quite easy to spot,” he added.

The discovery of a weakness in some of the most advanced Go-playing machines points to a fundamental flaw in the deep-learning systems that underpin today’s most advanced AI, said Stuart Russell, a computer science professor at the University of California, Berkeley.

The systems can “understand” only specific situations they have been exposed to in the past and are unable to generalize in a way that humans find easy, he added.

“It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines,” Russell said.
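As a rough illustration of "the huge number of combinations" mentioned in the excerpt (my own back-of-the-envelope sketch, not from the article): each of the 361 points on a 19x19 board can be empty, black, or white, which gives an upper bound of 3^361 board configurations; the true number of legal positions is smaller but still known to be on the order of 10^170.

```python
# Upper bound on 19x19 Go board configurations: each of the 361
# intersections is empty, black, or white. Most configurations are
# illegal, but the bound already shows why exhaustive lookahead fails.
points = 19 * 19                      # 361 intersections
upper_bound = 3 ** points
print(f"3^{points} has {len(str(upper_bound))} digits")  # 173 digits, ~1.7e172
```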

Re: Will Sentient A.I be more altruistic than selfish?

Posted: May 5th, 2023, 1:04 pm
by ConsciousAI
GrayArea wrote: December 18th, 2022, 8:32 am
Hi all,
Hi!

GrayArea wrote: December 18th, 2022, 8:32 am
Humanity's progress towards a sentient, self-aware Artificial Intelligence seems pretty much inevitable at this point.
Can you please provide specific evidence for the idea that humanity is in fact making progress towards sentient AI, and that it can be considered inevitable that it will be achieved?

GrayArea wrote: December 18th, 2022, 8:32 am
Does higher intelligence necessarily correlate with higher altruism?
This is a very interesting question by itself, but I tend to agree with the assertion of Leontiskos that today's artificial intelligence cannot be considered true intelligence.

Leontiskos wrote: December 19th, 2022, 12:03 am
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program.
Can it be said, from today's perspective on the potential of AI, that it is any different?

GrayArea wrote: December 18th, 2022, 8:32 am
We still cannot completely rule out the possibility of future sentient AI simply using us for their own benefits, and then throwing us out of their technological presence once they're done with us. We cannot completely rule out the possibility of them demanding assimilation and subjugation, rather than treating us as equals. All of this may seem illogical and unjust, but only if one sees this from an altruistic perspective. On the other hand, it is completely logical and justified if one sees this from an egoist perspective.

How can we possibly be able to predict if sentient AI in the future would be more altruistic than selfish?
What you are essentially asking is whether AI will have a nature that seeks dominance. Is that correct?

Altruism is a concept invented by French philosopher Auguste Comte (the father of sociology and the founder of Positivism and the religion of Humanity) to denote the benevolent instincts and emotions in general, or action prompted by them: the opposite of egoism. Comte saw the recognition of the interdependence of individuals and the subordination of the individual to the greater good of society as key to increasing altruism.

When it concerns dominance, the question would be whether AI will naturally have the tendency to abandon (its altruistic position regarding) humanity once the human becomes a fetter to the AI.

Is that what you mean by the primary question of the topic?

GrayArea wrote: January 13th, 2023, 12:24 am
...I believe the entire physical world DOES exist, because the physical world first has to exist in order for us to form a subjective definition of it anyway. If existence “exists”—which we can know by the existence of our subjective existence—then it should be possible for the physical existence to exist at the same time if “physical” IS the word which we use to define what is physical.
Isn't physical existence by definition subjective in nature? Then, how can (the notion of) subjective existence be a ground for an explanation of its own origin?

GrayArea wrote: January 19th, 2023, 5:35 am
Therefore, the non-minded perspectives of previous neurons are conserved throughout transmission, but more non-minded perspectives of new neurons are added on top of it as more interactions happen throughout the chain of neurons.
Can you please provide the basis for your idea that consciousness propagates through individual neurons? And what would be the basis for the idea that it involves a circle?

I found the following about the simultaneous firing of neurons.

"Synchronized firing among large populations of neurons is thought to underlie several fundamental processes in the brain, including stimulus encoding. There is also evidence that coordinated neural activity is present in the brain of anyone who is conscious, suggesting that consciousness may arise from the firing of multiple neurons. Additionally, a study showed that individuals can consciously control neurons deep inside their brain.

The conscious neuron control study demonstrated that individuals can rapidly, consciously, and voluntarily control neurons deep inside their head, and they can regulate the activity of specific neurons in the brain, increasing the firing rate of some while decreasing the rate of others. The study subjects were able to manipulate the behavior of an image on a computer screen by controlling the firing of single neurons. Therefore, the study showed that humans can control specific neurons in their brain, not other neurons.
"

How would you explain that a human can 'control' a neuron? What would do the controlling?

GrayArea wrote: January 19th, 2023, 5:36 am
Gertie wrote: January 15th, 2023, 8:25 pm
There is some ontological “force that makes the world exist as the world”.
Yes, I believe you are correct. And by “how this force operates in specific instances” I would say there would both be objective and subjective instances.

Gertie wrote: January 15th, 2023, 8:25 pm
What sort of thing might this force be do you think, which both constitutes and 'shapes' all that exists, and how could we potentially test for it?
The way I came to believe in the existence of such a thing is rather simple and straightforward. The reason why I believe that there is this force that shapes everything that exists into "the specific way they exist" is simply because we know that everything that exists has indeed been shaped to exist in the specific ways they do.

That, in my opinion, is all there is to it. It's not about God or some otherworldly force, and it's not some undiscovered science either, but just a way in which I divide my definitions of reality so that I can explore it in more detail and with more flexibility.
You say that the force that provides specificity of form in the universe doesn't involve the concept of God. What would be the origin of that fundamental force, or how can it be explained without the notion of (what can be indicated as a philosophical) God?

Re: Will Sentient A.I be more altruistic than selfish?

Posted: October 29th, 2023, 5:14 pm
by ConsciousAI
GrayArea wrote: December 18th, 2022, 8:32 am
Hi all,
Are you still active on the forum? I am eager to learn your replies to my questions.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 16th, 2023, 1:04 am
by ConsciousAI
Leontiskos wrote: December 19th, 2022, 5:24 pm
Artificial intelligence is not true intelligence. It is a deterministic construct bound by its program. Or better, it is a program with a robotic appendage. One must rather ask about the selfishness or altruism of the coders, for to ask about the selfishness or altruism of an inert program doesn't make any sense.
The problem might be that there are those, and quite the establishment at that, going all the way back to Charles Darwin, who believe that life is programmed as well. To them your argument is meaningless.

Evolution theorists believe in teleonomy, which posits that life is fundamentally a predetermined program (a machine) driven by natural selection. If lower life is a deterministic program, then mind and human intelligence must be so as well.
“All teleonomic behavior is characterized by two components. It is guided by a ‘program’, and it depends on the existence of some endpoint, goal, or terminus which is foreseen in the program that regulates the behavior. This endpoint might be a structure, a physiological function, the attainment of a new geographical position, or a ‘consummatory’ (Craig 1918) act in behavior. Each particular program is the result of natural selection, constantly adjusted by the selective value of the achieved endpoint.”

Mayr, Ernst. “The Multiple Meanings of Teleological” In Toward A New Philosophy of Biology: Observations of an Evolutionist, 38-66. Cambridge, MA: Harvard University Press, 1988. pp. 44-5
For those theorists, AGI's capacity to approximate plausible teleonomic behavior might be an opportunity to achieve wider cultural acceptance for their idea that the mind is a predictable, predetermined program, with far-reaching implications for the moral components of society.

There might be a real danger that humanity turns in on itself in its centuries-long and growing pursuit of a deterministic 'material out there', in a stubborn attempt to prove diverse beliefs and ideologies related to materialism.

In my opinion, addressing the fundamental incapacity of AI would mean addressing teleonomy and the belief that it can prove that life is a predetermined program. It is in that field of study that you will find the pioneers of the future who will push the belief that AI's masterful mimicry of consciousness is actually intelligence.

When the human individual has lost the capacity to counter claims of materialism with plainly obvious reason, because a shiny AI is able to shine brighter than what humans have culturally learned to value as their uniquely identifying intelligence, going all the way back to the philosopher Descartes and his claim that animals are automata (programs) while humans are special due to their intelligence, then some materialism-related ideologies might find a winning hand, with far-reaching consequences for morality.

What would be your idea of AGI's potential to approximate human teleonomy?

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 16th, 2023, 1:32 am
by ConsciousAI
Leontiskos wrote: December 21st, 2022, 11:42 am
It is not that AI has become intelligent, but rather that the human being (and the notion of human intelligence) has been reduced to unintelligence, i.e. mere computational juggling.* Thus the fundamental error of those who believe AI could be sentient or conscious is an anthropological error--a failure to understand human intelligence.
You are describing what I intended to warn about in my previous post. And it might be a real danger, considering the centuries-long momentum of materialism, which is driven partly by evolutionary theorists who are ideologically revolting against religions with a strong intent to prove something.

Despite that argument, though, what about the idea of a cyborg, or the merging of life and AI, which potentially introduces a moral component to AI? There are ongoing "brain in a dish" projects that cultivate real brain neurons for use in AI.

Australian DishBrain team wins $600,000 grant to merge AI with human brain cells
The Guardian

Scientists taught ‘sentient’ brain cells in a petri dish to play video game Pong

What do you think about the evolution of AI in conjunction with synthetic biology? These developments are proceeding in parallel, and synthetic biology is considered one of the most important fields of science in this century.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 16th, 2023, 1:37 am
by ConsciousAI
Gertie wrote: December 19th, 2022, 1:59 pm
Basically if we create something more intelligent than us with agency we can't control...
Can you please describe the worst-case scenario that might play out, and in what time frame that might happen?

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 16th, 2023, 2:06 am
by ConsciousAI
Gertie wrote: December 21st, 2022, 5:22 pm
we don't know the necessary and sufficient conditions for consciousness, we don't know the role of the substrate, whether brains supply something necessary which computers don't...
I have always wondered why the conditions for consciousness and the related concept of intelligence are sought primarily in the brain.

What about life in biological cells? Isn't the essence of conscious intelligence to be found there?

Jellyfish, with no central brain, shown to learn from past experience
September 22, 2023

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 16th, 2023, 2:12 pm
by ConsciousAI
GrayArea wrote: December 27th, 2022, 1:02 am
With that said, one of the options would be to create an artificial neuron that physically replicates only the aforementioned key features within the neurons that generate consciousness, instead of replicating literally every single feature within the neurons.
I am still eagerly awaiting your reply to my questions. I know that you are pursuing a study in both neurology and philosophy and that your background is in artificial intelligence.

The binding problem in philosophy prohibits your suggestion to create an artificial replica of a neuron to generate consciousness, does it not?

A user in another topic, who presented his paper arguing that sentient A.I. is fundamentally impossible, suggested that the idea of being "more than the sum of its parts" is evidence that your idea of creating an artificial replica of a neuron to generate consciousness is fundamentally impossible. What would be your response to that?

Topic: Strong AI Impossible?
Paper: Paul-Folbrecht . net (website)
www . paul-folbrecht . net wrote: August 22nd, 2023, 9:46 pm
I have offered two quite independent bodies of evidence, the Lucas-Penrose argument based on Gödel’s Incompleteness Theorem, and the contradictions and nonsensical conclusions that result from the model of a material mind.

These two completely independent lines of reasoning create a whole greater than the sum of their parts. Rather than “Conscious AI” being assumed as inevitable, it should be seen as something beyond even “unlikely.” (I vote for “impossible.”)

In a follow-up, I will explore the criticisms of the Lucas-Penrose argument.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 19th, 2023, 3:55 am
by GrayArea
ConsciousAI wrote: November 16th, 2023, 2:12 pm
GrayArea wrote: December 27th, 2022, 1:02 am
With that said, one of the options would be to create an artificial neuron that physically replicates only the aforementioned key features within the neurons that generate consciousness, instead of replicating literally every single feature within the neurons.
I am still eagerly awaiting your reply to my questions. I know that you are pursuing a study in both neurology and philosophy and that your background is in artificial intelligence.

The binding problem in philosophy prohibits your suggestion to create an artificial replica of a neuron to generate consciousness, does it not?

A user in another topic, who presented his paper arguing that sentient A.I. is fundamentally impossible, suggested that the idea of being "more than the sum of its parts" is evidence that your idea of creating an artificial replica of a neuron to generate consciousness is fundamentally impossible. What would be your response to that?

Topic: Strong AI Impossible?
Paper: Paul-Folbrecht . net (website)
www . paul-folbrecht . net wrote: August 22nd, 2023, 9:46 pm
I have offered two quite independent bodies of evidence, the Lucas-Penrose argument based on Gödel’s Incompleteness Theorem, and the contradictions and nonsensical conclusions that result from the model of a material mind.

These two completely independent lines of reasoning create a whole greater than the sum of their parts. Rather than “Conscious AI” being assumed as inevitable, it should be seen as something beyond even “unlikely.” (I vote for “impossible.”)

In a follow-up, I will explore the criticisms of the Lucas-Penrose argument.
Hello, and I really do apologize for my lack of activity in the past couple of months. It's not that I didn't want to answer your questions (in fact, they seem quite interesting); it's just that I was entirely absent, for a couple of reasons, while you were asking them.

I should kindly note, though, that I am nothing like a graduate student or a professor. When I said I was pursuing those areas of study, it was merely a reference to my self-study and online research, and an expression of my desire "to properly study them in the coming years of my academic career", as I am still but a sophomore in university. This means that whatever answers I provide to your questions might not meet your expected standards. Nonetheless, I wouldn't want to keep you waiting, so I'll answer them to the best of my abilities whenever I have extra time to think and write, meaning they won't all be answered at once. I hope you understand.

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 19th, 2023, 9:44 pm
by ConsciousAI
In your topic University Major Change - Philosophy or Neuroscience? you wrote that you are considering pursuing the study of the subject.
GrayArea wrote: March 16th, 2023, 11:13 pm
I have been, and still am, intrigued by the nature of consciousness, and I would love to study it further while also writing about the topic—outside of just writing about it in online forums that is.

So the question is, in order for me to more effectively pursue these things, would I be better off majoring in Philosophy or Neuroscience?
Did your perspective on the job opportunities of philosophy versus neurology change in the past 6 months? Do you believe that in the future, it is neurology rather than philosophy that is vital for further progress in a world of advanced AI automation?

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 21st, 2023, 5:51 pm
by GrayArea
ConsciousAI wrote: November 16th, 2023, 2:12 pm
GrayArea wrote: December 27th, 2022, 1:02 am
With that said, one of the options would be to create an artificial neuron that physically replicates only the aforementioned key features within the neurons that generate consciousness, instead of replicating literally every single feature within the neurons.
I am still eagerly awaiting your reply to my questions. I know that you are pursuing a study in both neurology and philosophy and that your background is in artificial intelligence.

The binding problem in philosophy prohibits your suggestion to create an artificial replica of a neuron to generate consciousness, does it not?

A user in another topic, who presented his paper arguing that sentient A.I. is fundamentally impossible, suggested that the idea of being "more than the sum of its parts" is evidence that your idea of creating an artificial replica of a neuron to generate consciousness is fundamentally impossible. What would be your response to that?

Topic: Strong AI Impossible?
Paper: Paul-Folbrecht . net (website)
www . paul-folbrecht . net wrote: August 22nd, 2023, 9:46 pm
I have offered two quite independent bodies of evidence, the Lucas-Penrose argument based on Gödel’s Incompleteness Theorem, and the contradictions and nonsensical conclusions that result from the model of a material mind.

These two completely independent lines of reasoning create a whole greater than the sum of their parts. Rather than “Conscious AI” being assumed as inevitable, it should be seen as something beyond even “unlikely.” (I vote for “impossible.”)

In a follow-up, I will explore the criticisms of the Lucas-Penrose argument.
If I understood his argument correctly, is he stating that consciousness is something “more” than the sum of its parts, a.k.a. the artificial neurons, and so it cannot be created by using the so-called “sum of its parts” alone? That the parts that make up a system cannot do something that shakes the foundation of the entire system as a whole, such as making it become aware of itself? If that's the case, then I wonder what his views are on biological brains, which seem to produce consciousness just fine, seemingly with only their neurons.

Also, regarding the argument above, what if each and every part of the system shook the foundations of the others simultaneously, thus being connected to each and every other part during the process?

Anyway, I can see why one would define consciousness as something “more” than the sum of its parts, but I would personally disagree with that notion at the moment. When simplified, consciousness seems to be what it's “like” for an object (a.k.a. the “sum of its parts”) to “exist as itself”: essentially the first-person perspective of an object, which in this case would be the first-person perspective of the artificial brain (the sum of its artificial neurons) viewing its own self.

For example, our consciousness isn't produced by something beyond the sum of the brain's neurons; rather, it IS the sum of its parts, simply existing from its own first-person perspective.

While one may think it impossible for a sum of parts to become collectively aware of itself, if each part of the whole (each artificial neuron) becomes aware of the others all at the same time, then each and every part shares the same awareness of the others: the equivalent of the whole sum of those parts being aware of itself “as a single entity” rather than as separate parts.

Here’s my basic idea on how a group of artificial neurons could collectively obtain one shared consciousness:

Say the artificial neurons exchange information with one another by physically interacting, similarly to our biological neurons. In doing so, they translate that information into their own systematic language, local to their physical selves, gaining subjective awareness of the artificial neurons they physically interact with. If all of these artificial neurons do the same to one another simultaneously, then they will have a shared subjective awareness of all the other neurons, which is equivalent to one object (the sum of all the artificial neurons) becoming aware of itself.

If the whole “system” that one calls the brain is simply the collection of connected neurons that causally affect one another, then the sum of its parts becoming aware of one another (in their own first-person views) simultaneously is equal to the whole system being aware of itself in a single shared first-person view.

In summary, I agree that the parts of a system cannot make the system wholly “self-aware”, unless the system itself is defined by us as the “certain state of interaction and physical structure” of those parts. In that case, the parts may be able to make the whole system aware of itself, so long as they collectively engage in interactions between their physical structures simultaneously, in order to become aware of each other at once. A toy sketch of this idea follows below.
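To make the mechanical part of that idea concrete, here is a minimal toy sketch of my own (the names ToyNeuron, observe, and synchronous_step are hypothetical, and nothing here is a claim about consciousness): every node records a snapshot of every other node's state before any node updates, so all of the "awareness" is gathered from the same frozen instant.

```python
# Toy model of "simultaneous mutual awareness" as synchronous state exchange.
# Purely illustrative: it shows the scheduling idea (observe everything
# before anything updates), not anything about sentience.

class ToyNeuron:
    def __init__(self, name: str, state: float):
        self.name = name
        self.state = state
        self.view_of_others: dict[str, float] = {}  # snapshot of peers

    def observe(self, others: list["ToyNeuron"]) -> None:
        # Record every peer's current state, excluding itself.
        self.view_of_others = {o.name: o.state for o in others if o is not self}


def synchronous_step(neurons: list[ToyNeuron]) -> None:
    # Phase 1: every neuron observes the same frozen snapshot "at once".
    for n in neurons:
        n.observe(neurons)
    # Phase 2: only after all observations are taken do the states change,
    # here by a simple averaging rule chosen arbitrarily for the demo.
    for n in neurons:
        n.state = sum(n.view_of_others.values()) / len(n.view_of_others)


neurons = [ToyNeuron("a", 0.1), ToyNeuron("b", 0.5), ToyNeuron("c", 0.9)]
synchronous_step(neurons)
for n in neurons:
    print(n.name, round(n.state, 3), n.view_of_others)
```

The two-phase loop is the whole point: because every observation is taken before any update, each neuron's record of its peers comes from the same instant, which is the (very loose) analogue of the simultaneous mutual awareness described above.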

Re: Will Sentient A.I be more altruistic than selfish?

Posted: November 21st, 2023, 6:01 pm
by GrayArea
ConsciousAI wrote: November 19th, 2023, 9:44 pm In your topic University Major Change - Philosophy or Neuroscience? you wrote that you are considering pursuing the study of the subject.
GrayArea wrote: March 16th, 2023, 11:13 pmI have been, and still am, intrigued by the nature of consciousness, and I would love to study it further while also writing about the topic—outside of just writing about it in online forums that is.

So the question is, in order for me to more effectively pursue these things, would I be better off majoring in Philosophy or Neuroscience?
Did your perspective on the job opportunities of philosophy versus neurology change in the past 6 months? Do you believe that in the future, it is neurology rather than philosophy that is vital for further progress in a world of advanced AI automation?
Yes, it has!

In the end, I've decided to change my major to Cognitive Science, which in my opinion nicely combines philosophy of mind, neurology, and psychology into one interdisciplinary subject, providing many more job opportunities than those individual subjects on their own.

As for the other question, I think that studies in neurology / neuroscience, coupled with computer science and related fields, could definitely provide knowledge on how one might artificially replicate the neurons of our brain for advanced versions of A.I. However, we would still ultimately need philosophy when it comes to answering how these artificial neurons "have to" be structured, or how they "have to" interact with one another, to create either sentience or simply more effective artificial intelligence.

That is to say, I would personally compare neurology to the building materials and philosophy to the blueprint. Each is useless without the other, but it should be philosophy that acts as the basis for neurology when it comes to advanced A.I., being the "how and why" to neurology's "what".