GrayArea wrote: ↑March 22nd, 2023, 7:16 pm
I'm currently majoring in Computer Science. One of the reasons I chose this major was the rapid, almost exponential, technological growth the world has been experiencing recently; I felt that if I were to survive and make my way in the world of the future, I needed to learn the field of knowledge that moves the world forward.
Another reason was that I wanted to be a part of the A.I. industry. I saw just how much potential it had and was certain that A.I.s would change everything. As they say, "If you can't beat them, join them." In a naive way, I wanted to play some kind of role in this upcoming new era, and I just didn't want to be left behind.
I understand from your reply that you are naturally capable of pursuing fields as diverse as computer science (A.I.) or neuroscience. Such a capacity might be an advantage in philosophy, since philosophy can investigate questions that span multiple fields and connect the dots to create theories, which may suit your described scope of interest: "how consciousness is formed / what causes subjective experience, the relationship between subjective and objective experience, as well as what causes self-awareness".
GrayArea wrote: ↑March 22nd, 2023, 7:16 pm
value wrote: ↑March 19th, 2023, 3:44 am
It is philosophy that drives AI.
...
When all mimicable technical capacities of humans are outdone by AI, what's left that humans can 'add' for value? In my opinion, that would be philosophy.
I suppose it is true that philosophy underlies anything that can be analyzed in terms of philosophy. Being knowledgeable in the philosophy behind the inner logic of A.I. would certainly be useful when it comes to creating and improving it. At the same time, though, I would say that the material world of technology holds just as much power over the mental or philosophical world, given how a large part of philosophy is a description of the physical world.
However, your comments are still much appreciated, as they have reminded me that there is still something I'm good at which I could make use of in this day and age. I personally think the philosophy of mind would play an important role in creating the blueprint for sentient artificial consciousness.
GitHub (owned by Microsoft) released a GPT-4 powered Copilot that enables whole programs to be written automatically from a logical description of the desired software in just a few sentences. While it currently enables senior software engineers to save massive amounts of time (reportedly up to 90%, at the cost of junior engineer jobs), once matured in GPT-5, 6, or 7, the software developer may become obsolete, and all that would be required to create software is philosophy, or 'creative ideas'.
Gut-feeling-based creativity and art may be said to underlie creative ideas; however, it would be philosophy that enables one to touch the essence of that creativity with theory, which would be required to steer AI.
Currently, a lot in business is done based on gut feeling, and philosophy (the capacity to create wisdom through theory) is held in high regard, as most business people intuitively feel that philosophy can be used both to discover and to describe the wisdom behind what is done. Business science is, for a large part, based on philosophy. While people in business are able to create success naturally, most love to see philosophy touch the essence of the path to success through theory.
In the future, when AI has taken over most of the mimicable technical capacities of humans, the capacity to excel in philosophy (the professional creation of wisdom) in order to steer AI might become the primary quality by which humans can distinguish themselves. Philosophy might therefore become one of the highest-paid professions.
GrayArea wrote: ↑March 22nd, 2023, 7:16 pm
Though speaking of which, while I suppose it is also true that A.I. does not yet possess a philosopher's mind, I imagine that once it gains sentience, which I assume it will at some point (as I'm not one to underestimate human curiosity), A.I.s would also be able to philosophize in a way similar to, yet different from, our own.
I do not agree. I do not believe that humans can 'harness' or master life itself, nor that there is even a clue that such a thing is possible. In my opinion, life forms can at most serve life.

"An attempt to stand above life, as being life, results in a figurative stone that sinks in the ocean of time."
Perhaps it would be possible for machines or AI to become actually alive (e.g. through neutrino interaction, if neutrinos were the origin of life), but there are several questions concerning the actual possibility, since it might not be a wholly technical matter (e.g. 'connect the neutrinos to a mechanical port'). Questions like "why would a neutrino want to interact?" and "what would the concept 'health' mean to a human-made tool, a 'machine'?" might become applicable.
Philosophy can start today as a pioneer in addressing those questions, and when the time comes it can have a place up front to provide solutions.