Gertie wrote: ↑
Today, 12:24 pm
To briefly summarise how I'm interpreting you -
Brain processes create a product, in the way a steam train creates steam.
This product consists of experiential "what it's like" states.
The content of these experiential states comprises a dynamic 'virtual model' of a material world and myself as an embodied agent within it.
An external world, but not necessarily a "material" one.
The function of this experiential model of the world is to direct actions.
To consider and weigh possible alternatives, and their possible outcomes, prior to taking some action. Yes.
Understood.
The brain then 'presents the experiential model to itself' - by which you mean presents the experiential model to the "consciousness system/body as a whole".
Not quite. The brain creates the model, which is the "me" and the world we perceive. We, and the universe we see and conceive, ARE that model.
OK. So what does it mean to say neurons, chemicals, etc. present that model they've produced to themselves?
The upshot here, important for AI, is that any system which can create a dynamic, virtual model of itself and its environment, constantly updated in real time, and choose its actions based on scenarios run in the model, will be "conscious."
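For concreteness, here is a minimal sketch of the kind of architecture that claim describes: an agent keeps an internal model of itself and its environment, updates the model from observations, and chooses actions by running candidate scenarios forward in the model. The toy one-dimensional world and every name in it are illustrative assumptions of mine, not anything taken from the discussion, and nothing here is meant to settle whether such a system would actually be conscious.

import random

class WorldModel:
    """The agent's internal, continually updated picture of itself and its world."""
    def __init__(self):
        self.believed_position = 0.0   # model of "myself as an embodied agent"
        self.believed_goal = 10.0      # model of a feature of the environment

    def update(self, observation):
        # Fold a new observation into the model (here, trivially overwrite).
        self.believed_position = observation

    def simulate(self, action, steps=3):
        # Run a scenario forward in the model without acting in the world.
        position = self.believed_position
        for _ in range(steps):
            position += action
        return -abs(self.believed_goal - position)  # predicted outcome score

class ModelBasedAgent:
    def __init__(self):
        self.model = WorldModel()

    def act(self, observation):
        self.model.update(observation)       # keep the model current in real time
        candidates = [-1.0, 0.0, 1.0]        # possible alternative actions
        # Weigh the alternatives by their simulated outcomes, then choose.
        return max(candidates, key=self.model.simulate)

# Toy usage: the agent's real position drifts noisily; it steers toward the goal
# entirely by consulting scenarios run in its internal model.
agent, true_position = ModelBasedAgent(), 0.0
for _ in range(5):
    action = agent.act(true_position)
    true_position += action + random.uniform(-0.1, 0.1)
    print(f"chose {action:+.0f}, now at {true_position:.2f}")

The point of the sketch is only to make the functional claim concrete: all of the "deliberation" happens inside the agent's own model, before anything is done in the world.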
Well, that would depend on whether it recreates the necessary and sufficient conditions for experiential states to manifest, and while we know brains have them, we don't know what those conditions are. They might be substrate-dependent (see for example
https://en.wikipedia.org/wiki/Orchestra ... %20neurons. ).
A note on the "Explanatory Gap": There are two types of explanation, reductive and functional. The "gap" argument only acknowledges the former, and because mental phenomena are not reducible to physical phenomena, it concludes that mental phenomena are inexplicable.
A reductive explanation proceeds by constructing a causal chain from one event or set of events to another. And of course, no such chain can be constructed between a physical event or process and a non-physical phenomenon.
Right. And when Dennett says we have to talk about consciousness in functional terms, he's saying he can't explain it any other way. And I think that's because of what Chalmers calls the Hard Problem, which Dennett denies exists. Or "dissolves", which I suppose it does if you ignore it. How can you be a materialist, committed to an ontology rooted in matter and the smaller bits of matter everything is reducible to, and just ignore the biggest problem this raises regarding experience?
But a functional explanation does not draw such a chain. Instead, it sets up a mechanism, a process, which is thought to enable or cause a certain result, and checks whether the anticipated result follows. It disregards any intermediate steps which may or may not intervene between cause and effect. So if we can set up a system we believe will produce consciousness, and it indeed produces something we can't distinguish from conscious behavior, then we will have explained consciousness functionally.
I don't find the functional approach to phenomenal consciousness satisfactory. It might or might not work to produce an experiencing machine, but it would do so by imitating certain functional features of a known experiencing system (brains), not by explaining it the way reductionism might. Hence the problem of how to test AI for phenomenal experience: we won't know whether reproducing that model-making function has captured the necessary and sufficient conditions for experiencing. We might only have created a machine which is very good at mimicking experiential states, while being incapable of genuinely understanding and correctly answering questions about feelings, thinking, seeing and so on. We should still definitely try it to see what happens, of course; it's a possible practical way forward.
BTW, Levine's seminal paper on the "Explanatory Gap" is here:
https://faculty.arts.ubc.ca/maydede/min ... oryGap.pdf
Thanks. Looks like it might need a lot of background reading to really understand, but I'll give it a go.