Gertie wrote: ↑September 9th, 2020, 12:24 pm
To briefly summarise how I'm interpreting you -
Brain processes create a product, in the way a steam train creates steam.
This product consists of experiential "what it's like" states.
The content of these experiential states comprises a dynamic 'virtual model' of a material world and of myself as an embodied agent within it.
An external world, but not necessarily a "material" one.
The function of this experiential model of the world is to direct actions.
To consider and weigh possible alternatives, and their possible outcomes, prior to taking some action. Yes.
The brain then 'presents the experiential model to itself' - by which you mean presents the experiential model to the "consciousness system/body as a whole".
Not quite. The brain creates the model, which is the "me" and the world we perceive. We, and the universe we see and conceive, ARE that model. The upshot here, important for AI, is that any system which can create a dynamic, virtual model of itself and its environment, constantly updated in real time, and choose its actions based on scenarios run in the model, will be "conscious."
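To make that upshot concrete, here is a minimal sketch (mine, purely illustrative, not a real system) of the loop being described: a toy agent that keeps a virtual model of itself and a one-dimensional world, updates that model from observation, and chooses actions by simulating candidate scenarios inside the model. Every class name, function, and number is an assumption invented for the example.

```python
# Sketch of a model-based agent: it maintains a virtual model of itself
# and its environment, updates that model as observations arrive, and
# selects actions by running scenarios in the model rather than in the
# world. All names and values here are illustrative stand-ins.

class WorldModel:
    """A toy 1-D world: the agent tracks its own position and a goal."""

    def __init__(self):
        self.self_position = 0.0   # the model includes the agent itself
        self.goal_position = 10.0  # and features of its environment

    def update(self, observed_position, observed_goal):
        """Revise the model in real time as new observations arrive."""
        self.self_position = observed_position
        self.goal_position = observed_goal

    def simulate(self, action):
        """Run a scenario: predict an action's outcome without acting."""
        predicted_position = self.self_position + action
        return abs(self.goal_position - predicted_position)  # predicted cost


def choose_action(model, candidate_actions):
    """Weigh the alternatives by their simulated outcomes, pick the best."""
    return min(candidate_actions, key=model.simulate)


model = WorldModel()
model.update(observed_position=2.0, observed_goal=10.0)
action = choose_action(model, candidate_actions=[-1.0, 0.0, 1.0, 2.0])
print(action)  # 2.0 -- the simulated step that closes the gap the most
```

Whether a loop like this would amount to consciousness is of course the very claim at issue; the sketch only shows that the architecture itself is straightforward to state.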
A note on the "Explanatory Gap": There are two types of explanations, reductive ones and functional ones. The "gap" argument acknowledges only the former and, because mental phenomena are not reducible to physical phenomena, concludes that mental phenomena are inexplicable.
A reductive explanation proceeds by constructing a causal chain from one event or set of events to another. And of course, no such chain can be constructed between a physical event or process and a non-physical phenomenon.
But a functional explanation draws no such chain. Instead, it sets up a mechanism, a process thought to enable or cause a certain result, and checks whether the anticipated result follows. It disregards any intermediate steps that may or may not intervene between cause and effect. So if we can set up a system we believe will produce consciousness, and it indeed produces something we can't distinguish from conscious behavior, then we will have explained consciousness functionally.
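As a rough illustration of that structure (my own sketch, nothing to do with Levine's paper), a functional explanation treats the mechanism as a black box: we fix in advance what result would count as success, run the system, and compare outputs, ignoring the internals entirely. The probe questions and expected answers below are made-up stand-ins.

```python
# Sketch of a functional check: only whether the anticipated result
# follows is tested; intermediate steps inside the mechanism are
# deliberately ignored. All probes and answers are invented examples.

def behaves_consciously(respond):
    """A stand-in behavioral criterion: does the system's output match
    what we would expect from a conscious agent? Internals are ignored."""
    probes = {
        "What do you see?": "a red cube",
        "What will you do next?": "pick it up",
    }
    return all(respond(q) == expected for q, expected in probes.items())


def system_under_test(question):
    """The black box: only its input-output behavior matters here."""
    return {"What do you see?": "a red cube",
            "What will you do next?": "pick it up"}[question]


print(behaves_consciously(system_under_test))  # True: the result follows
```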
BTW, Levine's seminal paper on the "Explanatory Gap" is here:
https://faculty.arts.ubc.ca/maydede/min ... oryGap.pdf