BPS Model of Consciousness
The Genesis of Thought in the BPS Model
This journey of the mind ends the same way it all started: a horizon in sight, a strategy to get there, and still as many conceptual obstacles and hurdles to climb. At the end of the road there is always a new gate that opens. For those insatiable spirits who'd rather travel than get there, the journey becomes the destination when you search for the meaning of life and consciousness.
The role natural language plays in the formation and functioning of the all-encompassing global consciousness, the subject we hoped would be our biggest contribution to the study of consciousness, has turned out to generate more questions and abstractions than we had bargained for. As we discussed in chapters 12, 21 and 22 and in many other places, we had hoped to give a complete, ambitious description of the amygdaloid complex as a natural candidate for the seat of consciousness, based primarily, among other things, on its well-documented participation (with the hippocampal formation) in coordinating the avoidance reflex responses when humans are confronted with natural life-threatening environmental stimuli. As it turns out, the stimulating natural object / event in this case is meaning-neutral, the semantic tag being provided by inherited life-preserving amygdaloidal audio-visual codelets as modified by experience. We will expand on this further below.
Pursuant to the analysis we have developed, we have designated the ‘shores' surrounding the Sylvian fissure (the perisylvian area), inter-connecting all sensory inputs into the Heschl-Wernicke-angular gyrus region and relaying them on to Broca's area and the pre-frontal executive cortex, the ‘proto-linguistic organ' (plo). We labored hard to weave together a meta-linguistic distributed network headquartered at the ‘plo' and modeled to integrate nativist considerations on syntax, semantics, referentials, phonology, truth values, pragmatics, vector space network theory and DNA-encoded language inputs. We even thought we had found the 4-D coordinates for Chomsky's generative grammar as the same locus for a regenerative semantics, all embodied by the ‘plo'. There we could combine both elements (universal grammar & proto-semantics) and bring to life a comprehensive theory of ‘meaning' linking linguistic elements such as figures, signs, noises, marks and body movements as different manifestations of a communication urge, mostly reducible in principle to ‘propositional attitudes' configured in syntax structure and semantics. We hoped it would represent the beginnings of a veritable truth-conditional theory of meaning of high coherence value. We laid the foundations, based on a reinterpretation of Fodor's ‘mentalese' and Piaget's theory of language acquisition by the newborn, as discussed in chapter 5 and elsewhere. We have scattered many seeds on fertile grounds to germinate and flourish but still have not found the magic fertilizer concept to make them sprout into a luxuriant independent existence.
In our opinion, the focus of any such search for a marketable algorithm should start, first, on re-evaluating the role played by nature's non-intentional sounds and signs as they get incorporated into heritable proto-semantic ‘mentalese' ‘atomic' codelets and, second, on analyzing the relative priority assignment of verbal (and non-verbal) language in either thought ‘formation' and / or ‘transmission'. The priority choices narrow down to the alternatives of considering language as either causally efficient in producing thought or dependent on it; the two alternatives either co-exist independently or depend on each other.
The inescapable (and expected) first big hurdle is clearly seen when considering causality relations between two different domains, the physical language (or its symbolic representation) and the non-physical ‘thought'. Fortunately, for starters, the chosen approach should narrow down to a manageable epistemological argumentation, trying to avoid the constraints of wearing an elusive ontological straitjacket fitted to an ephemeral ‘thought entity'. The chosen strategy is driven by pragmatic considerations, once one appreciates that it is more reliable to analyze language as the basis of thought than to take the opposite approach, which requires more speculative activity when analyzing what ‘content' of thought is causing language generation. Besides, the only known way we can be sure about subject A's thought content is by way of subject A's first-person account, a language narrative. Analytically speaking, the choices are clear: either we get more tangible results by concentrating on the linguistic syntacto-semantic structure as being causal to thought, or we get lost analyzing the elusive vagaries of the ‘intentionality' content of thought or mental states as causally efficient in producing the logic structure of language. The latter approach, besides being counter-intuitive, would have to depend considerably more on self-referential accounts by language users of the beliefs and intentional mental states allegedly preceding the corresponding language formulation, on the basis of an equally questionable co-variation of thought and language, or on teleological wishful thinking, or on the unconscious self-serving functional scheme of the neo-behaviorists, as discussed below.
However, a re-interpretation of both Grice and Fodor may well do the trick, as we discuss below. It was based on all things considered and their possible outcomes that we gambled and put our stock in the idea of a language precursor to thought, especially after having previously suggested the proto-linguistic organ ('plo') as the putative site for the assembling of language-dependent thoughts, an attractive connectionist / representationalist view of how the mind may operate. We also thought that our approach would give the clinician an additional logic tool to predict psychic etiologies of disease based on first-person mental state narratives as an additional input.
This places language development and the ‘plo' at center stage in our evolving ‘bps' model of consciousness. We had reasoned early on that if an appropriate environmental life-threatening stimulus, e.g., a snake sound and a visual context of the scenery it came from, can trigger an adaptive inborn behavior in a newborn of the species via the ‘plo', then the ‘plo' can also be involved in related but more complex language elaboration. By integrating the acquired memories of existence into its species-specific genetic memory, the primeval sound and sight danger cues get elaborated into a biopsychosocial (‘bps') survival strategy, including a communication tool. The role played by DNA, genetic archetypes, etc. in unleashing chemically-mediated adaptive responses when triggered by environmental stimuli (cues) has been discussed elsewhere in the text. This mechanism includes a consideration of the mother's ‘baby talk' cooing and her facial expressions as effective primitive phonemes and cues that trigger appropriate modifier archetypes, which add to the genetic proto-semantic reservoir of inherited ‘meanings'. The role played by cortical ‘mirror neurons' in imitating behavior is reasonably well established. Thus the inherited universal grammar links with a regenerative semantics, clothed in phonology and mimicry, to evolve the sentential logic structure (‘propositional attitudes'?). Species' environmental survival tactics, clothed as nature's ‘meaningful cues', survive by getting coded into DNA, transmitted across generational gaps and translated in the newborn into a proto-semantic nested circuitry (codelets). These then get shaped into a regenerated environmental survival weapon de novo. Its presence is felt first in the reflex adaptive patterns described, and then gets developmentally modified into a syntacto-semantic architecture. The inherited first stage gets modified in the newborn by the mother's ‘cooings' and facial expressions and by posterior environmental sense inputs.
This view of language generation places primeval semantic transfer at unconscious nativist levels, ahead of syntactic arrangements by the ‘plo'. This leaves volition and free will at the ‘proximate cause' level of control, as discussed elsewhere. "A man can surely do what he wants to do. But he cannot determine what he wants," Schopenhauer once said. It was at this conjectural point that we discovered Dr. Jerry Fodor and the ‘language of thought' (LOT) hypothesis, which has given impetus and corroboration to our model, save for some minor and major disagreements as we will note below.
Where we have hopelessly stumbled, big time, has been in providing a marketable account of how our ‘plo' processing module mediates the transition from an on-line sense-phenomenal (or conceptual off-line) brain codelet input (I) to a corresponding syntactically-structured representational output (O) in a systematic one-to-one instantiation by this special basic input-output system (BIOS) of the ‘plo' processor. We suspect that the inherited original ‘machine language' genetic code input, when translated from the newborn's DNA, gets incorporated into (and modified by?) the acquired phonemic and facial expression inputs from the lactating mother via cortical mirror neurons, as discussed briefly in various chapters. ‘Meaning' to the newborn (proto-semantics) gets somehow structured into a proto-syntax in the ‘plo' processor. The neuro-humoral reward-punishment system of Olds-Pribram (connecting the midbrain nerve trunk and ‘plo' with the forebrain executive area via the Medial Forebrain Bundle) may be intimately involved in the original and subsequent valence classification of environmental (internal & external) inputs. Somehow a systematic audio-visual (or other sensory) input facilitates the formation of ‘inferential' codelet loops that, added to other relevant modular inputs (visceral brain, talking brain, non-dominant brain, etc.), will configure the resultant of ‘all things considered', a "thought". Whether this final event precedes a putative motor adaptive response or not (see Libet's timing data) is open to debate and should not necessarily put into question the existence of a ‘free will', for the reasons already discussed above.
The big problem still remaining is, of course, how to explain what kind of ‘sentential' logic structure guides the jazz pianist when improvising his music, or the artist when moving the brush over the canvas. We believe there is no conscious thought guiding that kind of performance; we discuss this problem in some detail in chapter 19.
How would one start laying out the groundwork for developing a model of a linguistic generation of mind? Following closely in the steps of the British empiricist Locke, Dr. J.A. Fodor took a first step (see "The Language of Thought," 1975). Thereafter neuroscientists and philosophers alike abandoned the search for explanations of the meanings of spoken words to concentrate instead on the ‘contents' of mental representations, in the hope that therein somehow originated the ‘meanings' of words (see Grice's essay "Meaning Revisited," 1982).
Within the scope of the ‘bps' model the family is the structural / functional unit of viable human existence (see Erich Fromm's "Man for Himself," 1947), and consequently it is not far-fetched to speculate that language may have evolved in order to ease and synchronize the correspondence in mental states between parents, siblings and one another. For the reasons already stated above we have to both agree and disagree with Dummett when he stated that ‘the fundamental axiom of analytical philosophy' is that "the only route to the analysis of thought goes through the analysis of language." We agree because it is easier to infer from a well-established language syntax structure encoding semantics than from the opposite view, which requires an elusive structure of mind to infer from. Yet, as we will argue, language structure is intrinsically semantics-neutral, its meaning to be discovered in the mental state / representation of both the speaker and the listener who animate it. In so doing we must resist the temptation to confuse the map with the territory it represents, the cognition of ‘how' with the cognition of ‘that', the epistemology with the ontology. The worst possible scenario would be that the resulting analysis only translates our current grammatical description of ‘mind' into a richer theoretical system without substantially improving on the older explanations, leaving us at square one, as Wittgenstein mocked the analytical philosophy effort. We have tried all along to identify those other fundamental concepts the dyad language-->mind is necessarily related to, and to establish the connections thereto.
This analytical strategy, as described, already supposes a commitment to two important tenets of cognitive science: the contents of ‘mental states' (beliefs, desires and other intentional states) can be represented (brain-encoded) as functional isomorphs (symbolic representations) such that reasoning becomes a formal (logic) manipulation (computer processing) of such representations (symbols) according to a set of non-semantic rules (e.g., a program). The credibility of such an approach rests on the premise that any logic operation applicable to syntax can be either duplicated or emulated by a computer (after Turing). Implied here is that ‘mental representations', as described, carry both syntactic and semantic properties (see below for more on properties). The important conclusion is that syntax structure programming thereby becomes causally efficient in both the computer and the brain, as long as the relevant functions can be formalized (programmed). This makes logical ‘inferences' possible, the hallmark of reasoned thinking. In this way a "Language of Thought" (LOT) or ‘mentalese' is modeled by Fodor, as discussed elsewhere in the text. It is clear that this model requires linear sequential input processing, cannot explain what it is like to have a feeling (e.g., qualia) and does not explicitly spell out whether language communicates thought or participates in the formation of thought (as discussed in a previous chapter, where Fodor successfully defends a ‘nativist' idea using a combinatorial argument). Furthermore, Fodor's ‘Mentalese' model supposes, like ours, that language precedes the formation of thought but, unlike ours, that the meaning of an assertion (its semantics) is encoded in the syntax arrangement according to a ‘propositional attitude' structural representation. For example, if I have a thought that refers to George W. Bush and the WMD, it is because that thought is a relation to a coded mental representation that refers to the US President.
If I think "Bush invaded Iraq in 2003" it is because I am in a particular functional relation (characteristic of belief) to a representation that has the content "Bush invaded Iraq to destroy the WMD in 2003" (e.g., Tarskian semantics).
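To make the LOT picture concrete, the relation-to-a-representation idea can be caricatured in a few lines of code. This is only a toy sketch under our own simplifying assumptions; the names `Prop`, `Attitude` and `modus_ponens` are illustrative inventions of ours, not part of Fodor's formalism. The point it shows is the one made above: a rule that inspects only the shape of symbols can still license an inference.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prop:
    """A structured symbol: a predicate applied to arguments (pure syntax)."""
    predicate: str
    args: tuple

@dataclass(frozen=True)
class Attitude:
    """<S a that p>: subject S stands in attitude a to a representation p."""
    subject: str
    attitude: str     # e.g. "believes", "desires"
    content: Prop

def modus_ponens(beliefs):
    """A non-semantic rule: from 'if p then q' and 'p', token 'q'.
    It inspects only the shape of the symbols, never their meaning."""
    derived = set(beliefs)
    for b in beliefs:
        if b.predicate == "if" and b.args[0] in beliefs:
            derived.add(b.args[1])
    return derived

p = Prop("invaded", ("Bush", "Iraq", "2003"))
q = Prop("had_WMD_motive", ("Bush",))
conditional = Prop("if", (p, q))

belief_box = {p, conditional}
new_beliefs = modus_ponens(belief_box)
state = Attitude("S", "believes", q)   # S now tokens <S believes that q>
print(q in new_beliefs)                # True: q was derived purely syntactically
```

Nothing in `modus_ponens` "knows" what invading or WMD mean; the semantic value is inherited, on Fodor's picture, from the relation the subject bears to the derived symbol.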
As we enunciated above, we differ on non-trivial aspects of this interpretation and believe that an in-house proto-semantic archetype precedes and dictates syntax and its subsequent development, through a layered build-up of the inherited structure under the external influence of acquired language parameters derived initially from the mother, siblings and others. But this is just an informed intuition in its embryonic stage, as we will expose below. We hold that inherited proto-semantics precedes syntax, which is acquired from the mother and the environment.
Propositional attitude states, that is, states that occur at some specific moment in a person's mental life, have the sort of content that might be expressed by a propositional phrase proper to the subject's natural language. This variation still conceptualizes mental states either as tokened mental representations at the sub-personal nativist level (Fodor) or as imaged from natural language at the personal level (Carruthers). What is important is that it considers much more significant how the mental encoding came into being, where genetic memory (implicit and unconscious, as opposed to the global conscious or the Freud-Jung subconscious) levels of processing are controlling on behalf of ‘bps' survival imperatives. Our BPS model view may seem counterintuitive at first sight but, observing how computers carry out programmed instructions, it is easier to visualize a language generation of thought as operations performed over the mental representations in a given language than it is to extract a ‘meaning' based on a particular structure of syntax.
Is the syntax universal for all human languages? We think not. The inherited proto-semantics IS, and it will be fashioned into the future syntactic structure depending on the natural language acquired and other influences on mental development. This post-natal external stage of language development only partially vindicates the pre-Chomskian behaviorist (classical and Skinnerian operant conditioning) understanding of language learning and consolidation. Only cognitive science was able to explain the linguistic competence already observed in a year-old toddler with little or no experience, i.e., through internal brain mechanisms. It was the observed ability of toddlers to understand the difference between "the cat chased the mouse" and "the mouse chased the cat", or their equivalents formed by changing the position of the actors or their relationship (i.e., systematicity), and the toddler's natural ability to generate an unlimited number of sentences / thoughts from the limited set of lexical primitives proper to the age (i.e., productivity), that evidenced the innate presence of a universal grammar enabling them, in a primitive way, to formulate and confirm hypotheses. In the BPS model this is evidence of an inherited inner primeval language we call ‘genetic memory', whose brain location we have argued before to be in the perisylvian geography we call the ‘proto-linguistic organ' (plo). These generalizations may not apply to other aspects of communication such as sign, sound (music) or body language.
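The systematicity and productivity evidence can be illustrated with a toy generative rule. The grammar below is a deliberately minimal sketch of our own (the lexicon, `core_sentences` and `embed` are illustrative inventions), not a model of actual toddler competence: one combinatorial rule yields both role-reversed sentences, and one recursive rule makes the sentence set unbounded.

```python
import itertools

nouns = ["the cat", "the mouse", "the dog"]

def core_sentences():
    """S -> NP 'chased' NP: the same rule generates both orderings."""
    for subj, obj in itertools.product(nouns, repeat=2):
        if subj != obj:
            yield f"{subj} chased {obj}"

def embed(sentence, agent):
    """S -> NP 'believed that' S: one recursive rule, unbounded output."""
    return f"{agent} believed that {sentence}"

core = set(core_sentences())
print("the cat chased the mouse" in core)   # True: systematicity...
print("the mouse chased the cat" in core)   # True: ...both orderings present

s = "the cat chased the mouse"
for _ in range(3):                          # productivity: embed ad libitum
    s = embed(s, "the dog")
print(s)
```

A finite lexicon plus a recursive combinatorial rule is exactly the shape of the argument: the toddler's competence outruns any finite list of memorized responses.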
Communication for ‘bps' survival is predicated upon an efficient and reliable reciprocal sharing of ‘mental states' between a language producer and a receiver, and includes linguistic and extralinguistic modes of conveyance of intentionalities, a true ‘Theory of Mind'. As we said earlier, a system of information-carrying linguistic symbols as such, in either mode, is in principle neutral in meaning content until decoded by a receiver, regardless of whether that was the intention of the producer; it may just as well have been unspecific. The semantic content is not intrinsic to the arrangement of symbols except for an intended or un-intended receiver, who must extract its meaning if able to synchronize her mental state with the producer's.
We may extrapolate further and say that DNA composition, regardless of species, carries equivalent unit ‘symbols' (sugar, base, phosphate) and, when assembled and transmitted by inheritance, will not carry intrinsic information as such except for the species it was intended for, which must extract it via archetype activation. In this case we have to assume that, other than the unlikely heritable somatic mutations (?), the information coded into the germinal DNA was the result of a just as unlikely Lamarckian-like encoding of environmental survival information, which gets transmitted by inheritance and then activated in the newborn when triggered by an equivalent relevant stimulus in the new generation. This way newly hatched chicks will react violently to a projector slide showing a hawk in flight and not to one showing a duck (produced by reversing the direction of the same slide). This is a species-specific inherited response. A similar argument holds for the avoidance response triggered when we see (for the first time) a spider or a snake moving our way. The species-specific survival kit of multi-modal (e.g., audio-visual) codes for environment-specific information constitutes a genetic memory of sorts, to be activated should the same danger cue be present in the new environment. These are solid experimental facts, regardless of their mode of inherited transmission. This is reminiscent of Grice's ‘natural meaning', which requires no intentionality other than that present in the mental state of the receiver. If present, following a presentation of the ‘neutral' stimulus, a chain of reactions will ensue providing a meaningful adaptive response. The environmental stimulus is likewise affect-neutral, but adaptive responses will have an affective positive, negative or alert valence. There is no such thing as a neutral affective response.
This fact can be equated with our pain-pleasure affective system (see Olds, Pribram and others) associated with the periaqueductal grey (PAG), medial forebrain bundle (MFB), hypothalamus and cingulate cortex. It is a common experience to classify sensory, body-proper or dream inputs according to this primitive affective state, which we choose to postulate as a primitive ‘affective meaning' tag associated with phenomenal, conceptual, qualic or motor experience. We are not now able to specify whether the input information is tagged at the receptor, in the afferent pathways to intermediate association neurons or at the amygdaloid complex, as discussed in other chapters, but it has the salutary protective effect of screening and classifying all information input into the central brain. As we also discussed elsewhere, the amygdaloid complex controls the relay switch that immediately activates a neuro-humoral Cannon-type response when confronted with a life-threatening stimulus, or an endorphin-type euphoric response when the environmental information valence is positive. When in doubt (alert status), the organism will ‘freeze' and wait until more contextual information arrives from the hippocampal social memory, as explained elsewhere.
The proto-linguistic organ (plo) and its associated combo coupling the amygdala, hippocampus and cingulate cortex develop embryologically early on, in preparation for the more delayed myelinization of the primary and secondary sensory pathways converging into the angular gyrus and a more complete cephalization of functions requiring communication (Wernicke-Broca maturation) in coordination with an executive and adaptive-dispositive forebrain. This is the type of intrinsic brain universal-grammar anlage that is posited in the newborn, serving as a foundation for future linguistic development as sensory input and social interactivity get more sophisticated inside the context of the particular natural language adopted from the parents. In this way the natural language syntax structure will be learned and layered on the inherited proto-semantic structure that guides and colors its subsequent evolutionary profile. This summarizes the first stage.
Thus far there has been no overt intention to exchange information between two cognitive agents, only an unconscious, stereotypical, species-specific adaptive response to environmental cues whose information content / meaning is extracted internally, based on an activation of the genetic memory archetypes controlling and unleashing the appropriate physiological effectors (glands, smooth and skeletal musculature).
The second stage of linguistic development in the newborn is based on re-enforcing the proto-semantic database by adding new elements from the mother's facial expressions, cooing sounds, baby talk and surroundings, and classifying them into subsets of the three primitive affects as they become effective in reducing hunger, pain and general discomfort. All this activity goes on at unconscious and subconscious levels and is limited to expressing degrees of pain / pleasure affective equivalents reciprocally. The most important brain mediator in these developments is the cortical ‘mirror neuron' system discussed elsewhere in the text. Thus true communication starts by extracting meaningful information from an environmental cue in the first stage and, in addition, from mimicry of both the mother's sounds (phonemes) and facial musculature expressions (as analyzed at the oculomotor center), as visual, auditory, tactile and kinesthetic resolution develop further. As discussed elsewhere in the text, a primitive first-order awareness, mostly sense-phenomenal consciousness, will develop as soon as the newborn realizes she is different from the doll, the crib, the mother, etc., and not an extension thereof (see Piaget's "The Development of Thought," 1977). At this stage (the first year of life) Broca's ‘talking brain' connecting pathways are not developed sufficiently to entertain propositional arrangements of mother <--> son communications, a requirement to share beliefs and a sine qua non for effective reciprocal communication and a true ‘Theory of Mind'.
To illustrate, it has been demonstrated (Kaplan, 1989) how primitive indexicals (context-sensitive expressions) become modified by the linguistic maturation of the speaker as well as by extra-linguistic context experience, which varies (in content and meaning) with time, location and intentions. It is important to keep in mind that indexicals are ‘sui generis' in that their content in context A is derived from (refers to) an object in that context and is not a description of A.
Only when the toddler believes (mental state) ‘that p' (e.g., baby is hungry) and overtly communicates ‘that p' (body language), such that the mother extracts that meaningful information from the baby's cue and incorporates it by identities (both genetic and social memory) into her own meaning of the occurrence, has a belief been shared. At that point they have shared beliefs sans much elaboration of linguistic proficiency. The shared information, the semantics of it all, reflects an internal state of the mind, NOT an external state of the world.
This view carries important consequences. My view of existential reality, e.g., my belief system, primitively inherited as argued, may have been influenced originally by information extracted from environmental cues but ultimately will be a ‘view' of the internal state of my own mind, always hoping that it corresponds one to one with external reality, but NOT necessarily so! The eventual linguistic competence achieved will be the result of the contribution made by both genetic and social memories in creating a mental state, in harmony with the adopted natural language (initially via mimicry mediated by mirror neurons), from the internal, semantically-coached combinatorial syntax architecture. Consequently, a commonly shared natural language does not validate the truth value of literal linguistic meaning, even among identical twins! An identical world state is no guarantee of identical internal mental states among niche dwellers. Vive la différence!
It is clear to us that any model of consciousness conceiving language as its genesis or exclusive conveyance must incorporate in its development, besides the classical neuroscientific level of explanation, cognitive (representational theory of mind, RTM), connectionist and quantum mechanical algorithms to fill in the gaps left by the others' explananda. There are important conceptual areas of basic disagreement that must be negotiated, e.g., meaning, properties, relations, etc. If the complexity of the challenge is viewed through a BPS human-survival optic, then the relevant areas of investigation / analysis become clearly framed into one or more of the 5 classical aspects of a super-complex reflex arc: receptor, sensory circuits, interneuronal integrating circuits, motor circuits and effector. Only the retinal receptor and its associated afferent pathways to the occipital V1 cortex, with intermediate collateral branches to the mesencephalon and diencephalon, are very well documented. Likewise, the efferent arm of the arc has been well studied only in the oculo-vestibular reflex analysis of Llinas and Pellionisz involving the cerebellum and neck musculature. Most elegant theoretical renditions have sprung from such approaches, e.g., Crick's cortico-thalamic 40 Hz binding theory and Churchland's vector phase transformation theory, respectively. We do not anticipate a significant improvement in the level of research sophistication when directed at these two arms of the complex reflex arc. This leaves the interneuronal complex of integration as the natural and eventual focus of attention. The brain wetware can be considered as a compacted interneuronal phase-transformational complex where sensory input gets massively transformed into motor adaptive output during normal functioning (see Glynn's "Anatomy of Thought," 1999 and Feinberg's "Altered Egos," 2001).
Once the visual (or any other) receptor deconstructs the seeming continuity of the environmental sensory scenario into digitized, discontinuous events reaching the interneuronal compact, there is a vector phase transformation, and different algorithms continue the deconstruction into codelet (Kantian?) categories. The totality of the sensory codelets gets classified, partitioned and allocated to different virtual or real macro-locations in the not-so-hard disk of the wetware, whether in modules or in a recurrent distributed network fashion. It becomes the task of the interneuronal compact to reconstruct the ‘original' or an equivalent representational scenario when called for (the binding problem). The resulting integral may not necessarily provide an adaptive solution in neuropathology, but it will always reflect the dynamic equilibrium state of the constitutive modular elements charged with ‘bps' survival strategies. Having passed this test, the ‘solution' needs the intervention of an executive implementation to coordinate the best-fitting adaptive response of the effectors at the motor end of the reflex arc. This view is the typical functionalist picture.
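The vector-phase-transformation picture can be reduced to its bare linear algebra: a sensory state vector mapped to a motor state vector by a fixed transform, in the spirit of the Churchland and Pellionisz-Llinás proposals discussed above. The dimensions and weights below are arbitrary illustrations of ours, not fitted to any neural data.

```python
def transform(matrix, vector):
    """Multiply a sensory vector by the interneuronal weight matrix,
    yielding the motor-side vector (one dot product per output channel)."""
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

# 3-D sensory input (e.g., digitized features of a deconstructed visual event)
sensory = [1.0, 0.5, -0.25]

# 2-D motor output (e.g., two muscle-command channels); weights are made up
weights = [[0.8, -0.2, 0.0],
           [0.1,  0.6, 1.2]]

motor = transform(weights, sensory)
print(motor)  # the adaptive output derived from the deconstructed input
```

The sketch makes the functionalist claim visible: once input and output are coded as vectors, the whole "interneuronal compact" collapses, at this level of abstraction, into one matrix of coupling weights.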
Bridging the sensorimotor divide we find the theorist trying to identify a suitable algorithm appropriate to the computational task of the neurological wetware and capable of delivering an implementation task to the effectors. This is no easy task, because the algorithm must satisfy the isomorphic requirements of the input-output divide, a transducer of sorts. It would help if our theorist could specify the best symbolic representation of the massively parallel information flow to ease the transduction from input to output. Our mind is the algorithmic symbol processor in the interneuronal compact. Let us see how the argument may likely develop at the analytical philosophy level, and the unavoidable constraints and paradoxes it generates in the process. But consciousness research cannot stop at the test tube and oscilloscope lab, at the tip-of-the-iceberg view.
Now comes the qualitative jump of Fodor (1981), when he proposed the view that mental states are ‘relations' to symbolic representations. If the implied ‘meaning' ascribed to a logic propositional construction ‘relates' to a ‘mental state' in se, the latter will come to inherit the semantic value and intentionality (meaning) of the construction, where the syntactic arrangement determines the semantic ‘meaning'. E.g., the President (subject S) believes (attitude a) there are WMD inside Iraq (proposition p), or <Sa that p> in modal logic. A mathematical purist may argue that a strict canonical interpretation of set theory requires that an interpretation of semantics map the relevant terms exclusively into mathematical objects, an obvious impossibility here, which argues for the inadequacy of syntax to determine semantics. A complete demonstration is beyond the scope of this essay, but we can see at least that the meaning of proposition p is not identical with the meaning of its representation p*. The identity p=p* is untenable because it implies that there exists a two-place relation between an inscription and its semantic value, and it further assumes the possibility of a nonexistent correspondence (thought sharing) of meaning between a language producer and the receiver, unless mediated by a linguistic convention, something we argue can only be found in a genetic-memory-mediating interface. It may further be added that there exist many mental processes not reducible to algorithmic manipulations, especially when the argument draws from outside the defined problem domain and is thereby not purely inductive or processable by rule-based techniques. In the best possible scenario, that model does not provide for an ‘understanding' of the computations and, while it may be suitable to explain a first-order type of ‘awareness', it would be useless for the higher-order conceptual and introspective consciousness argued for many times before.
The same argument would still apply if concatenative linear symbolic processing is substituted by a non-serial, sub-symbolic distributed type (see McClelland's "Parallel Distributed Processing"). Smolensky's tensor space brings in interesting possibilities when coupled with the n-dimensional space accommodation of quantum mechanical interpretations of consciousness.
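Smolensky's tensor-product scheme can be sketched briefly: a filler vector is bound to a role vector by their outer product, bindings are superposed by addition, and (when roles are orthonormal) a filler is recovered by contracting the tensor with its role. The toy vectors below are our own illustrative choices:

```python
import numpy as np

# Orthonormal role vectors, so unbinding by inner product is exact.
roles = {"agent": np.array([1.0, 0.0]),
         "patient": np.array([0.0, 1.0])}
fillers = {"John": np.array([1.0, 0.0, 0.0]),
           "Mary": np.array([0.0, 1.0, 0.0])}

# Bind each filler to its role by an outer product; superpose by summation.
# The whole structure "John (agent), Mary (patient)" lives in one tensor.
structure = (np.outer(fillers["John"], roles["agent"]) +
             np.outer(fillers["Mary"], roles["patient"]))

# Unbind: contract the tensor with a role vector to recover its filler.
recovered_agent = structure @ roles["agent"]
assert np.allclose(recovered_agent, fillers["John"])
```

Note that the bound structure is still a purely syntactic object, a point the argument above applies to distributed representations just as it does to serial symbol strings.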
If we focus on the transition p --> p* = what --> how, we realize that for p* symbols to become a ‘mark of the mental' their ‘content' must have the ‘property' of being about something else (in the Brentano sense), i.e., they must have ‘intentional' states (e.g., desires, beliefs, hopes, etc.). One may ask, how does arranging the symbols into propositional statements animate the symbols with linguistically derived intentions, as in a computer? The program representations may have content-laden states but no independent intentionality. Why not reverse the causality vector and posit that an intrinsic, inherited, original intentionality ‘in potency' may realize that semantic potential via the acquired natural language tool and/or in response to appropriate environmental triggers, as we propose? Fodor's Psychosemantics is a variation of the ‘bps' internalist approach when it holds that the interactive causal connections of the representation with the external environmental reality it stands for provide a sort of derived ‘meaning' that fuels the represented symbols to influence the behavior of the rest of the system! This clever explanation is in sharp contrast with that of analytical philosophers of the same ‘internalist' persuasion who argue that intentionality need not be independently present in the physical state of a given symbolic representation, that it builds its semantic content from causal connections with other co-existing physical states (nodes) of the system (program). Both of these positions still imply that any supercomputer could have meaningful states without necessarily being introspectively aware of its own states. These models may explain sense-phenomenal consciousness (awareness) but never a higher-order type of introspective consciousness.
Apparently Dennett, contrary to Searle, does not think that introspective consciousness (self-awareness of intentionality) supersedes in importance the information-bearing, behavior-driving functionality of derived intentionality. This robotic animation with computer-derived, other-directed intentions is counterintuitive to say the least. An unconscious patient (still a better computer than any built!) cannot generate intentions simply because he cannot attain self-consciousness, an absolute sine qua non. As Chalmers suggested, you can substitute every neuron with a silicon chip, and the resulting robot, like the unconscious man, cannot have qualia or generate intentions independently. Searle expressed the same concern with his now famous thought experiment, the "Chinese Room".
But advocates of functionalism, a surviving branch of logical positivism, adopt a neo-behaviorist stance when defending the view that a mental state is ‘what it does', its functionality being based on its causal efficiency in producing a measurable result. Thus p = p* = p**, where the result p** is neither a structural nor a functional isomorph of p, leaving many intermediate black boxes between the real-life intention p and the observed behavior p**. This myopia of course implies that a simulation = a duplication if only the result is considered. Pain or pleasure qualia are, in this interpretation, just mental states known to be experienced by activation of their corresponding neural centers. Only in theory can we possibly isolate an independent property that depends exclusively on the way the underlying system is organized, an example of Chalmers's principle of organizational invariance. It has been demonstrated (Siegelmann, 1994) that some massively parallel connectionist distributed networks, such as we would expect to find in the CNS, cannot even be simulated on supercomputers. If some conclude: a. that a supercomputer is able to use environmental information creatively, b. that it understands and even has consciousness, and c. that evolutionary selection is predicated on overt behavior, then we can safely bet that such computers will be selected by evolution to succeed humans. Any takers among functionalists? :-)
Many readers may ask: what difference does it make whether the brain bears the mind or causes the mind state? After all, their argument goes, the semantic content in representations can only be judged by the measured effects it is able to produce; it need not be of a denotational character. The computer does not rely exclusively on its manipulation of structure-sensitive language symbols; it also connects to the external world by analog transducers and correlates interactively with hard-wired chip connections and other aspects of the program. Besides, they continue to argue, do humans always understand? The truth is that humans have been largely hard-wired by nature, both internally and externally, to react, to parse and to create associations between linguistic elements and their denotations, as machines do. This may all be true in part, but no computer has ever been animated like Stravinsky's Petrushka doll and remained so independently!
We may want to fancy splitting hairs with Fodor's dictum that "mental states are ‘relations' to symbolic representations" and ask further whether one can consider the undeniable physiological correlates characterizing the experience of a ‘mental state' (e.g., anger) as a ‘property' of an appropriate symbolic representation. The symbols must be able to instantiate their property content (e.g., anger) or at least derive it from other measurable properties that can be instantiated by appropriate manipulation of logical operations. One can code ‘is angry' any number of ways and provide examples of its instantiation in sports figures, etc., as exemplified by measurable correlates, themselves codifiable in any number of logically quantified relations to other symbolic representations (pulse, heart rate, blood pressure, etc.). Still, the code does not have an independent life of its own and depends on an interpreter (receiver) for the instantiation to take effect. This is the easy example; what if the linguistic predicative expression is ‘sui generis' and cannot be instantiated, e.g., ‘he is an angel', a ‘square circle', a ‘round square' or a ‘virgin'? How do you define the properties of un-instantiables? Do they exist empirically or inside any space-time dimension, can they be exemplified, are they necessary or contingent, can they be individuated? We must remember from previous discussions that ‘being' is very different from ‘existing'. Can a symbolic representation catch all of these nuances? Can it instantiate these properties minimally, with or without their affective component or qualia? If you are a neo-behaviorist or a scientist, all you may care about is that, no matter how different their intrinsic properties, two or more properties are the same if they cause the same nomological or functional effect in their instances. This way, a brachial plexus chemical block by injection is identical to cutting the same nerves' connections to the arm you are trying to anesthetize!!
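The easy and the hard case above can be contrasted in a short sketch. The thresholds and names below are hypothetical, chosen purely for illustration: ‘is angry' is encoded as a quantified relation over measurable correlates, yet the predicate only returns a truth value for an interpreter; nothing in the code feels anger, and an un-instantiable predicate admits no such coding at all:

```python
# Hypothetical thresholds, for illustration only -- not clinical criteria.
def is_angry(pulse_bpm: float, systolic_mmhg: float, cortisol_ug_dl: float) -> bool:
    """Encode 'is angry' purely as relations among measurable correlates."""
    return pulse_bpm > 100 and systolic_mmhg > 140 and cortisol_ug_dl > 20

reading = {"pulse_bpm": 112.0, "systolic_mmhg": 150.0, "cortisol_ug_dl": 25.0}
assert is_angry(**reading) is True     # instantiated via its correlates

# The 'sui generis' case: a predicate with a necessarily empty extension.
# There is nothing to measure, so the coding collapses into a constant.
def is_square_circle(x: object) -> bool:
    return False

assert is_square_circle("anything") is False
```

The asymmetry is the point: the first predicate piggybacks on empirical correlates an interpreter can check; the second can only be stipulated, never exemplified.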
Not all objects can have exemplifiable properties accurately constituted (encoded) as specified by axioms, like circles or squares, where identities can be established as long as the abstract specifications of the geometric theory are met. We say that properties that necessarily have the same encoding extensions are identical, but properties that necessarily have the same exemplification extensions may be distinct, like the exemplification of the property of being ‘round' in different objects, e.g., round squares = round circles. Empirical properties (low-order logic) are handled differently from the ‘many-placed' (higher-order logic) ‘properties' of metaphysical entities. As long as there is a demonstrable causal effect, empirical properties may be assigned higher-order status.
The antecedent arguments clear the way for a better understanding that the ‘relation' between an object and its symbolic representation may properly be considered a property itself. Relations also have orders or levels, from the two-place relation (e.g., <Republicans believe the President> or <the contender is taller than the incumbent>) to the ‘many-argument-place' relationships that arguably give credence to symbolic representations of meanings in a computer program, where the symbols are also related to other programs, hard-wired chips, transducers, sensors, monitors, etc. When the relation is to non-instantiable properties, including mathematical constructs, metaphysical logical conclusions, etc., the resulting conclusion or model will depreciate in credibility even when it may describe the truthful account of reality. The same holds for propositions when considered as limiting cases of properties. Instantiations may not qualify as properties because they become their object, i.e., there are no intermediaries and they are no longer related causally. The religious ritual of Transubstantiation instantiates the body of Christ in the ‘Host' in a symbolic, non-empirical way, a truth that becomes validated in those holding that belief (faith).
This preceding elaboration brings us finally to the reason why our ‘bps' model's position, that an inherited proto-semantics precedes formal syntax structure in the generation of language and thought, is more tenable than the classical causation view that reverses the vector of causation to syntax --> semantics. ‘Meanings' (‘that p', e.g., beliefs) should be considered in all cases as complex predicates in the propositional attitude equation <Sa that p>. A syntactic structure of a complex predicate is not meant to exhibit the internal structural nuances of a complex property, but rather to evidence in a general way that property's position in the logical network of properties. An eminently structured specification like linguistic syntax should aim at becoming a natural device for singling out a specific member among a structured realm of possible entities, identifying it by its place (its logical location) in that domain. The ‘bps' model makes it possible for language syntax to become that kind of device when nourished and fashioned by a genetic memory input and early environmental influences within the context of an adopted natural language. It is our belief that the successful use of complex hyperstructured predicates and structured metaphors to denote empirical, structured specifications (measurable properties) in Artificial Intelligence (AI) has unduly driven some of the best analytical minds into the naïve faith that ALL properties are literally structured. We have provided examples to illustrate how even the definition of what a property is, is put into question! For all we know, the complex mental ‘properties' themselves may not even have a tangible structure to get hold of and translate into symbols.
Dr. Angell O. de la Sierra, Esq.
Deltona, Florida Winter 2003
Hope you enjoyed it!
End of Book