The Two 'Faces' of the Brain

INTRODUCTION

Modern psychology and neuropsychiatry have given us a fairly reliable introspective knowledge of the subjective self. Successful simulations of cognitive scenarios using computer logic programs by experts in Artificial Intelligence (AI), the unraveling of the interconnectivity of neuronal synapses in the brain revealed by the neuroscientist, and the panoramic view of the neurophilosophers have each contributed important pieces to the big puzzle of understanding how our mind / consciousness is related to our perceptions of the natural objects constituting our objective reality. How are these perceptions influenced by our own self-perceptions (introspections) of body and mind? What role does the brain itself play as an interface mediator bridging the objective environment (body proper and external) with the subjective self?

These questions seem more and more to be the relevant aspects of brain function to focus on, especially the often forgotten but inevitable mediators in these objective / subjective transactions: the socially acquired language and the genetic, proto-linguistic potential. In a previous chapter we said: “Consciousness has three singular aspects, the external perception of natural objects, their attributes and their relationships, the introspection of self as an individual observer, both physically and mentally, and the fidelity of language as a necessary representation of both types of perceptions.” Now we would like to draw a brief distinction between the acquired language superimposed on Chomsky’s generative grammar and the genetic language, which we assume to be represented in the newborn’s brain as neuronal, combinatorial logic circuits.

        In chapter 1 we elaborated briefly on the neuronal coding, storage and retrieval of external sense data in the cortical strata. Now we would like to expand briefly on these processes to include the sensory receptor and brain codification aspects in the elaboration of a language intelligible only to the individual self, not to his analytic observer.

BACKGROUND DATA

       The gross neuronal connections responsible for feeding information about the body's internal environment into the brain were discussed in great detail in chapter 5 above. We said, in a capsule summary: “The most significant event during the pre-linguistic stage is the establishment of the neuronal interconnectivity between all visceral organ effectors and centers of neural control, at different anatomical levels, all servo-control systems independent of any cortical levels for base-level functioning… That is not to say that there are no connections between the ‘central autonomic’ loci (hypothalamus, limbic system, etc.) and the cortical levels of consciousness; they have been established by central stimulations in awakened patients (Penfield). The paucity of these connections belies their tremendous contribution to the ‘emotional’ content of thought beyond the pre-linguistic stage, when thought is articulated language, vocalized or not.”

At that time, we stressed how this ‘visceral’ neuronal configuration apprising the individual of his changing internal body conditions is very different from the way external sensory receptors inform consciousness of changing external environmental conditions. We wanted to question the high fidelity that external perceptions of objects in nature are credited with, especially when the incoming external information is commingled with information arising from the viscera. We saw how visceral information may utilize the same ascending neuronal pathways to consciousness along which external, environmental somatic information travels. We demonstrated how visceral states could be biologically linked to the external perceptions (voices, sounds, sights, cutaneous sensations, etc.) that reach consciousness. We stressed the importance of keeping this admixing in mind when elaborating a conceptual framework in the area of consciousness.

This time, we will examine very briefly how the external special sense receptors recreate the external (visual, acoustical, olfactory, gustatory, stereognostic) world by coding its sensory information in a language suited to be engrammed into the brain's physical strata. The object or event itself may eventually disappear physically but will leave behind representative scripts that reasonably codify its recorded (sensed) physical features. We have argued that the acquired language eventually takes over the codification process; from then on, this language will predominantly control the codified inferences about objects or events in nature. Notice the superior efficiency of language, now able to link a sequence of codified representations, making it unnecessary to double-check with associated memories (visceral or perceptual) during the production of speech.

SENSORY CODING

          The external special receptor organs represent the interface between the individual and his environment. Most important in this context are the special receptors (for vision, audition, etc.) in that they are able to receive information from objects or events at a distance, across what may conveniently be considered a material gap or space. Within certain boundary conditions, they are most often rather selective for a particular frequency in the electromagnetic energy spectrum traveling through ‘empty’ space. It is clear that objects in nature must ‘radiate’ their presence via a vehicular ‘wavicle’ traveling through space to be consciously identified by an observer subject, as long as the emitting object can stimulate the observer's sensory receptors at a given or resonant frequency level.

        The observer's cellular receptor surface (a charged membrane) will respond to an appropriate environmental change with an increased membrane permeability to ionic flows that alters its cellular resting potential. The resulting ionic imbalance in the medium will affect the resting potential of the conducting afferent nerve fiber near the receptor which, once itself depolarized to a threshold value, will discharge a propagated ‘action potential’ impulse along the membrane of its axon until it reaches the next cell in the sequence, usually separated from it by a synaptic gap. The synaptic junction controls signal transmission to the next cell in different ways, including inhibition. If we exclude the receptor / nerve connection (analog / analog), we can generalize by saying that the synapse is the information-processing element in a neuronal circuit. Incoming multimodal information from the environment gets ‘transduced’ into a unimodal propagated wave of electrochemical energy, the action potential, traveling along the membrane to the next synaptic gap or effector at a constant velocity, intensity and duration. The details of synaptic transmission are beyond the scope of this elementary account; for details, see my Vol. I, Human Biology, MightyWords.com. Suffice it to say that, in general, the intensity, duration and frequency of the environmental signal get coded into the activity of a train of action potential impulses traveling at a constant speed (determined by membrane characteristics). Stimulus intensity codes for the frequency of discharge, stimulus duration for the length of the train of impulses, and stimulus frequency for the frequency of the trains of impulses. All kinds of neuronal combinations are possible, all of which will code for the particulars identifying the physical object or event in nature. There are convergent, divergent, resonating, reverberating, inhibitory and facilitatory neuronal loops, to mention just a few of the many possible arrangements. We will find these multiplicities of arrangements not only in peripheral sensory coding but also at all levels of central neuronal information processing.
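As an illustration of the rate-coding principle just described, here is a minimal sketch in Python; the threshold, the gain and the linear intensity-to-frequency relation are hypothetical simplifications introduced for this example, not physiological values.

    # Minimal, illustrative sketch of peripheral rate coding:
    # stimulus intensity maps to firing frequency, stimulus duration
    # maps to the length of the spike train. All parameters are
    # hypothetical, not measured values.

    def spike_train(intensity, duration_ms, threshold=1.0, gain=50.0):
        """Return spike times (ms) for a constant stimulus."""
        if intensity < threshold:                    # below threshold: no action potentials
            return []
        rate_hz = gain * (intensity - threshold)     # intensity -> frequency of discharge
        interval_ms = 1000.0 / rate_hz
        times, t = [], 0.0
        while t < duration_ms:                       # duration -> length of the train
            times.append(round(t, 2))
            t += interval_ms
        return times

    # A stronger stimulus of the same duration yields more impulses,
    # each impulse itself being 'all or none'.
    print(len(spike_train(2.0, 100)), len(spike_train(4.0, 100)))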

          Perhaps the most important characteristic of neuronal loop assemblies, in our context, is the nested nature of these neuronal arrays, which can explain the most unpredictable of all observed results, their emergent nature. This simply means that you may predict the individual behavior of components a, b and c, yet when you integrate their behavior there will be a collective ‘emergent’ behavior d that you were not able to predict from the individual data, just as one would be hard put to predict the characteristics of liquid water by integrating the gaseous behavior of its oxygen and hydrogen constituents. Nested assemblies do not individualize their control center, as you would find in a pyramidal assembly controlled from the apex. Control can be anywhere in the assembly and, once a control center is established, all subsets within the purview of the controlling set are inhibited from performing individual acts not germane to the collective result expected.

It is possible to design ‘equivalent’ digital circuits to approximate just about any feature a neurophysiological measurement may discover; this is the bread and butter of AI. In a previous chapter we outlined the distributive nature of cortical and subcortical storage of sensory-derived information into ‘Kantian equivalent’ categories. The preceding discussion essentially described that ‘face’ of the brain observing nature through its sensorium. Let us now briefly discuss the salient points of how that sensory-coded information may be processed further in the more centrally located areas of the brain.

COMBINATORIAL LOGIC CIRCUIT EQUIVALENTS

The unit of structure and function in the nervous system is the neuron, but for cooperative activity we consider instead the reflex arc, which comprises a receptor organ, an afferent sensory neuron, integrating interneuron(s), an efferent motor neuron and an effector. The ‘all or none’ type of neuronal discharge upon environmental stimulation makes the sensory neuron a binary encoder. Most receptor cells have graded responses (an analog-to-analog conversion?), but that particularity will not be discussed further here. A device that is actuated by power from one system and supplies power, usually in another form, to a second system is called a ‘transducer’ (Merriam-Webster Dictionary).

A binary encoder is thus a transducer of environmental electromagnetic energy into a binary code of two digits, 1 and 0, representing respectively the presence (1) or absence (0) of a unit of environmental information. In other words, this neuron can be considered an analog-to-digital encoder (A / D).
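A minimal sketch of such a one-bit, ‘all or none’ encoder might look as follows; the threshold and the sample values are arbitrary illustrations, not data.

    # Illustrative 1-bit A/D encoder: any suprathreshold sample is
    # reported as 1 (presence), anything else as 0 (absence).
    # Threshold and samples are arbitrary.

    def binary_encode(samples, threshold=0.5):
        return [1 if s >= threshold else 0 for s in samples]

    analog_signal = [0.1, 0.4, 0.7, 0.9, 0.3]      # arbitrary analog samples
    print(binary_encode(analog_signal))            # -> [0, 0, 1, 1, 0]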

A ‘quantum’ defines the smallest unit of energy applied to a neuron that will elicit a binary-digit propagated response, an action potential. A measure of this response, or ‘resolution’, depends on the transition from resting to threshold potential and corresponds to the reciprocal of the number of distinguishable levels an n-bit code can represent (resolution = 1 / 2^n). A high-resolution system is one in which the distinguishable voltage step is minimal, i.e., n is large. The response time measures the delay between the environmental stimulus and the generation of an action potential.
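The resolution relation can be illustrated with a short sketch, assuming an input normalized to the 0..1 range (an arbitrary convention introduced here): an n-bit code distinguishes 2^n levels, so its relative resolution is 1 / 2^n.

    # Illustrative n-bit quantizer over a normalized 0..1 input range.
    # More bits -> more levels -> finer resolution (1 / 2**n).

    def quantize(x, n_bits):
        levels = 2 ** n_bits
        return min(int(x * levels), levels - 1)    # integer code in 0 .. 2**n - 1

    for n in (1, 4, 8):
        print(n, "bits -> resolution", 1 / 2 ** n, "| code for 0.62:", quantize(0.62, n))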

Mathematicians and neurophysiologists together have been able to create equivalent digital circuits to simulate the vast array of measured responses registered when recording either from individual cells or in the vicinity of neuron aggregates. One such equivalent circuit element is the “logic gate”, an array of active (transistors, etc.) and passive (resistors, etc.) sub-elements whose function is to control the relay of an applied input signal to its output. We usually find these various logic gates assembled inside ‘integrated circuits’. From three basic logic gates (the NOT gate or inverter, the AND gate and the OR gate) we can build many ‘logic families’. We can normally express the digital equivalents of neuronal assemblies in the form of symbolic logic diagrams or Boolean algebra. The very interesting results of these combinations are outside the scope of this elementary introduction. ‘Karnaugh maps’ have been designed to simplify the interpretation of combinatorial results involving any number of variables. The design of digital logic circuit equivalents is predicated upon the assumption that the brain processes information the way a computer does: it searches or encodes information, and executes, synchronizes, stores and decodes it, as a microprocessor unit (CPU) does. The successes of AI are a testimony to the fact that our inherited brain is able to unconsciously perform all of the arithmetical operations and logic functions that the Arithmetic Logic Unit (ALU) of a computer does. We have mentioned the sensory input, memory and motor output equivalents that would complete the analogy with the computer.
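As a small illustration of such combinatorial logic, the following sketch builds the three basic gates as Python functions and derives XOR from them; the composition shown is one conventional textbook example, not a model of any particular neuronal circuit.

    # The three basic gates, composed into a derived gate (XOR) and
    # printed as a truth table. The same composition could be written
    # as a Boolean expression or read off a Karnaugh map.

    def NOT(a):    return 1 - a
    def AND(a, b): return a & b
    def OR(a, b):  return a | b

    def XOR(a, b):                       # (a AND NOT b) OR (NOT a AND b)
        return OR(AND(a, NOT(b)), AND(NOT(a), b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))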

At the other end of the neuronal transmission, at the synaptic cleft, another event will be generated at the post-synaptic membrane of the next neuron in the sequence after a synaptic delay. This delay is due to the neurotransmitter's diffusion time between the pre-synaptic membrane where it was released and the post-synaptic membrane, which the neurotransmitter will depolarize in preparation for a succeeding event. In the simplest of all cases, the monosynaptic reflex, we may find that the post-synaptic membrane belongs to an effector (a gland or muscle cell). In this case we will witness a reversal of the coding process described, the conversion now being D / A, e.g., a muscle contraction.

In general an A / D conversion is more complicated than a D / A conversion, the big exception being when the latter corresponds to the transmission events on the ‘other face of the brain’, as we will discuss ahead. It is important to notice at this point how we can, for example, aim a laser gun at a subject from a distance and cause a muscle contraction in the subject. The external physical object was able to communicate some of its physical features through space and leave their imprint on another physical object, the subject's neuron pool, which converted the transmitted environmental analog energy into a digital code, generating a propagated action potential that reached the subject's effector muscle and caused its contraction. At the level of the effector organ the propagated signal was again digitized when it reached and depolarized the muscle cell's membrane, an A / D conversion. The internal ionic environment of the muscle cell was thereby altered, triggering an observable contraction, a D / A conversion.
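The whole conversion chain can be caricatured in a few lines of code; every number and function name below is hypothetical and is only meant to make the A / D > D / A sequence explicit.

    # Illustrative chain: analog stimulus -> spike count (A/D) ->
    # graded contraction force (D/A). All values are made up.

    def encode_stimulus(intensity, threshold=1.0):
        """A/D: suprathreshold intensity -> number of impulses in the train."""
        return 0 if intensity < threshold else int(10 * (intensity - threshold))

    def muscle_response(spike_count, gain=0.2):
        """D/A: impulse count -> graded contraction force (arbitrary units)."""
        return gain * spike_count

    spikes = encode_stimulus(3.5)        # e.g., a laser pulse of intensity 3.5
    print(spikes, "impulses ->", muscle_response(spikes), "units of tension")
    # Note that neither number lets us reconstruct the laser gun itself:
    # the code is about the stimulus, not the stimulus 'in itself'.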

This analysis may seem trivial until you realize the functional bipolarity of the excitable cell (neuron, muscle). On the one hand, the sensory neuron faces the external physical object, the laser gun, and is able to extract its only identifiable physical feature, the laser energy of a given intensity, duration and frequency, which it digitizes and converts into an intelligible code for subsequent processing. As we illustrated, this original encoding has to be decoded, converted into an action potential analog that carries the information to the effector muscle, another excitable cell, which must digitize the propagated signal again (that is all its membrane can do!) so that it may alter the intracellular ionic environment that makes it possible for the actomyosin complex to produce the muscle contraction.

We may ask at this point: if we analyze the various encoding / decoding interventions in this simplest of examples, can we deduce information about the physical ontology of the external object we perceived, the laser gun? Likewise, if we now try to analyze the neuronal events leading to the muscle contraction, can we deduce its physical object features (strength, tension, etc.) from the various encoding / decoding events? The answer is a resounding NO! These various interneuronal arrangements and their discrete energy transformations represent a cognitive structure able to relate the two relevant aspects of the subject's adaptive response (muscle contraction) to an environmental challenge (laser gun activity). It is clear that the neuronal loops responsible for the sequential A / D > D / A > A / D conversions play a dual role, one when facing the external physical object in nature and a very different one when facing the subject.

This conclusion is so important that we will allow ourselves another example dramatizing the importance of the intervening events. Imagine we are looking at guitarist Johnny Asia performing one of his avant-garde anglo-flamenco pieces from the distant bleachers. I can only register those salient features of the object (guitar) and event (concert) that reach my audio-visual sensors through space. The audio signal reaches my tympanic membrane and sets it vibrating at a specific intensity, duration and frequency, an A / A conversion. From then on a series of analog-to-digital conversions (cochlea, inferior colliculi, medial geniculate body, Heschl's cortex, etc.) progresses cephalad, corticopetally. The same can be said about the ‘electromagnetic’ video signals traveling through space to my retinas; the visual information similarly finds its way to the primary visual cortex (retina, superior colliculi, lateral geniculate body, cortical area V1, etc.). Thus far, we have modified our genetic neuronal ‘machine language’ by the encoding / decoding parallel-processing manipulations of the intrinsic combinatorial digital logic. The resulting physical neuronal structure represents an equivalent physical guitar, sound, sight and all. Now we can leave the concert and ‘carry’ the guitar and all home! We can ask the same questions as before: can we study this brain ‘engram’ ad nauseam and deduce from its analysis the ontological physical reality of the object guitar? Or can I study the detailed physical features of the guitar and deduce from them the combinatorial logic neuronal patterns configured to apprehend its form inside the brain? The answer is the same, it is impossible! The brain ‘engram’ neuronal structure has two faces with very different problems to solve. One captures environmental reality to form memories, the anlage of thoughts; the other provides meaning to the execution of the proper adaptive response therein coded. It also connects two physical objects simultaneously across space, an interface of sorts. In this last respect it acts like a detergent molecule, an interface binding two dissimilar, chemically incompatible physical objects (water and oil) together for a common good. Neither phase can ‘learn’ about the other, given their physical incompatibility. In addition, either phase can learn only limited things from the interface approximation that may allow some features to be deduced, like the degree of hydrophilicity of the ionic projections into the aqueous phase or the nature of the organic hydrophobic molecular configurations facing the oil phase. The duality of coding faces two different realities and codes for their revealed features, making them ‘intelligible’ for a putative subject seeking to extract meaning from that result.

Likewise, the cognitive structure of the brain ‘engram’ is the limiting factor in our knowledge about the empirical reality we perceive. If I had recorded the sight and sound of the guitar on a camcorder so I could play it back in my studio, I should not expect to find the guitar inside the recorder! I was only able to encode some of its revealed features so that I could decode and enjoy them in my studio. No one analyzing the material and spectral waveform of the sound should expect to gain insight into the ontological nature of the guitar in itself. These experimental data are not about the guitar or the listener as objects; knowledge about either one cannot be deduced or extracted from this information. The encoding is not the guitar but about the guitar; the final decoding is not about the listener but about the esthetic enjoyment of the music, about the self.

Can we now extrapolate this analogy to explain the conundrum of the brain / mind interface? To come closer to an understanding we must, first of all, relinquish as human beings any hope of ever attaining certainty in our knowledge of the true nature of physical objects in nature. We can still make predictions about some aspects of their structure and function by the methodologies of induction, deduction, logic or metaphysics.

A corollary of this premise is that we have to reckon with the fact that reality, as we experience it, is in our brain, not necessarily out there in nature. Many studies in neuropathology, neuropsychiatry and neurophilosophy substantiate that. Furthermore, if we concern ourselves with second order judgments, and notwithstanding claims to the contrary, Cantor's paradox still points to the logical impossibility of an observer making an objective, detached observation of an event while he is simultaneously part of the event, unless we can first demonstrate ubiquity! A member of a set X cannot observe the actions of X while at the same time participating in such actions.

One would be tempted to affirm that the features of external reality encoded into our physical brain ‘engram’ would never become manifest to our consciousness unless their content becomes relevant to a needed adaptive adjustment response. Yet we know that in the controlled absence of sensory input or motor responses from a subject, as during sleep, we can still corroborate by narration a state of consciousness (see the previous chapter, “The Natural Life of Thoughts”). Michael Gazzaniga has suggested that the mind interprets data the brain has already processed (‘engrams’?) at the instant it needs it, making "us”, the self, the last to know. Of course, this would apply to a first encounter with an object or event in nature. When the object we observe does not jibe with our experience (if we see humans flying with feathered wings!) we may go into denial, negating to ourselves that objective reality out there in nature. What we normally "see” is frequently an illusion, a view balanced according to experience and not at all what our sense receptors actually perceive in front of them. In that way even false memories can become part of our memory database and an autobiography can become a wish-list fiction. Gazzaniga also explores how the brain enables the mind, i.e., how the engram controls the adaptive motor response, a first crawl towards an understanding of how we become who we are.

THE INNER ‘FACE’ OF THE BRAIN

           While it may not be possible to demonstrate how consciousness supervenes logically on the cognitive structure of the physical brain (its network of neuronal digital processors), it is not difficult to conceive of a natural supervenience by explaining how consciousness or experiential phenomena may be inferred solipsistically (in the first person) as the result of these computational physical processings. They always seem to be linked systematically and causally. It has always been our recurrent leitmotif to distinguish between awareness, i.e., the ability to access relevant information from a cognitive structure for a willed (?) or programmed motor act of behavior, including a printed or verbal report, and self-consciousness, a second order judgment or metaconsciousness. We still consider the latter event as controlled by Cantor's limiting predictions. Awareness may be readily explained by the dynamics of a cognitive cyber structure and is subject to computer simulation. This computer awareness includes what is perceptually sensed by external monitors or related to an internal pre-programmed state. For example, a parked bicycle may be monitored by a video camera and, when the image is stored and compared with a gallery of bicycles in the database, the computer speaker may say “bicycle”, a perceptual, phenomenal awareness. An hour later the same computer may be ‘asked’ what conveyance it ‘saw’ in the last hour and it will again say ‘bicycle’, the last item stored in its memory, a propositional, access awareness.

One may ask whether the computer can experience pain or rage the way humans do. We can program the computer to respond to a stimulus pattern any human would consider noxious and capable of evoking all of the psychomotor manifestations of rage behavior. I can stimulate the hypothalamus in a human and elicit a similar response, called ‘sham rage’. Only the second order judgment will be absent from the computer. There is a logical impossibility in designing a metaconsciousness for a computer. It is also impossible to design a cognitive machine routine to mimic the rage ‘emotion’, which must be inferred from its motor behavior concomitants. Yet there is a tight correspondence between the cognitive neuronal engram and a particular observable behavior, as the example illustrates. Chalmers called this correspondence “structural coherence”. The conscious verbalization of this specificity or coherence is seen during surgical electrical stimulation at specific brain loci. Can we even be sure that in this case the verbalization evidences consciousness rather than the outright verbal discharge of a coded cognitive content? When a cortically blind patient avoids being hit by a ball thrown at him, is he conscious of the adaptive behavior being displayed, or is it a programmed avoidance reflex response? Here we find an undeniable, strict correlation between the perceptual event processing (one ‘face’ of the brain), the avoidance adaptive experience (the other face of the brain controlling behavior and / or verbal report) and the physical cognitive structure inside the brain separating both events, properly called a first level judgment or state of awareness.
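The parked-bicycle example above can be caricatured in a short sketch; the class and its names are made up for illustration, and nothing in it pretends to model second order judgments.

    # Illustrative 'awareness' monitor: the perceive() report corresponds
    # to perceptual (phenomenal) awareness, the recall_last() report to
    # propositional (access) awareness from stored memory. Names are
    # hypothetical.

    class Monitor:
        def __init__(self, gallery):
            self.gallery = gallery           # known object templates
            self.memory = []                 # stored identifications

        def perceive(self, image_label):
            if image_label in self.gallery:  # match against the gallery
                self.memory.append(image_label)
                return image_label
            return "unknown"

        def recall_last(self):               # report from memory, not from the senses
            return self.memory[-1] if self.memory else "nothing"

    cam = Monitor(gallery={"bicycle", "car"})
    print(cam.perceive("bicycle"))           # -> "bicycle"
    print(cam.recall_last())                 # an hour later -> "bicycle"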
The qualitative jump from first order consciousness to a second order metaconsciousness is the great black-box gap: the two are connected causally, but the metaconsciousness state does NOT supervene logically (only naturally) on the physical cognitive structure programmed into the brain by genetics (generative grammar) and a superimposed social experience, a sort of ‘metabrain’. An exposition of how a physical space in the brain is converted into an informational space is outside the scope of this short presentation. Is metaconsciousness an epiphenomenon of the metabrain, with natural causal but not logical supervenience relationships? If we were to conclude, as functionalists do, that perhaps metaconsciousness is an illusion we need not worry about, then the metabrain cognitive structure model would bridge the gap between mind and brain. However, as long as there is a will capable of exercising control over the metaconsciousness state, we must leave open the possibility of non-physical influences bearing on this special state.

METABRAIN CONTROL OF THE SELF

          It has been argued that the will to act is in itself an unconscious act arising from those cognitive combinatorial structures we have called the metabrain and directed to the ‘homeostatic’ preservation of the psychological integrity of the self. We are now looking at the other face of the brain. This proposition is reinforced by neuropsychiatric and neuropathological findings; this is its forte. Its weakness is that, on its account, a ‘normal’ person would not be expected to display behavior contrary to his physical or psychological best interests. Yet we do witness such behavior in individuals involved in heroic acts or unselfish acts of altruism. Mother Teresa of Calcutta is a recent good example of such personal sacrifice for the benefit of others.

         The arguments connecting the ‘metabrain’ combinatorial and cognitive structure to the preservation-of-the-‘self’ model will be discussed below. It should first be noted how these ‘functionalist’ interpretations have taken over the field; it is no longer considered necessary to scrutinize the chemical or quantum-physical properties of the physical brain to explain the conscious state. This approach has many advantages in that the model's emphasis is on ‘functional’, not structural, organization, i.e., how many abstract components there are and the different possible states they can assume under boundary conditions, plus a definition of the causal relationships controlling state transitions. It looks as if ‘functionalists’ have adopted information theory as their front line of attack and defense. Wherever the same conditions are met, in a neuronal or a silicon array, you should expect identical results, the principle of organizational invariance for ‘functional isomorphs’.

As we said earlier, the ‘homeostatic’ preservation of the psychological integrity of the self becomes the centerpiece of yet another functionalist approach to explaining consciousness. To start, one may ask: how do you maintain an integrated self in a distributed, differentiated system? The answer is borrowed again from AI, from Medawar (The Life Science) and from C. Lloyd Morgan, who essentially consider the brain a hyper-complex nested hierarchy of functions with the features we mentioned above, one resisting an ‘ontological reduction’ (after John Searle). This is so because of the emergent new feature, the mind, which its defenders also equate with the ‘self’. In an ontological reduction an emergent complex object can never be demonstrated to be caused by the reunion of other objects of different, simpler types. See my Amazon.com review of “Altered Egos”, Oxford University Press, 2001. These authors believe that purposeful action (achieving physiological and psychological homeostasis) IS a constrained ‘teleonomic’ (goal-directed) driving force behind the nested hierarchy we have called the metabrain ‘program’, leaving in the process no room to accommodate free will or human purpose. It is true that the lowest levels of brain activity, the visceral functions, are nested very tightly to preserve the biological integrity of the species, as compared to neocortical functions, looser, more flexible connections that must accommodate and superimpose new information proceeding from an ever-changing environment. This explains how much easier it is for organic disease to disrupt acquired social behavior. In reality this new functionalist approach, equating consciousness with an emergent product of a nested hierarchy, our metabrain, still begs the question. It is another epiphenomenal interpretation in which the emergent consciousness supervenes naturally but not logically on our physical metabrain; what else is new? It is not at all clear why Dr. Feinberg, having argued persuasively on behalf of his functionalist model, then comes to his senses by concluding towards the end of his book that “the mind is subjective and personal, and…can not be reduced to the brain”? In our opinion, Dr. Feinberg's contribution is a valuable one if confined to the issue of how awareness, or first order judgments, can be generated by that cognitive combinatorial ‘engram’ program we have called the ‘metabrain’. To claim second order judgment capabilities is not warranted by his most interesting clinical data.

We had suggested in a previous chapter how a visceral brain might accomplish for visceral, biological survival-related homeostasis what the right frontal neocortex may accomplish for social adjustment and survival, as narrated by Dr. Feinberg. In our opinion, he strained to fit his explanation, as he should when addressing scientists, into a physicalist mold, leaving out such self-evident human experiences as the exercise of free will, which has resisted reductionist attacks from time immemorial. We consider it another attempt at formulating a valuable virtual reality model and trying to extend its reach into an ever-receding scenario of infinite reality.

We still must explore how generative grammar may be interwoven into the logic fabric of the pre-metabrain so that acquired language may adequately serve as a vehicular conduit of thought during the conscious state. It is of some interest that Dr. Feinberg admits the participation of the limbic system in our metabrain function in giving form to the manifestation of self. This is another way of saying that the visceral brain is inextricably commingled with the metabrain in the formulation of thoughts, however abstract and objective we try to structure them.

CONCLUSIONS

In this latter context, we would like to quote from one of our previous publications: “Transcending virtual reality into an eventual certainty of infinite reality is not new. Leibniz, premised on his assumed rational structure of reality, considered that events, when expressed in a language of logic statements, represented an ‘…alphabet of human thought…’” Consider for a moment the following thoughts. All knowledge requires an ability to identify, catalogue and compare with previous experiences in memory, when present. This is true not only of perceptions during the cognition / recognition of natural objects but also of body-state introspections and of recalls of previous perceptions or distortions thereof, as found in fantasies or even logical impossibilities, as Dr. Feinberg demonstrated. The common denominator in both perceptive and introspective cognitions is words, the vehicular conduit to consciousness. Sensations, whether blurred by inarticulate viscerogenic images or sharpened by the acuity of exteroceptive recalls, are poor substitutes and make knowledge indeterminate, a mere awareness of a presence we are not able to communicate or even remember with a high degree of fidelity.

Environmental (outer, inner) perceptions are coded or linked by structured word meanings or by less structured non-verbal equivalents. We may be able to identify an event by adequately combining its perceived sensible qualities, but to elaborate an idea about it will require word coding. The most significant and comprehensive cognition of an object or event is obtained when the cognitive elements of environmental perceptions are coded into an abstract word like sadness, anger, happiness, etc., which recapitulates a broad spectrum of real-life experiences. Yet one may be certain about an environmental perception but totally mistaken about the word-coded inference that substitutes for it in the metabrain's linguistic combinations. The undeniable, self-evident existence of intuition makes the process of deduction a reality in that orderly single propositions are combinatorially sequenced as cause > effect. However, there is another important caveat we discussed briefly in a previous chapter. The indiscriminate use of abstract language to represent a connectivity of ideas presupposes a logical cause > effect sequence not necessarily present in the environmental objects being coded for. Ideas, or the structural sequence of words therein contained, ONLY represent their independent mark or sign. Being oblivious of this fact makes it possible for brilliant language parsers to impose a particular ‘logical’ understanding.

        There is more to the phenomenology of perceptions than meets the eye. Sometimes we have to access the metabrain database to re-enact a scenario beyond the resolution capacity of our senses. The sad truth is that science, and even logical reasoning, IS subjective.

A syllogism is the sort of discourse where, once certain assertions are accepted as stated, something else, different from what is being stated, follows of necessity from their being so. By the inductive process we aggregate particulars to arrive at an always uncertain generalization, while during deduction we arrive at particulars from universals. The philosophical or logico-mathematical structures we construct by accessing our metabrain will never capture the realities they hope to represent. All thought is clothed in language and is inseparable from it. It is language that imposes, by best fit, a structure in the way it categorizes. Visceral sensations are essentially pre-conceptual, as are vegetative desires, anger, pain, pleasure, etc. When environmental perceptions reach a critical aggregation mass they become alive to sense perception and can thus support thoughts (language-coded). Therefore thoughts, while independent of the individualized atomic constituents that cause them, are dependent on their aggregate structure. The sense-captured intuition is related by induction to the external atomic constituents (or their external causes) in themselves, which are unintelligible to us. Thus, a perceptive ‘thought’ or awareness cannot be coded into a word equivalent, except when the intuition is linked to codeable information simultaneously generated by the event. The thought is nothing more than a reasoned inference from the perceptual event that caused it, vague, general and inexact, as it must be, since words do not have independent realities. A metaphysical best fit of words to the material objects / processes they designate is possible only in the abstract.

There is as much reality in a cause as there is in its effect. Can a finite psychological introspective effort ever understand and describe the self or the infinite human soul? Ruling out sense perception, how can reason help? Reason can only teach us something about finite objects identified in, or causally linked to, an experience. If the self or the human soul were a substance or a subject, we could only capture its reality through its accidents or predicates. If we strip reality of its accidents or predicates, what remains out there of the entity we wish to describe? The object itself remains outside our intellectual reach; it is not part of our world, it is outside of it, and we have to make do with the representation.

End Chapter 7