Is basic consciousness in early animal forms?

coberst


Antonio Damasio is a scientist who has set out to organize a scientific study of human consciousness. His method is distinctive: he carefully observes individuals who have been deprived of some aspect of consciousness by brain lesions, studying the dysfunction caused by strokes, accidents, and similar injuries.

Damasio finds that “nearly all the sites of brain damage associated with a significant disruption of core consciousness share one important trait…these structures are of old evolutionary vintage, they are present in numerous nonhuman species, and they mature early in individual human development.”

That is to say, his evidence indicates that core consciousness is centered on brain structures that developed very early in the evolution of life on our planet; in other words, human core consciousness evolved directly from early animal forms.


The basic facts available for analysis support the hypothesis that consciousness is not a monolith. Most importantly, there is a sharp division between what is identified as core consciousness and extended consciousness, and there are also distinguishable levels within extended consciousness itself. When core consciousness fails, extended consciousness fails with it.


Many nonhuman creatures have emotions, but “human emotions however have evolved to making connections to complex ideas, values, principles, and judgments,” and in that sense human emotion is special. The impact of feelings on humans is the result of consciousness, and there is a distinct difference between having a feeling and knowing a feeling: “neither the emotion or the feeling caused by the emotion is conscious”; these things happen in a biological state. There are three stages here: emotion, feeling, and consciousness of feeling. Consciousness must be present if feelings are to have an influence beyond the here and now, and consciousness is rooted in the representation of the body.

We need not be conscious of the emotion or the inducer of the emotion—we are about as effective in stopping an emotion as in stopping a sneeze.

“Emotions are about the life of an organism, its body to be precise, and their role is to assist the organism in maintaining life…emotions are biologically determined processes, depending upon innately set brain devices, laid down by long evolutionary history…The devices that produce emotions…are part of a set of structures that both regulate and represent body states…All devices can be engaged automatically, without conscious deliberation…The variety of the emotional responses is responsible for profound changes in both the body landscape and the brain landscape. The collection of these changes constitutes the substrate for the neural patterns which eventually become feelings of emotion.”

The biological function of emotions is to produce an automatic action in certain situations and to regulate internal processes so that the creature is able to support the action dictated by the situation. The biological purpose of emotions is clear: they are not a luxury but a necessity for survival.

“It is through feelings, which are inwardly directed and private, that emotions, which are outwardly directed and public, begin their impact on the mind; but the full and lasting impact of feelings requires consciousness, because only along with the advent of a sense of self do feelings become known to the individual having them.”

Damasio proposes “that the term feeling should be reserved for the private, mental experience of an emotion, while the term emotion should be used to designate the collection of responses, many of which are publicly observable.” This means that while we can experience our own private feelings, we cannot observe those same feelings in others.

Core consciousness—“occurs when the brain’s representation devices generate an imaged, nonverbal account of how the organism’s own state is affected by the organism’s processing of an object, and when this process enhances the image of the causative object, thus placing it saliently in a spatial and temporal context”

First there is emotion, then comes feeling, then comes core consciousness of feeling. There is no evidence that we are conscious of all our feelings; in fact, the evidence indicates that we are not.

Humans also have extended consciousness, which takes core consciousness to the level of self-consciousness and the awareness of mortality.


Quotes are from The Feeling of What Happens: Body and Emotion in the Making of Consciousness by Antonio Damasio.
 
Hi, coberst,

Last year I posted an entry in Philosophy Forums (The mind and the brain: Philosophy Forums) on this topic. I repost it here as a possible confirmation of Damasio's thesis:

Hi, unenlightened,
...
First, some terminology. Although this thread is replete with talk about "mental objects" (thoughts, minds, etc.), I infer from your earlier contributions that you agree that these objects are not participants in the causal process along with neurons and axons and muscles. They are a shorthand for talking about the mental perspective of people (and perhaps other animals).

Roderick Chisholm, in his investigations into human action, concluded that among the things we know about other people is that they believe certain things, they intend certain things, they expect certain things. (Take a drive down the freeway, and you find yourself constantly making predictions about what other cars will do, based on your interpretations of what their drivers intend.) Chisholm classified these facts about people as intensional (with an 's'). By that he meant (at least) three things:

  1. The predicates of the propositions that express these facts are in some sense psychological.
  2. The subjects of those propositions are people (or other beings to which it is meaningful to ascribe psychological attitudes).
  3. There is no way to analyse (i.e., reduce) those propositions (facts) into equivalent sets of propositions (facts) that do not contain intensional propositions with the person as subject.
This last is critical: it is effectively a denial of Cartesian reductionism for human action.

Chisholm refuses to follow Descartes along the path of analyzing (reducing) human action to causal chains, some of whose links are physical and others mental. Rather, he is saying that human action is holistic, a process with a top-down control structure, rather than a bottom-up control structure.

In his rejection of the mental links in the chain, Chisholm anticipated such philosophers as Dennett and Hofstadter, both of whom reject the Cartesian demon (which Descartes hypothesized inserted its influence through the pineal gland) and even the Cartesian "theater" (there is no place in the process of conscious action where all our physical systems report to the real me). All seem to agree that if you analyze conscious action into its causal components, what you find are physical events, and physical events only.

Dennett concludes from this that consciousness is an illusion, or at least he likes to use that language. Hofstadter, on the other hand, although he also sometimes characterizes consciousness as an illusion, makes a very interesting observation: our ability to predict what a person will do improves with the precision with which we can specify the content of these intensional propositions, i.e., the more precisely we know what he believes or intends, etc., the better we can predict what he will do. But the details of the content of intensionality don't seem to correlate with what we can measure about brain activity. If this is correct, then it might mean that the wonderful details that Mars Man has supplied might be off the track. That of course is speculative; new research might produce a better correlation.

But what has this to do with the emergence of consciousness from brain activity? Consider the following hypothesis.

During the course of evolution, higher level systems have emerged out of lower level systems. Complex, "active" molecules emerged out of simple, "inactive" molecules. Single-celled organisms emerged from complex molecules. Multi-celled organisms emerged from single-celled organisms. Survival at any of these levels is tenuous and chancy. Any organism is subject to attack from below (as when we are attacked by germs), from above (as when we attack germs with germicide), and from the same level (as by predators). Some organisms improve their survivability by prolific breeding; some, by incorporating bio-chemical protection (invariably achieved by hijacking the capabilities developed by bacteria). Some (including at least fish, reptiles, dinosaurs/birds, and mammals) addressed the challenge by learning to control the dangers and opportunities at the same level. They learned to recognize objects at that level that they could eat, that could eat or otherwise harm them, and that they could mate with.

But what does this "recognition" mean? We know that certain molecules "respond" automatically by reconfiguring themselves in the presence of other molecules with certain features, but we don't think of those automatic reactions as conscious. What's the difference?

We can at least imagine that Nature could have created complex organisms whose behavior was entirely automatic, that Nature could have given each organism a survival algorithm, in which case there would have been no need for conscious awareness. And indeed that seems to have worked for the entire plant kingdom: prolific breeding plus occasional biochemical protections seem to have sufficed for plants, without any need for consciousness. That seems not to have been Nature's strategy for animals. The situation facing animals, especially at our level, seems to have been too complex and too varied for Nature to have found a survival algorithm suited to our needs.

One of the reasons for Nature's failure to find that algorithm is information overlap. To oversimplify, animal behavior can be classified into what have been called the four F's: feed, flee, fight, and mate. A simple set of algorithms would take the form:

  • When A is present, eat it.
  • When B is present, flee it.
  • When C is present, fight it.
  • When D is present, mate with it.
Then the organism could evolve an appropriate monitoring subsystem for each of its behavioral subsystems. Thus the digestive (sub)system would have an apparatus for recognizing A's, the flight (sub)system would be able to recognize B's, and so on.

Unfortunately that approach runs afoul of the fact that A's, B's, C's, and D's have common elements. If a gazelle takes an opportunity to sip some water without noticing the ripples made by the crocodile, its survival is compromised. More importantly, survival often depends on making choices in complex situations where dangers and opportunities are not clear and have to be calculated. Conscious actions are the choices we make based on our recognition of the elements of these complex situations.
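
To make that overlap problem concrete, here is a toy Python sketch (entirely my own construction, with made-up cue names, not anything from Damasio or the original post). Each subsystem monitors only the cue it was built for, so nothing in the design can weigh the sip of water against the crocodile:

  # Hypothetical illustration of "one rule per behavioral subsystem".
  SITUATION = {"water", "ripples"}  # a gazelle at the water's edge

  def feeding_subsystem(cues):
      # Only knows about food and drink cues.
      return "drink" if "water" in cues else None

  def flight_subsystem(cues):
      # Only knows about predator cues.
      return "flee" if "ripples" in cues else None

  # Each subsystem fires on its own trigger; nothing weighs one against the other.
  for subsystem in (feeding_subsystem, flight_subsystem):
      action = subsystem(SITUATION)
      if action:
          print(action)  # prints both "drink" and "flee": no integrated choice

The point of the sketch is only that independent monitoring subsystems produce conflicting actions when their cues overlap; some integration across them is needed.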

To recapitulate:

  • Consciousness is the information space shared by our behavioral subsystems, or perhaps more accurately, the sharing of information among those subsystems.
  • Consciousness would be irrelevant to survival, and hence would not have evolved, if the organism could have relied solely on what its subsystems did naturally.
  • The threats and opportunities to which complex organisms respond require an ontology of objects at their level, i.e., we respond to the presence of berries and bears, not (directly) to bacteria.
  • No analysis of our behavior will be adequate that does not explain the subtle discriminations we make in the details of the content of our awareness.
On this account, consciousness did not emerge for its own sake; its origins depend essentially on its value in supporting choice. Like all evolved capabilities, once present it takes on its own raison d'être, allowing us, for example, to be conscious of a beautiful sunset without any direct link to survival-oriented action.
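
Here, again as my own hypothetical Python sketch (the weights and names are invented for illustration), is the alternative described above: a single shared description of the situation that every subsystem reads and contributes proposals to, with one choice made across all of them:

  # Hypothetical illustration of a shared "information space" among subsystems.
  SCENE = {"water": 0.9, "ripples": 0.7}  # one shared description of the situation

  def feeding_proposal(scene):
      # The chance to drink is discounted by any sign of danger.
      return ("drink", scene.get("water", 0.0) - scene.get("ripples", 0.0))

  def flight_proposal(scene):
      return ("flee", scene.get("ripples", 0.0))

  # All proposals are weighed in one place; a single choice results.
  proposals = [feeding_proposal(SCENE), flight_proposal(SCENE)]
  action, _ = max(proposals, key=lambda p: p[1])
  print(action)  # "flee": the crocodile's ripples outweigh the sip of water

Nothing here is meant as a model of real brains; it only illustrates the architectural difference between isolated subsystem rules and a shared space in which their information is pooled before a choice is made.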

Now nothing in this hypothesis says that silicon-based consciousness is impossible. On this planet, only carbon-based organisms evolved a conscious-choice strategy of behavior.

But it might be interesting to speculate. Suppose, like Lovelock and Margulis, we think of Earth (Gaia) as a living organism that has developed powerful homeostatic (i.e., self-regulating) capabilities. And suppose we also observe that the vast network of communications among people and computers is at least topologically and functionally similar to a brain. Is it so hard to imagine that Gaia's brain could become active in a way that confers sentience on Gaia herself?

It's conceivable, but given the evolutionary origins of conscious choice, it's hard to see how it could actually come about. Gaia's survival does not seem to depend in any way on things that Gaia could do. Gaia has no threats or opportunities at its own level on which it could take action in our time frame.
 