Scientific article

Non-reflective consciousness and our moral duties to non-reflective animals1
Consciência não-reflexiva e nossos deveres morais para com animais não-reflexivos
Conciencia no-reflexiva y nuestros deberes morales para con los animales no-reflexivos

Bernardo Aguilera
Universidad San Sebastián, Chile

Revista de Filosofía Aurora, vol. 36, e202430456, 2024
Pontifícia Universidade Católica do Paraná, Editora PUCPRESS - Programa de Pós-Graduação em Filosofia

Abstract: Many philosophers and scientists believe that animals can be conscious by virtue of possessing first-order perceptual representations, while having high-order representational capacities is not necessary for being conscious. In this paper I defend this view but argue that it reveals that there are two kinds of consciousness that depend on whether one is capable of high-order representations or not. I call these two kinds of consciousness reflective and non-reflective consciousness, respectively. Given that consciousness is crucial for the ascription of moral status to animals and for determining our moral duties to them, the distinction between these two kinds of consciousness turns out to have important normative implications. In the last part of this paper, I argue that our moral duties towards animals with moral status are generally stronger when they arise from reflective, rather than from non-reflective, animals.

Keywords: Animal ethics, Consciousness, Moral status, Pain.

Resumo: Muitos filósofos e cientistas sustentam que os animais podem ser conscientes em virtude de possuírem representações perceptivas de primeira ordem, enquanto sustentam que a posse de capacidades representacionais de ordem superior não é necessária para a consciência. Neste artigo defendo esta tese, mas defendo que ela revela que existem dois tipos de consciência que dependem se alguém é capaz de representações de ordem superior ou não. Chamo esses dois tipos de consciência de consciência reflexiva e consciência irrefletida, respectivamente. Sendo a consciência determinante para a atribuição de estatuto moral aos animais e para a determinação dos nossos deveres morais para com eles, a distinção entre estes dois tipos de consciência acaba por ter importantes implicações normativas. Na última parte deste artigo, argumento que nossos deveres morais para com os animais com status moral são geralmente mais fortes quando surgem de animais reflexivos do que quando surgem de animais não reflexivos.

Palavras-chave: Ética animal, Consciência, Estatuto moral, Dor.

Resumen: Muchos filósofos y científicos sostienen que los animales pueden ser conscientes en virtud de poseer representaciones perceptuales de primer orden, al tiempo que sostienen que la posesión de capacidades de representación de alto orden no es necesario para tener conciencia. En este artículo defiendo esta tesis, pero argumento que esta revela que hay dos tipos de conciencia que dependen de si uno es capaz de realizar representaciones de alto orden o no. A estos dos tipos de conciencia los llamo conciencia reflexiva y conciencia no reflexiva, respectivamente. Dado que la conciencia es crucial para la atribución de estatus moral a los animales y para determinar nuestros deberes morales hacia ellos, la distinción entre estos dos tipos de conciencia acaba teniendo importantes implicaciones normativas. En la última parte de este artículo, sostengo que nuestros deberes morales hacia animales con estatus moral son generalmente más fuertes cuando surgen de animales reflexivos, que cuando surgen de animales no reflexivos.

Palabras clave: Ética animal, Conciencia, Estatus moral, Dolor.


Received: 11 June 2023

Approved: 11 July 2024

Funding
Source: Chilean National Agency for Research and Development
Source: FONDECYT
Contract No.: 11200897
Funding statement: This work was supported by the Chilean National Agency for Research and Development (ANID) under Grant FONDECYT 11200897.
Introduction

Our moral duties to non-human animals depend on their moral status, which is often grounded on the capacity to have conscious experiences. While many believe that consciousness is a sufficient, if not a necessary, condition for having moral status, less attention has been paid to the existence of different ways of being conscious and their moral significance. Arguably, the kind of consciousness animals have will be relevant for determining the moral duties we owe to them.

In this paper I claim that we are justified in ascribing consciousness to many non-human animals based on so-called first-order theories of consciousness. That is, they are conscious thanks to their first-order perceptual representations of the environment (or of inner bodily states), such as the capacity to see objects or feel pain, a claim that will be unsurprising to most people. I then claim that humans2 (and perhaps a handful of non-human animal species) possess high-order representations, which allow them to reflect upon their first-order perceptual representations.

Drawing on this capacity of reflection, I advance a more contentious claim: that humans have reflective consciousness, while animals who lack high-order representational capacities can just have non-reflective consciousness, which is a different kind of consciousness. I conclude by exploring the implications that the distinction between these two kinds of consciousness has for moral status.

Theories of consciousness

The philosophical debate over the attribution of consciousness to non-human animals is often divided between first-order and high-order theorists of consciousness. The former group (Dretske, 1995; Tye, 1997) claims that many animals are conscious by virtue of having first-order perceptual representations, while high-order theorists (Lycan, 1987; Rosenthal, 2005; Carruthers, 2003) contend that high-order representations are a crucial enabling condition for consciousness. This latter view would imply that most animals, except for great apes, dolphins and perhaps other species endowed with meta-representational capacities, would not be conscious. The main argument advanced by high-order theorists is that there is compelling evidence of non-conscious perception in humans, and that the most plausible way of explaining why only some perceptual representations become conscious is by appeal to high-order processes acting over them. Let us look at some examples, which often come from the field of visual perception.

A classic case of nonconscious perception comes from blindsight (Weiskrantz, 1996), a pathology arising from damage to the primary visual cortex in which subjects report seeing nothing in certain areas of their visual field, yet can discriminate and respond to stimuli present in that area (not accurately, but above chance). Blindsight patients are described as having visual perception without conscious awareness; they can ‘see’ things but are not aware of what they are seeing. The explanation, according to high-order theorists, lies in a distinction in the way the brain processes first-order perceptual representations versus meta-representations. Blindsight patients ‘see’ objects in their visual field thanks to alternative retinal projections to the cortex, while these representations remain nonconscious because the cortical damage prevents them from being accessed by high-order, meta-representational systems.

Another example invoked by high-order theorists is the subliminal processing of sensory information when stimuli are properly masked or changed (Prinz, 2015). Consider experimental paradigms of backward masking (Enns & Di Lollo, 2000). Participants are presented with a very brief visual stimulus (the target), followed by a second stimulus (the mask), such as a pattern of black and white squares. People report not seeing the target stimulus, yet its features prime subsequent judgements in forced-choice tasks (i.e. they accurately ‘guess’ whether a target was shown or not). Again, high-order theorists interpret these results as evidence that first-order visual processing is not sufficient for conscious experience, and that conscious experience crucially depends on first-order information being accessed by high-order, meta-representational systems.

Some researchers have used fMRI to complement the masking experiments by visualizing the brain areas that are activated in different experimental conditions (Dehaene et al., 2001). They found that while both masked and unmasked stimuli are processed in early sensory regions, the unmasked (i.e. visible) stimulus also activates distant parietal, prefrontal, and cingulate areas. Since the prefrontal cortex is crucially involved in the meta-cognitive representation of information, these findings have been interpreted as supporting the claim that meta-representations are required for conscious awareness (Lau & Rosenthal, 2011; Fleming & Dolan, 2012). However, the fact that visible stimuli activate widespread regions in the brain also shows that things are more complex than high-order theories might suggest. These data show that consciousness depends on the dynamic interaction between multiple areas and has integrative functions (Dehaene; Naccache, 2001).

This is consistent with what Seth (2009) has called the ‘integration consensus’ among cognitive and neurophysiological theories of consciousness, according to which consciousness has the main function of integrating otherwise independent neural and cognitive processes. Among the most prominent theories in this respect are the Global Workspace Model and the Integrated Information Theory. The Global Workspace Model (GWM) claims that the contents of consciousness are contained in a central processor called the global workspace, whose function is to mediate communication between many nonconscious, specialized functional regions of the brain (Baars, 1997; 2005). More precisely, the global workspace consists in “a fleeting memory capacity that enables access between brain functions that are otherwise separate” (Baars, 2005, p. 46). This model has been further developed in neuroscientific terms by Dehaene and colleagues (Dehaene et al., 2001; Dehaene; Changeux, 2011), who claim that consciousness results from the sustained activity of a ‘neuronal workspace’ of cortical pyramidal cells (mainly prefrontal, cingulate, and parietal) with long-range connections to multiple specialized, nonconscious, automatic processors brain-wide.
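To make the functional picture behind the GWM more concrete, the following toy sketch in Python is my own illustration; the processor names, activation values, and winner-take-all selection rule are invented for the example and are not drawn from Baars's or Dehaene's models. It simulates specialized processors competing for a limited-capacity workspace whose winning content is then broadcast back to all of them. Notably, nothing in the sketch requires meta-representation: selection is driven by first-order activation alone, a point taken up below.

from dataclasses import dataclass, field

@dataclass
class Processor:
    """A specialized, nonconscious processor (e.g. vision, audition, pain)."""
    name: str
    activation: float                               # bottom-up salience of its current output
    received: list = field(default_factory=list)    # contents that have been broadcast to it

class GlobalWorkspace:
    """Limited-capacity stage: one content at a time wins and is broadcast brain-wide."""
    def __init__(self, processors):
        self.processors = processors

    def cycle(self):
        winner = max(self.processors, key=lambda p: p.activation)   # first-order competition
        content = f"{winner.name}-content"
        for p in self.processors:                                   # global broadcast
            p.received.append(content)
        return content

procs = [Processor("vision", 0.9), Processor("audition", 0.4), Processor("pain", 0.7)]
workspace = GlobalWorkspace(procs)
print("broadcast:", workspace.cycle())   # 'vision-content' wins this cycle
procs[2].activation = 1.2                # the pain signal becomes more salient
print("broadcast:", workspace.cycle())   # now 'pain-content' is globally broadcast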

The Integrated Information Theory (IIT), in turn, starts by identifying essential properties of experience (from a phenomenological perspective), and then postulates how a physical system (e.g., the brain) must be constituted to account for these essential properties (Tononi, 2008; Tononi et al., 2016). According to this view, systems with the sort of intrinsic, irreducible cause-effect power required for consciousness must be strongly connected and integrated. The extent to which the system is integrated, that is, the extent to which the whole is different from its parts, is supposed to be correlated with how conscious it is and with how many higher-order operations it can perform (Koch, 2019, p. 126).
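IIT's Φ is a technically demanding measure, and the sketch below does not compute it. As a rough intuition pump only (my own toy example; the probability tables are invented), it computes the mutual information between two binary units to illustrate the guiding idea that an integrated whole carries information over and above its parts taken separately: two coupled units share information that two independent units do not.

import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) between two units given their 2x2 joint distribution."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)    # marginal distribution of unit A
    py = joint.sum(axis=0, keepdims=True)    # marginal distribution of unit B
    product_of_marginals = px @ py
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / product_of_marginals[nz])))

# Two independent units (e.g. two disconnected photodiodes): no integration.
independent = [[0.25, 0.25],
               [0.25, 0.25]]

# Two strongly coupled units that almost always agree: the whole carries extra information.
coupled = [[0.45, 0.05],
           [0.05, 0.45]]

print("independent units:", round(mutual_information(independent), 3), "bits")   # 0.0
print("coupled units:    ", round(mutual_information(coupled), 3), "bits")       # ~0.53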

Animal (first-order) consciousness

Importantly, for present purposes, both the GWM and the IIT have been used to challenge high-order theories of consciousness. Let us start with the GWM. Even though this model has been interpreted by both first-order and high-order theorists of consciousness as consistent with their views, it can be formulated in terms of first-order theories. On this interpretation, the information broadcast in the global workspace can be made available to widespread cognitive processing without the mediation of meta-representational processes (see Prinz, 2012; Tye, 2016; and Carruthers, 2017, who has recently converted to a first-order view). Note that meta-cognitive functions such as inner monitoring and control may be performed at a first-order processing level, as for example when the workspace serves as a bottleneck that ‘selects’ which of a set of represented goals will be globally broadcast, under the influence of first-order factors.

Meta-representation, in contrast, constitutes a more sophisticated form of meta-cognition that endows the agent with additional abilities, such as reporting the contents of conscious experiences and exploiting the inferential and compositional structure of her thoughts (Bermúdez, 2017). High-order theorists, for their part, claim that the global workspace makes first-order perceptual representations available to meta-representational systems, which play an enabling causal role in the production of consciousness (Carruthers, 2005; Lau; Rosenthal, 2011). This interpretation of the GWM finds empirical support in the work of Dehaene and others, which suggests that the prefrontal cortex does play a crucial role in enabling conscious experience (Dehaene; Changeux, 2011).

There is an ongoing debate over whether the prefrontal cortex (and meta-representation, for that matter) is essential for consciousness (for a recent discussion, see Odegaard, Knight & Lau, 2017; Boly et al., 2017). My point here is that there are good reasons to believe that a theory of consciousness can be constructed at a first-order level, based on the GWM. Similarly, the IIT does not need to invoke high-order representations, nor the involvement of the prefrontal cortex, to account for the emergence of primary sensory-motor experiences. So-called ‘hot zones’ of highly organized neocortical tissue in temporo-parietal-occipital regions can meet the hallmarks of connectedness and integration required to be neural correlates of consciousness, according to the IIT. So even granting that prefrontal access to posterior neural regions enables the production of sophisticated forms of consciousness, such access may not be necessary to support the contents of sensory experiences directly (Baars, 2015; Koch, 2019).

I conclude at this point that it is plausible to say that many animals have cognitive architectures that support consciousness at a purely first-order level. From a neuroscientific perspective, such attribution of consciousness to animals can be justified by the instantiation of a global workspace, at least in animals with working memory capacities (e.g., mammals and birds, see Carruthers 2013), or by appeal to the circuit complexity and integration present in the neocortex of many animals (Koch 2019). In sum, there is sufficient theoretical and empirical evidence to assert that many animals can have conscious perceptual representations without being capable of reflecting over them. They can be non-reflectively conscious.

Two forms of consciousness: reflective and non-reflective

The upshot of the previous section is that there are at least two ways of having a conscious perceptual representation. One is to be non-reflectively conscious, by virtue of processing sensory information at a first-order level. The other is to be reflectively conscious, by means of meta-cognitive systems that represent first-order sensory information. There might be additional, simpler ways of being conscious: for example, in animals that lack the cognitive architecture of a global workspace but have evolved specialized neural systems for sensing, acting, and remembering (Godfrey-Smith, 2016), or even in invertebrates with sufficiently complex and integrated neural systems (Koch, 2019). For present purposes, however, I will just focus on the distinction between reflective and non-reflective consciousness, and remain neutral about the existence of other, different or simpler, ways of being conscious.

At this point, it is worth taking a brief historical detour through the philosophy of animal minds, in which the notion of a non-reflective, primary form of consciousness has deep roots. It can be traced back to Aristotle’s idea that animals have a ‘sensitive soul’ with the capacity to receive and react to sense impressions, but without the capacity for rational thought, which allows humans to exert control over their desires (among other intellectual functions; see Fiecconi, 2019). A similar view can be found in Leibniz, who, contra Descartes, argued in favor of attributing consciousness to animals. According to Leibniz, animals are conscious by virtue of the apperception of their representations of the external world (a sort of inner perception that gives rise to consciousness). But he also conceives of consciousness as coming in both a non-reflective and a reflective form (Gennaro, 1999). The former may be possessed by animals, whereas reflective consciousness would be exclusive to humans; according to Leibniz, it accounts for the characteristically human capacities of reflection, understanding and reasoning.

In the remainder of this section, I provide further arguments for the distinction between reflective and non-reflective consciousness and take some steps towards a characterization of the latter.

Phenomenological arguments

The distinction between different kinds or degrees of consciousness can also be approached by studying the way we experience things, which is known as a phenomenological or introspective approach to consciousness. A fairly simple account of this sort can be found in Searle (1992):

Consciousness can vary in degree even during our waking hours, as for example when we move from being wide awake and alert to sleepy or drowsy, or simply bored and inattentive. Some people introduce chemical substances into their brains for the purpose of producing altered states of consciousness, but even without chemical assistance, it is possible in ordinary life to distinguish different degrees and forms of consciousness. Consciousness is an on/off switch: a system is either conscious or not. But once conscious, the system is a rheostat: there are different degrees of consciousness (p. 83).

In this quote Searle distinguishes between the on/off sense of being conscious (as opposed to nonconscious) and the different degrees to which consciousness may be experienced. From the perspective of the GWM, the first sense of consciousness occurs when automatic attentional systems ‘switch on’ the global workspace, letting perceptual representations reach the ‘stage’ of consciousness (Baars, 2005). This happens automatically, as when one simply opens one’s eyes in front of a green leaf and sees it, but it can also be mediated by voluntary control, as when one turns one’s attention to some particular part of the leaf, or to something else, according to one’s interests.

Regarding the second sense mentioned in the previous paragraph, voluntary attention can also be used to explain degrees of consciousness, in the sense that selective attention can amplify the sensory attributes of what is being perceived (Dehaene; Changeux, 2011). In the case of vision, it is well known that attention is associated with higher spatial resolution and color sensitivity. When I focus my attention on the patterns of the veins on the surface of a leaf, I become aware of more features and details of the leaf. Since voluntary attention requires the frontal executive cortex (Baars, 2005), the increase in awareness is to some extent an exercise of reflective consciousness. This is not to say, though, that reflective consciousness comprises all ways of being conscious. Consider the case of the ‘absent driver’, which refers to the common situation in which, after driving long distances, one’s attention drifts from the road and starts to wander into thoughts about past experiences, plans, and so on. Again, Searle (1990) serves to illustrate the point:

I am not paying much attention to the details of the road and the traffic. But it is simply not true that I am totally unconscious of these phenomena. If I were, there would be a car crash. We need therefore to make a distinction between the center of my attention, the focus of my consciousness on the one hand, and the periphery on the other. […] There are lots of phenomena right now of which I am peripherally conscious, for example the feel of the shirt on my neck, the touch of the computer keys at my fingertips, and so on (p. 635).

What Searle calls “peripheral consciousness” corresponds to what phenomenological approaches to consciousness have described as an implicit, first-order form of awareness that permeates our conscious experiences without requiring introspection (Gallagher; Zahavi, 2016). Phenomenologists (e.g. Husserl, Sartre) argue that when one turns one’s conscious attention to, that is, ‘posits’ the existence of, objects in the environment or parts of one’s own body, one must presuppose an underlying ‘non-positing’ consciousness, an inarticulate awareness of what lies in the unattended surroundings (Siewert, 2009). Such an implicit form of consciousness has been used to argue, on phenomenological grounds, against high-order theories of consciousness and to claim that animals can possess non-reflective self-awareness (Gallagher; Zahavi, 2016).

Cognitive arguments

Conscious experience of unattended stimuli has also been studied in a famous experiment by Sperling (1960), who briefly presented an array of 12 letters to participants, asking them to focus on a particular row immediately after the letters disappeared. Participants were able to report about three items of the attended row, but it was quite clear to them that they had seen an array of letters. When cued after stimulus offset to help prompt memories of the letters, participants could recall most items from the cued row, which suggests that most of the array had been available to them. This suggests that while the attended letters were reflectively processed, and thus reportable, the unattended ones were still experienced consciously, though in a pre-reflective way. Unattended experiences, however, remain in sensory memory, available for reflective report, for just a few seconds or less.
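The logic of Sperling's result can be conveyed with a small toy model. This is my own sketch; the half-life of the iconic trace and the four-item report limit are invented parameters, not Sperling's estimates. The idea is that the whole array is briefly available in a fast-decaying pre-reflective store, while only a handful of items can be transferred to reportable memory, so an early cue rescues most of the cued row whereas a late cue finds the trace already gone.

import math, random

ARRAY = [list("ABCD"), list("EFGH"), list("JKLM")]   # 3 rows x 4 letters
ICONIC_HALF_LIFE = 0.15   # seconds; assumed decay rate of the pre-reflective trace
REPORT_LIMIT = 4          # assumed number of items that reach reportable memory

def trace_strength(delay_s):
    """Fraction of the iconic trace surviving after a given cue delay."""
    return math.exp(-delay_s * math.log(2) / ICONIC_HALF_LIFE)

def partial_report(cued_row, cue_delay_s):
    """Report the cued row from whatever still survives in the iconic store."""
    surviving = [letter for letter in ARRAY[cued_row]
                 if random.random() < trace_strength(cue_delay_s)]
    return surviving[:REPORT_LIMIT]

def whole_report():
    """Without a cue, only about four of the twelve letters can be reported."""
    all_letters = [letter for row in ARRAY for letter in row]
    return random.sample(all_letters, REPORT_LIMIT)

random.seed(1)
print("whole report:    ", whole_report())           # ~4 of 12 letters
print("cue after 0.05 s:", partial_report(1, 0.05))  # most of the cued row
print("cue after 1.0 s: ", partial_report(1, 1.0))   # the trace has faded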

This quick fading of unattended (pre-reflective) experiences might explain why such experiences are often not reported or remembered, and are thus believed to be nonconscious, as high-order theorists hold. In other words, the unreportability of past experiences does not demonstrate that they were never conscious, given the possibility of memory failure (Jamieson; Bekoff, 1992; Simons, 2000). Furthermore, one may argue that some implicit representations of unattended objects could make their way into long-term memory and be remembered. Returning to the absent driver, it is hard to rule out the possibility that he could recall some of his experiences of the road, especially after being cued about events that happened during the drive (e.g. do you recall seeing semi-trailer trucks? Of any color?).

Some experimental evidence appears to support this possibility. In one study (Simons et al., 2002), an experimenter holding a basketball approached pedestrians to ask for directions, while a group of students passed between them and surreptitiously took the basketball away from the experimenter. Afterwards, most pedestrians reported noticing nothing unusual, no change, and nothing different about the appearance of the experimenter. However, when asked specifically whether the experimenter had been holding a basketball, more than half recalled having seen the ball and were able to describe some of its atypical features (red and white stripes). It seems, then, that the subjects were pre-reflectively conscious of the change, but lacked reflective awareness of it until they were prompted to reflect on it.

Neuroscientific arguments

Neuroscientists use the term ‘primary consciousness’ to denote a kind of consciousness that is not reflective in the high-order sense explained above. Consider Feinberg and Mallatt’s definition of primary consciousness:

The basic ability to have subjective experiences including exteroceptive, interoceptive, or affective experiences. Primary consciousness includes the capacity to have any conscious mental images or affects and “something it is like to be” (Nagel, 1974). It is not reflective, nor is it higher consciousness or self-consciousness (2018, p. 131).

Thus, primary consciousness corresponds to basic sensory experiences, both of the external world and of the inner body, as well as affective experiences. The latter correspond to experiences attached to a positive or negative valence (e.g., pleasure or pain, respectively), which motivate action more directly than non-affective experiences. Primary consciousness is often said to depend on the cortico-thalamic system (the core of the mammalian brain), which is evolutionarily prior to the reentrant pathways connecting this core region with the cortex responsible for high-order awareness (Seth, Baars & Edelman, 2005; Panksepp, 2005). Some neuroscientists also consider it possible that non-mammals and even non-vertebrates are capable of primary consciousness by virtue of different neural structures (Boly et al., 2013; Feinberg; Mallatt, 2018). In any case, my point here is that neuroscientists generally agree with the idea that primary consciousness is present in many animals and that it is, by definition, non-reflective.

On what it is like to be non-reflectively conscious

Even if the reader now finds the notion of non-reflective consciousness plausible on phenomenological, cognitive, and neuroscientific grounds, she might remain uncertain about what non-reflective conscious experience is like. It is undoubtedly hard for us to conceive of a mind that never undergoes reflection, precisely because any attempt to explore the contents of our conscious experiences is likely to turn into a reflective account. Non-reflective experiences also appear to fade from memory very quickly, giving us little chance to grasp them.

Drawing on the neuroscientific distinction between primary and higher-order levels of consciousness mentioned in the previous section, Seth (2009) observes: “While in humans these two forms of consciousness almost always go together (with possible exceptions in certain dreamlike or meditative states), it is conceivable that primary consciousness could exist independently of HOC [high-order consciousness]. Indeed, this may be the case in many animals and perhaps as well in newborn infants” (p. 293). If Seth is on the right track, then being fully immersed in primary-non-reflective-consciousness would be quite different from our ordinary conscious mental lives.

What is it like to be non-reflectively conscious, then? To address this question, I will use the example of pain, which is a sensory and affective experience associated with a negative valence. Pain is often regarded as a basic, primary form of experience with obvious adaptive value. Pain is also considered crucial for the attribution of moral status (see next section). Below, I discuss three ways of conceiving non-reflective pain in animals, which I call the deflationary account, the anthropocentric account, and the moderate account. I argue that the moderate account is the most plausible.

Deflationary account of non-reflective pain

On this view, animal consciousness would be implicit and non-reflective, like what phenomenologists have described as the pre-reflective consciousness of our bodies that lies ‘behind’ our reflective awareness. Non-reflective pain would then feel somewhat ‘peripheral’ or ‘dreamlike’ to animals, to use two notions introduced earlier. For example, consider an ‘absent headache’: one has a persistent dull headache, but after getting distracted and ceasing to pay attention to the headache, it ceases to bother; some minutes later, one focuses again on the headache and on how to get rid of it. Retrospectively, it might appear as if during the ‘absent’ period the headache had disappeared from consciousness. But, the argument goes, it merely turned into a pre-reflective mode, somewhat present but ‘behind our backs’ and easily forgettable.

The problem with this account of non-reflective pain is that it is, indeed, too deflationary. If animal pain is analogous to the absent driver’s experience of the road or the absent headache, then experiencing pain would be almost trivial for the animal. But this is certainly not the case. Animal pain is typically explicit enough to get the animal very upset, which makes sense insofar as the function of pain is to warn the animal of danger and trigger a rapid response. The deflationary account of non-reflective pain can also lead to the view that animal pain is virtually nonconscious. But, as shown in section 2, this is ill-founded; there is theoretical and empirical room for conceiving animal consciousness without resorting to the reflective awareness that characterizes human subjectivity (Jamieson; Bekoff, 1992; DeGrazia, 1996; Saidel, 1999).

Anthropocentric account of non-reflective pain

The anthropocentric account of animal consciousness blurs the distinction between reflective and non-reflective experiences, so that the reflective component of human consciousness would not change first-order experiences substantively. Such a view is defended by some animal ethics theorists. Sapontzis, for example, claims that "there is no reason to believe that intellectually sophisticated beings have feelings to a quantitatively or qualitatively greater degree than do intellectually unsophisticated beings" (1993, p. 272). Therefore, according to this view, animal pain would be just like pain in humans, or as Peter Singer has famously put it, "pain is pain" (1975, p. 23).

The anthropocentric account has the problem of ignoring that there is conceptual space for the claim that pains and other experiences can come in different kinds. In the words of the cognitive ethologist Donald Griffin, a pioneer in the field of animal consciousness: “the difference between the human and other brains is the content of conscious experience. This content of consciousness, what one is aware of, surely differs both qualitatively and quantitatively in astronomical magnitudes.” (2001, p. 18). If I have been successful in my presentation so far, it should be both conceptually and empirically plausible to believe that reflective and non-reflective experiences can be rather different. The fact that human consciousness is widely reflective and so gives us little chance to grasp what a non-reflective experience is like, should not lead us to overlook that non-reflective animals may inhabit a different kind of consciousness.

Moderate account of non-reflective pain

The moderate account seeks a middle way between anthropocentric and deflationary accounts of animal consciousness. On this view, the basic neurophysiology that gives rise to primary consciousness, including pain experiences, is shared by humans and other animals. However, our reflective capacities would allow us to transform pain experiences into something more intense and significant. This turns reflective pain into a different kind of conscious experience when compared to non-reflective pain. Here I present some cognitive mechanisms that can help us flesh out this idea: attention and narrative identity.

Both high-order and first-order theories are consistent with the idea that consciousness always involves some form of attention, although attention and consciousness may be dissociated (Pitts, Lutsyshyna & Hillyard, 2018). What seems clear is that selective attention increases baseline neuronal activity, amplifies the neuronal response to the selected location, and suppresses the neuronal response to objects that are not selected (Tsuchiya; Koch, 2016). This results in a more vivid and salient phenomenal experience that is more accessible and more likely to pass from iconic to working memory (Lamme, 2003). Given the crucial involvement of selective attention and working memory in the high-order processing of representations (which in turn is largely based on the fronto-parietal cortex; Lamy et al., 2013), it seems plausible to say that when first-order pain experiences undergo reflection they become more vivid and more likely to be consolidated in memory. Indeed, research shows that directing one’s attention to painful stimuli correlates with a more intense painful experience (Arntz et al., 1991).
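The idea that reflection modulates pain intensity can be pictured with a deliberately simple gain model. This is my own illustration; the multiplicative form and all numbers are assumptions, not parameters from the studies cited. The same first-order nociceptive signal yields a stronger or weaker experienced intensity depending on a top-down attentional gain, which also anticipates the point made below that top-down modulation can cut both ways.

def experienced_intensity(stimulus, attention_gain, baseline=0.1):
    """Toy gain model: experienced pain = baseline + top-down gain * first-order input."""
    return baseline + attention_gain * stimulus

nociceptive_input = 5.0   # arbitrary units of first-order pain signal

for condition, gain in [("unattended", 0.6), ("attended", 1.4), ("distracted", 0.3)]:
    print(f"{condition:>10}: {experienced_intensity(nociceptive_input, gain):.1f}")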

Reflective animals may also amplify the cognitive significance of their experiences thanks to their narrative capacities. Recall that one of the main functions of consciousness is to integrate information from different brain regions, including giving meta-representational systems, which function as contextual and executive interpreters, access to perceptual information. Thanks to these interpreters, reflective animals can generate a narrative that meta-represents the pain experience (in the case of humans, through linguistic representational vehicles). As has been suggested by Gazzaniga (1995), these interpreter mechanisms weave together autobiographical facts to produce a self-narrative that maintains a sense of a continuous self. Along with sensory experiences, the satisfaction of narrative-relevant desires “reflect the lived, self-caring perspective of a conscious subject” (DeGrazia; Millum, 2021, p. 226). Arguably, as the satisfaction of these narrative-relevant desires is prudentially good for the animal, their frustration is prudentially bad. Therefore, the capacity for self-narrative creates more potential for worsening the overall state of negative affective valence in which the animal is immersed.

Non-reflective pain and moral status

Affective experiences, that is, conscious experiences attached to a positive or negative emotional valence, are relevant for moral status, since those experiences appear to entail a welfare interest in having or not having them. Negative affective experiences are generally avoided, whereas positive affective experiences are generally desirable. Violating a creature’s welfare interests by imposing negative affective experiences on it is presumed to be morally problematic.

Non-human animals have a welfare of their own, in the sense that when something bad happens to them, it is bad for them (Feinberg, 1980). This contrasts with an instrumental interpretation of welfare, according to which the welfare of non-human animals is only valuable as a means to some other end. Consequently, animals that have welfare for their own sake can have legitimate moral claims on others. Most theories that take welfare as a fundamental ground of moral status confer similar consideration on the interests of all entities that have comparable welfare interests (DeGrazia, 1996; Garner, 2005; Regan, 2003; Sapontzis, 1987). So one would expect that, irrespective of whether we are dealing with reflective or non-reflective animals, insofar as their welfare interests are comparable, they should be on a par with respect to the moral obligations owed to them, at least with respect to obligations arising from moral status.

It should be noted that even under the rule of equal consideration of comparable interests, considerations of magnitude matter. Different animals (including humans) can have different interests and quantities of interests. These interests can also have different moral weight (DeGrazia; Millum, 2021). For example, consider two different animal species, A and B, who are exposed to a certain stimulus X. Let us suppose that A’s interest in not experiencing X is greater than B’s, or that B simply lacks interests relative to X. All else being equal, the moral obligations owed to A would be stronger3. This view is generally accepted by animal ethicists. Quoting Peter Singer (1993): “we must take care when we compare the interests of different species. In some circumstances a member of one species will suffer more than a member of another species. In this case we should still apply the principle of equal consideration of interests but the result of so doing is, of course, to give priority to relieving the greater suffering.” (p. 58).

I now turn to discuss whether the fact that animals have reflective or non-reflective consciousness makes a difference to our moral obligations not to inflict pain on them. Consider the deflationary account of non-reflective pain discussed above. On this account, non-reflective pain would be such an implicit and faint experience that it would hardly be comparable with the pain of a reflective creature. This could lead to justifying extremely unequal consideration of the pain-interests of reflective and non-reflective animals, which may indeed be another reason for being suspicious of the deflationary account. On the other hand, the anthropocentric account of non-reflective pain also leads to controversial conclusions. Take Regan’s argument that "animals probably enjoy sexual congress as much as we do, and it is for this reason that I support vasectomies for male pet animals, rather than castration, and the development of effective contraceptives" (2003, p. 227). I am inclined to think that Regan is underestimating the depth, quality, richness and significance that reflective capacities add to human sexual experiences. This is not to say that non-reflective animals cannot enjoy sexual congress, but that the range of positive affective states available to them is in many ways more modest than what is available to reflective animals.

To assess the welfare interests associated with non-reflective pain, let us now focus on the moderate account of non-reflective pain, which I have been defending as the most plausible. As argued previously, the capacity for conscious reflection can make reflective pain more intense and significant than it would have been for a non-reflective animal. Both reflective and non-reflective pain would still be comparable pain experiences, that is, sensory and affective experiences associated with a negative valence, but reflective pain would have a greater potential to compromise welfare than non-reflective pain. If this is so, the welfare interests associated with reflective pain would generally entail stronger moral obligations than those associated with non-reflective pain.

With respect to the intensity of the stimulus, attentional enhancement of the pain stimulus can explain why reflective animals are susceptible to a more intense pain experience. It is well known that top-down mechanisms can modulate pain intensity, which in turn is highly influenced by emotional and contextual factors (Price, 2017). We may compare the top-down mechanisms that modulate pain with the top-down mechanisms that, in visual perception, ‘fill in’ unattended regions of our visual field. These mechanisms allow us to see more than the information that is actually being processed by first-order processes (Panksepp et al., 2017). Similarly, reflective processing of a pain stimulus can make us experience more pain, as suggested earlier. It should be noted, however, that top-down modulation can cut both ways. For example, viewing either fear or disgust photographic slides prior to pain stimulation reduces pain intensity and unpleasantness (Meagher et al., 2001).

So reflective animals would be capable of both amplifying and decreasing the intensity of their pain experiences (though not always voluntarily). This may confer on reflective animals (like us) an advantage over non-reflective animals. For example, knowing that the vaccine we are about to get will involve just a brief shot and that it will prevent us from getting a harmful disease might allow us to tolerate the injection better. But the point remains that, all else being equal, a reflective creature is susceptible to more intense pain experiences than a non-reflective counterpart.

A similar point can be made with respect to how significant pain is, or how much suffering it involves. As mentioned in the previous section, reflective animals have the capacity to generate a self-narrative that engenders narrative-related desires relevant to one’s life. This allows such animals to generate more complex appraisals of their pain experiences, with the potential to amplify their overall negative experience. As DeGrazia notes, experiences with negative valence involve an evaluative appraisal of the subject’s overall situation that results in representations like “this is terrible” or “this is terrible for me”, which affect the subject’s welfare (DeGrazia, 2014). Arguably, evaluative appraisals in reflective animals can be more complex, for example by including how pain impacts one’s plans or personal relationships in a meaningful manner, and can in this way amplify the suffering caused by pain.

In sum, reflective and non-reflective consciousness can be considered two different kinds of consciousness because there are substantive phenomenological, cognitive, and neuroscientific differences between them. Furthermore, this distinction in kinds of consciousness has important implications for moral status. Granting that conscious animals in general can undergo intense pain and suffering, animals that possess reflective consciousness can experience pain and suffering to a greater extent, at least under certain circumstances. They have more at stake than animals with only primary, non-reflective forms of consciousness. Therefore, there should be a prima facie presumption that harmful actions towards reflective animals are, ceteris paribus, more detrimental to their welfare than comparable actions towards non-reflective animals.

Finally, it is worth noting that this claim is consistent with the principle of equal consideration of interests, insofar as reflective and non-reflective animals have comparable interests in not experiencing pain and suffering. The point is that we are justified in presuming that reflective animals’ interests in this respect are generally stronger and thus carry stronger moral weight when compared with the interests of non-reflective animals. This should not translate into a disregard of animal welfare, but should serve as a justification for providing more stringent protections to reflective animals. Some may see this as grounding a difference in degrees of moral status between reflective and non-reflective animals, an issue concerning the concept of moral status that goes beyond the purposes of this paper. But the present proposal is also consistent with the claim that reflective and non-reflective animals have the same moral status, while the moral duties entailed by moral status are generally stronger when they arise from reflective animals.

Acknowledgments

Earlier versions of this paper benefited from comments by David DeGrazia, Eric Saidel and Andrew Peterson. I also thank the valuable comments provided by referees of this journal. This work was supported by the Chilean National Agency for Research and Development (ANID) under Grant FONDECYT 11200897.

References
ARNTZ, A.; DREESSEN, L.; MERCKELBACH, H. Attention, not anxiety, influences pain. Behaviour research and therapy, v. 29, n. 1, p. 41-50, 1991.
BAARS, B. J. In the theater of consciousness: The workspace of the mind. New York: Oxford University Press, 1997.
BAARS, B. J. Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Progress in brain research, v. 150, p. 45-53, 2005.
BERMÚDEZ, J. L. Can nonlinguistic animals think about thinking? In: ANDREWS, K.; BECK, J. The Routledge handbook of philosophy of animal minds. London, New York: Routledge, 2017. p. 119-130.
BOLY, M. et al. Consciousness in humans and non-human animals: recent advances and future directions. Frontiers in psychology, v. 4, p. 625, 2013.
BOLY, M. et al. Are the Neural Correlates of Consciousness in the Front or in the Back of the Cerebral Cortex? Clinical and Neuroimaging Evidence. Journal of Neuroscience, v. 37, n. 40, p. 9603-9613, 2017.
CARRUTHERS, P. Phenomenal consciousness: A naturalistic theory. Cambridge: Cambridge University Press, 2003.
CARRUTHERS, P. Consciousness: essays from a higher-order perspective. Oxford: Clarendon Press, 2005.
CARRUTHERS, P. Evolution of working memory. Proceedings of the National Academy of Sciences, v. 110 (Supplement 2), p. 10371-10378, 2013.
CARRUTHERS, P. In Defence of First-Order Representationalism. Journal of Consciousness Studies, v. 24, n. 5-6, p. 74-87, 2017.
DEGRAZIA, D. Taking animals seriously: mental life and moral status. Cambridge: Cambridge University Press, 1996.
DEGRAZIA, D. What is suffering and what sorts of beings can suffer. In: GREEN, R.; PALPANT, N. Suffering and Bioethics. Oxford: Oxford University Press, p. 134-154, 2014.
DEGRAZIA, D.; MILLUM, J. A theory of bioethics. Cambridge: Cambridge University Press, 2021.
DEHAENE, S. et al. Cerebral mechanisms of word masking and unconscious repetition priming. Nature neuroscience, v. 4, n. 7, p. 752, 2001.
DEHAENE, S.; NACCACHE, L. Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework. Cognition, v. 79, n. 1-2, p. 1-37, 2001.
DEHAENE, S.; CHANGEUX, J. P. Experimental and theoretical approaches to conscious processing. Neuron, v. 70, n. 2, p. 200-227, 2011.
DRETSKE, F. I. Naturalizing the mind. Massachusetts: MIT Press, 1995.
ENNS, J. T.; DI LOLLO, V. What’s new in visual masking?. Trends in cognitive sciences, v. 4, n. 9, p. 345-352, 2000.
FEINBERG, J. Rights, justice, and the bounds of liberty: Essays in social philosophy (Vol. 148). New Jersey: Princeton University Press, 2014.
FEINBERG, T. E.; MALLATT, J. M. Consciousness Demystified. Massachusetts: MIT Press, 2018.
FIECCONI, E. C. Aristotle's Peculiarly Human Psychology. In: KEIL, G., KREFT, N. Aristotle’s anthropology. Cambridge: Cambridge University Press, 2019.
FLEMING, S. M.; DOLAN, R. J. The neural basis of metacognitive ability. Philosophical Transactions of the Royal Society B: Biological Sciences, v. 367, n. 1594, p. 1338-1349, 2012.
GALLAGHER, S.; ZAHAVI, D. Phenomenological Approaches to Self-Consciousness. In: ZALTA, E. N. The Stanford Encyclopedia of Philosophy. Winter 2016 Edition.
GARNER, R. Animal Ethics. Cambridge: Polity Press, 2005.
GAZZANIGA, M. S. Consciousness and the cerebral hemispheres. In: GAZZANIGA, M. The Cognitive Neurosciences. Massachusetts: MIT Press, 1995. p. 1391-1400.
GENNARO, R. J. Leibniz on Consciousness and Self-consciousness. In: GENNARO, R. J.; HUENEMANN, C. New essays on the rationalists, 1999. p. 353-371.
GODFREY-SMITH, P. Other minds: The Octopus and the evolution of intelligent life. London: William Collins, 2016.
GRIFFIN, D. R. Animal minds: Beyond cognition to consciousness. Chicago: University of Chicago Press, 2001.
JAMIESON, D.; BEKOFF, M. Carruthers on nonconscious experience. Analysis, v. 52, n. 1, p. 23-28, 1992.
KOCH, C. The feeling of life itself: why consciousness is widespread but can't be computed. Massachusetts: MIT Press, 2019.
LAMME, V. A. Why visual attention and awareness are different. Trends in cognitive sciences, v. 7, n. 1, p. 12-18, 2003.
LAMY, D.; LEBER, A. B.; EGETH, H. E. Selective attention. In: HEALY, A. F.; PROCTOR, R. W.; WEINER, I. B. Handbook of psychology: Experimental psychology. New Jersey: John Wiley & Sons, 2013.
LAU, H.; ROSENTHAL, D. Empirical support for higher-order theories of conscious awareness. Trends in cognitive sciences, v. 15, n. 8, p. 365-373, 2011.
LYCAN, W. G. Consciousness. Cambridge, MA: Bradford Books, 1987.
MACPHAIL, E. M. The evolution of consciousness. Oxford: Oxford University Press, 1998.
MEAGHER, M. W.; ARNAU, R. C.; RHUDY, J. L. Pain and emotion: effects of affective picture modulation. Psychosomatic medicine, v. 63, n. 1, p. 79-90, 2001.
ODEGAARD, B.; KNIGHT, R. T.; LAU, H. Should a few null findings falsify prefrontal theories of conscious perception?. Journal of Neuroscience, v. 37, n. 40, p. 9593-9602, 2017.
PANKSEPP, J. Affective consciousness: Core emotional feelings in animals and humans. Consciousness and cognition, v. 14, n. 1, p. 30-80, 2005.
PANKSEPP, J. et al. Reconciling cognitive and affective neuroscience perspectives on the brain basis of emotional experience. Neuroscience & Biobehavioral Reviews, v. 76, p. 187-215, 2017.
PITTS, M. A.; LUTSYSHYNA, L. A.; HILLYARD, S. A. The relationship between attention and consciousness: an expanded taxonomy and implications for ‘no-report’ paradigms. Philosophical Transactions of the Royal Society B, v. 373, n. 1755, 20170348, 2018.
PRICE, D. A view of pain based on sensations, meanings and emotions. In: CORNS, J. The Routledge Handbook of Philosophy of Pain. New York: Routledge, p. 113-123, 2017.
PRINZ, J. The conscious brain. Oxford: Oxford University Press, 2012.
PRINZ, J. Unconscious perception. In: MATTHEN, M. The Oxford Handbook of Philosophy of Perception. Oxford: Oxford University Press, p. 371-389, 2015.
REGAN, T. The case for animal rights. Berkeley: University of California Press, 2003.
ROSENTHAL, D. M. Consciousness and mind. Oxford: Oxford University Press, 2005.
SAIDEL, E. Consciousness without awareness. Psyche, v. 5, n. 16, 1999.
SAPONTZIS, S. Morals, Reason, and Animals. Philadelphia: Temple University Press, 1987.
SAPONTZIS, S. Aping Persons - Pro and Con. In: CAVALIERI, P.; SINGER, P. The Great Ape Project. New York: St. Martin's Griffin, p. 269-277, 1993.
SEARLE, J. R. Who is computing with the brain? Behavioral and Brain Sciences, v. 13, n. 4, p. 632-642, 1990.
SEARLE, J. R. The rediscovery of the mind. Massachusetts: MIT press, 1992.
SETH, A. Functions of consciousness. In: BANKS, W. P. Elsevier Encyclopedia of Consciousness, Elsevier Press, p. 279-293, 2009.
SETH, A. K.; BAARS, B. J.; EDELMAN, D. B. Criteria for consciousness in humans and other mammals. Consciousness and cognition, v. 14, n. 1, p. 119-139, 2005.
SIEWERT, C. Consciousness. In: DREYFUS, H. L.; WRATHALL, M. A. A companion to phenomenology and existentialism. New Jersey: John Wiley & Sons, 2009. p. 78-90.
SIMONS, D. J. Attentional capture and inattentional blindness. Trends in cognitive sciences, v. 4, n. 4, p. 147-155, 2000.
SIMONS, D. J. et al. Evidence for preserved representations in change blindness. Consciousness and Cognition: An International Journal, v. 11, p. 78-97, 2002.
SINGER, P. Animal Liberation. New York: Avon Books, 1975.
SINGER, P. Practical ethics (2nd ed.). New York: Cambridge University Press, 1993.
SPERLING G. The information available in brief visual presentations. Psychol. Monogr. Gen. Appl. v. 74, p. 1-29, 1960.
TONONI, G. Consciousness as integrated information: a provisional manifesto. Biol Bull, v. 215, p. 216-242, 2008.
TONONI, G. et al. Integrated information theory: from consciousness to its physical substrate. Nat Rev Neurosci, v. 17, p. 450-461, 2016.
TSUCHIYA, N.; KOCH, C. The relationship between consciousness and top-down attention. In: LAUREYS, S.; GOSSERIES, O.; TONONI, G. The neurology of consciousness, Oxford: Academic Press, p. 71-91, 2016.
TYE, M. Ten problems of consciousness: A representational theory of the phenomenal mind. Massachusetts: MIT Press, 1997.
TYE, M. Tense bees and shell-shocked crabs: are animals conscious? Oxford: Oxford University Press, 2016.
VILLEMURE, C.; BUSHNELL, C. M. Cognitive modulation of pain: how do attention and emotion influence pain processing? Pain, v. 95, n. 3, p. 195-199, 2002.
WEISKRANTZ, L. Blindsight revisited. Current opinion in neurobiology, v. 6, n. 2, p. 215-220, 1996.
How to cite:
AGUILERA, Bernardo. Non-reflective consciousness and our moral duties to non-reflective animals. Revista de Filosofia Aurora, Curitiba: Editora PUCPRESS, v. 36, e202430456, 2024. DOI: https://doi.org/10.1590/2965-1557.036.e202430456.
Notes
1 This work was supported by the Chilean National Agency for Research and Development (ANID) under Grant FONDECYT 11200897.
2 Of course, some humans lack the capacity to entertain high-order representations due to immaturity or severe cognitive impairment. But to simplify the exposition, and unless otherwise stated, I say ‘humans’ to mean adult members of this species with their normal cognitive capacities. The same applies to other animal species.
3 Note that the ‘all else being equal’ clause is important here since the presence of other factors may alter the strength of our moral obligations. If, for example, B (but not A) happens to be under intense pain, or A (but not B) has received an analgesic, our obligations towards B may become stronger. My point here is simply that depending on their capacities, animals can have different interests, which can entail moral obligations of different strength.
Author note
Ph.D. in Philosophy from the University of Sheffield