Hippocampus III, Greg Dunn
Earlier this month I submitted a term paper for a philosophy course that was broadly focused on philosophy of mind, with a more specific spotlight cast upon theories of cognitive architecture, the bridge between cognition and perception, and epistemology. I’d like to share this paper with you here, as I believe it represents an interesting, and ultimately formidable, task that scientists are now facing: namely, probing phenomenology in an era of fast-paced, and often spectacular, technological innovation. It should be said that the following paper may incite a tl;dr response, as the final page count totaled about 30; however, I think that this line of inquiry is compelling and drives home the point that we have a lot of work in front of us, making this a fitting treatise as we approach the new year.
If one were so inclined, one could find questions and theories regarding the nature of being dating back to the origins of philosophy. Amazingly enough, what it means to exist has by no means been resolved. To complicate the issue further, modern science employs systematized parameters to define biology, for purposes that include (but are not limited to): organismal/structural classification, functionality, abnormal functioning, and even differentiating something as living or non-living. This complicates our considerations of nature considerably by raising the question of how biology and the nature of experience can be reconciled into a unified and cohesive theory of existence. Is this even possible?
The purpose of this investigation is oriented at human nature, and specifically, the nature of human mentation. The processes, capacities, and functions of mentation are studied in the various branches of cognitive science; however, the nature of cognition that bears upon this essay refers to the individuated and endogenous experience of having a mental activity. Thus, emphasis can be placed on intentionality, the experience of intelligent agents, and the essence of thinking.
One could envisage other meaningful considerations in the broad topic of nature, such as: the nature of being a non-human species, the nature of emotion, the nature of interpersonal dynamics, changes to one’s nature throughout development, and the nature of accessibility of different cognitive elements. This essay will not explicitly address these considerations, but regarding them as ultimately relevant to a unified theory is encouraged. The reason for this exclusion is pragmatic: to reconcile the science of cognition with the nature of cognition we must first critically review the analytical tools at our disposal. Such tools consist of research technologies and methodologies that can be broadly categorized as functional or process-level analysis. To invoke the aforementioned ‘other considerations’ requires review of particular experimental manipulations, such as reviewing a longitudinal study design for developmental considerations. That is to say, an analysis of study tools is distinct from an analysis of study design; however, both factors should ultimately be considered and synthesized in a robust investigation of nature.
Origins and Etymology
The contemplation of the nature of being has roots in metaphysics under the study of ontology (Petrov, 2011). The implications of the term ontology have transformed over the years and throughout the literature; however, the investigations proposed herein are ontological. This is not to say our discourse assumes a particular ontological perspective over another, but to say that ontological queries are apropos.
Another term that may have emerged for readers already is phenomenology, or the study of the objects of awareness and of awareness itself. Phenomenology is another term with a storied etymological history, particularly if it is compared to ontology. Historically, Johann Heinrich Lambert first used the term in his treatise called Neues Organon (Lambert, 1764), or New Organon. Phenomenology was later revisited by illustrious philosophers, such as Immanuel Kant (1781) and G.W.F. Hegel (1807). In Hegel’s Phenomenology of Spirit (1807), we are presented with the term as a study of appearances in consciousness, with a hierarchical progression from sensory consciousness up to self-consciousness. Furthermore, Hegel proposed philosophical consciousness, which is akin to a notion of reason, to be the preeminent ascension in the evolution of consciousness (Hegel, 1807). In later years, Edmund Husserl mobilized phenomenology as a recognizable school of thought, with pioneering usage in Logical Investigations (Husserl, 1901). Husserl’s early literature on phenomenology was influenced by discourses on intentionality by his predecessor, Franz Brentano (1838-1917), as well as Carl Stumpf (1848-1936). This early formulation focused on the intentionality of consciousness in the sense that it can be considered directed and willful. Husserl’s later work developed phenomenology to mean the study of the essences, or the elements, of phenomena (Husserl, 1913); many refer to this paradigm as eidetic reduction. Some time later in the 20th century, Martin Heidegger (1927) challenged Husserl’s conception of phenomenology with existential charges, asserting that phenomenology cannot be captured as disentangled from the experiencing person (von Herrmann, 2013). Heidegger’s theory of Dasein, or existence, is non-dualistic and requires any phenomenology to incorporate engagement with everyday life; i.e., ontology is irreducible (Reiners, 2012).
Given the complex etymology of phenomenology and ontology, it isn’t surprising that more recent thinkers have adopted different terminology to represent studies of this kind (however, many authors recall those terms when appropriate). This brings us to our preferred term, qualia, which will be appropriated for the remainder herein to mean the phenomenal characters of experience (Keeley, 2009). The etymology of qualia (or quale in its singular form) is nebulous, but its popularization can be traced distinctly back to C.I. Lewis’ Mind and the World-Order (1929), wherein he focused specifically on qualia of sense-data. Many thinkers have since debated the nuanced implications of qualia (Nagel, 1995; Chalmers, 1995; Searle, 1998); however, we will take quale to represent the phenomenal character of any mentation of interest, as opposed to modalities that are exclusively sensory.
The Hard Problem
Modern day cognitive neuroscience allows us to develop deductions regarding functional localization, or the systematic appraisal of brain regions based on apparent functionality (Darby & Walsh, 2005). Furthermore, we can chronicle the properties of these functional regions on a variety of levels: electrical, chemical, molecular, and network. A germane example of a neural region with well-documented functional specialization is the hippocampus, a deep brain structure located in the medial temporal lobe. The properties of the hippocampus have been studied on all of the aforementioned levels and there is compelling evidence that it is specialized for memory functions, specifically, long-term memory processes (Turner, 1969; Mueller et al., 2011; Carlson, 2012). Evidence of this kind supports the Fodorian notion of mental modularity (Fodor, 1983) as well as the tenets of faculty psychology (Aquinas, 1947, Translated Ed.), insofar as distinct cognitive faculties can be correlated with distinct neural regions (or modules in Fodor’s phrasing). In parallel with functional specialization, cognitive science provides us with adequate language and evidence for: the integration of information in the central nervous system (Baars & Gage, 2010); the presence of endogenous and/or voluntary mental capacities, such as attentional deployment (Baars & Gage, 2010); the correlation between behavioral cycles and brain-state cycles (Budzynski et al., 2008); malleable and adaptive neural mechanisms, such as neuroplasticity or learning mechanisms (Baars & Gage, 2010); and the processes that connect sensory processing with reactive behavior, such as reflex circuits and reflexology (Carlson, 2012). Given the availability of high-tech research tools and innovative translational paradigms, these problems can be considered “easy”.
The research itself may be time-consuming and require a preponderance of confirmatory papers, but the problems themselves have palpable solutions in the sense that these phenomena exist and scientific observations can be replicated (Chalmers, 1995).
In contrast to these easy problems of cognition and behavior stands the so-termed “hard” problem (Chalmers, 1995). David Chalmers notably called this hard problem one of consciousness itself, of the subjective experience of the aforementioned phenomena (Chalmers, 1995). Revisiting our example of memory, we can correlate hippocampal activity with memory processes and say that this functional specialization is valid and reliable. However, it is substantially harder to scientifically address what it is like to have a memory. Theories and papers surrounding the hard problem tend to utilize the term consciousness when referring to the object of the problem. This may be due to the connotations of the term itself, in that consciousness implies a principality or some other sovereign and diffuse property of mind. However, it is contended here that the hard problem exists on every level of mind science, including (but not limited to): cognitions, sensory processes, emotions, behavior, and disease/disorder states. That is to say, what it is like to have a mental activity, to sense, to feel, to act, and to be abnormal, are all hard problems in the same way as ‘what it is like to experience consciousness’ is a hard problem. En masse, we may find that these differentiated hard problems synthesize to the hard problem of consciousness, i.e., a Gestalt effect, or alternatively, we may find that there is a fundamental property common to each mental attribute, i.e., an Occam’s razor effect. The latter result would implicate consciousness as merely another mental capacity, in equal standing with cognition and perception. The former result would implicate consciousness as an emergent property of all other mental attributes, a conceivably troubling suggestion regarding the potential of artificial intelligence. In any case, the hard problem persists.
Over a decade before Chalmers, Joseph Levine designated the source of such difficulties as an explanatory gap between the physiological bases of mind and the experiential phenomena associated with such physiology (Levine, 1983). An example offered by Levine asks us to consider nociception, or the physiological processes underlying pain perception. We can explicate nociception in terms of bodily systems, but we cannot scientifically account for what it is like to experience pain. Re-orienting the explanatory gap back to our preferred terminology: the gap lies in questions regarding the qualia of pain, or, the inability to scientifically describe the phenomenal character/s of pain.
Related to the hard problem is the binding problem, which, as put forth by John Smythies, refers either to a computational issue of segregation (i.e., binding problem one) or a combinatorial issue of emergence (i.e., binding problem two) (Smythies, 1994). Arguably, binding problem two is more pertinent to our current exploration in that it is broader in scope and can describe the problem for cognition just as it does for perception. The binding problem confronts the explanatory gap existing specifically between the cognitive, neural, and philosophical disciplines.
Recalling the dispute between Husserl and Heidegger, are these problems due to our modus operandi for what is knowable? Or is the problem due to inadequate tools for studying ontology? Knowability issues are likely related to the requirements of the scientific method; insofar as reproducibility is a dogmatic standard all scientists must meet (Popper, 1935). The very essence of quale as a phenomenal character implies temporal singularity, or one-of-a-kindness, so how could we fit quale into our standard of reproducibility?
The hard problem is particularly formidable for those who subscribe to a physicalist paradigm (Stoljar, 2009). This includes any scientist or scholar who submits an ontological monism, whether it be: bottom-up physicalism, i.e., there is a physical basis for all things; top-down physicalism, i.e., physicality supervenes over all things; or an intermediate framework, i.e., there is dynamic interaction between all things and their physicality. The physicalist paradigm concerning the mind is sometimes called the embodied mind thesis, which can be associated with the work of Maurice Merleau-Ponty (1908-1961). The review that follows will specifically interrogate formulations from the wide purview of neuroscience, including cognitive and behavioral neuroscience (sometimes called biopsychology), neuropsychology, translational neuroscience, and interdisciplinary fields, such as behavioral genetics.
Noteworthy contributions to this discourse come from self-proclaimed neurophenomenologists. Such scholars may be philosophically, scientifically, or clinically oriented, but find common ground in explicit appreciation of the hard problem in their work. In the 1960s, neurologist Erwin Straus urged his colleagues to consider phenomenology in their research and personal practice, pioneering the neurophenomenology movement (Straus, 1964). Noteworthy neurophenomenologists who adhere to Straus’ invitation are Alexander Luria, Walter Freeman, Francisco Varela, and Antonio Damasio. Relatedly, embodied cognitive scientists who wish to computationally model intelligent behavior may concern themselves with phenomenology and qualia. In his collective works on artificial intelligence, Hubert Dreyfus challenges us to analyze modeling of this kind in a dynamic, integrative, and pragmatic way (Crossman, 1985).
Review of Techniques
In an editorial reporting on neurotechniques in Nature Neuroscience published in 2013, it is asserted that within the past five years alone, “…the number of abstracts describing new methods or technology development that were presented at the annual Society for Neuroscience meeting increased by nearly 50 percent” (Nature Neuroscience, 2013). Concurrently, the Obama Administration has announced a research venture called the BRAIN initiative, or the Brain Research through Advancing Innovative Neurotechnologies initiative, which is proposed to cost billions of dollars in federal funding over the next decade (The White House, 2013). In the spirit of the Human Genome Project, the BRAIN initiative aims to map the neuronal activity of the entire brain.
In the sections that follow, prominent technologies and methods from the domain of neuroscience will be critiqued within the scope of the hard problem and their potential for gauging qualia. This type of review cannot be quantitatively guided because we do not currently have a litmus test for a research tool’s sensitivity to qualia. Alternatively, the intention is to align with the neurophenomenological tradition of encouraging critical thinking regarding the hard problem in cognitive science. Judicious efforts will be made to consider each category of neurotechnology in an individuated way for its informative value in the hard problem discussion. However, there are crucial elements that may guide our considerations as “soft” criteria, which are delineated by the following questions:
- Has the tool ever been analyzed for phenomenological competency?
- Is the tool or its object of measurement dynamic?
- Is the object of measurement intermodal or otherwise connected with distant bodily systems?
- Can it be (or has it already been) combined with other tools?
- Does it utilize any subjective or idiosyncratic measures?
Imaging and Mapping
Neuroimaging technologies implement an assortment of processing techniques to either directly or indirectly provide an image of the nervous system (Carter & Sheigh, 2010). Specifically regarding whole brain imaging, the image may be of a structural or functional type. A structural image depicts anatomical and architectural information about the brain, whereas a functional image illustrates any physiological process or pathway associated with neural activity (Carter & Sheigh, 2010). The sensitivity of the different imaging methods varies, where some can capture structure/function on a molecular level, and others depict the structure/function of global neural networks (Ashbury, 2011). Thus, neuroimaging is centrally concerned with visualizing the structure and/or function of neural circuits and regions. Although there are certain experimental paradigms employing neuroimaging for diverse purposes, it tends to align with faculty psychology in that specific brain regions are evidenced to give rise to specific cognitive, sensory, affective, or behavioral functions. Some modern neuroimaging techniques include (but are not limited to): positron emission tomography (PET), functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), event-related optical signal (EROS), and magnetoencephalography (MEG) (Ashbury, 2011).
According to a popular neuroscience textbook by Bernard Baars and Nicole Gage, “…we know there are regions of the brain, like the cerebellum, that do not give rise to conscious experience…” (Baars & Gage, 2010, p. 23), and that there is an empirical difference between brain regions that give rise to conscious experience and those that do not. Following this assertion are properties given by Gerald Edelman, who is credited with mobilizing neural Darwinism, a biological theory for functional specialization that emphasizes the evolutionary fitness of different brain regions. Accordingly, the following four features inform “fitness”: (1) the system in question contains diverse elements, (2) those diverse elements can be replicated or amplified, (3) natural selection affects the products of those diverse elements (such as neuroplasticity and neurogenesis for synaptic elements), and (4) the system maintains degeneracy, or compensatory mechanisms that provide adaptive adjustments in the face of insult or injury (Baars & Gage, 2010, p. 24). Edelman suggests that a pervasive capacity for reentry exists in and between any dynamic core of interacting neurons, and this reentrant signaling is responsible for emergent conscious experience, and perhaps what we have been terming quale (Edelman, 1993). Reentrant signaling in this view describes the resonant activity between neurons or neuronal populations/networks wherein structural (and hence functional) modifications can be made to the system based on our experiences (Baars & Gage, 2010, p. 25). “Experiences” run the gamut from epigenetic occurrences in development to implicit memory occurrences in everyday life (i.e., skill acquisition, priming, and conditioning).
This theory asserts that global conscious experience (for our purposes, diffuse quale) is governed by the N-dimensional space of the dynamic core, with any single conscious content associated with a particular neuronal population, such as the functional localization of memory in the hippocampus. Moreover, neural Darwinism argues that the dynamic and neuroplastic capacity of the N-dimensional space accounts for the diversity in conscious contents (Baars & Gage, 2010, p. 25).
Given that our intended usage of the term quale is commensurate with Edelman’s usage of conscious experience, it is advantageous to assess neural Darwinism’s bearing on neurophenomenology. This theory focuses on the properties of neural architecture; however, the ultimate picture it paints of neural networks is informative regarding structure-function relationships, which are traditionally examined with neuroimaging technology. Moreover, the importance of neuroplasticity and development to this theory necessitates research or modeling that assesses change over time (i.e., longitudinal and pre-post study designs).
In a 2003 article by Noe and Hurley, we are presented with a neurally founded case for functionalism (Hurley & Noe, 2003). In this context, functionalism draws upon the aforementioned tenets of faculty psychology coupled with behavioral outputs (evidenced by neuroimaging and related techniques), and further asserts that qualia associated with such mental actions arise from changes in cortical dominance or cortical deference (Hurley & Noe, 2003). A critical exchange ensued following this publication (Noe & Hurley, 2003; Gray 2003), particularly regarding the case of synesthesia. Moreover, functionalism is critiqued for its inadequacy in addressing the hard problem (Block, 1980; Searle, 1990), primarily because it would predict qualia in any model wherein we apply a functionally specialized network. It seems clear that the utility of structure-function neuroimaging techniques is limited to the clinical realm of neuropsychology, and functionalism is not a sufficient paradigm for predicting quale.
A popular neuroimaging technique is functional magnetic resonance imaging (fMRI), which is said to have poor temporal resolution, or poor real-time output (Darby & Walsh, 2005). However, fMRI has high spatial resolution, or a high discriminative capacity between locally active neurons (Carlson, 2012). Another popular neuroscientific technique (described more thoroughly in the following section) is electroencephalography (EEG), which has the inverse resolution profile of fMRI, i.e., high temporal resolution and low spatial resolution. Therefore, these two techniques have been combined in recent efforts to synchronously capture discriminated neuronal activity and this activity in real-time (Lemieux et al., 2001). The logically termed EEG-correlated fMRI (EEG-fMRI) approach is an intriguing method for our considerations of qualia, particularly given the propensity for EEG to inform us about unconsciousness (Bachmann, 2012). Unconsciousness in this sense refers to an altered state often preceded by traumatic brain injury (such as the comatose state), which we should certainly not confuse with terminology referring to phenomenal character. However, if one were to address the hard problem with neuroimaging, the EEG-fMRI method may prove to be advantageous over fMRI alone, given the additional informative value and balance of resolution capacity.
One of the most enduring approaches in neural science is electroencephalography (EEG), wherein recordings of the brain’s electrical activity are made (Darby & Walsh, 2005). Activity captured by EEG recordings can be categorized into two general types of measurement: spontaneous or evoked/event-related potential (Baars & Gage, 2010, p. 559). Spontaneous recordings inform us about the routine electrical activity in the brain and can be performed invasively (i.e., intracranial EEG, sometimes called electrocorticography) or non-invasively (i.e., electrodes placed over the scalp) (Budzynski et al., 2008). The electrical observations made in spontaneous EEG are discussed in terms of rhythmic activity occurring in frequency bandwidths that tend to correlate with predictable cortical locations and functional states. Deviations in the expected electrical patterns have clinical utility, particularly when applied to epilepsy, traumatic brain injury, sleep/wake cycle disorders, and unconscious states (i.e., coma) (Budzynski et al., 2008). Evoked potentials (EP) and event-related potentials (ERP) provide us with constituent electrical activity from an EEG recording when one is presented with a stimulus (Baars & Gage, 2010, p. 559). An ERP results in a stereotyped waveform that rapidly follows a stimulus (on the order of milliseconds) and is often treated as a biomarker for early sensory processing activity and deficits therein (Dennis & Hajcak, 2009). Often, EP is used synonymously with ERP, because both refer to activity evoked by a stimulus; however, most usage suggests that ERPs are a sub-class of EPs (Luck, 2005).
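To make the ERP measurement concrete: because any single trial buries the stereotyped waveform in spontaneous background activity, ERPs are conventionally estimated by averaging many stimulus-locked epochs so the uncorrelated background cancels out. The following is a minimal sketch of that averaging logic on synthetic data (the waveform shape, noise level, and trial count are illustrative assumptions, not parameters from any cited study):

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000                         # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)  # epoch window: -100 ms to 500 ms around stimulus

# Hypothetical stereotyped response: a positive deflection peaking near 300 ms.
erp_template = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Simulate 200 single trials: the same template buried in large-amplitude
# "spontaneous" background activity (modeled here as Gaussian noise).
n_trials = 200
trials = erp_template + rng.normal(0, 5.0, size=(n_trials, t.size))

# Averaging time-locked epochs attenuates the uncorrelated background by
# roughly a factor of sqrt(n_trials), leaving the event-related potential.
erp_estimate = trials.mean(axis=0)

peak_latency = t[np.argmax(erp_estimate)]
print(f"estimated peak latency: {peak_latency * 1000:.0f} ms")
```

The design choice worth noting is that averaging only recovers activity that is time-locked to the stimulus; anything with variable latency is smeared away, which is one reason ERPs index early, stereotyped sensory processing so well.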
The various methods of electrophysiological neuroscience are vulnerable to the same critiques that we faced in assessing neuroimaging; namely, the explanatory gap. However, some authors contend that the binding problem may be best studied under the empirical guise of electrophysiology. In a 1999 article by Engel et al., it is suggested that network integration is coordinated by synchronization of neuronal discharges (i.e., measured oscillations) and this synchronicity is a selection mechanism for functional circuits as well as intermodal processes (Engel et al., 1999). The upshot of this study is a binding hypothesis that is temporally founded, which may be an important feature for those who consider consciousness an emergent property.
An interesting offshoot of EEG technology is an experimental therapy technique called neurofeedback (NFB). It is based upon the operant conditioning principles of reinforcement and punishment to condition a desired behavior, but instead of targeting behavior, it targets a desired neural oscillation (Baars & Gage, 2010, p. 297). In practice, NFB is a type of brain-computer interface (BCI) because of the critical dependence on computer technology to provide real-time feedback to the experimental participant (Neuper & Pfurtscheller, 2010). Feedback that informs the participant on whether or not they are activating the target oscillation is sometimes in the form of video game rewards or punishments (such as auditory tones or some other game-specific goal being reached) dictated by the brain’s activity, i.e., hands-free operation (Budzynski et al., 2009). The applications of NFB training are varied, and experimental results regarding its efficacy are disputed. However, NFB has intriguing implications for our discourse on qualia; the success of NFB is inextricably linked with volitional control, a mental skill tantamount to the philosopher’s intentionality.
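The operant contingency at the heart of NFB can be sketched in a few lines: estimate the power of the target oscillation in each recording window and deliver the reward signal only when that power crosses a criterion. The band (alpha, 8-12 Hz), sampling rate, and threshold below are hypothetical illustration values, and real systems use more careful spectral estimation:

```python
import numpy as np

fs = 256  # sampling rate (Hz), one-second analysis windows

def band_power(window, fs, lo, hi):
    """Power in a frequency band, estimated from the FFT of one EEG window."""
    freqs = np.fft.rfftfreq(window.size, 1 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

def neurofeedback_step(window, threshold, lo=8.0, hi=12.0):
    """One feedback cycle: reward if and only if the target oscillation
    (here alpha, 8-12 Hz) exceeds threshold -- the operant contingency."""
    power = band_power(window, fs, lo, hi)
    return ("reward" if power > threshold else "no reward"), power

# Simulated one-second windows: one dominated by a 10 Hz alpha rhythm,
# one containing only background noise.
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
alpha_window = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(fs)
noise_window = 0.2 * rng.standard_normal(fs)

threshold = 1000.0  # hypothetical calibration value
print(neurofeedback_step(alpha_window, threshold)[0])
print(neurofeedback_step(noise_window, threshold)[0])
```

In practice the threshold is calibrated per participant from a baseline recording, which is one place the idiosyncratic, subjective element of the method enters the protocol.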
Cellular and Molecular
The nervous system is a biological system with a cellular, molecular, and metabolic infrastructure (Carlson, 2012). The biologically oriented approaches to studying the brain are tremendously diverse, and include (but are not limited to): microscopy, histology, gross morphological imaging of neurons, neuronal cell cultures, tracking molecular activity of synaptic circuits, biochemical assays (particularly of the “immuno-” variety), and psychoactive drug studies (Carter & Sheigh, 2010). Many of the aforementioned research techniques are unethical to implement on human participants, therefore there is the familiar tradition of administering these methods to model organisms, such as worms, insects, rodents, and monkeys, both pre- and post-mortem (or post-sacrifice, as it is typically proclaimed in the materials and methods sections of scientific papers) (Baars & Gage, 2010, p. 512).
A notorious buzzword in cognitive neuroscience is plasticity, and specifically, synaptic plasticity. During typical learning and memory processes, as well as the naturally occurring modifications that follow traumatic brain insult/injury, neuroplastic mechanisms advantageously allow for adaptive reorganization of network, synaptic, and molecular structures (Carlson, 2012). These levels of organization are linked to one another, where changes in one will ultimately lead to changes in another, i.e., changes on the molecular level will effectually cause changes on the network level (Pascual-Leone et al., 2011). This is a tenet of modern neuroscience because it implies a framework for thinking about the brain as a homeostatic organ, predisposed toward the maintenance of dynamic equilibrium (Darby & Walsh, 2005).
A suitable example of neuroplasticity is long-term potentiation (LTP), a neural operation whose examination was inspired by Hebbian theory (Baars & Gage, 2010, p. 547). Colloquially explained as “cells that fire together, wire together” (Doidge, 2007, p. 427), Hebbian theory (sometimes called cell assembly theory) refers to increases in synaptic potency by virtue of increases in synaptic stimulation (Hebb, 2002). The study of LTP is traditionally linked with learning and memory processes occurring in the hippocampus (Lomo, 2003); however, LTP has been detected in the amygdala, the neocortex, the cerebellum, and even on a global neural scale (Malenka, 2004). While the examination of LTP was historically informed by Hebbian theory, modern notions of LTP can be catalogued as Hebbian, non-Hebbian, and even anti-Hebbian (Baars & Gage, 2010, pp. 547-548).
On the molecular level of the hippocampal circuit, LTP is demonstrated by the dependence on high-frequency stimulation of post-synaptic hippocampal neurons (i.e., a high-bandpass switch) to trigger the NMDA receptor for the neurotransmitter glutamate to open its channel (Baars & Gage, 2010, p. 545). Following NMDA receptor-dependent LTP, synaptic strength is increased for a long period of time, often up to months (Abraham, 2003). This suggests a selective affinity for environmental information that induces the highest frequency of neural stimulation: activity on the molecular and synaptic level generates changes on the network level (Baars & Gage, 2010, p. 546). Relatedly, LTP has been found to be crucial for the normal functions of working memory (WM), particularly by studies that block LTP and demonstrate degraded WM functioning (Lynch, 2004). While WM is decidedly different from quale, there is an intuitive link between WM and intentionality. Given the culmination of neuroplasticity, inter-level modification, and the association with intentionality (by virtue of probing WM) that LTP studies can demonstrate, LTP may be an advantageous process for future experimentation on quale.
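The Hebbian principle underlying LTP can be stated computationally in one line: a synaptic weight grows in proportion to coincident pre- and post-synaptic firing. The toy simulation below (firing rates, learning rate, and correlation structure are all illustrative assumptions, not a biophysical model of the NMDA mechanism) shows how this rule selectively potentiates the synapse whose inputs are correlated with the post-synaptic cell:

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """'Cells that fire together, wire together': increase the weight
    in proportion to coincident pre- and post-synaptic firing."""
    return w + lr * pre * post

rng = np.random.default_rng(0)
steps = 1000

# Two synapses onto the same cell: one whose pre-synaptic spikes are
# correlated with the post-synaptic cell, one firing independently.
w_corr, w_uncorr = 0.5, 0.5
for _ in range(steps):
    post = rng.random() < 0.2                                   # post-synaptic spike (20% rate)
    pre_corr = post if rng.random() < 0.9 else (rng.random() < 0.2)
    pre_uncorr = rng.random() < 0.2                              # independent input
    w_corr = hebbian_update(w_corr, float(pre_corr), float(post))
    w_uncorr = hebbian_update(w_uncorr, float(pre_uncorr), float(post))

print(f"correlated synapse:   {w_corr:.2f}")
print(f"uncorrelated synapse: {w_uncorr:.2f}")
```

Note that this pure Hebbian rule only strengthens synapses; the non-Hebbian and anti-Hebbian variants mentioned above amount to different or opposite update rules, and real models add normalization or decay so weights do not grow without bound.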
Another time-honored method of studying molecular neuroscience involves the use of psychoactive drugs to infer the neurochemical basis of mental illness. Neurotransmitters are molecules endogenous to the brain that drive the activation of specific neural circuits (Carlson, 2012). In this sense, neurotransmitters can be thought of as another way brain regions are selected for their functional specialization, particularly given the notion that pre-synaptic neurotransmitters have “target” post-synaptic membranes (Baars & Gage, 2010). In the clinical assessment of psychopharmacological agents beneficial for patients with depression, scientists serendipitously discovered that drugs that inhibit monoamine oxidase (MAOIs; monoamine oxidase is an enzyme that breaks down monoamine neurotransmitters) effectually treat depression (Carlson, 2012). While more recent psychopharmacological treatments are preferred over MAOIs in clinical practice, they are historically pivotal for the development of the monoamine hypothesis (Cristancho, 2012). This hypothesis describes the cause of depression, a mental illness, to be an imbalance in neurotransmitters. Any scientist would likely agree that a fuller and more dynamic picture of “the cause” is needed to capture depression, which would require an interdisciplinary assessment of genetic, environmental, developmental, and cultural risk factors (Goldberg & Weinberger, 2009). However, it is still contended that chemical imbalance is a strong (if not the strongest) predictor for this mental illness. Given this predilection, it isn’t surprising that the media, and thus the layperson, portray neurotransmitters as the root cause for behavior, cognition, and emotion; akin to a miraculous dust sprinkled all over the brain. This is a misconception; neurotransmitters have a regulatory and selective function for supervening neural networks, wherein functional specialization more accurately takes place. 
It is acceptable to concede that inspection of neurotransmitters maintains significant meaning for neural network activation, however, the explanatory gap between activated neural networks and the quale of experience associated with the mentation of interest, still persists.
We have already addressed methodology paramount to the domain of cellular and molecular neuroscience; however, one may have noticed the absence of a crucial field of study, namely, genetics. Just as the nervous system maintains the parameters of a biological system, it is likewise driven by the central dogma of molecular biology. The central dogma accounts for the genotype-phenotype relationship in its assertion that DNA leads to RNA synthesis, which in turn leads to protein synthesis (Crick, 1958). A phenotype consists of observed attributes, which can include morphology, development, and behavior (Goldberg & Weinberger, 2009). Putting the spotlight on behavioral phenotypes (including mental illness), the field of behavioral genetics is an interdisciplinary synthesis of anthropology, neuroscience, genetics, epigenetics, psychology, and behaviorism (Shonkoff & Phillips, 2000). This field has a dubious past due to its association with eugenics and the theories of Sir Francis Galton (Forrest, 1995), but is experiencing a resurgence owing to its translational utility. Noteworthy methods of studying behavioral phenotypes as well as the genetics of cognitive neuroscience include (but are not limited to): genome-wide association studies, molecular cloning, recombinant DNA technology, transgenic organism studies, and twin studies (Carter & Sheigh, 2010).
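The central dogma's two-step flow can be sketched computationally. The following is a minimal, hypothetical illustration (the codon table is truncated to the few codons the example uses; real translation involves a 64-codon table, reading frames, and much more):

```python
# Minimal sketch of the central dogma (Crick, 1958): DNA -> RNA -> protein.
# Truncated codon table: only the codons used in the example below.
CODON_TABLE = {"AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP"}

def transcribe(dna: str) -> str:
    """Shorthand transcription: replace T with U in the coding strand."""
    return dna.upper().replace("T", "U")

def translate(mrna: str) -> list:
    """Read the mRNA codon by codon until a stop codon is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None or residue == "STOP":
            break
        protein.append(residue)
    return protein

print(translate(transcribe("ATGTTTGGCTAA")))  # ['Met', 'Phe', 'Gly']
```

The genotype (the DNA string) deterministically specifies a protein here; the point of contrast with behavioral genetics is that the path from protein to a behavioral phenotype, let alone to quale, admits no such simple mapping.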
Utilizing genetic methods to explore quale is arguably the least favorable technique under review. If we consider the ultimate goal of genetics (behavioral or traditional) to be translating biological predispositions (i.e., risk factors) into observable phenotypes, then we would have to treat quale as a phenotype, or qualia as many phenotypes. The notion that quale is a phenotype may prove interesting and useful after other, more fundamental, questions are answered; however, it should be held in suspension until we bridge the explanatory gap.
Another sub-field of neuroscience implied throughout our discourse, but not yet explicitly addressed, is computational neuroscience (also called theoretical neuroscience). This is another interdisciplinary field that probes the functions of the brain, but does so from an information processing perspective (Churchland et al., 1993). Drawing heavily on mathematics, electrical engineering, and computer science, computational neuroscience methods often aim to model cognition and behavior in artificial systems (Abbott & Dayan, 2011). An artificial neural network is proposed to maintain machine-learning principles (i.e., statistical learning algorithms) in a non-linear, adaptive, and robust way, decidedly inspired by human neural capacity (Siegelmann & Sontag, 1994).
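To make the phrase "statistical learning algorithm" concrete, here is a minimal, hypothetical sketch of the simplest artificial neuron, a single perceptron, learning the logical AND function from examples. It is a toy illustration of the adaptive principle, not any specific model from the computational-neuroscience literature:

```python
# A single artificial neuron (perceptron) learning logical AND.
# Weights are nudged toward reducing the error on each example,
# which is the "statistical learning" at the heart of neural networks.

def train_perceptron(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # prediction error drives learning
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in AND])  # [0, 0, 0, 1]
```

Nothing here is "experienced" by the neuron; the network merely adjusts numbers, which is precisely why modeling success alone leaves the explanatory gap untouched.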
A model proposed by Stephen Grossberg is Adaptive Resonance Theory (ART), wherein a dynamic interaction between bottom-up data, top-down anticipations, and the focusing capacity of attention creates a resonant state crucial for the capacity to learn and mentally represent information (Grossberg, 1999). Grossberg equates resonant states with conscious brain states, and ART modeling allows for plasticity as well as stability (Carter & Sheigh, 2010). This is reminiscent of Bernard Baars’ Global Workspace Theory (GWT), which proposes that conscious contents are diffusely available yet elected by the “spotlight” of attention (Baars, 1988). While both GWT and ART are progressive descriptions of cognitive architecture, and perhaps even beneficial parameters for the inclined neurophenomenologist, they do not serve our purposes in bridging the explanatory gap. That is to say, even if we were to successfully model a neural network in an artificial system under all of ART’s constraints, the phenomenal character of experience would remain unexplained. Even if the artificial system (say, a robot) could report the quale of its experience, ART and GWT do not provide us with an index for verifying that report, or even for differentiating it from human quale.
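The core ART mechanism, bottom-up input meeting a top-down expectation, with a "vigilance" threshold deciding between resonance and reset, can be sketched in a few lines. This is a highly simplified, hypothetical rendering loosely modeled on ART-1's binary matching rule, not a faithful implementation of Grossberg's full dynamics:

```python
# Simplified ART-style matching: an input resonates with a stored top-down
# expectation (prototype) only if their overlap clears the vigilance threshold;
# otherwise the search resets and a new category is created.

def match_score(input_bits, prototype):
    """Fraction of the input's active features confirmed by the expectation."""
    overlap = sum(i & p for i, p in zip(input_bits, prototype))
    active = sum(input_bits)
    return overlap / active if active else 1.0

def present(input_bits, categories, vigilance=0.7):
    for idx, proto in enumerate(categories):
        if match_score(input_bits, proto) >= vigilance:       # resonance
            # Fast learning: the prototype narrows to the intersection.
            categories[idx] = [i & p for i, p in zip(input_bits, proto)]
            return idx
    categories.append(list(input_bits))                       # reset -> new category
    return len(categories) - 1

cats = []
a = present([1, 1, 0, 0], cats)   # founds category 0
b = present([1, 1, 1, 0], cats)   # overlap 2/3 < 0.7 -> new category 1
c = present([1, 1, 0, 0], cats)   # perfect match -> resonates with category 0
print(a, b, c)  # 0 1 0
```

Raising the vigilance parameter yields finer categories (more resets), lowering it yields coarser ones; this is the plasticity-stability trade-off the model is prized for. Note, though, that "resonance" here is just a threshold test on an overlap ratio, which underscores why equating resonant states with conscious states is a substantive philosophical claim rather than something the model demonstrates.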
Neurophenomenology encourages us to confront the hard problem of qualia and the explanatory gap between physiology and phenomenal character in our research and clinical practice. In the words of Francisco Varela, “I hope I have seduced the reader into considering that we have in front of us the possibility of an open-ended quest for resonant passages between human experience and cognitive science” (Varela, 1996, p. 346). In the spirit of Varela, our discourse is intended to energize critically minded deliberation regarding the hard problem(s). The field of neuroscience is remarkably varied in its sub-disciplines as well as its technology and methods, with significant contributions from neuroimaging, electrophysiology, cellular/molecular biology, genetics, and computational modeling. Each of these frameworks lightly touches on some aspect of quale (as it has been proposed by philosophy); however, there is currently no technology or method that directly examines phenomenal character.
Neuroimaging aligns well with concerns regarding functional regions of the brain, and may best serve any future study on quale if combined with electrophysiological methods (e.g., simultaneous EEG-fMRI). Electrophysiological techniques alone have informative value for the binding problem as well as intentionality concerns related to neurofeedback. Intentionality is likewise linked to working memory, which has received interesting evaluation in the synaptic and molecular analysis of long-term potentiation. LTP studies additionally allow us to analyze neuroplastic and modulatory mechanisms in the brain, which may prove to be critical features of quale. Genotype-phenotype analyses will be fruitful only after the explanatory gap is bridged. Lastly, computational modeling yields progressive results for our understanding of cognitive architecture, and as such, those so inclined should carefully differentiate between consciousness theory and architectural theory.
The benefits of the neurophenomenological movement are manifold: it encourages divergent and innovative thinking; it synthesizes concerns from philosophy, cognitive science, and medicine; and it attempts to bridge the onerous explanatory gap. Ultimately, these benefits should be weighed against ethical concerns over probing qualia, insofar as any potential measure of qualia may be applied in an irresponsible manner. This, of course, doubles the burden on the neurophenomenologist: not only to bear the weight of the hard problem in their research, but to responsibly contribute to and guide discussion of neuroethics.
 The term nature is used at this juncture to generally present the topic; specific terminological preferences will be addressed shortly.
 Accessibility here refers to unconscious versus conscious processes, or automatic versus willful. Automatic processes will not explicitly be addressed in this essay; however, they are relevant, particularly in the probing of qualia of sense-data.
 Awareness takes the place of consciousness here to have a broader connotation.
 To be abnormal refers to any phenomena observed in abnormal psychology.
 There’s a vast literature with more interesting commentary and thought experiments regarding the nuanced aspects of the consciousness debate; this is omitted for brevity.
 All things here includes quale.
 In attempts to choose the methods and tech for review, I first tried to analyze which techniques received the most “hits” in popular neuroscience databases. This proved fruitless, so I then based my choices on my personal experience as a student of neuroscience, along with the common research technique categories delineated in popular neuroscience textbooks.
 Soft is intended to mean subject to interpretation and of no particular hierarchy.
 These “soft criteria” are inspired by neurophenomenology and common criticisms in the literature.
 Following this line of thought, one could consider certain types of molecular analysis to be another form of neuroimaging, however, this will be specifically addressed later on, in the sub-section entitled “Cellular and Molecular”.
 Imaging methods not included here are PET scans, CAT scans, and SPECT.
 The biochemical cascade involved in NMDA receptor-dependent LTP is lengthy and omitted for brevity.
 It should be noted that RNA can also lead back to DNA modifications, as in reverse transcription.
Please click the link below to access my reference list.