Final Exam Woes: Survival Tips

If you’re like me, the appearance of holiday festooning, social media posts about family gatherings and wintry festivities, and the sudden need to encase oneself in layers of blankets means one thing: STRESS. Unfortunately, for those of us who have chosen the academic route, the year’s end is marked by deadlines, exam dates, and pressure to organize your schedule for the following year. This end-of-year anxiety certainly exists in other fields, but it seems exaggerated in academia. Case in point: I have a graduate degree, I am no longer enrolled in classes, and still, when I see December in my calendar, I react with pure panic.

So, perhaps we can all take a moment together, breathe, know that we are not alone in this, and take some sage advice from neuroscience, psychology, and, well, just plain common sense (aka: things your momma told you). Here are my top five tips for surviving finals, followed by a TED talk on “learning to learn” by Barbara Oakley, which I highly recommend watching in its entirety.

1. Feign passion

This is along the lines of “fake it ’til you make it”, although in this case I implore you not to fake the knowledge, but the passion you have for said knowledge. The next time you’re cracking open your textbook or lecture notes, or really any material that feels cumbersome and outright boring, try saying to yourself “wow that’s so interesting!” or “I’m so impressed by this, I wonder if they ever thought of XYZ?” If you’re lucky enough to be thoroughly engaged by your material already, then you know how this changes things. These brief mentations drastically change the tone and manifestation of your work by creating an atmosphere of “swimming with the tide”, instead of against it. To borrow a term from social psychology, let’s think about mood congruence, where research has shown harmonization between emotions experienced and later behaviors. The most relevant ‘behavior’ here is the retrieval of memories (e.g. the stuff you’re studying). You will optimize your test-taking abilities if you have a positive temperament while studying, because when you recall this information later, it will trigger the same ardor and positivity attached to the memory, culminating in a sharpened cognitive state and a better test-taking experience. I also suspect that feigning passion in this way stimulates a critical thinking mindset, where you may pick up on meaningful patterns in the material more readily and ask deeper questions, as a scientist might do when they are really keen on a certain topic.

2. Rule of 7

This is also referred to as Miller’s Law, after its author, cognitive psychologist George Miller. He wrote about the “magical number 7” in the 1950s, and although modern research has expanded upon this, and somewhat refuted his original notions, the heart of the matter remains the same: our working memory is limited to 7 (give or take 2) ‘chunks’ of information. So what does this mean? If you are studying for an exam with an immense scope of content, it’s most prudent to divide your topics into 7 broad categories, or chunks. Therefore, if you’re the type to create study guides in the traditional outline format (using Roman numerals or something similar), do not exceed 7 broad section names or topic titles. Likewise, if you color-code flashcards by topic, do not exceed 7 colors!
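For fun, the chunking idea can be sketched in a few lines of Python; the helper name and topic list below are invented for illustration, not taken from Miller’s work:

```python
# Toy illustration of Miller-style chunking: split a long list of study
# topics into at most 7 roughly equal groups. All names are invented.
def chunk_topics(topics, max_chunks=7):
    """Return `topics` divided into at most `max_chunks` groups."""
    per_chunk = max(1, -(-len(topics) // max_chunks))  # ceiling division
    return [topics[i:i + per_chunk] for i in range(0, len(topics), per_chunk)]

topics = ["topic %d" % i for i in range(1, 20)]  # 19 topics to study
chunks = chunk_topics(topics)
print(len(chunks))  # prints 7 -- never more than the magical number
```

However you slice it, the outermost structure of your study guide stays within working-memory limits.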

3. Environments: mix it up

This one reminds us again of mood congruence, with a bit of a twist. Psychologists have demonstrated an accord between the environment you practice something in and the environment you ultimately act out the corresponding behavior in. Such experiments suggest that you’ll recall information best in a place that is similar to the place you studied it in. One might take this to mean that you should study in your lecture hall, which you can certainly try, but you’ll likely be kicked out by a custodian or an overzealous TA. Therefore, it’s most advantageous to mix up your environments, so that your memory does not have a preferential attachment to one type of place over another. Try your home, study lounges, the library, a coffee shop, the bus, etc., and when test day comes, your surroundings will presumably fade out of your thought process (consciously and unconsciously). I’ve incorporated this into my study habits and I’ve found it has the additional benefit of stimulating your focus, or keeping things fresh, so to speak. When you spend hours upon hours (sometimes into double digits) slaving over your textbook in the same stagnant library area, you eventually feel something akin to highway hypnosis; let’s call it library hypnosis. If you feel like this, like a sweatshirt zombie wondering whether you’ve suddenly developed dyslexia because your eyes cannot process words anymore, it is time to stand up, take a deep breath, go to the local coffee shop, caffeinate, and freshen up your surroundings.

4. Reach out

This one is based on personal experience and is frankly something I wish someone had told me when I was an undergrad. The innumerable forms of communication available to us nowadays are a gift: use them. If you have a question, no matter how specific or seemingly “silly”, reach out to your professor, TA, or classmates and just ask. Email, text, call, swing by their office: whatever it takes. In the case of faculty, they are being paid to educate you, which is not limited to your two-hour lecture. If you do not fully understand something, especially after going back through the text and/or lecture material, they have not done their job. Which brings me to a point I cannot stress enough: you are not bothering them, and if you are, who cares! Many students cite this as a reason for their reticence in corresponding with their professors, but along the way you will learn that ongoing communication is vital to every profession, and academia is no different. As for your classmates: if you have a question about something, there is an extremely high chance that someone else does as well. Reaching out to classmates to coordinate group study sessions or review each other’s notes can really bolster your level of comprehension, as well as add a social factor to studying that may ease the pain a bit. Additionally, research supports the so-called self-explanation effect, where learning is enhanced when you have to devise an explanation in your own words, or in other terms, teach it to someone else. So if your group study session gives you the opportunity to teach a friend…take it!

5. TLC

Finally, this point touches on things your momma told you and neuroscience. If you’re challenging yourself and exceeding limits you never thought you could, whether it be intellectually, psychologically, physically, or otherwise, you must give yourself time to recuperate and reconcile the growing pains, so to speak. Here we’re talking about reinforcing memories while studying, which depends in large part on long-term potentiation, one of a few molecular processes that facilitate synaptic plasticity in the adult brain. Key points to consider are: this process is not immediate, and researchers have found that the neural basis of memory storage rests in the strengthening of synapses in distinct circuits via repeat exposure to whatever the content of your memory is. This is a Hebbian learning model proven physiologically, and, like any physiological process, it requires energy, work, and time for the effects to be seen ‘downstream’. Therefore, it is imperative that you give your precious synaptic circuits the nourishment and rest they need in order to properly “flex their muscles” on test day. Or as your momma told you: get a good night’s rest and don’t forget to eat breakfast!
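For the programmers in the audience, the “repeat exposure strengthens synapses” idea can be caricatured in a few lines of Python. This is a toy Hebbian update, not a model of real LTP kinetics; the learning rate, starting weight, and session count are arbitrary illustrative choices:

```python
# Toy Hebbian learning: co-activity of a presynaptic and postsynaptic
# unit strengthens their connection a little on every "study session".
w = 0.1    # initial synaptic weight
lr = 0.05  # learning rate: the increment from each repeat exposure

for session in range(20):  # twenty study sessions
    pre = 1.0              # presynaptic activity (reviewing the material)
    post = w * pre         # postsynaptic response scales with the weight
    w += lr * pre * post   # Hebbian rule: "fire together, wire together"

print(round(w, 3))  # ~0.265: the synapse is noticeably stronger
```

Note the compounding: each session builds on the strength left behind by the last, which is the “energy, work, and time” point above in miniature.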

Learning how to learn by Barbara Oakley



Lately, my thoughts have been converging on free will, intentionality, volitional control, etc.: notions I typically overlook, or perhaps take for granted as a student of neuropsychology, but certainly shouldn’t. Free will plays such a critical role in our clinical theories and practices; in cognitive behavioral therapy, for example, we proceed on the assumption that humans are inherently capable of deciding and directing their own thoughts and actions, and this is pivotal for the treatment process. This is akin to assuming that existence itself is real and valid, insofar as questioning it almost feels absurd. Now, although I do think we should question everything, I think the real issue here is operationalizing free will, or having a commonly agreed upon definition of it and its bases. Only from there can it be properly integrated into the theories/practices that depend upon it, and perhaps those theories/practices will completely change once we understand free will better. Here’s an interesting interview with philosopher John Searle, alluding to a fundamental cognitive dissonance as the root cause for our avoidance of examining free will:

Additionally, here’s a succinct review of Derk Pereboom’s book on free will: The Free Will Debate, by Danaher.

Frontiers in Cognitive Neuroscience: The Pursuit of Phenomenal Character

Hippocampus III, Greg Dunn

Earlier this month I submitted a term paper for a philosophy course that was broadly focused on philosophy of mind, with a more specific spotlight cast upon theories of cognitive architecture, the bridge between cognition and perception, and epistemology. I’d like to share this paper with you here, as I believe it represents an interesting, and ultimately formidable, task that scientists are now facing: namely, probing phenomenology in an era of fast-paced, and often spectacular, technological innovation. It should be said that the following paper may incite a tl;dr response, as the final page count totaled about 30; however, I think that this line of inquiry is compelling and drives home the point that we have a lot of work in front of us, making this a fitting treatise as we approach the new year.


If one were so inclined, one could find questions and theories regarding the nature of being dating back to the origins of philosophy. Amazingly enough, what it means to exist has by no means been resolved. To complicate the issue further, modern science employs systematized parameters to define biology, for purposes that include (but are not limited to): organismal/structural classification, functionality, abnormal functioning, and even differentiating something as living or non-living. This complicates our considerations of nature[1] considerably by raising the question of how biology and the nature of experience can be reconciled into a unified and cohesive theory of existence. Is this even possible?

The purpose of this investigation is oriented toward human nature, and specifically, the nature of human mentation. The processes, capacities, and functions of mentation are studied in the various branches of cognitive science; however, the nature of cognition that bears upon this essay refers to the individuated and endogenous experience of having a mental activity. Thus, emphasis can be placed on intentionality, the experience of intelligent agents, and the essence of thinking.

One could envisage other meaningful considerations in the broad topic of nature, such as: the nature of being a non-human species, the nature of emotion, the nature of interpersonal dynamics, changes to one’s nature throughout development, and the nature of accessibility of different cognitive elements[2]. This essay will not explicitly address these considerations, but regarding them as ultimately relevant to a unified theory is encouraged. The reason for this exclusion is pragmatic: to reconcile the science of cognition with the nature of cognition we must first critically review the analytical tools at our disposal. Such tools consist of research technologies and methodologies that can be broadly categorized as functional or process-level analysis. To invoke the aforementioned ‘other considerations’ requires review of particular experimental manipulations, such as reviewing a longitudinal study design for developmental considerations. That is to say, an analysis of study tools is distinct from an analysis of study design; however, both factors should ultimately be considered and synthesized in a robust investigation of nature.

Origins and Etymology

The contemplation of the nature of being has roots in metaphysics under the study of ontology (Petrov, 2011). The implications of the term ontology have transformed over the years and throughout the literature, however, the investigations proposed herein are ontological. This is not to say our discourse assumes a particular ontological perspective over another, but to say that ontological queries are apropos.

Another term that may have emerged for readers already is phenomenology, or the study of the objects of awareness and of awareness itself[3]. Phenomenology is another term with a storied etymological history, particularly if it is compared to ontology. Historically, Johann Heinrich Lambert first used the term in his treatise called Neues Organon (Lambert, 1764), or New Organon. Phenomenology was later revisited by illustrious philosophers, such as Immanuel Kant (1781) and G.W.F. Hegel (1807). In Hegel’s Phenomenology of Spirit (1807), we are presented with the term as a study of appearances in consciousness, with a hierarchical progression from sensory consciousness up to the self-consciousness. Furthermore, Hegel proposed philosophical consciousness, which is akin to a notion of reason, to be the preeminent ascension in the evolution of consciousness (Hegel, 1807). In later years, Edmund Husserl mobilized phenomenology as a recognizable school of thought, with pioneering usage in Logical Investigations (Husserl, 1901). Husserl’s early literature on phenomenology was influenced by discourses on intentionality by his predecessor, Franz Brentano (1838-1917), as well as Carl Stumpf (1848-1936). This early formulation focused on the intentionality of consciousness in the sense that it can be considered directed and willful. Husserl’s later work developed phenomenology to mean the study of the essences, or the elements, of phenomena (Husserl, 1913); many refer to this paradigm as eidetic reduction. Some time later in the 20th century, Martin Heidegger (1927) challenged Husserl’s conception of phenomenology with existential charges, asserting that phenomenology cannot be captured as disentangled from the experiencing person (von Herrmann, 2013). Heidegger’s theory of dasein, or existence, is non-dualistic and requires any phenomenology to incorporate engagement with everyday life; i.e., ontology is irreducible (Reiners, 2012).

Given the complex etymology of phenomenology and ontology, it isn’t surprising that more recent thinkers have adopted different terminology to represent studies of this kind (however, many authors recall those terms when appropriate). This brings us to our preferred term, qualia, which will be appropriated for the remainder herein to mean the phenomenal characters of experience (Keeley, 2009). The etymology of qualia (or quale in its singular form) is nebulous, but its popularization can be traced distinctly back to C.I. Lewis’ Mind and the World-Order (1929), wherein he focused specifically on qualia of sense-data. Many thinkers have since debated the nuanced implications of qualia (Nagel, 1995; Chalmers, 1995; Searle, 1998); however, we will take quale to represent the phenomenal character of any mentation of interest, as opposed to modalities that are exclusively sensory.

The Hard Problem

Modern day cognitive neuroscience allows us to develop deductions regarding functional localization, or the systematic appraisal of brain regions based on apparent functionality (Darby & Walsh, 2005). Furthermore, we can chronicle the properties of these functional regions on a variety of levels: electrical, chemical, molecular, and network. A germane example of a neural region with well-documented functional specialization is the hippocampus, a deep brain structure located in the medial temporal lobe. The properties of the hippocampus have been studied on all of the aforementioned levels and there is compelling evidence that it is specialized for memory functions, specifically, long-term memory processes (Turner, 1969; Mueller et al., 2011; Carlson, 2012). Evidence of this kind supports the Fodorian notion of mental modularity (Fodor, 1983) as well as the tenets of faculty psychology (Aquinas, 1947, Translated Ed.), insofar as distinct cognitive faculties can be correlated with distinct neural regions (or modules in Fodor’s phrasing). In parallel with functional specialization, cognitive science provides us with adequate language and evidence for: the integration of information in the central nervous system (Baars & Gage, 2010); the presence of endogenous and/or voluntary mental capacities, such as attentional deployment (Baars & Gage, 2010); the correlation between behavioral cycles and brain-state cycles (Budzynski et al., 2008); malleable and adaptive neural mechanisms, such as neuroplasticity or learning mechanisms (Baars & Gage, 2010); and the processes that connect sensory processing with reactive behavior, such as reflex circuits and reflexology (Carlson, 2012). Given the availability of high-tech research tools and innovative translational paradigms, these problems can be considered “easy”.
The research itself may be time-consuming and require a preponderance of confirmatory papers, but the problems themselves have palpable solutions in the sense that these phenomena exist and scientific observations can be replicated (Chalmers, 1995).

In contrast to these easy problems of cognition and behavior stands the so-termed “hard” problem (Chalmers, 1995). David Chalmers notably called this hard problem one of consciousness itself, of the subjective experience of the aforementioned phenomena (Chalmers, 1995). Revisiting our example of memory, we can correlate hippocampal activity with memory processes and say that this functional specialization is valid and reliable. However, it is substantially harder to scientifically address what it is like to have a memory. Theories and papers surrounding the hard problem tend to utilize the term consciousness when referring to the object of the problem. This may be due to the connotations of the term itself, in that consciousness implies a principality or some other sovereign and diffuse property of mind. However, it is contended here that the hard problem exists on every level of mind science, including (but not limited to): cognitions, sensory processes, emotions, behavior, and disease/disorder states. That is to say, what it is like to have a mental activity, to sense, to feel, to act, and to be abnormal[4], are all hard problems in the same way as ‘what it is like to experience consciousness’ is a hard problem. En masse, we may find that these differentiated hard problems synthesize to the hard problem of consciousness, i.e., a Gestalt effect, or alternatively, we may find that there is a fundamental property common to each mental attribute, i.e., an Occam’s razor effect. The latter result would implicate consciousness as merely another mental capacity, in equal standing with cognition and perception. The former result would implicate consciousness as an emergent property of all other mental attributes, a conceivably troubling suggestion regarding the potential of artificial intelligence. In any case, the hard problem persists[5].

Over a decade before Chalmers, Joseph Levine designated the source of such difficulties as an explanatory gap between the physiological bases of mind and the experiential phenomena associated with such physiology (Levine, 1983). An example offered by Levine asks us to consider nociception, or the physiological processes underlying pain perception. We can explicate nociception in terms of bodily systems, but we cannot scientifically account for what it is like to experience pain. Re-orienting the explanatory gap back to our preferred terminology: the gap lies in questions regarding the qualia of pain, or, the inability to scientifically describe the phenomenal character/s of pain.

Related to the hard problem is the binding problem which, as put forth by John Smythies, refers either to a computational issue of segregation (i.e., binding problem one) or to a combinatorial issue of emergence (i.e., binding problem two) (Smythies, 1994). Arguably, binding problem two is more pertinent to our current exploration in that it is broader in scope and can describe the problem for cognition just as it does for perception. The binding problem confronts the explanatory gap existing specifically between cognitive, neural, and philosophical disciplines.

Recalling the dispute between Husserl and Heidegger, are these problems due to our modus operandi for what is knowable? Or is the problem due to inadequate tools for studying ontology? Knowability issues are likely related to the requirements of the scientific method, insofar as reproducibility is a dogmatic standard all scientists must meet (Popper, 1935). The very essence of quale as a phenomenal character implies temporal singularity, or one-of-a-kindness, so how could we fit quale into our standard of reproducibility?


The hard problem is particularly formidable for those who subscribe to a physicalist paradigm (Stoljar, 2009). This includes any scientist or scholar who endorses an ontological monism, whether it be: bottom-up physicalism, i.e., there is a physical basis for all things; top-down physicalism, i.e., physicality supervenes over all things; or an intermediate framework, i.e., there is dynamic interaction between all things and their physicality[6]. The physicalist paradigm concerning the mind is sometimes called the embodied mind thesis, which can be associated with the work of Maurice Merleau-Ponty (1908-1961). The review that follows will specifically interrogate formulations from the wide purview of neuroscience, including cognitive and behavioral neuroscience (sometimes called biopsychology), neuropsychology, translational neuroscience, and interdisciplinary fields, such as behavioral genetics.

Noteworthy contributions to this discourse come from self-proclaimed neurophenomenologists. Such scholars may be philosophically, scientifically, or clinically oriented, but find common ground in explicit appreciation of the hard problem in their work. In the 1960s, neurologist Erwin Straus urged his colleagues to consider phenomenology in their research and personal practice, pioneering the neurophenomenology movement (Straus, 1964). Noteworthy neurophenomenologists who adhere to Straus’ invitation are Alexander Luria, Walter Freeman, Francisco Varela, and Antonio Damasio. Relatedly, embodied cognitive scientists who wish to computationally model intelligent behavior may concern themselves with phenomenology and qualia. In his collective works on artificial intelligence, Hubert Dreyfus challenges us to analyze modeling of this kind in a dynamic, integrative, and pragmatic way (Crossman, 1985).

Review of Techniques

In a 2013 editorial on neurotechniques in Nature Neuroscience, it is asserted that within the past five years alone, “…the number of abstracts describing new methods or technology development that were presented at the annual Society for Neuroscience meeting increased by nearly 50 percent” (Nature Neuroscience, 2013). Concurrently, the Obama Administration announced a research venture called the BRAIN Initiative, or the Brain Research through Advancing Innovative Neurotechnologies initiative, which is proposed to cost billions of dollars in federal funding over the next decade (The White House, 2013). In the spirit of the Human Genome Project, the BRAIN Initiative aims to map the neuronal activity of the entire brain.

In the sections that follow, prominent technologies and methods from the domain of neuroscience will be critiqued within the scope of the hard problem and their potential for gauging qualia[7]. This type of review cannot be quantitatively guided because we do not currently have a litmus test for a research tool’s sensitivity to qualia. Alternatively, the intention is to align with the neurophenomenological tradition of encouraging critical thinking regarding the hard problem in cognitive science. Judicious efforts will be made to consider each category of neurotechnology in an individuated way for its informative value in the hard problem discussion. However, there are crucial elements that may guide our considerations as “soft” criteria[8], which are delineated by the following questions[9]:

  1. Has the tool ever been analyzed for phenomenological competency?
  2. Is the tool or its object of measurement dynamic?
  3. Is the object of measurement intermodal or otherwise connected with distant bodily systems?
  4. Can it be (or has it already been) combined with other tools?
  5. Does it utilize any subjective or idiosyncratic measures?

Imaging and Mapping

Neuroimaging technologies implement an assortment of processing techniques to either directly or indirectly provide an image of the nervous system (Carter & Sheigh, 2010). Specifically regarding whole brain imaging, the image may be of a structural or functional type. A structural image depicts anatomical and architectural information about the brain, whereas a functional image illustrates any physiological process or pathway associated with neural activity (Carter & Sheigh, 2010). The sensitivity of the different imaging methods varies, where some can capture structure/function on a molecular level, and others depict the structure/function of global neural networks (Ashbury, 2011)[10]. Thus, neuroimaging is centrally concerned with visualizing the structure and/or function of neural circuits and regions. Although there are certain experimental paradigms employing neuroimaging for diverse purposes, it tends to align with faculty psychology in that specific brain regions are evidenced to give rise to specific cognitive, sensory, affective, or behavioral functions. Some modern neuroimaging techniques include (but are not limited to): positron emission tomography (PET), functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), event-related optical signal (EROS), and magnetoencephalography (MEG)[11] (Ashbury, 2011).

According to a popular neuroscience textbook by Bernard Baars and Nicole Gage, “…we know there are regions of the brain, like the cerebellum, that do not give rise to conscious experience…” (Baars & Gage, 2010, p. 23), and that there is an empirical difference between brain regions that give rise to conscious experience and those that do not. Following this assertion are properties given by Gerald Edelman, who is credited with mobilizing neural Darwinism, a biological theory for functional specialization that emphasizes the evolutionary fitness of different brain regions. Accordingly, the following four features inform “fitness”: (1) the system in question contains diverse elements, (2) those diverse elements can be replicated or amplified, (3) natural selection affects the products of those diverse elements (such as neuroplasticity and neurogenesis for synaptic elements), and (4) the system maintains degeneracy, or compensatory mechanisms that provide adaptive adjustments in the face of insult or injury (Baars & Gage, 2010, p. 24). Edelman suggests that a pervasive capacity for reentry exists in and between any dynamic core of interacting neurons, and this reentrant signaling is responsible for emergent conscious experience, and perhaps what we have been terming quale (Edelman, 1993). Reentrant signaling in this view describes the resonant activity between neurons or neuronal populations/networks wherein structural (and hence functional) modifications can be made to the system based on our experiences (Baars & Gage, 2010, p. 25). “Experiences” run the gamut from epigenetic occurrences in development to implicit memory occurrences in everyday life (i.e., skill acquisition, priming, and conditioning).
This theory asserts that global conscious experience (for our purposes, diffuse quale) is governed by the N-dimensional space of the dynamic core, with any single conscious content associated with a particular neuronal population within that space, such as the functional localization of memory in the hippocampus. Moreover, neural Darwinism argues that the dynamic and neuroplastic capacity of the N-dimensional space accounts for the diversity in conscious contents (Baars & Gage, 2010, p. 25).

Given that our intended usage of the term quale is commensurate with Edelman’s usage of conscious experience, it is advantageous to assess neural Darwinism’s bearing on neurophenomenology. This theory focuses on the properties of neural architecture; however, the ultimate picture it paints of neural networks is informative regarding structure-function relationships, which are traditionally examined with neuroimaging technology. Moreover, the importance of neuroplasticity and development to this theory necessitates research or modeling that assesses change over time (i.e., longitudinal and pre-post study designs).

In a 2003 article by Noe and Hurley, we are presented with a neurally founded case for functionalism (Hurley & Noe, 2003). In this context, functionalism draws upon the aforementioned tenets of faculty psychology coupled with behavioral outputs (evidenced by neuroimaging and related techniques), and further asserts that qualia associated with such mental actions arise from changes in cortical dominance or cortical deference (Hurley & Noe, 2003). A critical exchange ensued following this publication (Noe & Hurley, 2003; Gray, 2003), particularly regarding the case of synesthesia. Moreover, functionalism is critiqued for its inadequacy in addressing the hard problem (Block, 1980; Searle, 1990), primarily because it would predict qualia in any model wherein we apply a functionally specialized network. It seems clear that the utility of structure-function neuroimaging techniques is limited to the clinical realm of neuropsychology, and functionalism is not a sufficient paradigm for predicting qualia.

A popular neuroimaging technique is functional magnetic resonance imaging (fMRI), which is said to have poor temporal resolution, or poor real-time output (Darby & Walsh, 2005). However, fMRI has high spatial resolution, or a high discriminative capacity between locally active neurons (Carlson, 2012). Another popular neuroscientific technique (described more thoroughly in the following section) is electroencephalography (EEG), which maintains the inverse resolution profile of fMRI, i.e., high temporal resolution and low spatial resolution. Therefore, these two techniques have been combined in recent efforts to capture spatially discriminated neuronal activity in real time (Lemieux et al., 2001). The logically termed EEG-correlated fMRI (EEG-fMRI) approach is an intriguing method for our considerations of qualia, particularly given the propensity for EEG to inform us about unconsciousness (Bachmann, 2012). Unconsciousness in this sense refers to an altered state often preceded by traumatic brain injury (such as the comatose state), which we should certainly not confuse with terminology referring to phenomenal character. However, if one were to address the hard problem with neuroimaging, the EEG-fMRI method may prove to be advantageous over fMRI alone, given the additional informative value and balance of resolution capacity.

Electrophysiology

One of the most enduring approaches in neural science is electroencephalography (EEG), wherein recordings of the brain’s electrical activity are made (Darby & Walsh, 2005). Activity captured by EEG recordings can be categorized into two general types of measurement: spontaneous or evoked/event-related potentials (Baars & Gage, 2010, p. 559). Spontaneous recordings inform us about the routine electrical activity in the brain and can be performed invasively (i.e., intracranial EEG, sometimes called electrocorticography) or non-invasively (i.e., electrodes placed over the scalp) (Budzynski et al., 2008). The electrical observations made in spontaneous EEG are discussed in terms of rhythmic activity occurring in frequency bandwidths that tend to correlate with predictable cortical locations and functional states. Deviations in the expected electrical patterns have clinical utility, particularly when applied to epilepsy, traumatic brain injury, sleep/wake cycle disorders, and unconscious states (i.e., coma) (Budzynski et al., 2008). Evoked potentials (EP) and event-related potentials (ERP) provide us with constituent electrical activity from an EEG recording when one is presented with a stimulus (Baars & Gage, 2010, p. 559). An ERP results in a stereotyped waveform that rapidly follows a stimulus (on the order of milliseconds) and is often treated as a biomarker for early sensory processing activity and deficits therein (Dennis & Hajcak, 2009). EPs are often treated as synonymous with ERPs because both refer to activity evoked by a stimulus; however, most usage suggests that ERPs are a subclass of EPs (Luck, 2005).
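To make “rhythmic activity occurring in frequency bandwidths” concrete, here is a minimal NumPy sketch, assuming a synthetic 10 Hz signal in place of a real recording; the band edges are conventional textbook ranges, though implementations vary:

```python
import numpy as np

# Synthesize two seconds of a pure 10 Hz oscillation and ask which
# conventional EEG band holds most of its spectral power. The signal,
# sampling rate, and band edges are illustrative assumptions.
fs = 250                             # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)          # 2 s of samples
signal = np.sin(2 * np.pi * 10 * t)  # alpha-range oscillation

freqs = np.fft.rfftfreq(len(t), 1 / fs)   # frequency bins
power = np.abs(np.fft.rfft(signal)) ** 2  # power spectrum

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}
print(max(band_power, key=band_power.get))  # prints "alpha"
```

A clinical pipeline would add windowing and artifact rejection, but the band-summary logic is the same in spirit.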

The various methods of electrophysiological neuroscience are vulnerable to the same critiques that we faced in assessing neuroimaging; namely, the explanatory gap. However, some authors contend that the binding problem may be best studied under the empirical guise of electrophysiology. In a 1999 article, Engel et al. suggest that network integration is coordinated by the synchronization of neuronal discharges (i.e., measured oscillations) and that this synchronicity is a selection mechanism for functional circuits as well as intermodal processes (Engel et al., 1999). The upshot of this work is a temporally founded binding hypothesis, which may be an important feature for those who consider consciousness an emergent property.
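The synchronization at the heart of this binding hypothesis is often quantified with measures such as the phase-locking value (PLV). The toy sketch below uses synthetic phase traces (not real recordings) to show how a fixed phase relation between two oscillators yields a PLV near 1, while a randomly drifting relation does not:

```python
import numpy as np

def phase_locking_value(phi1, phi2):
    """PLV: magnitude of the mean phase-difference vector.
    1.0 means a perfectly consistent phase relation; near 0 means none."""
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))

n = 1000
t = np.linspace(0.0, 1.0, n)
rng = np.random.default_rng(0)

# Two "populations" oscillating at 10 Hz with a constant phase lag...
phi_a = 2 * np.pi * 10 * t
phi_b = phi_a + 0.5  # fixed offset -> synchronized
# ...versus one whose phase drifts randomly relative to the first.
phi_c = phi_a + np.cumsum(rng.normal(0.0, 0.5, n))

plv_sync = phase_locking_value(phi_a, phi_b)   # ~1.0 (constant offset)
plv_drift = phase_locking_value(phi_a, phi_c)  # much smaller
```

On this reading, “binding” between circuits corresponds to a high PLV: the two traces maintain a consistent phase relation even though neither PLV says anything about what, if anything, it is like to undergo the bound percept.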

An interesting offshoot of EEG technology is an experimental therapy technique called neurofeedback (NFB). It is based upon the operant conditioning principles of reinforcement and punishment, but instead of targeting a desired behavior, it targets a desired neural oscillation (Baars & Gage, 2010, p. 297). In practice, NFB is a type of brain-computer interface (BCI) because of its critical dependence on computer technology to provide real-time feedback to the experimental participant (Neuper & Pfurtscheller, 2010). Feedback informing the participant whether they are activating the target oscillation sometimes takes the form of video game rewards or punishments (such as auditory tones or some other game-specific goal being reached) dictated by the brain’s activity, i.e., hands-free operation (Budzynski et al., 2009). The applications of NFB training are varied, and experimental results regarding its efficacy are disputed. However, NFB has intriguing implications for our discourse on qualia; the success of NFB is inextricably linked with volitional control, a mental skill akin to the philosopher’s intentionality.

Cellular and Molecular

The nervous system is a biological system with a cellular, molecular, and metabolic infrastructure (Carlson, 2012). The biologically oriented approaches to studying the brain are tremendously diverse, and include (but are not limited to): microscopy, histology, gross morphological imaging of neurons, neuronal cell cultures, tracking molecular activity of synaptic circuits, biochemical assays (particularly of the “immuno-” variety), and psychoactive drug studies (Carter & Sheigh, 2010). Many of the aforementioned research techniques are unethical to implement on human participants; therefore, there is the familiar tradition of administering these methods to model organisms, such as worms, insects, rodents, and monkeys, both pre- and post-mortem (or post-sacrifice, as it is typically termed in the materials and methods sections of scientific papers) (Baars & Gage, 2010, p. 512).

A notorious buzzword in cognitive neuroscience is plasticity, and specifically, synaptic plasticity. During typical learning and memory processes, as well as the naturally occurring modifications that follow traumatic brain insult or injury, neuroplastic mechanisms advantageously allow for adaptive reorganization of network, synaptic, and molecular structures (Carlson, 2012). These levels of organization are linked to one another, where changes in one will ultimately lead to changes in another, i.e., changes on the molecular level will effectually cause changes on the network level (Pascual-Leone et al., 2011). This is a tenet of modern neuroscience because it implies a framework for thinking about the brain as a homeostatic organ, predisposed toward the maintenance of dynamic equilibrium (Darby & Walsh, 2005).

A suitable example of neuroplasticity is long-term potentiation (LTP), a neural operation whose examination was inspired by Hebbian theory (Baars & Gage, 2010, p. 547). Hebbian theory (sometimes called cell assembly theory), colloquially summarized as “cells that fire together, wire together” (Doidge, 2007, p. 427), refers to increases in synaptic potency by virtue of increases in synaptic stimulation (Hebb, 2002). The study of LTP is traditionally linked with learning and memory processes occurring in the hippocampus (Lomo, 2003); however, LTP has been detected in the amygdala, the neocortex, the cerebellum, and even on a global neural scale (Malenka, 2004). While the examination of LTP was historically informed by Hebbian theory, modern notions of LTP can be catalogued as Hebbian, non-Hebbian, and even anti-Hebbian (Baars & Gage, 2010, pp. 547-548).
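The Hebbian intuition can be sketched as a toy weight-update rule: a synapse strengthens in proportion to the co-activity of its pre- and post-synaptic cells. This is a caricature of the learning principle, not a biophysical model of LTP:

```python
import numpy as np

def hebbian_step(w, pre, post, lr=0.1):
    """Hebb's rule: delta-w = lr * post * pre
    (strengthen synapses whose inputs coincide with post-synaptic firing)."""
    return w + lr * post * pre

# Two synapses onto one post-synaptic cell.
w = np.zeros(2)
for _ in range(50):
    pre = np.array([1.0, 0.0])  # input 0 fires every trial; input 1 is silent
    post = 1.0                  # the post-synaptic cell fires on every trial
    w = hebbian_step(w, pre, post)

# Only the synapse whose activity coincided with post-synaptic firing grew:
# w is approximately [5.0, 0.0]
```

Note what the rule leaves out: plain Hebbian learning only ever strengthens weights, which is one reason modern accounts also invoke non-Hebbian and anti-Hebbian mechanisms to keep networks stable.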

On the molecular level of the hippocampal circuit, LTP is demonstrated by its dependence on high-frequency stimulation of hippocampal neurons (i.e., a high-pass switch) to trigger the NMDA receptor for the neurotransmitter glutamate to open its channel[12] (Baars & Gage, 2010, p. 545). Following NMDA receptor-dependent LTP, synaptic strength is increased for a long period of time, often up to months (Abraham, 2003). This suggests a selective affinity for environmental information that induces the highest frequency of neural stimulation; the resulting changes on the molecular and synaptic level then generate changes on the network level (Baars & Gage, 2010, p. 546). Relatedly, LTP has been found to be crucial for the normal functions of working memory (WM), particularly by studies that block LTP and demonstrate degraded WM functioning (Lynch, 2004). While WM is decidedly different from quale, there is an intuitive link between WM and intentionality. Given the culmination of neuroplasticity, inter-level modification, and the association with intentionality (by virtue of probing WM) that LTP studies can demonstrate, LTP may be an advantageous process for future experimentation on quale.

Another time-honored method of studying molecular neuroscience involves the use of psychoactive drugs to infer the neurochemical basis of mental illness. Neurotransmitters are molecules endogenous to the brain that drive the activation of specific neural circuits (Carlson, 2012). In this sense, neurotransmitters can be thought of as another way brain regions are selected for their functional specialization, particularly given the notion that pre-synaptic neurotransmitters have “target” post-synaptic membranes (Baars & Gage, 2010). In the clinical assessment of psychopharmacological agents beneficial for patients with depression, scientists serendipitously discovered that drugs that inhibit monoamine oxidase (MAOIs; monoamine oxidase is an enzyme that breaks down monoamine neurotransmitters) effectually treat depression (Carlson, 2012). While more recent psychopharmacological treatments are preferred over MAOIs in clinical practice, MAOIs are historically pivotal for the development of the monoamine hypothesis (Cristancho, 2012). This hypothesis attributes depression, a mental illness, to an imbalance in neurotransmitters. Any scientist would likely agree that a fuller and more dynamic picture of “the cause” is needed to capture depression, which would require an interdisciplinary assessment of genetic, environmental, developmental, and cultural risk factors (Goldberg & Weinberger, 2009). However, it is still contended that chemical imbalance is a strong (if not the strongest) predictor for this mental illness. Given this predilection, it isn’t surprising that the media, and thus the layperson, portray neurotransmitters as the root cause for behavior, cognition, and emotion; akin to a miraculous dust sprinkled all over the brain. This is a misconception; neurotransmitters have a regulatory and selective function for supervening neural networks, wherein functional specialization more accurately takes place.
We can concede that the inspection of neurotransmitters holds significant meaning for neural network activation; however, the explanatory gap between activated neural networks and the quale of the associated mentation still persists.

Interdisciplinary Considerations

We have already addressed methodology paramount to the domain of cellular and molecular neuroscience; however, one may have noticed the absence of a crucial field of study; namely, genetics. Just as the nervous system maintains the parameters of a biological system, it is likewise driven by the central dogma of molecular biology. The central dogma accounts for the genotype-phenotype relationship in its assertion that DNA leads to RNA synthesis, which leads to protein synthesis[13] (Crick, 1958). A phenotype consists of observed attributes, which can include morphology, development, and behavior (Goldberg & Weinberger, 2009). Putting the spotlight on behavioral phenotypes (including mental illness), the field of behavioral genetics is an interdisciplinary synthesis between anthropology, neuroscience, genetics, epigenetics, psychology, and behaviorism (Shonkoff & Phillips, 2000). This field has a dubious past due to its association with eugenics and the theories of Sir Francis Galton (Forrest, 1995), but is experiencing a resurgence owing to its translational utility. Noteworthy methods of studying behavioral phenotypes as well as the genetics of cognitive neuroscience include (but are not limited to): genome wide association studies, molecular cloning, recombinant DNA technology, transgenic organism studies, and twin studies (Carter & Sheigh, 2010).

Utilizing genetic methods to explore quale is arguably the least favorable technique under review. If we consider the ultimate goal of genetics (behavioral or traditional) to be translating biological predispositions (i.e., risk factors) to observable phenotypes, then we would have to maintain quale as a phenotype, or qualia as many phenotypes. The notion that quale is a phenotype may prove to be interesting and useful after other, more fundamental, questions are answered; however, this should be held in suspension until we bridge the explanatory gap.

Another sub-field of neuroscience implied throughout our discourse, but not explicitly addressed as of yet, is computational neuroscience (also called theoretical neuroscience). This is another interdisciplinary field that probes the functions of the brain, but does so from an information processing perspective (Churchland et al., 1993). Heavily grounded in mathematics, electrical engineering, and computer science, computational neuroscience methods often aim to model cognition and behavior in artificial systems (Abbott & Dayan, 2011). An artificial neural network is proposed to implement machine-learning principles (i.e., statistical learning algorithms) in a non-linear, adaptive, and robust way, decidedly inspired by human neural capacity (Siegelmann & Sontag, 1994).
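For a minimal sense of what “statistical learning algorithms” means here, consider a classic single-neuron sketch: a perceptron trained by error correction to compute logical AND. This is the simplest ancestor of the artificial neural networks just described, offered purely as illustration and not as a model of any biological circuit:

```python
import numpy as np

# A single artificial neuron (perceptron): weighted sum of inputs passed
# through a threshold, trained by an error-correction rule to compute AND.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)  # small random initial weights
b = 0.0                            # bias term

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # AND truth table

for _ in range(100):  # AND is linearly separable, so this converges
    for x, target in zip(X, y):
        out = 1.0 if x @ w + b > 0 else 0.0  # threshold activation
        err = target - out
        w += 0.1 * err * x  # perceptron learning rule
        b += 0.1 * err

preds = [1.0 if x @ w + b > 0 else 0.0 for x in X]
# after training, preds reproduces the AND truth table
```

The gulf between this toy and phenomenal character is instructive: even a perfectly trained network gives us input-output structure, with nothing yet said about what experiencing that computation would be like.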

A model proposed by Stephen Grossberg is the Adaptive Resonance Theory (ART) model, wherein a dynamic interaction between bottom-up data, top-down anticipations, and the focusing capacity of attention, creates a resonant state crucial for the capacity to learn and mentally represent information (Grossberg, 1999). Grossberg equates resonant states with conscious brain states, and ART modeling allows for plasticity as well as stability (Carter & Sheigh, 2010). This is reminiscent of Bernard Baars’ Global Workspace Theory (GWT), which proposes conscious contents to be diffusely available, yet selected by the “spotlight” of attention (Baars, 1988). While both GWT and ART are progressive descriptions of cognitive architecture, and perhaps even beneficial parameters for the inclined neurophenomenologist, they do not serve our purposes in bridging the explanatory gap. That is to say, if we were to successfully model a neural network in an artificial system based on all of ART’s constraints, there would still be a gap in explaining the phenomenal character of experience. Even if the artificial system (say, a robot) could report the quale of its experience, ART and GWT do not provide us with an index of veracity, or even a means of differentiating it from human qualia.


Neurophenomenology encourages us to confront the hard problem of qualia and the explanatory gap between physiology and phenomenal character in our research and clinical practice. In the words of Francisco Varela, “I hope I have seduced the reader into considering that we have in front of us the possibility of an open-ended quest for resonant passages between human experience and cognitive science” (Varela, 1996, page 346). In the spirit of Varela, our discourse is intended to energize critically minded deliberation regarding the hard problem(s). The field of neuroscience is remarkably varied in its sub-disciplines as well as its technology and methods, with significant contributions from neuroimaging, electrophysiology, cellular/molecular, genetics, and computational modeling. Each of these frameworks lightly touches on some aspect of quale (as it has been proposed by philosophy); however, there is currently no technology or method that directly examines phenomenal character.

Neuroimaging aligns well with concerns regarding functional regions of the brain, and may serve any future study on quale best if combined with electrophysiological methods, i.e., EEG-fMRI. Electrophysiological techniques alone have informative value for the binding problem as well as intentionality concerns related to neurofeedback. Intentionality is likewise linked to working memory, which has received interesting evaluation in the synaptic and molecular analysis of long-term potentiation. LTP studies additionally allow us to analyze neuroplastic and modulatory mechanisms in the brain, which may prove to be critical features of quale. Genotype-phenotype analyses will be fruitful only after the explanatory gap is bridged. Lastly, computational modeling yields progressive results for our understanding of cognitive architecture, and as such, those so inclined should carefully differentiate between consciousness theory and architectural theory.

The benefits of the neurophenomenological movement are multifold; it encourages divergent and innovative thinking; it synthesizes concerns from philosophy, cognitive science, and medicine; and it attempts to bridge the onerous explanatory gap. Ultimately, these benefits should be weighed against ethical concerns over probing qualia, insofar as any potential measure of qualia may be applied in an irresponsible manner. This, of course, doubles the burden on the neurophenomenologist: not only to bear the weight of the hard problem in their research, but to responsibly contribute to and guide the discussion over neuroethics.


[1] The term nature is used at this juncture to generally present the topic; specific terminological preferences will be addressed shortly.

[2] Accessibility here refers to unconscious versus conscious processes, or automatic versus willful. Automatic processes will not explicitly be addressed in this essay; however, they are relevant, particularly in the probing of qualia of sense-data.

[3] Awareness takes the place of consciousness here to have a broader connotation.

[4] To be abnormal refers to any phenomena observed in abnormal psychology.

[5] There’s a vast literature with more interesting commentary and thought experiments regarding the nuanced aspects of the consciousness debate; this is precluded for brevity.

[6] All things here includes quale.

[7] In attempts to choose the methods and tech for review, I first tried to analyze which techniques received the most “hits” in popular neuroscience databases. This proved fruitless, so I then based my choices on my personal experience as a student of neuroscience, along with the common research technique categories delineated in popular neuroscience textbooks.

[8] Soft is intended to mean subject to interpretation and of no particular hierarchy.

[9] These “soft criteria” are inspired by neurophenomenology and common criticisms in the literature.

[10] Following this line of thought, one could consider certain types of molecular analysis to be another form of neuroimaging, however, this will be specifically addressed later on, in the sub-section entitled “Cellular and Molecular”.

[11] Imaging methods not included here are PET scans, CAT scans, and SPECT.

[12] The biochemical cascade involved in NMDA receptor-dependent LTP is lengthy and precluded for brevity.

[13] It should be noted that RNA can, via reverse transcription, lead back to DNA modifications.

Please click the link below to access my reference list.


Gray Matters Revival…


Hello agents of the blogosphere. I started this blog about a year ago, with true intentions and perhaps an over-zealous spirit. As my concomitant studies and research in grad school became more demanding, I unfortunately had to put my blogging intentions on the back-burner. With the new year approaching, I’ve decided to revive this space and devote more time to posting and engaging in emergent discourse. To reiterate my purpose, I’m interested in human nature from a varied and interconnected array of perspectives. This, of course, includes modern neuroscience theory and neuropsychology, with additional attention paid to philosophy, translational medicine, bioethics, behavioral genetics, engineering, and computational theory. I’m not strictly concerned with academia (although it’s clear that there are scholarly overtones here); I’d also like to celebrate and remark on contributions from visual art, creative writing, music, comedy, consumer technology, and otherwise mixed-media creations. With that, your ideas and opinions are welcomed in the comments associated with my postings; or, if you’d like me to post something you’ve written, or link to/review something you’ve created, please feel free to message me.

Here’s to 2015, may your restored solar revolutions be inspired, fun, and brimming with possibility.

Holiday Gift Suggestions for the Neurophile

The holiday season is upon us, and this often means racking your brain to find that perfect gift for the people in your life. If you’re like me, you like to get people gifts that say “I get you”, which is usually in the form of something unusual or obscure; a gift that appeals to that distinctive aspect of their personality. As an interlude between more academic posts, I thought it would be fun to share some gift ideas I have for the neuroscience lover in your life (or maybe YOU’RE the neurophile and you should just leave this site open on a friend’s computer, hint hint hint). The majority of these suggestions will be books…I know, huge surprise there! Some of the books are directly related to neuroscience, psychology, or human nature, and others are tangentially related to what I like to call “fringe science” – which, let’s face it, if you’re a neurophile, you undoubtedly have a mad scientist streak lingering beneath the surface. Please feel free to comment with more suggestions!

The Synaptic Self by Joseph LeDoux (big ups to NYU faculty)

– Literally anything written by Oliver Sacks, my favorite is Musicophilia: Tales of Music and the Brain

– Literally anything written by V.S. Ramachandran, my favorite is A Brief Tour of Human Consciousness: From Impostor Poodles to Purple Numbers

Connectome: How the Brain’s Wiring Makes Us Who We Are by Sebastian Seung

Descartes’ Error: Emotion, Reason, and the Human Brain by Antonio R. Damasio

Cosmos and Psyche: Intimations of a New World View by Richard Tarnas

Neuroethics by Martha J. Farah

Vision and Art: The Biology of Seeing by Margaret Livingstone and David Hubel

The Psychology of Art and the Evolution of the Conscious Brain by Robert L. Solso

The Structure of Scientific Revolutions by Thomas S. Kuhn

Natural Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence by Andy Clark

Science and the Akashic Field: An Integral Theory of Everything by Ervin Laszlo


NEUROGIFTS: Here are some (admittedly random) gift ideas for the neuroscientist, psychologist, student, or just general neurophile!

– “After Therapy Mints” I saw these today at my local coffee shop and thought they 1) were hilarious and 2) would make an awesome stocking stuffer!

Link: After Therapy Mints

– CafePress has a whole page with neuro gifts! T-shirts, stickers, mugs, etc! Don’t be afraid to brag about your love of the human brain, do you, you freakin’ genius.

Link: CafePress Neuroscience Gifts

– Know someone who’s graduating? This is a cute gift. The link is to the neuroscience news website gift store, which has a lot of cool items.

Link: NeuroscienceNews gifts and toys

– Seriously, we ALL want one of these on our desks/in our labs, but they are CRAZY expensive. Feel free to buy me one.

Link: 3-D Brain Model

– And finally, there is always neuro-inspired jewelry. In a recent search I found that Etsy vendors sell a wide variety of necklaces with brain pendants and many neurotransmitter adornments. Such as:

Link: Brain Necklace Pendant by Etsy Vendor mrd74

Link: Serotonin Molecule Necklace by Etsy Vendor arohasilhouettes

Please feel free to comment if you have any other gift ideas! Happy holidays everyone!! As a parting gift to you, I leave you with this video description of synaptic plasticity.

Inaugural Post

This is my first post here on WordPress! I intend to use this space for discourse related to neuroscience. I’m currently a graduate student focusing on the study of cognitive and behavioral neuroscience, a field that is rapidly gaining momentum in the academic and public eye. I would like to cover a wide array of topics, including: scholarly article reviews, educational analysis of current theories/findings, interesting experiments, technological advances, philosophical debate, and even creative sentiments, such as poetry, visual art, and music related to human nature. I welcome comments, critiques, and alternative ideologies, as I believe this type of critical discourse is the most valuable asset we have toward propagating our understanding of the world around (and inside of) us.

My main area of interest is theory relating to the basis of human consciousness. We can trace explorations of the notion of consciousness back as far as Aristotle, yet it remains one of the most poorly understood phenomena in our world. There are striking disparities in consciousness paradigms; from the bottom-up (or top-down!), we still face questions like, is there a material basis for consciousness? The purpose of this blog is certainly not to provide definitive answers to such questions, but to share modern theories of consciousness, unpack some of the nuanced implications of such theories, and hopefully offer insights and analyses that will spark some introspection and debate. Stay tuned for my first post: “Paradigm Shifts: A Delineation of Modern Theories of Human Consciousness”.

“Every man can, if he so desires, become the sculptor of his own brain”
– Santiago Ramón y Cajal