Saturday, July 19, 2014

PLOS Blogger Calls Out PLOS ONE Journal

This is an interesting story - a PLOS blogger offers a powerful critique of a high-profile PLOS ONE article that is little more than junk science. John Grohol of Psych Central offers a nice overview of the situation, including some good observations about peer-review standards and the editorial process.

It's good to see that Coyne was not too afraid of criticizing his publisher to speak up about its mistakes.

PLOS Blogger Calls Out PLOS ONE Journal — Huh?

By John M. Grohol, Psy.D.
July 15, 2014



We’re living in strange times.

You don’t have to look any further than this lengthy critique of a recent journal article.

The critique appears on a Public Library of Science (PLOS) blog called Mind the Brain penned by James Coyne, PhD. Coyne is a well-published and diverse researcher himself, so he knows bad research when he sees — or smells — it.

The journal article being critiqued?



Something that was published by PLOS itself in its premier open-access journal, PLOS ONE.

I guess the disconnect for me is that there are literally thousands of words of critique expended here on this journal article. And not just by Coyne, but also by the anonymous Neurocritic. Coyne expends over 3,000 words in this blog entry alone, plus another 1,500 words on his own blog critiquing the study's press release. Coyne is both brilliant and thorough in his critique; his review is worth the read.

The article under critique runs only some 5,500 words. So nearly as much writing has been put into bashing this poor-quality study as it took to actually write it. This isn’t the first time Coyne has written something that may be a little uncomfortable for his publishers.1

Don’t get me wrong — it is, in my opinion, a very poor study. It deserved never to have been published (something I suspect that at least 20 percent of modern psychological research deserves).

But it’s both odd and awesome to see a blogger calling out the problems of a study published by the blogger’s own publisher. Odd in the sense of, “Wait a minute… Isn’t there a better way to do this? Shouldn’t we be leveraging these brilliant bloggers earlier on in the process, instead of conducting extensive post-hoc analyses?”

In my opinion, it suggests a serious disconnect — and lack of serious review by PLOS ONE. After all, PLOS ONE does have editorial standards.

Wouldn’t it make more sense to employ such bloggers as reviewers in your system, to ensure junk science such as this study never sees the light of day?

We hope the publishers at PLOS are listening. Because the best way to stop bad science in its tracks is to refuse to publish it in the first place. As PLOS ONE becomes a repository of anything with “data” in it, it loses its impact and importance. And indeed, PLOS is apparently losing its impact factor, and new journal submissions appear to be down as well.

There’s got to be a better way. Perhaps PLOS ONE is simply not immune to the crap research being regularly published by some of the world’s most prestigious journals, ranging from Pediatrics to Science to the Lancet. Or perhaps this sort of episode demonstrates the fatal flaws in the current review system, where reviewers seemingly have little incentive to reject bad research.

Read the full critique: Neurobalm: the pseudo-neuroscience of couples therapy

Footnotes:
  1. He discontinued his blog — or his blog was discontinued for him — at Psychology Today over a dispute about the title of a blog entry he wrote.

Clinicians and Neuroscientists Must Work Together to Understand and Improve Psychological Treatments


In this interesting commentary article from the journal Nature, the authors argue that we need to create a mental health science that draws from both the neuroscientific literature as well as the clinical research and practice side. These two camps rarely communicate with each other and there has been very little interdisciplinary work so far. Let's hope that changes.


Full Citation:
Holmes, E.A., Craske, M.G. & Graybiel, A.M. (2014, Jul 17). Psychological treatments: A call for mental-health science. Nature; 511:287–289. doi:10.1038/511287a

Psychological treatments: A call for mental-health science

Emily A. Holmes, Michelle G. Craske & Ann M. Graybiel
16 July 2014

Clinicians and neuroscientists must work together to understand and improve psychological treatments, urge Emily A. Holmes, Michelle G. Craske and Ann M. Graybiel.


Illustration by David Parkins


How does one human talking to another, as occurs in psychological therapy, bring about changes in brain activity and cure or ease mental disorders? We don't really know. We need to.

Mental-health conditions, such as post-traumatic stress disorder (PTSD), obsessive–compulsive disorder (OCD), eating disorders, schizophrenia and depression, affect one in four people worldwide. Depression is the third leading contributor to the global burden of disease, according to the World Health Organization. Psychological treatments have been subjected to hundreds of randomized clinical trials and hold the strongest evidence base for addressing many such conditions. These activities, techniques or strategies target behavioural, cognitive, social, emotional or environmental factors to improve mental or physical health or related functioning. Despite the time and effort involved, they are the treatment of choice for most people (see ‘Treating trauma with talk therapy’).

For example, eating disorders were previously considered intractable within our lifetime. They can now be addressed with a specific form of cognitive behavioural therapy (CBT) [1] that targets attitudes to body shape and disturbances in eating habits. For depression, CBT can be as effective as antidepressant medication and provide benefits that are longer lasting [2]. There is also evidence that interpersonal psychotherapy (IPT) is effective for treating depression.
Treating trauma with talk therapy

Ian was filling his car with petrol and was caught in the cross-fire of an armed robbery. His daughter was severely injured. For the following decade Ian suffered nightmares, intrusive memories, flashbacks of the trauma and was reluctant to drive — symptoms of post-traumatic stress disorder (PTSD).

Ian had twelve 90-minute sessions of trauma-focused cognitive behavioural therapy, the treatment with the strongest evidence-base for PTSD, which brings about improvement in about 75% of cases. As part of his therapy, Ian was asked to replay the traumatic memory vividly in his mind's eye. Ian also learned that by avoiding reminders of the trauma his memories remained easily triggered, creating a vicious cycle. Treatment focused on breaking this cycle by bringing back to his mind perceptual, emotional and cognitive details of the trauma memory.

After three months of treatment, Ian could remember the event without being overwhelmed with fear and guilt. The memory no longer flashed back involuntarily and his nightmares stopped. He began to drive again. 
 

A house divided


But evidence-based psychological treatments need improvement. Although the majority of patients benefit, only about half experience a clinically meaningful reduction in symptoms or full remission, at least for the most common conditions. For example, although response rates vary across studies, about 60% of individuals show significant improvement after CBT for OCD, but nearly 30% of those who begin therapy do not complete it [3]. And on average, more than 10% of those who have improved later relapse [4]. For some conditions, such as bipolar disorder, psychological treatments are not effective or are in their infancy.

Moreover, despite progress, we do not yet fully understand how psychological therapies work — or when they don't. Neuroscience is shedding light on how to modulate emotion and memory, habit and fear learning. But psychological understanding and treatments have, as yet, profited much too little from such developments.

It is time to use science to advance the psychological, not just the pharmaceutical, treatment of those with mental-health problems. Great strides can and must be made by focusing on concerns that are common to fields from psychology, psychiatry and pharmacology to genetics and molecular biology, neurology, neuroscience, cognitive and social sciences, computer science, and mathematics. Molecular and theoretical scientists need to engage with the challenges that face the clinical scientists who develop and deliver psychological treatments, and who evaluate their outcomes. And clinicians need to get involved in experimental science. Patients, mental-health-care providers and researchers of all stripes stand to benefit.

Interdisciplinary communication is a problem. Neuroscientists and clinical scientists meet infrequently, rarely work together, read different journals, and know relatively little of each other's needs and discoveries. This culture gap in the field of mental health has widened as brain science has exploded. Researchers in different disciplines no longer work in the same building, let alone the same department, eroding communication. Separate career paths in neuroscience, clinical psychology and psychiatry put the fields in competition for scarce funding.

Part of the problem is that for many people, psychological treatments still conjure up notions of couches and quasi-mystical experiences. That evidence-based psychological treatments target processes of learning, emotion regulation and habit formation is not clear to some neuroscientists and cell biologists. In our experience, many even challenge the idea of clinical psychology as a science and many are unaware of its evidence base. Equally, laboratory science can seem abstract and remote to clinicians working with patients with extreme emotional distress and behavioural dysfunction.

Changing attitudes

Research on psychological treatments is, in the words of this journal, “scandalously under-supported” (see Nature 489, 473–474; 2012). Mental-health disorders account for more than 15% of the disease burden in developed countries, more than all forms of cancer. Yet it has been estimated that the proportion of research funds spent on mental health is as low as 7% in North America and 2% in the European Union.

Within those slender mental-health budgets, psychological treatments receive a small slice — in the United Kingdom less than 15% of the government and charity funding for mental-health research, and in the United States the share of National Institute of Mental Health funding is estimated to be similar. Further research on psychological treatments has no funding stream analogous to investment in the pharmaceutical industry.

This Cinderella status contributes to the fact that evidence-based psychological treatments, such as CBT, IPT, behaviour therapy and family therapy, have not yet fully benefitted from the range of dramatic advances in the neuroscience related to emotion, behaviour and cognition. Meanwhile, much of neuroscience is unaware of the potential of psychological treatments. Fixing this will require at least three steps.


Three steps


Uncover the mechanisms of existing psychological treatments. There is a very effective behavioural technique, for example, for phobias and anxiety disorders called exposure therapy. This protocol originated in the 1960s from the science of fear-extinction learning and involves designed experiences with feared stimuli. So an individual who fears that doorknobs are contaminated might be guided to handle doorknobs without performing their compulsive cleansing rituals. They learn that the feared stimulus (the doorknob) is not as harmful as anticipated; their fears are extinguished by the repeated presence of the conditional stimulus (the doorknobs) without safety behaviours (washing the doorknobs, for example) and without the unconditional stimulus (fatal illness, for example) that was previously signalled by touching the doorknob.

But in OCD, for instance, nearly half of the people who undergo exposure therapy do not benefit, and a significant minority relapse. One reason could be that extinction learning is fragile — vulnerable to factors such as failure to consolidate or generalize to new contexts. Increasingly, fear extinction is viewed [5] as involving inhibitory pathways from a part of the brain called the ventromedial prefrontal cortex to the amygdala, regions of the brain involved in decision-making, suggesting molecular targets for extinction learning. For example, a team led by one of us (M.G.C.), a biobehavioural clinical scientist at the University of California, Los Angeles, is investigating the drug scopolamine (usually used for motion sickness and Parkinson's disease) to augment the generalization of extinction learning in exposure therapy across contexts. Others are trialling D-cycloserine (originally used as an antibiotic to treat tuberculosis) to enhance the response to exposure therapy [6].

Another example illustrates the power of interdisciplinary research to explore cognitive mechanisms. CBT asserts that many clinical symptoms are produced and maintained by dysfunctional biases in how emotional information is selectively attended to, interpreted and then represented in memory. People who become so fearful and anxious about speaking to other people that they avoid eye contact and are unable to attend their children's school play or a job interview might notice only those people who seem to be looking at them strangely (negative attention bias), fuelling their anxiety about contact with others. A CBT therapist might ask a patient to practice attending to positive and benign faces, rather than negative ones.

In the past 15 years, researchers have discovered that computerized training can also modify cognitive biases [7]. For example, asking a patient (or a control participant) to repeatedly select the one smiling face from a crowd of frowning faces can induce a more positive attention bias. This approach enables researchers to do several things: test the degree to which a given cognitive bias produces clinical symptoms; focus on how treatments change biases; and explore ways to boost therapeutic effects.

One of us (E.A.H.) has shown with colleagues that computerized cognitive bias modification alters activity in the lateral prefrontal cortex [8], part of the brain system that controls attention. Stimulating neural activity in this region electrically augments the computer training. Such game-type tools offer the possibility of scalable, 'therapist-free' therapy.

Optimize psychological treatments and generate new ones. Neuroscience is providing unprecedented information about processes that can result in, or relieve, dysfunctional behaviour. Such work is probing the flexibility of memory storage, the degree to which emotions and memories can be dissociated, and the selective neural pathways that seem to be crucial for highly specialized aspects of the emotional landscape and can be switched on and off experimentally. These advances can be translated to the clinical sphere.

For example, neuroscientists (including A.M.G.) have now used optogenetics to block [9] and produce [10] compulsive behaviour such as excessive grooming by targeting different parts of the orbitofrontal cortex. The work was inspired by clinical observations that OCD symptoms, in part, reflect an over-reaction to conditioned stimuli in the environment (the doorknobs in the earlier example). These experiments suggest that a compulsion, such as excessive grooming, can be made or broken in seconds through targeted manipulation of brain activity. Such experiments, and related optogenetic work turning 'normal' habits on and off by manipulating individual cells with light, raise the tantalizing possibility of optimizing behavioural techniques to activate the brain circuitry in question.

Forge links between clinical and laboratory researchers. We propose an umbrella discipline of mental-health science that joins behavioural and neuroscience approaches to problems including improving psychological treatments. Many efforts are already being made, but we need to galvanize the next generation of clinical scientists and neuroscientists to interact by creating career opportunities that enable them to experience advanced methods in both.

New funding from charities, the US National Institutes of Health and the European framework Horizon 2020 should strive to maximize links between fields. A positive step was the announcement in February by the US National Institute of Mental Health that it will fund only the psychotherapy trials that seek to identify mechanisms.

Neuroscientists and clinical scientists could benefit enormously from national and international meetings. The psychological treatments conference convened by the mental-health charity MQ in London in December 2013 showed us that bringing these groups together can catalyse new ideas and opportunities for collaboration. (The editor-in-chief of this journal, Philip Campbell, is on the board of MQ.) Journals should welcome interdisciplinary efforts — their publication will make it easier for hiring committees, funders and philanthropists to appreciate the importance of such work.
What next

By the end of 2015, representatives of the leading clinical and neuroscience bodies should meet to hammer out the ten most pressing research questions for psychological treatments. This list should be disseminated to granting agencies, scientists, clinicians and the public internationally.

Mental-health charities can help by urging national funding bodies to reconsider the proportion of their investments in mental health relative to other diseases. The amount spent on research into psychological treatments needs to be commensurate with their impact. There is enormous promise here. Psychological treatments are a lifeline to so many — and could be to so many more.

References

1. Fairburn, C. G. et al. Am. J. Psychiatry 166, 311–319 (2009).
2. Hollon, S. D. et al. Arch. Gen. Psychiatry 62, 417–422 (2005).
3. Foa, E. B. et al. Am. J. Psychiatry 162, 151–161 (2005).
4. Simpson, H. B. et al. Depress. Anxiety 19, 225–233 (2004).
5. Vervliet, B., Craske, M. G. & Hermans, D. Annu. Rev. Clin. Psychol. 9, 215–248 (2013).
6. Otto, M. W. et al. Biol. Psychiatry 67, 365–370 (2010).
7. MacLeod, C. & Mathews, A. Annu. Rev. Clin. Psychol. 8, 189–217 (2012).
8. Browning, M., Holmes, E. A., Murphy, S. E., Goodwin, G. M. & Harmer, C. J. Biol. Psychiatry 67, 919–925 (2010).
9. Burguière, E., Monteiro, P., Feng, G. & Graybiel, A. M. Science 340, 1243–1246 (2013).
10. Ahmari, S. E. et al. Science 340, 1234–1239 (2013).

Affiliations

Emily A. Holmes is at the Medical Research Council Cognition & Brain Sciences Unit, Cambridge, UK, and in the Department for Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden.

Michelle G. Craske is in the Department of Psychology and Department of Psychiatry and Biobehavioral Sciences, University of California, Los Angeles, USA.

Ann M. Graybiel is in the Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA.

What Is Dreaming and What Does It Tell Us about Memory?


From Scientific American, this is an excerpt from a new book by Penelope Lewis, The Secret World of Sleep: The Surprising Science of the Mind at Rest. I'm not convinced based on this short section that there is anything new in the book, but it seems worth sharing nonetheless.

What Is Dreaming and What Does It Tell Us about Memory?

Dreams may play a role in memory incorporation and influence our long-term moods, physiology and creativity

Jul 18, 2014 | By Penelope A. Lewis

Excerpted with permission from The Secret World of Sleep: The Surprising Science of the Mind at Rest, by Penelope A. Lewis. Available from Palgrave Macmillan Trade. Copyright © 2013. (Scientific American and Palgrave Macmillan are part of the Holtzbrinck Publishing Group.)

You are terrified and running along a dark, narrow corridor. Something very evil and scary is chasing you, but you’re not sure why. Your fear is compounded by the fact that your feet won’t do what you want—it feels like they are moving through molasses. The pursuer is gaining, but when it finally catches you, the whole scene vanishes...and you wake up.

Almost by definition, a dream is something you are aware of at some level. It may be fragmentary, disconnected, and illogical, but if you aren’t aware of it during sleep then it isn’t a dream. Many people will protest, “I never remember my dreams!,” but that is a different matter entirely. Failing to remember a dream later on when you’re awake doesn’t mean you weren’t aware of it when it occurred. It just means the experience was never really carved into your memory, has decayed in storage, or isn’t accessible for easy recall.

We all intuitively know what a dream is, but you’ll be surprised to learn there’s no universally accepted definition of dreaming. One fairly safe catch-all is “all perceptions, thoughts, or emotions experienced during sleep.” Because this is very broad, there are also several different ways of rating, ranking, and scoring dreams. For example, one uses an eight-point rating system from 0 (no dream) to 7 (“an extremely long sequence of 5 or more stages”).

Physical Bases of Dreams

But let me backtrack. One aim of neuroscience is to map the brain loci of thoughts and mental experiences. Everything we see, imagine, or think about is linked to neural responses somewhere in the brain. Dreams also have a home. Neural activity in the primary sensory areas of the neocortex produces the impression of sensory perception. This means that neurons firing in the primary visual cortex create the illusion of seeing things, neurons firing in the primary auditory area create the illusion of hearing things, and so forth. If that firing occurs at random, these perceptions can feel like crazy, randomly fragmented hallucinations. It is easy to imagine that the random imagery and sensations created in this way could be woven together to create a complex, multisensory hallucination which we might call a dream.

Do Dreams Serve a Purpose?

In contrast to an activation-synthesis model, which views dreams as epiphenomena—a simple by-product of neural processes in sleep—other scientists have suggested that dreams serve an important function. As usual in psychology, there are lots of different ideas about what this function could be. Sigmund Freud’s suggestion that dreams express forbidden desires is of course the most famous of these, but there are lots of other theories about what dreams might do, many with more empirical support than the Freudian view. For example, the threat simulation hypothesis suggests that dreams may provide a sort of virtual reality simulation in which we can rehearse threatening situations, even if we don’t remember the dreams. Presumably, this rehearsal would lead to better real-life responses, so the rehearsal is adaptive. Evidence supporting this comes from the large proportion of dreams which include a threatening situation (more than 70 percent in some studies) and the fact that this percentage is much higher than the incidence of threats in the dreamer’s actual daytime life. Furthermore, studies of children in two different areas of Palestine show that those who live in a more threatening environment also have a much higher incidence of threat in their dreams. Reactions to these threats are almost always relevant and sensible, so the rehearsal (if that’s what it is) clearly involves plausible solutions, again suggesting that they provide a kind of valid simulation of potential real-life scenarios.

Another suggestion is that dreams influence the way you feel the next day, either in terms of mood or more basic bodily states. Forcing people to remember the nastier dreams from their REM sleep definitely puts them in a foul mood, and nightmares (defined as very negative dreams which can wake you up) may even lead to ongoing mood problems. On the other hand, there is also evidence that dreams could help to regulate long-term mood. For instance, a study of dreams in divorced women showed that those who dreamed about their ex-husbands more often were better adapted to the divorce. Amazingly enough, dreams also seem able to influence physiological state: One study showed that people who were deprived of water before they slept, but then drank in their dreams, felt less thirsty when they woke up.

The content of dreams can be influenced in lots of different ways. For instance, recent work has shown that sleepers tend to initiate pleasant dreams if nice smells are wafted at them in REM sleep, and they have negative or unhappy dreams if stinky, unpleasant smells are sent their way. Some people can achieve lucid dreaming, in which they control the sequence of events in their dream, and evidence suggests that these techniques can be learned by intensive practice and training. All of this is highly tantalizing, of course, because (though it tells us nothing at all about the original evolved purpose of dreams) it suggests we might not only be able to set ourselves up for pleasant experiences while we sleep, but we might also eventually be able to use these techniques to treat mood disorders, phobias, and other psychological problems. We already know that hypnotic suggestion can cause people to incorporate snakes, spiders, or other things about which they have phobias into their dreams, and—when combined with more benign forms of these menacing objects—such incorporation helps to remove the phobia. Hypnotic suggestion can also make dreams more pleasant, and mental imagery practiced during the day can be used to modify (and often nullify) persistent nightmares.

There is little evidence that people actually learn during their dreams. The fact that they can learn during sleep is a different matter, but dreams themselves don’t appear to be a good forum for imprinting new information into the hippocampus (after all, we don’t even remember our dreams most of the time). Studies of language learning illustrate this well. Although learning efficiency is predicted by an increase in the percentage of the night that is spent in REM, the dreams which are experienced during this extra REM don’t have much to do with language. If they relate to it at all they are most often about the frustration of not being able to understand something and not about the mechanics of how to construct or decode a sentence.

Memories in Dreams

What’s the most recent dream you can remember? Was anyone you know in it? Did it happen in a place you know well? Were you doing something familiar? Most dreams incorporate fragments of experiences from our waking lives. It’s common to dream about disconnected snippets like a particular person, place, or activity. But do dreams ever replay complete memories—for instance, the last time you saw your mother, including the place, activities, and people? Memories like this are called episodic because they represent whole episodes instead of just fragments; studies of dreaming show that these types of memories are sometimes replayed in sleep, but it is quite rare (around 2 percent of dreams contain such memories, according to one study). Most of our dreams just recombine fragments of waking life. These fragments are relatively familiar and reflect the interests and concerns of the dreamer. This means cyclists dream about cycling, teachers dream about teaching, and bankers dream about money.

Some researchers have capitalized upon dream reports to gain insight into the process by which memories are immediately incorporated (i.e., in the first night after they were initially experienced). Freud famously referred to these as “day-residues.” One study showed that day residues appear in 65 to 70 percent of single dream reports. On the other hand, a more recently described phenomenon called the dream-lag effect refers to the extraordinary observation that, after its initial appearance as a day residue, the likelihood that a specific memory will be incorporated into dreams decreases steadily across the next few nights after the memory was formed, then increases again across the following few nights (Fig. 20).

Thus, it is very common for memories to be incorporated into dreams on the first night after they were initially experienced (if I have a car crash today, I’m likely to dream about it tonight). The likelihood of such incorporation decreases gradually across the next few nights, with few memories incorporated into dreams three to five days after they occurred. Extraordinarily, however, the probability that a memory will be incorporated into a dream increases again on nights six and seven after it was initially experienced. What is going on here? Why are memories less likely to be incorporated into dreams three to five days after they originally occurred than six to seven days afterward? One possibility relates to the need for consolidation. Memories may be inaccessible at this stage because they are being processed in some way which takes them temporarily “offline.” Notably, this effect is only true for people who report vivid dreams, and it also appears to only be true of REM dreams. As with most research, the dream-lag effect raises more questions than it answers.

Why Do We Have Different Kinds of Dreams at Different Stages of the Night?

Dreams aren’t all the same. Everyone is aware of the difference between good and bad dreams, but we don’t tend to notice that some dreams are more logical and structured while others are more bizarre. Some dreams are so highly realistic that it is difficult to convince ourselves they aren’t real, while others are fuzzy and indistinct. Some dreams are fragmented, jumping rapidly from one topic to another, while others move forward in a more coherent story. Recent analyses have suggested that these differences are far from random; instead they may be driven by the physiology of various brain states and the extent to which structures like the hippocampus and neocortex are in communication during different sleep stages.

Dreams occur in all stages of sleep, but they seem to become increasingly fragmented as the night progresses. In general, they appear to be constructed out of a mishmash of prior experience. As mentioned above, dreams contain disconnected memory fragments: places we’ve been, faces we’ve seen, situations that are partly familiar. These fragments can either be pasted together in a semi-random mess or organized in a structured and realistic way. The dreams that occur in non-REM sleep tend to be shorter but more cohesive than REM dreams, and often they relate to things that just happened the day before. REM dreams that occur early in the night often also reflect recent waking experiences, but they are more fragmented than their non-REM counterparts. Conversely, REM dreams that occur late in the night are typically much more bizarre and disjointed.

Simply thinking about where these memory fragments are coming from and how they are connected together may provide an explanation for the difference between early and late-night dreams. The various elements of an episode are thought to be stored in the neocortex, but they are not necessarily linked together to form a complete representation. For example, if your memory of having dinner last night involves memories about a specific place, specific sounds, specific actions, and maybe even memories about other people who were there, each of these bits of information is represented by a different area of the neocortex. Even though they combine together to make up a complete memory, these various neocortical areas may not be directly interlinked. Instead, the hippocampus keeps track of such connections and forms the appropriate linkages, at least while the memory is relatively fresh. However, communication between the neocortex and hippocampus is disrupted during sleep, so this process is also disrupted. During REM sleep, both the hippocampus and those parts of the neocortex which are involved in a current dream are strongly active—but they don’t appear to be in communication. Instead, responses in the neocortex occur independently, without hippocampal input, so they must relate to memory fragments rather than linked multisensory representations. Essentially, when memories which have been stored in the neocortex are accessed or activated during REM, they remain fragmentary instead of drawing in other aspects of the same memory to form a complete episodic replay. These fragments aren’t linked together in the way they might be if you thought of the same place while you were awake (or indeed in non-REM sleep). 

For instance, cortical representations of both someone who was present for your dinner last night and of the place where it was held may be triggered, but these will not necessarily be linked together, and they may not be linked to the idea of dinner or eating at all. Instead, seemingly unrelated characters and events may be activated in conjunction with the memory of this place. One possible driver for this is the stress hormone cortisol, which increases steadily across the night. High cortisol concentrations can block communication between the hippocampus and neocortex, and since concentrations are much higher early in the morning, this could provide a physiological reason for the disjointed properties of late-night (early morning) dreams.

Irrespective of how it happens, it is clear that dreams not only replay memory fragments but also create brand-new, highly creative mixtures of memories and knowledge. This process has led to the creation of many works of literature, art, and science, such as Mary Shelley’s Frankenstein, the molecular formula of benzene, and the invention of the light bulb. An especially good demonstration of this somnolent creativity comes from a study of 35 professional musicians who not only heard more music in their dreams than your normal man-on-the-street but also reported that much of this (28 percent) was music they had never heard in waking life. They had created new music in their dreams!

Although we don’t quite understand how dreams achieve this type of innovative recombination of material, it seems clear that the sleeping brain is somehow freed of constraints and can thus create whole sequences of free associations. This is not only useful for creativity; it is also thought to facilitate insight and problem solving. It may even be critical for the integration of newly acquired memories with more remote ones (see chapter 8). In fact, this facilitated lateral thinking could, in itself, be the true purpose of dreams. It is certainly valuable enough to have evolved through natural selection.

Friday, July 18, 2014

Neuroplasticity: New Clues to Just How Much the Adult Brain Can Change


In this recent article from Scientific American, Gary Stix takes a somewhat critical stance on the proliferation of books and video games purportedly explaining or increasing neuroplasticity. On the other hand, he summarizes some basic research that is moving our knowledge of neuroplasticity in the right direction.

The image above comes from Searching for the Mind with John Lieff, M.D.

Neuroplasticity: New Clues to Just How Much the Adult Brain Can Change

By Gary Stix | July 14, 2014

The views expressed are those of the author and are not necessarily those of Scientific American.
 


A boy with amblyopia exercises his weaker eye

Popular neuroscience books have made much in recent years of the possibility that the adult brain is capable of restoring lost function or even enhancing cognition through sustained mental or physical activities. One piece of evidence often cited is a 14-year-old study that shows that London taxi drivers have enlarged hippocampi, brain areas that store a mental map of one’s surroundings. Taxi drivers, it is assumed, have better spatial memory because they must constantly distinguish the streets and landmarks of Shepherd’s Bush from those of Brixton.

A mini-industry now peddles books with titles like The Brain that Changes Itself or Rewire Your Brain: Think Your Way to a Better Life. Along with self-help guides, the value of games intended to enhance what is known as neuroplasticity is still a topic of heated debate because no one knows for sure whether they improve intelligence, memory, reaction times or any other facet of cognition.

Beyond the controversy, however, scientists have taken a number of steps in recent years to start to answer the basic biological questions that may ultimately lead to a deeper understanding of neuroplasticity. This type of research does not look at whether psychological tests used to assess cognitive deficits can be refashioned with cartoonlike graphics and marketed as games intended to improve mental skills. Rather, these studies attempt to provide a simple definition of how mutable the brain really is at all life stages, from infancy onward into adulthood.

One ongoing question that preoccupies the basic scientists pursuing this line of research is how routine everyday activities—sleep, wakefulness, even any sort of movement—may affect the ability to perceive things in the surrounding environment. One of the leaders in these efforts is Michael Stryker, who researches neuroplasticity at the University of California San Francisco. Stryker headed a group that in 2010 published a study on what happened when mice ran on top of a Styrofoam ball floating on air. They found that neurons in a brain region that processes visual signals—the visual cortex—nearly doubled their firing rate when the mice ran on the ball.

The researchers probed further and earlier this year published findings on a particular circuit that acts as a sort of neural volume control in the visual cortex. It turns out that a certain type of neuron—the vasoactive intestinal peptide (VIP) neurons (yes, they’re brain cells)—responds to incoming signals from a structure deep within the brain indicating that the animal is on the move. The VIP neurons then issue a call to turn up the firing of cells in the visual cortex. (As always with the brain, it’s not quite that straightforward: the VIP neurons squelch the activity of other neurons whose job is to turn down the “excitatory” neurons that rev up processing of visual information.)

“In the mouse the circuit happens to be hooked in the visual cortex to locomotion which puts it into a high-gain state,” Stryker says. “That’s a sensible thing because when you move through the environment, you want the sensory system that tells you about things far away to be more active, to give a larger signal.” The researchers postulate that these neurons may form part of a general-purpose circuit able to detect an animal’s particular behavioral state and then respond to that input by regulating different parts of the cortex that process vision, hearing and other sensory information.

In late June, the investigators took their studies in a new direction with a publication that showed the possible clinical benefits of dialing up their newly identified neural knob. They did so in a study that demonstrated how the circuit that involves the VIP neurons plays a pivotal role in restoring visual acuity in a mouse that had been deprived of sight during a critical period in infancy when the animal must either use it or lose it. They sewed shut one eye in the young mouse for a time—effectively replicating amblyopia, a condition called “lazy eye” in human children that leads to vision loss. They waited until the mice had passed through the critical development stage, took out the stitches, and then switched on the VIP neurons in the behavioral plasticity circuit by having the mice go for a run. That restored vision to normal levels, but only if the animals were also exposed simultaneously to various forms of visual stimuli—either a grating pattern or random noise, similar to a television picture when a station is off the air.

The investigators have plans to see whether the same circuit in humans operates in a similar manner. One cautionary note to brain-game designers: the success in these experiments in eliciting plasticity—restoring vision, that is—was highly sensitive to the particular conditions under which the experiments were carried out. The visual cortex of a running rodent exposed to the grating pattern was better able to distinguish a similar geometric representation later, but not an image of snow-like noise.

What that means is that simply creating a game out of an n-back or Stroop test, or any other psychological assay for that matter, may not work that well in improving memory or self-control unless neuroscientists delve down with the same detailed level of analysis that Stryker and colleagues brought to bear. “We still don’t know what changes in circuitry are responsible for these phenomena of adult plasticity because we don’t have a really solid anatomical grasp of them,” Stryker says. Without the requisite insight, it may be that brain games make you into an ace at taking psychological tests designed to assess cognition, but these same tests may have little or nothing to do with actually improving mental skills. You may spend all that time on sharpening cognition and end up as nothing more than a highly practiced test taker.

Stay tuned in coming years as brain science tries to sort out the plastic from the inelastic.

Image Source: National Eye Institute
About the Author: Gary Stix, a senior editor, commissions, writes, and edits features, news articles and Web blogs for SCIENTIFIC AMERICAN. His area of coverage is neuroscience. He also has frequently been the issue or section editor for special issues or reports on topics ranging from nanotechnology to obesity. He has worked for more than 20 years at SCIENTIFIC AMERICAN, following three years as a science journalist at IEEE Spectrum, the flagship publication for the Institute of Electrical and Electronics Engineers. He has an undergraduate degree in journalism from New York University. With his wife, Miriam Lacob, he wrote a general primer on technology called Who Gives a Gigabyte? Follow on Twitter @gstix1.

Neuroscience: "I Can Read Your Mind" (BBC Future)

From the BBC Future blog, this is an interesting article on new efforts in neuroscience to read minds (sort of), or at least to identify images received through sensory channels as well as those experienced in dreams. All of this is part of the European Human Brain Project.

This article looks at a little bit of the research currently being conducted and how it fits into bigger projects.

Neuroscience: ‘I can read your mind’

Rose Eveleth | 18 July 2014


(Getty Images)

What are you looking at? Scientist Jack Gallant can find out by decoding your thoughts, as Rose Eveleth discovers.


Jack Gallant can read your mind. Or at least, he can figure out what you’re seeing if you’re in his machine watching a movie he’s playing for you.

Gallant, a researcher at the University of California, Berkeley, has a brain decoding machine – a device that uses brain scanning to peer into people’s minds and reconstruct what they’re seeing. If mind-reading technology like this becomes more common, should we be concerned? Ask Gallant this question, and he gives a rather unexpected answer.

In Gallant’s experiment, people were shown movies while the team measured their brain patterns. An algorithm then used those signals to reconstruct a fuzzy, composite image, drawing on a massive database of YouTube videos. In other words, they took brain activity and turned it into pictures, revealing what a person was seeing.

For Gallant and his lab, this was just another demonstration of their technology. While his device has made plenty of headlines, he never actually set out to build a brain decoder. “It was one of the coolest things we ever did,” he says, “but it’s not science.” Gallant’s research focuses on figuring out how the visual system works, creating models of how the brain processes visual information. The brain reader was a side project, a coincidental offshoot of his actual scientific research. “It just so happens that if you build a really good model of the brain, then that turns out to be the best possible decoder.”
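Gallant's real pipeline fits a detailed encoding model of the visual system to each subject; as a rough intuition for the library-matching step described above, here is a minimal toy sketch in Python. Everything in it (the clip names, the four-element "activity patterns," the `decode` helper) is a hypothetical illustration, not the actual method or data.

```python
# Toy sketch of the decoding idea: given a new brain-activity pattern,
# rank a library of clips by how well their known patterns correlate
# with the observed one, then return the best matches. All data below
# is made up for illustration.
import math

def correlation(a, b):
    """Pearson correlation between two equal-length activity patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def decode(observed, library, top_k=2):
    """Return the names of the top_k clips whose patterns best match."""
    ranked = sorted(library,
                    key=lambda clip: correlation(observed, clip["pattern"]),
                    reverse=True)
    return [clip["name"] for clip in ranked[:top_k]]

library = [
    {"name": "face", "pattern": [0.9, 0.1, 0.2, 0.8]},
    {"name": "city", "pattern": [0.1, 0.9, 0.8, 0.2]},
    {"name": "dog",  "pattern": [0.7, 0.3, 0.2, 0.8]},
]
print(decode([0.85, 0.15, 0.15, 0.85], library))  # → ['face', 'dog']
```

In the published work the reconstruction went further, averaging the best-matching clips into a fuzzy composite image; this sketch stops at the ranking step, which is where the "really good model of the brain" does its work.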

Science or not, the machine stokes fears among the dystopian futurists who worry that the government could one day tap into our innermost thoughts. This might seem like a silly fear, but Gallant says it’s not. “I actually agree that you should be afraid,” he says, “but you don’t have to be afraid for another 50 years.” It will take that long to solve two of the big challenges in brain-reading technology: portability and the strength of the signal.



Decoding brain signals from scans can reveal what somebody is looking at (SPL)

Right now, in order for Gallant to read your thoughts, you have to slide into a functional magnetic resonance imaging (fMRI) machine – a huge, expensive device that measures where blood is flowing in the brain. While fMRI is one of the best ways to measure the activity of the brain, it’s not perfect, nor is it portable: subjects can’t move while they’re being scanned.
And while comparing the brain image and the movie image side by side makes their connection apparent, the image that Gallant’s algorithm can build from brain signals isn’t quite like peering into a window. The resolution of fMRI scans simply isn’t high enough to produce a clear picture. “Until somebody comes up with a method for measuring brain activity better than we can today, there won’t be many portable brain-decoding devices that will be built for a general use,” he says.

Dream reader

While Gallant isn’t working on trying to build any more decoding machines, others are. One team in Japan is currently trying to make a dream reader, using the same fMRI technique. But unlike in the movie experiment, where researchers know what the person is seeing and can confirm that image in the brain readouts, dreams are far trickier.

To train the system, researchers put subjects in an MRI machine and let them slip into that strange state between wakefulness and dreaming. They then woke the subjects and asked what they had seen. With that information, they could match the reported dream images – everything from ice picks to keys to statues – to the accompanying brain activity and train the algorithm.


Researchers are trying to decode dreams by studying brain activity (Thinkstock)

Using this database, the Japanese team was able to identify around 60% of the types of images dreamers saw. But there’s a key hurdle between these experiments and a universal dream decoder: each dreamer’s signals are different. Right now, the decoder has to be trained to each individual. So even if you were willing to sleep in an MRI machine, there’s no universal decoder that can reveal your nightly adventures.

Even though he’s not working on one, Gallant knows what kind of brain decoder he might build, should he choose to. “My personal opinion is that if you wanted to build the best one, you would decode covert internal speech. If you could build something that takes internal speech and translates it into external speech,” he says, “then you could use it to control a car. It could be a universal translator.”

Inner speech

Some groups are edging closer to this goal; a team in the Netherlands, for instance, scanned the brains of bilingual speakers to detect the concepts each participant was forming – such as the idea of a horse or cow – correctly identifying the meaning whether the subjects were thinking in English or Dutch. Like the dream decoder, however, the system needed to be trained on each individual, so it is a far cry from a universal translator.

If nothing else, the brain reader has sparked more widespread interest in Gallant’s work. “If I go up to someone on the street and tell them how their brains work, their eyes glaze over,” he says. When he shows them a video of their brains actually at work, they start to pay attention.

If you would like to comment on this, or anything else you have seen on Future, head over to our Facebook or Google+ page, or message us on Twitter.

If Trauma Victims Forget, What Is Lost to Society?


Emily Anthes is the author of Frankenstein's Cat: Cuddling Up to Biotech's Brave New Beasts (2013), and in this article for Nautilus she takes an in-depth look at the psychopharmacology of erasing trauma memories, but also at what the social cost of these lost memories might be.

Here is a key quote:
Perhaps, researchers hypothesized, propranolol could weaken emotional memories if PTSD patients took the drug after they conjured up the details of a painful experience. By blocking the effects of norepinephrine and epinephrine upon recall, propranolol might dampen down activity in the amygdala and disrupt reconsolidation.
This is interesting research, but from my perspective (as someone who works with those who suffer from the symptoms of PTSD), erasing memories is the wrong approach.

If it were up to me, we would be discovering ways to release the emotional charge of trauma memories so that memories still exist but are not so haunting or debilitating.

If Trauma Victims Forget, What Is Lost to Society?

A pill to dampen memories stirs hope and worry.

By Emily Anthes | Photo by Andrew B. Myers | July 17, 2014

THE WOMEN WHO COME to see Deane Aikins, a clinical psychologist at Wayne State University in Detroit, are searching for a way to leave their traumas behind them. Veterans in their late 20s and 30s, they served in Iraq and Afghanistan. Technically, they’d been in non-combat positions, but that didn’t eliminate the dangers of warfare. Mortars and rockets were an ever-present threat on their bases, and they learned to sleep lightly so as not to miss alarms signaling late-night attacks.

Some of the women drove convoys of supplies across the desert. It was a job that involved worrying about whether a bump in the road was an improvised explosive device, or if civilians in their path were strategic human roadblocks.

On top of all that, some of the women had been sexually assaulted by their military colleagues. After one woman was raped, she helped her drunk assailant sneak back into his barracks because she worried that if they were caught, she’d be disciplined or lose her job.

These traumas followed the women home. Today, far from the battlefield, they find themselves struggling with vivid flashbacks and nightmares, tucking their guns under their pillows at night. Some have turned to alcohol to manage their symptoms; others have developed exhausting routines to avoid any people or places that might trigger painful memories and cause them to re-live their experiences in excruciating detail.

Despite their sometimes debilitating symptoms, the women, who have all been diagnosed with post-traumatic stress disorder (PTSD), are working hard to carry out the tasks of day-to-day living. “They’re trying to get a job or go back to school,” says Aikins. “They’re just trying to get on with their lives.” Aikins is hoping to help the veterans do just that, using what may seem like an unlikely remedy: a cheap, generic pill commonly prescribed for high blood pressure. The drug, a beta blocker called propranolol, has gained attention for its potential to dial down traumatic memories, making them less emotionally upsetting when they’re recalled.

However, promising studies have also stirred controversy, with some bioethicists warning that memory-dulling drugs could have profound, unintended consequences for our psyches and our society. The debate is raising tricky questions about what—and who—memory is for. The European Union’s highest court recently ruled that, at least when it comes to the Internet, we all have the “right to be forgotten” for things no longer relevant. Do we also have the right to forget?

WHEN THEY'RE BEING FORMED, memories are fragile. At the neurological level, learning occurs when the connections, or synapses, between two or more neurons grow stronger. This synaptic strengthening involves a cascade of cellular and molecular processes as neurons turn genes on and off, synthesizing new proteins. These changes can take hours and are susceptible to disruption. (That’s why someone with a concussion, for instance, may not remember anything from several hours before the injury.) Over time, however, this flurry of neuronal activity appears to transform transient, short-term memories into stable, long-term ones in a process known as memory consolidation.

During a stressful or frightening event, the body releases a flood of “fight or flight” hormones, including epinephrine in the blood and norepinephrine in the brain. The rush of norepinephrine causes the amygdala, a brain region involved in the processing of emotion, to send signals to parts of the brain that consolidate memories. “What the amygdala does is to say, ‘Make a better, strong memory of whatever happened,’” says James McGaugh, a neuroscientist at the University of California, Irvine.

As a result, emotionally arousing experiences get stamped more firmly into our brains than routine events, such as a weekly trip to the grocery store. Although vivid memories of stressful events can provide important lessons for the future, in a small number of cases these memories become intrusive and disabling, resulting in PTSD. In this disorder, even innocuous environmental stimuli—the sound of a car backfiring, a glimpse of a neighbor who looks vaguely like an assailant—can trigger traumatic memories, as well as the overwhelming fear that accompanied the initial trauma. And when these recollections come, the sympathetic nervous system kicks into overdrive; hearts pound, palms sweat, and breathing quickens, even though the danger has long passed. “It’s this incredibly debilitating sympathetic surge,” Aikins says. Some 7.7 million American adults suffer from PTSD, according to the National Institute of Mental Health.


Two decades ago, scientists began to wonder if they could weaken traumatic memories by suppressing the hormonal rush that accompanies their formation. They turned to propranolol, which was already on the market as a treatment for hypertension and blocks the activity of hormones like epinephrine and norepinephrine. Psychological studies with the drug helped to reveal how stress hormones and the “emotional” amygdala influence memory consolidation. In 1994, McGaugh and Larry Cahill, also a neuroscientist at the University of California, Irvine, reported that healthy subjects who received a dose of propranolol before they heard an upsetting story later recalled fewer details about that story than those who took a placebo. Next, in 2002, neuroscientists reported that emergency room patients who took propranolol within 6 hours of a traumatic event were less likely to experience the heightened emotions and arousal associated with PTSD one month later, compared with people who took placebos.[1]

The hitch was that in order to interfere with memory consolidation, propranolol needed to be given within hours of a trauma, long before doctors knew whether someone would go on to develop PTSD. But around the same time, studies began to show that memories can once again become fragile when they are recalled. “There’s new encoding happening so that you’re not only playing back the memory, but you’re also updating it,” says Stephen Maren, a neuroscientist at Texas A&M University. “It’s sort of opening this window of opportunity to manipulate the memory, to erase the memory, to dampen it.” After recall, the brain must then reconsolidate the new version of the memory, which requires cellular changes similar to those involved in the initial consolidation.

Perhaps, researchers hypothesized, propranolol could weaken emotional memories if PTSD patients took the drug after they conjured up the details of a painful experience. By blocking the effects of norepinephrine and epinephrine upon recall, propranolol might dampen down activity in the amygdala and disrupt reconsolidation.

Initial tests of this idea have been promising. In these experiments, patients with PTSD are typically given propranolol either shortly before or after describing their traumas in detail. In a 2011 report on three trials, researchers reported that after six propranolol-recall treatment sessions, patients showed a significant decrease in PTSD symptoms.[2] Larger clinical trials are now underway, including Aikins’s study, in which female veterans with PTSD will receive either propranolol or a placebo after thinking about their wartime experiences. Four weeks later, Aikins will measure the veterans’ physiological responsiveness and their PTSD symptoms, looking for signs of improvement.

It’s still unclear exactly how propranolol works, but the evidence so far suggests that the drug is not erasing people’s recollections of the facts, but just muting their emotional and physiological responses. In a study of healthy volunteers, subjects who received uncomfortable electric shocks after viewing certain slides were perfectly capable of recalling which images foretold a shock after they’d taken propranolol. But they did not become fearful when they saw the images again, unlike control subjects who took a placebo.[3] Propranolol appeared to alter the emotional content of the memory, while sparing fact-based recall.

However, scientists don’t yet know how the drug will alter remembrances of real-world traumas. Although some studies of healthy subjects have found that people given propranolol remember fewer details about a fictional story, researchers theorize that the drug largely spares the facts that comprise declarative memory, particularly when the memory is a powerfully negative one. Anecdotally, Alain Brunet, a psychologist at McGill University, says that forgetfulness has not been a problem among the trauma victims who have participated in his propranolol studies. Patients still remember their traumas—they just seem less distressed by them. Some people with PTSD turn to alcohol or illegal drugs to calm themselves down and ease the pain of their memories; propranolol could provide a safer, more targeted way to do a similar thing.

 
Vasily Vereshchagin (1842–1904)

FROM THE START, the notion of memory dampening made bioethicists uneasy. Some worried pharmaceutical companies might “medicalize” ordinary bad memories, such as break-ups or workplace embarrassments. Others pointed out that there’s a reason that dangerous events typically prompt such strong emotion—and are engraved so deeply in our minds: They help us remember to avoid similar situations in the future.

Some of the strongest objections came from the President’s Council on Bioethics, created by President George W. Bush, which critiqued “memory-blunting” from seemingly every angle in a 2003 report.[4] They suggested that the technology might alter people’s sense of identity and make “shameful acts seem less shameful or terrible acts less terrible.” A criminal could exploit them to ease his own conscience after harming another person. Or, the Council wrote, we might be tempted to start giving these compounds to soldiers to numb the emotion they feel when they kill other human beings in combat.

Finally, the Council argued that memory-blunting should not necessarily be a personal choice, writing: “Strange to say, our own memory is not merely our own; it is part of the fabric of the society in which we live.” To make its point, the Council invoked the Holocaust. Though survivors themselves might benefit from dampening their recollections, the Council suggested society needs them to bear witness, “lest we all forget the very horrors that haunt them.”

To many clinicians, and even other ethicists, the Council’s abstract concerns rang a bit hollow compared to patients—who exist in the here and now—whose suffering could potentially be eased by these compounds. “These are extremely theoretical arguments, and they’re interesting, but when we start talking about PTSD it’s almost irresponsible,” says Stuart Youngner, a bioethicist at Case Western Reserve University. “I thought that the examples that they gave were so outlandish and dystopian and [showed] very little consideration of the suffering of people with PTSD.”

As for the Council’s worry that the drug might be used to absolve people of their guilt over harming others, some psychologists countered that any drug can be misused, and such concerns do not justify abandoning a promising therapy. Further, many of the Council’s fears do not seem to align with scientific reality, researchers said. “Accounts suggesting that propranolol wipes memory are totally exaggerated and unduly alarming,” Brunet says. “This is not what propranolol does.”

Researchers are currently exploring other substances that may be useful for cordoning off the emotional content of a memory. In June, scientists at Emory University discovered that a compound called osanetant disrupts the consolidation of fear memories.[5] Mice given the drug, which interferes with a gene expressed by certain neurons in the amygdala, were still able to learn the connection between a sound and an electric foot shock, but they displayed less fear when they heard the sound.

As far as a drug that can annihilate specific memories with pinpoint precision is concerned—that still exists only in Hollywood. The techniques and compounds that have shown promise so far seem to exert their effects primarily on the emotional and physiological components of memory, and fear memories in particular.

Even so, they bring us into new territory, and it’s little wonder that the approach has been met with controversy. “Anytime we have the potential to develop a new kind of control over our bodies it makes us uneasy,” says Adam Kolber, a law professor at Brooklyn Law School in New York. He acknowledges that there may indeed be cases in which the drugs pit what’s good for an individual against what’s good for society. For instance, emotionally muting the memory of a violent assault may make it more difficult for victims to persuasively testify against a perpetrator in front of a jury.

However, Kolber adds, that’s no reason to abandon the treatments. He predicts that concerns over memory-dampening technology will eventually fade away, particularly if the therapies end up relieving suffering. “Memory just always seems like something that’s given to us, that we can’t really do anything about,” he says. “It almost feels like it’s not ours to control.” Soon, it will be.

Emily Anthes is a Brooklyn-based science journalist and author of the book Frankenstein’s Cat: Cuddling Up to Biotech’s Brave New Beasts.

References

1. Pitman, R., Sanders, M., Zusman, R., et al. Pilot study of secondary prevention of posttraumatic stress disorder with propranolol. Biological Psychiatry 51, 189-192 (2002).

2. Brunet, A., Poundja, J., Tremblay, J., et al. Trauma reactivation under the influence of propranolol decreases posttraumatic stress symptoms and disorder: 3 open-label trials. Journal of Clinical Psychopharmacology 31, 547-550 (2011).

3. Kindt, M., Soeter, M., and Vervliet, B. Beyond extinction: erasing human fear responses and preventing the return of fear. Nature Neuroscience 12, 256-258 (2009).

4. President’s Council on Bioethics. Beyond therapy: Biotechnology and the pursuit of happiness (2003).

5. Andero, R., Dias, B., and Ressler, K. A Role for Tac2, NkB, and Nk3 Receptor in Normal and Dysregulated Fear Memory Consolidation. Neuron epub ahead of print (2014).

Thursday, July 17, 2014

Who Are Hacktivists? Using Tech for Social Action


This is an interesting discussion, nearly two hours' worth, on the nature of the hacktivist movement - figures such as Edward Snowden or the collective Anonymous.

The conversation below took place on July 10, 2014 at swissnex San Francisco in California.

Who Are Hacktivists?




Who Are Hacktivists? from swissnex San Francisco on FORA.tv

Summary



Privacy, particularly as it pertains to our lives online, is such a hot-button issue that it can deter some from getting into the meat of the conversation. Swissnex, however, has organized this panel of forward thinkers who are living proof of the need for a front-line offensive. Plucked from diverse fields of expertise, these five individuals provide intriguing insight and a wealth of knowledge. It's an exciting glimpse into the realm of online privacy and how the fight for our rights is waged day in and day out.
It’s been a year since Edward Snowden leaked classified National Security Agency (NSA) documents to The Guardian newspaper in the UK, revealing widespread surveillance.

Today, the discussions around mass surveillance, privacy, and data collection are ongoing and heated, and the reputation of so-called hacktivists (people using computers and networks as a means of protest or action) is evolving.

Who are today’s hacktivists, and what tools and means do they use to express themselves and their ideals? Join artists, activists, researchers, and a "cypherpunk" to discuss and debate these questions, and to examine how to harness the power and pitfalls of computer systems and new technologies.


Speakers


!Mediengruppe Bitnik (read: the Not Mediengruppe Bitnik) live and work in Zurich and London. Using hacking as an artistic strategy, their works re-contextualize the familiar to allow for new readings of established structures and mechanisms. They have been known to intervene in London's surveillance space by hijacking CCTV security cameras and replacing the video images with an invitation to play chess.

In early 2013, !Mediengruppe Bitnik (Carmen Weisskopf and Domagoj Smoljo) sent a parcel equipped with a camera to WikiLeaks founder Julian Assange at the Ecuadorian Embassy in London. Their works have been shown internationally and recognized with the Swiss Art Award, the Migros New Media Jubilee Award, and an Honorary Mention at Prix Ars Electronica.

April Glaser is a staff activist at EFF, the Electronic Frontier Foundation, where she focuses on community outreach and blogs about a wide range of digital rights issues. She works directly with community organizations interested in promoting free speech, privacy, and innovation in digital spaces, and she lectures frequently on these topics for groups large and small. Glaser previously worked at the New America Foundation's Open Technology Institute, where she designed tools for civic engagement in media policy and spent years on the front lines of media justice advocacy and research.

Andy Isaacson is a software engineer and co-founder of the anarchistic and educational hackerspace Noisebridge in San Francisco. He runs the Noisebridge exit node on the Tor network, which anonymizes internet users' traffic. He asks pointed questions about cryptography, security, and their intersection with society and ethics as @eqe on Twitter.

Thomas Maillart is a Swiss National Science Foundation Fellow at the UC Berkeley School of Information. His research is focused on the complex social dynamics of peer production, and on the mechanisms of cyber risks as an innovation process. He received his Ph.D. in Mechanisms of Internet Evolution & Cyber Risk from ETH Zurich and graduated (M.S.) in Engineering from EPFL, also in Switzerland.

What Makes Us Conscious of Our Own Agency? And Why the Conscious Versus Unconscious Representation Distinction Matters


This hypothesis and theory article from Frontiers in Human Neuroscience (open access) looks at our awareness (consciousness) of our own agency (our capacity to act in the world) and why the distinction between conscious and unconscious representations of agency matters.

From Wikipedia:
Agency may either be classified as unconscious, involuntary behavior, or purposeful, goal directed activity (intentional action). An agent typically has some sort of immediate awareness of their physical activity and the goals that the activity is aimed at realizing. In ‘goal directed action’ an agent implements a kind of direct control or guidance over their own behavior.
This article, then, examines the difference between involuntary behavior and goal directed action.


Full Citation: 
Carruthers, G. (2014, Jun 23). What makes us conscious of our own agency? And why the conscious versus unconscious representation distinction matters. Front. Hum. Neurosci. 8:434. doi: 10.3389/fnhum.2014.00434

What makes us conscious of our own agency? And why the conscious versus unconscious representation distinction matters

Glenn Carruthers
ARC Centre of Excellence in Cognition and Its Disorders, Macquarie University, Sydney, NSW, Australia
Abstract

Existing accounts of the sense of agency tend to focus on the proximal causal history of the feeling. That is, they explain the sense of agency by describing the cognitive mechanism that causes the sense of agency to be elicited. However, it is possible to elicit an unconscious representation of one’s own agency that plays a different role in a cognitive system. I use the “occasionality problem” to suggest that taking this distinction seriously has potential theoretical pay-offs for this reason. We are faced, then, with a need to distinguish instances of the representation of one’s own agency in which the subject is aware of their sense of own agency from instances in which they are not. This corresponds to a specific instance of what Dennett calls the “Hard Question”: once the representation is elicited, then what happens? In other words, how is a representation of one’s own agency used in a cognitive system when the subject is aware of it? How is this different from when the representation of own agency remains unconscious? This phrasing suggests a Functionalist answer to the Hard Question. I consider two single function hypotheses. First, perhaps the representation of own agency enters into the mechanisms of attention. This seems unlikely as, in general, attention is insufficient for awareness. Second, perhaps, a subject is aware of their sense of agency when it is available for verbal report. However, this seems inconsistent with evidence of a sense of agency in the great apes. Although these two single function views seem like dead ends, multifunction hypotheses such as the global workspace theory remain live options which we should consider. I close by considering a non-functionalist answer to the Hard Question: perhaps it is not a difference in the use to which the representation is put, but a difference in the nature of the representation itself. When it comes to the sense of agency, the Hard Question remains, but there are alternatives open to us.

Introduction

In this paper I argue that we, as a community investigating the sense of agency, are not doing enough to answer what Dennett has called the “Hard Question” of consciousness. Our existing models do a very good job of explaining when a representation of own agency is elicited. I illustrate this with two historically important accounts: the comparator model of Frith et al. and Wegner et al.’s inference to apparent mental state causation. Following Revonsuo, I consider these to be proximal etiological explanations. Although powerful so far as they go, these accounts, on their own, do not provide us with the explanatory resources to distinguish conscious and unconscious representations of one’s own agency. This is not a problem we can ignore. I use the “occasionality problem” to suggest that there are potential theoretical benefits to taking this distinction more seriously, as conscious and unconscious representations of own agency play very different roles in cognition. I conclude by considering how we might approach the Hard Question for the sense of agency. I consider two Functionalist approaches: (i) that a representation of own agency is conscious if it is taken as the object of attention; and (ii) that a representation of own agency is conscious if it is available for verbal report. Although such approaches offer clear research agendas, both of these specific approaches seem non-starters on empirical grounds. That said, multifunction hypotheses such as the global workspace theory remain viable Functionalist positions. Finally, I consider a Vehicle theory approach to the Hard Question. Such an approach also offers some clear research questions, but currently no clear answers. As of now, the Hard Question remains under-considered for the sense of agency, even though there exist a variety of questions we can ask to make progress on it if we take either a Functionalist or a Vehicle approach. These are questions we would all do well to consider.

Standard Accounts of the Sense of Agency

Standard explanations of the sense of agency are of a particular type. Revonsuo (2006, pp. 20–22) calls this type of explanation “proximal etiological explanation”. Such explanations have two defining characteristics. First, they enumerate causes of the sense of agency. Second, the explanations are cognitive explanations. The specific causes posited are mental representations and computations. To understand these accounts as explanations of the sense of agency then, is to understand them as a description of what aspects of the mind, i.e., mental representations and computations, cause a subject to experience their own agency. The sense of agency itself is taken to be just another representation in this causal chain.

These traits are shared by prominent accounts of the sense of agency. Consider first the comparator model. This model gets its name from the use of three hypothetical comparisons performed by the motor control system. Each of these comparisons performs specific functions for motor control and motor learning (Wolpert and Ghahramani, 2000). One of these comparisons also elicits the sense of agency. This is the comparison that will concern us here (for the full account and its broader applicability see Frith et al., 2000a). On this model it is hypothesized that performing an action requires the formation of a goal state or motor intention (Pacherie, 2008), which represents where the body needs to move to in order to perform the action. From this, the motor control system formulates a motor command, which specifies how to move the body from where it is to where it needs to be in order to attain the goal. Two copies of the motor command are formed; one is sent to the periphery and elicits the requisite contractions of the effector muscles to perform the movement needed to attempt the action. This movement, of course, affects the sensory organs, allowing the motor control system to represent the movement after it occurs. The second copy of the motor command, sometimes called the “efference copy” or “corollary discharge”, is used by the motor system to form a prediction of what sensory feedback will be received due to the action. This predicted feedback can be used to represent the action as it occurs (Frith et al., 2000a,b; Blakemore et al., 2002).

Now we get to the sense of agency. It is hypothesized that the collection of representations and computations introduced above cause the sense of agency to be elicited. Specifically when feedback from the senses to the motor control system (actual sensory feedback), matches an internally generated prediction of what this feedback will be (predicted sensory feedback), a sense of agency is elicited (Frith et al., 2000a, p. 1784). These two representations matching in this context means that they represent the same action. The comparator model has been considered a promising explanation of the sense of agency and is able to explain some important discoveries (for recent reviews of what and how see Carruthers, 2012).

Wegner et al. have suggested that the sense of agency is elicited by a rather different kind of computation. On their model, the sense of agency is elicited when one infers that one or other of one’s mental states caused the action of one’s body (Wegner and Wheatley, 1999, p. 480; Wegner, 2003, p. 67). If correct, we can provide a proximal etiological explanation of the sense of agency by explaining how this inference is made. To make the inference the subject needs to represent their mental states qua potential causes of action, represent which body in the world is their own (i.e., a sense of embodiment) and represent the action which is occurring or has occurred. Next they must represent that one or other mental state causes the action of their body. This is the role of the inference to apparent mental state causation. According to Wegner et al. this inference is made when three facts about the relationship between the mental state and action are recognized. First, the mental state must be consistent with the action in that it specifies the action that actually occurs. Second, the mental state must seem to occur at an appropriate time before the action occurs; for example, a memory of an action won’t be inferred as a cause of that action. Third, the thought must appear to be the only possible cause of the action, i.e., if something else (another person or a gust of wind, say) could have caused the action then the inference will not be made, or at least not made with a high degree of certainty. Wegner et al. call these the principles of “consistency”, “priority” and “exclusivity” respectively (Wegner and Wheatley, 1999, pp. 482–487; Wegner, 2003, p. 67). Like the comparator model, this account has been considered a promising approach to the sense of agency and it can explain some important discoveries (see Wegner, 2002 for reviews; Carruthers, 2010).
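Wegner et al.'s three principles can likewise be caricatured in a few lines. This is a hedged sketch: the principles are verbal criteria, and reducing each to a boolean is my own simplification for illustration.

```python
# Toy version of the inference to apparent mental state causation
# (booleans standing in for the verbal principles; my simplification).

def infers_own_agency(consistency, priority, exclusivity):
    """Infer that one's own mental state caused the action only when the
    thought specifies the action (consistency), seems to occur just before
    it (priority), and appears to be its only possible cause (exclusivity)."""
    return consistency and priority and exclusivity

# All three principles met: the action is self-attributed.
own_action = infers_own_agency(True, True, True)

# A gust of wind could also have caused the movement, so exclusivity fails
# and the inference is not made (or is made with low certainty).
ambiguous = infers_own_agency(True, True, False)

print(own_action, ambiguous)  # True False
```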

Numerous authors have followed the general approach of these classic hypotheses. Recently, several authors have proposed that the sense of agency is elicited by a process that integrates the output of several such computations (Synofzik et al., 2008, 2009; Moore and Fletcher, 2012; Carruthers, in press). This work is characterized by considerable progress in investigations into the computations that elicit the sense of agency. However, there is a limitation to this approach. Knowing how the representation of agency is elicited doesn’t distinguish between cases where it is elicited but remains unconscious and cases where it is elicited and the subject is aware of it. Unless we are to view the sense of agency as unique amongst all mental representations in that it can only ever be conscious, we must allow for the possibility of a representation of own agency to be elicited and remain unconscious. In the next section I consider the occasionality problem which has been presented as an objection to the comparator model, but which is generalizable to other accounts. I use this as an example to show that taking seriously the distinction between unconscious representations and a conscious sense of agency can have theoretical pay-offs in this area as each of these play different roles in the broader cognitive system. In particular I suggest that if we take this distinction seriously then the occasionality problem doesn’t arise.

The Occasionality Problem and Unconscious Representations of One’s own Agency

It would, I take it, be bizarre if there were no unconscious representations of own agency. But is there any theoretical benefit, for sense of agency research as it is currently done, to considering this fact explicitly? In this section I argue that there is: by considering the different roles conscious and unconscious representations of own agency play, in particular that only the absence of conscious representations can be noticed by the subject, we can avoid the “occasionality problem”. In the next section I consider some ways in which we might attempt to explain the difference between conscious and unconscious representations of own agency. The occasionality problem, it should be noted, was originally formulated as an objection to the comparator model, but it applies equally to Wegner et al.’s account described above. To see the problem we first need to take a step back and consider clinical phenomena that the above models need to explain.

One of the central explananda for the accounts introduced above is thought to be delusions of alien control. This delusion, commonly seen as a symptom of schizophrenia, is the patient’s belief that not they, but rather some other agent, controls their actions. This is expressed in reports such as:
I felt like an automaton, guided by a female spirit who had entered me during it [an arm movement].
I thought you [the experimenter] were varying the movements with your thoughts.
I could feel God guiding me [during an arm movement] (Spence, 2001, p. 165).
There is a growing consensus that explanations focusing on the sense of agency alone cannot explain every feature of this delusion (Synofzik et al., 2008; Carruthers, 2009). In particular, such accounts do not have the resources to explain why patients attribute the action to another specific agent. What such accounts can explain is why patients fail to attribute their actions to themselves. According to the comparator model, in healthy subjects the comparison between actual and predicted sensory feedback causes a sense of agency to be elicited for actions the subject performs. However, it is hypothesized that this computation goes wrong for the patient suffering delusions of alien control. They do not represent a match between predicted and actual sensory feedback when they should, and so no sense of agency is elicited. Without this sense, the patient has no experiential basis for a self-attribution of action (they do not feel as though they perform the action) and so actions are not self-attributed. For those interested in the details of why this occurs, there is some experimental evidence that these patients have an underlying deficit in forming or using predicted sensory feedback (Frith and Done, 1989; Blakemore et al., 2000; Carruthers, 2012).

As with the comparator model, Wegner et al.’s inference to apparent mental state causation is unlikely to explain every feature of this delusion. Like the comparator model, it may offer an account of how the sense of agency is lost. According to this view the sense of agency would be lost when one of the principles of priority, exclusivity or consistency is not met. I have argued elsewhere (forthcoming) that on this model it is reasonable to hypothesize that it is the principle of priority which is violated, as there is some evidence that patients suffering from delusions of alien control display abnormalities in the representation of the timing of their actions (Voss et al., 2010).

Now we are in a position to examine the occasionality problem (de Vignemont and Fourneret, 2004; Proust, 2006, p. 89; Synofzik et al., 2008). This problem starts from the observation that those suffering from delusions of alien control only attribute some of their actions to other agents. None of the models above appear to have, on their own, the resources to explain this observation. At the core of this objection is an accusation of a false prediction. A model like the comparator model predicts that patients lack a sense of agency for their actions because they cannot represent a match at the comparison between actual and predicted sensory feedback. This does not offer us principled grounds for distinguishing those actions that the patient self-attributes from those that they attribute to others. If the comparison fails then the model should predict that patients lack a sense of agency for all of their actions. This is not the case, so the comparator model appears to be incorrect. This problem arises again when we consider Wegner et al.’s account. Hypothesizing that these patients fail to represent their own agency for their actions because they misrepresent the timing of their actions (thus violating the principle of priority) again fails to explain why only some actions are misattributed. In essence, these accounts suggest that such patients always lack a representation of their own agency, but it seems that this lack only matters to the subject some of the time.

de Vignemont and Fourneret (2004, p. 9) have suggested that the system which elicits the sense of agency, whether it be the comparator model or something else, fails only occasionally and in a context-specific way. If it is true that the comparator or inference to apparent mental state causation faces intermittent failures, then the occasionality problem disappears. However, the questions of how and why the mechanism occasionally fails have not been answered, and nothing about the actions themselves, or features such as their personal significance, affects whether or not they are self-attributed (Proust, 2006, p. 89). More importantly, there is no evidence independent of reports of the delusion that the comparator model or the representation of the timing of actions fails only occasionally for such patients. Until such evidence is forthcoming it is difficult for this solution to shake the appearance of being ad hoc, and it is worth considering other accounts. Moreover, as I will suggest below, if we consider the different roles conscious and unconscious representations of own agency play in cognition, which we should do anyway, then there is no need to add additional assumptions of this type.

An argument from the occasionality problem against the hypotheses described, like that sketched above, assumes that the result of the process leading to a representation of own agency is a conscious sense of agency. If we drop this assumption the problem needn’t arise. To see why this assumption is being made let us consider the relationship between experiences and delusions. So, what is the evidence that patients suffering delusions of alien control lack a sense of agency? One might be tempted to think that they say so. But this isn’t typically the case. Rather, a deficit in a conscious sense of agency is inferred from the fact that patients attribute their own actions to another agent. This inference is justified by some standard assumptions in the study of delusions. The state of the art in delusions research is strongly influenced by Maher’s (1974, 1988) hypothesis that delusions are attempts to explain anomalous experience. Now there may be controversy regarding whether this explanatory attempt involves normal or deficient reasoning (Davies et al., 2002; Gerrans, 2002), but both sides agree that the delusion arises from an attempt to make sense of an anomalous experience. This supposition is not universally accepted, of course (Campbell, 2001; Bayne and Pacherie, 2004), but what matters here is that this assumption is needed if we are to justify inferring that patients lack a sense of agency from their acts of other-attribution. We can justify this inference if the lack of a sense of agency is the anomalous experience which the delusion of alien control is an attempt to make sense of. So first, why should the absence of a sense of agency be an anomaly that needs to be explained? Well, it would be, if a conscious sense of agency typically accompanied one’s actions. If this is the case, we would expect that its absence would be noticed and felt to be in need of explanation.
After all, if one feels one’s body move, but one does not seem to be the agent behind the movement, then one would naturally search for a reason that one moved.

We see this assumption that there is a conscious sense of agency accompanying all actions at play in the argument from the occasionality problem. The general failure of a process like the comparator should mean that the sense of agency that is usually present is not. This is an anomaly to be explained by the patient. The patient should show delusions of alien control for all of their actions, but they do not; therefore the comparator model (or whichever process we are considering) is false, quod erat demonstrandum (QED).

To avoid this conclusion, all we need do is drop the assumption that a conscious sense of agency always accompanies our actions. Instead, we need only hypothesize that a representation of own agency, which may or may not be conscious, accompanies our actions. In other words, the output of processes like the comparator or inference to apparent mental state causation is a representation of the subject’s agency which is sometimes conscious and sometimes not. An absence of a sense of agency is thus not always an anomaly which the patient need explain. An absence of representation is not a representation of absence, as the saying goes, and it is particularly not a representation of absence to the subject. It is the subject noticing (i.e., representing to themselves) that the sense of agency is absent which is hypothesized to lead to delusions of alien control, not its mere absence. This noticing of the absence will occur when the sense of agency is expected, and so we might say the absence of a sense of agency is only an anomaly when the subject expects to experience it.

A possible objection to this line of response is to assert, based on introspection, that a conscious sense of agency accompanies all of our actions in the normal case. As such, it is always expected and any absence is an anomaly to be explained. However, introspection gives us poor grounds to assume that there is a ubiquitous sense of agency. What would lead one to assume that there is a conscious sense of agency accompanying every action? We can see where this assumption comes from, and how poorly grounded it is, by an analogy with visual consciousness. A favorite example purporting to show that we are not conscious of as much as we think we are comes from Dennett (1991). This example is so easy to replicate that given minimal resources you can do it yourself right now. All you need is a well-shuffled deck of playing cards. Stare at a point on a wall in front of you. It is important that you continue to stare at this point throughout the entire demonstration. Without looking, randomly select a card and hold it out to one side at arm’s length. Gradually move it toward the center of your vision. At what point can you see the color and number on the card? The typical finding is that it is only about 2 or 3° from the point one is looking at that these features become visible (Dennett, 1991, p. 54). The reason for this has to do with the nature of photoreceptors outside of the fovea on the retina and need not concern us here. What I wish to draw attention to, however, is that on first experiencing this demonstration most people seem surprised (Dennett, 1991, p. 68). Pre-theoretically, we expect to be able to discriminate objects easily when they are presented in our peripheral vision. Dennett suggests, and I agree, that this expectation is based on a folk-theoretical belief that vision presents us with a relatively uniformly clear and colored world in which objects are easily distinguished.
But, as this simple demonstration shows, as do other more rigorous experiments, e.g., Brooks et al. (1980), this is at best only true of the foveated world, and even then with some exceptions (Caplovitz et al., 2008).

Why do we believe this is true of our peripheral vision? We can speculate on many possible reasons for this. One reason might be that things we use as public representations of what we see, e.g., photographs or videos, are somewhat like this. There may be a misbegotten analogy between visual depictions and visual experience. Another more universal proposal comes from Schwitzgebel (2008, p. 255) as well as Dennett (1991, p. 68), who suggest that objects in our peripheral vision appear distinct and colored because they are when we look at them. Whenever one looks to see what object is in one’s periphery one finds it clear, distinct and colored. As such we tend to assume that we always experience those objects as such. This claim provides us with a useful analogy for understanding why accounts like the comparator model and Wegner et al.’s inference to apparent mental state causation needn’t suppose that a conscious sense of agency accompanies every action.

If a model like one of those above is right, then it would be true that our actions are normally accompanied by a representation of our own agency. However, the subject need not be aware of their own agency. The representation could be unconscious but, because the representation is formed with every action, it is there whenever we go “looking” for it, or more generally when it is expected to occur to the subject, i.e., consciously. Just as objects in our periphery always appear clear, distinct and colored when we go looking for them, our representation of agency is always experienced when we go looking for it, thus meeting our expectations. Just as this may lead us to believe that objects in our periphery always appear clear, distinct and colored, this may lead us to believe we always experience a sense of agency accompanying our actions rather than merely representing it.

Accepting this conclusion then, the comparator model or the inference to apparent mental state causation need not suppose that representing one’s own agency is always a conscious sense of agency. Still, one may wonder, how exactly does this affect the occasionality problem? After all, it would still seem to be the case that these models predict that the unconscious representation of agency would be missing for every action performed by the patient suffering delusions of alien control, so should the model still predict that the patient would show the delusion for every action?

The answer to this is no. However, to see why, we need to return to the purported role of consciousness in the formation of delusions such as delusions of alien control. Recall Maher’s proposal that delusions are attempts to make sense of anomalous experiences. In the case we are interested in here, the delusion of alien control arises because the patient attempts to make sense of the absence of a sense of agency. They expect a sense of agency, but it is not there when they “look”, giving rise to an anomalous experience that must be explained. On this view then, an absent sense of agency is only anomalous when it is expected. A subject would not notice the absence of an unconscious representation. It is only when the representation would otherwise become conscious that its absence would be noticeable. Again the absence of the representation is not the same as the subject representing to themselves that something is absent. The upshot of this is that if we hypothesize that the comparator or inference produces an unconscious representation of agency, which only becomes conscious when it is needed by the subject (say in self-recognition or introspecting to see what experiences one has), we find that the occasionality problem is no problem at all.
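A minimal simulation can make this resolution concrete. Everything here is an illustrative assumption of mine (the 20% probability of "looking", the random sampling): the point is only that a representation can be absent for every action while its absence is noticed, and so demands explanation, only occasionally.

```python
import random

random.seed(0)  # deterministic run for the demonstration

def noticed_absences(n_actions, representation_elicited, p_looks=0.2):
    """Count the actions for which the missing sense of agency is actually
    noticed, i.e., the subject 'looks for' the feeling and finds it absent."""
    noticed = 0
    for _ in range(n_actions):
        looks = random.random() < p_looks          # subject expects the feeling
        if looks and not representation_elicited:  # absence noticed only then
            noticed += 1
    return noticed

# The representation is absent for all 100 actions, yet only the inspected
# subset registers as an anomaly in need of explanation, which is why only
# some actions would end up attributed to an alien agent.
n = noticed_absences(100, representation_elicited=False)
print(n, "of 100 actions noticed as lacking agency")

# When the (unconscious) representation is reliably elicited, looking never
# turns up an anomaly.
assert noticed_absences(100, representation_elicited=True) == 0
```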

It is not so much that the problem is solved as it doesn’t arise in the first place, all because conscious and unconscious representations of own agency play different roles in cognition. Only conscious representations can be expected by the subject, and only their absence can be noticed by the subject. The normal case is that actions are not accompanied by a conscious sense of agency (only an unconscious representation) and so a lack of this feeling is typically not an anomaly that the patient suffering delusions of control needs to explain. It is only when they would “look for” (however this analogy is to be cashed out mechanistically—see below) this representation that it is expected and so its absence is an anomaly that needs to be explained.

This consideration of the occasionality problem shows us that there are theoretical benefits to taking seriously the distinction between conscious and unconscious representations of own agency. By doing so and considering the different roles conscious and unconscious representations play in cognition we see that the occasionality problem doesn’t arise. As such we don’t need to add assumptions to our models, such as assuming that they only fail some of the time, which lack supporting evidence. However, we do have a new set of issues to consider. What then is the analogy of “looking for” the representation of agency that produces the expectation of the sense of agency needed to explain delusions of alien control? This question is no less than what distinguishes an unconscious representation of agency from a conscious sense of agency, and this is what Dennett has called the “Hard Question” of consciousness.

The Hard Question

Traditional accounts of the sense of agency, such as the comparator model and the inference to apparent mental state causation, are designed to answer only one question: how is a representation of one’s own agency elicited? This is a vitally important question in the study of the sense of agency, but to treat it as the only question is to treat awareness as the end of the line of a computation, the dreaded Cartesian Theatre, and to deny the possibility of an unconscious representation of one’s own agency. In addition to this question, we also need to ask of accounts of agency what Dennett calls the “Hard Question” [not to be confused with any purported “Hard Problem” (Chalmers, 2003)3]: after the representation of own agency is elicited by one or other of these mechanisms, well, then what happens (Dennett, 1991, p. 255)? What is the difference between a representation of my own agency of which I become aware and one that languishes forever in the apparent irrelevance of unconsciousness?

The analogy employed above of “looking” for the sense of agency suggests one possible answer: perhaps an unconscious representation of agency becomes conscious when the subject’s attention is directed to it. In the following section I consider this possibility. Having found it wanting, I consider a further possibility: that the answer to the Hard Question is that the representation enters into the mechanisms required for verbal report. I argue that this answer is also unsatisfactory, as it is inconsistent with behavioral evidence of a sense of agency in non-verbal animals. These first two options are Functionalist theories: they propose that consciousness consists in a representation playing a certain role in cognition. Although these two specific proposals seem to fail on empirical grounds, it is important to note that other Functionalist theories, notably those that identify consciousness with multiple functional roles, remain open. Finally, I propose a radical alternative, suggesting that the answer to the Hard Question is to be found not in the uses to which representations are put within a cognitive system, but in the nature of the representations themselves. Regardless of which of these research agendas individual researchers choose to pursue, it is clear that we do not yet have an answer to the Hard Question for the sense of agency, nor do we spend enough time thinking about it.

Attention

One potential answer to the Hard Question is attention. Such an answer is suggested by well-known cases of inattentional blindness, where subjects fail to see perfectly obvious stimuli (like a woman in a gorilla suit) simply because their attention is directed elsewhere (Mack and Rock, 1998). More specifically, let us hypothesize that the difference between an unconscious representation of agency and the conscious sense of agency is that the conscious representation is attended to. If this is true then we would have a clear research agenda: understand how and why a representation of agency is selected or not selected for attention and understand the mechanisms of attention.

Such a view has not been developed in detail for the sense of agency; indeed, I am suggesting here that consideration of the Hard Question with respect to the sense of agency has been neglected almost entirely. Nevertheless, attention-based accounts of consciousness do have some currency in the explanation of perceptual consciousness; Prinz (2000, 2012), for example, advocates such a view. Unfortunately, evidence is mounting that attention is not a good answer to the Hard Question, at least not on its own, as attention is not sufficient for consciousness. That is, subjects can attend to things of which they are not conscious. Here I discuss one well-studied example.

Norman et al. (2013) have provided compelling evidence that subjects can visually attend to objects, namely two-dimensional shapes, even when they cannot consciously see those objects. They start from prior observations of the effects of taking two-dimensional shapes as the objects of attention in color discrimination tasks. In these tasks, subjects are asked to indicate the color of a circle with a button press. Before the target colored circle appears, a supraliminal spatial cue is presented. In the trials of interest here, the cue appears some distance from where the target circle will ultimately appear; the target may then appear either within the same shape as the cue or within a different shape. See Figure 1 for an example layout.
[Figure 1 image: http://www.frontiersin.org/files/Articles/93110/fnhum-08-00434-HTML/image_m/fnhum-08-00434-g001.jpg]

Figure 1. The spatial layout of the stimuli. In the center we see a fixation cross; above and below are two rectangles. A cue appears at the x and disappears, followed by a target at one of the two circles. The subject’s task is to indicate the color of the target. Subjects respond faster to targets appearing within the same shape as the cue, even though they are as far from the cue as the target in the other shape.
When the target appears in the same object as the cue, response times are facilitated (Egly et al., 1994). Norman et al. take this as characteristic of attention to such shapes.

Norman et al. repeated this experiment, but made the shapes invisible. They presented on a screen an array of Gabor patches whose orientation rapidly alternated between vertical and horizontal. Within the array, rectangles were defined by Gabor patches flickering out of phase with the remainder of the array (Norman et al., 2013, p. 838): when the background patches were vertical, those defining the rectangle were horizontal, and vice versa. Observing the array, subjects reported seeing flickering Gabor patches, but were unable to see the rectangles. Indeed, subjects were no better than chance when asked to guess whether or not such flickering displays contained rectangles (Norman et al., 2013, p. 840). Despite the invisibility of the shapes, there was a facilitation effect in the color discrimination task characteristic of attention being directed at the shapes. That is, subjects were faster at responding to targets which appeared in the same shape as the cue than to targets which appeared the same distance from the cue but in a different shape (Norman et al., 2013, p. 839).
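The logic of the facilitation analysis can be sketched in a few lines. The reaction times below are invented for illustration; they are not Norman et al.'s data. The signature of object-based attention is simply that mean reaction time is lower for targets inside the cued shape than for equidistant targets outside it.

```python
from statistics import mean

# Hypothetical reaction times (ms) for targets appearing inside the cued
# shape versus inside the other shape, with cue-target distance held constant.
same_object_rts = [412, 398, 420, 405, 410, 401]
diff_object_rts = [431, 445, 428, 439, 436, 442]

# A positive difference is the same-object facilitation effect that Norman
# et al. treat as the signature of attention to the shape -- observed even
# when the shape itself is invisible.
facilitation = mean(diff_object_rts) - mean(same_object_rts)
print(f"same-object facilitation: {facilitation:.1f} ms")  # 29.2 ms here
```

In the actual study this comparison would of course be backed by a proper within-subjects significance test rather than a bare difference of means.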

In this study we see an effect characteristic of attention being directed at an object, despite the object being invisible. This demonstrates that subjects can attend to shapes of which they are not conscious. In general, this also suggests that attention is not sufficient for consciousness. Without a reason to think that the sense of agency will be an exception to this, it seems unlikely that attention will answer the Hard Question for the sense of agency.

Reportability

Often we take it that we can be confident that a subject experiences something if they are able to verbally report it. Although such reports are susceptible to a variety of introspective omissions and commissions (Dennett, 1991, p. 96; Schwitzgebel, 2008), in practice verbal reports (especially questionnaire responses) are very often treated as the best way to operationalize experience. Indeed, the theories of the sense of agency introduced above are built on studies that use questionnaires to ask subjects to report their experiences of agency. At the heart of this approach lies the intuition that, however imperfectly, we are able to talk about those things that we experience, but not those things that reside in our unconscious minds. This intuition suggests an approach to the Hard Question: perhaps the difference between conscious and unconscious representations is just that conscious representations are available for report. Although such an approach would be highly controversial (Block, 2007), there is no approach to the Hard Question that is not controversial, and this proposal remains live.

That said, we do have strong reason to doubt that it is reportability that distinguishes conscious and unconscious representations of own agency, as there are many non-verbal animals that display evidence of experiencing a sense of agency. This suggests that being available for verbal report is not necessary for a conscious sense of agency.

Good evidence for this comes from the mirror self-recognition test. This test, first proposed by Gallup (1970), involves surreptitiously marking an animal (usually when anesthetized) with a non-irritating, odorless dye on a part of the body that cannot be seen without a mirror (such as the forehead). An animal is deemed to pass the mirror self-recognition test if there is a significant increase in mark-directed behavior coincident with the animal observing itself in the mirror (Gallup, 1970, p. 87). Such behavior indicates that the animal has recognized itself in the mirror, as it uses the mirror to direct actions towards itself. A sense of agency is needed to pass such tests. To learn to recognize oneself in a mirror one needs to realize that the actions one sees in the mirror are equivalent to the actions one is currently performing (Povinelli, 2001, p. 855). In order to recognize oneself in a mirror, then, one needs to know (amongst many other things) what action one is performing. This is a function of the sense of agency (Povinelli and Cant, 1995). As such, passing the test is good evidence for a sense of agency.
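Gallup's pass criterion can be expressed as a simple comparison across observation sessions. The counts and the threshold rule below are hypothetical, standing in for the real data and the significance test an actual study would run; they just make the shape of the criterion concrete.

```python
from statistics import mean

# Hypothetical mark-directed touches per observation session.
touches_without_mirror = [0, 1, 0, 0, 1]
touches_with_mirror = [4, 6, 5, 7, 5]

def passes_mirror_test(baseline, mirror, min_increase=3):
    """Toy stand-in for a significance test: pass if mean mark-directed
    behavior rises by at least min_increase once the mirror is present."""
    return mean(mirror) - mean(baseline) >= min_increase

print(passes_mirror_test(touches_without_mirror, touches_with_mirror))  # True
```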

Where this creates a problem for using reportability as an answer to the Hard Question is in the fact that many non-verbal animals pass the mirror self-recognition test. This includes chimpanzees (Gallup, 1970), orang-utans, human raised gorillas (Povinelli and Cant, 1995), bottlenose dolphins (Marten and Psarakos, 1994) and European magpies (Prior et al., 2008). These animals thus show evidence of experiencing a conscious sense of agency. As such, verbal report does not seem necessary for consciousness, and thus investigating how unconscious representations of agency become available for verbal report is a non-starter as a solution to the Hard Question.

The solutions considered so far to the Hard Question are Functionalist theories. They posit that for a representation to be conscious is for it to be used in a certain way, say by being attended to or by being made available for report. On such views it is use which constitutes consciousness. Whilst the two options considered here do seem like non-starters, there are other Functionalist theories available. Other accounts, such as Dennett’s (1991) multiple drafts model or Baars’ (1988) global workspace, suggest that consciousness is not a single use within a cognitive system, but rather a conglomeration of many uses, and these options remain live. My point here is not to solve the problem of what distinguishes conscious and unconscious representations, but merely to suggest that in sense of agency research this is a problem we should spend more time on. Next, I turn to a theoretical basis for approaching the Hard Question that offers a fundamentally different kind of solution to the options considered so far.

Vehicle Theories

Vehicle theories of consciousness answer the Hard Question in a rather different way. The key issue we are getting at is: what is the difference between an unconscious and a conscious representation of own agency? The proposals considered thus far have followed Dennett in hypothesizing that this difference is a difference between how unconscious and conscious representations are processed (e.g., are they subject to attention or made available for verbal report). In other words the difference is a matter of what is done with the representation. Such approaches are Functionalist theories in that they consider the particular use of a representation within a cognitive system to constitute that representation’s being conscious.

Vehicle theories, in contrast, hypothesize that the difference between conscious and unconscious representations is not how they are processed, but in the nature of the representation itself (O’Brien and Opie, 1999, p. 128). The nature of conscious vehicles of representation (also known as representation bearers) is hypothesized to be different to the nature of unconscious representing vehicles. On such views consciousness is a way of representing the world using different kinds of vehicle than those used by unconscious representations. On this kind of view the answer to the Hard Question is not “and then some additional processing occurs” but rather, “and then the vehicle of representation is changed from one form to another”.

O’Brien and Opie propose a general answer to this question making use of distinctions in kinds of representing vehicles offered by Dennett (1982). In particular they focus on a distinction between “explicit” representations which are: “physically distinct objects, each possessed of a single semantic value” (O’Brien and Opie, 1999, p. 133) and “potentially explicit” and “tacit”4 representations which are to be understood in terms of a computational system’s capacity to make certain information explicit in the above sense. In general, O’Brien and Opie hypothesize that we are conscious of all and only things that are represented in an explicit form. All unconscious representations would then take the form of potentially explicit or tacit representations.

According to this version of a Vehicle Theory, a conscious sense of agency would be an explicit representation of own agency: that is, a discrete vehicle, such as a stable pattern of activity across a layer of neurons (O’Brien and Opie, 1999, p. 138), with that content. An unconscious representation of agency would not be a discrete vehicle, but a disposition in the cognitive system to produce such a representation. To allow for unconscious representations of own agency on such a view, the output of the comparator model or Wegner et al.’s inference needs to be reconceived. It is not an explicit representation of own agency, but rather a change in the dispositions of a computational system to produce such a representation.
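The explicit/potentially-explicit contrast can be made concrete with a minimal connectionist sketch. Everything here is an illustrative assumption, not O'Brien and Opie's actual model: the weights play the role of a dispositional (potentially explicit) state, while the settled activation pattern plays the role of a discrete, explicit vehicle.

```python
import math

# Dispositional state: nothing in this list explicitly tokens agency content;
# the weights merely dispose the system to produce a certain pattern.
weights = [0.9, -0.4, 0.7]

def make_explicit(inputs):
    """Drive the network to a concrete activation pattern -- a discrete
    vehicle which, on the Vehicle Theory sketch, would be an explicit
    (and hence conscious) representation."""
    return [1 / (1 + math.exp(-w * x)) for w, x in zip(weights, inputs)]

# Until the system settles, the content exists only potentially; afterwards,
# a physically distinct pattern exists.
explicit_rep = make_explicit([1.0, 1.0, 1.0])
print(explicit_rep)
```

On this picture the Hard Question becomes: what determines when the system is driven to settle into such a pattern, rather than what is subsequently done with it.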

If such an approach is correct then we have a new way to approach the Hard Question for the sense of agency. How is the output of the comparator model, or whichever account we ultimately agree on, made explicit? Why is it sometimes not made explicit? Is this a matter of the subject metaphorically “looking for” it? If so, how would that be understood more literally?

The benefit of taking this approach is that it offers a new kind of answer to the Hard Question, by offering a new conception of which properties of a computational system distinguish conscious from unconscious representations. With this reconceptualization we can deploy O’Brien and Opie’s hypothesis for the sense of agency and answer the Hard Question in a way that does not seem to be falsified by the evidence that undermines the other answers considered here. In addition, a research agenda is set: why and how is a representation of own agency sometimes made explicit? Of course, this question has not yet been answered. Indeed, whichever form of the Hard Question we prefer, it is clear that we have not yet answered it, although there seem to be promising avenues by which to approach it. And so I implore us, as a community, to ask ourselves: now that we have made progress in understanding how a representation of own agency is elicited, then what happens?

Conclusion

In this paper I have argued that in order to explain the sense of agency we need to move beyond proximal etiological explanations and consider the Hard Question. Although such accounts, including the comparator model and the inference to apparent mental state causation, are powerful as far as they go, they fail to distinguish between conscious and unconscious representations of own agency. As the consideration of the occasionality problem suggests, not only is this a real distinction, but such representations can play very different roles in cognition. Finally, I have suggested that there are ways we can approach the Hard Question, and although the specifics of certain particular approaches might seem like non-starters on empirical grounds, it should be clear that there are alternative approaches, both Functionalist and Vehicle, available, and specific questions to ask. Now what happens?

Conflict of Interest Statement

The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Footnotes

1. ^ As a rough guide, 1 degree is approximately the angle subtended by the width of your thumbnail held at arm’s length.
2. ^ In the normal case, of course, our eyes saccade constantly, allowing us to build a much more detailed visual representation than is possible from staring at a fixation point, making the area of clear vision significantly larger than the 2 to 3° observable in a fixation task. This does not affect the central Dennettian claim that the periphery is not clear and colored in the way that we would typically assume. Nor do we typically reflect on how much moving our eyes is necessary for seeing the way we do.
3. ^ The supposed “Hard Problem” of consciousness is the problem of explaining how mental and physical states give rise to conscious experience, given that it seems that no explanation in terms of the structure or function of mental states is sufficient to explain this (Chalmers, 2003). This is a problem closely tied with mysterian and dualistic approaches to consciousness. In contrast the “Hard Question” is a question within the materialist tradition, which works from arguments that a structural or functional explanation of consciousness is possible in principle, and asks what is the difference between a conscious and unconscious mental representation.
4. ^ There are differences between “potentially explicit” and “tacit” representations. These differences become important when we consider the kinds of computations being performed within a cognitive system, but won’t play a role in this short statement of O’Brien and Opie’s hypothesis regarding consciousness (indeed O’Brien and Opie argue that a Vehicle Theory could only be true for connectionist systems and that the “potentially explicit” versus “tacit” distinction does not apply to such systems).

References available at the Frontiers site.