LIKE PHOTOGRAPHS, brain scans are both factual records and cultural artifacts; they document real biological events and yet are interpreted within a social context. Unlike photographs, however, brain scans are not indexical, which is to say they are not direct mechanical imprints of a world beyond the image. But to most nonexperts, the products of functional magnetic resonance imaging (fMRI) appear as photographs. And, like photographs, they are often seen only for their reportorial and evidentiary qualities. Calling them “scans” suggests they present a direct empirical view of activity in the brain. In fact, the fMRI machine is just a tool: It records blood-oxygen levels in the brain as subjects respond to specific stimuli, then algorithmically manipulates the experimental data to produce an overall image, which scientists must then interpret. And since fMRI science is a relatively young field, scientists don’t agree on the best practices of interpretation, much less what the data mean.
For example, let’s say researchers scan the brain of someone who has just been insulted, with the goal of locating the neural correlates of anger. In the resulting fMRI-generated image, a red smudge appears over a certain area of the cortex. That smudge is a sign of a high blood-oxygen level. But there’s no evidence of causality; that particular cluster of neurons doesn’t necessarily begin firing whenever we start to feel angry. Furthermore, the fMRI image doesn’t show how the specific area of the brain represented by the red smudge is working within a network of neurons spread throughout the brain. Conversely, one region may be more vital to a task than a scan suggests: Since the brain becomes more efficient at tasks it does repeatedly or automatically, the blood-oxygen levels may underrepresent how central the region is to a task. More often than not, neuroimaging results alone do not tell a clear story.
So, despite the rhetoric around fMRI images, they are neither photographic nor indexical: They don’t have a continuous material connection to the things they reference. What they deliver is the appearance of objectivity, the illusion of direct perceptual evidence. This undergirds the recent introduction of neuroimaging into criminal proceedings in the US. Functional MRI scans were first used in an American courtroom in 2009, during the sentencing hearing for Brian Dugan, who was already serving a life sentence for murdering two young girls when he admitted to raping and killing a third in 1983. Sentencing hearings are more lenient than trials in terms of the evidence that can be admitted, and Dugan’s lawyer, Steve Greenberg, thought to introduce fMRI scans to show that there was a clear physical basis for Dugan’s psychopathology. Seeking a lightened punishment, Greenberg tried to convince the jury that Dugan was not fully responsible for his crimes: His brain was faulty; he wasn’t wired to care. “Someone shouldn’t be executed for a condition that they were born with, because it’s not their fault,” Greenberg said. “The crime is their fault, and he wasn’t saying it wasn’t his fault, and he wasn’t saying, give [me] a free pass. But he was saying: Don’t kill me, because it’s not my fault that I was born this way.” Greenberg’s strategy failed, and Dugan was sentenced to death, but only after ten hours of deliberation. Those extra hours testify to the potential impact of neuroimaging in courtrooms. Since Dugan’s hearing, scientists and legal professionals have speculated that scans may soon be employed to establish a suspect’s guilt or innocence by revealing familiarity with the weapon used to commit a crime or the scene where one occurred. Involvement in a crime would be judged by indications of recognition and memory in the brain.
This process may also help victims identify their perpetrators through personalized facial-recognition studies—a high-tech police lineup in which a victim is scanned while being shown images of potential perpetrators.
So far, brain scans have been used in trials in the US to show how brain trauma or inherent defects impair the ability of defendants to determine right from wrong or control their actions. But not without controversy. A 2008 survey, “Flickering Admissibility: Neuroimaging Evidence in the US Courts,” published in Behavioral Sciences & the Law, reveals that anatomical CT and structural MRI scans are routinely admitted to show disease or trauma. However, courts are wary of what can be inferred from functional MRI imagery and dubious of claims that such images alone can prove insanity or incompetency. Another article, “Through a Scanner Darkly: Functional Neuroimaging as Evidence of a Criminal Defendant’s Past Mental States,” published in the Stanford Law Review in 2010, warns that fMRI images are far more prejudicial than probative of a suspect’s mental state, and so extreme caution should be exercised when using them in courts. Yet there is growing interest in the legal potential of these processes. Mounting a defense based on organic brain defects is an effective way to challenge a person’s culpability—the extent to which a defendant “freely” decided to commit a crime.
But in Dugan’s case, if his brain is pathologically wired not to care, does that make him less responsible for his actions? We expect citizens to uphold the social contract; when they do not, it is tempting to understand their infractions as results of a predictable, hard-wired abnormality rather than as products of choices they have made or complex interactions among brain, body, and world. When used reductively, fMRI images can seem to resolve the contradiction between cultural expectations and actual behavior by proving that people have a hard time—or are incapable of—overriding certain natural mechanisms. Which leads to the question of free will and biological determinism. Writing about “the shift from blame to biology” in the courtroom, neuroscientist David Eagleman stresses the impact of an individual’s unique physiological conditions on the ability to control actions and make decisions. Some of us are naturally better at suppressing negative impulses, so we shouldn’t all be judged by the same yardstick. “Acts cannot be understood separately from the biology of the actors—and this recognition has legal implications,” Eagleman wrote in the Atlantic last year, challenging a simplistic notion of choice and culpability. But regardless of the thorny philosophical debate about free will, the use of brain scans to prove underlying physiological causes of criminality requires interpretation and belies the indeterminacy of the supposed evidence.
Roland Barthes writes of the impossibility of any photograph being purely evidentiary in his 1961 essay “The Photographic Message,” in which he argues that images are interpreted as soon as they are perceived. He calls this phenomenon, which involves categorizing images by reference to language and social codes, “perceptive connotation.” For Barthes, no photograph has the power to perfectly denote what it pictures, despite its inherent ties to the physical world. This theory is particularly relevant to scientific images, which have long formed the rhetorical basis for claims of knowledge and are too often accepted as unquestionable representations of some external reality. For example, one recent study in the journal Cognition, “Seeing Is Believing: The Effect of Brain Images on Judgments of Scientific Reasoning,” found that readers were more likely to believe a scientific report if it included fMRI images than if it included either traditional bar graph representations of data or no images at all; even when the science presented in the report was flawed, the presence of neuroimages was persuasive. (Recall the Stanford Law Review article’s warning that fMRI images are more prejudicial than probative.) Such technical pictures, and the related memes that bubble up around them, provide the imago of human biology as the anchor for individual identity. And by emphasizing biology as opposed to the environments in which the body becomes the self, they allow viewers to overlook how complex and conditional the scientific findings based on these images are.
Once neuroimages enter the mainstream media (e.g., “‘God Spot’ Researchers See the Light in MRI Study” in the Guardian and “Liar, Liar, Scans on Fire: fMRI Could Have Predicted Madoff Would Break Promises” on DailyFinance.com), their complexity is further reduced and their impact is magnified. For example, a 2003 Guardian report titled “The Brain Can’t Lie,” covering a study conducted at Baylor College of Medicine in Houston, ran with the teaser “Brain scans can reveal how you think and feel, and even how you might behave. No wonder the CIA and big business are interested.” The title suggests a verifiable brain-based record of all our intentions and actions—the ability to read minds, basically. Articles like this, which are published in the thousands each year (and often turned into books that populate the best-seller lists), project far more certitude than the technology can deliver and perpetuate the myth of a pat brain-mind connection. According to psychologists Paolo Legrenzi and Carlo Umiltà, the authors of Neuromania: On the Limits of Brain Science (2011), our infatuation with the brain—or at least with the corresponding neuro- prefix—has less to do with any definitive progress made by researchers than with the public’s susceptibility to brain scans. We’ve ended up with “a revival of the classic utopia of reducing the mind to the functioning of the brain,” a vision of “man as material machine.”
I BEGAN THINKING ABOUT why these scientific methods and the resulting images have such a hold on our imaginations a couple of years ago, when I started shadowing a team of cognitive neuroscientists as they developed a study about the neural and cognitive bases of semantic knowledge. We eventually decided I’d be one of the test subjects. The study, which began late last spring, has given me first-hand experience with the fMRI machine and how data are collected and interpreted into usable results.
Scientists are now employing fMRI technology—which has been in practical use since the early 1990s—to study a wide range of neurological phenomena: visual perception, object recognition, memory, the effects of stroke and brain injury, depression, schizophrenia, degenerative diseases like Alzheimer’s and Parkinson’s, personality traits, fear, racial attitudes, deception, our relationship to food and sex, how we make financial and political decisions, and so on. On the most basic level, an fMRI machine visualizes changes in the amount of oxygen flowing through the brain; more flow equals more activity. The scanner itself consists of a long cylindrical chamber containing powerful superconducting magnets. These align the protons in the subject’s cells with the magnetic field. Then a radio-wave pulse is sent to temporarily disrupt the magnetic field and thus the protons’ alignment. When that interference stops, the machine measures the energy released as the protons settle back into their initial alignments. Due to the magnetic properties of hemoglobin, oxygenated blood shows the strongest response during this process; this response is what the scanner measures. More blood-oxygen consumption implies more energy use in those areas. Since the neurons in the brain are always active—in a global sense the brain is always “on”—researchers must calibrate their measurements against a baseline condition to subtract this background activity.
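The logic of that calibration can be caricatured in a few lines of code. The sketch below is a toy, with invented numbers and none of the statistical machinery of an actual fMRI analysis; it only illustrates the idea of subtracting a resting baseline from a task measurement so that a voxel’s “activation” is relative, never absolute:

```python
import random
from statistics import mean

random.seed(0)

# Toy illustration (invented numbers, not real fMRI data): each "voxel"
# is sampled 200 times while the subject rests, then 200 times during a
# task. The brain is always "on", so every voxel has a noisy baseline.
def sample(level, n=200):
    return [random.gauss(level, 1.0) for _ in range(n)]

rest = [sample(100.0) for _ in range(4)]  # four voxels at rest
task = [sample(100.0) for _ in range(4)]  # the same voxels during the task
task[1] = sample(103.0)                   # one hypothetical responsive voxel

# "Calibration": subtract the mean resting signal from the mean task
# signal, leaving only the task-related change in blood oxygenation.
activation = [mean(t) - mean(r) for t, r in zip(task, rest)]
# Only voxel 1 rises clearly above the background activity.
```

The point of the toy: a voxel “lights up” only in comparison with a baseline condition, which is one of the interpretive choices the finished image conceals.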
One advantage of fMRI technology is that it gives relatively safe and noninvasive access to the brain while it is engaged in specific tasks. But the resultant images allow us to observe neuronal activity only indirectly. They track blood-oxygen levels, but there is no consensus about the exact meaning of increased blood flow; they seem to provide a realistic image of brain activity, but really the data are manipulated algorithmically to generate visual images. The images are originally produced in grayscale, with gradients set to represent specific levels of activation, and later colored, which makes them more legible and eases interpretation. (Red is generally used to indicate the highest levels of activation.) Background noise—the result of body movement and random thoughts during scanning—is minimized in the final image. Furthermore, scans that are not used for diagnostic purposes are actually aggregate images, synthesizing data sets from a number of test subjects rather than representing unique occurrences of a neurological event in a single individual. Additional algorithms are used to generate this composite image from individual brain scans. (The presence of false positives, false negatives, outliers, and abnormal scans can muddy this picture, so scientists must make their own decisions about which data to exclude.)
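The aggregation, exclusion, thresholding, and coloring described above can likewise be sketched in miniature. Everything here is assumed and simplified: the map size, the outlier rule, and the color cutoffs are invented for illustration, not drawn from any actual analysis package:

```python
import random
from statistics import mean, stdev

random.seed(1)

# Toy sketch of how a composite scan might be assembled (a deliberate
# simplification, not any lab's real pipeline): several subjects'
# activation maps are averaged, outliers are excluded, and the result is
# thresholded and color-coded for display.
SIZE, SUBJECTS = 8, 6

def subject_map():
    # Noise everywhere, plus a shared "active" patch in the top-left.
    m = [[random.gauss(0.0, 0.2) for _ in range(SIZE)] for _ in range(SIZE)]
    for y in range(2):
        for x in range(2):
            m[y][x] += 2.0
    return m

maps = [subject_map() for _ in range(SUBJECTS)]

# Exclude any subject whose overall signal deviates wildly, analogous to
# a scientist discarding abnormal or corrupted scans.
overall = [mean(v for row in m for v in row) for m in maps]
center, spread = mean(overall), stdev(overall)
kept = [m for m, o in zip(maps, overall) if abs(o - center) <= 2 * spread]

# Composite image: the voxel-wise average across the kept subjects.
composite = [[mean(m[y][x] for m in kept) for x in range(SIZE)]
             for y in range(SIZE)]

# Threshold and color: only voxels above the cutoff read as "activated",
# with red reserved for the strongest responses, as in published figures.
THRESHOLD = 1.0
def color(v):
    if v > 1.5 * THRESHOLD:
        return "red"
    if v > THRESHOLD:
        return "yellow"
    return "gray"

colored = [[color(v) for v in row] for row in composite]
```

Each of these steps—the exclusion rule, the threshold, the color scale—is a decision made by a person, which is precisely what the finished image’s photographic gloss disguises.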
This is all to say that brain scans are not direct images of the body, like traditional X-rays; they are highly constructed images, graphic articulations of data, subject to interpretation however much they bear the stamp of a real referent. But they are often perceived to share the indexical quality of photographs, which explains their rhetorical power. For Barthes, the essential paradox of photography is that it is simultaneously natural and cultural. A photograph is natural because it is materially continuous with reality—Barthes calls this “analogical perfection.” Despite the reduction of the subject by the photograph—which is two-dimensional, distorts perspective, limits color, and jettisons sound and smell—the image is still an analogue of the reality it represents. Light reflects off the subject, travels through the air, and is mechanically recorded on sensitized film; there is no physical rupture between the subject and the resulting image. The effect of this empirical link is that the photograph appears as evidence that the subject existed just as it is pictured.
Barthes acknowledges that the photograph is also always culturally coded, which presents a challenge to its evidentiary status. Photographs never exist in isolation; they are embedded in specific contexts such as magazines and newspapers, abutting captions and articles. Photographs are viewed within a historically particular stream of other media, images, and information, with their own values and meanings. Moreover, the connotative associations of the subject of a photograph—which have to do with how the photographer frames the image—accompany whatever visual fact is being communicated. Yet the sense that photography communicates fact is so convincing as to paper over all of this messiness. The social, cultural, and linguistic aspects of the image tend to be overwhelmed or “innocented” by the “analogical plenitude” of the medium, according to Barthes. Nevertheless, the viewer’s understanding of a photograph depends on knowledge of specific signs, and so the meaning of photographs is always conditional and fluctuates over time. This is a part of the paradox: We never perceive a photograph only in a natural, denotative state.
The advent of digital photography since Barthes’s time has done little to diminish photography’s rhetorical power. Digital photography does not undo the direct empirical link between picture and world: A digital sensor may replace the chemical substrate of film-based photography, but there is still a continuous stream of light connecting referent and image. The introduction of digital editing software makes the manipulation of photographs much more common, but this is not a fundamental shift—film negatives have always been manipulated in the process of printing (dodging, burning, masking, transposing). People today may be used to the idea that most images they encounter have been digitally or mechanically manipulated; nonetheless, photography’s existential connection to the world is paramount. This speaks precisely to Barthes’s assertion that photos are always both natural and cultural (and this holds true for digital photos).
Despite the prevalence of digital manipulation, our increasingly sophisticated understanding of how images are manufactured and circulated, and widespread skepticism regarding the truthfulness of media, photography is still the most reliable means of visual representation. In contrast to fMRI imagery, photographic evidence is fully accepted in court. Regardless of any suspicion of photography as a truth-telling device, we still want—and, in the courtroom, need—to have a credible record of our experiences. Brain scans are alluring in part because, more than photographs, they seem to bypass the subjective interpretation of the indexical sign. On the surface the technology of fMRI extracts the image maker and the viewer from the equation, leaving us with pure communication. Yet the popular desire for such pure, objective representations far exceeds the ability of science to provide them.
Underlying our interpretation of brain scans is the assumption that human behavior is mostly determined by biology and that the mind and consciousness are nothing more than emergent properties of the brain. Assuming that the legal system will soon catch up to the conclusions of so many neuroscientists, companies like No Lie MRI and Cephos are marketing brain scans as a lie-detection (or truth-verification) service. “Legal battles often revolve around unsubstantiated claims that cannot be proven by hard evidence,” states the “Product Benefits” section of the No Lie MRI website. “In legal cases, NO LIE MRI will enable objective, scientific evidence regarding truth verification or lie detection to be submitted in a similar manner to which DNA evidence is used.” Cephos describes fMRI as an “unbiased and scientifically validated” technology. The inner workings of the mind are directly observable; biology trumps proclaimed intent or conscious reasoning.
No Lie MRI and Cephos are, of course, playing into the increasingly influential idea that it is not meaningful to think about the brain in terms of free will. As Eagleman advocates for neuroscience as a way of making the justice system more fair and efficient, new fields like neurolaw and neuroethics argue for a broad understanding of human agency in terms of biology. Too often, brain scans are treated like souped-up DNA tests, as if they’re capable not just of judging guilt and innocence but of determining the root causes of—and remedies for—the harmful actions that arise from defective brains. But the science cannot support such casuistry. And admitting neuroimages as evidence may replace long-standing structural inequities with newer ones: Justice will increasingly be about scientific diagnostics—and about who can afford the expensive procedures and experts to interpret the results—and less attuned to the political and economic conditions that help determine our actions. As biology is blamed, society is let off the hook.
THE PROMISE OF NEUROSCIENCE has been buttressed by a strong belief that complex subjective phenomena can be reduced to concrete physical events, leading to efforts to locate, for instance, the neural correlates of consciousness. This dream of biological reductionism recalls the Human Genome Project, which ran from 1990 to 2003. Breaking the genomic code seemed to offer simple solutions to medical and behavioral problems; if the body were genetically hardwired, then interventions in one’s genetic expression could wield great power. However, other than providing a handful of genetic tests for specific medical predispositions, mapping the human genome has not produced significant findings or clinical solutions; scientists have just begun interpreting the massive amounts of data. Similarly, despite the persistence of hardcore materialists, cognitive scientists increasingly understand that the relationship between mind and brain cannot be reduced to a matter of neural correlates. Although identifying the physiological substrate of the mind is important, and potentially illuminating, we must also account for the vagaries of environment, culture, language, and innumerable other contingencies whose relation to the hardware of the brain still eludes us.
“To understand consciousness—the fact that we think and feel and that a world shows up for us—we need to look at a larger system of which the brain is only one element,” writes the philosopher Alva Noë in Out of Our Heads: Why You Are Not Your Brain, and Other Lessons from the Biology of Consciousness (2009). “Consciousness is not something that a brain achieves on its own. Consciousness involves the joint operation of brain, body, and world.” As such, images of the brain can never fully represent consciousness or that confounding construction we call the self. The brain is always housed in a body, which interacts with people and places, culture and history, hardship and privilege, education and neglect. Perhaps most importantly, it is the mind—not the brain—that processes complex ideas. And thinking, as much as responding to stimuli, shapes how we behave and, ultimately, who we are. Brain scans cannot account for this, and so often they put forth a vision of the world in which we are reduced to our biology. For those seeking relief from uncertainty, from unbearable subjectivity, this view of ourselves as expressions of physiological facts is comforting.
Of course, the notion that technological progress will ultimately deliver total self-knowledge is a myth. Our biology is but one of many causes and conditions that shape our lives; like the brain itself, these comprise a vast network of interconnections too complex to fully grasp. Rather than exchange volition for biology, or naively proclaim ourselves to be the masters of our own fates, we must remain conscious of the limits of consciousness without devaluing the thoughts and choices that make us human. Brain scans provide a certain image of the powerful biological forces that condition our experiences, but our understanding of ourselves is also culturally coded. Like photographs, our selves never exist in isolation: They are embedded in specific contexts, they depend on one another, and they are historically situated. No photo album could provide a full account of a life. Likewise, any images that purport to offer a purely indexical sign of the self should be cause for apprehension.