Friday, January 02, 2009

The Edge Annual Question — 2009

The Edge Annual Question for 2009 is up at their site, and it's a good one:

WHAT WILL CHANGE EVERYTHING?

"What game-changing scientific ideas and developments do you expect to live to see?"

Here is the text:
New tools equal new perceptions.

Through science we create technology and in using our new tools we recreate ourselves. But until very recently in our history, no democratic populace, no legislative body, ever indicated by choice, by vote, how this process should play out.

Nobody ever voted for printing. Nobody ever voted for electricity. Nobody ever voted for radio, the telephone, the automobile, the airplane, television. Nobody ever voted for penicillin, antibiotics, the pill. Nobody ever voted for space travel, massively parallel computing, nuclear power, the personal computer, the Internet, email, cell phones, the Web, Google, cloning, sequencing the entire human genome. We are moving towards the redefinition of life, to the edge of creating life itself. While science may or may not be the only news, it is the news that stays news.

And our politicians, our governments? Always years behind, the best they can do is play catch up.

Nobel laureate James Watson, co-discoverer of the DNA double helix, and genomics pioneer J. Craig Venter were recently awarded Double Helix Awards from Cold Spring Harbor Laboratory for being the founding fathers of human genome sequencing. They are the first two human beings to have their complete genetic information decoded.

Watson noted during his acceptance speech that he doesn't want government involved in decisions concerning how people choose to handle information about their personal genomes.

Venter is on the brink of creating the first artificial life form on Earth. He has already announced transplanting the information from one genome into another. In other words, your dog becomes your cat. He has privately alluded to important scientific progress in his lab, the result of which, if and when realized, will change everything.

— John Brockman
Editor and Publisher

And here is the list of respondents:

CONTRIBUTORS

Damn, that's a who's who of thinkers.

Here are a couple of responses from people I think are cool.

Mihaly Csikszentmihalyi
Psychologist; Director, Quality of Life Research Center, Claremont Graduate University; Author, Flow

THE END OF ANALYTIC SCIENCE

The idea that will change the game of knowledge is the realization that it is more important to understand events, objects, and processes in their relationship with each other than in their singular structure.

Western science has achieved wonders with its analytic focus, but it is now time to take synthesis seriously. We shall realize that science cannot be value-free after all. The Doomsday Clock on the cover of the Bulletin of the Atomic Scientists, ticking ever closer to midnight, is just one reminder that knowledge ignorant of consequences is foolishness.

Chemistry that shrugs at pollution is foolishness. Economics that discounts politics and sociology is just as ignorant, as are politics and sociology that discount economics.

Unfortunately, it does not seem to be enough to protect the neutral objectivity of each separate science, in the hope that the knowledge generated by each will be integrated later at some higher level and used wisely. The synthetic principle will have to become a part of the fundamental axioms of each science. How shall this breakthrough occur? Current systems theories are necessary but not sufficient, as they tend not to take values into account. Perhaps after this realization sets in, we shall have to re-write science from the ground up.

* * *

Howard Gardner
Psychologist, Harvard Graduate School of Education; Author, Five Minds for the Future

CRACKING OPEN THE LOCKBOX OF TALENT

What is talent? If you ask the average grade school teacher to identify her most talented student, she is likely to reject the question: "All my students are equally talented." But of course, this answer is rubbish. Anyone who has worked with numerous young people over the years knows that some catch on quickly, almost instantly, to new skills or understandings, while others must go through the same drill, with depressingly little improvement over time.

As wrongheaded as the teacher's response is the viewpoint put forward by some psychological researchers, and most recently popularized in Malcolm Gladwell's Outliers: The Story of Success. This is the notion that there is nothing mysterious about talent, no need to crack open the lockbox: anyone who works hard enough over a long period of time can end up at the top of her field. Anyone who has the opportunity to observe or read about a prodigy — be it Mozart or Yo-Yo Ma in music, Tiger Woods in golf, John von Neumann in mathematics — knows that achievement is not just hard work: the differences between performance at time 1 and successive performances at times 2, 3, and 4 are vast, not simply the result of additional sweat. It is said that if algebra had not already existed, precocious Saul Kripke would have invented it in elementary school: such a characterization would be ludicrous if applied to most individuals.

For the first time, it should be possible to delineate the nature of talent. This breakthrough will come about through a combination of findings from genetics (do highly talented individuals have a distinctive, recognizable genetic profile?); neuroscience (are there structural or functional neural signatures, and, importantly, can these be recognized early in life?); cognitive psychology (are the mental representations of talented individuals distinctive when contrasted with those of hard workers?); and the psychology of motivation (why are talented individuals often characterized as having "a rage to learn, a passion to master"?).

This interdisciplinary scientific breakthrough will allow us to understand what is special about Picasso, Gauss, and J.S. Mill. Importantly, it will illuminate whether a talented person could have achieved equally in different domains (could Mozart have been a great physicist? Could Newton have been a great musician?). Note, however, that it will not illuminate two other issues:

1. What makes someone original, creative? Talent and expertise are necessary but not sufficient.
2. What determines whether talents are applied to constructive or destructive ends?

These answers are likely to come from historical or cultural case studies, rather than from biological or psychological science. Part of the maturity of the sciences is an appreciation of which questions are best left to other disciplinary approaches.

* * *

Steven Pinker
Johnstone Family Professor, Department of Psychology, Harvard University; Author, The Stuff of Thought

IF YOU INSIST: PERSONAL GENOMICS?

I have little faith in anyone’s ability to predict what will change everything. A look at the futurology of the past turns up many chastening examples of confident predictions of technological revolutions that never happened, such as domed cities, nuclear-powered cars, and meat grown in dishes. By the year 2001, according to the eponymous movie, we were supposed to have suspended animation, missions to Jupiter, and humanlike mainframe computers (though not laptop computers or word processing – the characters used typewriters). And remember interactive television, the internet refrigerator, and the paperless office?

Technology may change everything, but it’s impossible to predict how. Take another way in which 2001: A Space Odyssey missed the boat. The American women in the film were “girl assistants”: secretaries, receptionists, and flight attendants. As late as 1968, few people foresaw the second feminist revolution that would change everything in the 1970s. It’s not that the revolution didn’t have roots in technological change. Not only did oral contraceptives make it possible for women to time their childbearing, but a slew of earlier technologies (sanitation, mass production, modern medicine, electricity) had reduced the domestic workload, extended the lifespan, and shifted the basis of the economy from brawn to brains, collectively emancipating women from round-the-clock childrearing.

The effects of technology depend not just on what the gadgets do but on billions of people’s judgments of their costs and benefits (do you really want to have to call a help line to debug your refrigerator?). They also depend on countless nonlinear network effects, sleeper effects, and other nuisances. The popularity of baby names (Mildred, Deborah, Jennifer, Chloe) and the rates of homicide (down in the 1940s, up in the 1960s, down again in the 1990s) are just two of the social trends that fluctuate wildly in defiance of the best efforts of social scientists to explain them after the fact, let alone predict them beforehand.

But if you insist. This past year saw the introduction of direct-to-consumer genomics, and a number of new companies have recently launched. You can get everything from a complete sequencing of your genome (for a cool $350,000), to a screen of more than a hundred Mendelian disease genes, to a list of traits, disease risks, and ancestry data. Here are some possible outcomes:

• Personalized medicine, in which drugs are prescribed according to the patient’s molecular background rather than by trial and error, and in which prevention and screening recommendations are narrowcasted to those who would most benefit.

• An end to many genetic diseases. Just as Tay-Sachs has almost been wiped out in the decades since Ashkenazi Jews have tested themselves for the gene, a universal carrier screen, combined with preimplantation genetic diagnosis for carrier couples who want biological children, will eliminate a hundred others.

• Universal insurance for health, disability, and home care. Forget the political debates about the socialization of medicine. Cafeteria insurance will no longer be actuarially viable if the highest-risk consumers can load up on generous policies while the low-risk ones get by with the bare minimum.

• An end to the genophobia of many academics and pundits, whose blank-slate doctrines will look increasingly implausible as people learn about the genes that affect their temperament and cognition.

• The ultimate empowerment of medical consumers, who will know their own disease risks and seek commensurate treatment, rather than relying on the hunches and folklore of a paternalistic family doctor.

But then again, maybe not.

* * *

Gary Marcus
Cognitive Scientist, New York University; Author, Kluge

DECODING THE BRAIN

Within my lifetime (or soon thereafter) scientists will finally decode the language of the brain. At present, we understand a bit about the basic alphabet of neural function, how neurons fire, and how they come together to form synapses, but we haven't yet pieced together the words, let alone the sentences. Right now, we're sort of like Mendel at the dawn of genetics: he knew there must be something like genes (what he called "factors"), but couldn't say where they lived (in the protein? in the cytoplasm?) or how they got their job done. Today, we know that thought has something to do with neurons, and that our memories are stored in brain matter, but we don't yet know how to decipher the neurocircuitry within.

Doing that will require a quantum leap. The most popular current techniques for investigating the brain, like functional magnetic resonance imaging (fMRI), are far too coarse. A single three-dimensional "voxel" in an fMRI scan lumps together the actions of tens or even hundreds of thousands of neurons — yielding a kind of rough geography of the brain (emotion in the amygdala, decision-making in the prefrontal cortex) but little in the way of specifics. How does the prefrontal cortex actually do its thing? How does the visual cortex represent the difference between a house and a car, or a Hummer and a taxi? How does Broca's area know the difference between a noun and a verb?

To answer questions like these, we need to move beyond the broad scale geographies of fMRI and down to the level of individual neurons.

At the moment, that's a big job. For one thing, in the human brain there are billions of neurons and trillions of connections between them; the sheer amount of data involved is overwhelming. For another, until recently we've lacked the tools to understand the function of individual neurons in action, within the context of microcircuits.

But there's good reason to think all that's about to change. Computers continue to advance at a dizzying pace. Then there's the truly unprecedented explosion in databases like the Human Genome Project and the Allen Brain Atlas, enormously valuable datasets that are shared publicly and instantly available to all researchers, everywhere; even a decade ago there was nothing like them. Finally, genetic neuroimaging is just around the corner — scientists can now induce individual neurons to fire and (literally) light up on demand, allowing us to understand individual neural circuits in a brand new way.

Technical advances alone won't be enough, though — we'll need a scientist with the theoretical vision of Francis Crick, who helped identify not only the physical basis of genes — DNA — but also the code by which the individual nucleotides of a gene get translated (in groups of three) into amino acids. When it comes to the brain, we already know that neurons are the physical basis of thinking and knowledge, but not the laws of translation that relate one to the other.
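
To make the triplet-code analogy concrete, here is a minimal Python sketch of the translation Marcus alludes to: nucleotides read three at a time, each codon mapped to an amino acid. The codon table here is a deliberately tiny toy subset (the real table has 64 entries); everything else follows the actual genetic code.

```python
# Toy codon table: a handful of the 64 real codons, for illustration only.
CODON_TABLE = {
    "ATG": "Met",  # start codon
    "TTT": "Phe",
    "GGC": "Gly",
    "TAA": None,   # stop codon
}

def translate(dna: str) -> list[str]:
    """Read the sequence in groups of three; stop at a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3])
        if amino_acid is None:  # stop codon (or codon missing from this toy table)
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTTTGGCTAA"))  # ['Met', 'Phe', 'Gly']
```

Crick's insight was precisely this mapping: a discrete, near-universal lookup from triplets of nucleotides to amino acids. Marcus's point is that no analogous lookup table is yet known for neurons and thoughts.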

I don't expect that there will be one single code. Although every creature uses essentially the same translation between DNA and amino acids, different parts of the brain may translate between neurons and information in different ways. Circuits that control muscles, for example, seem to work on a system of statistical averaging; the angle at which a monkey extends its arm seems, as best we can tell, to be a kind of statistical average of the actions of hundreds of individual neurons, each representing a slightly different angle of possible motion: 44 degrees, 44.1 degrees, and so forth. Alas, what works for muscles probably can't work for sentences and ideas, so-called declarative knowledge like the proposition that "Michael Bloomberg is the Mayor of New York" or the idea that my flight to Montreal leaves at noon. It's implausible that the brain would have a vast population of neurons reserved for each specific thought I might entertain ("my flight to Montreal leaves at 11:58 am", "my flight to Montreal leaves at 11:59 am", etc.). Instead, the brain, like language itself, needs some sort of combinatorial code, a way of putting together smaller pieces (Montreal, flight, noon) into larger elements.
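
As an illustration of the kind of averaging code Marcus describes for motor circuits, here is a minimal Python sketch of population-vector decoding. The preferred angles and firing rates are entirely made up for the example; the decoded angle is just the firing-rate-weighted circular mean of the neurons' preferred directions.

```python
import math

# Hypothetical population of motor neurons: each has a "preferred" reach
# angle (degrees) and fires more strongly the closer the intended movement
# is to that angle. Decoding = firing-rate-weighted circular average.
preferred_angles = [42.0, 43.5, 44.0, 44.1, 45.0, 47.0]   # degrees
firing_rates     = [ 5.0, 12.0, 30.0, 28.0, 15.0,  4.0]   # spikes/sec (made up)

def decode_angle(angles, rates):
    """Circular mean of preferred angles, weighted by firing rate."""
    x = sum(r * math.cos(math.radians(a)) for a, r in zip(angles, rates))
    y = sum(r * math.sin(math.radians(a)) for a, r in zip(angles, rates))
    return math.degrees(math.atan2(y, x))

print(f"decoded reach angle: {decode_angle(preferred_angles, firing_rates):.1f} degrees")
# -> roughly 44 degrees: no single neuron "is" 44 degrees; the population average is.
```

The sketch also shows why Marcus thinks this scheme can't carry declarative knowledge: averaging works for a single continuous quantity like an angle, but there is no sensible "average" of distinct propositions, which is why he argues for a combinatorial code instead.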

When we crack that nut, when we figure out how the brain manages to encode declarative knowledge, an awful lot is going to change. For one thing, our relationship to computers will be completely and irrevocably altered; clumsy input devices like mice, windows, keyboards, and even heads-up displays and speech recognizers will go the way of typewriters and fountain pens; our connection to computers will be far more direct. Education, too, will fundamentally change, as engineers and cognitive scientists begin to leverage an understanding of the brain's code into ways of directly uploading information into the brain. Knowledge will become far cheaper than it already has become in the Internet era; with luck and wisdom, we as a species could advance immeasurably.

Read all of them - they're all over the place, but it seems that artificial intelligence of some sort is highly anticipated.
