Testing, not studying, makes for strong long-term memories

It’s a familiar scene – the wee hours of the morning are ticking away and you’re bent over a stack of notes, desperately trying to cram as much knowledge as possible into your head before the test in the morning.

Because of the way our education system works, this process of hard studying has become almost synonymous with the act of learning, and the inevitable tests and exams that bookend this ordeal merely assess how much information has stuck.

But a new study reveals that the tests themselves do more good for our ability to learn than the many hours before them spent relentlessly poring over notes and textbooks. The act of repeatedly retrieving and using learned information drives memories into long-term storage, while repetitive revision produces almost no benefit.

Continue reading

Time doesn’t actually slow down in a crisis

In The Matrix, when an agent first shoots at Neo, his perception of time slows down, allowing him to see and avoid oncoming bullets. In the real world, almost all of us have experienced moments of crisis when time seems to slow to a crawl, be it a crashing car, an incoming fist, or a falling valuable.

Now, a trio of scientists has shown that this effect is an illusion. When danger looms, we don’t actually experience events in slow motion. Instead, our brains just remember time moving more slowly after the event has passed.

Chess Stetson, Matthew Fiesta and David Eagleman demonstrated the illusion by putting a group of volunteers through 150 terrifying feet of free-fall. They wanted to see if the fearful plummet allowed the volunteers to complete a task that would only be possible if time really did move more slowly before their eyes.

Continue reading

Songbirds need so-called “human language gene” to learn new tunes

The nasal screech of Chris Tucker sounds worlds apart from the song of a nightingale, but human speech and birdsong actually have a lot in common. Both infants and chicks learn their respective tongues by imitating others. They pick up new material most easily during specific periods as they grow up, they need practice to improve, and they pick up local dialects. And as infants unite words to form sentences, so songbirds learn to combine separate riffs into a full song.

Because of these similarities, songbirds make a good model for inquisitive neuroscientists looking to understand the intricacies of human speech. Zebra finches are a particularly enlightening species, and they have just shown Sebastian Haesler that the so-called human ‘language gene’ FOXP2 also controls a songbird’s ability to pick up new material.

FOXP2 has a long and sordid history of fascinating science and shoddy science writing. It has been consistently mislabelled as “the language gene” and after the discovery that the human and chimp versions differed by just two small changes, it was also held responsible for the evolution of human language. Even though these claims are far-fetched (for reasons I’ll delve into later), there is no doubt that faults in FOXP2 can spell disaster for a person’s ability to speak.

Mutated versions cause a speech impairment called developmental verbal dyspraxia (DVD), where people are unable to coordinate the positions of their jaws, lips, tongues and faces, even though their minds and relevant muscles are in reasonable working order. They’re like an orchestra that plays a cacophony despite having a decent conductor and tuned instruments.

Brain scans of people with DVD have revealed abnormalities in the basal ganglia, a group of neurons at the heart of the brain with several connections to other areas. Normal people show strong expression of FOXP2 here and, fascinatingly, so do songbirds. Haesler reasoned that studying the role of this gene in birds could tell him more about its human counterpart.

Continue reading

Doctors repress their responses to their patients’ pain

A new study shows that experienced doctors learn to control the part of their brain that allows them to empathise with a patient’s pain, and switch on another area that allows them to regulate their emotions.

Many patients would like their doctors to be more sensitive to their needs. That may be a reasonable request but at a neurological level, we should be glad of a certain amount of detachment.

Humans are programmed, quite literally, to feel each other’s pain. The neural circuit in our brains that registers pain also fires when we see someone else getting hurt; it’s why we automatically wince.

This empathy makes evolutionary sense – it teaches us to avoid potential dangers that our peers have helpfully pointed out to us. But it can be a liability for people like doctors, who see pain on a daily basis and are sometimes forced to inflict it in order to help their patients.

Clearly, not all doctors are wincing wrecks, so they must develop some means of keeping this automatic response at bay. That’s exactly what Yawei Chang from Taipei City Hospital and Jean Decety from the University of Chicago found when they compared the brains of 14 acupuncturists with at least two years of experience to a control group of 14 people with none at all.

Continue reading

Molecule’s constant efforts keep our memories intact

Our memories are more fragile than we thought. New research suggests that they need the constant action of a key protein to remain stored in our minds – block the protein and erase the memories.

Our mind often seems like a gigantic library, where memories are written on parchment and stored away on shelves. Once filed, they remain steadfast and inviolate over time, although some may eventually become dusty and forgotten.

Now, Reut Shema, Yadin Dudai and colleagues from the Weizmann Institute of Science have found evidence that challenges this analogy. According to their work, our memory is more like a dynamic machine – it requires a constant energy supply to work. Cut the power and memories are lost.

Continue reading

Babies can tell apart different languages with visual cues alone

Most of us could easily distinguish between spoken English and French. But could you tell the difference between an English and a French speaker just by looking at the movements of their lips? It seems like a difficult task. But surprising new evidence suggests that babies can meet this challenge at just a few months of age.

Young infants can certainly tell the difference between the sounds of different languages. Whitney Weikum and colleagues from the University of British Columbia decided to test their powers of visual discrimination.

They showed 36 English babies silent video clips of bilingual French-English speakers reading out the same sentence in one of the two languages. When the babies had become accustomed to these, Weikum showed them different clips of the same speakers reading out new sentences, some in English and some in French.

When the languages of the new sentences matched those of the old ones, the infants didn’t react unusually. But when the language was switched, they spent more time looking at the monitors. This is a classic test for child psychologists and it means that the infants saw something that drew their attention. They noticed the language change.

Weikum found that the babies have this ability at 4 and 6 months of age, but lose it by their eighth month. Over the same period, other studies have found that infants become worse at telling apart consonant and vowel sounds from other languages, and even musical rhythms from other cultures.

It seems that initially, infants are sensitive to the properties of a wide range of languages. But without continuing exposure to both languages, their sensitivities soon narrow to the range that is most relevant for their mother tongue.

To test this idea, Weikum repeated his experiments on bilingual infants. Sure enough, at 8 months, these babies could still visually tell the difference between English and French speakers.

We normally think of lip-reading as a trick used only by deaf people. But this study suggests that the shapes our mouths make when we talk provide all of us with very important visual clues.

From a very early age, infants are programmed to sense these clues, and this so-called ‘visual speech’ may even help them to learn the characteristics of their native tongue.

Reference: Weikum, Vouloumanos, Navarra, Soto-Faraco, Sebastian-Galles & Werker. 2007. Visual language discrimination in infancy. Science 316: 1159.

Experience tunes a part of the brain to the shapes of words

A new brain imaging study has found a part of the brain specifically attuned to the shape of written words. And unlike other similar areas, this one develops its abilities through learning and experience.

Over the course of evolution, certain parts of our brain have been specifically tuned to faces, human bodies and landscapes. These structures turn up in the same places in very different people. Their roles are so fundamental to the way we (and our ancestors) experience the world that they have long since been hardwired into our genetic plans.

Words on the brain

But not everything we see is like this. Words, for example, are an exception. Even though reading and writing are such central parts of our lives now, they have only been around for a few millennia. And for most of that time, they were skills available only to a learned elite.

The entire history of writing is a mere blip in evolutionary time, certainly not long enough to evolve a specialised, genetically determined brain region dedicated to processing written words. Nonetheless, one such region exists.

Chris Baker and colleagues from the National Institute of Mental Health, Bethesda, have found that a small part of the brain specifically recognises written words. And unlike the areas that recognise faces and bodies, its origins lie in learning and experience.

Where words are recognised

Baker examined the brain activity of several English speakers using functional magnetic resonance imaging (fMRI), a technique that measures the flow of blood and oxygen in the brain. He found that a small region at the back of the brain – no bigger than a piece of sweet corn – responds strongly and specifically to English words.

Strings of consonants worked just as well, but strings of numbers or Hebrew words, which use unfamiliar characters, triggered much weaker responses. And the region responded even more weakly to line drawings of common objects or Chinese characters, which obviously perform the same function as English words but are very different in appearance.

Baker gave the region the slightly unwieldy name of ‘candidate letter string-selective region’, or cLSSR for short. He had its location, but it was still unclear whether its properties were innate or the product of experience. Indeed, the cLSSR lies very close to the fusiform gyrus, a part of the brain genetically programmed to recognise faces and numbers.

Things got interesting when Baker repeated his experiments in people who were fluent speakers and readers of both Hebrew and English. Their cLSSRs responded equally strongly to both English and Hebrew words. But in all other ways, they behaved identically to the cLSSRs of those who just spoke English.

Nature and nurture

These results provide powerful evidence that experience shapes the abilities of this part of the brain. It’s obvious that experience breeds familiarity. But this is the first time that someone has shown that a part of the brain becomes specifically attuned to a type of visual stimulus through experience and learning alone.

Obviously, a genetic influence on the cLSSR cannot be ruled out. After all, genes control the structure of the developing brain, and in different people, the cLSSR is consistently found in the same place: in the extrastriate cortex at the back of the brain, most often in the left hemisphere. If it is damaged in adults, the right hemisphere can’t pick up the slack, and people suffer from problems in reading.

All this suggests that this particular bundle of neurons may develop its taste for words through experience, but it is somehow predisposed to do so. Baker speculates that the cLSSR lies along the route that nervous signals take from visual areas to language areas, and gradually learns from the signals it carries.

The discovery of the cLSSR is just the beginning, and the questions practically ask themselves. Is it more diffuse or less responsive in dyslexic children? And how does it grow and develop over time? More research and new methods even more accurate than fMRI will help to provide the answers.

Reference: Baker, Liu, Wald, Kwong, Benner & Kanwisher. 2007. Visual word processing and experiential origins of functional selectivity in human extrastriate cortex. PNAS 104: 9087-9092.
