Some years ago a colleague, Stevie Draper, said to me, "The reason telepathy doesn't occur isn't because it's impossible, it's because it wouldn't work. You couldn't transmit thoughts directly from person to person because brains are too different. That's why you need an interface, which is language." I was, and remain, grateful for this insight, but an aspect of it has recently been challenged by research in functional magnetic resonance imaging (fMRI). I heard about this in a talk given by Marcel Just at a meeting of the International Society for Empirical Research in Literature, in Montreal this July. Led by Just's colleague Tom Mitchell, the research group of which Just is a member has shown that the patterns of fMRI activation produced in the brain by particular words are not only repeatable in the same individual, but also have commonalities among individuals.
Mitchell, Just, and colleagues (2008) have joined machine learning with fMRI to produce this striking result. They started with a corpus of 3 trillion words from published sources. From this corpus they found that words had sets of frequent associations: for the word "celery," for instance, frequent associations included "eat" and "taste." The researchers trained a computational model to learn such semantic associations from the corpus. The next stage was to take particular target words and discover which areas of the brain were activated by their frequently associated words. As shown in the diagram, the word "celery" had a set of semantic associations, each of which activated a particular set of brain areas.
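The corpus step can be illustrated with a small sketch. This is not the authors' code: the verb list, the sentence-level co-occurrence window, and the normalization are all simplifying assumptions, but they convey the idea that a word's meaning is represented as its co-occurrence profile with a fixed set of sensory-motor verbs.

```python
from collections import Counter

# Hypothetical feature verbs (the study used 25 sensory-motor verbs;
# this shortened list is only for illustration).
FEATURE_VERBS = ["see", "hear", "listen", "taste", "smell", "eat",
                 "touch", "rub", "lift", "manipulate", "run", "push"]

def semantic_features(target, corpus_sentences):
    """Count how often `target` co-occurs with each feature verb
    within the same sentence, then normalize the counts so the
    vector sums to one."""
    counts = Counter()
    for sentence in corpus_sentences:
        words = sentence.lower().split()
        if target in words:
            for verb in FEATURE_VERBS:
                if verb in words:
                    counts[verb] += 1
    total = sum(counts.values()) or 1
    return [counts[v] / total for v in FEATURE_VERBS]

# Toy corpus: "celery" co-occurs with "eat" and "taste".
corpus = [
    "i eat celery and taste its crunch",
    "you can see the dog run",
    "we eat bread",
]
vec = semantic_features("celery", corpus)
```

On this toy corpus the feature vector for "celery" puts equal weight on "eat" and "taste" and zero weight everywhere else, which is the kind of profile the trained model then maps onto brain activation.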
Mitchell, Just, et al. then conducted experiments with nine participants. They chose 60 words, five each from 12 semantic categories: animals, body parts, food, clothing, and so on. Each participant, in an fMRI machine, was asked to view each of the 60 words as word-picture pairs, and to view each pair six times. In this way, fMRI activation patterns were found for the nine participants for each of the 60 word-picture pairs. Next, the researchers determined a manageable set of semantic associations that would mediate between the word-picture pairs and the brain areas activated by them. This set consisted of 25 verbs: sensory-motor words likely to be associated with the 60 target words, including "see, hear, listen, taste, run, push," and so on. Separate computational models were then trained, for each of the nine participants, to map the semantic associations of the 25 verbs onto that participant's brain activation patterns for 58 of the original 60 word-picture pairs. The test, then, was to see whether these computational models could predict the participants' brain activation patterns for the other two word-picture pairs in the 60-word list. The models were able to do this at significantly above-chance levels.
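The leave-two-out test can be sketched in miniature with simulated data. This is an assumption-laden toy, not the published analysis: activations are generated from a random linear mapping, a least-squares model is fit on 58 "words," and the two held-out items are decoded by matching predicted to observed patterns by cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500

# Simulated data: each word has 25 corpus-derived semantic features,
# and its activation pattern is a noisy linear function of them.
features = rng.normal(size=(n_words, n_features))
true_weights = rng.normal(size=(n_features, n_voxels))
activations = features @ true_weights + 0.1 * rng.normal(size=(n_words, n_voxels))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

correct = 0
held_out_pairs = [(0, 1), (10, 20), (30, 59)]   # a few example test pairs
for i, j in held_out_pairs:
    train = [k for k in range(n_words) if k not in (i, j)]
    # Fit a linear model on the 58 training words.
    W, *_ = np.linalg.lstsq(features[train], activations[train], rcond=None)
    pred_i, pred_j = features[i] @ W, features[j] @ W
    # Decode: does the correct assignment of predictions to observed
    # images fit better than the swapped assignment?
    match = cosine(pred_i, activations[i]) + cosine(pred_j, activations[j])
    swap = cosine(pred_i, activations[j]) + cosine(pred_j, activations[i])
    if match > swap:
        correct += 1
```

Chance performance on this two-alternative matching task would be 50 percent; the point of the study was that the real models, trained on real fMRI data, beat that level significantly.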
Towards the end of their paper, Mitchell et al. say:
the neural encodings that represent concrete objects are at least partly shared across individuals, based on evidence that it is possible to identify which of several items a person is viewing, through only their fMRI image and a classifier model trained from other people.
In other words, Mitchell, Just, and colleagues were able to guess from a pattern of brain activation what a person was thinking, or at least to guess what word-picture pair that person was viewing.
So we humans can not only think similar thoughts by means of concepts to which culturally shared words can point, but this similarity extends to shared patterns of brain activation. If we could transmit our thoughts, telepathy might be possible after all.
Mitchell, T. M., Shinkareva, S. V., Carlson, A., Chang, K.-M., Malave, V. L., & Just, M. A. (2008). Predicting human brain activity associated with the meanings of nouns. Science, 320, 1191-1195.
2 comments:
Thanks for this post. I believe in precognition, an anomalous experience akin to telepathy, which is often experienced as an instantaneous or imagistic "knowing" versus a language-based phenomenon. Perhaps you'd be interested in my write-up of Daryl Bem's study on precognition presented at the 2012 Toward a Science of Consciousness conference: http://rightmindmatters.blogspot.com/2012/04/consciousness-of-future-at-tsc.html
Dear Carole,
Thanks very much for this comment. And thanks too for giving us this link to your write-up of Daryl Bem's study of precognition on your blog; helpful for people who are interested in unusual aspects of consciousness.
All best, Keith