Generating text from functional brain images

From Brede Wiki
Authors: Francisco Pereira, Greg Detre, Matthew Botvinick
Citation: Frontiers in Human Neuroscience 5: 72, August 2011
DOI: 10.3389/fnhum.2011.00072


Abstract from paper (CC-BY)

Recent work has shown that it is possible to take brain images acquired during viewing of a scene and reconstruct an approximation of the scene from those images. Here we show that it is also possible to generate text about the mental content reflected in brain images. We began with images collected as participants read names of concrete items (e.g., “Apartment”) while also seeing line drawings of the item named. We built a model of the mental semantic representation of concrete concepts from text data and learned to map aspects of that representation to patterns of activation in the corresponding brain image. To validate this mapping, we generated from each left-out individual brain image, without access to information about the item viewed, a collection of semantically pertinent words (e.g., “door,” “window” for “Apartment”). Furthermore, we show that the ability to generate such words allows us to perform a classification task and thus validate our method quantitatively.
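The core idea of the abstract — learn a mapping from brain-image voxel patterns to semantic feature vectors built from text data, then decode a left-out image into semantically pertinent words — can be sketched as follows. This is a minimal illustration with synthetic data and a ridge-regularized linear map; the voxel counts, feature dimensions, vocabulary, and the choice of ridge regression and cosine ranking are assumptions for the example, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 60 brain images (as voxel vectors) and, for each,
# a semantic feature vector for the concept viewed (e.g., derived from
# word co-occurrence statistics in a text corpus).
n_images, n_voxels, n_features = 60, 500, 25
X = rng.standard_normal((n_images, n_voxels))    # brain images
S = rng.standard_normal((n_images, n_features))  # semantic vectors

# Learn a ridge-regularized linear map W from voxel space to
# semantic-feature space (one illustrative modeling choice).
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ S)

# Toy vocabulary with (here random) semantic vectors; in practice these
# would come from the same text-derived semantic model.
vocab = ["door", "window", "kitchen", "engine", "leaf"]
word_vecs = rng.standard_normal((len(vocab), n_features))

def decode_words(brain_image, top_k=2):
    """Map a brain image to a predicted semantic vector, then rank
    vocabulary words by cosine similarity to that prediction."""
    pred = brain_image @ W
    sims = (word_vecs @ pred) / (
        np.linalg.norm(word_vecs, axis=1) * np.linalg.norm(pred) + 1e-12
    )
    order = np.argsort(sims)[::-1]
    return [vocab[i] for i in order[:top_k]]

print(decode_words(X[0]))
```

A classification test like the one the abstract mentions could then be run by checking whether the generated words match the viewed item better than words generated for a different item.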

Related papers

  1. Mapping cognitive ontologies to and from the brain
  2. Inferring mental states from neuroimaging data: from reverse inference to large-scale decoding
  3. Selecting corpus-semantic models for neurolinguistic decoding