Misc. VRML

Human Brain Project collaborators

[VRML1] Map with our collaborators in the "International Neuroimaging Consortium", funded by the Human Brain Project. The flag in Minneapolis links to our collaborators at the VA Medical Center's PET Imaging Service. The other flags denote our collaborators throughout the USA and in Akita, Japan.
Finn Årup Nielsen fnielsen@eivind.imm.dtu.dk 1995.

Brain activations

First model: ianspm.wrl [VRML2] Functional activations
First model: ianspm.wrl [VRML1] Functional activations
First model: ianspm_mri.wrl [VRML2] Functional activations with Talairach brain
First model: ianspm_mri.wrl [VRML1] Functional activations with Talairach brain

Second model: ianspm2.wrl [VRML2] Functional activations
Second model: ianspm2.wrl [VRML1] Functional activations

The first model visualizes an analysis made by Ian Law and Claus Svarer with the program SPM. The colors of the functional activations - sometimes called "blobs" - represent the four different stimuli: Red: Left hand, Blue: Right hand, Yellow: Visual activation, Orange: Language.

The second model visualizes further results by Ian Law and Claus Svarer, and in addition contains blobs from the BrainMap database: Green: Olfaction (from BrainMap), Grey: Auditory (passive words, from BrainMap), Light blue: Touch (from BrainMap), Orange: Reading, Yellow: Mouth, Blue: "Anti"-saccadic eye movements, Red: Reflexive saccadic eye movements, Magenta wireframe: Saccadic eye movements.

Similar models are available from the BrainMap project page.

Finn Årup Nielsen; 1996, April and November 1997.

Minneapolis Corner Cube

[VRML1] Corner Cube model from Minneapolis, adjusted to Talairach space and combined with the Talairach brain net from our site.

VRML2 model.
VRML1 model.
VRML1 model. The unadjusted model without the brain net.
VRML1 model, merged with the Ian Law model (see "Brain activations" above).
Kirt Schaper kirt@pet.med.va.gov, Kelly Rehm kelly@pet.med.va.gov, Finn Årup Nielsen fnielsen@eivind.imm.dtu.dk. 1996.

"Polygon brain"

[VRML1, 75kb] A polygon brain. A volume image was acquired with a magnetic resonance imaging scanner; the original volume was then filtered and down-sampled 6 times. The VRML model was constructed with our marching-cubes program polyr, developed by our former research assistant Jesper James Jensen. You will find the program on our software page.
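
As a sketch of what such a polygon model contains, here is a minimal hand-written VRML 1.0 surface - a single triangle, not actual polyr output - using the node types (Coordinate3 and IndexedFaceSet) that triangulated surfaces in this format are built from:

```vrml
#VRML V1.0 ascii
Separator {
    Material { diffuseColor 0.8 0.7 0.6 }   # a skin-like surface color
    Coordinate3 {
        point [ 0 0 0,    # vertex 0
                1 0 0,    # vertex 1
                0 1 0 ]   # vertex 2
    }
    # Each face lists vertex indices and is terminated by -1;
    # a real brain surface consists of thousands of such triangles.
    IndexedFaceSet { coordIndex [ 0, 1, 2, -1 ] }
}
```
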
Finn Årup Nielsen fnielsen@eivind.imm.dtu.dk. 1995.

Brain morphing

[VRML2, 300kb] Brain morphing. Watch the optimization of a model of a head: a 'finite difference model' is constructed, and, given a gradient image and an initial guess for the surface, the model is optimized with gradient-descent steps. This specific model interpolates between the initial guess and the 1000th optimization step.
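
In symbols (our notation, not taken from the model itself), with surface parameters s and an energy E(s) derived from the gradient image, each of the 1000 optimization steps is a plain gradient-descent update:

```latex
s^{(k+1)} = s^{(k)} - \eta \, \nabla_{s} E\!\left(s^{(k)}\right),
\qquad k = 0, 1, \ldots, 999,
```

where s^(0) is the initial guess for the surface and the step length eta controls how far each update moves.
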

A bigger model [VRML2, 2.5Mb] has every 25th step between the initial guess and the 1000th step.

The model contains a small user interface: the green arrow starts the animation, and the red square stops it. After you have stopped the animation you can move manually through the steps with the slider [1]. Note that any smooth interpolation between the individual steps of the optimization is done by VRML - not by the optimization itself.
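
The mechanics behind such an interface are VRML 2.0 sensors, interpolators and ROUTE statements. A minimal hand-written sketch (node names and values are invented; the real morph drives the surface mesh rather than moving a sphere):

```vrml
#VRML V2.0 utf8
Group {
  children [
    DEF START TouchSensor { }              # clicking the sphere starts the clock
    DEF HEAD Transform {
      children Shape { geometry Sphere { } }
    }
  ]
}
DEF CLOCK TimeSensor { cycleInterval 10 }  # one pass takes 10 seconds
DEF MORPH PositionInterpolator {           # two keyframes; the browser
  key      [ 0, 1 ]                        # interpolates smoothly between them
  keyValue [ 0 0 0,  0 2 0 ]
}
ROUTE START.touchTime        TO CLOCK.set_startTime
ROUTE CLOCK.fraction_changed TO MORPH.set_fraction
ROUTE MORPH.value_changed    TO HEAD.set_translation
```

The smooth motion between keyframes is produced by the interpolator node in the browser - which is exactly why the in-between frames are VRML's interpolation, not extra optimization steps.
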

[1]: Slider by Rich Gossweiler, working on VRML at SGI, http://reality.sgi.com/rcg/

Mattias Ohlsson, Finn Årup Nielsen; April 1997.

Saliency maps

[VRML1, 8kb+5kb] Corner cubes from finger-tapping and saccade studies.

The first two models (Motor I and Motor II - different subjects participated in each study) are from motor studies where the subjects were "tapping" with a finger. In the third study (Saccade) the subjects followed a blinking arcade of lights with their eyes; the rate of the blinking varied.

The data for these studies have been acquired with a PET scanner.

The red "blobs" in the models show the specific areas that an artificial neural network found important for explaining each individual task.

The paper describing the analysis method: Visualization of Neural Networks Using Saliency Maps
Kelly Rehm kelly@pet.med.va.gov, Kirt Schaper kirt@pet.med.va.gov, Niels Mørch nmorch@eivind.imm.dtu.dk, Ulrik Kjems kjems@eivind.imm.dtu.dk, and Finn Årup Nielsen fnielsen@eivind.imm.dtu.dk. 1995.

Brain gallery

[VRML1] Walk around in a small gallery of various brain related images. Once you have entered the gallery you can get information about the images by clicking on the sphere above each collection.
Mattias Ohlsson mo@imm.dtu.dk. 1995.

These models were made for Lars Kai Hansen's multimedia presentation at the annual celebration of DTU 1996, entitled "Kan Man se en Tanke?" (Looking into the mind). Three books were cited: Richard Restak's "Brainscapes" [1], Francis Crick's "The Astonishing Hypothesis" [2], and "Udredning om Dansk Neuroforskning". Further VRML models show: an artificial neural network next to a biological one [3]; a gallery of neural network applications; a PET scanner (measured on a real PET scanner at "Riget"); a model illustrating the VRML hyperlink and inlining idea; a map of Copenhagen, DTU and our building, with our server in one of the rooms; a firewall - literally; our "nnnn" analysis model, in which an artificial neural network analyzes a "real" neural network (the brain), here during a saccade study; and finally - the grand point - a neurologist [4] who studies her own thoughts while she is being scanned: she is seeing a thought!

[1]: Picture from the front page of the book.
[2]: Picture from the front page of the book.
[3]: Picture from Geoffrey E. Hinton's article in Scientific American 267, special edition about the human brain.
[4]: Model of a woman from WorldToolKit.

Finn Årup Nielsen fnielsen@eivind.imm.dtu.dk. 1996.

Hidden variables

[VRML1] "Hidden variables": from the master's thesis of Finn Årup Nielsen. The models show the inner state of an artificial neural network that has analyzed brain scans. The brain scanning technique was fMRI, and a total of 576 scans were acquired: 72 scans in each of 8 runs. In the VRML models each scan has its own polygon. In each run the subject:
1) rested for 24 scans (red polygons in the VRML model),
2) performed a sequential finger opposition task the next 24 scans (green polygons) and
3) rested again in the last 24 scans (blue polygons).
By analyzing the brain scans the artificial neural network should discriminate between the activation (task) state and the rest states. The VRML model shows that the artificial neural network was able to do this (the green polygons are separated from the blue and red polygons); furthermore, the artificial neural network found a difference between the two rest states (the cluster of blue polygons is slightly shifted relative to the cluster of red polygons).

The shape of the polygons refers to the set grouping: square polygons are in the training set, while triangles and pentagons are in the test sets.

Individual runs: 1 2 3 4 5 6 7 8
Ensembles: side-by-side mixed

Finn Årup Nielsen fnielsen@eivind.imm.dtu.dk. 1996.

VRML (Virtual Reality Modeling Language) is a file format used on the Internet for exchanging three-dimensional, interactive, animated and hyperlinked objects. To view these files you need a VRML browser; it might already be embedded in the web browser you use - otherwise, please look at our software page for a list of them. There are two flavours of VRML: version 1 and version 2. Furthermore, VRML version 2 can use either VrmlScript, JavaScript or Java for programming the behavior of a model. As yet few VRML browsers support VRML version 2 fully, so be careful when you click on one of our "VRML2" models: it might not run satisfactorily in your VRML browser.
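
The flavour of a file can be read off its first line: a version 1 file begins with `#VRML V1.0 ascii`, a version 2 file with `#VRML V2.0 utf8`. A complete, minimal version 2 file that can be used to test a browser's VRML2 support might look like this:

```vrml
#VRML V2.0 utf8
# A single default-sized box: if your browser shows it,
# basic VRML 2.0 support is in place.
Shape {
  appearance Appearance { material Material { } }
  geometry Box { size 2 2 2 }
}
```
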


© DSP IMM DTU, 1995-1999, 2002
$Id: vrmlhome.html,v 1.2 2002/05/06 13:00:04 fnielsen Exp $