VR Experience: Neural Network Playground


Written by Ollie Lynass

The audience will be given a virtual space in which to construct a simple example of a neural network while listening to a lecture on AI & neural networks. Marked squares on the floor will demarcate the zones for Data Input, Processing & Output. The audience will be able to create a dataset along the X, Y & Z axes, set the parameters for processing, and observe how these affect the output. Each zone will have an interactable voiceover icon that lets the audience trigger the relevant part of the lecture at any time; the lecture will provide background on and explain that zone.

Input will be on the left

There are three “shape buckets” on the ground from which the audience can pull 3D shapes (cubes, spheres or prisms) that serve as data points.

There are also three “colour buckets” into which the audience can dip a shape to dye it red, blue, or green, respectively.

The audience can then place these coloured shapes anywhere within the Input Zone. The shapes will hover in midair where placed.

Alternatively, the audience can draw the shape of the data in 3D space and allow the program to generate the data points from it.

A number of data presets will also be provided for quick use.

The dataset will freeze after the audience has been standing outside the Input Zone for 3 seconds.
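The input stage above amounts to building a small labelled 3D dataset, where each placed shape carries a position and the colour it was dipped in. A minimal sketch in Python, using hypothetical names (`Shape`, `Colour`, `DataPoint` are illustrative, not part of the design):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical types for the three shape buckets and three colour buckets
class Shape(Enum):
    CUBE = "cube"
    SPHERE = "sphere"
    PRISM = "prism"

class Colour(Enum):
    RED = "red"
    BLUE = "blue"
    GREEN = "green"

@dataclass
class DataPoint:
    x: float
    y: float
    z: float
    shape: Shape
    colour: Colour  # the dye colour acts as the point's label

# Each shape the audience places in the Input Zone becomes one data point
dataset = [
    DataPoint(0.5, 1.0, -0.2, Shape.CUBE, Colour.RED),
    DataPoint(-1.3, 0.4, 0.9, Shape.SPHERE, Colour.BLUE),
]
```

When the dataset freezes, a list like this is what the Processing Zone would receive.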

Processing will be in the centre

There will be a number of “feature buckets” from which the audience can pull the various axes against which the neural network will measure the created input (e.g. the x axis, the y² axis, the sin(z) axis, etc.).
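Each feature bucket can be thought of as a function that maps a 3D point to a single number. A minimal sketch, assuming a hypothetical `featurise` helper (the names are illustrative):

```python
import math

# Hypothetical feature buckets: each feature maps a 3D point to one number
FEATURES = {
    "x": lambda x, y, z: x,
    "y^2": lambda x, y, z: y * y,
    "sin(z)": lambda x, y, z: math.sin(z),
}

def featurise(point, selected):
    """Apply the features the audience pulled from the buckets."""
    x, y, z = point
    return [FEATURES[name](x, y, z) for name in selected]

featurise((2.0, 3.0, 0.0), ["x", "y^2", "sin(z)"])  # → [2.0, 9.0, 0.0]
```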

There will also be a “neuron bucket” from which the audience can pull a neuron. Once a neuron has been placed in the 3D space, the audience can pinch & drag from one neuron to another to connect them. The processing neurons will begin processing data after the audience has been standing outside the Processing Zone for 3 seconds.
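Under the surface, the pinch-and-drag gesture amounts to adding a weighted edge between two neurons, and “processing” amounts to each neuron summing its weighted inputs and applying an activation. A minimal sketch, assuming hypothetical `Neuron`, `connect` and `activate` names:

```python
import math

class Neuron:
    """A node the audience has pulled from the neuron bucket."""
    def __init__(self, name):
        self.name = name
        self.inputs = []  # list of (source neuron, weight) pairs

def connect(src, dst, weight=1.0):
    # Model the pinch-and-drag gesture as adding a weighted edge
    dst.inputs.append((src, weight))

def activate(neuron, values):
    """values maps input-layer neurons to their feature values."""
    if neuron in values:  # input neuron: pass its feature value through
        return values[neuron]
    total = sum(activate(src, values) * w for src, w in neuron.inputs)
    return math.tanh(total)  # a common squashing activation

# Two input neurons feeding one processing neuron
a, b, c = Neuron("a"), Neuron("b"), Neuron("c")
connect(a, c, 0.5)
connect(b, c, 0.5)
result = activate(c, {a: 1.0, b: 1.0})  # tanh(0.5 + 0.5)
```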


Output will be on the right

Once the neural network has started, the Output Zone will begin a timer, and start constructing an output for the audience to observe.

Why this works in VR

The purpose of the Neural Network Playground is to give the audience a working example of how AI functions beneath the surface, on a simple level, and to educate them on the limitations & capabilities of artificial intelligence.

The implications & function of AI are commonly misunderstood, and “raising problem-awareness among decision-makers and persons interacting with AI systems is essential” (Strauß, 2021: 3). AI is becoming increasingly relevant to the general public with the advent of ChatGPT, DALL·E and similar products, but education around the topic has not advanced at a similar rate.

VR is the perfect way to visualise and communicate this complex system: “Virtual reality has the potential to present a more effective visualisation of this data offering higher user satisfaction and different accuracy and depth of insights” (Andersen et al., 2019: 1). A world more educated about and interested in the complexities of neural networks is a world much safer from the potential dangers posed by the abuse and miscommunication of AI.

Bibliography

STRAUß, Stefan. 2021. “Don’t let me be misunderstood”: Critical AI literacy for the constructive use of AI technology. Germany: Journal for Technology Assessment in Theory and Practice.

ANDERSEN, Benjamin J. H., DAVIS, Arran T. A., WEBER, Gerald and WÜNSCHE, Burkhard C. 2019. Immersion or Diversion: Does Virtual Reality Make Data Visualisation More Effective? Auckland, New Zealand: University of Auckland.

FalWriting Team