Vocabulary Networks: Program-2
This simulation shows the effect of turning words ON.
It builds a random network of 1000 words,
turns about half of these words ON, and lets the
network settle into a stable attractor state.
The simulation runs through 1000 updates.
The green dots show the number of active words.
The red dots mark the occurrence of an EVENT.
An event turns ON a fixed number of words.
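The mechanics described above can be sketched as a simple boolean network. The text does not give the actual update rule or the network's link structure, so this is only a hedged illustration: it assumes each word has a fixed number of random links and turns ON when enough of its linked words are ON. The constants N_LINKS and THRESHOLD, and the helper names build_network, initial_state, and run, are all assumptions, not the program's real internals; NTWK, INIT, nEv, sEv, and rEv are used as seeds and counts in the spirit of the parameter list below.

```python
import random

N_WORDS = 1000    # size of the vocabulary network
N_LINKS = 4       # links per word (assumption: not given in the text)
THRESHOLD = 2     # active links needed to turn a word ON (assumption)
N_UPDATES = 1000  # updates per run
NTWK, INIT, rEv = 1, 2, 3  # seeds: network shape, initial state, event timing
nEv, sEv = 10, 50          # number of events; words turned ON per event

def build_network(seed):
    """Link each word to N_LINKS randomly chosen other words."""
    rng = random.Random(seed)
    return [rng.sample(range(N_WORDS), N_LINKS) for _ in range(N_WORDS)]

def initial_state(seed):
    """Turn about half the words ON at random."""
    rng = random.Random(seed)
    return [rng.random() < 0.5 for _ in range(N_WORDS)]

def run(links, state):
    """Update one word per step; events turn sEv random words ON."""
    rng = random.Random(rEv)
    event_times = set(rng.sample(range(N_UPDATES), nEv))  # the red dots
    trace = []
    for step in range(N_UPDATES):
        if step in event_times:
            for w in rng.sample(range(N_WORDS), sEv):  # an EVENT
                state[w] = True
        w = rng.randrange(N_WORDS)
        active = sum(state[n] for n in links[w])
        state[w] = active >= THRESHOLD  # ON iff enough linked words are ON
        trace.append(sum(state))        # the green dots: active-word count
    return trace

trace = run(build_network(NTWK), initial_state(INIT))
```

Plotting `trace` (and marking the event times) reproduces the green-and-red picture the program draws; rerunning with different seed values mimics the parameter changes suggested below.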
THINGS TO DO
Change the value of NTWK to get a new model.
Change the value of INIT to randomise the starting state of the network.
Change nEv to tell the program how many events you want.
Change sEv to tell the program how big an event should be.
Change rEv to get a different set of events.
THINGS TO NOTE
Infrequent events don't usually have a lasting effect.
Frequent small events sometimes produce big effects.
The network usually returns to its rest state. Why?
What implications does this set of behaviours have
for theories of L2 vocabulary acquisition?