that D4 said, “If I don’t solve this within the next six months I really am going home, because
this is a waste of my time.”
D11 felt that computational modeling could provide information about the problem of
drift that was otherwise unavailable, given that the advantage of the technique is its ability to
measure everything. Therefore, he built a computational simulation of a generic dish and
fitted it to the research data. This was a novel approach for this particular lab, which had
adopted a philosophical stance against network modeling. Nevertheless, the PI agreed that
D11 could undertake the research he proposed as part of his PhD.
Building the computational model was a very complex process. First, the initial model
was constructed by putting together a number of constraints from every possible source: the
modeling platform, the literature, the single neuron studies, other dish studies, and brain slice
studies. It was also an iterative process. The initial goal was to make an analogy to the living
dish to enable the insight gained via the computational model to be transferred to the dish
itself. The computational model only gradually became a helpful analogy, gaining
complexity along the way. Eventually it was able to enact the behavior of the
target system – not any specific system, but rather a generic representation that exemplified
selected features.
The researchers began to understand the system through the dynamical interactions of
variables that they were building into the model to produce behavior. The computational
model, unlike the living dish, made it possible to run limited scenarios, to stop and start
the simulation, and to track variables. Most importantly, it allowed the researchers
to “see into the dish.” D11 imagined the dish as a network, but the eight-by-eight grid on
which they grew the neurons was not displaying network properties, as these were hidden
under the spike data. So D11 constructed a representation, layered over the network
representation, that ended up looking like the spike data. These two representations were, of
course, quite different. One captured the activity per channel but not the network behavior,
while the other showed the network behavior but hid the per-channel activity.
In addition to constructing the representations, D11 made movies of the dynamic
network visualizations. He watched the movies over and over again, as did the rest of the
group, so that everyone saw the same thing. The bursts were structurally similar. There
seemed to be a small number of patterns of propagation. And, most importantly, the
spontaneous activity was very stable. If the bursts were stable, then they could be used as
signal rather than noise. They could be used for creating a control structure. The
representations that D11 built enabled mathematical analysis of the propagation of burst
patterns – the “center of activity trajectory.”
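The center-of-activity analysis can be sketched as follows. This is an illustrative reconstruction, not D11's actual code: the idea, under the assumption that burst activity is binned into per-electrode firing rates on the eight-by-eight grid, is to compute the rate-weighted centroid of activity for each time bin; the sequence of centroids traces how a burst propagates across the dish. All function names and array shapes here are assumptions.

```python
import numpy as np

def center_of_activity(rates):
    """Firing-rate-weighted centroid of activity on an 8x8 electrode grid.

    rates: array of shape (8, 8), spike counts or firing rates per
    electrode for one time bin of a burst.
    Returns (row, col) centroid coordinates, or None if the bin is silent.
    """
    total = rates.sum()
    if total == 0:
        return None
    rows, cols = np.indices(rates.shape)
    return (float((rows * rates).sum() / total),
            float((cols * rates).sum() / total))

def ca_trajectory(binned_rates):
    """Center-of-activity trajectory across the time bins of one burst.

    binned_rates: array of shape (n_bins, 8, 8).
    """
    return [center_of_activity(r) for r in binned_rates]
```

A stable burst pattern would then show up as a repeatable trajectory of centroids, which is what makes the bursts usable as signal rather than noise.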
So eventually, MEart learned to draw within the box. D2 and D11 stayed on for about
another year to develop a control structure, first for a computational simulation of that
mechanical arm and then for the robotic version. Interestingly, the control structure was
counter-intuitive. Normally, to reinforce learning, one would repeat a stimulation. But in this
case they discovered that the stimulation had to be followed by a random stimulation, which
stabilized the effect of the initial one, before the initial stimulation was finally given again.
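The counter-intuitive sequence described above can be sketched as a simple loop. This is only an illustration of the ordering (patterned stimulation, then a random stimulation, then the patterned stimulation again); the `deliver` interface, electrode identifiers, and cycle count are all assumptions, not the lab's actual control code.

```python
import random

def training_cycle(deliver, patterned_stim, electrodes, n_cycles=10):
    """Sketch of the counter-intuitive training loop described in the text.

    deliver: callable that sends one stimulation to the dish (assumed API).
    patterned_stim: the stimulation pattern being reinforced.
    electrodes: electrode ids from which random stimulations are drawn.
    """
    for _ in range(n_cycles):
        deliver(patterned_stim)             # initial stimulation
        deliver(random.choice(electrodes))  # random stimulation that stabilizes it
        deliver(patterned_stim)             # initial stimulation again
```

The point of the sketch is only the ordering: the random stimulation sits between two deliveries of the pattern, rather than the pattern simply being repeated.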