profundo: an RL agent for neural cell tracing
I recently finished a graduate internship at the Allen Institute for Brain Science, working with the Neuroanatomy group. My project applied reinforcement learning (RL) to automate neural cell tracing.
Neural cell tracing is a major challenge in neuroscience. Microscopy teams produce huge volumes of images of brain cells, well into the petabyte scale. These images need to be digitized to analyze brain connectomics, cell morphology, and the overall organization of the brain. There are many existing rules-based tools to automate this, but none are perfect. As a result, tracing is a major bottleneck for brain research.
These rules-based tools often struggle with problems that humans solve intuitively. For example, patches of neural cells often don't fluoresce, so these patches appear dark in the image stacks. The tools incorrectly segment these patches as separate cells, while a human can easily infer that they belong to the same cell and bridge the gaps. In these cases, the cells must be traced manually using VR headsets or tools like Vaa3D. This is expensive and does not scale.
So, for my internship proposal, I asked: what if we use artificial neural networks to trace biological neural networks? I was inspired by DeepMind's AlphaGo, which achieved superhuman pattern matching using RL. I proposed using deep Q-learning, the approach behind DeepMind's earlier Atari agents, to automate neuron tracing.
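The core of deep Q-learning is the Bellman update: the network is trained to predict a target of immediate reward plus the discounted best Q-value of the next state. A minimal sketch of that target computation (illustrative only, not profundo's actual training code):

```python
import numpy as np

def q_learning_target(reward, next_q_values, gamma=0.99, done=False):
    """Bellman target for DQN: r + gamma * max_a' Q(s', a')."""
    if done:
        return reward
    return reward + gamma * float(np.max(next_q_values))

# Toy example: Q-values for six possible voxel moves in the next state.
next_q = np.array([0.1, 0.5, -0.2, 0.0, 0.3, 0.4])
target = q_learning_target(reward=1.0, next_q_values=next_q, gamma=0.9)
# target = 1.0 + 0.9 * 0.5 = 1.45
```

In practice the network's prediction is regressed toward this target over minibatches sampled from a replay buffer.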
Of course, that is too ambitious to solve in a few months, so I simplified the problem by cropping high-quality, human-labeled microscope images into tiny volumes. Then, I trained the agent to trace the cells voxel by voxel.
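Voxel-by-voxel tracing maps naturally onto an RL environment: the state is a small cube of image intensities around the agent's current position, and the actions move the agent one voxel along each axis. A hypothetical sketch of such an environment (the class and method names are illustrative, not profundo's actual API):

```python
import numpy as np

# Six axis-aligned moves: +/- x, y, z, one voxel at a time.
ACTIONS = np.array([
    [1, 0, 0], [-1, 0, 0],
    [0, 1, 0], [0, -1, 0],
    [0, 0, 1], [0, 0, -1],
])

class TracingEnv:
    def __init__(self, volume, start, window=5):
        self.volume = volume            # 3D array of voxel intensities
        self.pos = np.array(start)      # agent position (z, y, x order is arbitrary here)
        self.half = window // 2

    def observe(self):
        """Return the local cube of voxels centered on the agent."""
        lo = self.pos - self.half
        hi = self.pos + self.half + 1
        return self.volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]

    def step(self, action_idx):
        """Move one voxel; clip so the observation window stays in bounds."""
        self.pos = np.clip(self.pos + ACTIONS[action_idx],
                           self.half,
                           np.array(self.volume.shape) - self.half - 1)
        return self.observe()

vol = np.random.rand(32, 32, 32)
env = TracingEnv(vol, start=(16, 16, 16))
obs = env.step(0)   # move one voxel along +x; obs is a 5x5x5 cube
```

The reward would then come from comparing the agent's path against the human-labeled trace.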
Here are a couple of videos I made during the project. Initially, before training, the agent moves randomly:
After training, the agent is much better at following the trajectory, although sometimes it backtracks on itself. I experimented with penalizing backtracking in the reward function, but this caused the agent to avoid the cell body until time ran out. I think this could be fixed using curriculum learning: starting without a backtracking penalty and ramping up the coefficient as the agent improves.
My DQN agent, which I named profundo (Spanish for "deep"), is available as a Vaa3D plugin here:
https://github.com/Vaa3D/vaa3d_tools/tree/master/hackathon/profundo