The Imaris 10 release brings significant improvements to the filament tracing module. Filament tracing in Imaris 10 runs much faster, segments much larger images, and traces networks far better than before. In addition, the module uses machine learning to achieve greater versatility.
The Imaris 10 release also adds Swiping in Circle Select Mode, which can be very useful for selecting Spots, Surfaces, Cells, or Filaments.
In Circle Select Mode, pressing Ctrl + LeftMouse and moving the mouse selects every object the circle “swipes” over.
Swiping makes it efficient, for example, to delete objects from entire regions of the image: on the Edit tab of Spots or Surfaces, select all objects within a region by swiping over them, then remove them with the delete button or key.
The workflow for tracing neurons starts with soma detection, followed by detection of seed points along the filament, from which Imaris then computes segments that the user may classify into good and bad. The user only needs to classify a subset of the seed points (around 50 in each group); this subset becomes the training set for the machine learning seed point classification, and Imaris automatically classifies the remaining seed points. The process of adding seed points to the training set and having Imaris classify all other seed points can be repeated to achieve the best results. Finally, seed points on good segments are traced back to the soma along shortest paths in the segment network.
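As an illustration of the final step, a shortest path from a seed point back to the soma in a weighted segment network can be computed with Dijkstra's algorithm. This is a generic sketch, not Imaris code; the toy network and function name are invented for the example.

```python
import heapq

def shortest_path_to_soma(graph, soma, start):
    """Dijkstra's algorithm on a weighted segment network.

    graph: {node: [(neighbor, segment_length), ...]}
    Returns (total_length, path) from `start` to `soma`.
    """
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == soma:
            # walk the predecessor chain back to the start
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        for nb, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                prev[nb] = node
                heapq.heappush(heap, (nd, nb))
    return float("inf"), []

# toy segment network: seed points "a", "b", "c" and soma "s"
net = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("a", 1.0), ("s", 2.0)],
    "c": [("a", 4.0), ("s", 1.0)],
    "s": [("b", 2.0), ("c", 1.0)],
}
length, path = shortest_path_to_soma(net, soma="s", start="a")
```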
The computation of segments between pairs of seed points is new in Imaris 10. This step was introduced to run multiple fast marching computations [Sethian 1999] in parallel and to give users the ability to classify segments into groups of good and bad segments before the final tracing is performed.
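Fast marching propagates a front outward from a source and records arrival times by solving the eikonal equation. The following is a minimal first-order 2D sketch in the spirit of [Sethian 1999], not the implementation Imaris uses; the grid, speed map, and names are assumed purely for illustration.

```python
import heapq, math

def fast_marching(speed, source):
    """First-order fast marching on a 2D grid (spacing = 1).

    speed: 2D list of positive propagation speeds.
    Returns arrival times T approximating |grad T| = 1/speed.
    """
    rows, cols = len(speed), len(speed[0])
    INF = math.inf
    T = [[INF] * cols for _ in range(rows)]
    T[source[0]][source[1]] = 0.0
    frozen = [[False] * cols for _ in range(rows)]
    heap = [(0.0, source)]
    while heap:
        t, (r, c) = heapq.heappop(heap)
        if frozen[r][c]:
            continue
        frozen[r][c] = True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not frozen[nr][nc]:
                # smallest known arrival time in each axis (upwind)
                tx = min(T[nr][nc - 1] if nc > 0 else INF,
                         T[nr][nc + 1] if nc + 1 < cols else INF)
                ty = min(T[nr - 1][nc] if nr > 0 else INF,
                         T[nr + 1][nc] if nr + 1 < rows else INF)
                f = 1.0 / speed[nr][nc]
                a, b = sorted((tx, ty))
                if b - a < f and b < INF:
                    # two-sided update: solve (T-a)^2 + (T-b)^2 = f^2
                    new_t = (a + b + math.sqrt(2 * f * f - (b - a) ** 2)) / 2
                else:
                    # one-sided update
                    new_t = a + f
                if new_t < T[nr][nc]:
                    T[nr][nc] = new_t
                    heapq.heappush(heap, (new_t, (nr, nc)))
    return T
```

Because each grid point is finalized once via a priority queue, several such computations (one per seed point pair) parallelize naturally, which is the motivation the text describes.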
The workflow to trace networks in Imaris 10 is very similar to the neuron tracing workflow. It starts with the detection of seed points along the filaments, from which Imaris then computes segments that the user may classify into good and bad. In the final step, the bad segments are removed from the segment network (with the exception of bad segments that close short gaps) to produce the final network.
The computation of segments between pairs of seed points, together with the machine learning classification of segments, facilitates this new workflow for network tracing.
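One way to picture the final pruning step is as a graph filter: good segments are always kept, and a bad segment survives only if it is short and bridges two otherwise disconnected parts of the network. The sketch below uses a union-find structure; the data layout and threshold are invented for illustration and do not reflect Imaris internals.

```python
class DisjointSet:
    """Union-find with path halving, used to track connected components."""
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def prune_network(segments, gap_threshold):
    """Keep good segments; keep a bad segment only if it is short
    and connects two otherwise disconnected parts of the network.

    segments: list of (node_a, node_b, length, label), label "good"/"bad".
    """
    ds = DisjointSet()
    kept = [s for s in segments if s[3] == "good"]
    for a, b, _, _ in kept:
        ds.union(a, b)
    for a, b, length, label in segments:
        if label == "bad" and length <= gap_threshold and ds.find(a) != ds.find(b):
            kept.append((a, b, length, label))
            ds.union(a, b)
    return kept
```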
The computation of segments between pairs of seed points produces some segments that are not part of the desired tracing, and the machine learning segment classification step provides the possibility to remove these “bad” segments. Imaris computes many features that help discriminate between “good” and “bad” segments, such as segment intensity, contrast compared to the local background, and so-called HGD features (histogram of gradient deviations), which describe how well the image gradients around a segment point toward the center of the segment [Türetken 2012]. These features often facilitate good classification. Sometimes the image quality does not permit easy classification, and in those cases it may be most efficient to use a combination of automated and manual filament tracing.
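To give a flavor of gradient-deviation features, the sketch below samples points on a circle around a segment point of a synthetic bright filament and histograms the angle between the image gradient and the direction toward the segment centre. Small deviations mean the gradients point at the filament, as expected for a bright tube. This is only loosely inspired by the HGD features of [Türetken 2012]; the test image, sampling scheme, and names are assumptions.

```python
import math

def intensity(x, y):
    # synthetic bright filament running along the x-axis (assumed test image)
    return math.exp(-y * y)

def gradient(f, x, y, h=1e-5):
    # central-difference image gradient
    gx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    gy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return gx, gy

def gradient_deviation_histogram(f, cx, cy, radius, n_samples=36, n_bins=9):
    """Histogram over [0, pi] of angles between the image gradient and the
    direction toward the segment centre, sampled on a circle around (cx, cy)."""
    bins = [0] * n_bins
    for k in range(n_samples):
        phi = 2 * math.pi * k / n_samples
        x, y = cx + radius * math.cos(phi), cy + radius * math.sin(phi)
        gx, gy = gradient(f, x, y)
        norm = math.hypot(gx, gy)
        if norm < 1e-12:
            continue  # flat region: no meaningful gradient direction
        # unit vector from the sample point toward the centre
        tx, ty = (cx - x) / radius, (cy - y) / radius
        cos_dev = max(-1.0, min(1.0, (gx * tx + gy * ty) / norm))
        dev = math.acos(cos_dev)
        bins[min(int(dev / math.pi * n_bins), n_bins - 1)] += 1
    return bins

hist = gradient_deviation_histogram(intensity, 0.0, 0.0, radius=1.0)
```

For the bright filament all measured deviations fall below 90 degrees, so the upper half of the histogram stays empty; a classifier can exploit exactly this kind of concentration.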
The machine learning segment classification step comes with some “good” and “bad” segments automatically selected by Imaris. The user can add segments to the training set to improve the classification. It is often useful to have between 50 and 100 segments in each class. It is also often useful to perform a few cycles of training and prediction.
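The train-and-predict cycle can be illustrated with a toy stand-in classifier: label a few segments, predict the rest, correct a borderline case, and retrain. Imaris's actual classifier is not public; the nearest-centroid model and the two-feature vectors (say, intensity and contrast) below are purely illustrative.

```python
def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(features, training):
    """Nearest-centroid stand-in for a proprietary segment classifier.

    training: {"good": [feature_vec, ...], "bad": [feature_vec, ...]}
    Returns the predicted label for each feature vector.
    """
    centroids = {label: centroid(vecs) for label, vecs in training.items()}
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(centroids, key=lambda lb: dist(f, centroids[lb])) for f in features]

# cycle 1: a small user-labelled training set, then prediction
training = {"good": [[0.9, 0.8]], "bad": [[0.2, 0.1]]}
predictions = classify([[0.85, 0.75], [0.3, 0.2], [0.55, 0.5]], training)

# cycle 2: the user corrects the borderline segment and retrains
training["bad"].append([0.55, 0.5])
predictions2 = classify([[0.5, 0.45]], training)
```

Each cycle moves the decision boundary toward the user's intent, which is why a few rounds of training and prediction usually beat a single large labelling pass.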
To trace filaments with significant variations in diameter, Imaris 10 can perform multi-scale seed point detection. The user enters the diameter of the smallest seed points and a large diameter that exceeds the largest seed points. For each seed point Imaris then automatically determines the best scale according to a scale-selection model [Lindeberg 1998].
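Scale selection in the sense of [Lindeberg 1998] picks, per point, the smoothing scale that maximizes a scale-normalized derivative response. The 1D sketch below (a Gaussian blob probed with a gamma = 1 normalized second derivative) illustrates the idea: wider structures select larger scales. Imaris's 3D multi-scale detector will differ in detail; all names here are invented.

```python
import math

def gaussian_kernel(sigma, truncate=4.0):
    """Normalized 1D Gaussian kernel, truncated at `truncate` sigmas."""
    r = int(truncate * sigma + 0.5)
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k], r

def smoothed(signal, sigma, idx):
    """Value of the Gaussian-smoothed signal at index `idx`."""
    k, r = gaussian_kernel(sigma)
    return sum(k[j + r] * signal[idx + j] for j in range(-r, r + 1))

def best_scale(signal, center, scales):
    """Return the sigma maximizing the scale-normalized second-derivative
    response |sigma^2 * f''| at `center` (gamma = 1 normalization)."""
    best, best_resp = None, -1.0
    for s in scales:
        f0 = smoothed(signal, s, center)
        fm = smoothed(signal, s, center - 1)
        fp = smoothed(signal, s, center + 1)
        resp = s * s * abs(fp - 2 * f0 + fm)
        if resp > best_resp:
            best, best_resp = s, resp
    return best
```

For a 1D Gaussian blob of width sigma0, this response peaks near sigma = sqrt(2) * sigma0, so the selected scale grows with the structure's diameter.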
Imaris then computes segments between the largest seed points to determine the thick filaments first. Following the detection of thick filaments, Imaris computes thin segments between the smaller seed points and the thick segments.
For this approach to be successful it is useful to ensure good quality for the large-diameter seed points. You can do this by manually deleting “bad” large seed points or by training the machine learning classification for seed points.
Imaris 10 automatically estimates radii of segments from the local image intensities around the segment. This fully automated approach is easy to use and usually produces good results. The only thing the user needs to decide is the amount of smoothing to be applied to radius estimates along each segment.
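The user-chosen amount of smoothing can be pictured as a moving average over the per-point radius estimates along a segment. The sketch below is a generic stand-in; the window parameter is a hypothetical proxy for whatever control Imaris exposes, and the radii are made up.

```python
def smooth_radii(radii, window):
    """Moving-average smoothing of per-point radius estimates along a segment.

    `window` is the half-width in points; window = 0 returns the input unchanged.
    The averaging window is clipped at the segment ends.
    """
    out = []
    n = len(radii)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        out.append(sum(radii[lo:hi]) / (hi - lo))
    return out
```

A larger window suppresses spurious spikes in the raw estimates (e.g. from a bright neighboring structure) at the cost of blurring genuine diameter changes.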
Sometimes you might want to restrict filament detection to a region inside or outside a Surface. In previous versions of Imaris you would have accomplished this by “masking”. In Imaris 10, if you turn on “object-object statistics” for a Surface and the Filament, the machine learning seed point classification is trained with a “shortest distance to surface” statistics value for each seed point. The classifier can then learn from this distance value, which provides an easy way to restrict seed points to a certain distance from the surface.
Imaris 10 Spots can have a statistics value “Shortest Distance to Filament”. This value is calculated when the scene contains a Filament component and a Spots component and when “object-object statistics” is turned on for both.
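Treating the filament as a polyline, a “Shortest Distance to Filament” value amounts to the minimum point-to-segment distance over all segments, as in this sketch (coordinates and function names are invented for the example):

```python
import math

def point_segment_distance(p, a, b):
    """Distance from 3D point p to the line segment a-b."""
    ab = [b[i] - a[i] for i in range(3)]
    ap = [p[i] - a[i] for i in range(3)]
    denom = sum(c * c for c in ab)
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    closest = [a[i] + t * ab[i] for i in range(3)]
    return math.dist(p, closest)

def shortest_distance_to_filament(spot, filament_points):
    """Minimum distance from a spot to a filament given as a polyline."""
    return min(point_segment_distance(spot, filament_points[i], filament_points[i + 1])
               for i in range(len(filament_points) - 1))

filament = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)]
d = shortest_distance_to_filament((5.0, 3.0, 0.0), filament)
```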
New in Imaris 10 is the computation of segment intensity statistics for filament segments.