
A Fiji Scripting Tutorial

Most of what you want to do with an image already exists in Fiji.
The problem is that you often don't know what it's called, or where to find it.

This tutorial will give you a general idea of how Fiji works: how its capabilities are organized, and how they can be composed into a program.

To learn about Fiji, we'll start the hard way: by programming.
Your first program will be very simple: obtain an image, and print out its title. We'll slowly iterate towards increasingly complex programs.

This tutorial will teach you both python and Fiji.


Index
  1. Getting started
  2. Your first Fiji script
  3. Inspecting properties and pixels of an image
  4. Running ImageJ / Fiji plugins on an ImagePlus
  5. Creating images and regions of interest (ROIs)
  6. Create and manipulate image stacks
  7. Interacting with humans: file and option dialogs, messages, progress bars.
  8. Batch processing
  9. Turn your script into a plugin
  10. Lists, native arrays, and passing lists and arrays to Java classes
  11. Generic algorithms that work on images of any kind: using Imglib
  12. ImgLib2: writing generic, high-performance image processing programs
  13. Image registration

Tutorial created by Albert Cardona. Zurich, 2010-11-10.
(Last update: 2018-12-02)

All source code is under the Public Domain.

Remember: Fiji is just ImageJ (batteries included).


Thanks to:

  • 2018-07-22: Nikolas Schnellbächer for reporting an error in a script.
  • 2018-10-17: Tobias Pietzsch for identifying an error in the Memoize class (namely lack of synchronization), leading to both incorrectness of the cache and performance issues.

1. Getting started

Open the Script Editor by choosing "File - New - Script".

Alternatively, use the Command finder:

Push 'l' (letter L) and then start typing "scri".
You will see a list of Fiji commands, getting shorter the more letters you type. When the "Script Editor" command is visible, push the up arrow to go to it, and then push return to launch it.
(Or double-click on it.)

The Command Finder is useful for invoking any Fiji command.

2. Your first Fiji script

We'll need an image to work on: please open any image.
For example, go to "File - Open Samples - Boats (356K)".

This tutorial will use the programming language Python 2.7. We start by telling the opened Script Editor what language you want to write the script in: choose "Language - Python".

Grabbing an open image

Type what you see in the image to the right into the Script Editor, and then push "Run", choose "Run - Run", or press control+R (command+R on MacOSX).
The program will execute and print, at the bottom, its result.
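
In case the screenshot is not available, this is the four-line script that the line-by-line description below refers to:

from ij import IJ

imp = IJ.getImage()
print imp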

Line by line:

  1. Import the namespace "IJ" from the package "ij".
    A namespace is a group of functions. And a package is a group of namespaces.
    Just imagine: if all functions were in the same namespace, it would be huge, and you wouldn't be able to have repeated names. Organizing functions in small namespaces is a great idea.
  2. (An empty line)
  3. Assign the result of invoking the function "getImage" from the namespace "IJ" to the local variable "imp".
    So now "imp" points to the last image you opened, or whose window was brought to focus by a mouse click. In our example, it's the "boats" image.
  4. Print the contents of the variable "imp".
    Notice how, at the bottom, the script first printed its own title "New_.py" and the starting time, and then printed "imp[boats.gif 720x576x1]"--which is just some data on the boats image: the title "boats.gif" and the dimensions of the image, in pixels.

So what is "imp"? "imp" is a commonly used name to refer to an instance of an ImagePlus. The ImagePlus is one of ImageJ's abstractions to represent an image. Every image that you open in ImageJ is an instance of ImagePlus.


Saving an image with a file dialog

The first action we'll do on our image is to save it.

To do that, you could call "File - Save" from the menus.
In our program, we import the namespace "FileSaver" and then create a new instance of FileSaver with our image "imp" as the only parameter. Then we invoke the function "save" on it, which will open a file dialog. After choosing a name and a folder, the image will be saved in TIFF format.
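
In code, a minimal sketch of what this paragraph describes:

from ij import IJ
from ij.io import FileSaver

imp = IJ.getImage()
fs = FileSaver(imp)
# Opens a file dialog; on accepting it, the image is saved in TIFF format
fs.save()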

Saving an image directly to a file

The point of running a script is to avoid human interaction.
We want to save an image automatically: we tell the FileSaver instance where it should save our image, and in what format (like TIFF with saveAsTiff). The FileSaver offers more methods, such as saveAsPng, saveAsJpeg, etc.

Notice that the '#' sign defines comments. In python, any text after a '#' is not executed.

from ij import IJ
from ij.io import FileSaver

imp = IJ.getImage()
fs = FileSaver(imp)

# A known folder to store the image at:
folder = "/home/albert/Desktop/t2/fiji-tutorial"

filepath = folder + "/" + "boats.tif"
fs.saveAsTiff(filepath)
          

Saving an image ... checking first if it's a good idea.

The FileSaver will overwrite whatever file exists at the file path that you give it. That is not always a good idea!

Here, we write the same code but checking first:

  1. If the folder exists at all, and whether what exists at that path really is a folder (a directory).
  2. If a file with the same name as the file we are about to write is already there--to avoid overwriting, if desired.
  3. If the FileSaver.saveAsTiff call really worked, or failed.
    Notice in the documentation for FileSaver.saveAsTiff that this method returns a boolean variable: it will be true if all went well, and false if the image could not be saved in the file.

And finally, if all expected preconditions hold, then we place the call to saveAsTiff.


This script introduced three new programming items:

  • if, else, and elif ("elif" being a combination of "else" and "if")
  • The concept of a code block, which, in python, starts with a ':' and then the code lines are indented.
    Notice how the code below the if or the else is indented to the right. By how much, it doesn't matter, as long as it's consistent.
  • The os.path namespace, which contains utility functions for inspecting files and folders (also called "directories"). One such function is os.mkdir, which we could have used in this script to create the directory when it didn't exist (note, though, that os.mkdir will throw an error and stop the execution of the script when the directory already exists, so os.path.exists must be called first; see the sketch right below).
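
As an aside, a minimal sketch of that os.mkdir usage, with the same folder path as in the script below:

from os import path
import os

folder = "/home/albert/Desktop/t2/fiji-tutorial"

if not path.exists(folder):
  os.mkdir(folder) # would throw an error if the folder already existed, hence the check
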
from ij import IJ
from ij.io import FileSaver
from os import path

imp = IJ.getImage()
fs = FileSaver(imp)

# A known folder to store the image at:
folder = "/home/albert/Desktop/t2/fiji-tutorial"

# Test if the folder exists before attempting to save the image:
if path.exists(folder) and path.isdir(folder):
  print "folder exists:", folder
  filepath = path.join(folder, "boats.tif") # Operating System-specific
  if path.exists(filepath):
    print "File exists! Not saving the image, would overwrite a file!"
  elif fs.saveAsTiff(filepath):
    print "File saved successfully at ", filepath
else:
  print "Folder does not exist or it's not a folder!"
          

3. Inspecting properties and pixels of an image

An image in ImageJ or Fiji is, internally, an instance of ImagePlus.
The ImagePlus contains data such as the title and dimensions of the image (width, height, number of slices, number of time frames, number of channels), as well as the pixels, which are wrapped in an ImageProcessor instance.
Each of these pieces of data is stored internally in a field of the ImagePlus class. A field is nothing more than a variable which, for a given image instance, points to a specific value.
For example, the "title" field points to "boats.gif" for the instance of ImagePlus that contains the sample boats image that we opened earlier.

In python, accessing fields of an instance is straightforward: just add a dot '.' between the variable "imp" and the field "title" that you want to access.

In the Fiji API documentation, if you don't see a specific field like width in a particular class, but there is a getWidth method, then from python they are one and the same.
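
For example, a tiny sketch using the active image:

from ij import IJ

imp = IJ.getImage()
print imp.getWidth() # the Java getter method
print imp.width      # the same value, read as a field from python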

The image type

Notice how we created a dictionary to hold key/value pairs: of the image type versus a text representation of that type. This dictionary (also called map or table in other programming languages) then lets us ask it for a specific image type (such as ImagePlus.GRAY8), and we get back the corresponding text, such as "8-bit".

You may have realized by now that ImagePlus.getType() (or, what is the same in python, "imp.type") returns one of the controlled values of image type that an ImagePlus instance can take. These values are GRAY8, GRAY16, GRAY32, COLOR_RGB, and COLOR_256.

What is the image type? It's the kind of pixel data that the image holds. It could be numbers from 0 to 255 (what fits in an 8-bit range), or from 0 to 65535 (values that fit in a 16-bit range), or three channels of 8-bit values (an RGB image), or floating-point values (32-bit).

The COLOR_256 indicates an 8-bit image that has an associated look-up table: each pixel value does not represent an intensity, but rather is associated with a color. The table of values versus colors is limited to 256 entries, and hence these images may not look very good. For image processing, you should avoid COLOR_256 images (also known as "8-bit color" images). These images are meant for display on the web in ".gif" format, but have been superseded by JPEG and PNG.

The GRAY8 ("8-bit"), GRAY16 ("16-bit") and GRAY32 ("32-bit") images may also be associated with a look-up table. For example, with a "green" look-up table on an 8-bit image, values of zero are black, values of 128 are a darkish green, and the maximum value of 255 is pure green.
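
As a small illustration, assigning such a green look-up table to the active image (the same "Green" command is used again further below for the fly brain stack):

from ij import IJ

imp = IJ.getImage()
# Changes only how pixel values are displayed, not the values themselves
IJ.run(imp, "Green", "")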

from ij import IJ, ImagePlus

# Grab the last activated image
imp = IJ.getImage()

# Print image details
print "title:", imp.title
print "width:", imp.width
print "height:", imp.height
print "number of pixels:", imp.width * imp.height
print "number of slices:", imp.getNSlices()
print "number of channels:", imp.getNChannels()
print "number of time frames:", imp.getNFrames()

types = {ImagePlus.COLOR_RGB : "RGB",
         ImagePlus.GRAY8 : "8-bit",
         ImagePlus.GRAY16 : "16-bit",
         ImagePlus.GRAY32 : "32-bit",
         ImagePlus.COLOR_256 : "8-bit color"}

print "image type:", types[imp.type]
          
Started New_.py at Wed Nov 10 14:57:46 CET 2010
title: boats.gif
width: 720
height: 576
number of pixels: 414720
number of slices: 1
number of channels: 1
number of time frames: 1
image type: 8-bit
          

Obtaining pixel statistics of an image (and your first function)

ImageJ / Fiji offers an ImageStatistics class that does all the work for us.
The ImageStatistics class offers a convenient getStatistics static method. (A static method is a function, in this case of the ImageStatistics namespace, that is unrelated to a class instance. Java confuses namespaces with class names).

Notice how we import the ImageStatistics namespace as "IS", i.e. we alias it--it's too long to type!

The options variable is the bitwise-or combination of three different static fields of the ImageStatistics class. The final options is an integer that has specific bits set that indicate mean, median and min and max values.
(Remember that in a computer, an integer number is a set of bits, such as 00001001. In this example, we'd say that the first and the fourth bits are set. Interpreting this sequence of 0s and 1s as a binary number gives 9 in decimal.)

from ij import IJ
from ij.process import ImageStatistics as IS

# Grab the active image
imp = IJ.getImage()

# Get its ImageProcessor
ip = imp.getProcessor()

options = IS.MEAN | IS.MEDIAN | IS.MIN_MAX
stats = IS.getStatistics(ip, options, imp.getCalibration())

# print statistics on the image
print "Image statistics for", imp.title
print "Mean:", stats.mean
print "Median:", stats.median
print "Min and max:", stats.min, "-", stats.max
          
Started New_.py at Wed Nov 10 19:54:37 CET 2010
Image statistics for boats.gif
Mean: 120.026837384
Median: 138.0
Min and max: 3.0 - 220.0
          

Now, how about obtaining statistics for a lot of images? (in other words, batch processing).
From a list of images in a folder, we would have to:

  1. Load each image
  2. Get statistics for it

So we define a folder that contains our images, and we loop over the list of filenames that it has. For every filename that ends with ".tif", we load it as an ImagePlus and hand it to the getStatistics function, which returns the mean, median, and min and max values.

(Note: if the images are stacks, use StackStatistics instead.)
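
For the record, a minimal sketch of that alternative, assuming "imp" holds a stack:

from ij import IJ
from ij.process import StackStatistics

imp = IJ.getImage()
# StackStatistics considers the pixels of all slices, not just the current one
stats = StackStatistics(imp)
print stats.mean, stats.min, stats.max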

This script introduces a few new concepts:

  • Defining a function: it's done with def, followed by the desired function name, and any number of comma-separated arguments between parentheses. The function is a code block--remember the code block is specified with indentation (any amount of indentation, as long as it's consistent).
  • The triple quote """ : defines a string of text over multiple lines. It's also the convention for adding documentation to a function in python.
  • The global keyword: declares that a name used inside a function code block refers to the variable defined outside of it. (Strictly, python only requires global for assigning to such a variable; for merely reading it, as here, the declaration simply makes the intent explicit.)
  • The for loop: to iterate every element of a list. In this case, every filename in the list of filenames of a folder, which we obtain from the os.listdir function.
    Notice the continue keyword, used to jump to the next loop iteration when desired. In the example, when the image couldn't be loaded.

See also the python documentation page on control flow, with explanations on the keywords if, else and elif, the for loop keyword and the break and continue keywords, defining a function with def, functions with variable number of arguments, anonymous functions (with the keyword lambda), and guidelines on coding style.


 
from ij import IJ
from ij.process import ImageStatistics as IS
import os

options = IS.MEAN | IS.MEDIAN | IS.MIN_MAX

def getStatistics(imp):
  """ Return statistics for the given ImagePlus """
  global options
  ip = imp.getProcessor()
  stats = IS.getStatistics(ip, options, imp.getCalibration())
  return stats.mean, stats.median, stats.min, stats.max


# Folder to read all images from:
folder = "/home/albert/Desktop/t2/fiji-tutorial"

# Get statistics for each image in the folder
# whose file extension is ".tif":
for filename in os.listdir(folder):
  if filename.endswith(".tif"):
    print "Processing", filename
    imp = IJ.openImage(os.path.join(folder, filename))
    if imp is None:
      print "Could not open image from file:", filename
      continue
    mean, median, min, max = getStatistics(imp)
    print "Image statistics for", imp.title
    print "Mean:", mean
    print "Median:", median
    print "Min and max:", min, "-", max
  else:
    print "Ignoring", filename
          

Iterating pixels

Iterating pixels is considered a low-level operation that you would seldom, if ever, have to do. But just so you can do it when you need to, here are various ways to iterate all pixels in an image.

The three iteration methods:

  1. The C-style method, where we iterate over a list of numbers from zero to length of the pixel array minus one, and obtain each pixel by doing an array lookup.
    The list of numbers is obtained by calling the built-in function xrange, which delivers a lazy sequence of 0, 1, 2, ... up to the length of the pixel array minus one.
    The length of the pixels array is obtained by calling the built-in function len.
  2. The iterator method, where the pixels array is iterated as if it was a list, and the pix variable takes the value of each pixel.
  3. The functional method, where instead of looping, we reduce the array to a single value (the minimum) by applying the min function to every adjacent pair of pixel values in the pixels array. (Realize that any function that takes two arguments, like min, could have been used with reduce.)

The last should be your preferred method: it offers the least opportunity for introducing an error, and it is very concise.


Regarding the example given, keep in mind:

  • That the pixels variable points to an array of pixels, which can be any of byte[], short[], float[], or int[] (the latter for RGB images, with the 3 color channels bit-packed).
  • That the example method for finding out the minimum value would NOT work for RGB images, because they have the three 8-bit color channels packed into a single integer value.
    For an RGB image, you'd want to ask which pixel is the least bright. It's easy to do so by calling getBrightness() on the ImageProcessor of an RGB image (which is a ColorProcessor), or by computing the minimum for one of its color channels, which you get with the method ip.toFloat(0, None) for the red channel (1 is green, and 2 is blue). See the sketch right after this list.
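
A sketch of that idea, assuming the currently active image is an RGB image:

from ij import IJ

imp = IJ.getImage()
cp = imp.getProcessor() # a ColorProcessor for RGB images
# The least bright pixel, using ImageJ's brightness computation:
min_brightness = reduce(min, cp.getBrightness().getPixels())
# The minimum of the red channel only (0 is red, 1 is green, 2 is blue):
min_red = reduce(min, cp.toFloat(0, None).getPixels())
print min_brightness, min_red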

from ij import IJ
from java.lang import Float

# Grab the active image
imp = IJ.getImage()

# Grab the image processor converted to float values
# to avoid problems with bytes
ip = imp.getProcessor().convertToFloat() # as a copy
# The pixels points to an array of floats
pixels = ip.getPixels()

print "Image is", imp.title, "of type", imp.type

# Obtain the minimum pixel value

# Method 1: the for loop, C style
minimum = Float.MAX_VALUE
for i in xrange(len(pixels)):
  if pixels[i] < minimum:
    minimum = pixels[i]

print "1. Minimum is:", minimum

# Method 2: iterate pixels as a list
minimum = Float.MAX_VALUE
for pix in pixels:
  if pix < minimum:
    minimum = pix

print "2. Minimum is:", minimum

# Method 3: apply the built-in min function
# to the first pair of pixels,
# and then to the result of that and the next pixel, etc.
minimum = reduce(min, pixels)

print "3. Minimum is:", minimum
         
Started New_.py at Wed Nov 10 20:49:31 CET 2010
Image is boats.gif of type 0
1. Minimum is: 3.0
2. Minimum is: 3.0
3. Minimum is: 3.0
          

On iterating or looping lists or collections of elements

Ultimately, all operations that involve iterating a list or a collection of elements can be done with the for looping construct. But on most occasions the for loop is not the best choice, neither in performance nor in clarity or conciseness. The latter two matter for minimizing the number of errors that we may introduce without noticing.

There are three kinds of operations to perform on lists or collections: map, reduce, and filter. We show them here along with the equivalent for loop.


 

A map operation takes a list of length N and returns another list also of length N, with the results of applying a function (that takes a single argument) to every element of the original list.

For example, suppose you want to get a list of all images open in Fiji.

With the for loop, we first have to create a list explicitly and then append every image to it, one by one.

With list comprehension, the list is created directly and the logic of what goes in it is placed inside the square brackets--but it is still a loop. That is, it is still a sequential, unparallelizable operation.

With the map operation, we obtain the list automatically by applying the function WM.getImage to every ID in the list of IDs.

While this is a trivial example, suppose you were executing a complex operation on every element of a list or an array. If you were to redefine the map function to work in parallel, suddenly any map operation in your program will run faster, without you having to modify a single line of tested code!


 
from ij import WindowManager as WM
            
# Method 1: with a 'for' loop
images = []
for id in WM.getIDList():
  images.append(WM.getImage(id))

# Method 2: with list comprehension
images = [WM.getImage(id) for id in WM.getIDList()]

# Method 3: with a 'map' operation
images = map(WM.getImage, WM.getIDList())

          

A filter operation takes a list of length N and returns a shorter list, with anywhere from 0 to N elements. Only those elements of the original list that pass a test are placed in the new, returned list.

For example, suppose you want to find the subset of opened images in Fiji whose title matches a specific criterion.

With the for loop, we have to create a new list first, and then append elements to that list as we iterate the list of images.

The second variant of the for loop uses list comprehension. The code is reduced to a single short line, which is readable, but is still a python loop (with potentially lower performance).

With the filter operation, we get the (potentially) shorter list automatically. The code is a single short line, instead of 4 lines!


 
from ij import WindowManager as WM

# A list of all open images
imps = map(WM.getImage, WM.getIDList())

def match(imp):
  """ Returns true if the image title contains the word 'cochlea'"""
  return imp.title.find("cochlea") > -1

# Method 1: with a 'for' loop
# (We have to explicitly create a new list)
matching = []
for imp in imps:
  if match(imp):
    matching.append(imp)

# Method 2: with list comprehension
matching = [imp for imp in imps if match(imp)]

# Method 3: with a 'filter' operation
matching = filter(match, imps)
          

A reduce operation takes a list of length N and returns a single value. This value is composed from applying a function that takes two arguments to the first two elements of the list, then to the result of that and the next element, etc. Optionally an initial value may be provided, so that the cycle starts with that value and the first element of the list.

For example, suppose you want to find the largest image, by area, from the list of all opened images in Fiji.

With the for loop, we have to keep track of which was the largest area in a pair of temporary variables, and even check whether the stored largest image is null! We could have initialized the largestArea variable to the first element of the list, and then started looping at the second element by slicing the first element off the list (with "for imp in imps[1:]:"), but then we would have had to check that the list contains at least one element.

With the reduce operation, we don't need any temporary variables. All we need is to define a helper function (which could have been an anonymous lambda function, but we defined it explicitly for extra clarity and reusability).


 
from ij import IJ

from ij import WindowManager as WM

# A list of all open images
imps = map(WM.getImage, WM.getIDList())

def area(imp):
  return imp.width * imp.height

# Method 1: with a 'for' loop
largest = None
largestArea = 0
for imp in imps:
  if largest is None:
    largest = imp
    largestArea = area(imp)
  else:
    a = area(imp)
    if a > largestArea:
      largest = imp
      largestArea = a

# Method 2: with a 'reduce' operation
def largestImage(imp1, imp2):
  return imp1 if area(imp1) > area(imp2) else imp2

largest = reduce(largestImage, imps)
          

Subtract the min value from every pixel

First we obtain the minimum pixel value, using the reduce method explained just above.

Then we subtract this minimum value from every pixel. We have two ways to do it:

  1. In place, by iterating the pixel array C-style and setting a new value to each pixel: that of itself minus the minimum value.
  2. On a new list: we declare an anonymous function (with lambda instead of def) that takes one argument x (the pixel value), subtracts the minimum from it, and returns the result. We map (in other words, we apply) this function to every pixel in the pixels array, returning a new list of pixels with the results.

With the first method, since the pixels array was already a copy (notice we called convertToFloat() on the ImageProcessor), we can use it directly to create a new ImagePlus without any unintended consequences.

With the second method, the new list of pixels must be given to a new FloatProcessor instance, and with it, a new ImagePlus is created, of the same dimensions as the original.

from ij import IJ, ImagePlus
from ij.process import FloatProcessor

imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat() # as a copy
pixels = ip.getPixels()

# Apply the built-in min function
# to the first pair of pixels,
# and then to the result of that and the next pixel, etc.
minimum = reduce(min, pixels)

# Method 1: subtract the minimum from every pixel,
# in place, modifying the pixels array
for i in xrange(len(pixels)):
  pixels[i] -= minimum
# ... and create a new image:
imp2 = ImagePlus(imp.title, ip)

# Method 2: subtract the minimum from every pixel
# and store the result in a new array
pixels3 = map(lambda x: x - minimum, pixels)
# ... and create a new image:
ip3 = FloatProcessor(ip.width, ip.height, pixels3, None)
imp3 = ImagePlus(imp.title, ip3)

# Show the images in an ImageWindow:
imp2.show()
imp3.show()
          

Reduce a pixel array to a single value: count pixels above a threshold

Suppose you want to analyze a subset of pixels. For example, find out how many pixels have a value over a certain threshold.

The reduce built-in function is made just for that. It takes a function with two arguments (the running count and the next pixel); the list or array of pixels; and an initial value (in this case, zero) for the first argument (the "count"), and will return a single value (the total count).

In this example, we first compute the mean pixel intensity, and then count the pixels whose value is above the mean. Notice that we compute the mean by using the convenient built-in function sum, which is able to add all numbers contained in any kind of collection (be it a list, a native array, a set of unique elements, or the keys of a dictionary). We could imitate the built-in sum function with reduce(lambda s, x: s + x, pixels), but paying a price in performance.

Notice we are using anonymous functions again (that is, functions that lack a name), declared in place with the lambda keyword. A function defined with def would do just fine as well.


from ij import IJ

# Grab currently active image
imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat()
pixels = ip.getPixels()

# Compute the mean value (sum of all divided by number of pixels)
mean = sum(pixels) / len(pixels)

# Count the number of pixels above the mean
n_pix_above = reduce(lambda count, a: count + 1 if a > mean else count, pixels, 0)

print "Mean value", mean
print "% pixels above mean:", n_pix_above / float(len(pixels)) * 100
          
Started New_.py at Thu Nov 11 01:50:49 CET 2010
Mean value 120.233899981
% pixels above mean: 66.4093846451
          

Another useful application of filtering pixels by their value: finding the coordinates of all pixels above a certain value (in this case, the mean), and then calculating their center of mass.

The filter built-in function is made just for that. The indices of the pixels whose value is above the mean are collected in a list named "above", which is created by filtering the indices of all pixels (provided by the built-in function xrange). The filtering is done by an anonymous function declared with lambda, with a single argument: the index i of the pixel.

Here, note that in ImageJ, the pixels of an image are stored in a linear array. The length of the array is width * height, and the pixels are stored as concatenated rows. Therefore, the modulus of dividing the index of a pixel by the width of the image provides the X coordinate of that pixel. Similarly, the integer division of the index of a pixel by the width provides the Y coordinate.
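
For instance, a tiny worked example of that arithmetic, with the width of boats.gif and an arbitrary pixel index:

width = 720   # e.g. the width of boats.gif
i = 2000      # some index into the linear pixel array
x = i % width # 2000 % 720 = 560
y = i / width # 2000 / 720 = 2 (integer division in python 2)
print x, y    # prints: 560 2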

To compute the center of mass, there are two equivalent methods. The first is the C-style method with a for loop: every variable is declared prior to the loop, modified at each loop iteration and, after the loop, the sums of coordinates are divided by the number of coordinates (the length of the "above" list). For this example, this is the method with the best performance.

The second method computes the X and Y coordinates of the center of mass with a single line of code for each. Notice that both lines are nearly identical, differing only in the body of the function mapped to the "above" list containing the indices of the pixels whose value is above the mean. While, in this case, the method is less performant due to repeated iteration of the list "above", the code is shorter, easier to read, and with far less opportunities for introducing errors. If the actual computation was far more expensive than the simple calculation of the coordinates of a pixel given its index in the array of pixels, this method would pay off for its clarity.


from ij import IJ

# Grab the currently active image
imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat()
pixels = ip.getPixels()

# Compute the mean value
mean = sum(pixels) / len(pixels)

# Obtain the list of indices of pixels whose value is above the mean
above = filter(lambda i: pixels[i] > mean, xrange(len(pixels)))

print "Number of pixels above mean value:", len(above)

# Measure the center of mass of all pixels above the mean

# The width of the image, necessary for computing the x,y coordinate of each pixel
width = imp.width

# Method 1: with a for loop
xc = 0
yc = 0
for i in above:
  xc += i % width # the X coordinate of pixel at index i
  yc += i / width # the Y coordinate of pixel at index i
xc = xc / len(above)
yc = yc / len(above)
print xc, yc

# Method 2: with sum and map
xc = sum(map(lambda i: i % width, above)) / len(above)
yc = sum(map(lambda i: i / width, above)) / len(above)
print xc, yc
          

The third method pushes the functional approach too far. While written in a single line, that doesn't mean it is clearer to read: its intent is obfuscated by starting from the end: the list comprehension starts off by stating that each element (there are only two) of the list resulting from the reduce has to be divided by the length of the list of pixels "above", and only then we learn that the collection being iterated is the array of two coordinates, created at every iteration over the list "above", containing the sum of all coordinates for X and for Y. Notice that the reduce is invoked with three arguments, the third one being the list [0, 0] containing the initialization values of the sums. Confusing! Avoid writing code like this. Notice as well that, by creating a new list at every iteration step, this method is the least performant of all.

The fourth method is a clean up of the third method. Notice that we import the partial function from the functools package. With it, we are able to create a version of the "accum" helper function that has a frozen "width" argument (also known as currying a function). In this way, the "accum" function is seen by the reduce as a two-argument function (which is what reduce needs here). While we regain the performance of the for loop, notice that now the code is just as long as with the for loop. The purpose of writing this example is to illustrate how one can write python code that doesn't use temporary variables, these generally being potential points of error in a computer program. It is always better to write lots of small functions that are easy to read, easy to test, free of side effects, documented, and that then can be used to assemble our program.

# (Continues from above...)

# Method 3: iterating the list "above" just once
xc, yc = [d / len(above) for d in
            reduce(lambda c, i: [c[0] + i % width, c[1] + i / width], above, [0, 0])]
print xc, yc

# Method 4: iterating the list "above" just once, more clearly and performant
from functools import partial

def accum(width, c, i):
  """ Accumulate the sum of the X,Y coordinates of index i in the list c."""
  c[0] += i % width
  c[1] += i / width
  return c

xc, yc = [d / len(above) for d in reduce(partial(accum, width), above, [0, 0])]
print xc, yc
          


4. Running ImageJ / Fiji plugins on an ImagePlus

Here is an example plugin run programmatically: a median filter applied to the currently active image.

The median filter, along with the mean, minimum, maximum, variance, remove outliers and despeckle menu commands, is implemented in the RankFilters class.
A new instance of RankFilters is created (notice the "()" after "RankFilters"), and we call its method rank with the ImageProcessor, the radius, and the desired filter flag as arguments.
With the result, we create a new ImagePlus and we show it.

from ij import IJ, ImagePlus
from ij.plugin.filter import RankFilters

# Grab the active image
imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat() # as a copy

# Remove noise by running a median filter
# with a radius of 2
radius = 2
RankFilters().rank(ip, radius, RankFilters.MEDIAN)

imp2 = ImagePlus(imp.title + " median filtered", ip)
imp2.show()
          

Finding the class that implements a specific ImageJ command

When starting ImageJ/Fiji programming, the problem is not so much how to run a plugin on an image, as it is to find out which class implements which plugin. Here is a simple method to find out, via the Command Finder:

  1. Open the Command Finder by pushing 'l' or going to "Plugins - Utilities - Find commands...".
  2. Type "FFT". A bunch of FFT-related commands are listed.
  3. Click on the "Show full information" checkbox at the bottom.
  4. Read, next to each listed command, the plugin class that implements it.

Notice that the plugin class comes with some text. For example:

FFT (in Process > FFT) [ij.plugin.FFT("fft")]
Inverse FFT (in Process > FFT) [ij.plugin.FFT("inverse")]
          

The above two commands are implemented by a single plugin (ij.plugin.FFT) whose run method accepts, like that of every PlugIn, a text string specifying the action: "fft" or "inverse".
The first part of the information shows where in the menus you will find the command. In this case, under menu "Process", submenu "FFT".
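
For example, a hedged sketch of invoking it; both lines assume an open image, and the second addresses the plugin class directly by its name and action string:

from ij import IJ

imp = IJ.getImage()
IJ.run(imp, "FFT", "")                 # by menu command name
# IJ.runPlugIn("ij.plugin.FFT", "fft") # by plugin class name and action string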


Finding the java documentation for any class

Once you have found the PlugIn class that implements a specific command, you may want to use that class directly. The information is either in the online java documentation or in the source code. How to find these?

  • The Fiji java documentation can be opened directly from the Script Editor for a specific class. Type in the name of the class, select it, and then execute the menu "Tools - Open help for class (with frames)". A new web browser window will open, with the web page corresponding to the class in question. When there is more than one possible class (because they share the same name but live in different packages), then a dialog will prompt for choosing the correct one.
  • The source code for a plugin included in Fiji is in the Fiji git repository. The fastest way to find the corresponding java class is to Google it. Of course another way is to search directly in the Fiji source code repository, which has a search box to look up the source code of a plugin by its name (although some plugins are hosted in their own repositories). The ImageJ source code is perhaps the easiest to browse, but contains only the core ImageJ library source code.

Figuring out what parameters a plugin requires

To do that, we'll use the Macro Recorder. Make sure that an image is open. Then:

  1. Open the "Plugins - Macros - Record..."
  2. Run the command of your choice, such as "Process - Filters - Median..."
    A dialog opens. Set the desired radius, and push "OK".
  3. Look into the Recorder window:
    run("Median...", "radius=2");
                

That is valid macro code, that ImageJ can execute. The first part is the command ("Median..."), the second part is the parameters that that command uses; in this case, just one ("radius=2"). If there were more parameters, they would be separated by spaces.


Running a command on an image

We can use these macro recordings to create jython code that executes a given plugin on a given image. Here is an example.

Very simple! The IJ namespace has a function, run, that accepts an ImagePlus as first argument, then the name of the command to run, and then the macro-ready list of arguments that the command requires.
When executing this script, no dialogs are shown!
Behind the curtains, ImageJ is placing the right parameters in the right places, making it all just work.


from ij import IJ

# Grab the active image
imp = IJ.getImage()

# Run the median filter on it, with a radius of 2
IJ.run(imp, "Median...", "radius=2")
          

5. Creating images and regions of interest (ROIs)

Create an image from scratch

An ImageJ/Fiji image is composed of at least three objects:

  • The pixels array: an array of primitive values.
    (Where primitive is one of byte, short, int, or float.)
  • The ImageProcessor subclass instance that holds the pixels array.
  • The ImagePlus instance that holds the ImageProcessor instance.

In the example, we create an empty array of floats (see creating native arrays), and fill it in with random float values. Then we give it to a FloatProcessor instance, which is then wrapped by an ImagePlus instance. Voilà!

from ij import ImagePlus
from ij.process import FloatProcessor
from array import zeros
from random import random

width = 1024
height = 1024
pixels = zeros('f', width * height)

for i in xrange(len(pixels)):
  pixels[i] = random()

fp = FloatProcessor(width, height, pixels, None)
imp = ImagePlus("White noise", fp)

imp.show()
          

Fill a region of interest (ROI) with a given value

To fill a region of interest in an image, we could iterate the pixels, find the pixels that lie within the bounds of interest, and set their values to a specified value. But that is tedious and error prone. Much more effective is to create an instance of a Roi class or one of its subclasses (PolygonRoi, OvalRoi, ShapeRoi, etc.) and tell the ImageProcessor to fill that region.

In this example, we create an image filled with white noise like before, and then define a rectangular region of interest in it, which is filled with a value of 2.0.

The white noise is drawn from a random distribution whose values range from 0 to 1. When filling an area of the FloatProcessor with a value of 2.0, that is the new maximum value. The area with 2.0 pixel values will look white (look at the status bar):


from ij import IJ, ImagePlus
from ij.process import FloatProcessor
from array import zeros
from random import random
from ij.gui import Roi, PolygonRoi

# Create a new ImagePlus filled with noise
width = 1024
height = 1024
pixels = zeros('f', width * height)

for i in xrange(len(pixels)):
  pixels[i] = random()

fp = FloatProcessor(width, height, pixels, None)
imp = ImagePlus("Random", fp)

# Fill a rectangular region of interest
# with a value of 2:
roi = Roi(400, 200, 400, 300)
fp.setRoi(roi)
fp.setValue(2.0)
fp.fill()

# Fill a polygonal region of interest
# with a value of -3
xs = [234, 174, 162, 102, 120, 123, 153, 177, 171,
      60, 0, 18, 63, 132, 84, 129, 69, 174, 150,
      183, 207, 198, 303, 231, 258, 234, 276, 327,
      378, 312, 228, 225, 246, 282, 261, 252]
ys = [48, 0, 60, 18, 78, 156, 201, 213, 270, 279,
      336, 405, 345, 348, 483, 615, 654, 639, 495,
      444, 480, 648, 651, 609, 456, 327, 330, 432,
      408, 273, 273, 204, 189, 126, 57, 6]
proi = PolygonRoi(xs, ys, len(xs), Roi.POLYGON)
fp.setRoi(proi)
fp.setValue(-3)
fp.fill(proi.getMask())  # Attention!

imp.show()
          

6. Create and manipulate image stacks and hyperstacks

Load a color image stack and extract its green channel

First we load the stack from the web--it's the "Fly Brain" sample image.

Then we iterate its slices. Each slice is a ColorProcessor: it wraps an integer array. Each integer is represented by 4 bytes, and its lower 3 bytes encode, respectively, the intensity values for red, green and blue. The uppermost byte is usually reserved for alpha (the inverse of transparency), but ImageJ ignores it.

Dealing with low-level details like that is not necessary. The ColorProcessor has a method, toFloat, that can give us a FloatProcessor for a specific color channel. Red is 0, green is 1, and blue is 2.

Representing the color channel in floats is most convenient for further processing of the pixel values--won't overflow like a byte would.

In this example, all we do is collect each slice into a list of slices we named greens. Then we add all the slices to a new ImageStack, and pass it to a new ImagePlus. Then we invoke the "Green" command on that ImagePlus instance, so that a linear green look-up table is assigned to it. And we show it.

from ij import IJ, ImagePlus, ImageStack

# Load a stack of images: a fly brain, in RGB
imp = IJ.openImage("https://imagej.nih.gov/ij/images/flybrain.zip")
stack = imp.getImageStack()

print "number of slices:", imp.getNSlices()

# A list of green slices
greens = []

# Iterate each slice in the stack
for i in xrange(1, imp.getNSlices()+1):
  # Get the ColorProcessor slice at index i
  cp = stack.getProcessor(i)
  # Get its green channel as a FloatProcessor
  fp = cp.toFloat(1, None)
  # ... and store it in a list
  greens.append(fp)

# Create a new stack with only the green channel
stack2 = ImageStack(imp.width, imp.height)
for fp in greens:
  stack2.addSlice(None, fp)

# Create a new image with the stack of green channel slices
imp2 = ImagePlus("Green channel", stack2)
# Set a green look-up table:
IJ.run(imp2, "Green", "")
imp2.show()
          

Convert an RGB stack to a 2-channel, 32-bit hyperstack

We load an RGB stack--the "Fly brain" sample image, as before.

Suppose we want to analyze each color channel independently: an RGB image doesn't really let us, without lots of low-level work to disentangle the different color values from each pixel value. So we convert the RGB stack to a hyperstack with two separate channels, where each channel slice is a 32-bit FloatProcessor.

The first step is to create a new ImageStack instance, to hold all the slices that we'll need: one per color channel, times the number of slices.
We ignore the blue channel (which is empty in the "Fly brain" image), so we end up creating twice as many slices as we had in the RGB stack.

Realize that we could have 7 channels if we wanted, or 20, for each slice. As many as you want.

The final step is to open the hyperstack. For that:

  1. We assign the new stack2 to a new ImagePlus, imp2.
  2. We set the same calibration (microns per pixel) that the original image has.
  3. We tell it how to interpret its image stack: as having two channels, the same amount of Z slices as before, and just 1 time frame.
  4. We pass the imp2 to a new CompositeImage, comp, indicating how we want it displayed: assign a color to each channel. (With CompositeImage.COMPOSITE, all channels would be merged for display.)
  5. We show the comp, which will open a stack window with two sliders: one for the channels, and one for the Z slices.

Open the "Image - Color - Channels Tool" and you'll see that the Composite image is set to show only the red channel--try checking the second channel as well.
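
Programmatically, the display mode of the comp instance created in the script below could be switched as well (a hedged sketch):

# Assuming 'comp' is the CompositeImage created in the script below
comp.setMode(CompositeImage.COMPOSITE) # merge all channels for display
comp.updateAndDraw()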


For a real-world example of a python script that uses hyperstacks, see the Correct_3D_drift.py script (available as the command "Plugins - Registration - Correct 3D drift").
The script takes an opened, virtual hyperstack as input, and registers in 3D every time frame to the previous one, using phase correlation, correcting any translations on the X,Y,Z axis. The script is useful for correcting sample drift under the microscope in long 4D time series.


from ij import IJ, ImagePlus, ImageStack, CompositeImage

# Load a stack of images: a fly brain, in RGB
imp = IJ.openImage("https://imagej.nih.gov/ij/images/flybrain.zip")
stack = imp.getImageStack()

# A new stack to hold the data of the hyperstack
stack2 = ImageStack(imp.width, imp.height)

# Convert each color slice in the stack
# to two 32-bit FloatProcessor slices
for i in xrange(1, imp.getNSlices()+1):
  # Get the ColorProcessor slice at index i
  cp = stack.getProcessor(i)
  # Extract the red and green channels as FloatProcessor
  red = cp.toFloat(0, None)
  green = cp.toFloat(1, None)
  # Add both to the new stack
  stack2.addSlice(None, red)
  stack2.addSlice(None, green)

# Create a new ImagePlus with the new stack
imp2 = ImagePlus("32-bit 2-channel composite", stack2)
imp2.setCalibration(imp.getCalibration().copy())

# Tell the ImagePlus to represent the slices in its stack
# in hyperstack form, and open it as a CompositeImage:
nChannels = 2             # two color channels
nSlices = stack.getSize() # the number of slices of the original stack
nFrames = 1               # only one time point 
imp2.setDimensions(nChannels, nSlices, nFrames)
comp = CompositeImage(imp2, CompositeImage.COLOR)
comp.show()
          

7. Interacting with humans: file and option dialogs, messages, progress bars.

Ask the user for a directory

See DirectoryChooser.

from ij.io import DirectoryChooser

dc = DirectoryChooser("Choose a folder")
folder = dc.getDirectory()

if folder is None:
  print "User canceled the dialog!"
else:
  print "Selected folder:", folder
          

Ask the user for a file

See OpenDialog and SaveDialog.

from ij.io import OpenDialog

od = OpenDialog("Choose a file", None)
filename = od.getFileName()

if filename is None:
  print "User canceled the dialog!"
else:
  directory = od.getDirectory()
  filepath = directory + filename
  print "Selected file path:", filepath
          

Ask the user to enter a few parameters in a dialog

There are more possibilities, but these are the basics. See GenericDialog.

All plugins that use a GenericDialog are automatable. Remember how above we ran a command on an image? When the names in the dialog fields match the names in the macro string, the values are filled into the dialog automatically. If a dialog field doesn't have a match, it takes the default value as defined in the dialog declaration.

If a plugin was using a dialog like the one we built here, we would run it automatically like this:

args = "name='first' alpha=0.5 output='32-bit' scale=80"

IJ.run(imp, "Some PlugIn", args)
          

Above, leaving out the word 'optimize' means that it will use the default value (True) for it.

from ij.gui import GenericDialog

def getOptions():
  gd = GenericDialog("Options")
  gd.addStringField("name", "Untitled")
  gd.addNumericField("alpha", 0.25, 2)  # show 2 decimals
  gd.addCheckbox("optimize", True)
  types = ["8-bit", "16-bit", "32-bit"]
  gd.addChoice("output as", types, types[2])
  gd.addSlider("scale", 1, 100, 100)
  gd.showDialog()
  #
  if gd.wasCanceled():
    print "User canceled dialog!"
    return
  # Read out the options
  name = gd.getNextString()
  alpha = gd.getNextNumber()
  optimize = gd.getNextBoolean()
  output = gd.getNextChoice()
  scale = gd.getNextNumber()
  return name, alpha, optimize, output, scale

options = getOptions()
if options is not None:
  name, alpha, optimize, output, scale = options
  print name, alpha, optimize, output, scale
          

Show a progress bar

This will show a progress bar in the Fiji window.

from ij import IJ

imp = IJ.getImage()
stack = imp.getImageStack()

for i in xrange(1, stack.getSize()+1):
  # Report progress
  IJ.showProgress(i, stack.getSize()+1)
  # Do some processing
  ip = stack.getProcessor(i)
  # ...

# Signal completion
IJ.showProgress(1)
          

8. Batch processing

  Apply the same operation to multiple images

Chances are, if you are scripting, it's because there's a task that has to be repeated many times over as many images. Above, we showed how to iterate over a list of files using os.listdir, applying a function to each file and printing a result.

Here, we will take two directories: a directory from which images are read (sourceDir) and another one into which modified images are written, or saved (targetDir).

There are two strategies for iterating images inside a directory:

  • With os.listdir, which is limited to listing files and directories inside a parent directory: works great for that. But if we wanted to also look into files within a nested directory, we would have to first find out whether a file is a directory with os.path.isdir, and if the file is a directory, then call os.listdir and process its files. This makes for cumbersome code, needing if and else statements and a helper function processDirectory so that we can invoke it recursively (on nested directories).
  • With os.walk, which, as the name suggests, iterates through the directory and nested directories (or subfolders), recursively, visiting every single directory and providing that directory as the root variable, together with the list of filenames that are not directories. The directories loop variable we ignore here, for we don't need it: os.walk will iterate through all of them in any case. The elegance of os.walk enables us to write concise code, which is therefore also less prone to errors.

For every file that we come across using either of the two file system traversing strategies, we could directly do something with it, or delegate to a helper function, named here loadProcessAndSave, which takes two arguments: a file path and another function. It loads the image, invokes the function given as argument, and then saves the result in the targetDir.

The actual work happens in the function normalizeContrast, which implements the operation that we want to apply to every image. The NormalizeLocalContrast plugin (see documentation on the algorithm) is useful for e.g. removing uneven background illumination and maximizing the contrast locally, that is, using for every pixel information gathered from nearby pixels only, rather than from the whole image.

The NormalizeLocalContrast plugin uses the integral image technique (also known as summed-area table), which computes a value for each pixel on the basis of its neighboring pixels (a window of arbitrary size centered on the pixel) while only iterating over each pixel twice: once to create an interim integral image (that we never see), and a second time to perform the desired computation. A naive approach would revisit pixels many times, as a function of the dimension of the window around any one pixel, because two consecutive pixels share a lot of neighbors when the window is large. By not revisiting pixels many times, the integral image approach is much faster because it performs fewer operations and revisits memory locations fewer times. The trade-off is in the shape of the window around every pixel: it is a square, rather than a more traditional circle defined by a radius around each pixel. Using a square is not an impediment for performing complex computations (such as very fast approximations of Gabor filters for detecting e.g. object contours for use as features in machine learning-based segmentation; "Integral Channel Features", Dollar et al. 2009).
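
To make the idea concrete, here is a hedged, pure-python sketch of a summed-area table and of a box sum computed from it with just four lookups; it only illustrates the technique, and is not the plugin's actual implementation:

def integralImage(pixels, width, height):
  """ Build a summed-area table: the entry for (x,y) holds the sum
      of all pixels with coordinates at most (x,y). """
  ii = [0.0] * (width * height)
  for y in xrange(height):
    rowsum = 0.0
    for x in xrange(width):
      rowsum += pixels[y * width + x]
      above = ii[(y - 1) * width + x] if y > 0 else 0.0
      ii[y * width + x] = rowsum + above
  return ii

def boxSum(ii, width, x0, y0, x1, y1):
  """ Sum of all pixels inside the box [x0..x1] x [y0..y1], inclusive. """
  A = ii[(y0 - 1) * width + (x0 - 1)] if x0 > 0 and y0 > 0 else 0.0
  B = ii[(y0 - 1) * width + x1] if y0 > 0 else 0.0
  C = ii[y1 * width + (x0 - 1)] if x0 > 0 else 0.0
  D = ii[y1 * width + x1]
  return D - B - C + A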

The NormalizeLocalContrast plugin can correct for background illumination issues quite well, and very fast. To explore the parameters, first load a single image and find out which window size gives the desired output, having ticked the "preview" checkbox. The plugin can be invoked from "Plugins - Integral image filters - Normalize local contrast".

Using either of the two strategies for traversing directories, we'll load a bunch of images from a source directory (sourceDir), then apply the local contrast normalization, and save the result in the target directory (targetDir). These 3 operations are wrapped in a try/except because some filenames may not be images or couldn't be loaded (possible), or for some reason the plugin doesn't know how to handle them (unlikely). In the except code block, notice that any file path that failed is printed out.

Notice that we pass the normalizeContrast function as an argument to loadProcessAndSave: the latter is generic, and could equally invoke any other function that operates on an ImagePlus. The actual code for batch processing, therefore, consists of a mere 3 lines (in strategy #2) to visit all files, and a helper function loadProcessAndSave to robustly execute the desired operation on every image.

 

import os, sys
from mpicbg.ij.plugin import NormalizeLocalContrast
from ij import IJ, ImagePlus
from ij.io import FileSaver

sourceDir = "/tmp/images-originals/"
targetDir = "/tmp/images-normalized/"


# A function that takes an input image and returns a contrast-normalized one
def normalizeContrast(imp):
  # The width and height of the box centered at every pixel:
  blockRadiusX = 200 # in pixels
  blockRadiusY = 200
  # The number of standard deviations to expand to
  stds = 2
  # Whether to expand from the median value of the box or the pixel's value
  center = True
  # Whether to stretch the expanded values to the pixel depth of the image
  # e.g. between 0 and 255 for 8-bit images, or e.g. between 0 and 65536, etc.
  stretch = True
  # Duplicate the ImageProcessor
  copy_ip = imp.getProcessor().duplicate()
  # Apply contrast normalization to the copy
  NormalizeLocalContrast().run(copy_ip, blockRadiusX, blockRadiusY, stds, center, stretch)
  # Return as new image
  return ImagePlus(imp.getTitle(), copy_ip)


# A function that takes a file path, attempts to load it as an image,
# normalizes it, and saves it in a different directory
def loadProcessAndSave(sourcepath, fn):
  try:
    imp = IJ.openImage(sourcepath)
    norm_imp = fn(imp) # invoke function 'fn', in this case 'normalizeContrast'
    targetpath = os.path.join(targetDir, os.path.basename(sourcepath))
    if not targetpath.endswith(".tif"):
      targetpath += ".tif"
    FileSaver(norm_imp).saveAsTiff(targetpath)
  except:
    print "Could not load or process file:", sourcepath
    print sys.exc_info()


# Strategy #1: nested directories with os.listdir and os.path.isdir
def processDirectory(theDir, fn):
  """ For every file in theDir, check if it is a directory, if so, invoke recursively.
      If not a directory, invoke 'loadProcessAndSave' on it. """
  for filename in os.listdir(theDir):
    path = os.path.join(theDir, filename)
    if os.path.isdir(path):
      # Recursive call
      processDirectory(path, fn)
    else:
      loadProcessAndSave(path, fn)

# Launch strategy 1:
processDirectory(sourceDir, normalizeContrast)


# Strategy #2: let os.walk do all the work
for root, directories, filenames in os.walk(sourceDir):
  for filename in filenames:
    loadProcessAndSave(os.path.join(root, filename), normalizeContrast)

					

  Create a VirtualStack as a vehicle for batch processing

ImageJ owes much of its success to the VirtualStack: an image stack whose individual slices are not stored in memory. What it stores is the recipe for generating each slice. The original VirtualStack loaded on demand each individual slice from a file that encoded a 2D image. For all purposes, a VirtualStack operates like a fully memory-resident ImageStack. The extraordinary ability to load image stacks larger than the available computer memory is wonderful, with only a trade-off in speed: having to load each slice on demand has a cost.

Batch processing is one of the many uses of the VirtualStack. From "File - Open - Image Sequence", choose a folder and a file name pattern, and load the whole folder of 2D images as a VirtualStack.

Programmatically, a VirtualStack can be created (among other ways) by providing the width and height, and the path to the directory containing the images.

We define the function dimensionsOf to read the width and height from the header of an image file. The BioFormats library is very powerful, and among its features it offers a ChannelSeparator, which, despite its odd name (it has other capabilities not relevant here), is capable of parsing image file headers without actually reading the whole image. Because a file path is opened, we close it safely in a try/finally block, ensuring that the fr.close() is always invoked regardless of errors within the try block. While we could have also simply typed in the numbers for the width, height, or loaded the whole first image to find them out via getWidth(), getHeight() on the ImagePlus, now you know how to extract the width, height from the header of an image file.

Then the function tiffImageFilenames returns a generator, which is essentially a list that is constructed one item at a time on the fly using the yield built-in keyword. Here, we yield only filenames under sourceDir that end in ".tif" and therefore most likely images stored in TIFF format. Importantly, now sorting matters, as we are to display the images sequentially in the stack: so the loop is done over a sorted version of the list of files returned by os.listdir. Also note we call lower on the filename to obtain an all-lowercase version, so that we can handle both ".TIF" and ".tif", while we still return the untouched, original filename. (The string returned by lower is used only for the if test and discarded immediately.)

Now, we obtain the first_path by combining the sourceDir and the first yielded image file path (by calling next on the generator returned by tiffImageFilenames).

Then, we extract the width, height from the header of the image file under first_path.

The VirtualStack can then be constructed with the width, height, a null ColorModel (given here as None; will be found out later), and the sourceDir. To this vstack we then add every TIFF image present in sourceDir.

All we've done so far is construct the VirtualStack. We can now wrap it with an ImagePlus (just like before we wrapped an ImageProcessor) and show it.

Importantly, a VirtualStack has no permanence: feel free to run a plugin such as NormalizeLocalContrast (from "Plugins - Integral image filters - Normalize Local Contrast") on one of its slices: the moment you navigate to another slice, and then come back, the changes are lost.

 

import os, sys
from ij import IJ, ImagePlus, VirtualStack
from loci.formats import ChannelSeparator


sourceDir = "/tmp/images-originals/"


# Read the dimensions of the image at path by parsing the file header only,
# thanks to the LOCI Bioformats library
def dimensionsOf(path):
  fr = None
  try:
    fr = ChannelSeparator()
    fr.setGroupFiles(False)
    fr.setId(path)
    return fr.getSizeX(), fr.getSizeY()
  except:
    # Print the error, if any
    print sys.exc_info()
  finally:
    if fr is not None:
      fr.close()


# A generator over all file paths in sourceDir
def tiffImageFilenames(directory):
  for filename in sorted(os.listdir(directory)):
    if filename.lower().endswith(".tif"):
      yield filename


# Read the dimensions from the first image
first_path = os.path.join(sourceDir, tiffImageFilenames(sourceDir).next())
width, height = dimensionsOf(first_path)

# Create the VirtualStack without a specific ColorModel
# (which will be set much later upon loading any slice)
vstack = VirtualStack(width, height, None, sourceDir)

# Add all TIFF images in sourceDir as slices in vstack
for filename in tiffImageFilenames(sourceDir):
  vstack.addSlice(filename)

# Visualize the VirtualStack
imp = ImagePlus("virtual stack of images in " + os.path.basename(sourceDir), vstack)
imp.show()
					

  Process slices of a VirtualStack and save them to disk

With the VirtualStack now loaded, we can use it as the way to convert image file paths into images, process them, and save them into a targetDir.

That's exactly what is done here. First, we define the targetDir, import the necessary classes, iterate over each slice (notice slices are indexed starting from one, not from zero), and directly apply the NormalizeLocalContrast to the ImageProcessor of every slice. Remember: changes to slices of a VirtualStack are not permanent, so browsing the stack will reload the data of each slice anew, directly from the source images.

Notice that nothing here is actually specific to virtual stacks. Any normal stack can be processed in exactly the same way.

We could now open a second VirtualStack listing not the original images in sourceDir, but the processed images in targetDir. I leave this as an exercise for the reader.

 

vstack = ... # defined above, or get it from the active image window
             # via IJ.getImage().getStack()
targetDir = "/tmp/images-normalized/"

import os
from ij import ImagePlus
from ij.io import FileSaver
from mpicbg.ij.plugin import NormalizeLocalContrast

# Process and save every slice in targetDir
for i in xrange(0, vstack.size()):
  ip = vstack.getProcessor(i+1) # 1-based listing of slices
  # Run the NormalizeLocalContrast plugin on the ImageProcessor
  NormalizeLocalContrast.run(ip, 200, 200, 3, True, True)
  # Store the result
  name = vstack.getFileName(i+1)
  if not name.lower().endswith(".tif"):
    name += ".tif"
  FileSaver(ImagePlus(name, ip)).saveAsTiff(os.path.join(targetDir, name))

					

  Filter each slice of a VirtualStack dynamically

What we could do instead is filter the images after they are loaded, but before they are used to render slices of the VirtualStack. To this end, here we will create your first python class: FilterVirtualStack, which extends the VirtualStack class (so that we don't have to reimplement most of its functionality, which doesn't change).

The keyword class is used. A class has an opening declaration that includes the name (FilterVirtualStack) and, in parentheses, zero or more superclasses or interfaces separated by commas (here, only the superclass VirtualStack).

Then zero or more function definitions follow: these are the methods of the class.

Notice the first argument of every function, self: it is the equivalent of the "this" keyword in java, and provides the means to change properties and invoke methods (functions) of the class. You could name it "this" instead of "self" if you wanted; it doesn't matter, except that the convention in python is to use the word "self".

The function named __init__ is, by convention, the constructor: it is invoked when we want to create a new instance of the class. Here, the body of the function has 3 statements:

  1. an invocation of a superclass constructor via super: we initialize the class by first initializing its superclass, VirtualStack, with the arguments width, height, None, sourceDir (just like we did above in the previous script).
  2. the storing as a member field (a property of the class instance) of the params, which is a dictionary containing the parameters for running later the NormalizeLocalContrast plugin.
  3. the finding of all TIFF files under sourceDir and adding them, sorted, as slices of the stack instance. (Before we did this after creating the VirtualStack; here, for convenience, we do it already within the constructor.)

The next and last method to implement is getProcessor. This is the key method: the ImageProcessor that it returns will be used as the pixel data of the current stack slice. Whatever modifications we do to it will appear in the data. So the method loads the appropriate image from disk (at filepath), gets its processor (named ip as is convention), then retrieves the parameters for the NormalizeLocalContrast plugin from the self.params dictionary, and invokes the plugin on the ip. Finally, it returns the ip.

Once the class is defined, we declare the parameters for the filtering plugin in the params dictionary, which we then use to construct the FilterVirtualStack together with the sourceDir from which to retrieve the image files, and the width, height that, here, I hard-coded, but we could have discovered them from e.g. parsing the header of the first image as we did above. We construct an ImagePlus and show it. Done!

Now you may ask: how is this batch processing? Only one image is retrieved at a time; but if you were to run "File - Save As - Image Sequence", every original image would be saved into the directory of your choice, in the format and filename pattern of your choice, transformed (i.e. filtered) by the NormalizeLocalContrast.

The critical advantage of this approach is two-fold: first, you get to preview the result of the chosen parameters without having to load and process all the images. Second, if you run the script from an interactive session (e.g. from the "Plugins - Scripting - Jython Interpreter"), you may edit the params dictionary on the fly, and merely browsing the stack slices back and forth will render them with the new parameters (or invoke imp.updateAndDraw() to update the current slice).
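
As a sketch of such an interactive tweak (assuming the script below has already run, so that imp shows the FilterVirtualStack, and assuming getStack() hands back our jython instance with its params attribute):

stack = imp.getStack() # the FilterVirtualStack instance
stack.params["blockRadiusX"] = 100 # change parameters on the fly
stack.params["blockRadiusY"] = 100
imp.updateAndDraw() # re-render the current slice with the new parameters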

 

import os
from ij import IJ, ImagePlus, VirtualStack
from mpicbg.ij.plugin import NormalizeLocalContrast

class FilterVirtualStack(VirtualStack):
  def __init__(self, width, height, sourceDir, params):
    # Tell the superclass to initialize itself with the sourceDir
    super(VirtualStack, self).__init__(width, height, None, sourceDir)
    # Store the parameters for the NormalizeLocalContrast
    self.params = params
    # Set all TIFF files in sourceDir as slices
    for filename in sorted(os.listdir(sourceDir)):
      if filename.lower().endswith(".tif"):
        self.addSlice(filename)
  
  def getProcessor(self, n):
    # Load the image at index n
    filepath = os.path.join(self.getDirectory(), self.getFileName(n))
    imp = IJ.openImage(filepath)
    # Filter it:
    ip = imp.getProcessor()
    blockRadiusX = self.params["blockRadiusX"]
    blockRadiusY = self.params["blockRadiusY"]
    stds = self.params["stds"]
    center = self.params["center"]
    stretch = self.params["stretch"]
    NormalizeLocalContrast.run(ip, blockRadiusX, blockRadiusY, stds, center, stretch)
    return ip

# Parameters for the NormalizeLocalContrast plugin
params = {
  "blockRadiusX": 200, # in pixels
  "blockRadiusY": 200, # in pixels
  "stds": 2, # number of standard deviations to expand to
  "center": True, # whether to anchor the expansion on the median value of the block
  "stretch": True # whether to expand the values to the full range, e.g. 0-255 for 8-bit
}

sourceDir = "/tmp/images-originals/"
width, height = 2048, 2048 # Or obtain them from e.g. dimensionsOf defined in an earlier script

vstack = FilterVirtualStack(width, height, sourceDir, params)

imp = ImagePlus("FilterVirtualStack with NormalizeLocalContrast", vstack)
imp.show()

					

9. Turn your script into a plugin

Save the script in Fiji's plugins folder or a subfolder, with:

  • An underscore "_" in the name.
  • ".py" file extension.
For example: "my_script.py"

Then run "Help - Update Menus", or restart Fiji. That's it!

The script will appear as a regular menu command under "Plugins", and you'll be able to run it from the Command Finder.

Where is the plugins folder?

  • In MacOSX, it's inside the Fiji application:
    1. Go to the "Applications" folder in the Finder.
    2. Right-click on the Fiji icon and select "Show package contents"
  • In Ubuntu and in Windows, it's inside the "Fiji.app" folder.

See also the Fiji wiki on Jython Scripting.


10. Lists, native arrays, and interacting with Java classes

Jython lists are passed as read-only arrays to Java classes

Calling java classes and methods from jython is seamless: on the surface, there isn't any difference from calling jython classes and methods. But there is a subtle difference when calling java methods that expect native arrays.

Jython will automatically present a jython list as a native array to the java method that expects it. But as read-only!

In this example, we create an AffineTransform that specifies a translation. Then we give it:

  • A 2D point defined as a list of 2 numbers: the list fails to be updated in place by the transform method of the affine.
  • A 2D point defined as a native float array of 2 numbers: the array is correctly updated in place.

The ability to pass jython lists as native arrays to java methods is extremely convenient, and we have used it in the example above to pass a list of strings to the GenericDialog addChoice method.

from java.awt.geom import AffineTransform
from array import array

# A 2D point
x = 10
y = 40

# A transform that does a translation
# of dx=45, dy=56
aff = AffineTransform(1, 0, 0, 1, 45, 56)

# Create a point as a list of x,y
p = [x, y]
aff.transform(p, 0, p, 0, 1)
print p
# prints: [10, 40] -- the list p was not updated!

# Create a point as a native float array of x,y
q = array('f', [x, y])
aff.transform(q, 0, q, 0, 1)
print q
# prints: [55, 96] -- the native array q was updated properly
          
Started New_.py at Sat Nov 13 09:31:51 CET 2010
[10, 40]
array('f', [55.0, 96.0])
          

Creating native arrays: empty, or from a list

The package array contains two functions:

  • zeros: to create empty native arrays.
  • array: to create an array out of a list, or out of another array of the same kind.

The type of array is specified by the first argument. For primitive types (char, short, int, float, long, double), use a single character in quotes. See the list of all possible characters.

Manipulating arrays is done in the same way that you would in java (see the example below, where an element of the imps array is assigned). But in jython, these arrays have built-in conveniences such as the '+' sign to concatenate two arrays into a new, longer one. In some ways, arrays behave very much like lists, and offer functions like extend (to grow the array using elements from an iterable like a list, tuple, or generator), append, pop, insert, reverse, index and others like tolist, tostring, fromstring, fromlist.
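
A brief sketch of these conveniences, using only the array package (no Fiji classes involved):

from array import array

a = array('f', [1, 2])
b = array('f', [3, 4])
c = a + b         # concatenate into a new, longer native array
a.extend([5, 6])  # grow 'a' in place from any iterable
a.append(7)       # add a single element at the end
print c           # array('f', [1.0, 2.0, 3.0, 4.0])
print a.tolist()  # [1.0, 2.0, 5.0, 6.0, 7.0]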

See also the documentation on how to create multidimensional native arrays with Jython.

In addition to the array package, jython provides the jarray package (see documentation). The difference between the two is unclear; the major visible difference is the order of arguments when invoking their homonymous functions zeros and array: in the array package, the type character is provided first; in jarray, second. Perhaps the only relevant difference is that the array package supports more types of arrays (such as unsigned int, etc.) that java doesn't support natively (java has only signed native arrays), whereas the jarray package merely allows the creation of native java signed arrays.
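
For comparison, a minimal jarray sketch; note the reversed argument order relative to the array package:

from jarray import zeros, array

print zeros(5, 'f')         # a native float[] of length 5, filled with zeros
print array([1, 2, 3], 'i') # a native int[] created from a list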

 

from array import array, zeros
from ij import IJ, ImagePlus

# An empty native float array of length 5
a = zeros('f', 5)
print a

# A native float array with values 0 to 9
b = array('f', [0, 1, 2, 3, 4])
print b

# An empty native ImagePlus array of length 5
imps = zeros(ImagePlus, 5)
print imps

# Assign the current image to the first element of the array
imps[0] = IJ.getImage()
print imps

# Length of an array
print "length:", len(imps)
          
Started New_.py at Sat Nov 13 09:40:00 CET 2010
array('f', [0.0, 0.0, 0.0, 0.0, 0.0])
array('f', [0.0, 1.0, 2.0, 3.0, 4.0])
array(ij.ImagePlus, [None, None, None, None, None])
array(ij.ImagePlus, [imp[boats.gif 720x576x1], None, None, None, None])
length: 5
          

11. Generic algorithms that work on images of any kind: using ImgLib

Imglib is a general-purpose software library for n-dimensional data processing, mostly oriented towards images. Scripting with Imglib greatly simplifies operations on images of different types (8-bit, 16-bit, color images, etc).

Scripting in imglib is based around the Compute function, which composes images, functions and numbers into output images.


Mathematical operations on images

The script.imglib package offers means to compute with images. There are four kinds of operations, each in its own package:

  • script.imglib.math: offers functions that operate on each pixel. These functions are composable: the result of one function may be used as the input to another function.
    These math functions accept any possible pair of: images, numbers, and other functions.
  • script.imglib.color: offers functions to create and manipulate color images, for example to extract specific color channels either in RGB or in HSB color space. The functions to extract channels or specific color spaces are composable with mathematical functions. For example, to subtract one color channel from another.
    These color functions are composable with math functions.
  • script.imglib.algorithm: offers functions such as Gauss, Scale3D, Affine3D, Resample, Downsample ... that alter many pixels in one pass--they are not pixel-wise operations. Some change the dimensions of an image.
    These algorithm functions all return images; in other words, they return the result image of applying the function to the input image.
  • script.imglib.analysis: offers functions to extract or measure images or functions that evaluate to images. For example, the DoGPeak, which finds intensity peaks in the image by difference of Gaussian, returns a list of the coordinates of the found peaks.
    These analysis functions return collections of results.

from script.imglib.math import Compute, Subtract
from script.imglib.color import Red, Green, Blue, RGBA
from script.imglib import ImgLib
from ij import IJ

# Open an RGB image stack
imp = IJ.openImage("https://imagej.nih.gov/ij/images/flybrain.zip")

# Wrap it as an Imglib image
img = ImgLib.wrap(imp)

# Example 1: subtract red from green channel
sub = Compute.inFloats(Subtract(Green(img), Red(img)))

ImgLib.wrap(sub).show()

# Example 2: subtract red from green channel, and compose a new RGBA image
rgb = RGBA(Red(img), Subtract(Green(img), Red(img)), Blue(img)).asImage()

ImgLib.wrap(rgb).show()
          

Using image math for flat-field correction

In the example, we start by opening an image from the sample image collection of ImageJ.
Then, since we lack a flat-field image, we simulate one. We could do it using a median filter with a very large radius, but that would be too expensive to compute just for this example. Instead, we scale down the image, apply a Gaussian blur to the scaled-down image, and then resample the result up to the original image dimensions.
Then we do the math for flat-field correction:

  1. Subtract the brightfield from the image. (The brightfield is an image taken in the same conditions as the data image, but without the specimen: just the dust and debris and uneven illumination of the microscope.)
  2. Subtract the darkfield from the image. (The darkfield could represent the thermal noise in the camera chip.)
  3. Divide the result of 1 by the result of 2.
  4. Multiply the result of 3 by the mean intensity of the original image.

With imglib, all the above operations happen on a pixel-by-pixel basis, and are computed as fast as or faster than if you had hand-coded every operation. And multithreaded!


from script.imglib.math import Compute, Divide, Multiply, Subtract
from script.imglib.algorithm import Gauss, Scale2D, Resample
from script.imglib import ImgLib
from ij import IJ

# 1. Open an image
img = ImgLib.wrap(IJ.openImage("https://imagej.nih.gov/ij/images/bridge.gif"))

# 2. Simulate a brightfield from a Gauss with a large radius
# (First scale down by 4x, then gauss of radius=20, then scale up)
brightfield = Resample(Gauss(Scale2D(img, 0.25), 20), img.getDimensions())

# 3. Simulate a perfect darkfield
darkfield = 0

# 4. Compute the mean pixel intensity value of the image
mean = reduce(lambda s, t: s + t.get(), img, 0) / img.size()

# 5. Correct the illumination
corrected = Compute.inFloats(Multiply(Divide(Subtract(img, brightfield),
                                             Subtract(brightfield, darkfield)), mean))

# 6. ... and show it in ImageJ
ImgLib.wrap(corrected).show()
          

Extracting and manipulating image color channels: RGBA and HSB

In the examples above we have already used the Red and Green functions. There's also Blue, Alpha, and a generic Channel that takes the channel index as argument--where red is 3, green is 2, blue is 1, and alpha is 4 (these numbers are related to the byte order within the 4 bytes that make up a 32-bit integer). The basic color operations have to do with extracting a color channel for a particular color space (RGBA or HSB).

The function RGBA takes from 1 to 4 arguments, and creates an RGBA image out of them. These arguments can be images, other functions, or numbers--in which case all pixels of that channel will have the given value, for example 255 (maximum intensity).
In the example, we create a new RGBA image that takes the Gaussian of the red channel, the value 40 for all pixels of the green channel, and the dithered image of the blue channel.
Notice that the Dither function returns 0 or 1 values for each pixel, hence we multiply them by 255 to make them full intensity of blue in the RGBA image.


from script.imglib.math import Compute, Subtract, Multiply
from script.imglib.color import Red, Blue, RGBA
from script.imglib.algorithm import Gauss, Dither
from script.imglib import ImgLib
from ij import IJ

# Obtain a color image from the ImageJ samples  
clown = ImgLib.wrap(IJ.openImage("https://imagej.nih.gov/ij/images/clown.jpg"))
  
# Example 1: compose a new image manipulating the color channels of the clown image:  
img = RGBA(Gauss(Red(clown), 10), 40, Multiply(255, Dither(Blue(clown)))).asImage()  
  
ImgLib.wrap(img).show()
          

In the second example, we extract the HSB channels from the clown image. To the Hue channel (which is expressed in the range [0, 1]), we add 0.5. We've shifted the hue around a bit.
To understand how the hue values work (by flooring the float value and subtracting that from it), see this page.
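
The wrap-around of the hue can be illustrated with plain numbers (a sketch, independent of imglib):

from math import floor

h = 0.8 + 0.5    # a shifted hue, now outside the [0, 1) range
h = h - floor(h) # keep only the fractional part
print h          # prints 0.3 (approximately): the hue wrapped around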

 


from script.imglib.math import Compute, Add, Subtract
from script.imglib.color import HSB, Hue, Saturation, Brightness
from script.imglib import ImgLib
from ij import IJ

# Obtain an image
img = ImgLib.wrap(IJ.openImage("https://imagej.nih.gov/ij/images/clown.jpg"))

# Obtain a new clown, whose hue has been shifted by half
# with the same saturation and brightness of the original
bluey = Compute.inRGBA(HSB(Add(Hue(img), 0.5), Saturation(img), Brightness(img)))

ImgLib.wrap(bluey).show()
          

In the third example, we apply a gamma correction to an RGB confocal stack. To correct the gamma, we must first extract each color channel from the image, and then apply the gamma to each channel independently. In this example we use a gamma of 0.5 for every channel. Of course you could apply different gamma values to each channel, or apply it only to specific channels.

Notice how we use asImage() instead of Compute.inRGBA. The result is the same; the former is syntactic sugar of the latter.

 


# Correct gamma
from script.imglib.math import Min, Max, Exp, Multiply, Divide, Log
from script.imglib.color import RGBA, Red, Green, Blue
from script.imglib import ImgLib
from ij import IJ

gamma = 0.5
img = ImgLib.wrap(IJ.getImage())

def g(channel, gamma):
  """ Return a function that, when evaluated, computes the gamma
	    of the given color channel.
      If 'i' was the pixel value, then this function would do:
      double v = Math.exp(Math.log(i/255.0) * gamma) * 255.0);
      if (v < 0) v = 0;
      if (v >255) v = 255;
  """
  return Min(255, Max(0, Multiply(Exp(Multiply(gamma, Log(Divide(channel, 255)))), 255)))

corrected = RGBA(g(Red(img), gamma), g(Green(img), gamma), g(Blue(img), gamma)).asImage()

ImgLib.wrap(corrected).show()
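
As noted above, asImage() and Compute.inRGBA are interchangeable. With the g, img and gamma defined in the script above, both of the following produce the same corrected image:

from script.imglib.math import Compute

rgba = RGBA(g(Red(img), gamma), g(Green(img), gamma), g(Blue(img), gamma))
corrected1 = rgba.asImage()       # syntactic sugar ...
corrected2 = Compute.inRGBA(rgba) # ... for this explicit call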
          

Find cells in a 3D image stack by Difference of Gaussian, count them, and show them in 3D as spheres.

First we define the cell diameter that we are looking for (5 microns; measure it with a line ROI over the image) and the minimum voxel intensity that we will care about (in this case, anything under a value of 40 will be ignored). Then we load the image of interest: a 3-color-channel image of the first instar Drosophila larval brain.

Then we scale down the image to make it isotropic: so that voxels have the same dimensions in all axes.

We run the DoGPeaks ("Difference of Gaussian Peaks") with a pair of appropriate sigmas: the scaled diameter of the cell, and half that.

Each peak is a float[] array that specifies its coordinates. With these, we create Point3f instances, which we bring back to calibrated image coordinates.

Finally, we show in the 3D Viewer the peaks as spheres, and the image as a 3D volume.


 
# Load an image of the Drosophila larval fly brain and segment
# the 5-micron diameter cells present in the red channel.

from script.imglib.analysis import DoGPeaks
from script.imglib.color import Red
from script.imglib.algorithm import Scale2D
from script.imglib.math import Compute
from script.imglib import ImgLib
from ij3d import Image3DUniverse
from javax.vecmath import Color3f, Point3f
from ij import IJ

cell_diameter = 5  # in microns
minPeak = 40 # The minimum intensity for a peak to be considered so.
imp = IJ.openImage("http://samples.fiji.sc//samples/first-instar-brain.zip")

# Scale the X,Y axis down to isotropy with the Z axis
cal = imp.getCalibration()
scale2D = cal.pixelWidth / cal.pixelDepth
iso = Compute.inFloats(Scale2D(Red(ImgLib.wrap(imp)), scale2D))

# Find peaks by difference of Gaussian
sigma = (cell_diameter  / cal.pixelWidth) * scale2D
peaks = DoGPeaks(iso, sigma, sigma * 0.5, minPeak, 1)
print "Found", len(peaks), "peaks"

# Convert the peaks into points in calibrated image space
ps = []
for peak in peaks:
  p = Point3f(peak)
  p.scale(cal.pixelWidth * 1/scale2D)
  ps.append(p)

# Show the peaks as spheres in 3D, along with orthoslices:
univ = Image3DUniverse(512, 512)
univ.addIcospheres(ps, Color3f(1, 0, 0), 2, cell_diameter/2, "Cells").setLocked(True)
univ.addOrthoslice(imp).setLocked(True)
univ.show()
          

12. ImgLib2: writing generic, high-performance image processing programs

For a high-level introduction to ImgLib2, see the ImgLib2 paper: Pietzsch, Preibisch, Tomančák and Saalfeld (2012), "ImgLib2--generic image processing in Java", Bioinformatics 28(22).

Views of an image, with ImgLib2

ImgLib2 is a powerful library with a number of key concepts for high-performance, memory-efficient image processing. One such concept is that of a view of an image.

First, wrap a regular ImageJ ImagePlus into an ImgLib2 image, with the 'wrap' function in the ImageJFunctions namespace (AKA a static method), which we alias as IL for brevity using the as keyword in the import line.

Then, we view the image as an infinite image, using the Views.extendZero function: beyond the boundaries of the image, return the value zero as the pixel value.

An infinite image cannot be visualized in full. Therefore, we apply the Views.interval function to delimit it: in this example, to a "canvas" twice as large as before, with the image centered.

Then, we wrap the ImgLib2 interval imgL into an ImageJ ImagePlus (using a modified VirtualStack that reads directly from the imgL), and show it.

Importantly, no pixel data was duplicated at any step. The Views concept enables us to define transformations to the image that are then concatenated and finally used to render the final image.

And furthermore, thanks to ImgLib2's underlying dimension-independent, data source-independent, and image type-independent model, this code applies to any image of any type and dimensions: images, volumes, 4D series. ImgLib2 is a very powerful library.

from ij import IJ
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.view import Views

# Load an image (of any dimensions) such as the clown sample image
imp = IJ.getImage()

# Convert to 8-bit if it isn't yet, using macros
IJ.run(imp, "8-bit", "")

# Access its pixel data from an ImgLib2 RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, with a value of zero beyond the image edges
imgE = Views.extendZero(img)

# Limit the infinite image with an interval twice as large as the original,
# so that the original image remains at the center.
# It starts at minus half the image width, and ends at 1.5x the image width.
minC = [int(-0.5 * img.dimension(i)) for i in range(img.numDimensions())]
maxC = [int( 1.5 * img.dimension(i)) for i in range(img.numDimensions())]
imgL = Views.interval(imgE, minC, maxC)

# Visualize the enlarged canvas, so to speak
imp2 = IL.wrap(imgL, imp.getTitle() + " - enlarged canvas") # an ImagePlus
imp2.show()
          


There are multiple strategies for filling in the space beyond an image's boundaries. Above, we used Views.extendZero, which trivially sets the "outside" to the pixel value zero. But there are several variants, including Views.extendValue for arbitrary pixel values instead of zero, Views.extendMirrorSingle and Views.extendMirrorDouble for mirroring the pixel values relative to the nearest image border, and others. See Views for details and for more.
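
For instance, a minimal sketch of Views.extendValue, filling the outside with an arbitrary grey value instead of zero (reusing the 8-bit img, minC, maxC, imp and IL from the script above):

from net.imglib2.view import Views
from net.imglib2.type.numeric.integer import UnsignedByteType

imgV = Views.extendValue(img, UnsignedByteType(127)) # mid-grey beyond the edges
impV = IL.wrap(Views.interval(imgV, minC, maxC), imp.getTitle() + " - grey canvas")
impV.show()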

In this example, we use Views.extendMirrorSingle and the effect is clear when we take an interval over it just like the one above: instead of the image surrounded by black space, we get mirror copies in every direction beyond the edges of the original image, which remains centered.

The various extended views each have their purpose. Extending makes it possible, for example, to avoid writing special-purpose code for algorithms that use a moving window around every pixel: the pixels on or near the border (depending on the size of the window) would otherwise need to be special-cased. Instead, with extended views, you can specify what data should be present beyond the border (a constant value, a mirror reflection of the image), and reduce enormously the complexity of your code.

You could also use them like ROIs (regions of interest): obtain a View on a specific region of the image, and apply to it any code that runs on whole images. Views simplify programming for image processing a lot.


img = ... # See above

# View mirroring the data beyond the edges
imgE = Views.extendMirrorSingle(img)
imgL = Views.interval(imgE, minC, maxC)

# Visualize the enlarged canvas, so to speak
imp2 = IL.wrap(imgL, imp.getTitle() + " - enlarged canvas") # an ImagePlus
imp2.show()
					


Counting cells: difference of Gaussian peak detection with ImgLib2

First we load ImageJ's "embryos" example image, which is RGB, and convert it to 8-bit (16-bit or 32-bit would work just fine). Then we wrap it as an ImgLib2 image, and acquire a mirroring infinite view of the image which is suitable for computing Gaussians.

The parameters of ImgLib2's Difference of Gaussian detection (DogDetection) are relatively straightforward. The key parameters are the sigmaLarger and sigmaSmaller, which define the sigmas of the two Gaussians that will be subtracted one from the other. The minPeakValue acts as a filter for noisy detections. The calibration would be useful in e.g. an LSM 3D volume where the Z axis has typically a lower resolution than the X and Y axes.

For visual validation, we read out the detected peaks as a PointRoi that we set on the imp, the original ImagePlus with the embryos (see image below with a PointRoi point on each embryo).

Then, we set out to measure a small interval around each detected peak (each embryo). For this, we use the sigmaSmaller, which is half of the radius of an embryo (determined empirically by using a line ROI over embryos and pushing 'm' to measure them), so that we define a 2d box around the peak, with a side twice that of sigmaSmaller plus one. Ideally, one would use a circular ROI by using a HyperSphere, but a square ROI as obtained with Views.interval will more than suffice here.

  Picking and measuring areas with Views.interval

To sum the pixel intensity values within the interval, we use Views.flatIterable on the interval, which provides a view that can be serially iterated over the interval. Otherwise, the interval, which is a RandomAccessibleInterval, would yield its pixel values only if we gave it each pixel coordinate to be measured. Then, we iterate each small view, obtaining a t (a Type) instance for every pixel, which in ImgLib2 is one of the key design features that enables so much indirection without sacrificing performance. To the t Type, which is a subclass of NumericType, we ask it to yield an integer with t.getInteger(). Python's built-in sum function adds up all the values of the generator (no list is created).

  Listing measurements with a Results Table

Finally, the peak X,Y coordinates and the sum of pixel values within the interval are added to an ImageJ ResultsTable.

 


from ij import IJ
from ij.gui import PointRoi
from ij.measure import ResultsTable
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.view import Views
from net.imglib2.algorithm.dog import DogDetection
from jarray import zeros

# Load a greyscale single-channel image: the "Embryos" sample image
imp = IJ.openImage("https://imagej.nih.gov/ij/images/embryos.jpg")
# Convert it to 8-bit
IJ.run(imp, "8-bit", "")

# Access its pixel data from an ImgLib2 data structure: a RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, mirrored at the edges which is ideal for Gaussians
imgE = Views.extendMirrorSingle(img)

# Parameters for a Difference of Gaussian to detect embryo positions
calibration = [1.0 for i in range(img.numDimensions())] # no calibration: identity
sigmaSmaller = 15 # in pixels: half the radius of an embryo
sigmaLarger = 30  # in pixels: the radius of an embryo
extremaType = DogDetection.ExtremaType.MAXIMA
minPeakValue = 10
normalizedMinPeakValue = False

# In the difference of Gaussian peak detection, the img acts as the interval
# within which to look for peaks. The processing is done on the infinite imgE.
dog = DogDetection(imgE, img, calibration, sigmaSmaller, sigmaLarger,
  extremaType, minPeakValue, normalizedMinPeakValue)

peaks = dog.getPeaks()

# Create a PointRoi from the DoG peaks, for visualization
roi = PointRoi(0, 0)
# A temporary array of integers, one per dimension the image has
p = zeros(img.numDimensions(), 'i')
# Load every peak as a point in the PointRoi
for peak in peaks:
  # Read peak coordinates into an array of integers
  peak.localize(p)
  roi.addPoint(imp, p[0], p[1])

imp.setRoi(roi)

# Now, iterate each peak, defining a small interval centered at each peak,
# and measure the sum of total pixel intensity,
# and display the results in an ImageJ ResultsTable.
table = ResultsTable()

for peak in peaks:
  # Read peak coordinates into an array of integers
  peak.localize(p)
  # Define limits of the interval around the peak:
  # (sigmaSmaller is half the radius of the embryo)
  minC = [p[i] - sigmaSmaller for i in range(img.numDimensions())]
  maxC = [p[i] + sigmaSmaller for i in range(img.numDimensions())]
  # View the interval around the peak, as a flat iterable (like an array)
  fov = Views.interval(img, minC, maxC)
  # Compute sum of pixel intensity values of the interval
  # (The t is the Type that mediates access to the pixels, via its get* methods)
  s = sum(t.getInteger() for t in fov)
  # Add to results table
  table.incrementCounter()
  table.addValue("x", p[0])
  table.addValue("y", p[1])
  table.addValue("sum", s)

table.show("Embryo intensities at peaks")
					


  Generative image: simulating embryo segmentation

Here, I am showing how to express images whose underlying data is not the typical array of pixels, but rather, each pixel value is chosen as a function of the spatial coordinate. The underlying pixel data is just the function. In this example, a white pixel is returned when the pixel falls within a radius of the detected embryo, and a black pixel otherwise, for the background.

You may ask yourself what the point of this simulated object segmentation is. It is merely to illustrate how these function-based images can be created. Practical uses will come later. If you want a real segmentation of the area of these embryos, see Fiji/ImageJ's "Analyze - Analyze Particles...", or the machine-learning based Trainable Weka Segmentation and SIOX simple interactive object extraction plugins.

First, we detect embryos using the Difference of Gaussian approach used above, with the DogDetection class. From this, we obtain the centers of all detected embryos, in floating-point coordinates.

Second, we define a value for the inside of the embryo (white), and another for the outside (black, the background).

Then we specify the radius that we want to paint with the inside value around the center coordinate of every detected embryo.

And crucially, we construct a KDTree, which is a data structure for fast spatial queries of coordinates. Here we use the kdtree to swiftly find, for every pixel in the final image, the nearest embryo center point.

Then, we define our "image". In quotes, because it is not an image. What we define is a method to obtain pixel values at arbitrary spatial coordinates, returning either inside (white) or outside (black) depending on the position in space for which we request a value. To this end, we define a new class Circles that is a RealRandomAccess, and, to avoid having to implement all the necessary methods of the RealRandomAccess interface, we extend the RealPoint class too, because it already implements pretty much everything we need except the critical get method from the Sampler interface. In other words, the only practical difference between a RealPoint and a RealRandomAccess is that the latter also implements the Sampler interface for requesting values.

The search is implemented using a NearestNeighborSearchOnKDTree, which does exactly what it says, and offers a stateful search method: first we invoke search (at the implicit current spatial coordinate of the RealRandomAccess part of the Circles class), and then we ask for the square distance from the current coordinate to the nearest point found by the search. On the basis of that result--comparing it with the radius squared--either the inside or the outside value is returned.

All that remains now is using the Circles RealRandomAccess as the data provider for a RealRandomAccessible that we name CircleData, which is still in real coordinates and unbounded. So we view it in a rasterized way, to be able to iterate it with integer coordinates--like the pixels of an image--, and define its bounds to be those of the original image img containing the embryos (that is, img here can be used because it implements Interval and happens to have exactly the dimensions we want). The "pixels" never exist in memory until they are written to the final image that is visualized. Voilà.

from ij import IJ
from net.imglib2.view import Views
from net.imglib2.algorithm.dog import DogDetection
from net.imglib2 import KDTree, RealPoint, RealRandomAccess, RealRandomAccessible
from net.imglib2.neighborsearch import NearestNeighborSearchOnKDTree
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.type.numeric.integer import UnsignedByteType
from java.util import AbstractList

# The difference of Gaussian calculation, same as above (compressed)
# (or, paste this script under the script above where 'dog' is defined)
imp = IJ.openImage("https://imagej.nih.gov/ij/images/embryos.jpg")
IJ.run(imp, "8-bit", "")
img = IL.wrapReal(imp)
imgE = Views.extendMirrorSingle(img)
dog = DogDetection(imgE, img, [1.0, 1.0], 15.0, 30.0,
  DogDetection.ExtremaType.MAXIMA, 10, False)

# The spatial coordinates for the centers of all detected embryos
centers = dog.getSubpixelPeaks() # in floating-point precision

# A value for the inside of an embryo: white, in 8-bit
inside = UnsignedByteType(255)
  
# A value for the outside (the background): black, in 8-bit
outside = UnsignedByteType(0)

# The radius of a simulated embryo, same as sigmaLarger was above
radius = 30 # or = sigmaLarger

# KDTree: a data structure for fast lookup of spatial coordinates
kdtree = KDTree([inside] * len(centers), centers)

# The definition of circles (or spheres, or hyperspheres) in space
class Circles(RealPoint, RealRandomAccess):
  def __init__(self, n_dimensions, kdtree, radius, inside, outside):
    super(RealPoint, self).__init__(n_dimensions)
    self.kdtree = kdtree # keep a reference so that copyRealRandomAccess can reuse it
    self.search = NearestNeighborSearchOnKDTree(kdtree)
    self.radius = radius
    self.radius_squared = radius * radius
    self.inside = inside
    self.outside = outside
  def copyRealRandomAccess(self):
    return Circles(self.numDimensions(), self.kdtree, self.radius,
		               self.inside, self.outside)
  def get(self):
    self.search.search(self)
    if self.search.getSquareDistance() < self.radius_squared:
      return self.inside 
    return self.outside

# The RealRandomAccessible that wraps the Circles in 2D space, unbounded
# NOTE: partial implementation, unneeded methods were left unimplemented
class CircleData(RealRandomAccessible):
  def realRandomAccess(self):
    return Circles(2, kdtree, radius, inside, outside)
  def numDimensions(self):
    return 2

# An unbounded view of the Circles that can be iterated in a grid, with integers
raster = Views.raster(CircleData())

# A bounded view of the raster, within the bounds of the original 'img'
# I.e. 'img' here is used as the Interval within which the CircleData is defined
circles = Views.interval(raster, img)

IL.wrap(circles, "Circles").show()
					

  Saving measurements into a CSV file, and reading them out as a PointRoi

Let's learn how to save the data to a CSV file. There are multiple ways to do so.

In the Results Table window, choose "File - Save...", which will save the table data in CSV format. Doesn't get any easier than this!

In the absence of a Results Table, we can use python's built-in csv library.

First, we define two functions to provide the data (peakData and the helper function centerAt), so that (for simplicity and clarity) we separate getting the peak data from writing the CSV. To get the peak data, we define the function peakData that does the same as was done above in a for loop: localize the peak (which writes its coordinates into the float array p) and then sum the pixels around the peak using an interval view. The helper function centerAt returns two new arrays: the bounds (minC, maxC) that delimit the region of interest, translated so that they are centered on the peak.

Then, we write the CSV file one row at a time. We open the file within python's with statement, which ensures that, even if an error was to come up, the file handle will be closed, properly releasing system resources. The csv.writer function returns an object w onto which we call writerow for every peak. Notice the arguments provided to csv.writer, defining the delimiter (a comma, a space, a tab...), the quote character (for strings), and what to quote (everything that is not a number). The first row is the header, containing the titles of each column in the CSV file. Then each data row is written by providing writerow with the list of column entries to write: x, y and s, which is the sum of pixel values within the interval around x,y.

For completeness, I am showing here how to read the CSV file back into, in this example, a PointRoi, using the complementary function csv.reader. Note that numeric values are read in as strings, and must be transformed into floating-point numbers using the built-in function float.

from __future__ import with_statement
# IMPORTANT: imports from __future__ must go at the top of the file.

#
# ... same code as above here to obtain the peaks
#

from operator import add
import csv

# The minimum and maximum coordinates, for each image dimension,
# defining an interval within which pixel values will be summed.
minC = [-sigmaSmaller for i in xrange(img.numDimensions())]
maxC = [ sigmaSmaller for i in xrange(img.numDimensions())]

def centerAt(p, minC, maxC):
  """ Translate the minC, maxC coordinate bounds to the peak. """
  return map(add, p, minC), map(add, p, maxC)

def peakData(peaks, p, minC, maxC):
  """ A generator function that returns all peaks and their pixel sum,
      one at a time. """
  for peak in peaks:
    peak.localize(p)
    minCoords, maxCoords = centerAt(p, minC, maxC)
    fov = Views.interval(img, minCoords, maxCoords)
    s = sum(t.getInteger() for t in fov)
    yield p, s

# Save as CSV file
with open('/tmp/peaks.csv', 'wb') as csvfile:
  w = csv.writer(csvfile, delimiter=',', quotechar="\"", quoting=csv.QUOTE_NONNUMERIC)
  w.writerow(['x', 'y', 'sum'])
  for p, s in peakData(peaks, p, minC, maxC):
    w.writerow([p[0], p[1], s])

# Read the CSV file into an ROI
roi = PointRoi(0, 0)
with open('/tmp/peaks.csv', 'r') as csvfile:
  reader = csv.reader(csvfile, delimiter=',', quotechar="\"")
  header = reader.next() # advance reader by one line
  for x, y, s in reader:
    roi.addPoint(imp, float(x), float(y))

imp.show()
imp.setRoi(roi)
					

Transform an image using ImgLib2.

In this example, we will use ImgLib2's RealViews namespace to transform images with affine transforms: translate, rotate, scale, shear.

Let's introduce the concept of a View in ImgLib2: it's like a shallow copy, possibly transformed. Meaning, the underlying pixel array is not duplicated; instead, a transformation of some sort is applied to the pixels on the fly, as they are requested. Views can be concatenated.

Here we use:

  • Views.extendZero: takes a finite image and returns a view that returns the proper pixel values within the image, but a pixel value of zero beyond its edges.
  • Views.interpolate: enables retrieving pixel values for fractional coordinates (i.e. non-integer coordinates) with the help of an interpolation strategy, such as the NLinearInterpolatorFactory. Returns images of the RealRandomAccessible type, suitable for transformations.
  • RealViews.transform: views an image as transformed by the provided transformation, such as an affine transform. Operates on images that are RealRandomAccessible, such as those returned by Views.interpolate.
  • Views.interval: takes an infinite image (generally an infinite View) and adds limits to it, defining specific intervals in each of its dimensions within which the image is said to be defined. This is what we use to "crop" or to select a specific field of view. If the field of view includes regions outside the originally wrapped image, then it'd better be "filled in" with a Views.extend (like Views.extendZero), or it will fail with an out of bounds exception when a user of the returned interval attempts to get pixels from such "outside" regions.

While the reasons that led to splitting the functionality into two separate namespaces (Views and RealViews) don't matter, the basic heuristic when looking up a View method is that we'll use Views when the interval is defined (that is, the image data is known to exist within a specific range between 0 and width, height, depth, etc., which is almost always), and we'll use RealViews when the interval is not defined and pixels can be retrieved with real numbers, that is, floating point numbers (such as when applying affine transforms or performing interpolations).

In the end, we call ImageJFunctions.wrap again to visualize the transformed image as a regular ImageJ's ImagePlus containing a VirtualStack whose pixel source is the scaled up View, whose pixel source, in turn, is the original ImagePlus. No data has been duplicated at any step!

from net.imglib2.realtransform import RealViews as RV
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.realtransform import Scale
from net.imglib2.view import Views
from net.imglib2.interpolation.randomaccess import NLinearInterpolatorFactory
from ij import IJ

# Load an image (of any dimensions)
imp = IJ.getImage()

# Access its pixel data as an ImgLib2 RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, with a value of zero beyond the image edges
imgE = Views.extendZero(img)

# View the pixel data as a RealRandomAccessible
# (that is, accessible with sub-pixel precision)
# by using an interpolator
imgR = Views.interpolate(imgE, NLinearInterpolatorFactory())

# Obtain a view of the 2D image twice as big
s = [2.0 for d in range(img.numDimensions())] # as many 2.0 as image dimensions
bigger = RV.transform(imgR, Scale(s))

# Define the interval we want to see: the original image, enlarged by 2X
# E.g. from 0 to 2*width, from 0 to 2*height, etc. for every dimension
minC = [0 for d in range(img.numDimensions())]
maxC = [int(img.dimension(i) * scale) for i, scale in enumerate(s)]
imgI = Views.interval(bigger, minC, maxC)

# Visualize the bigger view
imp2x = IL.wrap(imgI, imp.getTitle() + " - 2X") # an ImagePlus
imp2x.show()
          

 


At any time, use e.g. print type(imgR) to see the class of e.g. the object imgR. Then, either look it up in ImgLib2's github repositories or in Google, or, often sufficient, use print dir(imgR) to list all its accessible methods.


While the code in this example applies to images of any number of dimensions (2D, 3D, 4D) and type (8-bit, 16-bit, 32-bit, others), here we scale the boats example ImageJ image by a factor of two.


print type(imgR)
print dir(imgR)
					

<type 'net.imglib2.interpolation.Interpolant'>

['__class__', '__copy__', '__deepcopy__', '__delattr__', '__doc__',
'__ensure_finalizer__', '__eq__', '__format__', '__getattribute__',
'__hash__', '__init__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__str__', '__subclasshook__', '__unicode__',
'class', 'equals', 'getClass', 'getInterpolatorFactory', 'getSource',
'hashCode', 'interpolatorFactory', 'notify', 'notifyAll', 'numDimensions',
'realRandomAccess', 'source', 'toString', 'wait']
					

The resulting ImagePlus can be saved using ImageJ's FileSaver methods, just like any other ImageJ image.

from ij.io import FileSaver

FileSaver(imp2x).saveAsPng("/path/to/boats-2x.png")
					

Rotating image volumes with ImgLib2.

Now we continue with a rotation around the Z axis (a rotation in XY) by 30 degrees. Remember, this code applies to images of any number of dimensions: it would work equally well as is for the boats image example above.

The rotation must be defined as the values of a matrix that describes an affine transform. For convenience, I use here the java.awt.geom.AffineTransform (aliased as Affine2D) to obtain the values of the rotation transform. Then these are transferred to a JaMa Matrix, which ImgLib2's AffineTransform class takes as an argument for its constructor. The matrix has to have one more column than rows, with the last column defining the translation. (The last row would be all zeros and a 1.0 at the end, so it is omitted.) Notice that the rest of the diagonal of the matrix is filled with 1.0 in the loop, for as many dimensions as the image has.

Then we view the rotated image as an ImagePlus that wraps a VirtualStack just like above. Of course, the rotated image is cropped: when rotating relative to the center, the center stays within the field of view, but the corners disappear. Below, we instead view an enlarged interval that fully contains the rotated image. (In this particular example the effect is not very visible because the MRI stack of a human head has black corners. To reveal the issue, I draw a white line along the borders beforehand by pushing 'a' to select all with a rectangular ROI, then choosing white color for the foreground color, and then pushing 'd' to draw it, confirming the dialog to draw in every section.)

 

from net.imglib2.realtransform import RealViews as RV
from net.imglib2.realtransform import AffineTransform
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from ij import IJ
from net.imglib2.view import Views
from net.imglib2.interpolation.randomaccess import NLinearInterpolatorFactory
from java.awt.geom import AffineTransform as Affine2D
from java.awt import Rectangle
from Jama import Matrix
from math import radians

# Load an image (of any dimensions)
imp = IJ.getImage()

# Access its pixel data as an ImgLib2 RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, with value zero beyond the image edges
imgE = Views.extendZero(img)

# View the pixel data as a RealRandomAccessible
# (that is, accessible with sub-pixel precision)
# by using an interpolator
imgR = Views.interpolate(imgE, NLinearInterpolatorFactory())

# Define a rotation by +30 degrees relative to the image center in the XY axes
# (not explicitly XY but the first two dimensions)
# by filling in a rotation matrix with values taken
# from a java.awt.geom.AffineTransform (aliased as Affine2D)
# and by filling in the rest of the diagonal with 1.0
# (for the case where the image has more than 2 dimensions)
angle = radians(30)
rot2d = Affine2D.getRotateInstance(
  angle, img.dimension(0) / 2, img.dimension(1) / 2)
ndims = img.numDimensions()
matrix = Matrix(ndims, ndims + 1)
matrix.set(0, 0, rot2d.getScaleX())
matrix.set(0, 1, rot2d.getShearX())
matrix.set(0, ndims, rot2d.getTranslateX())
matrix.set(1, 0, rot2d.getShearY())
matrix.set(1, 1, rot2d.getScaleY())
matrix.set(1, ndims, rot2d.getTranslateY())
for i in range(2, img.numDimensions()):
  matrix.set(i, i, 1.0)

print matrix.getArray()

# Define a rotated view of the image
rotated = RV.transform(imgR, AffineTransform(matrix))

# View the image rotated, without enlarging the canvas
# so we define the interval as the original image dimensions.
# (Notice the -1 on the max coordinate: the interval is inclusive)
minC = [0 for i in range(img.numDimensions())]
maxC = [img.dimension(i) -1 for i in range(img.numDimensions())]
imgRot2d = IL.wrap(Views.interval(rotated, minC, maxC),
  imp.getTitle() + " - rot2d")
imgRot2d.show()

# View the image rotated, enlarging the interval to fit it.
# (This is akin to enlarging the canvas.)
# We compute the bounds of the enlarged canvas by transforming a rectangle,
# then define the interval min and max coordinates by subtracting
# and adding as appropriate to exactly capture the complete rotated image.
# Notice the min coordinates have negative values, as the rotated image
# has pixels now somewhere to the left and up from the top-left 0,0 origin
# of coordinates.
bounds = rot2d.createTransformedShape(
  Rectangle(img.dimension(0), img.dimension(1))).getBounds()
minC[0] = (img.dimension(0) - bounds.width) / 2
minC[1] = (img.dimension(1) - bounds.height) / 2
maxC[0] += abs(minC[0]) -1 # -1 because it's inclusive
maxC[1] += abs(minC[1]) -1
imgRot2dFit = IL.wrap(Views.interval(rotated, minC, maxC),
  imp.getTitle() + " - rot2dFit")
imgRot2dFit.show()
					

To read out the values of the transformation matrix that specifies the rotation, print it: it's an array of arrays. Or pretty-print it with pprint, which requires turning the inner arrays into lists for nicer printing.

Given the desired 30 degree rotation, the "scale" part (the diagonal) becomes the cosine of 30 degrees (sqrt(3)/2 = 0.866), and the "shear" part (the second column of the first row, and the first column of the second row) becomes the sine of 30 degrees (0.5) with the appropriate sign (to the "left" for X, hence negative; and to the "right" for Y, hence positive). The third column contains the translation values corresponding to a rotation specified relative to the center of the image. While you could always write in the matrix by hand, it is better to use libraries like, for 2D, the java.awt.geom.AffineTransform and its methods such as getRotateInstance. For 3D rotations and affine transformations in general, use e.g. javax.media.j3d.Transform3D and e.g. its method rotZ, which sets the transform to mean a rotation in the Z axis, as done in this example.


print matrix.getArray()

from pprint import pprint
pprint([list(row) for row in matrix.getArray()])
					

array([D, [array('d', [0.8660254037844387, -0.49999999999999994, 0.0,
68.95963744804719]), array('d', [0.49999999999999994, 0.8660254037844387,
0.0, -31.360870627641567]), array('d', [0.0, 0.0, 1.0, 0.0])])

[[0.8660254037844387,  -0.49999999999999994, 0.0,  68.95963744804719],
 [0.49999999999999994,  0.8660254037844387,  0.0, -31.360870627641567],
 [0.0,                  0.0,                 1.0,  0.0]]
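
For 3D rotations, as mentioned, a Transform3D can generate the matrix values. A minimal sketch (the 30-degree angle is just an example; depending on your Fiji version the class lives in javax.media.j3d or org.scijava.java3d; note we read the values into a native double array, since jython lists are passed to java as read-only):

from javax.media.j3d import Transform3D
from jarray import zeros
from math import radians

t = Transform3D()
t.rotZ(radians(30)) # set the transform to a rotation around the Z axis

values = zeros(16, 'd') # a native double[16] to receive the 4x4 matrix, row-major
t.get(values)
print list(values)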
					

Processing RGB and ARGB images with ImgLib2.

An ARGB image is a hack: the four color channels have been stored each in one of the 4 bytes of a 32-bit integer. Directly processing the pixel array, which is made of integers, makes no sense at all. Prior to any processing, the color channels must be separated.

For reference, the alpha channel is in the upper byte (index 0), the red in the 2nd (index 1), the green in the 3rd (index 2) and blue in the lowest byte, the 4th (index 3).
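
In other words, for a single 32-bit pixel value the channels can be unpacked with bit shifts, as in this plain-python sketch (the pixel value is made up):

argb = 0x80ff6420 # a hypothetical ARGB pixel value

alpha = (argb >> 24) & 0xff # 128
red   = (argb >> 16) & 0xff # 255
green = (argb >>  8) & 0xff # 100
blue  =  argb        & 0xff # 32

print alpha, red, green, blue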

In ImgLib2, rather than copying a color channel into a new image with a new array of bytes, we acquire a View of its channels: by using the Converters functions, optionally together with the Views.hyperSlice functionality.

First, we load an RGB or ARGB image and wrap it as an ImgLib2 object (despite what IL.wrapRGBA seems to imply, the alpha channel is still at index 0). If the ImagePlus is not backed by a ColorProcessor, it will throw an error.

Then we invoke one of the several functions in the Converters namespace that handles ARGB images. Here, we use Converters.argbChannels, which delivers a view of the ARGB image as a stack of 4 images, one per channel. The channels image is equivalent to ImageJ's CompositeImage, in that each channel can be processed independently.

To read out a single channel, e.g. the red channel (index 1), we could use Converters.argbChannel(img, 1). Or, as we illustrate here, use Views.hyperSlice: a function to reduce the dimensionality of an image, in this case by fixing the last dimension (the channels) to always be the red channel (at index 1).

Of course, this code runs on 2D images (e.g. the leaf) or 3D images (e.g. the Drosophila larval brain LSM stack), or 4D images, or images of any dimensions.

from net.imglib2.converter import Converters
from net.imglib2.view import Views
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from ij import IJ

# Load an RGB or ARGB image
imp = IJ.getImage()

# Access its pixel data from an ImgLib2 data structure:
# a RandomAccessibleInterval
img = IL.wrapRGBA(imp)

# Convert an ARGB image to a stack of 4 channels: a RandomAccessibleInterval
# with one more dimension than before.
# The order of channels in the stack can be changed by changing their indices.
channels = Converters.argbChannels(img, [0, 1, 2, 3])

# Equivalent to ImageJ's CompositeImage: channels are separate
impChannels = IL.wrap(channels, imp.getTitle() + " channels")
impChannels.show()

# Read out a single channel directly
red = Converters.argbChannel(img, 1)

# Alternatively, pick a view of the red channel in the channels stack.
# Takes the last dimension, which are the channels,
# and fixes it to just one: that of the red channel (1) in the stack.
red = Views.hyperSlice(channels, channels.numDimensions() -1, 1)

impRed = IL.wrap(red, imp.getTitle() + " red channel")
impRed.show()
					

  

  

 
 

Analysis of 4D volumes with ImgLib2: GCaMP imaging data

In neuroscience, we can observe the activity of neurons in a circuit by expressing, for example, a calcium sensor in every neuron of interest, generally using viruses as delivery vectors for mammals, birds and reptiles, or genetic constructs for the fruit fly Drosophila, the nematode C. elegans and zebrafish. From the many options available, we'll use data here from those called genetically encoded calcium indicators (GECI), the most widely used being GCaMP.

Here is a copy of the first 10 time points to try out the scripts below. For testing, I used only the first two of these ten.

GCaMP time series data comes in many forms. Here, I am using a series of 3D volumes, each volume saved as a single, separate file, representing a single time point of the neuronal activity data. These files were acquired by Nadine Randel and Rahghav Chhetri with the IsoView microscope by the Keller lab at HHMI Janelia. The file format, KLB, is a compressed open source format for which a library exists (klb-bdv) in Fiji: enable the "SiMView" update site in the "Help - Update Fiji" settings.

Opening KLB-formatted stacks is easy: we merely use the KLB library. Given that the library is optional, I wrapped the import in a try statement to warn the user about its absence, should it be missing.

Each KLB stack file is compressed, so its size on disk can be misleading: a single 40 MB file may unpack into a 180 MB stack in computer memory. The decompression process is also costly. Therefore, we need a way to minimize the number of times we load each stack. To this end, I define a cache strategy known as memoization: the result of invoking a function with a specific set of arguments is stored, and if the function is called again with the same arguments, the stored result is returned right away. To prevent filling up all RAM, we can define a maximum number of items to store using the keyword argument maxsize, which defaults here to 30 but which we set to 10. Which ones should be thrown out first? The specific implementation here is an LRU cache: the least recently used item is thrown out first.

Note the make_synchronized function decoration on the Memoize.__call__ method: when multiple threads access the cache they will have to wait on each other (i.e., synchronize their access to the method), to avoid simultaneously loading multiple times an individual volume that isn't yet cached. This is crucial not just for cache correctness, but also for good performance of e.g. the BigDataViewer, which uses multiple threads for rendering images. (Read more about thread concurrency and synchronization in jython.)

Note that in python 3 (not available from java so far), we could merely decorate the openStack function with functools.lru_cache, sparing us from having to create our own LRU cache.
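
For illustration only, this is roughly what it would look like in Python 3 (it won't run in Fiji's Jython, and it assumes the klb reader object created in the script below):

from functools import lru_cache

@lru_cache(maxsize=10)
def openStack(filepath):
  return klb.readFull(filepath)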

  Representing the whole 4D series as an ImgLib2 image

To ease processing the 4D series we take full advantage of the ImgLib2 library capabilities to abstract over data sources and type, and represent the whole data set as a single image, vol4d. We accomplish this feat by using a LazyCellImg: an Img type (a fancy RandomAccessibleInterval) that enables us to only load what we access, while still pretending to be handling the whole 4D data set.

First, we define the dimensions of the data by reading the first stack. We assume that all stacks--one per time point--have the same spatial dimensions.

Then, we define the dimensions of vol4d: same as those of each time point, plus the 4th axis representing time.

Then, we define the CellGrid: we specify how each piece (each independent stack) fits into the overall continuous volume (the whole 4D series). Basically, the grid here is a simple linear arrangement--in time--of individual 3D stacks.

Then we define how each Cell of the grid is loaded: each cell is merely a single 3D stack for a single timepoint. We do so with the class TimePointGet, which implements the Get interface defined inside the LazyCellImg class.

The method TimePointGet.get loads the stack--via our memoized function--and wraps it in a Cell object, with the latter having 4 dimensions: the 3 of the volume plus time, with the time dimension being of size 1: each stack represents a single time point in the 4D series.

Finally we define vol4d as a LazyCellImg, taking as argument the grid, the type (in this case, an implicit UnsignedShortType since KLB data is in 16-bit), and the cell loader.

Note that via the Converters we could load e.g. 16-bit data (like the KLB stacks) and convert it on the fly (without ever copying it in RAM) to another type, such as unsigned byte, unsigned int, unsigned long, float or double, as made possible by the corresponding types available in ImgLib2 (see the short sketch after the code below).

 
 

from net.imglib2.img.cell import LazyCellImg, CellGrid, Cell
from net.imglib2.util import Intervals, IntervalIndexer
from net.imagej import ImgPlus
import os, sys
from collections import OrderedDict
from synchronize import make_synchronized

# Attempt to load the KLB library
# Must have enabled the "SiMView" update site from the Keller lab at Janelia
try:
  from org.janelia.simview.klb import KLB 
  klb = KLB.newInstance()
except:
  print "Could not import KLB file format reader."
  klb = None

# Source directory containing a list of files, one per stack
src_dir = "/home/albert/lab/scripts/data/4D-series/"
# You could also use a dialog to choose the directory
#from ij.io import DirectoryChooser
#dc = DirectoryChooser("Pick source directory")
#src_dir = dc.getDirectory()

# Each timepoint is a path to a 3D stack file
timepoint_paths = sorted(os.path.join(src_dir, name)
                         for name in os.listdir(src_dir)
                         if name.endswith(".klb"))

# Automatic LRU cache with limited storage:
class Memoize:
  def __init__(self, fn, maxsize=30):
    self.fn = fn           # The function to execute
    self.m = OrderedDict() # The cache
    self.maxsize = maxsize # The maximum number of items in the cache
  @make_synchronized
  def __call__(self, key):
    o = self.m.get(key, None)
    if o:
      self.m.pop(key) # Remove
    else:
      o = self.fn(key) # Invoke the memoized function
    self.m[key] = o # Store, as the last (newest) entry
    if len(self.m) > self.maxsize: # Trim cache
      self.m.popitem(last=False) # Remove first entry (the oldest)
    return o

# Function to open a single stack
def openStack(filepath):
  return klb.readFull(filepath)
  # If your files are e.g. TIFF stacks, use instead:
  #return IJ.openImage(filepath)

# Memoize the stack loading function for efficiency
getStack = Memoize(openStack, maxsize=10)

# Helper function to find the e.g. ShortArray object
# that wraps the short[] array containing the pixels.
# Each KLB file is opened as an ImgPlus, which wraps
# in this case an ArrayImg from which we obtain its ShortArray
def extractDataAccess(img, dimensions):
  if isinstance(img, ImgPlus):
    return extractDataAccess(img.getImg(), dimensions)
  try:
    return img.update(None) # a ShortArray holding pixel data for an ArrayImg
  except:
    print sys.exc_info()

# Open the first time point stack to read out the dimensions
# (It gets cached, so it is not time wasted)
first = getStack(timepoint_paths[0])

# One cell per time point
dimensions = [1 * first.dimension(0),
              1 * first.dimension(1),
              1 * first.dimension(2),
              len(timepoint_paths)]

cell_dimensions = dimensions[0:3] + [1]

# The grid: how each independent stack fits into the whole continuous volume
grid = CellGrid(dimensions, cell_dimensions)

# A class to retrieve each time point
# Each returned Cell has 4 dimensions: the 3 for the volume, plus time
class TimePointGet(LazyCellImg.Get):
  def __init__(self, timepoint_paths, cell_dimensions):
    self.timepoint_paths = timepoint_paths
    self.cell_dimensions = cell_dimensions
  def get(self, index):
    img = getStack(self.timepoint_paths[index])
    return Cell(self.cell_dimensions,
                [0, 0, 0, index],
                extractDataAccess(img, self.cell_dimensions))

vol4d = LazyCellImg(grid,
                    first.randomAccess().get().createVariable(),
                    TimePointGet(timepoint_paths, cell_dimensions))
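
As noted above, Converters can present vol4d as another pixel type on the fly, without ever copying the data. A minimal sketch (assuming the net.imglib2.converter.RealFloatConverter class, which converts any RealType value to FloatType):

from net.imglib2.converter import Converters, RealFloatConverter
from net.imglib2.type.numeric.real import FloatType

# View the 16-bit vol4d as 32-bit float, converted lazily, pixel by pixel
vol4d_float = Converters.convert(vol4d, RealFloatConverter(), FloatType())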
					

 
 

Intermezzo: a better LRU cache, with soft references

Above, we memoized the loading of image volumes from disk as a way to avoid doing so repeatedly. Loaded images were stored in an OrderedDict, from which we could tell which images had been accessed least recently (by removing and reinserting images anytime we accessed them), and get rid of the eldest when the maximum number of images was reached. This approach has its drawbacks: we must know beforehand the maximum number of images we want to store, and if memory is used up, we may incur an OutOfMemoryError. We can do much better.

First, to ease the management of least accessed images, we'll use a LinkedHashMap data structure, which is a dictionary that "maintains a doubly-linked list running through all its entries." So it can be iterated predictably and knows which entries were added when relative to the others, just like the OrderedDict that we used before. The advantage is that we can tell its constructor to keep this linked list ordered by when entries were accessed rather than when they were added, which is great for an LRU cache (LRU means "Least Recently Used"), and furthermore, it offers the method removeEldestEntry to, upon inserting an entry, also remove the entry that was accessed least recently when e.g. there are more than a specified number of entries.

Second, we overcome the two problems of (1) having to define a maximum number of entries and (2) not knowing how much memory we can use, by storing each image wrapped in a SoftReference. Any images not referred to anywhere else in our program will be available for the automatic java garbage collector to remove, clearing up memory for other uses. When that happens, accessing the entry in the cache will return an empty reference, and then we merely reload the image and store it again. Despite this safety mechanism, it is still sensible to define a maximum number of images to attempt to store; but this time our LRU cache is not committed to keeping them around.

 

from java.util import LinkedHashMap, Collections
from java.lang.ref import SoftReference
from synchronize import make_synchronized

class LRUCache(LinkedHashMap):
  def __init__(self, max_entries):
    # initialCapacity 16 (the default)
    # loadFactor 0.75 (the default)
    # accessOrder True (default is False)
    super(LinkedHashMap, self).__init__(16, 0.75, True)
    self.max_entries = max_entries
  def removeEldestEntry(self, eldest):
    if self.size() > self.max_entries:
      return True

class SoftMemoize:
  def __init__(self, fn, maxsize=30):
    self.fn = fn
    # Synchronize map to ensure correctness in a multi-threaded application:
    # (I.e. concurrent threads will wait on each other to access the cache)
    self.m = Collections.synchronizedMap(LRUCache(maxsize))
  @make_synchronized
  def __call__(self, key):
    softref = self.m.get(key, None)
    o = softref.get() if softref else None
    if o:
      return o
    else:
      # Either not present, or garbage collector discarded it
      # Invoke the memoized function
      o = self.fn(key)
      # Store return value wrapped in a SoftReference
      self.m.put(key, SoftReference(o))
      return o

openStack = ... # defined above
  
# Memoize the stack loading function for efficiency
getStack = SoftMemoize(openStack, maxsize=10)
					

  Visualizing the whole 4D series

With the vol4d variable now describing our entire 4D data set, we proceed to visualize it. There are multiple ways to do so.

  1. Trivially as an ImageJ CompositeImage containing a VirtualStack, using the ImageJFunctions.wrap method.  
     
     
  2. By creating both the CompositeImage and VirtualStack by hand, which affords more flexibility: we could, if we wanted, change the pixel type, or preprocess the data in any way, even changing the dimensions by cropping or enlarging each slice. We could also insert slices as desired to e.g. represent two channels: the data and a segmentation--more on this below.
     
    To accomplish this, we require a fast way to copy pixels from a hyperslice of the 4D volume (obtained via Views.hyperSlice, twice: once for time, once for Z, to extract a 2D slice from a 4D volume) into an ImageJ ShortProcessor. I am using here a trick that shouldn't be used much: embedding java code inside a python script, just for the tight loop. This is done via the Weaver.method static function to define a java method that copies the data from one Cursor (of the hyperslice) to another (of an ArrayImg created via the convenient ArrayImgs.unsignedShorts method).
     
    The Weaver.method is currently limited to a compiler without java generics, so the code is less idiomatic than it should be. Despite the ugly type casts, at runtime these are erased and therefore the code will perform just as fast as more modern java code with generics.
     
    In addition, in order to run a python script that inlines java code, you will need the java compiler library in the java class path. This is accomplished by e.g. launching Fiji with the tools.jar in the classpath, like this (adjusting for the location of the tools.jar in your system):
    $ ./ImageJ-linux64 --class-path \
    /usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar
     
  3. By using the BigDataViewer framework, which opens the 4D volume as a window with a slider for time at the bottom, and each volume at each time point can be resliced arbitrarily. Note that you'll have to adjust the brightness and contrast through its own menus, as the default is way off from the 16-bit range of the data.
    This framework, particularly via the simple interface offered by the BdvFunctions class of the bigdataviewer-vistools library (a replacement for the ImageJFunctions library), enables us to e.g. add additional volumes overlaid on the 4D data, and more (see below).

# Visualization option 1:
# An automatically created 4D VirtualStack

from net.imglib2.img.display.imagej import ImageJFunctions as IL

IL.wrap(vol4d, "Volume 4D").show()


# Visualization option 2:
# Create a 4D VirtualStack manually

from net.imglib2.view import Views
from net.imglib2.img.array import ArrayImgs
from net.imglib2.img.basictypeaccess import ShortAccess
from ij import VirtualStack, ImagePlus, CompositeImage
from jarray import zeros, array
from ij.process import ShortProcessor
from fiji.scripting import Weaver
from net.imglib2 import Cursor
from net.imglib2.type.numeric.integer import UnsignedShortType

# Need a fast way to copy pixel-wise
w = Weaver.method("""
  static public final void copy(final Cursor src, final Cursor tgt) {
    while (src.hasNext()) {
      src.fwd();
      tgt.fwd();
      final UnsignedShortType t1 = (UnsignedShortType) src.get(),
                              t2 = (UnsignedShortType) tgt.get();
      t2.set(t1.get());
    }
  }
""", [Cursor, UnsignedShortType])

class Stack4D(VirtualStack):
  def __init__(self, img4d):
    super(VirtualStack, self).__init__(img4d.dimension(0), img4d.dimension(1),
                                       img4d.dimension(2) * img4d.dimension(3))
    self.img4d = img4d
    self.dimensions = array([img4d.dimension(0), img4d.dimension(1)], 'l')
    
  def getPixels(self, n):
    # 'n' is 1-based
    # Obtain a 2D slice from the 4D volume
    aimg = ArrayImgs.unsignedShorts(self.dimensions[0:2])
    nZ = self.img4d.dimension(2)
    fixedT = Views.hyperSlice(self.img4d, 3, int((n-1) / nZ)) # Z blocks
    fixedZ = Views.hyperSlice(fixedT, 2, (n-1) % nZ)
    w.copy(fixedZ.cursor(), aimg.cursor())
    return aimg.update(None).getCurrentStorageArray()
    
  def getProcessor(self, n):
    return ShortProcessor(self.dimensions[0], self.dimensions[1],
                          self.getPixels(n), None)

imp = ImagePlus("vol4d", Stack4D(vol4d))
nChannels = 1
nSlices = first.dimension(2)
nFrames = len(timepoint_paths)
imp.setDimensions(nChannels, nSlices, nFrames)

com = CompositeImage(imp, CompositeImage.GRAYSCALE)
com.show()



# Visualization option 3: BigDataViewer

from bdv.util import BdvFunctions

bdv = BdvFunctions.show(vol4d, "vol4d")

					

 

 

  Nuclei detection with difference of Gaussian

Once the 4D volume is loaded, we proceed to detect nuclei at every timepoint, with each timepoint being a 3D volume. For this, we'll use again the difference of Gaussian with the DogDetection class.

Critical to the success of the difference of Gaussian approach to nuclei detection is choosing good values for the parameters sigmaLarger, sigmaSmaller, and minPeakValue.

The difference of Gaussian approach works quite well when the data resembles circles or spheres, even when these are in contact, as long as they all have approximately the same dimensions. This is the case here, since neuron nuclei in Drosophila larvae are all about 5 micrometers in diameter. Here, the image volumes are uncalibrated (value of 1.0 for pixel width, pixel height, pixel depth), and I chose 5 pixels (half the diameter of the average-looking nucleus) as the sigmaLarger. In practice, half the diameter works better than the full diameter, as nuclei are not perfect spheres and I suspect that capturing the difference of Gaussian from a range entirely enclosed within the boundaries of a nucleus works best. For sigmaSmaller I chose half the value of sigmaLarger, which works well in practice. The operation consists of subtracting a flatter Gaussian from a sharper one, narrowing any occurring intensity peak (see wikipedia).

The minPeakValue parameter is used to set a threshold for considering a peak as valid. To estimate a good value for minPeakValue, try one of these two approaches:

  1. Manually: adjust the display brightness and contrast, duplicate a slice twice, and apply a Gaussian with sigmaSmaller to one copy and with sigmaLarger to the other. Then use Fiji/ImageJ's "Process - Image Calculator..." to subtract the smaller from the larger, choosing to create a new image in 32-bit depth. Move the mouse over nuclei-looking whitish blobs, and see what the largest value there is (it is printed under the Fiji/ImageJ toolbar as the mouse moves over the image). Do so for a few nuclei, particularly for nuclei that appear as large as nuclei can be (not just partially appearing in the optical section that the stack slice represents), and you'll get a sensible estimate for minPeakValue.
  2. Automatically: run the createDoG function for a range of values, and store the number of peaks found for each. If you plot the number of peaks as a function of minPeakValue, you'll see an inflexion point near the good value, with the number of detected peaks not changing much or at all over a range of contiguous minPeakValue values; that plateau suggests a good numeric value for minPeakValue. This can be computationally quite expensive, but it is worth it (a rough starting sketch appears after the code below).

In this example, I manually chose a minPeakValue. Estimating how wrong I was, using the automatic method, is left as an exercise for the reader.

 

from net.imglib2.algorithm.dog import DogDetection
from net.imglib2.view import Views
from collections import defaultdict

vol4d = ... # NOTE, this variable was created above and represents the 4D series

# Parameters for a Difference of Gaussian to detect nuclei positions
calibration = [1.0 for i in range(vol4d.numDimensions())] # no calibration: identity
sigmaSmaller = 2.5 # in pixels: a quarter of the diameter of a neuron nucleus
sigmaLarger = 5.0  # pixels: half the diameter of a neuron nucleus
minPeakValue = 100

# A function to create a DogDetection from an img
def createDoG(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue):
  # Fixed parameters
  extremaType = DogDetection.ExtremaType.MAXIMA
  normalizedMinPeakValue = False
  # Infinite img
  imgE = Views.extendMirrorSingle(img)
  # In the difference of Gaussian peak detection, the img acts as the interval
  # within which to look for peaks. The processing is done on the infinite imgE.
  return DogDetection(imgE, img, calibration, sigmaLarger, sigmaSmaller,
    extremaType, minPeakValue, normalizedMinPeakValue)

def getDoGPeaks(timepoint_index, print_count=True):
  # From the cache
  img = getStack(timepoint_paths[timepoint_index])
  # or from the vol4d (same thing)
  #img = Views.hyperSlice(vol4d, 3, timepoint_index)
  dog = createDoG(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue)
  peaks = dog.getSubpixelPeaks() # could also use getPeaks() in integer precision
  if print_count:
    print "Found", len(peaks), "peaks in timepoint", timepoint_index
  return peaks

# Map of timepoint indices and lists of DoG peaks in timepoint-local 3D coordinates
nuclei_detections = {ti: getDoGPeaks(ti) for ti in xrange(vol4d.dimension(3))}
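
As a starting point for the automatic approach described in point 2 above, here is a rough sketch (the candidate range is an illustrative assumption) that reuses createDoG and the cached getStack defined in this section:

# Count DoG peaks for a series of candidate minPeakValue values,
# using only the first time point to keep the computation affordable
img = getStack(timepoint_paths[0])
counts = []
for candidate in range(20, 220, 20): # hypothetical candidates: adjust to your data
  dog = createDoG(img, calibration, sigmaSmaller, sigmaLarger, candidate)
  counts.append((candidate, len(dog.getSubpixelPeaks())))
  print "minPeakValue", candidate, ":", counts[-1][1], "peaks"
# Plotting 'counts' should reveal a plateau near a sensible minPeakValue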
					

  Visualizing detected peaks (nuclei) with a dynamically adjusted PointRoi

It is necessary to check how well we did in detecting nuclei. When there are thousands, this task can be onerous. For a quick look, we could display every detection as a point in a PointRoi. Here is how.

We define the class PointRoiRefresher which implements the ImageListener interface. Then we instantiate it with the nuclei_detections dictionary, and add it as an ImagePlus listener (using the Observer pattern [wikipedia]) via the static method ImagePlus.addImageListener.

In the PointRoiRefresher constructor, we loop through every timepoint and peaks pair via the dictionary iteritems method. Then, for every peak found in each timepoint 3D volume, we store its 2D coordinates in an array for the corresponding slice in the overall vol4d volume, which is stored in the self.nuclei dictionary. Note how we use the zOffset to account for the slices in vol4d from the volumes of prior timepoints.

Notice we use a defaultdict for self.nuclei: useful to avoid the nuisance of having to check whether a list of points has already been created and inserted into the dictionary for a specific slice index. With defaultdict, upon requesting the value for a key that doesn't yet exist, the key/value pair is inserted with a new instance of the default value (here, an empty list), which is also returned and can be used right away. Any other default value (other than list) is possible for defaultdict.
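
For example, as a toy illustration unrelated to the script below:

from collections import defaultdict

slices = defaultdict(list)
slices[7].append((10.0, 20.0))  # key 7 didn't exist: an empty list is created, then appended to
print slices[7]   # prints: [(10.0, 20.0)]
print slices[99]  # merely accessing a missing key inserts and returns an empty list: prints []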

Importantly, note that we remove the listener when the image is closed, as it wouldn't make sense to keep it around in computer memory.

Note the pass keyword for the imageOpened method: it means that there isn't a body for this function, i.e. it does nothing.

Now, whenever you browse the Z axis, a new PointRoi is set on the image window of the vol4d, showing the detected nuclei. Neat! Of course, nuclei span multiple sections, so you'll have to scroll back and forth to make sure that an apparently undetected nucleus wasn't in fact detected in another stack slice.

# Visualization 1: with a PointRoi for every vol4d stack slice,
#                  automatically updated when browsing through slices.

from ij import IJ, ImageListener, ImagePlus
from ij.gui import PointRoi
from java.awt import Color
from collections import defaultdict
from jarray import zeros
import sys

# Variables created above: (paste this code under the script above)

vol4d = ... # NOTE, this variable was created above and represents the 4D series

com = ... # CompositeImage holding the vol4d
          # It's an ImagePlus, so you can get it again with IJ.getImage()

nuclei_detections = ... # dictionary of timepoint indices vs DoG peaks

# Create a listener that, on slice change, updates the ROI
class PointRoiRefresher(ImageListener):
  def __init__(self, imp, nuclei_detections):
    self.imp = imp
    # A map of slice indices and 2D points, over the whole 4d volume
    self.nuclei = defaultdict(list)  # Any query returns at least an empty list
    p = zeros(3, 'f')
    for ti, peaks in nuclei_detections.iteritems():
      # Slice index offset, 0-based, for the whole timepoint 3D volume
      zOffset = ti * vol4d.dimension(2)
      for peak in peaks: # peaks are float arrays of length 3
        peak.localize(p)
        self.nuclei[zOffset + int(p[2])].append(p[0:2])
  def imageOpened(self, imp):
    pass
  def imageClosed(self, imp):
    if imp == self.imp:
      imp.removeImageListener(self)
  def imageUpdated(self, imp):
    if imp == self.imp:
      self.updatePointRoi()
  def updatePointRoi(self):
    # Surround with try/except to prevent blocking
    #   ImageJ's stack slice updater thread in case of error.
    try:
      # Update PointRoi
      self.imp.killRoi()
      points = self.nuclei[self.imp.getSlice() -1] # map 1-based slices
                                                   # to 0-based nuclei Z coords
      if 0 == len(points):
        IJ.log("No points for slice " + str(self.imp.getSlice()))
        return
      # New empty PointRoi for the current slice
      roi = PointRoi()
      # Style: large, red dots
      roi.setSize(4) # ranges 1-4
      roi.setPointType(2) # 2 is a dot (filled circle)
      roi.setFillColor(Color.red)
      roi.setStrokeColor(Color.red)
      # Add points:
      for point in points: # points are floats
        roi.addPoint(self.imp, int(point[0]), int(point[1]))
      self.imp.setRoi(roi)
    except:
      IJ.error(str(sys.exc_info()))

listener = PointRoiRefresher(com, nuclei_detections)
ImagePlus.addImageListener(listener)
					

 

  Visualizing detected peaks (nuclei) with 3D spheres

Manually checking whether all nuclei were detected is very time consuming, and error prone. Instead, we could render a 3D volume with a black background where spheres are painted white, with the average radius of the nuclei, at the coordinates of the detected nuclei. We will use these generated spheres as a second channel (e.g. red), visualizing overlaps in yellow. While this method is not foolproof either, the existence of spheres which paint beyond the single Z section where the center is placed makes it harder to miss cells when visually inspecting an area. It's also better for spotting false detections.

The second channel with the spheres consists of on-the-fly generated images, based on a KDTree: a data structure for fast lookup of spatial coordinates (see example above for explanations). While we could render them into 3D array-based volumes, the tradeoff is one of memory usage for CPU processing. Individual 2D slices are comparatively tiny, and very fast to compute using the kdtrees.

First we convert the nuclei_detections into kdtrees for fast lookup. Notice that we provide the nuclei spatial coordinates themselves (the peaks) as both values and coordinates; that's because here we don't care about the value returned at a peak location: we'll be doing distance queries, returning the inside value for any location within a given radius of a peak.

Then we define a color for inside (white) and outside (black) of the spheres, placing a sphere at the spatial coordinate of each putative nuclei detection.

Then we define the class Spheres and SpheresData, which are almost identical to the classes Circles and CircleData used in the generative image example above. In brief, these two classes define the way by which data in space is generated: when within a radius of a nuclei detection, paint white (i.e. return an inside value), otherwise paint black (i.e. return an outside value).

Notice the use of the asterisk * for capturing multiple parameters into the args list in SpheresData. It's a shortcut that the python language allows us: here it is useful for invoking the Spheres constructor later on, unpacking all of the captured arguments with the * again.
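
A tiny, standalone illustration of the * packing and unpacking (all names here are made up):

def capture(*args):     # packs any positional arguments into the tuple 'args'
  return args

def add3(a, b, c):
  return a + b + c

packed = capture(1, 2, 3)
print add3(*packed)  # the * unpacks the tuple back into three arguments: prints 6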

Finally, we define the virtual stack with the class Stack4DTwoChannels, which, at its getPixels method, divides the n (slice index) by two (there are two color channels, so twice as many virtual slices), and also tests for whether the requested n slice is even or odd, returning either an image (a 2D hyperslice of vol4d) or a rasterized and bounded 2D slice of the spheres that describe the nearby nuclei detections.

We then show the 2-channel 4D volume as a CompositeImage. The main advantage over the prior visualization with PointRoi is that the spheres span more than a single section, making it easier to visually evaluate whether what we think are active neuronal cell nuclei are all being detected. The blending of red with green colors results in yellow nuclei, leaving false positive detections as green only, and false negatives as red. Now it is evident that, at the bottom, there are several false detections in what looks like a bunch of bundled axons of a nerve; the PointRoi wasn't even showing them within the shown volume.

Compare the 2-channel version (left) with the PointRoi version (right):

 

Of course, the left panel being a CompositeImage, now you can open the "Image - Color - Channels tool..." and change the LUT (look up table) of each channel, so that instead of red and green, you set e.g. cyan and orange, which also result in great contrast. To do so, move the 'C' slider (either left or right: only two channels) and then push the "More" button of the "Channels Tool" dialog to select a different LUT.

# Visualization 2: with a 2nd channel where each detection is painted as a sphere

from net.imglib2 import KDTree, RealPoint, RealRandomAccess
from net.imglib2 import RealRandomAccessible, FinalInterval
from net.imglib2.neighborsearch import NearestNeighborSearchOnKDTree
from net.imglib2.type.numeric.integer import UnsignedShortType

# Variables from above

vol4d = ... # NOTE, this variable was created above and represents the 4D series

nuclei_detections = ... # dictionary of timepoint indices vs DoG peaks

w = ... # A Weaver object with a w.copy method (see above)

# A KDTree is a data structure for fast lookup of e.g. nearest spatial coordinates
# Here, we create a KDTree for each timepoint 3D volume
# ('i' is the timepoint index)
kdtrees = {i: KDTree(peaks, peaks) for i, peaks in nuclei_detections.iteritems()}

radius = 5.0 # pixels

inside = UnsignedShortType(255) # 'white'
outside = UnsignedShortType(0)  # 'black'

# The definition of one sphere in 3D space for every nuclei detection
class Spheres(RealPoint, RealRandomAccess):
  def __init__(self, kdtree, radius, inside, outside):
    super(RealPoint, self).__init__(3) # 3-dimensional
    self.kdtree = kdtree # keep a reference, needed by copyRealRandomAccess
    self.search = NearestNeighborSearchOnKDTree(kdtree)
    self.radius = radius
    self.radius_squared = radius * radius # optimization for the search
    self.inside = inside
    self.outside = outside
  def copyRealRandomAccess(self):
    return Spheres(self.kdtree, self.radius, self.inside, self.outside)
  def get(self):
    self.search.search(self)
    if self.search.getSquareDistance() < self.radius_squared:
      return self.inside
    return self.outside

# The RealRandomAccessible that wraps the Spheres, unbounded
# NOTE: partial implementation, unneeded methods were left unimplemented
# NOTE: args are "kdtree, radius, inside, outside", using the * shortcut
#       given that this class is merely a wrapper for the Spheres class
class SpheresData(RealRandomAccessible):
  def __init__(self, *args): # captures all other arguments into args list
    self.args = args
  def realRandomAccess(self):
    return Spheres(*self.args) # Arguments get unpacked from the args list
  def numDimensions(self):
    return 3

# A two color channel virtual stack:
# - odd slices: image data
# - even slices: spheres (nuclei detections)
class Stack4DTwoChannels(VirtualStack):
  def __init__(self, img4d, kdtrees):
    # The last coordinate is the number of slices per timepoint 3D volume,
    # times the number of timepoints, times the number of channels (two)
    super(VirtualStack, self).__init__(img4d.dimension(0), img4d.dimension(1),
                                       img4d.dimension(2) * img4d.dimension(3) * 2)
    self.img4d = img4d
    self.dimensions = array([img4d.dimension(0), img4d.dimension(1)], 'l')
    self.kdtrees = kdtrees
    self.dimensions3d = FinalInterval([img4d.dimension(0),
                                       img4d.dimension(1),
                                       img4d.dimension(2)])
    
  def getPixels(self, n):
    # 'n' is 1-based
    # Target 2D array img to copy data into
    aimg = ArrayImgs.unsignedShorts(self.dimensions[0:2])
    # The number of slices of the 3D volume of a single timepoint
    nZ = self.img4d.dimension(2)
    # The slice_index if there was a single channel
    slice_index = int((n-1) / 2) # 0-based, of the whole 4D series
    local_slice_index = slice_index % nZ # 0-based, of the timepoint 3D volume
    timepoint_index = int(slice_index / nZ) # Z blocks
    if 1 == n % 2:
      # Odd slice index: image channel
      fixedT = Views.hyperSlice(self.img4d, 3, timepoint_index)
      fixedZ = Views.hyperSlice(fixedT, 2, local_slice_index)
      w.copy(fixedZ.cursor(), aimg.cursor())
    else:
      # Even slice index: spheres channel
      sd = SpheresData(self.kdtrees[timepoint_index], radius, inside, outside)
      volume = Views.interval(Views.raster(sd), self.dimensions3d)
      plane = Views.hyperSlice(volume, 2, local_slice_index)
      w.copy(plane.cursor(), aimg.cursor())
    #
    return aimg.update(None).getCurrentStorageArray()
    
  def getProcessor(self, n):
    return ShortProcessor(self.dimensions[0], self.dimensions[1], self.getPixels(n), None)


imp2 = ImagePlus("vol4d - with nuclei channel", Stack4DTwoChannels(vol4d, kdtrees))
nChannels = 2
nSlices = vol4d.dimension(2) # Z dimension of each time point 3D volume
nFrames = len(timepoint_paths) # number of time points
imp2.setDimensions(nChannels, nSlices, nFrames)
com2 = CompositeImage(imp2, CompositeImage.COMPOSITE)
com2.show()
					

  Improving performance of the generative Spheres volume

It turns out that Jython's performance, when it comes to pixel-wise operations, falls way below that of java or of other, more JIT-friendly scripting languages. This performance drop led us, above, to use the Weaver.method approach to embed a java implementation of a pixel-wise data copy from one image container to another.

Here, we run into another pixel-wise operation: the Spheres.get method is called for every pixel in the 2D plane that makes up a slice of the Stack4DTwoChannels VirtualStack. If you noticed that scrolling through stack sections of the 2-channel image was slow, it was, and it's Jython's fault.

The solution is to either use other JVM languages, such as Clojure, or to create a java library with the necessary functions, or, once again, to create an on-the-fly java solution, such as an implementation of the Spheres class in java, embedded inside our python script. Ugly, and requires you to know java reasonably well, but it gets the job done.

In the interest of brevity, I am showing here only the code that changes: the new declaration of the Spheres class, in java, and its use from the SpheresData RealRandomAccessible wrapper class. The latter stays in jython: it does not perform any pixel-wise operations, so the additional cost incurred by the jython environment is negligible.

Given that instantiating an inner class (which is what Spheres is here in this embedded java code snippet) from jython is tricky, I added the helper newSpheres method. All the SpheresData.realRandomAccess method has to do is invoke ws.newSpheres just like before (see above) it invoked the jython-only Spheres constructor.

If you are getting errors when running this code snippet:

  1. Make sure that the tools.jar is in your java class path, e.g. launching Fiji like this:
    $ ./ImageJ-linux64 --class-path \
    /usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar
  2. Perhaps your Fiji.app/jars/weave_jy2java.*jar file is not up to date. Given the issues with java 6 vs java 8, if you are running java 8 or higher (you should), you may have to install the weaver library from source (NOTE: update file paths as necessary):
    $ git clone https://github.com/fiji/weave_jy2java.git
    $ cd weave_jy2java/
    $ mvn -Dimagej.app.directory=/home/albert/Fiji.app/ \
    clean install

# Big speed up: define the Spheres class in java
ws = Weaver.method("""
public final class Spheres extends RealPoint implements RealRandomAccess {
  private final KDTree kdtree;
  private final NearestNeighborSearchOnKDTree search;
  private final double radius,
                       radius_squared;
  private final UnsignedShortType inside,
                                  outside;

  public Spheres(final KDTree kdtree,
                 final double radius,
                 final UnsignedShortType inside,
                 final UnsignedShortType outside)
  {
    super(3); // 3 dimensions
    this.kdtree = kdtree;
    this.radius = radius;
    this.radius_squared = radius * radius;
    this.inside = inside;
    this.outside = outside;
    this.search = new NearestNeighborSearchOnKDTree(kdtree);
  }

  public final Spheres copyRealRandomAccess() { return copy(); }

  public final Spheres copy() {
    return new Spheres(this.kdtree, this.radius, this.inside, this.outside);
  }

  public final UnsignedShortType get() {
    this.search.search(this);
    if (this.search.getSquareDistance() < this.radius_squared) {
      return inside;
    }
    return outside;
  }
}

public final Spheres newSpheres(final KDTree kdtree,
                                final double radius,
                                final UnsignedShortType inside,
                                final UnsignedShortType outside)
{
  return new Spheres(kdtree, radius, inside, outside);
}

""", [RealPoint, RealRandomAccess, KDTree,
      NearestNeighborSearchOnKDTree, UnsignedShortType])


# The RealRandomAccessible that wraps the Spheres, unbounded
# NOTE: partial implementation, unneeded methods were left unimplemented
# NOTE: args are "kdtree, radius, inside, outside", using the * shortcut
#       given that this class is merely a wrapper for the Spheres class
class SpheresData(RealRandomAccessible):
  def __init__(self, *args):
    self.args = args
  def realRandomAccess(self):
    # Performance improvement: a java-defined Spheres class instance
    return ws.newSpheres(*self.args) # Arguments get unpacked from the args list
  def numDimensions(self):
    return 3
					
  Visualizing detected peaks (nuclei) with 3D spheres with the BigDataViewer

Above, we used ImageJ's CompositeImage to visualize our VirtualStack. Here, we'll use the BigDataViewer framework, which offers arbitrary reslicing, and soon also (in an upcoming feature already available for 16-bit data) volume rendering for true GPU-accelerated 3D rendering with depth perception.

Just like before (see above), we visualize the vol4d trivially with the BdvFunctions class.

Then we create a second 4D volume for the nuclei detection. The nuclei detections of each time point are encoded each by its own KDTree (stored as values in the kdtrees dictionary). With the helper function asVolume, we use each KDTree to define spheres in 3D space using the SpheresData class and the given radius and inside/outside pixel values. Then we specify the bounds to match those of the first 3 dimensions of the vol4d (i.e. the dimensions of the 3D volume of each time point) on a rasterized view (i.e. iterable with integer coordinates). Finally, via Views.stack we express a sequence of 3D volumes as a single 4D volume named spheres4d.

Then we add spheres4d as a second data set to the bdv BigDataViewer instance.

We adjust the intensity range (the min and max) using the BigDataViewer menu "Settings - Brightness & Color", which we use as well to adjust the "set view colors" of each of the two volumes (vol4d in red and spheres4d in green).

# Visualization 3: two-channels with the BigDataViewer

from bdv.util import BdvFunctions, Bdv
from net.imglib2.view import Views
from net.imglib2 import FinalInterval

# Variables defined above, in addition to class SpheresData
vol4d = ...
kdtrees = ...
radius = ...
inside = ...
outside = ...


# Open a new BigDataViewer window with the 4D image data
bdv = BdvFunctions.show(vol4d, "vol4d")

# Create a bounded 3D volume view from a KDTree
def asVolume(kdtree, dimensions3d):
  sd = SpheresData(kdtree, radius, inside, outside)
  volume = Views.interval(Views.raster(sd), dimensions3d)
  return volume

# Define a 4D volume as a sequence of 3D volumes, each a bounded view of SpheresData
dims3d = FinalInterval(map(vol4d.dimension, xrange(3)))
volumes = [asVolume(kdtrees[ti], dims3d) for ti in sorted(kdtrees.iterkeys())]
spheres4d = Views.stack(volumes)

# Append the sequence of spheres 3d volumes to the open BigDataViewer instance
BdvFunctions.show(spheres4d, "spheres4d", Bdv.options().addTo(bdv))
					

GCaMP data analysis to be continued...

 

 

Morphing an image onto another: from a bird to an airplane

Consider a volume that represents a bird, and another an airplane. Can we define intermediate volumes describing the transformation of a bird into an airplane? There are a number of techniques to perform this morphing operation. Here, we illustrate the use of a signed distance transform for this purpose. Namely:

"For each binary image, the edges [of the mask] are found and then each pixel is assigned a distance to the nearest edge. [for pixels] Inside [the mask], distance values are positive; outside, negative. Then [pixels at the same location for] both images [source and target] are compared, and whenever the weighted sum [of the distances] is larger than zero, the result [interpolated] image gets a pixel set to true (or white, meaning inside)."

We start by opening two binary masks representing a bird and an airplane, obtained from the McGill 3D Shape Benchmark. These masks (zero for background, one for inside the mask defining the volume) are stored in binary format with a 1024-byte header, and measure 128x128x128 pixels at 8-bit depth (each pixel is a byte). We read them directly from the file with RandomAccessFile, which allows us to read the data into an array of bytes (after skipping the header) that we then hand over to an ArrayImg.

(Instead of opening binary masks, you could use any pair of 2D or 3D images, or ND, of equal dimensions, where the background is zero and the mask is any value other than zero.)

The orientations of the bird and the airplane are different (see for yourself by using e.g. IL.wrap(bird, "bird").show()). We use Views.rotate to make both volumes have their dorsal side up and anterior side to the front. Then we copy each into an image created with ArrayImgs.unsignedBytes, because of an unknown issue that prevents the rotated views from working well with the Views.stack method used at the end.

Next, we define the function findEdgePixels, which returns a list of RealPoint containing the coordinates for all boundary pixels: pixels whose value is not zero and that have a zero-value pixel among immediate neighbors.

Note we use the imap function from the itertools package: like map (see above), but it constructs an iterator rather than a list, therefore avoiding the creation of a list--a small performance optimization that can become big when the cost is incurred for many pixels. Keep in mind that if a list is built, it has to be iterated again in the next processing step, whereas the items in the imap iterator are consumed as they are generated, and don't need to be kept around.

Note we also use the functools partial function and the operator package's add function. We use these to avoid explicit loops when computing the minimum and maximum coordinates of the neighborhood window around any one pixel with Views.interval. That's because there's no need to build e.g. lists if all we want is to populate an array with the results, which can be done with an iterator (imap); a small performance improvement that adds up quickly. About partial: with it, we define the interim functions inc and dec that wrap another function (add) with some of its arguments already filled in (the number 1 or -1), so that we can use them as single-argument functions to e.g. imap over a sequence of values. Very handy.
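
A small, self-contained illustration of partial and imap as used below (the values are made up):

from functools import partial
from itertools import imap
import operator

inc = partial(operator.add, 1)   # inc(x) == 1 + x
dec = partial(operator.add, -1)  # dec(x) == -1 + x
pos = [10, 20, 30]
print list(imap(dec, pos))  # prints: [9, 19, 29]
print list(imap(inc, pos))  # prints: [11, 21, 31]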

Then we define buildSearch, a convenience function that, given an image, returns a KDTree-based search function for swiftly finding points near a query point of interest, using a NearestNeighborSearchOnKDTree. The kdtree is populated with the list from findEdgePixels.

And then we define the makeInterpolatedImage function: it takes two input images (a source img1 and a target img2), and a weight that takes values between zero (exactly like the source) and 1 (exactly like the target). Here is where we use the signed distance transform: we sum the weighted distance from the current coordinate to the nearest edge pixel in the source image and the weighted distance from the current coordinate to the nearest edge pixel in the target image. The distances are either positive (point is inside the binary mask) or negative (point is outside). If the sum is larger than zero, the pixel is set to belong to the interpolated binary mask (value of one), otherwise to the background (zero). To accomplish all this we iterate simultaneously over all 3 images (source img1, target img2, and the newly created interpolated one img3) with 3 corresponding cursors. Then we return the interpolated image.

  Executing functions in parallel with multiple threads

So far we defined functions but didn't use them. We do so now using independent execution threads. Each thread runs in parallel to the other threads. Concurrent execution (sometimes known as parallel execution or multithreaded execution or multithreading) is simplified enormously thanks to the Executors namespace with convenient methods for creating thread pools (via ExecutorService), and the Callable and Future interfaces. We construct tasks and submit them for execution, and each submission returns us a Future object whose get() will block execution (in the current thread) until the other thread executing the task is ready to deliver its return value. So we first submit all tasks, and retrieve their return values, the latter being a blocking operation that awaits execution completion.

In Jython, all functions defined via the def keyword automatically implement the Callable and Runnable interfaces. But their respective execution methods call and run do not take any arguments (of course: the executing thread wouldn't know what arguments to give it). So for tasks defined entirely by functions without arguments we could submit those functions directly. Here, though, there are arguments to pass on. Therefore we define the class Task that implements Callable, and whose constructor stores both the function to execute and its arguments.

The exe is our thread pool. We initialize it with a fixed number of threads: as many as CPUs are available to the Java Virtual Machine (JVM).

Notice we submit tasks and get their return values within a try/finally code block, ensuring that, in the event of an error, the thread pool will be shut down in any case.

First we submit two tasks to construct the search1 and search2, then wait on the two returned futures with get. The futures list has only two elements, so we can unpack them directly into the two variables search1 and search2.

Then we submit as many tasks as images to interpolate: in this case, for a list of weights equivalent to [0.2, 0.4, 0.6, 0.8] (four interpolated images). And we wait on them, creating the steps list that contains the images as returned by the futures upon completion of the execution.

Both sets of task submissions are done via list comprehension, collecting all returned Future objects into a list.

  Visualization of 4D data sets in the 3D Viewer

Now we create the 4d volume vol4d by concatenating the 3d source img1 and target img2 images with the steps in between, and visualize the whole list of images with an additional, 4th dimension (sort of "time" here) via Views.stack.

Now, realize all images are binary masks with values 0 for background and 1 for the mask. ImageJ's 3D Viewer is easier to work with when these masks are spread into e.g. a range of 0 to 255 values instead. So we set each non-zero pixel to 255 (we could also multiply by 255 to the same effect).

All that remains now is visualizing the interpolated binary masks. First we construct a hyperstack as a CompositeImage, which we then give to the Image3DUniverse to display as a voltex: a rendered volume. The fact that there is a 4th dimension in the hyperstack (the "T" slider in its ImageJ hyperstack window) makes the 3D Viewer display an appropriate time axis at the bottom to switch between time points. Beautiful!

A final note: while we used here binary masks that consisted each of one single continuous object, using separate objects would have worked too. Try to use Views.interval to insert a whole flock of scaled-down birds into a black volume, in the approximate layout of the airplane, and see them smoothly morphing into the airplane shape.

 

from net.imglib2.img.array import ArrayImgs
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.view import Views
from net.imglib2 import KDTree, RealPoint
from net.imglib2.type.numeric.integer import UnsignedByteType
from net.imglib2.neighborsearch import NearestNeighborSearchOnKDTree
from net.imglib2.util import Intervals
from net.imglib2.algorithm.math.ImgMath import compute, mul
from net.imglib2.algorithm.math import ImgSource
from org.scijava.vecmath import Point3f
from ij3d import Image3DUniverse
from ij import CompositeImage, ImagePlus
from java.io import RandomAccessFile
from java.util.concurrent import Executors, Callable
from java.lang import Runtime
from itertools import imap
from functools import partial
from jarray import zeros, array
import operator, os

# NOTE: adjust directory path as necessary
baseDir = "/home/albert/lab/scripts/data/cim.mcgill.ca-shape-benchmark/"

def readBinaryMaskImg(filepath, width, height, depth, header_size):
  ra = RandomAccessFile(filepath, 'r')
  try:
    ra.skipBytes(header_size)
    bytes = zeros(width * height * depth, 'b')
    ra.read(bytes)
    return ArrayImgs.unsignedBytes(bytes, [width, height, depth])
  finally:
    ra.close()

bird = readBinaryMaskImg(os.path.join(baseDir, "birdsIm/b21.im"),
                         128, 128, 128, 1024)
airplane = readBinaryMaskImg(os.path.join(baseDir, "airplanesIm/b14.im"),
                             128, 128, 128, 1024)

# Rotate bird: starts with posterior view, dorsal down
# Rotate 180 degrees around Y axis
birdY90 = Views.rotate(bird, 2, 0) # 90
birdY180 = Views.rotate(birdY90, 2, 0) # 90 again: 180

# Copy rotated bird into ArrayImg
dims = Intervals.dimensionsAsLongArray(birdY90)
img1 = compute(ImgSource(birdY180)).into(ArrayImgs.unsignedBytes(dims))

# Rotate airplane: starts with dorsal view, anterior down
# Set to: coronal view, but dorsal is still down
airplaneC = Views.rotate(airplane, 2, 1)
# Set to dorsal up: rotate 180 degrees
airplaneC90 = Views.rotate(airplaneC, 0, 1) # 90
airplaneC180 = Views.rotate(airplaneC90, 0, 1) # 90 again: 180

# Copy rotated airplane into ArrayImg
img2 = compute(ImgSource(airplaneC180)).into(ArrayImgs.unsignedBytes(dims))


# Find edges
def findEdgePixels(img):
  edge_pix = []
  zero = img.firstElement().createVariable()
  zero.setZero()
  imgE = Views.extendValue(img, zero)
  pos = zeros(img.numDimensions(), 'l')
  inc = partial(operator.add, 1)
  dec = partial(operator.add, -1)
  cursor = img.cursor()
  while cursor.hasNext():
    t = cursor.next()
    # A pixel is on the edge of the binary mask
    # if it has a non-zero value
    if 0 == t.getIntegerLong():
      continue
    # ... and its immediate neighbors ...
    cursor.localize(pos)
    minimum = array(imap(dec, pos), 'l') # map(dec, pos) also works, less performance
    maximum = array(imap(inc, pos), 'l') # map(inc, pos) also works, less performance
    neighborhood = Views.interval(imgE, minimum, maximum)
    # ... have at least one zero value:
    # Good performance: the "if x in <iterable>" approach stops upon finding the first x    
    if 0 in imap(UnsignedByteType.getIntegerLong, neighborhood):
      edge_pix.append(RealPoint(array(list(pos), 'f')))
  return edge_pix

def buildSearch(img):
  edge_pix = findEdgePixels(img)
  kdtree = KDTree(edge_pix, edge_pix)
  return NearestNeighborSearchOnKDTree(kdtree)

def makeInterpolatedImage(img1, search1, img2, search2, weight):
  """ weight: float between 0 and 1 """
  img3 = ArrayImgs.unsignedBytes(Intervals.dimensionsAsLongArray(img1))
  c1 = img1.cursor()
  c2 = img2.cursor()
  c3 = img3.cursor()
  while c3.hasNext():
    t1 = c1.next()
    t2 = c2.next()
    t3 = c3.next()
    sign1 = -1 if 0 == t1.get() else 1
    sign2 = -1 if 0 == t2.get() else 1
    search1.search(c1)
    search2.search(c2)
    value1 = sign1 * search1.getDistance() * (1 - weight)
    value2 = sign2 * search2.getDistance() * weight
    if value1 + value2 > 0:
      t3.setOne()
  return img3

# A wrapper for concurrent execution
class Task(Callable):
  def __init__(self, fn, *args):
    self.fn = fn
    self.args = args
  def call(self):
    return self.fn(*self.args) # expand args

n_threads = Runtime.getRuntime().availableProcessors()
exe = Executors.newFixedThreadPool(n_threads)

try:
  # Concurrent construction of the search for the source and target images
  futures = [exe.submit(Task(buildSearch, img)) for img in [img1, img2]] # list: eager
  # Get each search, waiting until both are built
  search1, search2 = (f.get() for f in futures) # parentheses: a generator, lazy

  # Parallelize the creation of interpolation steps
  # Can't iterate decimals, so iterate from 2 to 10 every 2 and multiply by 0.1
  # And notice search instances are copied: they are stateful.
  futures = [exe.submit(Task(makeInterpolatedImage,
                             img1, search1.copy(), img2, search2.copy(), w * 0.1))
             for w in xrange(2, 10, 2)] # list: eager
  # Get each step, waiting until all are built
  steps = [f.get() for f in futures] # list: eager, for concatenation in Views.stack
finally:
  # This 'finally' block executes even in the event of an error
  # guaranteeing that the executing threads will be shut down no matter what.
  exe.shutdown()


# ISSUE: Does not work with IntervalView from View.rotate,
# so img1 and img2 were copied into ArrayImg
# (The error would occur when iterating vol4d pixels
#  beyond the first element in the 4th dimension.)
vol4d = Views.stack([img1] + steps + [img2])

# Convert 1 to 255 for easier volume rendering in 3D Viewer
for t in vol4d:
  if 0 != t.getIntegerLong():
    t.setReal(255)

# Construct an ij.VirtualStack from vol4d
virtualstack = IL.wrap(vol4d, "interpolations").getStack()
imp = ImagePlus("interpolations", virtualstack)
imp.setDimensions(1, vol4d.dimension(2), vol4d.dimension(3))
imp.setDisplayRange(0, 255)
# Show as a hyperstack with 4 dimensions, 1 channel
com = CompositeImage(imp, CompositeImage.GRAYSCALE)
com.show()

# Show rendered volumes in 4D viewer: 3D + time axis
univ = Image3DUniverse()
univ.show()
univ.addVoltex(com)
					

 

13. Image registration

Registering two images means to bring the contents of one image into the coordinate system of the other (see wikipedia).

To register two images, first we find correspondences between them, and then compute a transformation model that transfers the data of one image onto the other so that corresponding areas of the images overlap as much as possible.

These correspondences could be one or more points defined on an image by a human or by an algorithm. Or the result of a cross-correlation in the real or the frequency domain (where it is then a phase correlation, using Fourier transforms).

You could think of extracted features as a lossy encoding of the image in a format that allows comparisons with other images at that level of encoding.

Here, we will illustrate techniques for extracting features from images that can then be matched with those of other images to find correspondences between them, and methods to estimate a transformation model from those correspondences.

 

Fast translation-only image registration with phase correlation

The simplest form of image registration is a translation. A slow way to compute a translation is to shift one image over the other by one pixel at a time, computing for every shift a cross-correlation. A much, much faster way is to compute the shift in the frequency domain: a phase correlation, and then analyze only the few top peaks (the best possible solutions, that is, translations) using cross-correlation.

For this example, enable the BigStitcher update site in the Fiji updater. Its jar files contain the PhaseCorrelation2 class and related classes.

First, we load the sample "Nile Bend" image, from which we extract the red channel with Converters.argbChannel (because PhaseCorrelation2 works only on RealType images, and ARGBType is not). From the red channel (of UnsignedByteType, which is a RealType) we cut out two images--actually, two views, using Views.interval and, importantly, Views.zeroMin, with the latter setting the origin of coordinates of the interval to 0, 0. This is necessary in order to correctly compute the translation of the second interval relative to the first.

Then we create a thread pool with Executors.newFixedThreadPool that will be used to compute the Fourier transforms of both views and the phase correlation between them.

The PhaseCorrelation2.calculatePCM static method takes as arguments the two images (actually two views, but ImgLib2 blends the two concepts), a factory and type for creating images of a particular kind (ArrayImg) for intermediate computations, another factory and type (necessarily a complex type like ComplexFloatType) for the Fourier transforms, and the exe thread pool. It returns the pcm: the phase correlation matrix.

To compute the translation shift, we invoke the PhaseCorrelation2.getShift static method with the pcm, the images/views again, and several parameters. The nHighestPeaks will affect performance: the more peaks to analyze with cross-correlation over the original images, the more time it will take. The minOverlap eliminates spurious translations by discarding potentially high scores when the phase correlation peak suggests very little overlap between the images.

With the translation now computed (the peak), all that's left is to render the images in a way that we can visually assess whether the computed translation is accurate. For this, we compute the dimensions of the canvas where both images, registered, would fit in full. Then we define two views again, this time over the original ARGBType img of the "Nile bend". I chose to render them separated into two different images (slice1 and slice2) that then are stacked together so that I can flip back and forth between them to visually inspect the results.

The helper function intoSlice creates the slice as an ArrayImg, acquires a view on it where the translated image should be inserted, and writes into the view the pixel values from the original img.

from net.imglib2.algorithm.phasecorrelation import PhaseCorrelation2
from net.imglib2.img.array import ArrayImgFactory
from net.imglib2.type.numeric.real import FloatType
from net.imglib2.type.numeric.complex import ComplexFloatType
from net.imglib2.converter import Converters
from net.imglib2.view import Views
from net.imglib2.util import Intervals
from java.util.concurrent import Executors
from java.lang import Runtime
from java.awt import Rectangle
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from ij import IJ

# Open Nile Bend sample image
imp = IJ.openImage("https://imagej.nih.gov/ij/images/NileBend.jpg")
img = IL.wrapRGBA(imp)

# Extract red channel: alpha:0, red:1, green:2, blue:3
red = Converters.argbChannel(img, 1)

# Cut out two overlapping ROIs
r1 = Rectangle(1708, 680, 1792, 1760)
r2 = Rectangle( 520, 248, 1660, 1652)
cut1 = Views.zeroMin(Views.interval(red, [r1.x, r1.y],
                                         [r1.x + r1.width -1, r1.y + r1.height -1]))
cut2 = Views.zeroMin(Views.interval(red, [r2.x, r2.y],
                                         [r2.x + r2.width -1, r2.y + r2.height -1]))

# Thread pool
exe = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors())

try:
  # PCM: phase correlation matrix
  pcm = PhaseCorrelation2.calculatePCM(cut1,
                                       cut2,
                                       ArrayImgFactory(FloatType()),
                                       FloatType(),
                                       ArrayImgFactory(ComplexFloatType()),
                                       ComplexFloatType(),
                                       exe)	

  # Number of phase correlation peaks to check with cross-correlation
  nHighestPeaks = 10

  # Minimum image overlap to consider, in pixels
  minOverlap = cut1.dimension(0) / 10

  # Returns an instance of PhaseCorrelationPeak2
  peak = PhaseCorrelation2.getShift(pcm, cut1, cut2, nHighestPeaks,
                                    minOverlap, True, True, exe)

  print "Translation:", peak.getSubpixelShift()
except Exception, e:
  print e
finally:
  exe.shutdown()

# Register images using the translation (the "shift")
shift = peak.getSubpixelShift()
dx = int(shift.getFloatPosition(0) + 0.5)
dy = int(shift.getFloatPosition(1) + 0.5)

# Top-left and bottom-right corners of the canvas that fits both registered images
x0 = min(0, dx)
y0 = min(0, dy)
x1 = max(cut1.dimension(0), cut2.dimension(0) + dx)
y1 = max(cut1.dimension(1), cut2.dimension(1) + dy)

canvas_width = x1 - x0
canvas_height = y1 - y0

def intoSlice(img, xOffset, yOffset):
  factory = ArrayImgFactory(img.randomAccess().get().createVariable())
  stack_slice = factory.create([canvas_width, canvas_height])
  target = Views.interval(stack_slice, [xOffset, yOffset],
                                       [xOffset + img.dimension(0) -1,
                                        yOffset + img.dimension(1) -1])
  c1 = target.cursor()
  c2 = img.cursor()
  while c1.hasNext():
    c1.next().set(c2.next())

  return stack_slice

# Re-cut ROIs, this time in RGB rather than just red
img1 = Views.interval(img, [r1.x, r1.y],
                           [r1.x + r1.width -1, r1.y + r1.height -1])
img2 = Views.interval(img, [r2.x, r2.y],
                           [r2.x + r2.width -1, r2.y + r2.height -1])

# Insert each into a stack slice
xOffset1 = 0 if dx >= 0 else abs(dx)
yOffset1 = 0 if dy >= 0 else abs(dy)
xOffset2 = 0 if dx <= 0 else dx
yOffset2 = 0 if dy <= 0 else dy
slice1 = intoSlice(img1, xOffset1, yOffset1)
slice2 = intoSlice(img2, xOffset2, yOffset2)

stack = Views.stack([slice1, slice2])
IL.wrap(stack, "registered with phase correlation").show()
					

 

Feature extraction with SIFT: translation transform

An effective feature extraction algorithm is the Scale Invariant Feature Transform (DG Lowe 1999). We'll use a SIFT implementation by Stephan Saalfeld (see documentation).

With a trivial example we'll illustrate the steps to follow for registering two images. For this, we load the "Nile" sample image and cut out two overlapping regions of interest. Then we do:

1. Extract SIFT features with FloatArray2DSIFT, which requires choosing a number of parameters (see explanations on the parameters). Each feature consists of a fixed 2d array of variables (defined by fdsize and fdbins), extracted from a specific scaled-down, Gaussian-blurred version of the image (an octave) that starts at initialSigma (for blurring), with a defined number of steps per octave (steps), and with octaves computed between a starting and an ending dimension (maxOctaveSize and minOctaveSize).
The starting scaled-down dimension (maxOctaveSize) of the image at which to start extracting features serves multiple purposes: one of them is to avoid extracting too many, perhaps irrelevant, features that merely reflect high-frequency noise in the image (similarly to the initial Gaussian blur); another is performance, given that in principle as few as 3 good features suffice to define an affine transformation.
Similarly, the lower limit minOctaveSize avoids computing features for an overly scaled-down version of the image that may no longer capture any relevant information.
Note the number of variables of every feature at any octave is the same, but features in smaller octaves cover a larger fraction of the image, as the latter are more scaled down (and therefore the fixed-size 2d square of each feature covers more area).
Choosing the right range of octaves, with the right number of steps, is critical to the success of the SIFT feature extraction. The defaults provided do a good job for most densely textured images, such as natural scenery and electron microscopy images of tissues.
We then visualize coordinates of extracted features: we put the spatial coordinate of each extracted feature into a PointRoi (roi1 and roi2), which we store into the RoiManager (clicking on a listed ROI sets it in the currently active image).

2. Find correspondences (matches) between two lists of features: here, we use FloatArray2DSIFT.createMatches, a static method that takes the two lists of features and the parameter rod. Each feature of one list is compared to every feature of the other list, and good matches are returned as pairs of points (PointMatch).

3. Filter matches down to a spatially coherent subset. Matching individual features across two images is a different task from finding subsets of features that, together, map onto spatially corresponding subsets of features in the other image. For example, an image could contain repeated or sufficiently similar-looking content that is recognized as a matching feature, yet is not coherent with neighboring features and should not be considered a match for registration. To filter out these matching but undesirable feature pairs, we apply model.filterRansac, which runs two filters. First, RANSAC (random sample consensus): for any given set of e.g. 3 matching pairs of features, check whether the sum of residual distances between matching feature pairs, when overlapping their joint center, is smaller than a defined threshold maxEpsilon (the alignment error). Then a second filter further discards undesirable (outlier) matches using robust iterative regression (see AbstractModel.filter).
The function model.filterRansac succeeds when finding at least 1 PointMatch for a translation-only model, 2 for a rigid model (translate and rotate), 2 for a similarity model (translate, rotate and scale) and 3 for an affine model (translate, rotate, scale and shear).
Again we visualize matches (the inlier subset) by storing them as a PointRoi roi1pm and roi2pm.

4. Register images: with the transformation model, we can now register one image onto the other. Here, this example was trivial and involved only a translation, so we first compute the joint bounds of both images when registered (canvas_width and canvas_height) by inverse transforming the top-left and bottom-right coordinates of the second image (inverse, because the model encodes the transformation from the first image into the second). Then we insert the images as slices in an ImageJ stack.

 

from mpicbg.imagefeatures import FloatArray2DSIFT, FloatArray2D
from mpicbg.models import PointMatch, TranslationModel2D, NotEnoughDataPointsException
from ij import IJ, ImagePlus, ImageStack
from ij.gui import PointRoi, Roi
from ij.plugin.frame import RoiManager

# Open Nile Bend sample image
# imp = IJ.getImage()
imp = IJ.openImage("https://imagej.nih.gov/ij/images/NileBend.jpg")

# Cut out two overlapping ROIs
roi1 = Roi(1708, 680, 1792, 1760)
roi2 = Roi(520, 248, 1660, 1652)

imp.setRoi(roi1)
imp1 = ImagePlus("cut 1", imp.getProcessor().crop())
imp1.show()

imp.setRoi(roi2)
imp2 = ImagePlus("cut 2", imp.getProcessor().crop())
imp2.show()

# Parameters for extracting Scale Invariant Feature Transform features
p = FloatArray2DSIFT.Param()
p.fdSize = 4 # number of samples per row and column
p.fdBins = 8 # number of bins per local histogram
p.maxOctaveSize = 512 # largest scale octave in pixels
p.minOctaveSize = 128   # smallest scale octave in pixels
p.steps = 3 # number of steps per scale octave
p.initialSigma = 1.6

def extractFeatures(ip, params):
  sift = FloatArray2DSIFT(params)
  sift.init(FloatArray2D(ip.convertToFloat().getPixels(),
                         ip.getWidth(), ip.getHeight()))
  features = sift.run() # instances of mpicbg.imagefeatures.Feature
  return features


features1 = extractFeatures(imp1.getProcessor(), p)
features2 = extractFeatures(imp2.getProcessor(), p)

# Feature locations as points in an ROI
# Store feature locations in the Roi manager for visualization later
roi_manager = RoiManager()

roi1 = PointRoi()
roi1.setName("features for cut1")
for f in features1:
  roi1.addPoint(f.location[0], f.location[1])

roi_manager.addRoi(roi1)

roi2 = PointRoi()
roi2.setName("features for cut2")
for f in features2:
  roi2.addPoint(f.location[0], f.location[1])

roi_manager.addRoi(roi2)

# Find matches between the two sets of features
# (only by whether the properties of the features themselves match,
#  not by their spatial location.)
rod = 0.9 # ratio of distances in feature similarity space (closest/next closest match)
pointmatches = FloatArray2DSIFT.createMatches(features1, features2, rod)

# Some matches are spatially incoherent: filter matches with RANSAC
model = TranslationModel2D() # We know there's only a translation
candidates = pointmatches # possibly good matches as determined above
inliers = [] # good point matches, to be filled in by model.filterRansac
maxEpsilon = 25.0 # max allowed alignment error in pixels (a distance)
minInlierRatio = 0.05 # ratio inliers/candidates
minNumInliers = 5 # minimum number of good matches to accept the result

try:
  modelFound = model.filterRansac(candidates, inliers, 1000,
                                  maxEpsilon, minInlierRatio, minNumInliers)
  if modelFound:
    # Apply the transformation defined by the model to the first point
    # of each pair (PointMatch) of points. That is, to the point from
    # the first image.
    PointMatch.apply(inliers, model)
except NotEnoughDataPointsException, e:
  print e

if modelFound:
  # Store inlier pointmatches: the spatially coherent subset
  roi1pm = PointRoi()
  roi1pm.setName("matches in cut1")
  roi2pm = PointRoi()
  roi2pm.setName("matches in cut2")

  for pm in inliers:
    p1 = pm.getP1()
    roi1pm.addPoint(p1.getL()[0], p1.getL()[1])
    p2 = pm.getP2()
    roi2pm.addPoint(p2.getL()[0], p2.getL()[1])

  roi_manager.addRoi(roi1pm)
  roi_manager.addRoi(roi2pm)

  # Register images
  # Transform the top-left and bottom-right corner of imp2
  # (use applyInverse: the model describes imp1 -> imp2)
  x0, y0 = model.applyInverse([0, 0])
  x1, y1 = model.applyInverse([imp2.getWidth(), imp2.getHeight()])
  # Determine dimensions of the registered images
  canvas_width = int(max(imp1.getWidth(), x1) - min(0, x0))
  canvas_height = int(max(imp1.getHeight(), y1) - min(0, y0))
  # Create a 2-slice stack with both images aligned, one on each slice
  stack = ImageStack(canvas_width, canvas_height)
  ip1 = imp1.getProcessor().createProcessor(canvas_width, canvas_height)
  ip1.insert(imp1.getProcessor(), int(0 if x0 > 0 else abs(x0)),
                                  int(0 if y0 > 0 else abs(y0)))
  stack.addSlice("cut1", ip1)
  ip2 = ip1.createProcessor(canvas_width, canvas_height)
  ip2.insert(imp2.getProcessor(), int(0 if x0 < 0 else x0),
                                  int(0 if y0 < 0 else y0))
  stack.addSlice("cut2", ip2)
  imp = ImagePlus("registered", stack)
  imp.show()

					

Transformation models

Above, we computed a shift between two images, which is a simple way of stating that we estimated a translation model that could bring pixel data from one image into the coordinate system of the other image by merely shifting pixel coordinates along the axes (e.g. x, y) of the image.

To compute a translation, all we need is a single point correspondence between two images. This was the best peak, as scored by cross-correlation, in the phase correlation approach above. Or the result of fitting a TranslationModel2D to the point correspondences derived from matching and filtering SIFT feature correspondences.

With two point correspondences, we can estimate a rigid model (translation and rotation, ignoring scale) or a similarity model (translation, rotation and scale).

With three point correspondences, we can estimate an affine model (translation, rotation, scale and shear).
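
For instance, here is a minimal sketch (with made-up coordinates) of estimating a rigid model directly from two point correspondences using the mpicbg model classes:

from mpicbg.models import Point, PointMatch, RigidModel2D
from jarray import array

# Two correspondences: each PointMatch pairs a point in image 1 with a point in image 2
matches = [PointMatch(Point(array([  0.0, 0.0], 'd')), Point(array([ 10.0, 5.0], 'd'))),
           PointMatch(Point(array([100.0, 0.0], 'd')), Point(array([110.0, 5.0], 'd')))]
model = RigidModel2D()
model.fit(matches) # may throw NotEnoughDataPointsException
print model.createAffine() # the estimated java.awt.geom.AffineTransform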

When estimating models that include scaling, the mathematically optimal solution is to shrink (scale down) the images towards zero size, because that minimizes the distances between correspondences. To avoid this degenerate solution, a regularization is introduced: for example, we estimate both an affine model and a rigid model (which can't do scaling), and at each iteration of the estimation procedure set the parameters of the affine model to e.g. 90% of itself and 10% of the rigid model, which in practice suffices to avoid infinite shrinking.
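
The mpicbg library offers such a regularized model via the InterpolatedAffineModel2D class; a minimal sketch, assuming point matches are already at hand:

from mpicbg.models import AffineModel2D, RigidModel2D, InterpolatedAffineModel2D

# An affine model regularized with a rigid model: at every estimation iteration,
# the affine parameters are pulled 10% towards the rigid ones (the 0.1 weight).
model = InterpolatedAffineModel2D(AffineModel2D(), RigidModel2D(), 0.1)
# 'model' can then be used like any other model, e.g. with model.filterRansac(...)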

With 4 point correspondences, we can compute a perspective transform: translation, rotation, scale and, so to speak, a kind of shear that preserves straight lines (a homography or projective collineation), which is like a 3D projection.

With 4 or more point correspondences, we can estimate any of the above models (by computing a weighted consensus) or a non-linear transformation, such as a thin-plate spline or a moving least squares transform. The thin plate spline (Bookstein 1989) is perhaps the smoothest non-linear transformation that can be estimated from a set of point correspondences, but it is also costly at O(n^3) operations. The moving least squares transform (Schaefer, McPhail and Warren, 2006) is less accurate but offers very good performance and more than acceptable results, by estimating an affine transform at any one point in space, computed from a weighted (by distance) contribution of each point correspondence.
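
As a sketch of the non-linear case, the mpicbg library includes a MovingLeastSquaresTransform; here, 'matches' is assumed to be a list of PointMatch instances (e.g. the inliers of a prior model fit):

from mpicbg.models import MovingLeastSquaresTransform, AffineModel2D
from jarray import array

mls = MovingLeastSquaresTransform()
mls.setModel(AffineModel2D()) # the local model estimated around every location
mls.setAlpha(1.0)             # how fast the influence of a match decays with distance
mls.setMatches(matches)       # 'matches' is assumed; at least 3 are needed for an affine
xy = array([10.0, 20.0], 'd')
mls.applyInPlace(xy)          # transforms the coordinate in place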

See the imglib2-realtransform model classes, and the mpicbg.models classes as well, which we use below in the SIFT feature extraction example.

 

Click-and-drag interactive transforms

Go to "Plugins - Transforms" and try the interactive transforms on any image,
by activating the plugin and then dragging the control points that appear on the image:

Interactive Rigid (2 points) [source code]
translate & rotate
Interactive Similarity (2 points) [source code]
translate, rotate & scale
Interactive Affine (3 points) [source code]
translate, rotate, scale & shear
Interactive Perspective (4 points) [source code]
translate, rotate, scale & perspective
Interactive Moving least squares transform mesh (any number of points) [source code]
local non-linear deformations
(push 'U' to see the triangle mesh; each triangle gets its own affine transform).
(Add points, and don't drag them, to areas you want to keep still.)
 

Feature extraction with SIFT: similarity transform

The transformation above was a simple translation. Here, we rotate and scale the second cut out, requiring a SimilarityModel2D to correctly register the second cut onto the first. There are two major differences from the example above.

First, the SIFT parameters: we use a larger maxOctaveSize and 4 steps: otherwise, only 2 good matches are found which, while enough for a similarity transform, may be sensitive to noise. Ideally more than 7 good matches are desired; if possible (performance-wise), dozens or a hundred.

Second, how we map imp2 onto imp1 in the stack: we create a new target ImageProcessor of the same type as that of imp1, whose dimensions we discover by first transforming the 4 corners of the image (x0, y0; x1, y1 ...). Then we iterate each target x,y coordinate, asking, in the pull function, for an interpolated pixel from imp2 (note we set the interpolation mode of imp2's ImageProcessor to bilinear). Some of the coordinates we request fall outside imp2's ImageProcessor (which we named source): the ImageProcessor.getPixelInterpolated method returns a zero value in that case. This spares us from having to compute a transformed mask and figure out whether a pixel can be requested or not.

Note that we use a deque with maxlen=0 to consume the generator that iterates all of target's pixels. We could have also used a double for loop directly, but the 0-length deque technique affords better performance for consuming iterables (a generator is an iterable too).
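
As a tiny, stand-alone illustration of that idiom:

from collections import deque

collected = []
# Consume a generator purely for its side effects (here, appending squares),
# without accumulating its yielded values anywhere:
deque((collected.append(i * i) for i in xrange(5)), maxlen=0)
print collected # prints [0, 1, 4, 9, 16]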

The complete, executable script is here: extract_SIFT_features+similarity.py.

 

# The full script is at:
# https://github.com/acardona/scripts/

from mpicbg.imagefeatures import FloatArray2DSIFT, FloatArray2D
from mpicbg.models import PointMatch, SimilarityModel2D, NotEnoughDataPointsException
from ij import IJ, ImagePlus, ImageStack
from ij.gui import Roi, PolygonRoi
from jarray import zeros
from collections import deque

# Open Nile Bend sample image
imp = IJ.openImage("https://imagej.nih.gov/ij/images/NileBend.jpg")

# Cut out two overlapping ROIs
roi1 = Roi(1708, 680, 1792, 1760)
roi2 = Roi(520, 248, 1660, 1652)

imp.setRoi(roi1)
imp1 = ImagePlus("cut 1", imp.getProcessor().crop())
imp1.show()

# Rotate and scale the second cut out
imp.setRoi(roi2)
ipc2 = imp.getProcessor().crop()
ipc2.setInterpolationMethod(ipc2.BILINEAR)
ipc2.rotate(67) # degrees clockwise
ipc2 = ipc2.resize(int(ipc2.getWidth() / 1.6 + 0.5))
imp2 = ImagePlus("cut 2", ipc2)
imp2.show()

# Parameters for SIFT: NOTE 4 steps, larger maxOctaveSize
p = FloatArray2DSIFT.Param()
p.fdSize = 4 # number of samples per row and column
p.fdBins = 8 # number of bins per local histogram
p.maxOctaveSize = 1024 # largest scale octave in pixels
p.minOctaveSize = 128   # smallest scale octave in pixels
p.steps = 4 # number of steps per scale octave
p.initialSigma = 1.6

def extractFeatures(ip, params):
  sift = FloatArray2DSIFT(params)
  sift.init(FloatArray2D(ip.convertToFloat().getPixels(),
                         ip.getWidth(), ip.getHeight()))
  features = sift.run() # instances of mpicbg.imagefeatures.Feature
  return features

features1 = extractFeatures(imp1.getProcessor(), p)
features2 = extractFeatures(imp2.getProcessor(), p)

# Find matches between the two sets of features
# (only by whether the properties of the features themselves match,
#  not by their spatial location.)
rod = 0.9 # ratio of distances in feature similarity space (closest/next closest match)
pointmatches = FloatArray2DSIFT.createMatches(features1, features2, rod)

# Some matches are spatially incoherent: filter matches with RANSAC
model = SimilarityModel2D() # supports translation, rotation and scaling
candidates = pointmatches # possibly good matches as determined above
inliers = [] # good point matches, to be filled in by model.filterRansac
maxEpsilon = 25.0 # max allowed alignment error in pixels (a distance)
minInlierRatio = 0.05 # ratio inliers/candidates
minNumInliers = 5 # minimum number of good matches to accept the result

try:
  modelFound = model.filterRansac(candidates, inliers, 1000,
                                  maxEpsilon, minInlierRatio, minNumInliers)
  if modelFound:
    # Apply the transformation defined by the model to the first point
    # of each pair (PointMatch) of points. That is, to the point from
    # the first image.
    PointMatch.apply(inliers, model)
except NotEnoughDataPointsException, e:
  print e

if modelFound:
  # Register images  
  # Transform the top-left and bottom-right corner of imp2
  # (use applyInverse: the model describes imp1 -> imp2)
  x0, y0 = model.applyInverse([0, 0])
  x1, y1 = model.applyInverse([imp2.getWidth(), 0])
  x2, y2 = model.applyInverse([0, imp2.getHeight()])
  x3, y3 = model.applyInverse([imp2.getWidth(), imp2.getHeight()])
  xtopleft = min(x0, x1, x2, x3)
  ytopleft = min(y0, y1, y2, y3)
  xbottomright = max(x0, x1, x2, x3)
  ybottomright = max(y0, y1, y2, y3)
  
  # Determine dimensions of the montage of registered images
  canvas_width = int(max(imp1.getWidth(), xbottomright) - min(0, xtopleft))
  canvas_height = int(max(imp1.getHeight(), ybottomright) - min(0, ytopleft))
  
  # Create a 2-slice stack with both images aligned, one on each slice
  stack = ImageStack(canvas_width, canvas_height)

  # Insert imp1
  slice1 = imp1.getProcessor().createProcessor(canvas_width, canvas_height)
  slice1.insert(imp1.getProcessor(), int(0 if xtopleft > 0 else abs(xtopleft)),
                                     int(0 if ytopleft > 0 else abs(ytopleft)))
  stack.addSlice("cut1", slice1)
  
  # Transform imp2 into imp1 coordinate space
  source = imp2.getProcessor()
  source.setInterpolationMethod(source.BILINEAR)
  target = imp1.getProcessor().createProcessor(int(xbottomright - xtopleft),
                                               int(ybottomright - ytopleft))
  
  p = zeros(2, 'd')

  # The translation offset: the transformed imp2 lies mostly outside
  # of imp1, so shift the x,y coordinates to be able to render it within target
  xoffset = 0 if xtopleft > 0 else xtopleft
  yoffset = 0 if ytopleft > 0 else ytopleft
  
  def pull(source, target, x, xoffset, y, yoffset, p, model):
    p[0] = x + xoffset
    p[1] = y + yoffset
    model.applyInPlace(p) # imp1 -> imp2, target is in imp1 coordinates,
                          #                  source in imp2 coordinates.
    # getPixelInterpolated returns 0 when outside the image
    target.setf(x, y, source.getPixelInterpolated(p[0], p[1]))
  
  deque((pull(source, target, x, xoffset, y, yoffset, p, model)
         for x in xrange(target.getWidth())
         for y in xrange(target.getHeight())),
        maxlen=0)

  slice2 = slice1.createProcessor(canvas_width, canvas_height)
  slice2.insert(target, int(0 if xtopleft < 0 else xtopleft),
                        int(0 if ytopleft < 0 else ytopleft))
  stack.addSlice("cut2", slice2)
  imp = ImagePlus("registered", stack)
  imp.show()
					

Custom features: demystifying feature extraction

What is a feature? It's an "interest point", a "local geometric descriptor", or a landmark on an image. At its core, it's a spatial coordinate with some associated parameters (measured from the image, presumably around the coordinate) that enable comparisons with other features of the same kind.

Here, we are going to define our own custom features. Useful features will capture the properties of the underlying image data from which they are extracted. The data I chose for testing custom features (see at the end) are 4D series of Ca2+ imaging. In these 3D volumes--one per measured time point--neuronal somas light up and dim down as they fire action potentials. Given that the frequency of data sampling is higher than the decay of calcium signals, a substantial number of somas fluorescent in one time point will also be present in the next one. Capturing this common subset is the goal. We also know that neuronal somas don't move, at least not significantly in the measured time frames of hundreds of milliseconds. Therefore, my custom features will be based on the detection of somas and their relationship to neighboring somas.

We'll compare the features extracted from two consecutive time points to find the subset of matching pairs. Then we'll filter these matching pairs down to a spatially coherent subset, from which to estimate a spatial transformation, to register the 3D volume of one time point to that of the time point before.

In order to efficiently triage different feature extraction and comparison parameters and approaches, and the model estimation parameters, we'll save features and point matches along the way into CSV files. These files will be invalidated, and ignored (and overwritten), when the parameters with which they were created are different than the ones currently in use.

The entire program shown here consists of stateless functions, that is, pure functions that do not depend on any global state. In this way, each function can be tested independently for correctness. It also breaks down the program into small, comprehensible chunks. Developing each function can be done independently of any others, naturally starting with functions that don't depend on others yet to be written. So as we progress down the program, we'll see increasingly higher-order functions, calling functions that we defined (and tested) before. We also define a few classes.

We start by defining a utility function syncPrint, which synchronizes access to Python's built-in print function, ensuring that output logs remain readable when multiple concurrently executing threads write to them.

Then we define functions createDoG and getDoGPeaks to abstract away the finding of soma locations with the Difference of Gaussian method (see above, and DogDetection).

The Constellation class embodies a single feature. It's my answer to the question of what could be the simplest feature that could possibly work, given the data. Its properties are:

  • position: the spatial coordinates of the center, as a list or array of floating-point numbers.
  • angle: the angle defined by the triangle specified by the center and the two other spatial coordinates p1 and p2.
  • len1 and len2: the distances from the center to the other points p1, p2 with which the angle was defined.

The static function Constellation.fromSearch explains how each feature is constructed from 3 spatial coordinates (3 peaks in the Difference of Gaussian peak detection).

The matches member function compares a Constellation feature with another one. The parameters angle_epsilon and len_epsilon_sq describe the precision of the matching for evaluating the match as valid. Each Constellation feature has only 3 parameters (angle, len1, len2) which are compared to those of the other feature.

Note that this Constellation is position and rotation invariant, but not scale invariant. To make it scale invariant, modify it so that instead of storing len1 and len2, it stores (and uses for comparisons) the length of the vector to p1 relative to that of the vector to p2. With such a modification, these features could then be used to estimate changes in the dimensions of the images.
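
A hypothetical sketch of such a scale-invariant variant (not used below), storing the ratio of the two lengths instead of the lengths themselves:

from mpicbg.models import Point
from jarray import array

class ScaleFreeConstellation:
  """ Like Constellation, but stores len1/len2 rather than the two lengths,
      making comparisons invariant to uniform scaling of the image. """
  def __init__(self, angle, len_ratio, coords):
    self.angle = angle
    self.len_ratio = len_ratio # len1 / len2, with len1 <= len2
    self.position = Point(array(coords, 'd'))

  def matches(self, other, angle_epsilon, ratio_epsilon):
    return abs(self.angle - other.angle) < angle_epsilon \
       and abs(self.len_ratio - other.len_ratio) < ratio_epsilon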

The other functions of Constellation relate to the saving and reloading of features from CSV files.

(The inspiration for the name Constellation and its basic structure originates in the paper "Software for bead-based registration of selective plane illumination microscopy data" by Preibisch, Saalfeld, Schindelin & Tomancak, 2010).

The makeRadiusSearch function constructs a KDTree and passes it onto a RadiusNeighborSearchOnKDTree, which enables the swift location of peaks within a given radius distance of a specific peak. Otherwise, we'd have to check all peaks against all peaks to find those near a specific one, with horrifying O(n^2) performance. With the KDTree we get O(log(n)) performance per query instead, i.e. roughly O(n log(n)) overall rather than O(n^2): a very significant difference (quasi-linear versus quadratic in the number of peaks).

The extractFeatures function is the last one related to feature extraction, and crucial: it defines how to construct Constellation features from detected peaks. The constraints I had to deal with have to do with comparing features, an operation that should be robust to some amount of noise in the measurement (the noise here being a bit of wiggle in the peak detections (i.e. in their spatial location), possibly magnified by anisotropy).

Here, I've taken the strategy of generating up to max_per_peak features for each peak (an adjustable parameter), and to do so by using those peaks furthest from the center peak in question (larger distances are more robust to noise in the detected position of somas in different images), as long as they define an angle larger than min_angle. The angle cannot be, by definition, larger than 180 degrees. Limiting features to those with angles larger than min_angle prevents creating features that are hard to compare because their angles, being small, may fall within the noise range (keep in mind features extracted from one time point will never be exactly the same as features extracted from the next time point: they will be merely very similar at most). In summary, the choices in constructing a feature are guided by a desire for robustness to the noise inherent to the detection of soma positions in different time points.

After detecting peaks and creating features from them, we have to find which features of one image match those of another. The PointMatches class and its static method fromFeatures compares all features of one image (features1) with all features of another (features2), identifying matching features that are then stored as a list of PointMatch instances constructed with the position of each feature.

Note a possible significant optimization: if the volumes belong to a 4D series, and the frequency of sampling is high, it is very likely that the samples haven't moved much or at all in the intervening time between the two consecutive image volume acquisitions. Therefore, instead of comparing all features to all features, potentially corresponding features could first be filtered using a RadiusNeighborSearchOnKDTree: only those Constellation features of one time point lying within a radius of the position of a Constellation of the other time point would have to be compared, saving a lot of computation time. Here, I left it all to all, given the example I use to test this program, below.
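
A hypothetical sketch of that optimization (not used below), building a KDTree over the positions of the features of the second time point and querying it with the position of each feature of the first:

from net.imglib2 import KDTree, RealPoint
from net.imglib2.neighborsearch import RadiusNeighborSearchOnKDTree
from mpicbg.models import PointMatch

def pointMatchesNearby(features1, features2, radius, angle_epsilon, len_epsilon_sq):
  """ Compare each Constellation of features1 only with those Constellation
      features of features2 that lie within 'radius' of its position. """
  positions2 = [RealPoint(c2.position.getW()) for c2 in features2]
  search = RadiusNeighborSearchOnKDTree(KDTree(features2, positions2))
  pointmatches = []
  for c1 in features1:
    search.search(RealPoint(c1.position.getW()), radius, False) # unsorted
    for i in xrange(search.numNeighbors()):
      c2 = search.getSampler(i).get()
      if c1.matches(c2, angle_epsilon, len_epsilon_sq):
        pointmatches.append(PointMatch(c1.position, c2.position))
  return pointmatches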

The other methods of the PointMatches class relate to loading and storing point matches from/to CSV files.

The next 4 functions saveFeatures, loadFeatures, savePointMatches, loadPointMatches implement the saving and loading of features and point matches to/from CSV files.

Saving features and point matches is straightforward. We open the file for writing (the 'w' parameter) wrapped in a with statement (to ensure the file handle is always closed, even when there is an error). We use the csv library to create a csv.writer (w), and then write two kinds of data: first the parameters with which the features or pointmatches were computed, and second the actual features or pointmatches. If you'd want to parse these CSV files in other programs, skip the first 3 lines containing the parameter headers (line 1), the parameter values (line 2) and the feature or pointmatches headers (line 3).

The file path for the CSV file is derived from the img_filename, to ensure it is unique. The os.path.join function is an operating system-independent way of joining a directory with a file name, avoiding issues with forward and backward slashes in file path representations.

Upon loading features or pointmatches from a CSV file (with a csv.reader), we first of all check whether the stored parameters match those with which we want to extract features and make pointmatches. (Parameters from the CSV file are parsed by imap'ing the float function to the sequence of strings provided by the csv reader.next() invocation.) When a parameter doesn't match, the loading functions return None: features or pointmatches will have to be computed, and stored, again. This check of whether the parameters match is crucial for determining whether the content of the CSV files can be reused. Cases when it can be reused include e.g. adjusting the parameters of the model fitted to the pointmatches, which is independent of how the latter were computed.

When writing the CSV files, note the call to os.fsync after csvfile.flush(), which ensures that the file is actually written to disk prior to continuing script execution. This is necessary because some (most) operating systems abstract the writing of files to disk and may defer it for later, whereas in this program we need files to be written and then read subsequently by other execution threads.

Note the keyword argument validateOnly in loadFeatures: it is used later, by ensureFeatures, to validate CSV files (by checking the parameter values) without having to parse all their content.

Note also the keyword argument epsilon in both loadFeatures and loadPointMatches: when writing floating-point numbers to text, their precision will not be preserved exactly. The epsilon specifies an acceptable precision. The default is 0.00001, which more than suffices here.

The helper function makeFeatures takes an image, detects peaks with DogDetection, extracts features and stores them in a CSV file, and returns the features. Uses nearly all the functions defined above.

The helper class Getter implements Future, and is useful for abstracting the reusing of loaded features from the invocation of a computation to make them anew. It is used by findPointMatches.

The findPointMatches function takes the file paths of two images, img1_filename and img2_filename, the img_loader and getCalibration functions, a directory to store CSV files (csv_dir), an ExecutorService exe (a thread pool), and the params dictionary, and sets out to return a list of PointMatch instances for the images.

First, findPointMatches attempts to load point matches from CSV files. If not found or not valid (parameters may be different), then it attempts to load features from CSV files. If not present or not valid, then it computes the features anew by invoking the makeFeatures function via the helper Task class, in parallel via the exe.

With the features ready, PointMatches.fromFeatures is invoked, and the resulting pointmatches are saved into a CSV file with savePointMatches. Note how only a subset of the params are used (by name) for writing the CSV file, as some params aren't related to features or pointmatches.

The function ensureFeatures will be run before anything else for every image, to check whether a CSV file with features exists. Note how it filters out of the params dictionary the subset of entries (by name) necessary for feature extraction (for peak detection and the construction of Constellation instances) and passes only those to makeFeatures. This is important because this subset of parameters will be written into the CSV file along with the features.

The fit function does the actual estimation of a model from a list of pointmatches. The rest of parameters relate to the AbstractModel.filterRansac function which, despite its name, does the actual estimation of the transformation model, modifying the model instance provided to it as argument. It returns the boolean modelFound and the list of inliers, that is, the spatially coherent subset of PointMatch instances from which the model was estimated.

The fitModel function is higher-order still, and will invoke the fit function defined just above. It's a convenience function that, given two image file names (img1_filename, img2_filename), retrieves their pointmatches, fits the model and, when no model is found, returns an identity matrix.

The Task class we had already used in prior scripts (see above); it wraps a function and its arguments into a Callable, for deferred execution in a thread pool (our exe).

The computeForwardTransforms function is one of the main entry points into this custom feature extraction and image registration program (realize so far we haven't executed anything, but merely declared functions and classes). It takes a list of filenames, ordered, with each filename representing a timepoint in a 4D series; a csv_dir directory for loading and storing CSV files, an ExecutorService (exe) for running Tasks concurrently, and the params dictionary.

First, it runs ensureFeatures for every image, concurrently. This checks whether CSV files with features exist for every image, or creates them if not.

Then it computes the transformation models, one per image starting with the second image. These are digested into the matrices, with the first image getting an identity transform (no transformation).

There are two issues with the transformations computed so far:

  1. The transformation matrices define forward transforms from an image at time point i to the one at i+1. Instead, we want the opposite: transforms from i+1 to i.
  2. The transforms are local to i vs i+1. What we want instead is to concatenate all transforms up to time point i, and then apply the transform from i+1 to i.

Therefore, the function asBackwardAffineTransforms takes the list of matrices, expresses each as an imglib2 AffineTransform3D, inverts it, and preConcatenates to it all prior transforms for all prior time points. This is done iteratively, accumulating into the aff_previous transform.

With the returned list of affines, we can then invoke viewTransformed to view the original 3D volume for each time point as a registered volume, relative to the very first time point. This is what registeredView does, which is the main entry point into all of these stateless functions. With registeredView, we abstract all operations into a function call that takes the list of filenames (img_filenames), an img_loader, the directory of CSV files (csv_dir), a thread pool (exe) and the params dictionary.

Voilà, we are done: we can now measure e.g. GCaMP signal (the amount of fluorescence) on a particular soma of any time point, and trace the signal through the whole time series.

Note a critical advantage here is that images are never duplicated. Doing so wastes time and storage space, which costs money (in data storage, in computing time, and in researcher time). Not duplicating the data into a transformed version also critically enables data reproducibility: any measurement can be traced back to the original data and the coordinate transformations applied to it.

Note: we didn't define the function getCalibration, used by makeFeatures to invoke getDoGPeaks with an appropriate calibration. We'll define this function later (should return a list of 3 doubles, e.g. [1.0, 1.0, 1.0]).
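
For example, a minimal placeholder that assumes isotropic, unit calibration for every image (adapt it to read the real calibration of your data):

def getCalibration(img_filename):
  # Placeholder: isotropic pixels; replace with per-file calibration when appropriate
  return [1.0, 1.0, 1.0]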

Now that we have defined a whole framework for registering 4D data, let's test it.

from __future__ import with_statement
from net.imglib2.algorithm.dog import DogDetection
from net.imglib2.view import Views
from net.imglib2 import KDTree
from net.imglib2.neighborsearch import RadiusNeighborSearchOnKDTree
from net.imglib2.realtransform import RealViews, AffineTransform3D
from net.imglib2.interpolation.randomaccess import NLinearInterpolatorFactory
from org.scijava.vecmath import Vector3f
from mpicbg.models import Point, PointMatch, RigidModel3D, NotEnoughDataPointsException
from itertools import imap, izip, product
from jarray import array, zeros
from java.util import ArrayList
from java.util.concurrent import Executors, Callable, Future
import os, csv, sys
from java.lang import Class # for Class.forName("[D"), used in fitModel below
from synchronize import make_synchronized


@make_synchronized
def syncPrint(msg):
  print msg

def createDoG(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue):
  """ Create difference of Gaussian peak detection instance.
      sigmaSmaller and sigmaLarger are in calibrated units. """
  # Fixed parameters
  extremaType = DogDetection.ExtremaType.MAXIMA
  normalizedMinPeakValue = False
  
  imgE = Views.extendMirrorSingle(img)
  # In the difference of gaussian peak detection, the img acts as the interval
  # within which to look for peaks. The processing is done on the infinite imgE.
  return DogDetection(imgE, img, calibration, sigmaLarger, sigmaSmaller,
    extremaType, minPeakValue, normalizedMinPeakValue)

def getDoGPeaks(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue):
  """ Return a list of peaks as net.imglib2.RealPoint instances, calibrated. """
  dog = createDoG(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue)
  peaks = dog.getSubpixelPeaks()
  # Return peaks in calibrated units (modify in place)
  for peak in peaks:
    for d, cal in enumerate(calibration):
      peak.setPosition(peak.getFloatPosition(d) * cal, d)
  return peaks

# A custom feature, comparable with other features of the same kind
class Constellation:
  """ Expects 3 scalars and an iterable of scalars. """
  def __init__(self, angle, len1, len2, coords):
    self.angle = angle
    self.len1 = len1
    self.len2 = len2
    self.position = Point(array(coords, 'd'))

  def matches(self, other, angle_epsilon, len_epsilon_sq):
    """ Compare the angles, if less than epsilon, compare the vector lengths.
        Return True when deemed similar within measurement error brackets. """
    return abs(self.angle - other.angle) < angle_epsilon \
       and abs(self.len1 - other.len1) + abs(self.len2 - other.len2) < len_epsilon_sq

  @staticmethod
  def subtract(loc1, loc2):
    return (loc1.getFloatPosition(d) - loc2.getFloatPosition(d)
            for d in xrange(loc1.numDimensions()))

  @staticmethod
  def fromSearch(center, p1, d1, p2, d2):
    """ center, p1, p2 are 3 RealLocalizable, with center being the peak
        and p1, p2 being the wings (the other two points).
        p1 is always closer to center than p2 (d1 < d2).
        d1, d2 are the square distances from center to p1, p2
        (could be computed here, but RadiusNeighborSearchOnKDTree did it). """
    pos = tuple(center.getFloatPosition(d) for d in xrange(center.numDimensions()))
    v1 = Vector3f(Constellation.subtract(p1, center))
    v2 = Vector3f(Constellation.subtract(p2, center))
    return Constellation(v1.angle(v2), d1, d2, pos)

  @staticmethod
  def fromRow(row):
    """ Expects: row = [angle, len1, len2, x, y, z] """
    return Constellation(row[0], row[1], row[2], row[3:])

  def asRow(self):
    "Returns: [angle, len1, len2, position.x, position,y, position.z"
    return (self.angle, self.len1, self.len2) + tuple(self.position.getW())

  @staticmethod
  def csvHeader():
    return ["angle", "len1", "len2", "x", "y", "z"]


def makeRadiusSearch(peaks):
  """ Construct a KDTree-based radius search, for locating points
      within a given radius of a reference point. """
  return RadiusNeighborSearchOnKDTree(KDTree(peaks, peaks))


def extractFeatures(peaks, search, radius, min_angle, max_per_peak):
  """ Construct up to max_per_peak constellation features with furthest peaks. """
  constellations = []
  for peak in peaks:
    search.search(peak, radius, True) # sorted
    n = search.numNeighbors()
    if n > 2:
      yielded = 0
      # 0 is itself: skip from range of indices
      for i, j in izip(xrange(n -2, 0, -1), xrange(n -1, 0, -1)):
        if yielded == max_per_peak:
          break
        p1, d1 = search.getPosition(i), search.getSquareDistance(i)
        p2, d2 = search.getPosition(j), search.getSquareDistance(j)
        cons = Constellation.fromSearch(peak, p1, d1, p2, d2)
        if cons.angle >= min_angle:
          yielded += 1
          constellations.append(cons)
  #
  return constellations


class PointMatches():
  def __init__(self, pointmatches):
    self.pointmatches = pointmatches
  
  @staticmethod
  def fromFeatures(features1, features2, angle_epsilon, len_epsilon_sq):
    """ Compare all features of one image to all features of the other image,
        to identify matching features and then create PointMatch instances. """
    return PointMatches([PointMatch(c1.position, c2.position)
                         for c1, c2 in product(features1, features2)
                         if c1.matches(c2, angle_epsilon, len_epsilon_sq)])

  def toRows(self):
    return [PointMatches.asRow(pm) for pm in self.pointmatches]

  @staticmethod
  def fromRows(rows):
    """ rows: from a CSV file, as lists of strings. """
    return PointMatches([PointMatch(Point(array(imap(float, row[0:3]), 'd')),
                                    Point(array(imap(float, row[3:6]), 'd')))
                         for row in rows])

  @staticmethod
  def csvHeader():
    return ["x1", "y1", "z1", "x2", "y2", "z2"]

  @staticmethod
  def asRow(pm):
    return tuple(pm.getP1().getW()) + tuple(pm.getP2().getW())


def saveFeatures(img_filename, directory, features, params):
  path = os.path.join(directory, img_filename) + ".features.csv"
  try:
    with open(path, 'w') as csvfile:
      w = csv.writer(csvfile, delimiter=',', quotechar="\"",
                     quoting=csv.QUOTE_NONNUMERIC)
      # First two rows: parameter names and values
      keys = params.keys()
      w.writerow(keys)
      w.writerow(tuple(params[key] for key in keys))
      # Feature header
      w.writerow(Constellation.csvHeader())
      # One row per Constellation feature
      for feature in features:
        w.writerow(feature.asRow())
      # Ensure it's written
      csvfile.flush()
      os.fsync(csvfile.fileno())
  except:
    syncPrint("Failed to save features at %s" % path)
    syncPrint(str(sys.exc_info()))


def loadFeatures(img_filename, directory, params, validateOnly=False, epsilon=0.00001):
  """ Attempts to load features from filename + ".features.csv" if it exists,
      returning a list of Constellation features or None.
      params: dictionary of parameters with which features are wanted now,
              to compare with parameter with which features were extracted.
              In case of mismatch, return None.
      epsilon: allowed error when comparing floating-point values.
      validateOnly: if True, return after checking that parameters match. """
  try:
    csvpath = os.path.join(directory, img_filename + ".features.csv")
    if os.path.exists(csvpath):
      with open(csvpath, 'r') as csvfile:
        reader = csv.reader(csvfile, delimiter=',', quotechar="\"")
        # First line contains parameter names, second line their values
        paramsF = dict(izip(reader.next(), imap(float, reader.next())))
        for name in paramsF:
          if abs(params[name] - paramsF[name]) > epsilon:
            syncPrint("Mismatching parameters: '%s' - %f != %f" %
                      (name, params[name], paramsF[name]))
            return None
        if validateOnly:
          return True # would return None above, which is falsy
        reader.next() # skip header with column names
        features = [Constellation.fromRow(map(float, row)) for row in reader]
        syncPrint("Loaded %i features for %s" % (len(features), img_filename))
        return features
    else:
      syncPrint("No stored features found at %s" % csvpath)
      return None
  except:
    syncPrint("Could not load features for %s" % img_filename)
    syncPrint(str(sys.exc_info()))
    return None


def savePointMatches(img_filename1, img_filename2, pointmatches, directory, params):
  filename = img_filename1 + '.' + img_filename2 + ".pointmatches.csv"
  path = os.path.join(directory, filename)
  try:
    with open(path, 'w') as csvfile:
      w = csv.writer(csvfile, delimiter=',', quotechar="\"",
                     quoting=csv.QUOTE_NONNUMERIC)
      # First two rows: parameter names and values
      keys = params.keys()
      w.writerow(keys)
      w.writerow(tuple(params[key] for key in keys))
      # PointMatches header
      w.writerow(PointMatches.csvHeader())
      # One PointMatch per row
      for pm in pointmatches:
        w.writerow(PointMatches.asRow(pm))
      # Ensure it's written
      csvfile.flush()
      os.fsync(csvfile.fileno())
  except:
    syncPrint("Failed to save pointmatches at %s" % path)
    syncPrint(str(sys.exc_info()))
    return None
    

def loadPointMatches(img1_filename, img2_filename, directory, params, epsilon=0.00001):
  """ Attempts to load point matches from
      filename1 + '.' + filename2 + ".pointmatches.csv" if it exists,
      returning a list of PointMatch instances or None.
      params: dictionary of parameters with which pointmatches are wanted now,
              to compare with parameter with which pointmatches were made.
              In case of mismatch, return None.
      epsilon: allowed error when comparing floating-point values. """
  try:
    csvfilename = img1_filename + '.' + img2_filename + ".pointmatches.csv"
    csvpath = os.path.join(directory, csvfilename)
    if not os.path.exists(csvpath):
      syncPrint("No stored pointmatches found at %s" % csvpath)
      return None
    with open(csvpath, 'r') as csvfile:
      reader = csv.reader(csvfile, delimiter=',', quotechar="\"")
      # First line contains parameter names, second line their values
      paramsF = dict(izip(reader.next(), imap(float, reader.next())))
      for name in paramsF:
        if abs(params[name] - paramsF[name]) > epsilon:
          syncPrint("Mismatching parameters: '%s' - %f != %f" % \
                    (name, params[name], paramsF[name]))
          return None
      reader.next() # skip header with column names
      pointmatches = PointMatches.fromRows(reader).pointmatches
      syncPrint("Loaded %i pointmatches for %s, %s" % \
                (len(pointmatches), img1_filename, img2_filename))
      return pointmatches
  except:
    syncPrint("Could not load pointmatches for pair %s, %s" % \
              (img1_filename, img2_filename))
    syncPrint(str(sys.exc_info()))
    return None


def makeFeatures(img_filename, img_loader, getCalibration, csv_dir, params):
  """ Helper function to extract features from an image. """
  img = img_loader.load(img_filename)
  # Find a list of peaks by difference of Gaussian
  peaks = getDoGPeaks(img, getCalibration(img_filename),
                      params['sigmaSmaller'], params['sigmaLarger'],
                      params['minPeakValue'])
  # Create a KDTree-based search for nearby peaks
  search = makeRadiusSearch(peaks)
  # Create list of Constellation features
  features = extractFeatures(peaks, search,
                             params['radius'], params['min_angle'],
                             params['max_per_peak'])
  # Store features in a CSV file
  saveFeatures(img_filename, csv_dir, features, params)
  return features


# Partial implementation of a Future
class Getter(Future):
  def __init__(self, ob):
    self.ob = ob
  def get(self):
    return self.ob


def findPointMatches(img1_filename, img2_filename, img_loader, getCalibration, csv_dir, exe, params):
  """ Attempt to load them from a CSV file, otherwise compute them and save them. """
  # Attempt to load pointmatches from CSV file
  pointmatches = loadPointMatches(img1_filename, img2_filename, csv_dir, params)
  if pointmatches is not None:
    return pointmatches

  # Load features from CSV files
  # otherwise compute them and save them.
  img_filenames = [img1_filename, img2_filename]
  names = set(["minPeakValue", "sigmaSmaller", "sigmaLarger",
                "radius", "min_angle", "max_per_peak"])
  feature_params = {k: params[k] for k in names}
  csv_features = [loadFeatures(img_filename, csv_dir, feature_params)
                  for img_filename in img_filenames]
  # If features were loaded, just return them, otherwise compute them
  # (and save them to CSV files)
  futures = [Getter(fs) if fs
             else exe.submit(Task(makeFeatures, img_filename, img_loader,
                                  getCalibration, csv_dir, feature_params))
             for fs, img_filename in izip(csv_features, img_filenames)]
  features = [f.get() for f in futures]
  
  for img_filename, fs in izip(img_filenames, features):
    syncPrint("Found %i constellation features in image %s" % (len(fs), img_filename))

  # Compare all possible pairs of constellation features: the PointMatches
  pm = PointMatches.fromFeatures(features[0], features[1],
                                 params["angle_epsilon"], params["len_epsilon_sq"])

  syncPrint("Found %i point matches between:\n    %s\n    %s" % \
            (len(pm.pointmatches), img1_filename, img2_filename))

  # Store as CSV file
  names = set(["minPeakValue", "sigmaSmaller", "sigmaLarger", # DoG peak params
               "radius", "min_angle", "max_per_peak",         # Constellation params
               "angle_epsilon", "len_epsilon_sq"])            # pointmatches params
  pm_params = {k: params[k] for k in names}
  savePointMatches(img1_filename, img2_filename, pm.pointmatches, csv_dir, pm_params)
  #
  return pm.pointmatches


def ensureFeatures(img_filename, img_loader, getCalibration, csv_dir, params):
  names = set(["minPeakValue", "sigmaSmaller", "sigmaLarger",
               "radius", "min_angle", "max_per_peak"])
  feature_params = {k: params[k] for k in names}
  if not loadFeatures(img_filename, csv_dir, feature_params, validateOnly=True):
    # Create features from scratch, which overwrites any CSV files
    makeFeatures(img_filename, img_loader, getCalibration, csv_dir, feature_params)
    # TODO: Delete CSV files for pointmatches, if any


def fit(model, pointmatches, n_iterations, maxEpsilon,
        minInlierRatio, minNumInliers, maxTrust):
  """ Fit a model to the pointmatches, finding the subset of inlier pointmatches
      that agree with a joint transformation model. """
  inliers = ArrayList()
  modelFound = False
  try:
    modelFound = model.filterRansac(pointmatches, inliers, n_iterations,
                                    maxEpsilon, minInlierRatio, minNumInliers, maxTrust)
  except NotEnoughDataPointsException, e:
    syncPrint(str(e))
  return modelFound, inliers


def fitModel(img1_filename, img2_filename, img_loader, getCalibration, csv_dir, model, exe, params):
  """ Returns a transformation matrix. """
  pointmatches = findPointMatches(img1_filename, img2_filename, img_loader, getCalibration,
                                  csv_dir, exe, params)
  modelFound, inliers = fit(model, pointmatches, params["n_iterations"],
                            params["maxEpsilon"], params["minInlierRatio"],
                            params["minNumInliers"], params["maxTrust"])
  if modelFound:
    syncPrint("Found %i inliers for:\n    %s\n    %s" % \
              (len(inliers), img1_filename, img2_filename))
    # 2-dimensional array to read the model's transformation matrix
    a = array((zeros(4, 'd'), zeros(4, 'd'), zeros(4, 'd')), Class.forName("[D"))
    model.toArray(a)
    return a[0] + a[1] + a[2] # Concat: flatten to 1-dimensional array
  else:
    syncPrint("Model not found for:\n    %s\n    %s" % \
              (img1_filename, img2_filename))
    # Return identity
    return array([1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0], 'd')


# A wrapper for executing functions in concurrent threads
class Task(Callable):
  def __init__(self, fn, *args):
    self.fn = fn
    self.args = args
  def call(self):
    return self.fn(*self.args)


def computeForwardTransforms(img_filenames, img_loader, getCalibration, \
                             modelclass, csv_dir, exe, params):
  """ Compute transforms from image i to image i+1,
      returning an identity transform for the first image,
      and with each transform being from i to i+1 (forward transforms).
      Returns a list of affine 3D matrices, each a double[] with 12 values.
  """
  try:
    # Ensure features exist in CSV files, or create them
    futures = [exe.submit(Task(ensureFeatures, img_filename, img_loader,
                               csv_dir, params))
               for img_filename in img_filenames]
    # Wait until all complete
    for f in futures:
      f.get()

    # Create models: ensures that pointmatches exist in CSV files, or creates them
    futures = [exe.submit(Task(fitModel, img1_filename, img2_filename, getCalibration,
                               csv_dir, modelclass(), exe, params))
               for img1_filename, img2_filename
               in izip(img_filenames, img_filenames[1:])]
    # Wait until all complete
    # First image gets identity
    matrices = [array([1, 0, 0, 0,
                       0, 1, 0, 0,
                       0, 0, 1, 0], 'd')] + \
               [f.get() for f in futures]

    return matrices

  finally:
    exe.shutdown()


def asBackwardAffineTransforms(matrices):
    """ Transforms are img1 -> img2, and we want the opposite: so invert each.
        Also, each image was registered to the previous,
        so must concatenate all previous transforms. """
    aff_previous = AffineTransform3D()
    aff_previous.identity() # set to identity
    affines = [aff_previous] # first image at index 0

    for matrix in matrices[1:]: # skip zero
      aff = AffineTransform3D()
      aff.set(*matrix) # expand 12 double numbers into 12 arguments
      aff = aff.inverse() # transform defines img1 -> img2, we want the opposite
      aff.preConcatenate(aff_previous) # Make relative to prior image
      affines.append(aff) # Store
      aff_previous = aff # next iteration

    return affines


def viewTransformed(img, calibration, affine):
  """ View img transformed to isotropy (via the calibration)
      and transformed by the affine. """
  scale3d = AffineTransform3D()
  scale3d.set(calibration[0], 0, 0, 0,
              0, calibration[1], 0, 0,
              0, 0, calibration[2], 0)
  transform = affine.copy()
  transform.concatenate(scale3d)
  imgE = Views.extendZero(img)
  imgI = Views.interpolate(imgE, NLinearInterpolatorFactory())
  imgT = RealViews.transform(imgI, transform)
  # dimensions
  minC = [0, 0, 0]
  maxC = [int(img.dimension(d) * cal) -1 for d, cal in enumerate(calibration)]
  imgB = Views.interval(imgT, minC, maxC)
  return imgB


def registeredView(img_filenames, img_loader, getCalibration, \
                   modelclass, csv_dir, exe, params):
  """ Given a sequence of image filenames, return a registered view.
    img_filenames: a list of file names
    csv_dir: directory for CSV files
    exe: an ExecutorService for concurrent execution of tasks
    params: dictionary of parameters
    returns a stack view of all registered images, e.g. 3D volumes as a 4D. """
  matrices = computeForwardTransforms(img_filenames, img_loader, getCalibration,
                                      modelclass, csv_dir, exe, params)
  affines = asBackwardAffineTransforms(matrices)
  #
  for i, affine in enumerate(affines):
    matrix = affine.getRowPackedCopy()
    print i, "matrix: [", matrix[0:4]
    print "          ", matrix[4:8]
    print "          ", matrix[8:12], "]"
  #
  # NOTE: would be better to use a lazy-loading approach
  images = [img_loader.load(img_filename) for img_filename in img_filenames]
  registered = Views.stack([viewTransformed(img, getCalibration(img_filename), affine)
                            for img, img_filename, affine
                            in izip(images, img_filenames, affines)])
  return registered


  Testing custom features

A good test is one where you already know the solution. Therefore, here I open one of the first 10 time points of a 4D series (the first one will do), and then transform it twice, to make a series of 3 volumes:

  1. img1: a view (a virtual cut out) of the image as is, where all dimensions are the same (378 pixels long in x, y and z).
  2. img2: a rotated view of img1, 90 degrees on the Y axis.
  3. img3: a rotated view of img1, 90 degrees on the X axis.

Of course, despite the rotation, the features (which are rotation-invariant) would be identical in each, and the test would be too easy. To make the test harder, I use the function dropSlices to remove every second slice from each volume (notice in the GIF animation below, showing a hyperstack, how Z is half of the width and height). Given the rotations, each image volume has lost different data: in the Z, X and Y axes, respectively.

All 3 images still have the same dimensions (378 x 378 x 189), and therefore we can stack them up and show them (the "unregistered" 4D hyperstack).

Then we define a pretend img_loader, a getCalibration function, and the parameters as a dictionary (params). Note the use of the built-in globals function in the definition of the ImgLoader class: it returns a dictionary with variable names as keys and variable contents as values, covering everything: objects, functions, and classes.
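As a quick standalone illustration of globals (my own toy example, not part of the script below):

greeting = "hello"

# globals() maps top-level names (as strings) to their current values,
# so any object defined at the top level can be fetched by its name:
print globals()["greeting"]  # prints: hello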

Then we construct our exe thread pool with Executors.newFixedThreadPool, with as many concurrent threads as our computer has CPU cores.

By invoking the registeredView function (see above), we compute the forward transforms between each consecutive pair of images and store them into matrices, which we then turn into concatenated, inverted (backwards) transforms as affines. The function will print the affines and return the registered images as a 4D volume, which we visualize as an ImageJ hyperstack (we could also trivially use the BigDataViewer, see the scripts above). Because viewTransformed, invoked within registeredView, scales up image volumes to isotropy using the calibration, the resulting, registered images now have the dimensions of the original image (378 x 378 x 378).
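That BigDataViewer alternative could look like the following minimal sketch (an assumption on my part, relying on the bdv.util.BdvFunctions class bundled with Fiji, and on registered being the 4D volume returned by registeredView):

from bdv.util import BdvFunctions

# Display the registered 4D volume in the BigDataViewer instead of a hyperstack
BdvFunctions.show(registered, "registered")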

This image is not stuck: it is just registered very well. Notice the scrollbar at the bottom.

The printed transforms tell a story: even though, for testing, we defined two different 90-degree rotations, we didn't recover matrices that exactly undo those rotations; we merely got very close estimates. Notice that, despite the verbose decimals, most numbers are either very small (essentially zero) or extremely close to 1 or -1, so the printed matrices (not shown) can be rounded to (I did this by hand):

0 matrix: [ [1, 0, 0, 0]
            [0, 1, 0, 0]
            [0, 0, 1, 0] ]

1 matrix: [ [0, 0, 1, 376]
            [0, 1, 0, 0]
            [1, 0, 0, -1] ]

2 matrix: [ [1, 0, 0, 0]
            [0, 0, -1, 376]
            [0, 1, 0, -1] ]

... which are exactly the inverse of the transformations that we created in the first place, with the 376 being the necessary translation in X (second matrix, for an image rotated in the Y axis) and Y (third matrix, for an image rotated in the X axis). Why it is 376 and not 378 may have to do with the loss of every second slice in the Z axis of each rotated input image.
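The same rounding can be done programmatically. Here is a small helper (my own addition, not part of the original script) that could replace the print loop inside registeredView, printing each AffineTransform3D with its row-packed matrix rounded:

def printRounded(index, affine, decimals=0):
  # Round every value of the row-packed 3x4 affine matrix for readability
  m = [round(v, decimals) for v in affine.getRowPackedCopy()]
  print index, "matrix: [", m[0:4]
  print "          ", m[4:8]
  print "          ", m[8:12], "]"

for i, affine in enumerate(affines):
  printRounded(i, affine)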

 

Final remarks:

  • The whole script for custom feature extraction and 4D registration is available on GitHub.
  • If you have a 4D series that you'd like to register, head first to the Multiview Reconstruction by Stephan Preibisch & co. It has everything you'd want for registering and visualizing 4D datasets, all from a convenient point-and-click user interface.
  • There's a lot more to successful registration of 4D series than what I've shown here. On the one hand, you'd want increased robustness to incorrect estimation of transformations, for example by filtering out transformations that translate the image more than expected (a minimal sketch of this follows after this list), or by averaging the transform, or weighting it, with the transforms of a few previous and subsequent time points. While here I provided some resilience to bad data (e.g. a time point for which a model is not found gets an identity transform, which, given the concatenation, equates to reusing the transform of the prior time point), a lot more could be done. Some of this "a lot more" is available in the Multiview Reconstruction. On the other hand, performance: some critical functions, such as any that iterate two lists all to all, like PointMatches.fromFeatures, should be implemented in Java; or, even better, the algorithm should change from O(n^2) to O(log(n)), as hinted above, by using knowledge about the expected position of potentially matching features in the adjacent time point volume.
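Here is that minimal sketch of filtering by translation magnitude (my own illustration with a hypothetical helper name; it reuses the jarray-based, row-packed matrices returned by fitModel):

from jarray import array

def filterByTranslation(matrix, max_translation):
  # matrix is a row-packed 3x4 affine (12 doubles): the translation sits at indices 3, 7 and 11
  tx, ty, tz = matrix[3], matrix[7], matrix[11]
  if tx*tx + ty*ty + tz*tz > max_translation * max_translation:
    # Reject: fall back to the identity, which amounts to reusing the prior time point's transform
    return array([1, 0, 0, 0,
                  0, 1, 0, 0,
                  0, 0, 1, 0], 'd')
  return matrix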

# -- CONTINUES FROM ABOVE --
from ij import IJ
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from java.lang import Runtime

# Prepare test data

# Grab the current image
img = IL.wrap(IJ.getImage())

# Cut out a cube
img1 = Views.zeroMin(Views.interval(img, [39, 49, 0],
                                         [39 + 378 -1, 49 + 378 -1, 378 -1]))

# Rotate the cube on the Y axis to the left
img2 = Views.zeroMin(Views.rotate(img1, 2, 0)) # zeroMin is CRITICAL

# Rotate the cube on the X axis to the top
img3 = Views.zeroMin(Views.rotate(img1, 2, 1))

def dropSlices(img, nth):
  """ Keep only every nth slice of the Z axis, dropping the rest,
      so the Z calibration must be multiplied by nth.
      Slices are counted 1-based: those whose 1-based index is a multiple of nth are kept. """
  return Views.stack([Views.hyperSlice(img, 2, i)
                      for i in xrange(img.dimension(2)) if 0 == (i+1) % nth])

# Reduce Z resolution: make each anisotropic but in a different direction
nth = 2
img1 = dropSlices(img1, nth)
img2 = dropSlices(img2, nth)
img3 = dropSlices(img3, nth)

# The sequence of images to transform, each relative to the previous
images = [img1, img2, img3]

IL.wrap(Views.stack(images), "unregistered").show()

# The Z calibration grows by nth, since only every nth slice was kept
calibrations = [[1.0, 1.0, 1.0 * nth],
                [1.0, 1.0, 1.0 * nth],
                [1.0, 1.0, 1.0 * nth]]

# Pretend file names:
img_filenames = ["img1", "img2", "img3"]

# Pretend calibration getter
def getCalibration(img_filename):
  return calibrations[img_filenames.index(img_filename)]

# Pretend loader:
class ImgLoader():
  def load(self, img_filename):
    return globals()[img_filename]

img_loader = ImgLoader()

# The ExecutorService for concurrent processing
n_threads = Runtime.getRuntime().availableProcessors()
exe = Executors.newFixedThreadPool(n_threads)

# Storage directory for CSV files with features and pointmatches
csv_dir = "/tmp/"


# Parameters for the difference of Gaussian (DoG) to detect soma positions
somaDiameter = 8 * calibrations[0][0]
paramsDoG = {
  "minPeakValue": 30, # Determined by hand
  "sigmaSmaller": somaDiameter / 4.0, # in calibrated units: 1/4 soma
  "sigmaLarger": somaDiameter / 2.0, # in calibrated units: 1/2 soma
}

paramsFeatures = {
  # Parameters for features
  "radius": somaDiameter * 5, # for searching nearby peaks
  "min_angle": 1.57, # in radians, between vectors to p1 and p2
  "max_per_peak": 3, # maximum number of constellations to create per peak

  # Parameters for comparing constellations to find point matches
  "angle_epsilon": 0.02, # in radians. 0.05 is 2.8 degrees, 0.02 is 1.1 degrees
  "len_epsilon_sq": pow(somaDiameter, 2), # in calibrated units, squared
}

# RANSAC parameters: reduce list of pointmatches to a spatially coherent subset
paramsModel = {
  "maxEpsilon": somaDiameter, # max allowed alignment error in calibrated units
  "minInlierRatio": 0.0000001, # ratio inliers/candidates
  "minNumInliers": 5, # minimum number of good matches to accept the result
  "n_iterations": 2000, # for estimating the model
  "maxTrust": 4, # for rejecting candidates
}

# Joint dictionary of parameters
params = {}
params.update(paramsDoG)
params.update(paramsFeatures)
params.update(paramsModel)

# The model type to fit. Could also be any implementing mpicbg.models.Affine3D:
# TranslationModel3D, SimilarityModel3D, AffineModel3D, InterpolatedAffineModel3D
modelclass = RigidModel3D

registered = registeredView(img_filenames, img_loader, getCalibration,
                            modelclass, csv_dir, exe, params)

# Show as an ImageJ hyperstack
IL.wrap(registered, "registered").show()

To be continued...