We are currently using CASToR to reconstruct images from GATE simulations; the castor-GATEMacToGeom and castor-GATERootToCastor steps run without issue. However, when running castor-recon, generation of the sensitivity image is extremely slow: a simplified version of our problem (~28,672 elements) takes 16 minutes for this step alone, and we would like to run full simulations with nearly 10 times as many elements.
Our question is: is the sensitivity image generation time driven purely by the number of elements? Is there any way to decrease this setup time? The available benchmarks seem to complete this step almost instantly.
Any suggestions for us?
Sveta and Peter
By default, for list-mode data, the sensitivity image is generated by projecting all available lines of response. This process is indeed dependent on the number of detectors. The PET list-mode benchmark uses a system with ~20k detectors and a normalization datafile to generate the sensitivity image. In this case, the processing time depends on the number of data channels in the normalization datafile, which also depends on the number of detectors, so it still takes some time to process.
Make sure that all parallel computing options were enabled when you compiled the code: the CASTOR_OMP and CASTOR_MPI (if you use a cluster of machines compatible with MPI) variables must be set to 1.
When using castor-recon, set -th 0 to enable multithreading (0 means all available threads will be used).
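As a rough sketch, assuming an environment-variable-driven build (adapt this to your CASToR version and build system), enabling OpenMP and requesting all threads could look like the following. The datafile name and reconstruction options (-df, -opti, -it, -dim, -dout) are illustrative placeholders to replace with your own:

```shell
# Hypothetical build setup; adjust to your CASToR version/build system
export CASTOR_OMP=1      # OpenMP multithreading
export CASTOR_MPI=0      # set to 1 only on an MPI-capable cluster
make clean && make

# -th 0 lets castor-recon use every available core
castor-recon -df my_data_df.cdh -opti MLEM -it 10:16 \
             -dim 128,128,64 -th 0 -dout recon_out
```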
If you know the minimum transaxial angle between two detectors required to form a line of response, you can reduce the computing time by setting the following variable in the geometry file of your scanner (*.geom file in /config/scanner):
min angle difference : x
(x being the angle in degrees).
This should moderately reduce the computing time: by default, the code blindly projects all lines of response without first checking whether they cross the FOV. With this option, lines of response formed by detector pairs whose transaxial angle difference is below this value will be ignored.
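If your scanner is (approximately) a cylindrical ring, a reasonable starting value can be derived geometrically: a line of response between two detectors separated by a transaxial angle d passes at distance R*cos(d/2) from the center, so it misses a transaxial FOV of radius r whenever d < 2*acos(r/R). A minimal sketch with hypothetical dimensions (ring radius 400 mm, FOV radius 250 mm):

```shell
# Hypothetical geometry: ring radius R = 400 mm, transaxial FOV radius r = 250 mm.
# acos(x) is written as atan2(sqrt(1 - x*x), x), since awk has no built-in acos().
minang=$(awk 'BEGIN { R = 400; r = 250; x = r / R; pi = atan2(0, -1);
                      printf "%.1f", 2 * atan2(sqrt(1 - x*x), x) * 180 / pi }')
echo "min angle difference : $minang"
```

With these numbers the threshold comes out near 103 degrees: detector pairs closer together than that cannot draw a line through the FOV. Check the result against your actual geometry before putting the value in the .geom file.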
- It should be possible (though not necessarily trivial) to analytically compute a sensitivity image for most systems. You could bypass the sensitivity image computation by providing the castor-recon executable with such an image in Interfile format, using the option -sens path/to/analytic/sens/image.hdr
Note that this image should take into account attenuation in the object as well as the calibration factor.
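For illustration, a castor-recon call using a precomputed sensitivity image might look like the following; the file names are hypothetical and the other options are placeholders to adapt to your setup:

```shell
# Skip on-the-fly sensitivity generation by pointing -sens at a precomputed
# Interfile image (hypothetical paths and options; adjust to your data)
castor-recon -df my_data_df.cdh -opti MLEM -it 10:16 \
             -dim 128,128,64 -th 0 \
             -sens /path/to/analytic_sens.hdr -dout recon_with_sens
```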
Hope this helps!