Dear CASToR users and developers,
I’m new to CASToR and trying to use the MLEM algorithm to reconstruct images from simulations of a new PET scanner we’re designing. The scanner uses a continuous xenon volume as the scintillator, but for now we’ve implemented a geometry that divides the volume into fake crystals. I’ve managed to reconstruct with attenuation correction, and the results are more or less what I expected, but I’m having trouble understanding what’s expected for the scatter intensity value in the input files.
The documentation implies that the value should be a per-LOR expected number of scatters per second (“Un-normalized scatter intensity rate of the corresponding event (count/s)”), but when I look at the values in the benchmark data (castor_benchmark_v3_pet_list-mode/benchmark_pet_list-mode_tof.cdf) I find a maximum scatter value of about 10^-6, which would imply essentially zero scatter for any realistic exposure. Is there some precalculation step that I’m missing?
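For reference, this is roughly how I scanned the file for the maximum scatter value. It is only a sketch: I’ve assumed one possible per-event binary layout (time, attenuation, scatter rate, random rate, normalization, TOF, two crystal IDs), but the actual field order and presence depend on the flags declared in the .cdh header, so please correct me if my parsing is wrong:

```python
import struct

# ASSUMED event layout for a PET list-mode file with attenuation, scatter,
# random, normalization and TOF fields all enabled. The real layout is
# driven by the .cdh header flags; this is just what I guessed.
EVENT_FMT = "<IfffffII"  # time(ms), acf, scatter(count/s), random(count/s), norm, tof, id1, id2
EVENT_SIZE = struct.calcsize(EVENT_FMT)
SCATTER_INDEX = 2  # position of the scatter-rate field in the tuple above

def max_scatter_rate(raw_bytes):
    """Scan packed events and return the largest scatter-rate field seen."""
    n_events = len(raw_bytes) // EVENT_SIZE
    best = 0.0
    for i in range(n_events):
        fields = struct.unpack_from(EVENT_FMT, raw_bytes, i * EVENT_SIZE)
        best = max(best, fields[SCATTER_INDEX])
    return best

# Synthetic two-event buffer standing in for the real .cdf contents,
# with scatter rates of the magnitude I observe in the benchmark data.
buf = struct.pack(EVENT_FMT, 0, 1.0, 2.5e-7, 1e-6, 1.0, 120.0, 10, 342)
buf += struct.pack(EVENT_FMT, 4, 1.0, 9.8e-7, 2e-6, 1.0, -80.0, 11, 900)
print(max_scatter_rate(buf))  # -> 9.8e-07
```

With rates at the 1e-7 to 1e-6 count/s level, even a long acquisition would predict far less than one scattered count per LOR, which is what prompted my question.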
On another front, what exactly is the Calibration Factor? I suspect it’s related to the detection acceptance and efficiency, but the numbers used in the examples are very large, so I’m probably missing a factor there too.
Thanks in advance for your help, and apologies if these are beginner’s questions; my background is on the detector side and I’m still fairly new to image reconstruction in general,