scatter correction for PET

Dear CASToR users and developers,

I’m new to CASToR and trying to use the MLEM algorithm to reconstruct images from simulations of a new PET scanner we’re designing. The scanner uses a continuous xenon volume as scintillator, but for now we’ve implemented a geometry that divides the volume into fake crystals. I’ve managed to reconstruct with attenuation correction, and the results are more or less what I expected, but I’m having a bit of trouble understanding what’s expected for the scatter intensity value in the input files.

The documentation implies that the value should be an expected number of scattered counts per LOR per second (“Un-normalized scatter intensity rate of the corresponding event (count/s)”), but when I look at the values in the benchmark data (castor_benchmark_v3_pet_list-mode/benchmark_pet_list-mode_tof.cdf) I find a maximum scatter value of about 10^-6, which would mean zero scatter for any realistic exposure. Is there some other precalculation needed that I’m missing?

On another front, what exactly is the calibration factor? I suspect it’s related to the acceptance and detection efficiency, but the numbers used in the examples are very large, so I suspect I’m missing a factor there too.

Thanks in advance for your help, and sorry that these are probably beginner’s questions; my background is more detector-based and I’m a bit new to image reconstruction in general,

Andrew Laing

Dear Andrew,

When I look into the PET TOF benchmark datafile, I see scatter rates going up to the order of 1e-4 count/s for non-zero data bins.

The duration of the acquisition is 360 seconds, which means on the order of 0.3 expected scatter counts in such bins over the whole acquisition.

There are 23 TOF bins, and summing them gives values that are completely normal for this benchmark dataset, which comes from a standard FDG scan.
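To make the arithmetic explicit, here is a small sketch of how a per-LOR scatter rate (count/s) turns into expected scatter counts over the acquisition. The rate value and the number of TOF bins are illustrative, taken from the discussion above; the exact values in any given datafile will differ.

```python
# Illustrative arithmetic only: the rate level (~1e-4 count/s) and the
# 23 TOF bins come from the benchmark discussion; real datafile values differ.
duration_s = 360.0                 # benchmark acquisition duration

# Hypothetical per-LOR, per-TOF-bin scatter rates (count/s)
tof_bin_rates = [1.0e-4] * 23      # 23 TOF bins, each at an e-4-level rate

# Expected scatter counts in one TOF bin over the whole acquisition
counts_per_bin = tof_bin_rates[0] * duration_s
print(counts_per_bin)              # a few hundredths of a count per bin

# Summed over all TOF bins of the LOR: a sub-count but non-negligible total
counts_per_lor = sum(r * duration_s for r in tof_bin_rates)
print(counts_per_lor)
```

So even though each individual rate looks tiny, the per-LOR totals over a full acquisition are of a perfectly sensible magnitude for an FDG scan.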

About the calibration factor: its purpose is what you said, and its value is typical of today’s clinical scanners. The solid angle of a 25 cm long cylinder of 800 mm diameter is quite low with respect to 4π. The detection efficiency of a typical crystal length is not perfect either. Then everything is squared, because you work with coincidences. Finally, it also compensates for the somewhat arbitrary scale of the normalization values. I may be missing other factors that affect this calibration value.
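As a back-of-the-envelope sketch of why the overall sensitivity (and hence the calibration factor) ends up far from 1: the snippet below estimates the solid-angle fraction seen by a central point source in an 800 mm diameter, 25 cm long cylindrical scanner, and then squares the single-photon terms as described above. The single-photon efficiency value is a hypothetical placeholder, not a CASToR or scanner-specific number.

```python
import math

# Back-of-the-envelope sensitivity estimate for a point source at the centre
# of an 800 mm diameter, 250 mm long cylindrical scanner. Illustrative only.
R = 0.400   # scanner radius (m)
L = 0.250   # axial length (m)

# Fraction of 4*pi covered by the detector barrel for a central point source:
# rays hitting the barrel have polar angle theta in [theta1, pi - theta1],
# with cos(theta1) = (L/2) / sqrt((L/2)**2 + R**2); integrating sin(theta)
# over that range and dividing by 4*pi gives exactly cos(theta1).
geom_fraction = (L / 2) / math.sqrt((L / 2) ** 2 + R ** 2)
print(geom_fraction)            # roughly 0.3 of 4*pi

eps = 0.8   # hypothetical single-photon detection efficiency at 511 keV

# Following the reasoning above: square the single-photon terms because a
# coincidence requires both 511 keV photons to be detected.
coinc_sensitivity = (geom_fraction * eps) ** 2
print(coinc_sensitivity)        # only a few percent
```

With these illustrative numbers the coincidence sensitivity is only a few percent, which is why a large calibration factor is needed to bring reconstructed values back to an absolute scale.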