I am trying to understand the following concepts in CASToR while reading the v3 documentation.
1- The calibration factor is not defined in the document. I have found some definitions elsewhere, but I am still at a loss. Can you please let me know what the calibration factor is and what it represents? In the document the default value is 1; however, in benchmark_pet_list-mode_tof.cdf it is set to a very large number, i.e. Calibration factor: 1.45762e+08. How do we assign the calibration factor?
2- The normalization factor is not defined either. I am not sure what it actually means. Is normalization related to sensitivity? Does one contain the other? Is there any relation between the normalization binary file and the normalization factor?
3- In Tables 2 and 3, on pages 32 and 33 respectively, it says: (n) Normalization factor of the corresponding "histogram bin"/"event". A list-mode event is different from a histogram-mode bin. Is there any clarification on this and on the use of the normalization factor?
4- If we can normalize list-mode data, is there any other reason than being more flexible to define a PET normalization mode?
1. The calibration factor is a global factor applied to every voxel to convert the counts into activity values, or any other value with a more physical meaning. You can have a look at the following topic for more about the image units, the calibration factor and quantification:
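To give a concrete (if simplified) picture, not CASToR's exact quantification chain (which also accounts for things like frame duration and decay correction), it is just one global multiplicative scaling applied to every reconstructed voxel value; the function and array names below are only illustrative:

```python
import numpy as np

# Minimal sketch (assumed convention): the reconstructed image is in arbitrary
# "counts-like" units, and one global calibration factor rescales every voxel
# into an activity-like unit (e.g. Bq/ml).
def apply_calibration(reconstructed_image: np.ndarray, calibration_factor: float) -> np.ndarray:
    """Scale every voxel by the same global calibration factor."""
    return reconstructed_image * calibration_factor

recon = np.random.rand(64, 64, 32)                    # toy reconstructed volume, arbitrary units
calibrated = apply_calibration(recon, 1.45762e+08)    # value from the benchmark .cdf header
```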
2&3. Normalization should integrate any kind of multiplicative factors for the data channels, e.g. detector efficiency, dead-time, some geometric factors depending on the system, etc.
In list-mode they are used during the generation of the sensitivity image through the back-projection of all possible LORs.
For list-mode, an event can be a single LOR or a group of LORs (in case of compression). Providing the normalization factor related to each recorded event is not enough, as we need the factors of all possible LORs in order to properly generate the sensitivity image; this is why there is a specific normalization datafile, which can also include ACFs. If you don't provide this file, the sensitivity image will be generated from the system geometry assuming uniform detector efficiency, so only some geometric distortions will be corrected (which could be enough for some types of simulated data).
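As a rough illustration (a minimal sketch, not the actual CASToR implementation), the sensitivity image can be thought of as the back-projection of every possible LOR, each weighted by its normalization factor (and attenuation factor if ACFs are included); `all_lors`, `backproject_lor` and the attribute names are hypothetical placeholders:

```python
import numpy as np

def compute_sensitivity(all_lors, image_shape, backproject_lor):
    """Sensitivity image built from ALL possible LORs listed in the
    normalization datafile, not only those where events were recorded."""
    sensitivity = np.zeros(image_shape)
    for lor in all_lors:
        # multiplicative weight of this data channel (efficiency, dead-time, ACF, ...)
        weight = lor.normalization_factor * lor.attenuation_factor
        # add the weighted back-projection of this LOR (its row of the system matrix)
        sensitivity += weight * backproject_lor(lor, image_shape)
    return sensitivity
```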
I don't really understand what you mean by "Is there any other reason than being more flexible to define a PET normalization mode?"
I am trying to understand and clarify the concept. Am I right in thinking that for each tiny voxel I have a calibration factor (a number/coefficient)? In other words, does the calibration factor define a matrix for a given FOV rather than a single number?
However, I cannot understand why, in benchmark_pet_list-mode_tof.cdf, it is set to a very large number, i.e. CF = 1.45762e+08. The default value for the calibration is 1, and there is a big jump from 1 to CF = 1.45762e+08.
Here I mean: if the PET list-mode has a normalization option, then why is a PET normalization mode defined (since we already have a list-mode that can resemble the normalization mode)?
Hi,
The calibration factor mostly compensates for the (usually low) sensitivity of the system. Most of the data is lost due to the solid angle of the system (among other factors), so you need to scale by an appropriate factor to convert the number you get after reconstruction into an activity or equivalent value. The default value is arbitrarily set to 1, as the value can change a lot depending on the system, or even remain 1 with some simulated data.
Regarding normalization, providing the factors for all data channels is mandatory in order to generate the sensitivity image; this is the role of the normalization datafile.
They are also present in each event of the list-mode datafile because the additive corrections (scatter, random) have to be taken into account. It looks like redundant information, as they could have been recovered from the normalization file, but it is also an optimization choice, since it is more efficient for computing purposes to provide them directly in each event.
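To sketch why the factors are needed right next to the additive terms (my own approximation of the conventions, not necessarily CASToR's exact internal formulation), the expected value of one event/LOR $i$ can be written as

$$\bar{y}_i \;\approx\; n_i \, a_i \sum_j p_{ij}\,\lambda_j \;+\; s_i \;+\; r_i,$$

where $n_i$ is the normalization factor, $a_i$ the attenuation factor, $p_{ij}$ the geometric system matrix, $\lambda_j$ the image, and $s_i$, $r_i$ the scatter and random estimates: the multiplicative factors have to be combined with the additive corrections event by event during the forward/backward projections.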
@tmerlin I want to continue this thread, as one point is still slightly unclear to me:
why do you always say that normalization factors are used in list-mode for the sensitivity image computation?
As far as I know, the sensitivity image is needed in both list-mode and histogram mode, right?
P.S. Would it make sense to have a more theoretical part in the documentation, showing how the basic algorithms are actually implemented (at the pseudocode level) and at which stage the normalization and calibration factors are applied? It would clarify their definitions and meanings a lot.
For a histogram datafile you have, by definition, normalization factors for all LORs; they are then used during the iterations for the computation of the sensitivity image (which is done at every subset/iteration because it can depend on the image itself for some optimizers) and for the computation of the forward and backward projections.
For a list-mode datafile, you have to know the set of all possible LORs for the computation of the sensitivity image. This set is defined by the normalization datafile, which should contain the normalization factor for every possible LOR. Then, in the list-mode datafile, you also need the normalization factor for each event.
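Since you asked for a pseudocode-level view, here is a very simplified list-mode MLEM sketch (my own approximation with hypothetical `system`/`events`/`all_lors` objects, not the actual CASToR code) showing where the normalization factors, the sensitivity image and the calibration factor come in:

```python
import numpy as np

def listmode_mlem(events, all_lors, system, n_iterations, calibration_factor=1.0):
    """Simplified list-mode MLEM sketch (assumed conventions, not CASToR's code)."""
    # Sensitivity image: back-projection of EVERY possible LOR (taken from the
    # normalization datafile), weighted by its multiplicative factors.
    sensitivity = np.zeros(system.image_shape)
    for lor in all_lors:
        sensitivity += lor.norm_factor * lor.atn_factor * system.backproject(lor)

    image = np.ones(system.image_shape)
    for _ in range(n_iterations):
        update = np.zeros(system.image_shape)
        for ev in events:
            # Forward model of this event: multiplicative factors times the
            # forward projection, plus the additive (scatter + random) estimate.
            expected = (ev.norm_factor * ev.atn_factor * system.forward(ev, image)
                        + ev.scatter_rate + ev.random_rate)
            update += ev.norm_factor * ev.atn_factor * system.backproject(ev) / expected
        image *= update / sensitivity          # standard MLEM multiplicative update

    return image * calibration_factor          # global scaling to activity-like units
```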
Having re-read the documentation and your answer, I think I am starting to understand: in a histogram file you list all bins, and for each bin there is a normalization factor (right?). However, a list-mode datafile contains only events, one by one, and only the normalization factors for those events are given. And here, yes, you are right: for the optimization you need the sensitivity image, which requires "all possible LORs", not only those where events were detected. Tell me if I am wrong.
P.S. The only thing that I still don't understand is what a normalization factor means in a particular example. Could you tell me where I can look to understand, with a simple example, how it is applied? In the documentation it is kept abstract, so I don't understand when/where/why these factors multiply/add/normalize any other quantities.