Reconstructing with normalization

Dear CASToR developers and users,

I have been facing an issue when trying to reconstruct PET data (an 18F measurement with the NEMA IQ Body Phantom) acquired on a G.E. Discovery MI with CASToR, in particular when reconstructing with normalization correction factors embedded in the file. The correction terms (attenuation, scatter, random, normalization, dead-time, and pile-up) that I use when reconstructing with CASToR have been extracted with the G.E. PET Toolbox. I convert the G.E. PET list-file to a CASToR list-file, embedding all of the aforementioned correction terms in that file. I also construct the CASToR normalization file by looping over all (unique) crystal pairs and embedding the attenuation and normalization correction factors, as suggested in the general documentation.
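For reference, here is a simplified Python sketch of how I write the normalization file. The field order follows my reading of table 4 of the general documentation, and the lookup helpers, the crystal-pair list, and the file name are placeholders for my own conversion code, so this is only meant to show the structure (please correct me if I am already going wrong here):

import struct

def acf_of(id1, id2):
    # placeholder for my lookup into the PET Toolbox attenuation sinogram
    return 1.0

def norm_of(id1, id2):
    # placeholder for my lookup into the PET Toolbox normalization sinogram
    return 1.0

unique_crystal_pairs = [(0, 1)]  # placeholder; in practice all unique LORs of the DMI

with open("dmi_norm.cdf", "wb") as f:
    for id1, id2 in unique_crystal_pairs:
        f.write(struct.pack("<f", acf_of(id1, id2)))   # attenuation correction factor (float32)
        f.write(struct.pack("<f", norm_of(id1, id2)))  # normalization correction factor (float32)
        f.write(struct.pack("<II", id1, id2))          # CASToR crystal indices (uint32)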

When performing the reconstruction (in my case OSEM with a 4 mm FWHM axial and transaxial post-filter) with normalization factors embedded in both the list-file and the normalization file, every other slice has a reduced intensity. This does not appear if I do not use the CASToR normalization file (and thus do not embed attenuation and/or normalization correction factors in the list-file). This can be seen in the attached image: (1) shows the reconstructed images (summed) with CASToR, including attenuation and normalization correction embedded in the file; (2) shows the reconstructed images (summed) when using the pifa obtained from the PET Toolbox to account for attenuation correction, without including attenuation and normalization correction factors in the CASToR file; and (3) shows the reconstruction performed with the PET Toolbox.

Now, if I look at the sinograms extracted from the PET Toolbox, I can see that for the first axial indices (0-70) there is a notable difference in the number of counts, something that I expect to be corrected for by the normalization correction. So I looked at the prompts sinogram after subtracting the randoms and scattered coincidences and then multiplying each sinogram bin by the attenuation and normalization correction factors. From this I get a smooth sinogram (summed over all views), which can be seen as (4) in the attached image. If I do the same thing again, but this time do not multiply by the normalization correction factors, the differing intensity becomes apparent in the first axial slices of the sinogram (ring difference 0 for the DMI with four axial units), seen on the left in (5) in the attached image.
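In code, the check described above boils down to the following (a small sketch; the argument names are my own, and all inputs are sinogram arrays of identical shape):

def corrected_trues(prompts, randoms, scatter, acf, ncf=None):
    # Subtract the additive terms and apply the multiplicative corrections.
    # With ncf given, this is the smooth sinogram shown in (4); with
    # ncf=None the normalization step is skipped, which gives (5).
    trues = (prompts - randoms - scatter) * acf
    return trues * ncf if ncf is not None else trues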

This made me think that the error lies in the CASToR normalization file. So, to make sure that I constructed the CASToR normalization file correctly, I went through the normalization file of the G.E. SIGNA benchmark event by event and could not find any discrepancy between its organization and the way I construct mine.

I have tried changing the projector but the issue remains (although less apparent with the distance-driven projector).

Is there something obvious, or less obvious, that I might have missed? Is there something that you could point out that might be wrong?

I hope to hear from you!

Best regards,
Philip

Hi Philip,

In (2), you said that you provide an attenuation image but no attenuation factor in the CASToR datafile.

Does it mean that you input the pifa using the -atn option, that you do not input a normalization file, and that there is no attenuation factor in the list-mode file?

If yes, then the corrections for scatters and randoms are severely underestimated.

Could you compare (1) with the following case: the attenuation map provided with -atn, no normalization file at all, and attenuation correction factors included in the list-mode file?

In the SIGNA, there is span 3 (indirect planes between rings are summed) for the first segment (copolar angle 0).

Is there any span in the acquired sinograms ?

I think there is, judging from image (4).

So it may be that you do not take this axial compression into account.

Simon

Hi,

Thank you for your response, Simon, and I am sorry for such a delayed reply.

I was not aware that you needed to provide the attenuation map and include the ACFs in the data file when using the -atn option, which would explain the difference in my images when using -atn as opposed to -norm.

In the attached image (atn-norm.png) I have reconstructed embedding the ACFs in the CASToR data file and passing the attenuation map using -atn (upper row). I have also reconstructed embedding the ACFs in the data file and passing a normalization file with ACFs embedded using -norm (bottom row). The images are labeled accordingly: no correction (No corr), with only attenuation correction (a corr), and with scatter, random, and attenuation corrections embedded in the data file (a+s+r corr). The streaking pattern becomes obvious (all images have been summed) when including scatter and/or random correction, regardless of which attenuation correction option was chosen (-atn or -norm). This seems strange; should these patterns not appear already when reconstructing without any correction factors? Bear in mind that at this point I had still not accounted for the axial compression.

So, regarding the axial compression, you are correct: for the DMI, as for the SIGNA, there is an axial span of 3 for the first segment, which I had not taken into consideration before. However, when trying to handle the compression in the CASToR data file, the results still look strange(r). To account for the span of indirect slices of segment 0 (as the axial compression is the same for the SIGNA and the DMI), I construct the data file according to table 3 (and the normalization file according to table 4) of the general documentation by providing the "list" of crystal pairs for each event where nb_lines is greater than 1, and filling with "garbage" bytes otherwise (a sketch of how I write these events is included further down).

The reconstructed images can be seen in the second attached image (recon.png), which includes reconstructions when 1) not accounting for axial compression (top row), 2) including axial compression (middle row), and 3) reconstructing with the PET Toolbox, just as a reference (bottom row). In recon.png I also included a reconstructed image with normalization correction in addition to scatter, random, and attenuation correction (a+s+r+n). As can be seen, the images still contain these streaking patterns (corresponding to every other slice) when including random and scatter correction (regardless of including compression). When including normalization correction factors, these patterns are enhanced (and, in the case of including compression, completely distorted).
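For completeness, this is roughly how I write one event in this compressed format. The field order mirrors my reading of table 3 (time, ACF, scatter rate, random rate, normalization, number of lines, crystal indices), and the function and constant names are just my own, so please point out if the layout itself is already wrong:

import struct

MAX_NB_LINES = 2          # as declared in the .cdh header
GARBAGE_ID   = 0          # filler crystal index for the unused second line

def write_event(f, time_ms, acf, scat_rate, rand_rate, norm, crystal_pairs):
    # One list-mode event with up to MAX_NB_LINES contributing crystal pairs;
    # events with a single line are padded with garbage indices so that all
    # events have the same size.
    f.write(struct.pack("<I", time_ms))                           # time stamp (uint32, ms)
    f.write(struct.pack("<ffff", acf, scat_rate, rand_rate, norm))
    f.write(struct.pack("<H", len(crystal_pairs)))                # nb of contributing lines (uint16)
    padded = list(crystal_pairs) + [(GARBAGE_ID, GARBAGE_ID)] * (MAX_NB_LINES - len(crystal_pairs))
    for id1, id2 in padded:
        f.write(struct.pack("<II", id1, id2))                     # CASToR crystal indices (uint32)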

Again, I am asking for some help: is this something that you have seen before and/or know how to remedy? Any tips or guidance that could point me in the right direction would be helpful.

Additionally, may I ask how this is handled in the v3 list-mode benchmark? I am asking because the maximum number of lines per event is not specified in the header (i.e. it should default to 1), yet I would assume that this compression is somehow accounted for. It would also be interesting to know because the distance-driven projector is not compatible with compression in the data file.

Also, for your information, when trying to access the benchmarks from the website, a “page not found” error is returned.

I am very grateful for any guidance in this matter.

Best regards,
Philip

Hi Philip,

For your first problem, my guess would be scatters.

The estimation from the PET Toolbox is corrected for normalization; however, to be generic, CASToR needs an uncorrected scatter estimate.

Did you un-apply the normalization correction to the scatters before including them in the datafile?
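To illustrate what I mean, assuming the Toolbox gives you multiplicative normalization correction factors (so that the normalized estimate is ncf times the raw one), the un-apply step would roughly be:

def unapply_normalization(scatter_normalized, ncf):
    # Recover the un-normalized scatter expected by CASToR; if your
    # convention stores crystal efficiencies instead of correction
    # factors, this becomes a multiplication instead.
    return scatter_normalized / ncf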

If you separated scatters and randoms in the datafile, you can use the option ‘-ignore-corr scat’ to see if you get images without artifacts while still considering random correction.

About axial compression: as the span of 3 only applies to the first segment, we chose not to include it in the exact way, using a maximum number of lines of 2, because it would mean having garbage LORs for 99% of the datafile.

We kept 1 line per event.

For indirect planes of the first segment, we created 2 events for each sinogram bin, where each event has half the data (half scatters and half randoms), twice the normalization coefficient, the same attenuation of course, and is associated with one of the two crystal pairs.

We checked that the impact was not significant.
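In pseudo-Python, the splitting we used looks roughly like this (the names are purely illustrative):

def split_indirect_bin(value, scat, rand, acf, norm, pair_a, pair_b):
    # One span-3 indirect sinogram bin becomes two single-line events:
    # each gets half of the bin value and of the additive corrections,
    # twice the normalization coefficient, the same attenuation factor,
    # and one of the two contributing crystal pairs.
    shared = dict(value=value / 2.0, scat=scat / 2.0, rand=rand / 2.0,
                  acf=acf, norm=2.0 * norm)
    return [dict(shared, pair=pair_a), dict(shared, pair=pair_b)]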

However, regarding the artifacts you get, it is hard to tell where the problem is.

Hope this helps.

You are not far from accurate images!

Simon