Huge relative difference in benchmark PET test

Dear developers of CASToR,

Following the instructions in your general documentation, I just installed CASToR (version 3.1.1) on my Linux workstation. To check that the package was installed properly, I then ran the benchmark test for the PET histogram without any modification. The reconstruction process ran smoothly, but upon completion, CASToR reported a huge difference between the reconstructed image and the reference one (detailed output below).

==================================================================================================================
Check benchmark results (compare reconstructed image with the reference one)
==================================================================================================================
→ !!! The mean relative difference over the whole image is -0.0465451 !
It is supposed to be between -5e-07 and 5e-07 !
→ !!! The mean relative difference for each slice has a maximum absolute value of 2.77493 !
It is supposed to be between -5e-05 and 5e-05 !
The test failed for 105 slices !
→ !!! The maximum absolute relative difference is 2.62767 !
It is supposed to be below 0.0001 !
→ !!! Benchmark tests failed (3 problems found) !
Double check all what you have done first. Then email the mailing list.

After reading relevant posts in this forum, I believe this is a rather common problem caused by compiler differences, but does the difference I observed fall within an acceptable range?

I have also visualized the reconstructions in Python and noticed that the difference mainly comes from the gaps between the ellipsoids. Since I am using the benchmark data for CASToR v1, could this difference be caused by changes to CASToR between versions?
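For reference, here is a minimal sketch of how such a comparison can be done in NumPy, assuming both images are already loaded as arrays of the same shape. The function name is hypothetical and the voxel-wise formula `(recon - ref) / ref` is only an approximation of what the benchmark script reports, not CASToR's exact check; it does illustrate why near-zero "gap" regions can produce huge relative differences from tiny absolute errors.

```python
import numpy as np

def relative_difference_report(recon, ref, slice_axis=0):
    """Hypothetical helper: voxel-wise relative difference (recon - ref) / ref,
    summarized the same three ways as in the benchmark output above."""
    rel = (recon - ref) / ref
    other_axes = tuple(a for a in range(rel.ndim) if a != slice_axis)
    return {
        "mean_rel": float(rel.mean()),                              # whole image
        "max_slice_mean": float(np.abs(rel.mean(axis=other_axes)).max()),
        "max_abs_rel": float(np.abs(rel).max()),                    # worst voxel
    }

# Synthetic example: a reconstruction uniformly 1% above the reference,
# plus one voxel in a near-zero "gap" region with a tiny absolute error.
ref = np.full((4, 8, 8), 2.0)
recon = ref * 1.01            # copy, 1% higher everywhere
ref[0, 0, 0] = 0.001          # near-zero background, as in the gaps
recon[0, 0, 0] = 0.003        # small absolute error, huge relative one

report = relative_difference_report(recon, ref)
print(report)
```

The single gap voxel dominates the maximum relative difference (it comes out around 2.0 here, from an absolute error of only 0.002), which matches the observation that the failures concentrate between the ellipsoids.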

Thanks for your attention and please do not hesitate to let me know if I should provide more information.

Kind regards,
Bo

Hi Bo,

Yes, the benchmarks are specific to the version of the software; the reconstructed data actually differs depending on the version. You should get the benchmark corresponding to CASToR v3 here and check again.

Best,
Thibaut

Hi Thibaut,

Thanks for your advice! I ran the benchmark test for CASToR v3, and this time it passed successfully.

I have also taken a look at the raw input data, and they are indeed very different from those for version 1. :smile:

Kind regards,
Bo