Magnetic resonance imaging–based radiation treatment plans for dogs may be feasible with the use of generative adversarial networks

Nicola Billings, MASc; Department of Engineering, College of Engineering and Physical Sciences, University of Guelph, Guelph, ON, Canada

Ryan Appleby, DVM, DACVR; Department of Clinical Studies, Ontario Veterinary College, University of Guelph, Guelph, ON, Canada; https://orcid.org/0000-0002-5515-8628

Amin Komeili, PhD; Department of Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada

Valerie Poirier, DMV, DACVIM, DACVR, DECVIM; Department of Biomedical Science, Ontario Veterinary College, University of Guelph, Guelph, ON, Canada

Christopher Pinard, DVM, DVSc, DACVIM; Department of Clinical Studies, Ontario Veterinary College, University of Guelph, Guelph, ON, Canada; Department of Oncology, Lakeshore Animal Health Partners, Toronto, ON, Canada; Department of Radiogenomics, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; https://orcid.org/0000-0003-1311-5467

Eranga Ukwatta, PhD; Department of Engineering, College of Engineering and Physical Sciences, University of Guelph, Guelph, ON, Canada

Open access

Abstract

Objective

The purpose of this research was to examine the feasibility of utilizing generative adversarial networks (GANs) to generate accurate pseudo-CT images for dogs.

Methods

This study used head standard CT images and T1-weighted transverse with contrast 3-D fast spoiled gradient echo head MRI images from 45 nonbrachycephalic dogs that received treatment between 2014 and 2023. Two conditional GANs (CGANs), one with a U-Net generator and a PatchGAN discriminator and another with a residual neural network (ResNet) U-Net generator and ResNet discriminator, were used to generate the pseudo-CT images.

Results

The CGAN with a ResNet U-Net generator and ResNet discriminator had an average mean absolute error of 109.5 ± 153.7 HU, average peak signal-to-noise ratio of 21.2 ± 4.31 dB, normalized mutual information of 0.89 ± 0.05, and Dice similarity coefficient of 0.91 ± 0.12. The Dice similarity coefficient for the bone regions was 0.71 ± 0.17. Qualitative results indicated that the most common ranking was “slightly similar” for both models. The CGAN with a ResNet U-Net generator and ResNet discriminator produced more accurate pseudo-CT images than the CGAN with a U-Net generator and PatchGAN discriminator.

Conclusions

The study concludes that CGAN can generate relatively accurate pseudo-CT images but suggests exploring alternative GAN extensions.

Clinical Relevance

Implementing generative learning into veterinary radiation therapy planning demonstrates the potential to reduce imaging costs and time.


Brain tumors are abnormal masses within the skull that are caused by rapidly dividing cells.1,2 Tumors can be classified as benign, which are noncancerous, or malignant, which are cancerous.1,2 Brain tumors can occur in any dog breed and at any age.3 The incidence of brain tumors in 8-year-old adult dogs is 2.8% to 4.5%.4,5 Radiation therapy in dogs4 and people6 uses high doses of ionizing radiation over several weeks to kill cancer cells by damaging their DNA beyond repair. Currently, radiation therapy treatment plans in people require CT imaging to obtain electron density information for dose planning and MRI for soft tissue delineation.6 Magnetic resonance imaging cannot be used alone for radiation therapy planning in people because it does not provide electron density information; without this information, a radiation dose cannot be planned.6 The factor that most affects the accuracy of radiation therapy in humans is the delineation of the tumor volume.7 Typically, CT with the addition of contrast can be used alone for radiation therapy planning and is often sufficient for tumor volume delineation in people.8 However, the inclusion of MRI can enhance the planning process by providing superior soft tissue contrast, which is particularly useful for tumors located in areas where CT offers limited differentiation, such as the brain or prostate.9 Magnetic resonance imaging can also help improve the accuracy of tumor delineation, although it increases both the cost and time required for treatment planning in people.10–12

Significant interest has been directed toward developing MRI-based radiation treatment plans without the use of CT through machine learning and deep learning (DL) methods, which would reduce patient radiation exposure, imaging cost, and time.10–12 No studies have explored this concept for veterinary applications. Common methods used to generate human pseudo-CT images include convolutional neural networks (CNNs), generative adversarial networks (GANs), and ensemble approaches.6,13–16 One study12 attempted to generate realistic head synthetic CT images from MRI images using 19 patients from an open-source dataset; it used a U-Net CNN with a 6-fold validation approach to generate the synthetic CT images. Another study17 developed and evaluated a method to generate head pseudo-CT images from multiparametric MRI images using a multichannel multipath GAN. That study17 used 32 patients, a residual neural network (ResNet) U-Net generator, and a CNN discriminator. The model had minimal errors in the soft tissue regions but apparent errors in the bone regions, especially at the edges of the bony structures.17 A different study18 implemented and compared 3 deep CNN architectures for pseudo-CT synthesis from MRI images: U-Net, Atrous Net, and ResNet. All 3 models had moderate errors in the soft tissues but apparent errors in the bone regions.18 Pseudo-CT image generation from MRI images for head and neck radiation therapy was performed for 8 patients using a GAN with a U-Net generator and CNN discriminator.19 Another study14 proposed an ensemble approach with stacked generalization to generate accurate synthetic CT images that considered patch-based, texture, and spatial features. Feature extraction, fusion, and reduction were performed to obtain the optimal features.14 The ensemble used artificial neural networks, random forest, and k-nearest neighbors, and stacked generalization was achieved using multiple linear regressions.14 The study14 reported that the generated images had poor resolution due to limited hardware. A study exploring generative learning in veterinary radiation therapy planning is warranted due to the unique challenges of animal treatment planning compared to human treatment planning, including variations in anatomy and size. Additionally, animal patients often require anesthesia or sedation during imaging and treatment procedures, complicating the acquisition of accurate, high-quality images for planning.20 Understanding these factors is crucial for improving treatment outcomes and ensuring the safe and effective delivery of radiation therapy in veterinary oncology.

The objective of this study was to determine whether DL can be used to generate accurate pseudo-CT images from MRI images to improve veterinary radiation therapy planning. A comparative study between 2 conditional GANs (CGANs) was implemented to assess whether CGANs are suitable for generating accurate pseudo-CT images.

Methods

Patient cases for this study included nonbrachycephalic dogs with brain tumors that received radiation therapy at the Ontario Veterinary College between 2014 and 2023. Each dog needed a radiation therapy head standard CT (BrightSpeed CT; GE HealthCare) and a T1-weighted transverse with contrast 3-D fast spoiled gradient echo MRI (Signa Explorer MRI; GE HealthCare) acquired on the same day. The dogs were under general anesthesia. The radiation therapy head standard CT images were acquired with a slice thickness of 2.5 mm, 120 kVp, pixel spacing of 0.48 mm, and a voxel size of 0.59 × 0.59 × 0.625 mm³. The MRI images were acquired with a slice thickness of 0.8 mm, magnetic field strength of 1.5 T, pixel spacing of 0.71 mm, voxel size of 0.71 × 0.71 × 0.8 mm³, echo time of 4.2 ms, and repetition time between 4 and 8.5 ms. This study compared 2 different CGANs: one consisted of a U-Net generator and PatchGAN discriminator inspired by Tolpadi et al,21 and the other had a ResNet U-Net generator and ResNet discriminator.

Image preprocessing

Image preprocessing involved resizing, registering, and normalizing the pixel data. The CT image slices had an original resolution of 512 × 512 pixels, and the MRI image slices had a resolution of 256 × 256 pixels. The CT images were down-sampled to 256 × 256 pixels for a consistent resolution with the MRI images. The MRI and CT data were then rigidly registered (3D Slicer, version 5.2.2)22 and normalized between 0 and 1. A 3-fold validation was used to train and validate the models: 80% (36 patients) of the data was used for training and validation, and the remaining 20% (9 patients) was used as a test set, where the best model from the 3-fold validation was used to evaluate model performance.
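A minimal sketch of this preprocessing pipeline is shown below, assuming the paired slices are already loaded as NumPy arrays; the use of scikit-image for resizing and scikit-learn for the fold split is an illustrative assumption, since the paper only specifies 3D Slicer for registration.

```python
# Illustrative preprocessing sketch (not the authors' code): down-sample CT
# slices to the MRI resolution, normalize to [0, 1], and split patients 3-fold.
import numpy as np
from skimage.transform import resize          # assumed library choice
from sklearn.model_selection import KFold     # assumed library choice

def preprocess_pair(ct_slice, mri_slice):
    """Resize a 512 x 512 CT slice to 256 x 256 and normalize both slices."""
    ct = resize(ct_slice, (256, 256), preserve_range=True, anti_aliasing=True)

    def to_unit_range(img):
        img = img.astype(np.float32)
        return (img - img.min()) / (img.max() - img.min() + 1e-8)

    return to_unit_range(ct), to_unit_range(mri_slice)

# Patient-level 3-fold split over the 36 training/validation cases.
patients = np.arange(36)
for fold, (train_ids, val_ids) in enumerate(
        KFold(n_splits=3, shuffle=True, random_state=0).split(patients)):
    print(f"Fold {fold}: {len(train_ids)} training, {len(val_ids)} validation patients")
```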

Proposed U-Net generator and PatchGAN discriminator CGAN

The CGAN inspired by Tolpadi et al21 was selected for pseudo-CT image generation based on its proven ability to learn complex image-to-image translation tasks, particularly in medical imaging applications, and its demonstrated performance in tasks where high fidelity and anatomical accuracy are critical. This architecture has been used successfully in a variety of domains, including the generation of synthetic medical images from MRI data, due to its capability to generate high-quality, realistic images with preserved anatomical details.21 Additionally, its use of paired image datasets enables the model to learn the mapping between the 2 imaging modalities more effectively than other generative models.21

The CGAN inspired by Tolpadi et al21 consisted of a U-Net generator and a PatchGAN discriminator. The contracting component of the U-Net consisted of repeated 2-D convolutional layers, batch normalization, and a leaky rectified linear unit (Leaky ReLU) activation function. The spatial information is enlarged by 2-D transposed convolutions, batch normalization, dropout with a rate of 0.5, and a Leaky ReLU activation function. The output layer of the decoder used a sigmoid activation. A concatenation with the corresponding encoder layer's features is performed to assist the model in capturing low- and high-level features.23 The generator and discriminator were updated using the Adam optimizer with a learning rate of 0.0002. The generator loss function used both binary cross entropy (BCE) and mean absolute error (MAE), as seen in the following equation:
$L = \mathrm{BCE}\left(D(G(I,C)), \mathrm{Label}\right) + 0.5 \cdot \mathrm{MAE}\left(G(I,C), T\right)$

In this equation, D represents the discriminator, G is the generator, I is the input MRI image, C is the condition, and Label is the fake label assigned to the synthetic CT images. This adversarial term is combined with the MAE calculated between the generated image G(I,C) and the actual image T, weighted by a factor of 0.5.
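A hedged Keras sketch of this combined loss follows. As a sketch, it uses the common pix2pix convention in which the generator's adversarial term compares the discriminator output on the synthetic image against the "real" label; the authors' exact label convention is an assumption here.

```python
# Sketch of the combined generator loss from the equation above, in Keras.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
mae = tf.keras.losses.MeanAbsoluteError()

def generator_loss(disc_out_on_fake, generated_ct, real_ct):
    # Adversarial BCE term: the generator is rewarded when the discriminator
    # scores its output as real (assumed label convention).
    adv = bce(tf.ones_like(disc_out_on_fake), disc_out_on_fake)
    # Reconstruction term: MAE between pseudo-CT and real CT, weighted 0.5.
    rec = mae(real_ct, generated_ct)
    return adv + 0.5 * rec
```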

The PatchGAN discriminator is 6 layers deep, with an output patch of 5 × 5 pixels. It is designed to accept a pair of input images and the respective label. The image pair undergoes repeated 2-D convolutional layers, batch normalization, and Leaky ReLU activation functions. The output layer of the discriminator has a sigmoid activation function. The discriminator was trained using BCE loss.
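A minimal Keras sketch of a conditional PatchGAN discriminator of this kind is shown below. The filter counts and strides are illustrative assumptions; the exact configuration that yields the paper's 5 × 5 output patch is not specified in the text.

```python
# Illustrative conditional PatchGAN discriminator sketch (not the authors' code).
import tensorflow as tf
from tensorflow.keras import layers

def build_patchgan_discriminator(shape=(256, 256, 1)):
    mri = layers.Input(shape)   # conditioning MRI slice
    ct = layers.Input(shape)    # real or generated CT slice
    x = layers.Concatenate()([mri, ct])
    for filters in (64, 128, 256, 512, 512):
        x = layers.Conv2D(filters, kernel_size=4, strides=2, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.LeakyReLU(0.2)(x)
    # Sigmoid output: each spatial cell classifies one receptive-field patch.
    patch = layers.Conv2D(1, kernel_size=4, padding="same", activation="sigmoid")(x)
    return tf.keras.Model([mri, ct], patch)

disc = build_patchgan_discriminator()
disc.compile(optimizer=tf.keras.optimizers.Adam(2e-4),  # learning rate from the paper
             loss="binary_crossentropy")                # discriminator trained with BCE
```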

Proposed ResNet U-Net generator and ResNet discriminator CGAN

The proposed CGAN model consisted of a ResNet U-Net generator. The contracting component of the U-Net consisted of repeated residual blocks, each with 2 layers of 2-D convolutions, batch normalization, and Leaky ReLU activation functions. The output from the 2 layers is combined with a skip link, which has a 2-D convolution and batch normalization. The MRI spatial information is enlarged by a factor of 2 (2 × 2 upsampling) in both the horizontal and vertical directions before applying the residual blocks. The output layer of the decoder has a sigmoid activation function. A concatenation with the corresponding encoder layer's features is performed to assist the model in capturing low- and high-level features.23 The generator and discriminator were updated using the Adam optimizer with a learning rate of 0.0002. The generator loss function used both BCE and MAE, as seen in the equation above.
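A sketch of the residual block and decoder stage just described follows; the 3 × 3 and 1 × 1 convolution kernel sizes are assumptions, since the text does not state them for the generator.

```python
# Illustrative residual block and decoder stage for the ResNet U-Net generator.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two conv/BN/LeakyReLU layers, as described for the contracting path.
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.LeakyReLU(0.2)(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    # Skip link with its own 2-D convolution and batch normalization.
    skip = layers.Conv2D(filters, 1, padding="same")(x)
    skip = layers.BatchNormalization()(skip)
    return layers.LeakyReLU(0.2)(layers.Add()([y, skip]))

def decoder_stage(x, encoder_features, filters):
    # 2 x 2 upsampling before the residual block, then the U-Net
    # concatenation with the matching encoder features.
    x = layers.UpSampling2D(size=(2, 2))(x)
    x = layers.Concatenate()([x, encoder_features])
    return residual_block(x, filters)
```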

The proposed discriminator consisted of an input layer and 6 residual blocks. The discriminator is designed to accept a pair of input images and their respective label. Each residual block consisted of a 2-D convolution and a Leaky ReLU activation function, followed by another 2-D convolution, and added to a skip link. The skip link consisted of a 2-D convolution with a kernel size of 4 × 4 pixels and a stride of 1. The discriminator was trained using BCE loss.

Evaluation metrics

Metrics used to evaluate the performance of the 2 CGAN models included the Dice similarity coefficient (DSC), MAE, normalized mutual information (NMI), and peak signal-to-noise ratio (PSNR). The DSC is a value between 0, meaning no overlap, and 1, representing perfect alignment.24 In this study, a binary mask of the entire head (tissue, water, fat, and bone) was created for the real and generated images by thresholding with a range of −100 to 2,000 HU.25 Binary masks of the bone regions alone were created for the real and generated images using a range of 226 to 2,000 HU.25 The DSC can be calculated using the following equation:
$\mathrm{DSC} = \dfrac{2\sum_{i=1}^{N} y_i \hat{y}_i}{\sum_{i=1}^{N} y_i + \sum_{i=1}^{N} \hat{y}_i}$

where $N$ is the number of voxels in the generated volume, $y_i$ is the ground truth voxel value, and $\hat{y}_i$ is the generated voxel value.
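A short sketch of the DSC computation with the HU thresholds given above; `real_ct` and `pseudo_ct` are assumed to be arrays already expressed in Hounsfield units.

```python
# Illustrative DSC computation on HU-thresholded binary masks.
import numpy as np

def hu_mask(volume_hu, lo, hi):
    """Binary mask of voxels within [lo, hi] HU."""
    return (volume_hu >= lo) & (volume_hu <= hi)

def dice(mask_a, mask_b):
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * np.logical_and(mask_a, mask_b).sum() / denom if denom else 1.0

# Example usage with assumed HU volumes:
# dsc_head = dice(hu_mask(real_ct, -100, 2000), hu_mask(pseudo_ct, -100, 2000))
# dsc_bone = dice(hu_mask(real_ct, 226, 2000), hu_mask(pseudo_ct, 226, 2000))
```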

The MAE between the generated CT and real CT images for the entire head, tissue, and bone regions was calculated using the following equation:
$\mathrm{MAE} = \dfrac{1}{N}\sum_{i=1}^{N} \left| y_i - x_i \right|$

where $N$ represents the number of pixels, $y_i$ represents the generated image pixel value, and $x_i$ represents the actual pixel value. Thresholding with a range between −100 and 226 HU was used to isolate the tissue.25

The PSNR can be calculated using the following equation:
$\mathrm{PSNR} = 20 \log_{10}(\mathrm{Max}) - 10 \log_{10}(\mathrm{MSE})$

where Max is the maximum pixel value within the generated image and MSE is the mean squared error between the actual image and the generated image.
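The MAE and PSNR formulas above can be sketched as follows; the inputs are assumed to be HU arrays of identical shape, with region masks built as in the DSC example.

```python
# Illustrative MAE and PSNR computations matching the equations above.
import numpy as np

def masked_mae(real_ct, pseudo_ct, mask):
    """MAE restricted to a region mask (head, tissue, or bone)."""
    return np.mean(np.abs(real_ct[mask] - pseudo_ct[mask]))

def psnr(real_ct, pseudo_ct):
    mse = np.mean((real_ct - pseudo_ct) ** 2)
    max_val = pseudo_ct.max()  # maximum pixel value of the generated image
    return 20 * np.log10(max_val) - 10 * np.log10(mse)
```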

The NMI can be used to evaluate the similarity between the generated CT images and the actual CT images, as defined in the following equation:
$\mathrm{NMI}(CT_t, CT_f) = \dfrac{2\,I(CT_t, CT_f)}{H(CT_t) + H(CT_f)}$

where $I(CT_t, CT_f)$ is the mutual information between the pseudo-CT and actual CT, and $H(CT_t)$ and $H(CT_f)$ are the entropies of the actual and generated images, respectively. The closer the NMI is to 1, the greater the similarity between the images.
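A histogram-based sketch of this NMI formula is given below; the bin count is an assumption, as the text does not specify how the entropies were estimated.

```python
# Illustrative NMI computation from a joint intensity histogram.
import numpy as np

def nmi(real_ct, pseudo_ct, bins=64):
    joint, _, _ = np.histogram2d(real_ct.ravel(), pseudo_ct.ravel(), bins=bins)
    pxy = joint / joint.sum()                    # joint distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)    # marginal distributions
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    entropy = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # Shannon entropy
    return 2 * mi / (entropy(px) + entropy(py))
```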

For the qualitative results, a 5-point scale was used to evaluate the quality and accuracy of the pseudo-CT images. From the test dataset, 45 random MRI image slices were selected, and each model generated a pseudo-CT image for every slice. The images were evaluated by a veterinary radiation oncologist, a veterinary radiologist, and a veterinary medical oncologist. Each reviewer received the real CT, the respective pseudo-CT from each model, and the ranking scale in Supplementary Table S1, which contains the ranking parameters; the scale was created specifically for this study and is not adapted from any previously published material. Each reviewer also received an additional 9 image pairs that they had already seen, to assess intrarater reliability.

Statistical analysis

Wilcoxon signed-rank tests with a significance threshold of 5% (P < .05) were used for all quantitative metrics to evaluate whether there was a statistical difference between the 2 correlated groups. For the qualitative metrics, where the results are based on 3 raters’ observations, interrater agreement was measured using the Fleiss kappa. The Fleiss kappa is a value between −1 and 1, where values ≤ 0 indicate no agreement, 0.01 to 0.20 none to slight, 0.21 to 0.40 fair, 0.41 to 0.60 moderate, 0.61 to 0.80 substantial, and 0.81 to 1.00 almost perfect agreement.26
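A sketch of these tests follows; the metric arrays are hypothetical placeholders, and the rating table assumes one row per image with counts of the 3 raters' choices across the 5 similarity categories.

```python
# Illustrative statistical tests: paired Wilcoxon signed-rank and Fleiss kappa.
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.inter_rater import fleiss_kappa

# Paired per-image metric values for the two models (placeholder data).
mae_model1 = np.array([126.0, 130.5, 118.2, 141.9])
mae_model2 = np.array([110.3, 112.7, 105.9, 120.4])
stat, p_value = wilcoxon(mae_model1, mae_model2)
print(f"Wilcoxon signed-rank: statistic={stat}, P={p_value:.4f}")

# Fleiss kappa: each row sums to the number of raters (3 in this study).
ratings = np.array([[0, 2, 1, 0, 0],
                    [0, 1, 2, 0, 0],
                    [1, 2, 0, 0, 0]])
print(f"Fleiss kappa = {fleiss_kappa(ratings):.2f}")
```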

Results

Quantitative results

A total of 118 patients received radiation therapy planning for brain tumors at the Ontario Veterinary College between 2014 and 2023, but only 45 patient cases met the inclusion criteria and were included in this study. The model inspired by Tolpadi et al21 produced MAE values of 126.2 ± 134.9 HU, 96.0 ± 28.3 HU, and 289.1 ± 152.6 HU for the entire head, tissue, and bone regions, respectively (Table 1). In comparison, the ResNet U-Net generator and discriminator model yielded MAE values of 109.5 ± 153.7 HU, 90.4 ± 26.3 HU, and 161.2 ± 153.7 HU for the same regions. The model inspired by Tolpadi et al21 also produced lower PSNR values of 20.5 ± 3.63 dB, 21.9 ± 4.53 dB, and 18.3 ± 4.12 dB for the head, tissue, and bone regions, respectively, compared to the ResNet U-Net model, which showed PSNR values of 21.2 ± 4.31 dB, 21.9 ± 4.71 dB, and 19.6 ± 4.81 dB. For the entire head, both models achieved a DSC of 0.91 ± 0.12, suggesting that both models accurately captured the head shape. However, the ResNet U-Net model demonstrated a higher DSC for the bone regions (0.71 ± 0.17) than the model inspired by Tolpadi et al21 (0.69 ± 0.19). Regarding NMI, the model based on Tolpadi et al21 produced values of 0.87 ± 0.07 and 0.64 ± 0.07 for the head and bone regions, respectively, which were lower than those of the ResNet U-Net model (0.89 ± 0.05 and 0.67 ± 0.09). Because all P values were < .05, the differences between the CGAN inspired by Tolpadi et al21 and the proposed CGAN were statistically significant across all measured regions (Table 2).

Table 1

Quantitative metrics for both models.

| Method/region | Mean absolute error (HU) | Peak signal-to-noise ratio (dB) | Dice similarity coefficient | Normalized mutual information |
| --- | --- | --- | --- | --- |
| CGAN with U-Net generator and PatchGAN discriminator inspired by Tolpadi et al21 | | | | |
| Head | 126.2 ± 134.9 | 20.5 ± 3.63 | 0.91 ± 0.12 | 0.87 ± 0.07 |
| Tissue | 96.0 ± 28.3 | 21.9 ± 4.53 | | |
| Bone | 289.1 ± 152.6 | 18.3 ± 4.12 | 0.69 ± 0.19 | 0.64 ± 0.07 |
| CGAN with ResNet U-Net generator and ResNet discriminator | | | | |
| Head | 109.5 ± 153.7* | 21.2 ± 4.31* | 0.91 ± 0.12* | 0.89 ± 0.05* |
| Tissue | 90.4 ± 26.3* | 21.9 ± 4.71* | | |
| Bone | 161.2 ± 153.7* | 19.5 ± 4.81* | 0.71 ± 0.17* | 0.67 ± 0.09* |

CGAN = Conditional generative adversarial network. ResNet = Residual neural network.

*Indicates significant improvement via Wilcoxon signed-rank test (P < .05).

Table 2

Wilcoxon signed-rank test results for mean absolute error, peak signal-to-noise ratio, normalized mutual information, and Dice similarity coefficient between the 2 models.

| Metric/region | P value | Standardized effect size | Common language effect size |
| --- | --- | --- | --- |
| Mean absolute error (HU) | | | |
| Head | < .001* | Medium (0.37) | 0.71 |
| Tissue | < .001* | Small (0.21) | 0.62 |
| Bone | < .001* | Small (0.26) | 0.65 |
| Peak signal-to-noise ratio (dB) | | | |
| Head | < .001* | Small (0.25) | 0.36 |
| Tissue | < .001* | Small (0.05) | 0.47 |
| Bone | < .001* | Small (0.27) | 0.34 |
| Normalized mutual information | | | |
| Head | < .001* | Medium (0.4) | 0.27 |
| Bone | < .001* | Small (0.00) | 0.49 |
| Dice similarity coefficient | | | |
| Head | < .001* | Small (0.081) | 0.45 |
| Bone | < .001* | Small (0.24) | 0.36 |

*Indicates significant improvement (P < .05).

Qualitative results

Examples of the pseudo-CT images generated by each model can be observed in Figure 1.

Figure 1

Representative transverse calvarium pseudo-CT images generated by model 1 (A) and model 2 (B) through the use of generative adversarial networks, compared with actual head standard CT images (C) and T1-weighted transverse with contrast 3-D fast spoiled gradient echo head MRI images (D) of dogs with brain tumors between January 2014 and December 2023. Model 1 used the U-Net generator and PatchGAN discriminator, and model 2 used the residual neural network (ResNet) U-Net generator and ResNet discriminator.


The number of votes for each category by each reviewer can be found in Figure 2. The interrater reliability was 0.85 for the model inspired by Tolpadi et al21 and 0.81 for the ResNet U-Net generator and ResNet discriminator model, both indicating almost perfect agreement. The intrarater reliability for reviewers 1, 2, and 3 was 92%, 89%, and 89%, respectively (Table 3).

Figure 2

Ranking results for the pseudo-CT images, where model 1 represents the U-Net generator and PatchGAN discriminator and model 2 is the ResNet U-Net generator and ResNet discriminator.


Table 3

Intrarater reliability analysis for all 3 reviewers.

| Reviewer/model | Not similar | Slightly similar | Moderately similar | Quite similar | Identical | Total reliability |
| --- | --- | --- | --- | --- | --- | --- |
| Reviewer 1 | | | | | | 92% |
| Model 1 | 100% | 100% | 100% | 100% | 100% | |
| Model 2 | 89% | 78% | 67% | 89% | 100% | |
| Reviewer 2 | | | | | | 89% |
| Model 1 | 100% | 89% | 89% | 100% | 100% | |
| Model 2 | 100% | 56% | 67% | 89% | 100% | |
| Reviewer 3 | | | | | | 89% |
| Model 1 | 89% | 89% | 100% | 100% | 100% | |
| Model 2 | 78% | 56% | 78% | 100% | 100% | |

Discussion

The inclusion criteria were essential to reduce dataset heterogeneity and maintain consistency in skull shape, ensuring that the model could focus on learning patterns relevant to the study population (mesocephalic and dolichocephalic dogs). Large variations in skull structure could introduce difficulties for a DL model due to the significant morphological differences between skull types. As demonstrated by Ichikawa et al,27 brachycephalic, mesocephalic, and dolichocephalic dogs exhibit distinct variations in skull length and skull index, which can lead to differences in the spatial orientation and proportions of anatomical structures. To mitigate this, the study focused on mesocephalic and dolichocephalic dogs, ensuring a more uniform dataset that would minimize this challenge.

The results demonstrate that the CGAN model incorporating a ResNet U-Net generator and ResNet discriminator outperforms the CGAN with a U-Net generator and PatchGAN discriminator in terms of key performance metrics. Specifically, the proposed model achieved lower MAE and higher DSC, NMI, and PSNR across all regions. Furthermore, the proposed model showed improvements in the accuracy and clarity of the generated pseudo-CT images when compared to the CGAN inspired by Tolpadi et al.21 In addition, the classification of the pseudo-CT images revealed that the proposed model consistently generated images that were categorized as “slightly similar,” with only a small margin separating it from “moderately similar.” When comparing these findings with the previous studies12,17–19 listed in the introduction, the results are consistent with the typical errors observed in similar DL-based pseudo-CT generation approaches. For instance, the MAE values reported in this study (ranging from 90 to 290 HU) are slightly higher than those reported in some studies of human brain pseudo-CT image generation, where typical MAE values are often below 100 HU for soft tissue regions. This suggests that, while the models perform well, further improvements in image quality, training data, and model refinement are necessary to reduce these errors and enhance generalizability. The models likely had difficulty predicting bone regions because of the large anatomical variation in canine skulls across breeds28; the variation between mesocephalic and dolichocephalic skull types may have made it challenging for the model to accurately predict bone structures in the skull. Both models also produced some blurring in the pseudo-CT images, likely because the MRI had a much lower voxel-wise resolution than the CT.

Future research could attempt to generate synthetic CT images from MRI images using other DL methods or GAN extensions,29 which may be able to generate more accurate pseudo-CT images. Future work could also examine image-to-image translation in the reverse direction, from CT to MRI. An extension of this study could calculate the prescribed radiation dose from the generated CT images to assess differences in dose planning.30 Finally, the next steps for this study could involve expanding the dataset to include brachycephalic dogs, which would increase the model's generalizability. This study has several limitations, including issues related to image quality, sample size, and generalizability.

One key limitation is the limited number of dogs at the Ontario Veterinary College that received radiation therapy and had both MRI and CT scans on the same day. This restricted dataset meant that even lower-quality or noisy images were included. The presence of low-quality images could hinder the DL model's ability to learn effectively, potentially impacting its performance. Another limitation is that the study only included nonbrachycephalic dogs, which limits its generalizability to other dog breeds. Additionally, the use of an unvalidated scale to assess the pseudo-CT images is a notable limitation. This introduces the risk of measurement bias and inconsistent interpretation of the evaluation criteria, which could affect the reliability of the results. Without validation, the scale's ability to accurately assess image quality or agreement across raters remains uncertain. The unvalidated scale was chosen due to practical constraints; this limitation should be considered when interpreting the study's findings.

In conclusion, while the ResNet U-Net model demonstrates improvements over the model inspired by Tolpadi et al21 in terms of image quality and performance metrics, the errors identified in the study, particularly in bone delineation, highlight the need for further refinement. Future studies should focus on optimizing the models for different dog breeds, validating evaluation scales, and exploring methods to reduce these errors for improved clinical applicability in radiation therapy planning.

Supplementary Materials

Supplementary materials are posted online at the journal website: avmajournals.avma.org.

Acknowledgments

None reported.

Disclosures

The resources used for this study were provided by the University of Guelph and included the Alienware Aurora R13 desktop computer with a 2.1-GHz Intel Core i7-12700F 12-core CPU and a 10-GB NVIDIA GeForce RTX 3080 GDDR6X GPU. The code was written in Python and used Keras API with TensorFlow backend for the DL implementation. Rigid registration between the MRI and CT images was performed by using 3D Slicer open-source software version 5.2.2 (www.slicer.org). Data sharing is not applicable to this article.

No AI-assisted technologies were used in the composition of this manuscript.

Funding

The authors have nothing to disclose.

References

1. Patel A. Benign vs malignant tumors. JAMA Oncol. 2020;6(9):1488. doi:10.1001/jamaoncol.2020.2592

2. Sinha T. Tumors: benign and malignant. Cancer Ther Oncol Int J. 2018;10(3):CTOIJ.MS.ID.555790. doi:10.19080/CTOIJ.2018.10.555790

3. Miller AD, Miller CR, Rossmeisl JH. Canine primary intracranial cancer: a clinicopathologic and comparative review of glioma, meningioma, and choroid plexus tumors. Front Oncol. 2019;9:1151. doi:10.3389/fonc.2019.01151

4. Hicks J, Platt S, Kent M, Haley A. Canine brain tumours: a model for the human disease? Vet Comp Oncol. 2017;15(1):252–272. doi:10.1111/vco.12152

5. Rees JH. Diagnosis and treatment in neuro-oncology: an oncological perspective. Br J Radiol. 2011;84(special_issue_2):S82–S89. doi:10.1259/bjr/18061999

6. Leu SC, Huang Z, Lin Z. Generation of pseudo-CT using high-degree polynomial regression on dual-contrast pelvic MRI data. Sci Rep. 2020;10(1):8118. doi:10.1038/s41598-020-64842-3

7. Owrangi AM, Greer PB, Glide-Hurst CK. MRI-only treatment planning: benefits and challenges. Phys Med Biol. 2018;63(5):05TR01. doi:10.1088/1361-6560/aaaca4

8. Martin CJ, Kron T, Vassileva J, et al. An international survey of imaging practices in radiotherapy. Phys Med. 2021;90:53–65. doi:10.1016/j.ejmp.2021.09.004

9. Stieb S, McDonald B, Gronberg M, Engeseth GM, He R, Fuller CD. Imaging for target delineation and treatment planning in radiation oncology. Hematol Oncol Clin North Am. 2019;33(6):963–975. doi:10.1016/j.hoc.2019.08.008

10. Sun H, Fan R, Li C, et al. Imaging study of pseudo-CT synthesized from cone-beam CT based on 3D CycleGAN in radiotherapy. Front Oncol. 2021;11:603844. doi:10.3389/fonc.2021.603844

11. Yousefi Moteghaed N, Mostaar A, Azadeh P. Generating pseudo-computerized tomography (P-CT) scan images from magnetic resonance imaging (MRI) images using machine learning algorithms based on fuzzy theory for radiotherapy treatment planning. Med Phys. 2021;48(11):7016–7027. doi:10.1002/mp.15174

12. Sreeja S, Muhammad Noorul Mubarak D. Pseudo computed tomography image generation from brain magnetic resonance image using integration of PCA & DCNN-UNET: a comparative analysis. J Intell Fuzzy Syst. 2022;43(3):3021–3037. doi:10.3233/JIFS-213367

13. Leynes AP, Yang J, Wiesinger F, et al. Zero-echo-time and Dixon deep pseudo-CT (ZeDD CT): direct generation of pseudo-CT images for pelvic PET/MRI attenuation correction using deep convolutional neural networks with multiparametric MRI. J Nucl Med. 2018;59(5):852–858. doi:10.2967/jnumed.117.198051

14. Boukellouz W, Moussaoui A. Magnetic resonance-driven pseudo CT image using patch-based multi-modal feature extraction and ensemble learning with stacked generalisation. J King Saud Univ Comput Inf Sci. 2021;33(8):999–1007. doi:10.1016/j.jksuci.2019.06.002

15. Lei Y, Jeong JJ, Wang T. MRI-based pseudo CT synthesis using anatomical signature and alternating random forest with iterative refinement model. J Med Imaging. 2018;5(4):043504. doi:10.1117/1.JMI.5.4.043504

16. Largent A, Barateau A, Nunes JC, et al. Comparison of deep learning-based and patch-based methods for pseudo-CT generation in MRI-based prostate dose planning. Int J Radiat Oncol Biol Phys. 2019;105(5):1137–1150. doi:10.1016/j.ijrobp.2019.08.049

17. Tie X, Lam S, Zhang Y, Lee K, Au K, Cai J. Pseudo-CT generation from multi-parametric MRI using a novel multi-channel multi-path conditional generative adversarial network for nasopharyngeal carcinoma patients. Med Phys. 2020;47(4):1750–1762. doi:10.1002/mp.14062

18. Vera-Olmos J, Torrado-Carvajal A, Prieto-de-la-Lastra C, et al. How to pseudo-CT: a comparative review of deep convolutional neural network architectures for CT synthesis. Appl Sci. 2022;12(22):11600. doi:10.3390/app122211600

19. Largent A, Marage L, Gicquiau I, et al. Head-and-neck MRI-only radiotherapy treatment planning: from acquisition in treatment position to pseudo-CT generation. Cancer Radiother. 2020;24(4):288–297. doi:10.1016/j.canrad.2020.01.008

20. Belotta AF, Beazley S, Hutcheson M, Mayer M, Beaufrère H, Sukut S. Comparison of sedation and general anesthesia protocols for 18F-FDG-PET/CT studies in dogs and cats: musculoskeletal uptake and radiation dose to workers. Vet Radiol Ultrasound. 2024;66(1):e13439. doi:10.1111/vru.13439

21. Tolpadi AA, Luitjens J, Gassert FG, et al. Synthetic inflammation imaging with PatchGAN deep learning networks. Bioengineering. 2023;10(5):516. doi:10.3390/bioengineering10050516

22. Fedorov A, Beichel R, Kalpathy-Cramer J, et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn Reson Imaging. 2012;30(9):1323–1341. doi:10.1016/j.mri.2012.05.001

23. Siddique N, Paheding S, Elkin CP, Devabhaktuni V. U-Net and its variants for medical image segmentation: a review of theory and applications. IEEE Access. 2021;9:82031–82057. doi:10.1109/ACCESS.2021.3086020

24. Ratke A, Darsht E, Heinzelmann F, Kröninger K, Timmermann B, Bäumer C. Deep-learning-based deformable image registration of head CT and MRI scans. Front Phys. 2023;11:1292437. doi:10.3389/fphy.2023.1292437

25. Broder J, Preston R. Imaging the head and brain. In: Diagnostic Imaging for the Emergency Physician. Elsevier; 2011:1–45. doi:10.1016/B978-1-4160-6113-7.10001-8

26. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb). 2012;22(3):276–282.

27. Ichikawa Y, Kanemaki N, Kanai K. Breed-specific skull morphology reveals insights into canine optic chiasm positioning and orbital structure through 3D CT scan analysis. Animals. 2024;14(2):197. doi:10.3390/ani14020197

28. Ekenstedt KJ, Crosse KR, Risselada M. Canine brachycephaly: anatomy, pathology, genetics and welfare. J Comp Pathol. 2020;176:109–115. doi:10.1016/j.jcpa.2020.02.008

29. Sun H, Xi Q, Fan R, et al. Synthesis of pseudo-CT images from pelvic MRI images based on an MD-CycleGAN model for radiotherapy. Phys Med Biol. 2022;67(3):035006. doi:10.1088/1361-6560/ac4123

30. Jabbarpour A, Mahdavi SR, Vafaei Sadr A, Esmaili G, Shiri I, Zaidi H. Unsupervised pseudo CT generation using heterogenous multicentric CT/MR images and CycleGAN: dosimetric assessment for 3D conformal radiotherapy. Comput Biol Med. 2022;143:105277. doi:10.1016/j.compbiomed.2022.105277
