Occult shock can be defined as a state of tissue hypoperfusion despite normal macrocirculatory parameters and is associated with higher morbidity and mortality in people.1,2 Therefore, early shock detection in a seemingly stable patient can be essential to a successful outcome. Effective hemorrhage detection requires both time sensitivity (ie, early detection) and patient specificity (ie, identifying individual patient pathophysiological status). Identifying occult shock using the traditional clinical approach relies on measures of legacy vital signs (eg, heart rate, arterial blood pressure), urine characteristics (eg, volume, specific gravity, color), and blood chemistry analysis (eg, systemic lactate concentration). This approach fails to provide adequate time sensitivity and patient specificity because these measures remain in normal or near-normal ranges during compensation phases; it is also subject to significant interindividual variability and offers little information for specific early diagnosis.3–6 This failure of current approaches to provide accurate and early detection of hemorrhage likely results from measuring only parts of a complex, integrated cascade of interrelated physiological compensatory mechanisms that function to avoid reduced tissue oxygenation after significant blood loss.
Compelling evidence from the basic and clinical literature now supports measuring changing features in the arterial waveform, which contains information about the integrated compensatory responses of individuals to blood loss.7–9 As such, it is important that a hemorrhage detection monitor includes the capability to obtain and analyze analog arterial pressure waveforms.
Over the past decade, the US Army Institute of Surgical Research (USAISR) and collaborators have developed a new monitoring capability to meet the criteria for early hemorrhage detection. Known as the compensatory reserve measurement (CRM), this technology applies AI to arterial waveform signals obtained noninvasively from photoplethysmography analog recordings and provides continuous assessment of the body's ability to compensate for reductions in central blood volume experienced during hemorrhage.10 The CRM analyzes changes in features of the arterial waveforms in real time and indicates impending hemodynamic decompensation specific to the individual patient.11 The CRM provides an earlier assessment, with higher sensitivity and specificity for predicting patient status, than standard vital signs and other hemodynamic and metabolic measurements. The CRM is the first monitoring capability that fully integrates, and recognizes individual differences in, the physiological compensatory mechanisms that protect against inadequate blood pressure regulation, systemic blood flow, and tissue oxygenation associated with hypovolemia.8,11
The CRM algorithms were originally developed based on machine learning (ML) and deep learning (DL) approaches that incorporated analyses of hundreds of specific features from a learning library of more than 650,000 arterial waveforms collected from 201 human subjects who experienced progressive central hypovolemia from baseline rest to the onset of hemodynamic decompensation. With this approach, the CRM has been "trained" to identify the current compensatory status of each individual patient without requiring a patient's baseline.11–13 Investigators have also validated CRM values obtained by pulse oximetry against values generated from arterial line waveforms, enabling noninvasive CRM measurement in people; this pulse oximetry application has received FDA clearance.14
While initial approaches used a DL AI model to predict compensatory levels,15 recent approaches have successfully used simpler ML decision tree approaches with extracted arterial waveform features as the model input.12,16 The ML models take a more traditional approach that makes the model more explainable, unlike DL models, which examine the signal and present an output with no additional explainability. Additionally, the USAISR has developed blood loss monitoring metrics in canine hemorrhage models.17 These metrics accurately predicted blood loss in a canine hemorrhage model and detected hemorrhage earlier than traditional vital signs, which are lagging indicators of hypovolemic shock because of a subject's compensatory mechanisms. However, these ML models were computationally intensive for processing the arterial signals and for developing, training, and testing the models.
Currently, the determination of volume status and tissue perfusion continues to be challenging in veterinary medicine.18 Using clinical judgment and the surrogates of volume status and tissue perfusion outlined above, veterinarians and their teams can spend significant intellectual capital in identifying if or when a patient is fully, but not over-, resuscitated. Veterinary teams would greatly benefit from technology that can provide an accurate prediction of the need for fluid resuscitation in real time. This technology could be essential not only in the emergency room and ICU settings but equally so in the operating room. Unlike human medicine, where highly trained physicians or advanced practice nurses typically deliver and monitor anesthesia, these tasks in veterinary medicine sometimes fall to less trained personnel under the supervision of a veterinarian. As such, veterinarians often find themselves tasked not only with concentrating on the technical challenges of the emergent surgeries at hand but also with simultaneously guiding anesthesia and monitoring of their critical patients. Additionally, veterinary assets on the battlefield charged with caring for military working dog combat casualties may have staffing and equipment limitations that warrant additional technological support to optimize their medical response. An easy-to-use and reliable volume status monitor would help reduce the cognitive burden on veterinarians, assist veterinary technicians with making informed decisions about fluid requirements, and improve the care of injured dogs needing emergent resuscitation and surgery. Finally, such a tool could strengthen rigor and reproducibility in translational research using canine models of hemorrhagic shock by guiding resuscitation in a protocolized fashion.
The main objective of this study was to determine if the CRM algorithm validated in humans and baboons13,19 can be applied to canines by testing it against a dataset obtained from a canine controlled hemorrhage/resuscitation model. The authors hypothesized that the human CRM algorithm could be used on dogs to predict the onset of compensated hemorrhagic shock. The secondary objective of the study was to determine if a simpler waveform analysis could be used to predict the percentage of blood loss in the model instead of the more complex, computationally intensive models developed in previous work. The authors hypothesized that ML waveform analysis could predict the percentage of blood loss with at least 80% accuracy.
Methods
Data collection during canine study
This study was approved by the IACUC at the University of Utah (21–01012) with second-level review by the Department of Defense. A total of 6 adult (1 to 3 years old) male, sexually intact, purpose-bred dogs weighing between 30 and 50 kg were subjected to 5 independent rounds of controlled hemorrhage while anesthetized. All dogs were deemed normal based on physical examination, CBC, and biochemical profile prior to the onset of the study. Briefly, each dog was fasted for 8 to 12 hours and then premedicated with midazolam (0.3 mg/kg, IV) and anesthetized with fentanyl (5 μg/kg, IV) and propofol (2 to 4 mg/kg, IV). Anesthesia was initially maintained with isoflurane via endotracheal tube as needed until the dog was transitioned to total IV anesthesia with propofol (1 to 20 mg/kg/h), midazolam (0.1 to 0.5 mg/kg/h), and fentanyl (0.05 to 0.3 μg/kg/min). Each dog underwent controlled hemorrhage through a jugular catheter to a target mean arterial pressure (MAP) < 35 mm Hg or 40% of estimated total blood volume (90 mL/kg of body weight), whichever endpoint occurred first. These dogs were maintained in a post-hemorrhage shock hold for 45 minutes before resuscitation with 1 of 5 treatments assigned randomly: (1) lactated Ringer solution/hetastarch; (2) chilled whole blood; (3) packed RBCs and fresh frozen plasma; (4) a hemoglobin-based oxygen carrier and canine freeze-dried plasma; or (5) a hemoglobin-based oxygen carrier, canine freeze-dried plasma, and canine lyophilized platelets. Three hours after the start of the hemorrhage, the dogs were weaned off anesthesia and monitored for 24 hours. All dogs were allowed at least 4 weeks between rounds for adequate recovery before being anesthetized, subjected to hemorrhage, and resuscitated with a different resuscitation strategy. All 6 dogs underwent 5 hemorrhage and resuscitation rounds and ultimately survived to be adopted after the study.20
During each hemorrhage and resuscitation event, dogs were instrumented with a 5-French catheter in their left or right femoral artery. This catheter was connected to a pressure transducer, which fed into a data acquisition device that continuously captured arterial waveforms at 1 kHz, which were later downsampled to 500 Hz.
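As a minimal sketch of this acquisition step (an assumed workflow, not the authors' code), the rate conversion can be done with MATLAB's polyphase resampler; abp_1khz is a hypothetical vector holding one recorded channel:

```matlab
% Downsample a 1 kHz arterial pressure record to 500 Hz; resample applies
% an anti-aliasing filter internally (Signal Processing Toolbox).
fs_in  = 1000;                                  % acquisition rate (Hz)
fs_out = 500;                                   % analysis rate (Hz)
abp_500hz = resample(abp_1khz, fs_out, fs_in);  % rational-rate resampling
t = (0:numel(abp_500hz)-1)' / fs_out;           % time vector (s)
```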
Overview of waveform analysis for tracking hemorrhage
Three different waveform analysis methods were used in this effort. Two analysis models were previously developed using human (h) datasets for measuring compensatory reserve. One uses a DL model framework (the human compensatory reserve measurement [hCRM-DL]), and the second uses an ML framework paired with extracted features from the arterial waveform (hCRM-ML). These features were based on past work in human simulated blood loss performed at the USAISR, which downsized the thousands of features used by Gonzalez et al17 to the 54 used by Bedolla et al12 in an attempt to predict blood loss in the canines with a simpler model. The final waveform analysis model was purpose-built for this canine (c) application for tracking blood loss volume. Details for each model are described in their respective sections below.
Human compensatory reserve measurement DL model
Investigators evaluated the canine datasets using the original hCRM-DL algorithm, which was trained with human datasets and used a DL 1-D convolutional neural network to predict compensatory status from an arterial waveform input to the model.15 Briefly, a 20-second window of arterial data, further downsampled to 100 Hz, was input to the hCRM-DL model across the baseline, hemorrhage, and post-hemorrhage shock-hold regions to create CRM predictions for each dog. The resulting CRM predictions were aggregated for each dog and across all the subjects.
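A minimal sketch of this input-shaping step under the stated window length and rate (the model handle hcrmNet and all variable names are hypothetical; the trained network itself is described in reference 15):

```matlab
% Decimate the 500 Hz arterial record to 100 Hz, then cut it into
% nonoverlapping 20-second (2,000-sample) windows for the 1-D CNN.
abp_100hz = resample(abp_500hz, 100, 500);       % 500 Hz -> 100 Hz
win  = 20 * 100;                                 % 20 s at 100 Hz
nWin = floor(numel(abp_100hz) / win);
X = reshape(abp_100hz(1:nWin*win), win, nWin)';  % one window per row
% crm = predict(hcrmNet, X);                     % hypothetical model call
```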
Arterial waveform feature extraction analysis
Machine learning models used in this effort relied on extracted arterial waveform features processed using MATLAB, version 2023b (MathWorks), similar to previously used methods.12 First, the arterial waveform was passed through a windowed finite impulse response lowpass filter. Landmarks of the waveform were identified as peaks and troughs found through analysis of the first and second derivatives of the arterial waveform. The 5 landmarks used for feature extraction were the pulse foot, the systolic peak, the half rise between the pulse foot and the systolic peak, the inflection point following the systolic peak, and the end of the pulse (the pulse foot of the following pulse). A total of 54 features based on time and magnitude relationships between these landmarks were extracted for each arterial pulse across the entire dataset, as detailed previously.12 These extracted features were the input for 2 different ML models in this study.
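A minimal sketch of this preprocessing under assumed parameters (the paper does not state the filter order or cutoff here; 15 Hz and order 100 are placeholders, and abp_500hz continues the earlier sketch):

```matlab
% Windowed FIR lowpass filter (default Hamming window), applied with
% zero-phase filtering so landmark timing is not shifted.
fs = 500;                                   % sampling rate (Hz)
b  = fir1(100, 15/(fs/2));                  % 100th-order FIR lowpass, ~15 Hz
abp_f = filtfilt(b, 1, abp_500hz);

% Derivative-based landmark candidates: systolic peaks and pulse feet;
% the inflection point after each systolic peak lies at a zero crossing
% of the second derivative d2.
d1 = gradient(abp_f) * fs;                  % first derivative (mm Hg/s)
d2 = gradient(d1) * fs;                     % second derivative
[~, sysLocs]  = findpeaks(abp_f,  'MinPeakDistance', 0.3*fs);  % systolic peaks
[~, footLocs] = findpeaks(-abp_f, 'MinPeakDistance', 0.3*fs);  % pulse feet
```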
Human compensatory reserve measurement ML model
In addition to the hCRM-DL model, the investigators also evaluated the canine datasets using a simpler, recently developed hCRM-ML model.12 The hCRM-ML model uses an ensemble-bagged decision tree model with an 8-leaf size and 30 learners, developed using the MATLAB Regression Learner toolbox. The existing hCRM-ML models were used for canine analysis, wherein data were input as blind test sets to the trained models after features were extracted from the arterial waveform. Ten features served as the input to the hCRM-ML model, as described previously.12
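The stated model configuration corresponds to the following programmatic form (a sketch; XtrainFeat, crmTrain, and XtestFeat are hypothetical human-training and canine-testing arrays):

```matlab
% Ensemble-bagged regression trees: 30 learners, minimum leaf size 8
% (Statistics and Machine Learning Toolbox).
tree = templateTree('MinLeafSize', 8);
mdl  = fitrensemble(XtrainFeat, crmTrain, ...
    'Method', 'Bag', 'NumLearningCycles', 30, 'Learners', tree);
crmPred = predict(mdl, XtestFeat);   % blind canine features as test input
```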
Development of an ML blood loss volume model
The original CRM algorithm was based on a 0-to-1 value calculated as 1 minus the ratio of the current pressure experienced in a lower-body negative pressure (LBNP) model of human hemorrhage to the LBNP pressure at which subjects decompensated in a study13 of humans. An analog of this equation was used for the canine dataset, as the animals were subjected to hemorrhage likely beyond compensation. Thus, a metric was set up to estimate blood loss in canines, termed the canine blood loss volume metric (cBLVM), as shown in Equation 1.
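Equation 1 itself is not reproduced in this extract. Based on the stated analogy to the original CRM formula and the use of hemorrhage volumes over time as ground truth, a plausible form is sketched below, where $V_{\mathrm{hem}}(t)$ is the cumulative hemorrhage volume at time $t$ and $V_{\mathrm{total}}$ is the total volume removed for that subject; this reconstruction is an assumption, not the authors' published equation:

$$\mathrm{cBLVM}(t) = 1 - \frac{V_{\mathrm{hem}}(t)}{V_{\mathrm{total}}}$$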
Hemorrhage volumes over time were used for each canine dataset to calculate cBLVM ground-truth (gold standard) values. Similar to the hCRM-ML model, the ensemble-bagged decision tree model was chosen for developing cBLVM. Each of the 54 arterial waveform features that were previously used12 was extracted from the captured canine datasets and ranked using the minimal-redundancy–maximal-relevance criterion.21 This was done to identify the features most correlated with cBLVM and least redundant with one another. The resulting top 5 features, shown alongside their minimal-redundancy–maximal-relevance scores in Table 1, were used for training the ensemble-bagged decision tree cBLVM model.
Table 1—Top 5 features using the minimal-redundancy–maximal-relevance (MRMR) feature ranking for the canine blood loss volume metric machine learning model.

| Rank | Feature | MRMR score |
|---|---|---|
| 1 | Peak-to-peak interval | 0.55 |
| 2 | Area under the curve of the systolic rise | 0.53 |
| 3 | Area from the systolic maximum to the start of the next beat, minus the pressure at the first inflection point, normalized by the number of samples in the entire waveform | 0.52 |
| 4 | Duration of the systolic rise | 0.44 |
| 5 | Area under the curve of the systolic decay | 0.40 |
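A sketch of this ranking step: the paper names the MRMR criterion, and fsrmrmr is MATLAB's regression-target variant of it (R2022a+); featMat, a pulses-by-54 feature matrix, and cblvmTruth are hypothetical names:

```matlab
% Rank all 54 waveform features against the calculated cBLVM target
% using minimal-redundancy-maximal-relevance.
[idx, scores] = fsrmrmr(featMat, cblvmTruth);
top5 = idx(1:5);                 % indices of the 5 highest-ranked features
X = featMat(:, top5);            % reduced input for the cBLVM model
```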
To prevent bias in the training and testing of the cBLVM model, a cross-validation technique known as leave 1 subject out (LOSO) was used. The LOSO process was done 6 times, where each subject was held out for blind testing while the rest of the data was used for training. For example, a cBLVM model was trained with canines 1 through 5, followed by blind testing with the canine 6 dataset, resulting in an unbiased cBLVM prediction for that subject.
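The LOSO loop under the stated design can be sketched as follows (subjID, a per-row dog label from 1 to 6, is a hypothetical bookkeeping vector; X and cblvmTruth continue the sketch above):

```matlab
% Leave-one-subject-out cross-validation: train on 5 dogs, blind-test
% on the held-out dog, so each prediction is unbiased for that subject.
pred = nan(size(cblvmTruth));
for k = 1:6
    test = (subjID == k);
    mdl  = fitrensemble(X(~test,:), cblvmTruth(~test), ...
        'Method', 'Bag', 'NumLearningCycles', 30, ...
        'Learners', templateTree('MinLeafSize', 8));
    pred(test) = predict(mdl, X(test,:));   % blind prediction for dog k
end
```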
Statistical analysis
For comparing results across each advanced metric model and MAP, data were rescaled between key experimental regions because the length of each region was inconsistent between canines. Specifically, data were resampled so that 100 data points remained for each of 3 key regions: (1) baseline, (2) hemorrhage, and (3) post-hemorrhage shock hold. For hCRM-DL, hCRM-ML, and cBLVM, data were first smoothed using a moving mean window of 500 samples to extract more general trends.
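A minimal sketch of this rescaling (regionIdx, a cell array of sample ranges for the 3 regions, and metric are hypothetical names):

```matlab
% Smooth with a 500-sample moving mean, then interpolate each
% experimental region onto a common 100-point grid.
sm = movmean(metric, 500);
rescaled = [];
for r = 1:3                          % baseline, hemorrhage, shock hold
    seg = sm(regionIdx{r});
    q   = linspace(1, numel(seg), 100);
    rescaled = [rescaled; interp1(1:numel(seg), seg, q)'];  %#ok<AGROW>
end
```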
Three main criteria were used to quantify model performance in this study. First, linear regression was used to determine how well the models tracked gold-standard values. For the CRM models, predictions were compared against MAP for each blind canine testing dataset; for cBLVM, predictions were compared against the calculated cBLVM based on Equation 1. Root mean squared error (RMSE) and coefficient of determination (R2) results were aggregated for each model for this analysis. Second, receiver operating characteristic (ROC) curves were constructed based on each model's accuracy in predicting a binary nonhemorrhage or hemorrhage state. For this analysis, only data for the first 2 regions were analyzed: baseline (ie, nonhemorrhage) and hemorrhage. Predictions were classified across a range of decision thresholds to calculate true-positive and false-positive rates. The area under the ROC (AUROC) curve was estimated by the trapezoidal rule (MATLAB, version 2023b; MathWorks) for MAP, hCRM-DL, hCRM-ML, and cBLVM. The final criterion was time to hemorrhage detection for each metric. For this analysis, each data point was categorized as positive or negative for hemorrhage based on a threshold value set at the 25th percentile of baseline values for each hemorrhage dataset; this threshold proved a more stable criterion for hemorrhage detection than the lower threshold values examined. Hemorrhage prediction times were evaluated for MAP, hCRM-DL, hCRM-ML, and cBLVM and were calculated as the difference between hemorrhage start time and model hemorrhage prediction time.
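A sketch of the ROC construction and detection rule as described, with hypothetical variable names (score is a model output over baseline plus hemorrhage samples, isHem is the binary ground truth, lower scores are treated as indicating hemorrhage, and detection assumes 5 consecutive positive classifications as stated in the figure legends):

```matlab
% ROC by sweeping decision thresholds; AUROC by trapezoidal rule.
thr = unique(score);
tpr = arrayfun(@(t) mean(score(isHem)  <= t), thr);  % sensitivity
fpr = arrayfun(@(t) mean(score(~isHem) <= t), thr);  % 1 - specificity
[fprS, ord] = sort(fpr);
auroc = trapz(fprS, tpr(ord));

% Detection threshold: 25th percentile of this dataset's baseline values.
detThr = prctile(score(~isHem), 25);
pos    = double(score <= detThr);                    % per-sample class
iDet   = find(movsum(pos, [4 0]) == 5, 1, 'first');  % 5 consecutive positives
```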
Additional analysis was performed to compare differences between the prediction models for the AUROC and hemorrhage prediction time metrics. A Shapiro-Wilk test (Prism, version 10.3; GraphPad) was used to assess normality; for each statistical comparison, a portion of the datasets was non-normally distributed. As such, we used a repeated-measures Friedman test in which the 5 replicate hemorrhage events for each subject were the repeated measures. A post hoc Nemenyi test was used to determine statistical differences between each model pairing, with P values less than .05 denoting statistical significance. These tests were performed using RStudio, version 4.4.1 (Posit PBC). Differences are denoted in the Results where appropriate.
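The authors ran these tests in R; for illustration only, a comparable Friedman test can be sketched in MATLAB (Y is a hypothetical 30 x 4 matrix, with rows as 6 dogs x 5 replicate hemorrhages and columns as MAP, hCRM-DL, hCRM-ML, and cBLVM; MATLAB has no built-in Nemenyi test, which is available in R, eg, via the PMCMRplus package):

```matlab
% Friedman test with 5 replicate rows per subject block; 'off'
% suppresses the ANOVA table figure.
[p, tbl, stats] = friedman(Y, 5, 'off');
% multcompare(stats) provides MATLAB's own post hoc pairwise
% comparisons (not the Nemenyi procedure used in the paper).
```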
Results
Compensatory reserve measurement results for tracking hemorrhage in canines
On average, the MAP decreased during hemorrhage onset and rebounded during the post-hemorrhage shock-hold region even though no infusate was given (Figure 1; red line). The compensatory reserve measurement was calculated using 2 different models: hCRM-DL and hCRM-ML. For the hCRM-DL model, an average metric value of 0.41 ± 0.12 was predicted during the baseline region (Figure 1; blue line). During hemorrhage, hCRM-DL reached a minimum value earlier than MAP, but the drop in magnitude was small. Trends in hCRM-DL were inconsistent after the minimum value was reached, and average values ranged from 0.30 to 0.45, a small dynamic range for monitoring physiological status. Human compensatory reserve using machine learning was on average 0.88 ± 0.11 in the baseline region (Figure 1; green line). During hemorrhage, hCRM-ML dropped consistently, and earlier than MAP did. Similar to MAP, hCRM-ML increased during the post-hemorrhage shock-hold window. Overall, hCRM-ML had a wider operating range, with values spanning 0.45 to 0.95.
Summary of average results for compensatory reserve measurement (CRM) and mean arterial pressure (MAP) for canines subjected to controlled hemorrhage and post-hemorrhage shock hold. Average CRM deep learning (hCRM-DL) (blue), CRM machine learning (hCRM-ML) (green), and MAP (red) versus relative time across the baseline, hemorrhage, and post-hemorrhage shock-hold region.
Comparing algorithm predictions, hCRM-ML correlated more accurately with MAP, with an R2 value of 0.61 compared to 0.38 for hCRM-DL (Figure 2). This is further reflected by the RMSE results, with hCRM-ML having an error of 8.38, whereas hCRM-DL was higher at 11.39 (Figure 2). To further evaluate these metrics for accurately predicting hemorrhage onset, the investigators developed ROC curves for hCRM-ML, hCRM-DL, and MAP for classifying baseline versus hemorrhage regions (Figure 2). Overall, hCRM-DL had the lowest capability to distinguish these hemorrhage states, with an AUROC of 0.60 (Figure 2). Human compensatory reserve using machine learning performed with greater sensitivity and specificity for distinguishing hemorrhage (AUROC, 0.73) than either hCRM-DL (AUROC, 0.60) or MAP alone (AUROC, 0.67). While hCRM-DL had less accuracy (ie, lower AUROC), it outperformed hCRM-ML and MAP at identifying hemorrhage onset, with a delay of 28.1 minutes compared to 41.6 and 43.3 minutes for hCRM-ML and MAP, respectively (Figure 2).
Comparison of hCRM-ML and hCRM-DL for tracking hemorrhage in canines. A—Average coefficient of determination (R2) and (B) root mean squared error (RMSE) for CRM models versus MAP for each canine test data set. C—Measurement of hemorrhage detection time relative to hemorrhage start time for MAP, hCRM-ML, and hCRM-DL based on 5 consistent hemorrhage class predictions. D—Receiver operating characteristic (ROC) curves and (E) area under the ROC (AUROC) values for MAP, hCRM-ML, and hCRM-DL models for categorizing baseline and hemorrhage regions.
Characterization of a blood loss volume ML model for tracking hemorrhage in canines
The cBLVM predictions were calculated for each subject, and for each hemorrhage replicate round separately, using the LOSO cross-validation training approach (see Methods section). Coefficient of determination and RMSE metrics are shown in Supplementary Tables S1 and S2, respectively. The average R2 and RMSE metrics for the cBLVM model compared to the calculated cBLVM were 0.74 and 0.16, respectively, outperforming hCRM-ML and hCRM-DL. The cBLVM model made the best predictions for canine 6, with an R2 of 0.86 and an RMSE of 0.12, whereas the model's worst performance for the cBLVM was on canine 3, with an R2 of 0.46 and an RMSE of 0.24.
On average, the predicted cBLVM trended with the calculated cBLVM score and closely tracked MAP during baseline and hemorrhage regions (Figure 3). The predicted cBLVM value remained mostly constant at baseline and dropped consistently throughout the hemorrhage period. After hemorrhage, during the post-hemorrhage shock hold, the predicted cBLVM held at a low score even as MAP began trending upward during this period.
Summary of average canine blood loss volume metric (cBLVM) results for tracking hemorrhage for dogs subjected to controlled hemorrhage and post-hemorrhage shock hold. Average calculated cBLVM (green line), predicted cBLVM (blue line), and MAP (red line) are shown versus relative time across the baseline, hemorrhage, and post-hemorrhage shock-hold region.
To further evaluate cBLVM, its performance was compared to the hCRM-ML and hCRM-DL algorithm outputs. Root mean squared error and R2 metrics were stronger for cBLVM, but this may be because cBLVM was compared to its calculated cBLVM gold standard rather than to MAP, against which hCRM-ML and hCRM-DL were compared. However, ROC analysis validated the strong performance of the cBLVM model for this application, with a high AUROC of 0.81 versus 0.73 for the closest model, hCRM-ML (Figure 4). By this measure, the cBLVM model was significantly different from both MAP (P < .015) and hCRM-DL (P < .0009). Canine BLVM also had the earliest hemorrhage detection time, with a delay of 24.7 minutes, with the next closest model, hCRM-DL, at 28.1 minutes (Figure 4). For detection time, the cBLVM model differed significantly from hCRM-ML (P < .0001) and MAP (P < .0001), whereas hCRM-DL did not differ significantly (P = .07). In addition, the hCRM-DL model detected hemorrhage significantly earlier than the hCRM-ML model (P < .05). A signal-to-noise analysis across the study best reflects the greater sensitivity and specificity of cBLVM compared to MAP and the CRM algorithms. In this regard, cBLVM metric scores, evaluated as ratios of threshold scores for distinguishing between baseline and hemorrhage regions, were found to have a larger dynamic operating range than the other metrics (Figure 4). Further, while hCRM-DL provided early hemorrhage detection, this detection trend did not continue across the study as it did for cBLVM.
Comparison of cBLVM, hCRM-ML, hCRM-DL, and MAP for tracking hemorrhage in canines subjected to controlled hemorrhage. A—Receiver operating characteristic curves and (B) AUROC scores for MAP, hCRM-DL, hCRM-ML, and cBLVM. C—Measurement of hemorrhage detection time relative to hemorrhage start time for MAP, hCRM-ML, and hCRM-DL based on 5 consistent hemorrhage class predictions. Statistically significant differences as determined by the Friedman test with post hoc Nemenyi test are denoted by asterisks. D—Signal-to-noise analysis for each metric relative to its threshold score for distinguishing baseline and hemorrhage regions. Each region and its 100 data points are plotted on the y-axis, whereas scores from MAP, hCRM-DL, hCRM-ML, and cBLVM are shown on the x-axis. Scores above 1, indicating the baseline region identified, are shown in blue. In contrast, a gradient from light red to dark red is shown for the hemorrhage region, wherein smaller ratio scores and higher signals are indicated by darker red.
Discussion
This study showed that CRM models based on arterial waveform feature analysis developed for humans displayed high variability and relatively low accuracy when applied to dogs subjected to controlled hemorrhage. While the hCRM-DL output decreased more rapidly in the face of hemorrhage than MAP, prediction accuracies for detecting hemorrhage were low for both hCRM models, as shown by the AUROC scores. The authors hypothesized that CRM might fall from a baseline prehemorrhage level to a value near 0 (ie, exhaustion of all compensatory mechanisms). Contrary to this expectation, hCRM-DL started at a low level and decreased very little during hemorrhage. In contrast, hCRM-ML decreased only to 0.45 despite the animals reaching a physiological state of overt shock, indicated by a MAP < 50 mm Hg (Figure 1). These results were verified by the relatively low R2 values for hCRM-DL (0.38) and hCRM-ML (0.61) when correlated with MAP and the low AUROC for both algorithms (≤ 0.73). This stands in contrast to the performance of the CRM in humans. The clinical applicability of the CRM has been well documented by its ability to consistently generate greater discriminatory capacity (ie, higher ROC area under the curve values) than standard vital signs for predicting the presence of hypovolemia in patients with trauma and the need for blood transfusion.22,23 Even with significant variability from the complexities introduced by differences in severity and types of injury, hypothermia, pain, gender, and age demographics, the accuracy of the CRM algorithms for detecting hemorrhage in human patients has consistently demonstrated higher ROC area under the curve values (0.83 to 0.97) than the low clinical value provided by standard vital signs.24
There are several possible explanations for the lack of fit in these models. First, the human data used to develop the hCRM algorithms were derived from noninvasive arterial waveforms (volume clamp infrared finger photoplethysmography) rather than the direct arterial catheters used in this study. However, the use of noninvasive rather than invasive arterial waveform measurement is an unlikely reason for discrepancies in CRM model fitting, given that the infrared finger photoplethysmography technique produces blood pressure values that correlate highly in humans (r ≥ 0.93) with direct arterial catheter values.14 Second, size and body conformation vary inherently between humans and dogs, and the canine cardiovascular system has evolved in a horizontal, quadrupedal orientation rather than the upright posture of humans. This is reflected by the difference in the threshold for reduced oxygen delivery associated with the onset of hemorrhagic shock in dogs (approx 10 mL·kg−1·min−1) compared to humans (approx 5 to 5.5 mL·kg−1·min−1).8,25 This difference in oxygen delivery translates to an inherently smaller compensatory reserve in canines that can compromise the accuracy of applying any CRM algorithm derived from human data. Third, contraction of the canine spleen with subsequent release of RBCs represents a major compensatory mechanism for autoresuscitation in the face of hemorrhagic shock, contributing as much as 25% to 35% of lost RBCs in the dog.26 In contrast, the CRM was developed for people, whose spleen contributes only 1% to 12% of lost RBCs during hemorrhage8 and provides little capacity for autoresuscitation. The results of the present study support the notion that species differences may significantly confound the accuracy of translating hCRM-DL and/or hCRM-ML algorithms to canine subjects.
Since the original CRM algorithm developed from human LBNP experiments was based on the premise that changing features of the arterial waveform track alterations in circulating blood volume,13 the investigators chose to develop a leaner algorithm that targeted detection of a reduction in blood volume (cBLVM), in contrast to the more complex blood loss models developed previously.17 Not only did the leaner cBLVM model track the decrease in MAP during progressive blood loss with greater sensitivity and specificity (ie, an average AUROC of 0.81) than either the hCRM-ML (0.73) or hCRM-DL (0.60) algorithm, but cBLVM also detected hemorrhage the earliest (24.7 minutes), compared with the identification of a reduction in MAP (43.3 minutes; Figure 3). Thus, the tracking of blood loss in dogs using a simpler cBLVM model is consistent with previously reported human responses to progressive reductions in central blood volume using both hCRM-DL and hCRM-ML algorithms.15
This study reaffirms previously reported results that an arterial waveform analysis tool is clinically useful in predicting blood loss and can be derived using ML and algorithm development techniques.8,15 Analysis of the arterial waveforms enabled the investigators to predict the percentage of blood loss with a high degree of sensitivity and specificity, and the waveform analysis closely tracked the known hemorrhage volume in this canine experimental model. After severe injury, the total blood loss is often impossible to estimate, leaving veterinary personnel to establish requirements for resuscitation fluids based on previously described hemodynamic parameters that lack sensitivity and specificity. However, using the waveform analysis described here, the authors propose that an advanced monitoring tool can be developed for veterinary personnel to quickly and accurately determine the blood loss of an injured dog using a noninvasive arterial waveform analysis. The cBLVM model reported here was established with a known hemorrhage volume under laboratory conditions in order to derive appropriate algorithms. These algorithms are currently under investigation in dogs with naturally occurring closed abdominal hemorrhage. With additional testing in settings of natural blood loss, the investigators hope to be able to estimate the percentage of blood loss by waveform analysis from either direct arterial waveforms or waveforms from plethysmography. These models could be refined to function irrespective of known blood loss, as shown in a recent study.17 Further refinement and testing are in progress to prospectively determine the sensitivity and specificity of this technology. Also, clinical trials determining the accuracy of this technology in dogs with spontaneous, uncontrolled bleeding are needed.
The limitations of this study include the small sample size of 6 dogs; however, each dog was used 5 times, providing 30 datasets for model development and analysis. Additionally, the dogs in this study were of similar size and breed, so any extrapolation of these data to the overall canine population should be undertaken with caution. The hemorrhage model in this study did not include a component of trauma, so the body's response to significant tissue damage could not be examined in this dataset. Finally, the hCRM-DL and hCRM-ML algorithms were developed with datasets in which the reserve to compensate was directly measured in humans as the difference between each subject's resting baseline state and the onset of decompensated shock.10 This direct measurement of CRM has not been established in dogs, making the accuracy of applying human-derived CRM algorithms to assess canine compensatory status during blood loss uncertain.
When applied to a canine model of controlled hemorrhagic shock, hCRM models developed from human datasets did not perform well in tracking and predicting impending hemorrhage. However, analysis of arterial waveforms collected from dogs with the cBLVM for tracking blood loss volume during controlled hemorrhage resulted in strong prediction accuracy and earlier detection of hemorrhage than the hCRM-DL and hCRM-ML models or MAP could provide. Overall, the simpler cBLVM model may, with further refinement, provide an accurate triage tool for assessing the need for resuscitative intervention in dogs while requiring low computational power compared to the models previously used to predict blood loss. Still, prospective evaluation of this model is needed to confirm its clinical potential.
Supplementary Materials
Supplementary materials are posted online at the journal website: avmajournals.avma.org.
Acknowledgments
None reported.
Disclosures
The opinions or assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the Department of the Army or the Department of Defense.
No AI-assisted technologies were used in the generation of this manuscript.
Funding
Funded by the US Department of Defense (contract No. W81XWH-21-C-0002). This project was supported in part by an appointment to the Science Education Programs at the NIH, administered by Oak Ridge Associated Universities through the US Department of Energy Oak Ridge Institute for Science and Education.
ORCID
G. Hoareau https://orcid.org/0000-0002-8635-3960
T. Edwards https://orcid.org/0000-0002-1706-9536
J. M. Gonzalez https://orcid.org/0000-0002-4325-409X
E. Snider https://orcid.org/0000-0002-0293-4937
S. H. Torres https://orcid.org/0000-0002-0764-519X
References
1. Hatton GE, McNutt MK, Cotton BA, Hudson JA, Wade CE, Kao LS. Age-dependent association of occult hypoperfusion and outcomes in trauma. J Am Coll Surg. 2020;230(4):417–425. doi:10.1016/j.jamcollsurg.2019.12.011
2. Neville AL, Nemtsev D, Manasrah R, Bricker SD, Putnam BA. Mortality risk stratification in elderly trauma patients based on initial arterial lactate and base deficit levels. Am Surg. 2011;77(10):1337–1341. doi:10.1177/000313481107701014
3. Fox EE, Holcomb JB, Wade CE, et al. Earlier endpoints are required for hemorrhagic shock trials among severely injured patients. Shock. 2017;47(5):567–573. doi:10.1097/SHK.0000000000000788
4. Lynch AM, deLaforcade AM, Meola D, et al. Assessment of hemostatic changes in a model of acute hemorrhage in dogs. J Vet Emerg Crit Care (San Antonio). 2016;26(3):333–343. doi:10.1111/vec.12457
5. Smart L, Boyd CJ, Claus MA, Bosio E, Hosgood G, Raisis A. Large-volume crystalloid fluid is associated with increased hyaluronan shedding and inflammation in a canine hemorrhagic shock model. Inflammation. 2018;41(4):1515–1523. doi:10.1007/s10753-018-0797-4
6. Talbot CT, Zersen KM, Hess AM, Hall KE. Shock index is positively correlated with acute blood loss and negatively correlated with cardiac output in a canine hemorrhagic shock model. J Am Vet Med Assoc. 2023;261(6):874–880. doi:10.2460/javma.22.11.0521
7. Chew MS, Aneman A. Haemodynamic monitoring using arterial waveform analysis. Curr Opin Crit Care. 2013;19(3):234–241. doi:10.1097/MCC.0b013e32836091ae
8. Convertino VA, Koons NJ, Suresh MR. Physiology of human hemorrhage and compensation. Compr Physiol. 2021;11(1):1531–1574. doi:10.1002/cphy.c200016
9. Compton FD, Zukunft B, Hoffmann C, Zidek W, Schaefer JH. Performance of a minimally invasive uncalibrated cardiac output monitoring system (Flotrac/Vigileo) in haemodynamically unstable patients. Br J Anaesth. 2008;100(4):451–456.
10. Convertino VA, Schiller AM. Measuring the compensatory reserve to identify shock. J Trauma Acute Care Surg. 2017;82(suppl 1):S57–S65. doi:10.1097/TA.0000000000001430
11. Convertino VA, Wirt MD, Glenn JF, Lein BC. The compensatory reserve for early and accurate prediction of hemodynamic compromise: a review of the underlying physiology. Shock. 2016;45(6):580–590. doi:10.1097/SHK.0000000000000559
12. Bedolla CN, Gonzalez JM, Vega SJ, Convertino VA, Snider EJ. An explainable machine-learning model for compensatory reserve measurement: methods for feature selection and the effects of subject variability. Bioengineering (Basel). 2023;10(5):612. doi:10.3390/bioengineering10050612
13. Convertino VA, Grudic G, Mulligan J, Moulton S. Estimation of individual-specific progression to impending cardiovascular instability using arterial waveforms. J Appl Physiol (1985). 2013;115(8):1196–1202. doi:10.1152/japplphysiol.00668.2013
14. Roden RT, Webb KL, Pruter WW, et al. Physiologic validation of the compensatory reserve metric obtained from pulse oximetry for advanced medical monitoring on the battlefield. J Trauma Acute Care Surg. 2024;97(suppl 1):S98–S104. doi:10.1097/TA.0000000000004377
15. Convertino VA, Techentin RW, Poole RJ, et al. AI-enabled advanced development for assessing low circulating blood volume for emergency medical care: comparison of compensatory reserve machine-learning algorithms. Sensors (Basel). 2022;22(7):2642. doi:10.3390/s22072642
16. Gupta JF, Arshad SH, Telfer BA, Snider EJ, Convertino VA. Noninvasive monitoring of simulated hemorrhage and whole blood resuscitation. Biosensors (Basel). 2022;12(12):1168. doi:10.3390/bios12121168
17. Gonzalez JM, Edwards TH, Hoareau GL, Snider EJ. Refinement of machine learning arterial waveform models for predicting blood loss in canines. Front Artif Intell. 2024;7:1408029. doi:10.3389/frai.2024.1408029
18. Palmer L. Fluid management in patients with trauma: liberal versus restrictive approach. Vet Clin North Am Small Anim Pract. 2017;47(2):397–410. doi:10.1016/j.cvsm.2016.10.014
19. Hinojosa-Laborde C, Howard JT, Mulligan J, Grudic GZ, Convertino VA. Comparison of compensatory reserve during lower-body negative pressure and hemorrhage in nonhuman primates. Am J Physiol Regul Integr Comp Physiol. 2016;310(11):R1154–R1159. doi:10.1152/ajpregu.00304.2015
20. Edwards TH, Venn EC, Le TD, et al. Comparison of shelf-stable and conventional resuscitation products in a canine model of hemorrhagic shock. J Trauma Acute Care Surg. 2024;97(suppl 1):S105–S112. doi:10.1097/TA.0000000000004332
21. Ding C, Peng H. Minimum redundancy feature selection from microarray gene expression data. J Bioinform Comput Biol. 2005;3(2):185–205. doi:10.1142/S0219720005001004
22. Johnson MC, Alarhayem A, Convertino VA, et al. Compensatory reserve index: performance of a novel monitoring technology to identify the bleeding trauma patient. Shock. 2018;49(3):295–300. doi:10.1097/SHK.0000000000000959
23. Benov A, Yaslowitz O, Hakim T, et al. The effect of blood transfusion on compensatory reserve: a prospective clinical trial. J Trauma Acute Care Surg. 2017;83(suppl 1):S71–S76. doi:10.1097/TA.0000000000001474
24. Convertino VA, Cardin S. Advanced medical monitoring for the battlefield: a review on clinical applicability of compensatory reserve measurements for early and accurate hemorrhage detection. J Trauma Acute Care Surg. 2022;93(suppl 1):S147–S154. doi:10.1097/TA.0000000000003595
25. Koons NJ, Moses CD, Thompson P, Strandenes G, Convertino VA. Identifying critical DO2 with compensatory reserve during simulated hemorrhage in humans. Transfusion. 2022;62(suppl 1):S122–S129. doi:10.1111/trf.16958
26. Longhurst JC, Musch TI, Ordway GA. O2 consumption during exercise in dogs: roles of splenic contraction and alpha-adrenergic vasoconstriction. Am J Physiol Heart Circ Physiol. 1986;251(3):H502–H509. doi:10.1152/ajpheart.1986.251.3.H502