Purpose: For infants with congenital heart disease (CHD) requiring surgery, prolonged hospital stays, intermittent caregiver visitation, and constrained unit staffing ratios are barriers to adequately addressing post-operative stressors and preserving cognitive and physiological reserves. In other hospitalized populations requiring high-engagement interventions, such as infants with neonatal abstinence syndrome, responsive bassinets have been used successfully to soothe infants and save floor nurses' time. However, it remains unclear whether such technology can be leveraged in the CHD population given these infants' complex hemodynamics, feeding intolerance, and monitoring requirements.
Methods: This multidisciplinary feasibility study evaluated responsive bassinet use in a cohort of infants with CHD <6 months of age in a medical-surgical unit at a midwestern children's hospital. We specifically assessed 1) the requirements, challenges, and potential of implementing the device, together with 2) the ability to perform bedside monitoring during use and 3) physiologic trends during use.
Results: Between 11/2020 and 1/2022, nine infants used a responsive bassinet for a total of 599 h (mean 13 days per infant, range 4–26). Monitoring alarms did not increase, vital signs were monitored accurately during bassinet activity, and physiologic responses were appropriate for infants after both single-ventricle and biventricular surgeries.
Conclusions: Introducing this new technology into care and using its soothing functionality proved feasible for infants with CHD.
Practice implications: After cardiac surgery, infants with CHD need interventions that reduce stress. A soothing bassinet has the potential to aid in doing so without interfering with monitoring requirements.
The post-Norwood interstage period for infants with hypoplastic left heart syndrome is a high-risk time, with 10–20% of infants having a complication of recurrent coarctation of the aorta (RCoA). Many interstage programs utilize mobile applications allowing caregivers to submit home physiologic data and videos to the clinical team. This study aimed to investigate whether caregiver-entered data resulted in earlier identification of patients requiring interventional catheterization for RCoA. Retrospective home monitoring data were extracted from five high-volume Children’s High Acuity Monitoring Program®-affiliated centers (defined as contributing > 20 patients to the registry) between 2014 and 2021 after IRB approval. Demographics and caregiver-recorded data evaluated included weight, heart rate (HR), oxygen saturation (SpO2), video recordings, and ‘red flag’ concerns prior to interstage readmissions. 27% (44/161) of infants required interventional catheterization for RCoA. In the 7 days prior to readmission, associations with higher odds of RCoA included (mean bootstrap coefficient, [90% CI]) increased number of total recorded videos (1.65, [1.07–2.62]) and days of recorded video (1.62, [1.03–2.59]); increased number of total recorded weights (1.66, [1.09–2.70]) and days of weights (1.56, [1.02–2.44]); increasing mean SpO2 (1.55, [1.02–2.44]); and increased variation and range of HR (1.59, [1.04–2.51]) and (1.71, [1.10–2.80]), respectively. Interstage patients with RCoA had increased caregiver-entered home monitoring data including weight and video recordings, as well as changes in HR and SpO2 trends. Identifying these items by home monitoring teams may be beneficial in clinical decision-making for evaluation of RCoA in this high-risk population.
As the volume and complexity of health care data continue to grow, the field of gastroenterology has embraced the use of computational tools to identify, extract, and synthesize relevant information. Rapidly expanding from the use of medical record data only, machine learning and artificial intelligence (AI) techniques now routinely integrate data from procedural images and free-text documents (eg, clinical notes, academic articles, and online resources). Clinically, this has manifested in predictive models and risk stratification tools to improve prognosis, diagnosis, treatment, and patient management. For patients, unparalleled access to digital resources has facilitated engagement in their care. It is largely understood that these tools cannot, and should not, replace gastroenterologists. However, the ability to leverage this technology in an assistive capacity is fundamentally changing the way clinicians and patients interact with data.
Editorial on “ChatGPT answers common patient questions about colonoscopy,” by Lee T-S, Staller K, Botoman V, et al
Background: Acute flaccid myelitis (AFM) is a childhood illness characterized by sudden-onset weakness impairing function. The primary goal was to compare the motor recovery patterns of patients with AFM who were discharged home or to inpatient rehabilitation. Secondary analyses focused on recovery of respiratory status, nutritional status, and neurogenic bowel and bladder in both cohorts.
Methods: Eleven tertiary care centers in the United States performed a retrospective chart review of children with AFM between January 1, 2014, and October 1, 2019. Data included demographics, treatments, and outcomes on admission, discharge, and follow-up visits.
Results: Medical records of 109 children met inclusion criteria; 67 children required inpatient rehabilitation, whereas 42 children were discharged directly home. The median age was 5 years (range 4 months to 17 years), and the median time observed was 417 days (interquartile range = 645 days). Distal upper extremities recovered better than the proximal upper extremities. At acute presentation, children who needed inpatient rehabilitation had significantly higher rates of respiratory support (P < 0.001), nutritional support (P < 0.001), and neurogenic bowel (P = 0.004) and bladder (P = 0.002). At follow-up, those who attended inpatient rehabilitation continued to have higher rates of respiratory support (28% vs 12%, P = 0.043); however, the nutritional status and bowel/bladder function were no longer statistically different.
Conclusions: All children made improvements in strength. Proximal muscles remained weaker than distal muscles in the upper extremities. Children who qualified for inpatient rehabilitation had ongoing respiratory needs at follow-up; however, recovery of nutritional status and bowel/bladder were similar.
This study sought to identify sucking profiles among healthy, full-term infants and assess their predictive value for future weight gain and eating behaviors. Pressure waves of infant sucking were captured during a typical feeding at age 4 months and quantified via 14 metrics. Anthropometry was measured at 4 and 12 months, and eating behaviors were measured by parent report via the Children's Eating Behavior Questionnaire-Toddler (CEBQ-T) at 12 months. Sucking profiles were created using a clustering approach on the pressure wave metrics, and the utility of these profiles was assessed for predicting which infants would have weight-for-age (WFA) percentile changes from ages 4-12 months exceeding thresholds of 5, 10, and 15 percentiles, and for estimating each CEBQ-T subscale score. Among 114 infants, three sucking profiles were identified: Vigorous (51%), Capable (28%), and Leisurely (21%). Sucking profiles improved estimation of change in WFA from 4 to 12 months and 12-month maternal-reported eating behaviors beyond infant sex, race/ethnicity, birthweight, gestational age, and pre-pregnancy body mass index alone. Infants with a Vigorous sucking profile gained significantly more weight during the study period than infants with a Leisurely profile. Infant sucking characteristics may aid in predicting which infants are at greater risk of obesity, and sucking profiles therefore warrant further investigation.
Objective: Infants receiving parenteral nutrition (PN) are at increased risk of PN-associated liver disease (PNALD), which can lead to hepatic fibrosis. Congenital heart disease (CHD) represents a risk factor for hepatic fibrosis, so this study sought to better understand whether infants with CHD were at elevated risk of PNALD when receiving long-term PN.
Study design: This study includes a retrospective cohort of infants at a level IV neonatal intensive care unit from 2010 to 2020 who received long-term PN during the first 8 weeks of life. A time-varying Cox survival model was used to model risk of PNALD between infants with and without CHD, adjusted for demographics, surgical intervention, and PN exposure, using a 5000-iteration bootstrap estimation. Secondary analyses evaluated risk against discrete CHD diagnoses, and sensitivity analysis was performed on the magnitude and quantity of direct bilirubin laboratory measurements making up the PNALD definition.
Results: Neonates with CHD were found to be at higher risk for PNALD during or soon after long-term PN exposure. A pattern of increasing association strength with increasing bilirubin threshold suggests infants with CHD may also experience higher degrees of injury.
Conclusions: This work offers a step in understanding how diagnoses can be factored into neonate PN prescription. Future work will explore modifications in lipid profiles and timing to mitigate risk in patients with CHD.
Background: Hepatic artery thrombosis (HAT) is a reported complication of 5%-10% of pediatric liver transplantations, rates 3-4 times those seen in adults. Early HAT (occurring within 14 days after transplant) can lead to severe allograft damage and possible urgent re-transplantation. In this report, we present our analysis of HAT in pediatric liver transplant from a national clinical database and examine the association of HAT with anticoagulant or antiplatelet medication administered in the post-operative period.
Methods: Data were obtained from the Pediatric Health Information System database maintained by the Children's Hospital Association. For each liver transplant recipient identified in a 10-year period, diagnosis, demographic, and medication data were collected and analyzed.
Results: Our findings showed an average rate of HAT of 6.3% across 31 centers. Anticoagulant and antiplatelet medication strategies varied distinctly among and even within centers, likely because no consensus guidelines exist. Notably, HAT rates continued to vary even among centers with similar medication usage. At the patient level, use of aspirin within the first 72 h of transplantation was associated with a decreased risk of HAT, consistent with other reports in the literature.
Conclusions: We suggest that concerted efforts to standardize anticoagulation approaches in pediatric liver transplant may be of benefit in the prevention of HAT. A prospective multi-institutional study of a regimen (possibly including aspirin) following transplantation could have significant value.
The aim of this study was to determine if machine learning can improve the specificity of detecting transplant hepatic artery pathology over conventional quantitative measures while maintaining a high sensitivity. This study presents a retrospective review of 129 patients with transplanted hepatic arteries. We illustrate how, beyond common clinical metrics such as stenosis and resistive index, a more comprehensive set of waveform data (including flow half-lives and Fourier-transformed waveforms) can be integrated into machine learning models to obtain more accurate screening of stenosis and occlusion. By pairing Extremely Randomized Trees with Shapley values in a novel framework, we allow for explainability at the individual level. The proposed framework identified cases of clinically significant stenosis and occlusion in hepatic arteries with a state-of-the-art specificity of 65%, while maintaining sensitivity at the current standard of 94%. Moreover, through 3 case studies of correct predictions and mispredictions, we demonstrate how specific features can be elucidated to aid in interpreting the driving factors of a prediction. This work demonstrates that by utilizing a more complete set of waveform data and machine learning methodologies, it is possible to reduce the rate of false-positive results when using ultrasound to screen for transplant hepatic artery pathology compared with conventional quantitative measures. An advantage of such techniques is explainability at the patient level, which allows for increased radiologist confidence in the predictions.
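The screening approach described above can be sketched in a few lines. The following is a minimal illustration, not the study's actual pipeline: the feature names and the synthetic labels are assumptions for demonstration, and per-patient Shapley attributions (which the paper pairs with the trees) would come from a package such as `shap`; here only the model's global feature importances are shown to keep the sketch self-contained.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for waveform-derived features (illustrative names, not
# the study's feature set): resistive index, flow half-life, and two
# low-order Fourier coefficients of the Doppler waveform.
n = 400
X = rng.normal(size=(n, 4))
# Simulated label ("significant stenosis/occlusion") loosely driven by two features.
y = (0.8 * X[:, 0] - 0.6 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0.7).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Global importances; individual-level explanations (as in the paper) would
# replace these with per-case Shapley values.
names = ["resistive_index", "flow_half_life", "fourier_c1", "fourier_c2"]
for name, imp in zip(names, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```

In practice the operating threshold on the classifier's predicted probability would be tuned to hold sensitivity at the clinical standard while measuring the resulting specificity.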
The challenge of nurse staffing is amplified in the acute care neonatal intensive care unit (NICU) setting, where a wide range of highly variable factors affect staffing. A comprehensive overview of infant factors (severity, intensity), nurse factors (education, experience, preferences, team dynamics), and unit factors (structure, layout, shift length, care model) influencing pre-shift NICU staffing is presented, along with how intra-shift variability of these and other factors must be accounted for to maintain effective and efficient assignments. There is opportunity to improve workload estimations and acuity measures for pre-shift staffing using technology and predictive analytics. Nurse staffing decisions affected by intra-shift factor variability can be enhanced using novel care models that decentralize decision-making. Improving NICU staffing requires a deliberate, systematic, data-driven approach, with commitment from nurses, resources from the management team, and an institutional culture prioritizing patient safety.
Importance: A major barrier to therapeutic development in neonates is a lack of standardized drug response measures that can be used as clinical trial endpoints. The ability to quantify treatment response in a way that aligns with relevant downstream outcomes may be useful as a surrogate marker for new therapies, such as those for bronchopulmonary dysplasia (BPD).
Objective: To construct a measure of clinical response to dexamethasone that was well aligned with the incidence of severe BPD or death at 36 weeks' postmenstrual age.
Design: Retrospective cohort study.
Setting: Level IV Neonatal Intensive Care Unit.
Participants: Infants treated with dexamethasone for developing BPD between 2010 and 2020.
Main outcome(s) and measure(s): Two models were built based on demographics, changes in ventilatory support, and partial pressure of carbon dioxide (pCO2) after dexamethasone administration. An ordinal logistic regression and regularized binary logistic model for the composite outcome were used to associate response level to BPD outcomes defined by both the 2017 BPD Collaborative and 2018 Neonatal Research Network definitions.
Results: Ninety-five infants were treated with dexamethasone before 36 weeks. Compared to the baseline support and demographic data at the time of treatment, changes in ventilatory support improved ordinal model sensitivity and specificity. For the binary classification, BPD incidence was well aligned with risk levels, increasing from 16% to 59%.
Conclusions and relevance: Incorporation of response variables as measured by changes in ventilatory parameters and pCO2 following dexamethasone administration were associated with downstream outcomes. Incorporating drug response phenotype into a BPD model may enable more rapid development of future therapeutics.
Introduction: Government and health organizations in the United States and the United Kingdom have taken different stances on e-cigarettes policy. To explore the potential effects of these policies, we describe e-cigarette user characteristics, intentions to quit, and perceived attitudes toward vaping.
Methods: We used the online crowdsourcing platform Prolific to conduct a cross-sectional survey of current vapers in both countries. Measures were drawn from international surveys.
Results: The sample included 1044 vapers (524 United Kingdom; 520 United States) with a mean age of 34. Samples differed by gender (United States: 57% male vs 45% in United Kingdom), race (United States: 79% White vs 90% in United Kingdom) and employment (United States: 73% employed vs 79% in United Kingdom). UK respondents were more likely than US respondents to be ever smokers (89% vs 71%, p < .0001); be daily vapers (69% vs 53%, p < .0001) and to use e-cigarettes to quit smoking (75% vs 65%, p < .0007). Most vapers in the United Kingdom and the United States want to stop vaping (62% vs 61%; p = .9493), but US respondents plan to quit significantly sooner (odds ratio 0.47, p < .0004). Attitudes differed as well. Over half (56%) of UK respondents reported that their government approved of e-cigarette use, and 24% felt health care providers had positive views on e-cigarettes, versus 29% and 13%, respectively, in the United States (p < .0004 for both).
Conclusions: Plans for quitting and perceptions regarding e-cigarettes differ markedly between demographically similar groups of vapers in the two countries. Future research should determine whether e-cigarette cessation for adults should be a public health goal, and if so, identify effective ways to stop.
Implications: The contribution of this study is that it describes differences in behaviors and attitudes of vapers recruited through the same research platform and adjusted to account for minor demographic differences across country samples. For clinicians, these findings suggest that most vapers would welcome assistance in quitting. For researchers and policymakers, findings suggest that government policy regarding nicotine devices might influence behaviors and attitudes related to use and also that future research is needed to determine effective ways to quit.
Objective: Utilizing integrated electronic health record (EHR) and consumer-grade wearable device data, we sought to provide real-world estimates for the proportion of wearers that would likely benefit from anticoagulation if an atrial fibrillation (AFib) diagnosis was made based on wearable device data.
Materials and methods: This study utilized EHR and Apple Watch data from an observational cohort of 1802 patients at Cedars-Sinai Medical Center who linked devices to the EHR between April 25, 2015 and November 16, 2018. Using these data, we estimated the number of high-risk patients who would be actionable for anticoagulation based on (1) medical history, (2) Apple Watch wear patterns, and (3) AFib risk, as determined by an existing validated model.
Results: Based on the characteristics of this cohort, a mean of 0.25% (n = 4.58, 95% CI, 2.0-8.0) of patients would be candidates for new anticoagulation based on AFib identified by their Apple Watch. Using EHR data alone, we find that only approximately 36% of the 1802 patients (n = 665.93, 95% CI, 626.0-706.0) would have anticoagulation recommended even after a new AFib diagnosis.
Discussion and conclusion: These data suggest that there is limited benefit to detect and treat AFib with anticoagulation among this cohort, but that accessing clinical and demographic data from the EHR could help target devices to the patients with the highest potential for benefit. Future research may analyze this relationship at other sites and among other wearable users, including among those who have not linked devices to their EHR.
The 13C-pantoprazole breath test (PAN-BT) is a safe, noninvasive, in vivo CYP2C19 phenotyping probe for adults. Our objective was to evaluate PAN-BT performance in children, with a focus on discriminating individuals who, according to guidelines from the Clinical Pharmacology Implementation Consortium (CPIC), would benefit from starting dose escalation versus reduction for proton pump inhibitors (PPIs). Children (n = 65, 6–17 years) genotyped for CYP2C19 variants *2, *3, *4, and *17 received a single oral dose of 13C-pantoprazole. Plasma concentrations of pantoprazole and its metabolites, and changes in exhaled 13CO2 (termed delta-over-baseline or DOB), were measured 10 times over 8 h using high performance liquid chromatography with ultraviolet detection and spectrophotometry, respectively. Pharmacokinetic parameters of interest were generated and DOB features derived using feature engineering for the first 180 min postadministration. DOB features, age, sex, and obesity status were used to run bootstrap analysis at each timepoint (Ti) independently. For each iteration, stratified samples were drawn based on genotype prevalence in the original cohort. A random forest was trained, and predictive performance of PAN-BT was evaluated. Strong discriminating ability for CYP2C19 intermediate versus normal/rapid metabolizer phenotype was noted at DOBT30 min (mean sensitivity: 0.522, specificity: 0.784), with consistent model outperformance over a random or a stratified classifier approach at each timepoint (p < 0.001). With additional refinement and investigation, the test could become a useful and convenient dosing tool in clinic to help identify children who would benefit most from PPI dose escalation versus dose reduction, in accordance with CPIC guidelines.
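The per-timepoint bootstrap evaluation described above can be sketched as follows. This is an illustrative sketch only: the data are synthetic stand-ins for the DOB features and phenotype labels, and the iteration count is reduced; the resampling is stratified by class to preserve prevalence, mirroring the genotype-prevalence stratification the study describes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(1)

# Illustrative stand-in data for one DOB timepoint: columns approximate
# a DOB feature, age, sex, and obesity status; label 1 = intermediate metabolizer.
n = 300
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.3 * rng.normal(size=n) > 0.8).astype(int)

sens, spec = [], []
for _ in range(50):  # bootstrap iterations (a real analysis would use many more)
    # Stratified resample: draw within each phenotype group to keep prevalence fixed.
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=int((y == c).sum()), replace=True)
        for c in (0, 1)
    ])
    oob = np.setdiff1d(np.arange(n), idx)  # evaluate on rows left out of the resample
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[idx], y[idx])
    tn, fp, fn, tp = confusion_matrix(y[oob], clf.predict(X[oob]), labels=[0, 1]).ravel()
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))

print(f"mean sensitivity {np.mean(sens):.3f}, mean specificity {np.mean(spec):.3f}")
```

Repeating this loop independently at each sampling timepoint (T10 min, T20 min, ...) yields the per-timepoint performance distributions that can then be compared against a random or stratified-guess classifier.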
Attenuated heart rate recovery (HRR) following peak exercise has been shown to be a predictor of mortality in populations of adults with Fontan palliation, coronary artery disease, heart failure, and heart transplantation. However, few have studied HRR in children and adolescents with congenital heart disease (CHD). This case-control study compared HRR patterns from exercise stress testing in children and adolescents with and without repaired acyanotic CHD (raCHD). Retrospective analysis included patients aged 10–18 years who had exercise testing between 2007 and 2017. The raCHD cohort included patients with tetralogy of Fallot, transposition of the great arteries, coarctation, truncus arteriosus, atrioventricular septal defect, pulmonary outflow obstruction, aortic stenosis and/or insufficiency, or septal defects. Those in the control cohort were matched for age, sex, BMI, peak METs achieved, and peak heart rate (HR). HR at 1-min intervals throughout the 10-min recovery period and HRR patterns were analyzed. The study included n = 584 individuals (raCHD: n = 146), median age 14 years, 67.1% male. The cohorts had similar resting and peak HRs. Linear mixed-effects models (LMM) suggested a statistically significant cohort-by-time interaction for HR in exercise recovery, with the largest mean difference at minute 6 (2.9 bpm, p = 0.008). When comparing lesion types, LMM found no cohort or cohort-by-time interaction. While minute 6 of exercise recovery was statistically significant, the difference was 2.9 bpm and may not have clinical significance. These results suggest that HRR in pediatric raCHD patients does not meaningfully differ from that of their healthy peers, and that an attenuated HRR may not be directly attributable to underlying raCHD.
Clustered count data, common in health-related research, are routinely analyzed using generalized linear mixed models. There are two well-known challenges in small-sample inference in mixed modeling: bias in the naïve standard error approximation for the empirical best linear unbiased estimator, and lack of clearly defined denominator degrees of freedom. The Kenward–Roger method was designed to address these issues in linear mixed modeling, but neither it nor the simpler option of using between-within denominator degrees of freedom has been thoroughly examined in generalized linear mixed modeling. We compared the Kenward–Roger and between-within methods in two simulation studies. For simulated cluster-randomized trial data, coverage rates for both methods were generally close to the nominal 95% level and never outside 93-97%, even for 5 clusters with an average of 3 observations each. For autocorrelated longitudinal data, between-within intervals were more accurate overall, and under some conditions both the original and improved Kenward–Roger methods behaved erratically. Overall, coverage for Kenward–Roger and between-within intervals was generally adequate, if often conservative. Based on the scenarios examined here, use of between-within degrees of freedom may be a suitable or even preferable alternative to the Kenward–Roger method in some analyses of clustered count data with simple covariance structures.
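The between-within approach compared above partitions the residual degrees of freedom into a between-cluster and a within-cluster component. The helper below is a simplified sketch of that partition (modeled on the SAS-style BETWITHIN rule; exact software implementations differ in edge cases, and the function name and signature are illustrative, not from any particular package).

```python
def between_within_ddf(n_obs, n_clusters, n_between_covariates, n_within_covariates):
    """Partition residual degrees of freedom for a two-level model.

    Tests of cluster-level (between) covariates use degrees of freedom based
    on the number of clusters; tests of within-cluster covariates use what
    remains. A simplified sketch of the BETWITHIN-style partition.
    """
    # Between: clusters minus between-cluster parameters, minus 1 for the intercept.
    df_between = n_clusters - n_between_covariates - 1
    # Within: observations minus clusters, minus within-cluster parameters.
    df_within = n_obs - n_clusters - n_within_covariates
    return df_between, df_within

# Example matching the smallest scenario above: 5 clusters of 3 observations
# with one cluster-level treatment indicator.
print(between_within_ddf(n_obs=15, n_clusters=5,
                         n_between_covariates=1, n_within_covariates=0))
```

With so few clusters, the between-cluster test runs on only 3 denominator degrees of freedom, which is why interval accuracy at these sample sizes is a meaningful stress test for both methods.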
Objective: To explore glycated haemoglobin (HbA1c) patterns in 5- to 9-year-olds in the recent-onset period of type 1 diabetes and identify parent psychosocial factors that may predict children's HbA1c trajectory using a prospective, longitudinal design.
Research design and methods: We measured family demographics and parent psychosocial factors at baseline. We collected HbA1c levels from children every 3 months for up to 30 months. Deriving several features around HbA1c trends, we used k-means clustering to group trajectories and linear and logistic regressions to identify parent psychosocial predictors of children's HbA1c trajectories.
Results: The final cohort included 106 families (48 boys, mean child age 7.50 ± 1.35 years and mean diabetes duration 4.71 ± 3.19 months). We identified four unique HbA1c trajectories in children: high increasing, high stable, intermediate increasing and low stable. Compared to a low stable trajectory, increasing parent-reported hypoglycaemia fear total score was associated with decreased odds of having a high stable or intermediate increasing trajectory. Increasing parent-reported diabetes-specific family conflict total score was associated with increased odds of having a high stable or intermediate increasing trajectory.
Conclusions: We are the first to identify distinct HbA1c trajectories in 5- to 9-year-olds with recent-onset type 1 diabetes as well as parent psychosocial factors that may predict high stable or increasing trajectories and could represent future treatment targets.
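The trajectory-grouping step in the study above can be illustrated with a short k-means sketch. The data and feature set here are synthetic assumptions (baseline level, slope, and variability are plausible trend features, not necessarily the ones the study derived); the point is only to show how clustering on trajectory features yields groups that can then be labeled, e.g., "high stable" or "intermediate increasing".

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Illustrative per-child trajectory features (synthetic, 106 children as in
# the cohort): baseline HbA1c, slope over follow-up, and within-child variability.
features = np.column_stack([
    rng.normal(8.0, 1.2, 106),   # baseline HbA1c level (%)
    rng.normal(0.0, 0.4, 106),   # slope (% HbA1c per year)
    rng.gamma(2.0, 0.2, 106),    # within-child variability
])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
labels = km.labels_

# Characterize each cluster by mean baseline and slope: the kind of summary
# behind names like "high stable" or "intermediate increasing".
for k in range(4):
    level = features[labels == k, 0].mean()
    slope = features[labels == k, 1].mean()
    print(f"cluster {k}: mean baseline {level:.1f}, mean slope {slope:+.2f}")
```

Parent psychosocial predictors would then enter a second stage, regressing cluster membership on baseline measures such as hypoglycaemia fear and family conflict scores.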
Objective: To measure short-term outcomes of neonates with congenital diaphragmatic hernia (CDH) while on Neurally Adjusted Ventilator Assist (NAVA), and to measure the impact of a congenitally abnormal diaphragm on NAVA ventilator indices.
Study design: First, we conducted a retrospective-cohort analysis of 16 neonates with CDH placed on NAVA over a treatment period of 72 h. Second, we performed a case-control study comparing NAVA level and Edi between neonates with CDH and those without CDH.
Results: Compared to pre-NAVA, there were clinically meaningful improvements in PIP (p < 0.003), Respiratory Severity Score (p < 0.001), MAP (p < 0.001), morphine (p = 0.004), and midazolam use (p = 0.037). Compared to a 1:2 matched group without CDH, there was no meaningful difference in NAVA level (p = 0.286), Edi-Peak (p = 0.315), or Edi-Min (p = 0.266).
Conclusions: The potential benefits of NAVA extend to neonates with CDH, with minimal compensatory change in Edi and no meaningful difference in required ventilator settings compared with neonates without CDH.
Negative life events, such as the death of a loved one, are an unavoidable part of life. These events can be overwhelmingly stressful and may lead to the development of mental health disorders. To mitigate these adverse developments, prior literature has utilized measures of psychological responses to negative life events to better understand their effects on mental health. However, psychological changes represent only one aspect of an individual's potential response. We posit measuring additional dimensions of health, such as physical health, may also be beneficial, as physical health itself may be affected by negative life events and measuring its response could provide context to changes in mental health. Therefore, the primary aim of this work was to quantify how an individual's physical health changes in response to negative life events by testing for deviations in their physiological and behavioral state (PB-state). After capturing post-event, PB-state responses, our second aim sought to contextualize changes within known factors of psychological response to negative life events, namely coping strategies. To do so, we utilized a cohort of professionals across the United States monitored for 1 year and who experienced a negative life event while under observation. Garmin Vivosmart-3 devices provided a multidimensional representation of one's PB-state by collecting measures of resting heart rate, physical activity, and sleep. To test for deviations in PB-state following negative life events, One-Class Support Vector Machines were trained on a window of time prior to the event, which established a PB-state baseline. The model then evaluated participant's PB-state on the day of the life event and each day that followed, assigning each day a level of deviance relative to the participant's baseline. Resulting response curves were then examined in association with the use of various coping strategies using Bayesian gamma-hurdle regression models. 
The results from our objectives suggest that physical determinants of health also deviate in response to negative life events and that these deviations can be mitigated through different coping strategies. Taken together, these observations stress the need to examine physical determinants of health alongside psychological determinants when investigating the effects of negative life events.
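The baseline-deviation step described above can be sketched with a one-class SVM. The numbers below are synthetic assumptions (plausible resting heart rate, step, and sleep values standing in for the Garmin Vivosmart-3 measures), and the window lengths are illustrative; the pattern is the one described: fit on a pre-event baseline window, then score each subsequent day against that personal baseline.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic baseline window (60 days) of daily physiological/behavioral state:
# resting heart rate (bpm), step count, and sleep duration (h).
baseline = np.column_stack([
    rng.normal(62, 2, 60),
    rng.normal(9000, 1500, 60),
    rng.normal(7.2, 0.6, 60),
])

# Simulated post-event days: elevated resting HR, reduced activity and sleep.
post_event = np.column_stack([
    rng.normal(70, 2, 14),
    rng.normal(6000, 1500, 14),
    rng.normal(6.0, 0.6, 14),
])

scaler = StandardScaler().fit(baseline)
svm = OneClassSVM(nu=0.1, gamma="scale").fit(scaler.transform(baseline))

# decision_function < 0 marks a day as deviating from the personal baseline;
# its magnitude serves as that day's "deviance" level.
scores = svm.decision_function(scaler.transform(post_event))
print(f"{(scores < 0).sum()} of {len(scores)} post-event days flagged as deviant")
```

Stringing the daily deviance levels together produces the response curves that the study then related to coping strategies via Bayesian gamma-hurdle regression.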
Documentation and review of patient heart rate are a fundamental process across a myriad of clinical settings. While historically recorded manually, bedside monitors now provide for the automated collection of such data. Despite the availability of continuous streaming data, patients' charts continue to reflect only a subset of this information as snapshots recorded throughout a hospitalization. Over the past decade, prominent works have explored the implications of such practices and established fundamental differences in the alignment of discrete charted vitals and streaming data captured by monitoring systems. Limited work has examined the temporal properties of these differences, how they manifest, and their relation to clinical applications. The work presented in this article addresses this disparity, providing evidence that differences between charting techniques extend to measures of variability. Our results demonstrate how variability manifests with respect to temporal elements of charting timing and how it can facilitate personalized care by contextualizing deviations in magnitude. This work also highlights the utility of variability metrics in relation to clinical measures, including associations with severity scores and a case study utilizing complex variability metrics derived from the complete set of monitor data.
The relationship between the Naranjo scaling system and pediatric adverse drug reactions (ADR) is poorly understood. We performed a retrospective review of 1,676 pediatric ADRs documented at our hospital from 2014–2018. We evaluated patient demographics, implicated medication, ADR severity, calculated Naranjo score, associated symptoms, and location within the hospital in which the ADR was documented. ADR severity was poorly correlated with Naranjo interpretation. Out of the 10 Naranjo scale questions, 4 had a response of “unknown” greater than 85% of the time. Cardiovascular and oncological/immunologic agents were more likely to have a probable or definite Naranjo interpretation compared to antimicrobials. Further strategies are needed to enhance the causality assessment of pediatric ADRs in clinical care.
Adverse drug reactions (ADRs) often go unreported or are inaccurately documented in the electronic medical record (EMR), even when they are severe and life-threatening. Incomplete reporting can lead to future prescribing challenges and ADR reoccurrence. The aim of this study was to evaluate the documentation of ADRs within the EMR and determine specific factors associated with appropriate and timely ADR documentation. Retrospective data were collected from a pediatric hospital system's ADR reports from October 2010 to November 2018. Data included implicated medication, type and severity of reaction, treatment location, the presence or absence of ADR documentation in the EMR alert profile within 24 hours of the ADR hospital or clinic encounter discharge, ADR identification method, and the presence or absence of pharmacovigilance oversight at the facility where the ADR was treated. A linear regression model was applied to identify factors contributing to optimal ADR documentation. A total of 3065 ADRs requiring medical care were identified. Of these, 961 ADRs (31%) did not have appropriate documentation added to the EMR alert profile prior to discharge. ADRs were documented in the EMR 87% of the time with the presence of pharmacovigilance oversight and only 61% without prospective pharmacovigilance (P < .01). Severity of ADR was not a predictor of ADR documentation in the EMR, yet the implicated medication and location of treatment did impact reporting. An active pharmacovigilance service significantly improved pediatric ADR documentation. Further work is needed to ensure timely, accurate ADR documentation.
In line with the current trajectory of healthcare reform, significant emphasis has been placed on improving the utilization of data collected during a clinical encounter. Although the structured fields of electronic health records have provided a convenient foundation on which to begin such efforts, it is well understood that a substantial portion of relevant information is confined to the free-text narratives documenting care. Unfortunately, extracting meaningful information from such narratives is a non-trivial task, traditionally requiring significant manual effort. Today, computational approaches from a field known as Natural Language Processing (NLP) are poised to make a transformational impact in the analysis and utilization of these documents across healthcare practice and research, particularly in procedure-heavy sub-disciplines such as gastroenterology (GI). As such, this manuscript provides a clinically focused review of NLP systems in GI practice. It begins with a detailed synopsis of the state of NLP techniques, presenting state-of-the-art methods and typical use cases in both clinical settings and across other domains. Next, it presents a robust literature review of current applications of NLP within four prominent areas of gastroenterology: endoscopy, inflammatory bowel disease, pancreaticobiliary diseases, and liver diseases. Finally, it concludes with a discussion of open problems and future opportunities of this technology in the field of gastroenterology and healthcare as a whole.
Despite proper sleep hygiene being critical to our health, guidelines for improving sleep habits often focus on only a single component, namely, sleep duration. Recent works, however, have brought to light the importance of another aspect of sleep: bedtime regularity, given its ties to cognitive and metabolic health outcomes. To further our understanding of this often-neglected component of sleep, the objective of this work was to investigate the association between bedtime regularity and resting heart rate (RHR): an important biomarker for cardiovascular health. Utilizing Fitbit Charge HRs to measure bedtimes, sleep and RHR, 255,736 nights of data were collected from a cohort of 557 college students. We observed that going to bed even 30 minutes later than one’s normal bedtime was associated with a significantly higher RHR throughout sleep (Coeff +0.18; 95% CI: +0.11, +0.26 bpm), persisting into the following day and converging with one’s normal RHR in the early evening. Bedtimes of at least 1 hour earlier were also associated with significantly higher RHRs throughout sleep; however, they converged with one’s normal rate by the end of the sleep session, not extending into the following day. These observations stress the importance of maintaining proper sleep habits, beyond sleep duration, as high variability in bedtimes may be detrimental to one’s cardiovascular health.
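The shape of the bedtime-regularity comparison can be conveyed with a minimal sketch. This is not the study's actual analysis pipeline (which used regression models over hundreds of thousands of nights); the record layout, the per-subject median baseline, and the 30-minute threshold grouping are illustrative assumptions:

```python
from statistics import median, mean

def bedtime_deviation_effect(nights):
    """nights: list of (subject_id, bedtime_minutes, sleep_rhr_bpm).

    Returns the mean sleeping-RHR difference between nights where a
    subject went to bed >= 30 minutes later than their own median
    bedtime and all other nights. Assumes both groups are non-empty.
    A crude within-subject analogue of the study's regression model."""
    # Per-subject median bedtime serves as the "normal" bedtime baseline.
    by_subject = {}
    for sid, bt, _ in nights:
        by_subject.setdefault(sid, []).append(bt)
    normal = {sid: median(bts) for sid, bts in by_subject.items()}

    late, other = [], []
    for sid, bt, rhr in nights:
        (late if bt - normal[sid] >= 30 else other).append(rhr)
    return mean(late) - mean(other)
```

On real data one would adjust for covariates and use a proper fixed-effects or mixed model; the sketch only conveys the within-subject comparison at the heart of the finding.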
As the global prevalence of childhood obesity continues to rise, researchers and clinicians have sought to develop more effective and personalized intervention techniques. In doing so, obesity interventions have expanded beyond the traditional context of nutrition to address several facets of a child’s life, including their psychological state. While the consideration of psychological features has significantly advanced the view of obesity as a holistic condition, attempts to associate such features with outcomes of treatment have been inconclusive. We posit that such uncertainty may arise from the univariate manner in which features are evaluated, focusing on a particular aspect such as loneliness or insecurity but failing to account for the impact of co-occurring psychological characteristics. Moreover, the co-occurrence of psychological characteristics (both the child's and the parent's/guardian's) has not been studied from the perspective of its relationship with nutritional intervention outcomes. To that end, this work looks to broaden the prevailing view, laying the foundation for the existence of complex interactions among psychological features. In collaboration with a non-profit nutritional clinic in Brazil, this paper demonstrates and models these interactions and their associations with the outcomes of a nutritional intervention.
Background: Known colloquially as the “weekend effect,” the association between weekend admissions and increased mortality within hospital settings has become a highly contested topic over the last two decades. Drawing interest from practitioners and researchers alike, a variety of works have emerged arguing for and against the presence of the effect across various patient cohorts. However, it has become evident that simply studying population characteristics is insufficient for understanding how the effect manifests. Rather, to truly understand the effect, investigations into its underlying factors must be considered. As such, the work presented in this manuscript serves to address this consideration by moving beyond identification of patient cohorts to examining the role of ICU performance. Methods: Employing a comprehensive, publicly available database of electronic medical records (EMR), we began by utilizing multiple logistic regression to identify and isolate a specific cohort in which the weekend effect was present. Next, we leveraged the highly detailed nature of the EMR to evaluate ICU performance using well-established ICU quality scorecards to assess differences in clinical factors among patients admitted to an ICU on the weekend versus a weekday. Results: Our results demonstrate the weekend effect to be most prevalent among emergency surgery patients (OR 1.53; 95% CI 1.19, 1.96), specifically those diagnosed with circulatory diseases (P<.001). Differences between weekday and weekend admissions for this cohort included a variety of clinical factors such as ventilatory support and night-time discharges. Conclusions: This work reinforces the importance of accounting for differences in clinical factors as well as patient cohorts in studies investigating the weekend effect.
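The reported odds ratio above comes from multiple logistic regression; as a simplified illustration of the quantity being estimated, an unadjusted odds ratio with a Wald confidence interval can be computed directly from a 2x2 table. The counts in the usage note are hypothetical, not data from the study:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table:
    a = weekend admissions who died,   b = weekend admissions who survived,
    c = weekday admissions who died,   d = weekday admissions who survived.
    z = 1.96 corresponds to a 95% interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```

For example, `odds_ratio_ci(10, 90, 10, 140)` gives an unadjusted OR of about 1.56 with a wide interval, reflecting the small cell counts. Adjusted estimates like the one reported additionally control for patient covariates in the regression.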
As the healthcare industry shifts from traditional fee-for-service payment to value-based care models, the need to accurately quantify and compare the performance of institutions has become an integral component of both policy and research. To date, several notable metrics have been introduced, including the Centers for Medicare and Medicaid Services' Hospital Value-Based Purchasing (HVBP) program. However, despite widespread adoption, these standards suffer from a fundamental oversight: the factors utilized to characterize performance reflect only intrinsic facets of an institution's care, capturing elements of mortality rates, patient satisfaction, outcomes, and spending. Yet this approach is directly at odds with our current understanding of health and wellness, as it is well known that social, economic, and community factors are deeply intertwined with healthcare outcomes. To this end, with institutions spread across diverse geographic regions, our manuscript demonstrates that HVBP performance metrics do not exist in isolation; rather, they possess strong associations with the community factors in which the institution resides. Aggregating a broad set of factors from disparate data sources, this work moves through the informatics pipeline: identifying performance scoring profiles through clustering and employing robust linear models to uncover novel relationships and advance the discussion around the need for improved value-based care quality metrics.
BACKGROUND:
Extremely preterm infants represent one of the highest risk categories for impairments in social competence. Few studies have explored the impact of the neonatal intensive care unit (NICU) environment on social development. However, none have specifically analyzed the effects of the care structure the infant receives during hospitalization on later social competence indicators.
OBJECTIVE:
To identify associations between the care structures received by extremely preterm infants in the NICU and scores on the Brief Infant-Toddler Social and Emotional Assessment (BITSEA) post-discharge.
PARTICIPANTS:
50 extremely preterm infants (mean gestational age: 25 weeks; mean chronological age at follow-up assessment: 2 years, 4 months).
METHODS:
A secondary analysis of BITSEA data was performed exploring its relation to care structure data we extracted from electronic medical records (i.e., how much time infants were engaged in human interaction during their first thirty days of hospitalization and what types of interaction they were exposed to).
RESULTS:
Extremely preterm infants spent a considerable amount of time alone during hospitalization (80% of the time), with nursing care comprising the majority of human interaction. Infants who experienced greater human interaction scored significantly higher on the Social Competence (p = 0.01) and lower on the Dysregulation (p = 0.03) BITSEA subscales.
CONCLUSION:
Human interaction and isolation in the NICU are associated with social competence and dysregulation outcomes in extremely preterm infants. Further research is needed to understand how various NICU care structures including centralized nursing teams, parental skin-to-skin care, and early therapy may synergistically play a positive role in developing social competence.
AIMS:
This pilot study explored how maternal stress experienced in the neonatal intensive care unit (NICU) is affected by the individual nursing structure and the network that provides care to extremely preterm infants.
BACKGROUND:
Mothers experience high stress when their extremely preterm infants are hospitalized in the NICU. This often translates into maladaptive parenting behaviours that negatively affect the long-term cognitive, social, and emotional development of the infant. Efforts to identify modifiable sources of maternal stress in the NICU could lead to improvement in maternal engagement and, ultimately, long-term neurodevelopmental outcomes.
METHOD:
Time- and date-stamped nursing shift data were extracted from the medical record and transformed into five structural nursing metrics with resultant nurse data networks. These were then analysed for associations with maternal stress outcomes on the Parental Stressor Scale (PSS: NICU).
RESULTS:
Infants experienced highly variable nursing care and networks of nurses throughout their hospitalization. This variability was associated with the PSS: NICU (a) Sights and Sounds and (b) Altered Parental Role subscales.
CONCLUSION:
Nursing structure and the resultant caregiving network have an impact on maternal stress.
IMPLICATIONS FOR NURSING MANAGEMENT:
Changing the pattern of nurse staffing may be a modifiable intervention target for reducing maternal stress in the NICU.
Coupled with the rise of data science and machine learning, the increasing availability of digitized health and wellness data has provided an exciting opportunity for complex analyses of problems throughout the healthcare domain. Whereas many early works focused on a particular aspect of patient care, often drawing on data from a specific clinical or administrative source, it has become clear such a single-source approach is insufficient to capture the complexity of the human condition. Instead, adequately modeling health and wellness problems requires the ability to draw upon data spanning multiple facets of an individual’s biology, their care, and the social aspects of their life. Although such an awareness has greatly expanded the breadth of health and wellness data collected, the diverse array of data sources and intended uses often leaves researchers and practitioners with a scattered and fragmented view of any particular patient. As a result, there exists a clear need to catalogue and organize the range of healthcare data available for analysis. This work represents an effort at developing such an organization, presenting a patient-centric framework termed the Healthcare Data Spectrum (HDS). Comprising six layers, the HDS begins with the innermost micro-level omics and macro-level demographic data that directly characterize a patient, and extends at its outermost to aggregate population-level data derived from attributes of care for each individual patient. For each level of the HDS, this manuscript will examine the specific types of constituent data, provide examples of how the data aid in a broad set of research problems, and identify the primary terminology and standards used to describe the data.
While healthcare has traditionally existed within the confines of formal clinical environments, the emergence of population health initiatives has given rise to a new and diverse set of community interventions. As the number of interventions continues to grow, the ability to quickly and accurately identify those most relevant to an individual’s specific need has become essential in the care process. However, due to the diverse nature of the interventions, determining need often requires non-clinical social and behavioral information that must be collected from the individuals themselves. Although survey tools have demonstrated success in the collection of this data, time restrictions and diminishing respondent interest have presented barriers to obtaining up-to-date information on a regular basis. In response, researchers have turned to analytical approaches to optimize surveys and quantify the importance of each question. To date, the majority of these works have approached the task from a univariate standpoint, identifying the next most important question to ask. However, such an approach fails to address the interconnected nature of the health conditions inherently captured by the broader set of survey questions. Utilizing data mining and machine learning methodology, this work demonstrates the value of capturing these relations. We present a novel framework that identifies a variable-length subset of survey questions most relevant in determining the need for a particular health intervention for a given individual. We evaluate the framework using a large national longitudinal dataset centered on aging, demonstrating the ability to identify the questions with the highest impact across a variety of interventions.
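One way to realize the variable-length question-selection idea is a greedy search over information gain that stops when no remaining question meaningfully reduces uncertainty about intervention need. This is a hypothetical sketch under simplifying assumptions, not the paper's actual framework; the function names, the entropy criterion, and the stopping rule are illustrative:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def conditional_entropy(keys, labels):
    """Entropy of labels after grouping respondents by their answer key."""
    groups = {}
    for k, y in zip(keys, labels):
        groups.setdefault(k, []).append(y)
    n = len(labels)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def greedy_question_subset(rows, labels, n_questions, min_gain=1e-9):
    """rows: one tuple of answers per respondent; labels: whether the
    respondent needed the intervention. Greedily adds the question whose
    answers most reduce label entropy given the questions already chosen,
    stopping when no question helps -- hence a variable-length subset."""
    chosen = []
    keys = [() for _ in rows]          # answers to already-chosen questions
    remaining = list(range(n_questions))
    while remaining:
        base = conditional_entropy(keys, labels)
        gains = {q: base - conditional_entropy(
                     [k + (r[q],) for k, r in zip(keys, rows)], labels)
                 for q in remaining}
        best = max(gains, key=gains.get)
        if gains[best] <= min_gain:
            break                      # no question adds meaningful information
        chosen.append(best)
        keys = [k + (r[best],) for k, r in zip(keys, rows)]
        remaining.remove(best)
    return chosen
```

Because the gain of each candidate is computed conditional on the questions already selected, the subset naturally captures interactions among questions rather than ranking them univariately.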
The rates of childhood obesity are at an all-time high, both domestically and around the globe. In an effort to stem current trends, researchers have looked to understand the components of effective and sustainable interventions. Although obesity interventions have often focused on a single lifestyle factor such as activity or nutrition, recent focus has shifted to a broader set of interventions that address multiple aspects of a child’s life concurrently. Yet, despite promising outcomes, what drives the success or failure of an intervention for a specific child is not yet well understood. To this end, we have designed FitSpace, a web-based health and wellness platform designed to help understand how goal setting, activity, and nutrition patterns are tied to a child’s sociodemographic profile and their overall success in achieving sustainable healthy behavior. The work presented here represents the first step in our research, detailing a usability pilot study of the platform run in partnership with a local high school.
From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.
Predicting the onset of heart disease is of obvious importance as doctors try to improve the general health of their patients. If it were possible to identify high-risk patients before their heart failure diagnosis, doctors could use that information to implement preventative measures to keep a heart failure diagnosis from becoming a reality. Integration of Electronic Medical Records (EMRs) into clinical practice has enabled the use of computational techniques for personalized healthcare at scale. The larger goal of such modeling is to pivot from reactive medicine to preventative care and early detection of adverse conditions. In this paper, we present a trajectory-based disease progression model to detect chronic heart failure. We validate our work on a database of Medicare records of 1.1 million elderly US patients. Our supervised approach allows us to assign a likelihood of chronic heart failure to an unseen patient’s disease history and identify key disease progression trajectories that intensify or diminish that likelihood. This information will be a tremendous help as patients and doctors try to understand which diagnoses are most dangerous for those who are susceptible to heart failure. Using our model, we demonstrate some of the most common disease trajectories that eventually result in the development of heart failure.
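The core idea of scoring a disease history against learned trajectories can be sketched as follows. This is a deliberately simplified stand-in for the paper's model, assuming trajectories are fixed-length runs of consecutive diagnoses; the diagnosis codes and smoothing scheme are illustrative assumptions:

```python
from collections import Counter

def trajectory_scorer(hf_histories, control_histories, k=3):
    """Learn which length-k diagnosis trajectories (runs of k consecutive
    diagnoses) are over-represented in patients who went on to develop
    heart failure, and return a function scoring a new history: values
    above 1 indicate trajectories more typical of heart-failure patients."""
    def ngrams(history):
        return [tuple(history[i:i + k]) for i in range(len(history) - k + 1)]

    hf = Counter(t for h in hf_histories for t in ngrams(h))
    ctrl = Counter(t for h in control_histories for t in ngrams(h))

    def likelihood(history):
        # Add-one smoothing keeps unseen trajectories from zeroing the score.
        score = 1.0
        for t in ngrams(history):
            score *= (hf[t] + 1) / (ctrl[t] + 1)
        return score

    return likelihood
```

A production model would normalize by cohort sizes and handle gaps and timing between diagnoses; the sketch conveys only how trajectory counts translate into a patient-level likelihood.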
The increasing availability of electronic health care records has provided remarkable progress in the field of population health. In particular, the identification of disease risk factors has flourished under the surge of available data. Researchers can now access patient data across a broad range of demographics and geographic locations. Utilizing this big healthcare data, researchers have been able to empirically identify specific high-risk conditions found within differing populations. However, to date, the majority of studies have approached the issue from the top down, focusing on the prevalence of specific diseases within a population. Through our work, we demonstrate the power of addressing this issue bottom-up by identifying which diseases are higher-risk for a specific population. In this work, we demonstrate that network-based analysis can present a foundation to identify pairs of diagnoses that differentiate across population segments. We provide a case study highlighting differences between high- and low-income individuals in the United States. This work is particularly valuable when addressing population health management within resource-constrained environments such as community health programs, where it can be used to provide insight and resource planning for targeted care for the population served.
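The pair-based, bottom-up comparison described can be sketched minimally: build per-segment co-occurrence rates for diagnosis pairs (the edges of a comorbidity network) and rank the pairs whose rates differ most between segments. This is an illustrative sketch, not the paper's method; the diagnosis codes and the rate-difference criterion are assumptions:

```python
from collections import Counter
from itertools import combinations

def differentiating_pairs(segment_a, segment_b, top_n=3):
    """Each argument is a list of patients, each patient a set of
    diagnosis codes. Returns the diagnosis pairs whose co-occurrence
    rate differs most between the two population segments -- the
    network edges that best separate the groups."""
    def pair_rates(patients):
        counts = Counter(p for pt in patients
                         for p in combinations(sorted(pt), 2))
        return {p: c / len(patients) for p, c in counts.items()}

    a, b = pair_rates(segment_a), pair_rates(segment_b)
    all_pairs = set(a) | set(b)
    return sorted(all_pairs,
                  key=lambda p: abs(a.get(p, 0) - b.get(p, 0)),
                  reverse=True)[:top_n]
```

On real claims data one would add significance testing and control for segment demographics, but the sketch captures how pairwise network structure, rather than single-disease prevalence, drives the comparison.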
Over the past decade, the application of data science techniques to clinical data has allowed practitioners and researchers to develop an array of analytical models. These models have traditionally relied on structured data drawn from Electronic Medical Records (EMR). Yet, a large portion of EMR data remains unstructured, primarily held within clinical notes. While recent work has produced techniques for extracting structured features from unstructured text, this work generally operates under the untested assumption that all clinical text can be processed in a similar manner. This paper provides what we believe to be the first comprehensive evaluation of the differences between four major sources of clinical text, providing an evaluation of the structural, linguistic, and topical differences among notes of each category. Our conclusions support the premise that tools designed to extract structured data from clinical text must account for the categories of text they process.
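Structural and linguistic comparisons of note categories rest on surface features computed per note. The profile below is an illustrative stand-in, not the paper's actual feature set; token count, average token length, and type-token ratio are common examples of such features:

```python
def note_profile(text):
    """Crude structural/linguistic profile of a clinical note: token
    count, average token length, and type-token ratio (a simple
    measure of lexical diversity). Aggregating such profiles per note
    category supports the kind of cross-category comparison described."""
    tokens = text.lower().split()
    return {
        "n_tokens": len(tokens),
        "avg_token_len": sum(map(len, tokens)) / len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }
```

Comparing the distributions of these values between, say, discharge summaries and nursing notes makes the claim concrete: if the distributions differ markedly, a one-size-fits-all extraction tool is unlikely to perform uniformly well.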
On April 2nd, 2014, the Department of Health and Human Services (HHS) announced a historic policy in its effort to increase transparency in the American healthcare system. The Center for Medicare and Medicaid Services (CMS) would publicly release a dataset containing information about the types of Medicare services, requested charges, and payments issued by providers across the country. In its release, HHS stated that the data would shed light on “Medicare fraud, waste, and abuse.” While this is most certainly true, we believe that it can provide so much more. Beyond the purely financial aspects of procedure charges and payments, the procedures themselves may provide us with additional information, not only about the Medicare population, but also about the physicians themselves. The procedures a physician performs are for the most part not novel, but rather recommended, observed, and studied. However, whether a physician decides on advocating a procedure is somewhat discretionary. Some patients require a clear course of action, while others may benefit from a variety of options. This article poses the following question: How does a physician's past experience in medical school shape his or her practicing decisions? This article aims to open the analysis into how data, such as the CMS Medicare release, can help further our understanding of knowledge transfer and how experiences during education can shape a physician's decisions over the course of his or her career. This work begins with an evaluation of similarities between medical school charges, procedures, and payments. It then details how schools' procedure choices may link them in other, more interesting ways. Finally, the article includes a geographic analysis of how medical school procedure payments and charges are distributed nationally, highlighting potential deviations.
Today, advances in medical informatics brought on by the increasing availability of electronic medical records (EMR) have allowed for the proliferation of data-centric tools, especially in the context of personalized healthcare. While these tools have the potential to greatly improve the quality of patient care, the effective utilization of their techniques within clinical practice may encounter two significant challenges. First, the increasing amount of electronic data generated by clinical processes can impose scalability challenges for current computational tools, requiring parallel or distributed implementations of such tools to scale. Second, as technology becomes increasingly intertwined with clinical workflows, these tools must not only operate efficiently, but also in an interpretable manner. Failure to identify areas of uncertainty or provide appropriate context creates a potentially complex situation for both physicians and patients. This paper will present a case study investigating the issues associated with scaling a disease prediction algorithm to accommodate dataset sizes expected in large medical practices. It will then provide an analysis of the diagnosis predictions, attempting to provide contextual information to convey the certainty of the results to a physician. Finally, it will investigate latent demographic features of the patients themselves, which may have an impact on the accuracy of the diagnosis predictions.
In today’s healthcare environment, nurses play an integral role in determining patient outcomes. This role becomes especially clear in intensive care units such as the Neonatal Intensive Care Unit (NICU). In the NICU, critically ill infants rely almost completely on the care of these nurses for survival. Given the importance of their role, and the volatile conditions of the infants, it is imperative that nurses be able to focus on the infants in their charge. In order to provide this level of care, there must be an appropriate infant-to-nurse ratio each day. However, traditional staffing models often utilize a number of factors, including historical census counts, which, when incorrect, leave a NICU at risk of operating barely at, or even below, the recommended staffing level. This work presents the novel ADMIT (Admission Duration Model for Infant Treatment) model, which yields personalized length-of-stay estimates for an infant utilizing data available from the time of admission to the NICU.
Today the healthcare industry is undergoing one of the most important and challenging transitions to date: the move from paper to electronic healthcare records. While the healthcare industry has generally been an incrementally advancing field, this change has the potential to be revolutionary. Using the data collected from these electronic records, exciting tools such as disease recommendation systems have been created to deliver personalized models of an individual’s health profile. However, despite their early success, tools such as these will soon encounter a significant problem. The amount of healthcare encounter data collected is increasing drastically, and the computational time for these applications will soon reach a point at which these systems can no longer function in a practical timeframe for clinical use. This paper will begin by analyzing the performance limitations of the personalized disease prediction engine CARE (Collaborative Assessment and Recommendation Engine). Next, it will detail the creation and performance of a new single-patient implementation of the algorithm. Finally, this work will demonstrate a novel parallel implementation of the CARE algorithm and demonstrate its performance benefits on big patient data.
Successful aging, or aging well, is a very complex process. This process is usually defined as having the following components: managing chronic conditions, maintaining physical and mental health, and social engagement. Many solutions exist in isolation for these individual components that assist in aging well. However, while these components can work in isolation, we believe that their real strength lies in their synergism: their combined power to help seniors in aging well. We are developing a unified framework, "eSeniorCare", which addresses all of these components together. It is a mobile platform with the following features: medication scheduling and a daily activity tracker. As part of this framework, sessions are conducted to provide motivational lectures on aging. In this paper, we present the first phase of the framework.