Monday, September 30, 2019

Elizabeth Barrett-Connor, 1935–2019
Talc, Asbestos, and Epidemiology: Corporate Influence and Scientific Incognizance
No abstract available
Source-Apportioned PM2.5 and Cardiorespiratory Emergency Department Visits: Accounting for Source Contribution Uncertainty
Background: Despite evidence suggesting that air pollution-related health effects differ by emissions source, epidemiologic studies on fine particulate matter (PM2.5) infrequently differentiate between particles from different sources. Those that do rarely account for the uncertainty of source apportionment methods.

Methods: For each day in a 12-year period (1998–2010) in Atlanta, GA, we estimated daily PM2.5 source contributions from a Bayesian ensemble model that combined four source apportionment methods including chemical transport and receptor-based models. We fit Poisson generalized linear models to estimate associations between source-specific PM2.5 concentrations and cardiorespiratory emergency department visits (n = 1,598,117). We propagated uncertainty in the source contribution estimates through analyses using multiple imputation.

Results: Respiratory emergency department visits were positively associated with biomass burning and secondary organic carbon. For a 1 µg/m³ increase in PM2.5 from biomass burning during the past 3 days, the rate of visits for all respiratory outcomes increased by 0.4% (95% CI 0.0%, 0.7%). There was less evidence for associations between PM2.5 sources and cardiovascular outcomes, with the exception of ischemic stroke, which was positively associated with most PM2.5 sources. Accounting for the uncertainty of source apportionment estimates resulted, on average, in an 18% increase in the standard error for rate ratio estimates for all respiratory and cardiovascular emergency department visits, but inflation varied across specific sources and outcomes, ranging from 2% to 39%.

Conclusions: This study provides evidence of associations between PM2.5 sources and some cardiorespiratory outcomes and quantifies the impact of accounting for variability in source apportionment approaches.
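The uncertainty-propagation step described in this abstract follows the logic of multiple imputation: fit the health model once per plausible source-contribution series, then pool the results with Rubin's rules. A minimal sketch of that pooling step (not the authors' code; the three log-rate-ratio estimates below are made-up numbers):

```python
from math import sqrt

def pool_rubin(estimates, std_errors):
    """Pool point estimates and standard errors across M imputed
    datasets using Rubin's rules."""
    m = len(estimates)
    q_bar = sum(estimates) / m                       # pooled point estimate
    w = sum(se ** 2 for se in std_errors) / m        # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)  # between-imputation
    total_var = w + (1 + 1 / m) * b                  # total variance
    return q_bar, sqrt(total_var)

# Hypothetical coefficients from three imputed source-contribution series:
est, se = pool_rubin([0.004, 0.005, 0.003], [0.002, 0.002, 0.002])
```

The between-imputation term is what inflates the standard error when the source apportionment methods disagree, which is the mechanism behind the 2%–39% inflation reported above.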
Acute Air Pollution Exposure and the Risk of Violent Behavior in the United States
Background: Violence is a leading cause of death and an important public health threat, particularly among adolescents and young adults. However, the environmental causes of violent behavior are not well understood. Emerging evidence suggests exposure to air pollution may be associated with aggressive or impulsive reactions in people.

Methods: We applied a two-stage hierarchical time-series model to estimate change in risk of violent and nonviolent criminal behavior associated with short-term air pollution in U.S. counties (2000–2013). We used daily monitoring data for ozone and fine particulate matter (PM2.5) from the Environmental Protection Agency and daily crime counts from the Federal Bureau of Investigation. We evaluated the exposure–response relation and assessed differences in risk by community characteristics of poverty, urbanicity, race, and age.

Results: Our analysis spans 301 counties in 34 states, representing 86.1 million people and 721,674 days. Each 10 µg/m³ change in daily PM2.5 was associated with a 1.17% (95% confidence interval [CI] = 0.90, 1.43) and a 10 ppb change in ozone with a 0.59% (95% CI = 0.41, 0.78) relative risk increase (RRI) for violent crime. However, we observed no risk increase for nonviolent property crime due to PM2.5 (RRI: 0.11%; 95% CI = −0.09, 0.31) or ozone (RRI: −0.05%; 95% CI = −0.22, 0.12). Our results were robust across all community types, except rural regions. Exposure–response curves indicated increased violent crime risk at concentrations below regulatory standards.

Conclusions: Our results suggest that short-term changes in ambient air pollution may be associated with a greater risk of violent behavior, regardless of community type.
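The reported percentages are the usual transformation of a log-linear (e.g., quasi-Poisson) time-series coefficient to a relative risk increase over a stated exposure increment. An illustrative sketch of that conversion (the coefficient below is back-calculated to match the reported 1.17% figure, not taken from the paper):

```python
from math import exp, log

def rri_percent(beta_per_unit, delta):
    """Relative risk increase (%) for a `delta`-unit exposure change,
    given a log-rate coefficient per unit of exposure."""
    return (exp(beta_per_unit * delta) - 1) * 100

# Back-calculated coefficient per 1 µg/m³ PM2.5 that implies a
# ~1.17% RRI per 10 µg/m³ (illustrative only):
beta = log(1.0117) / 10
print(round(rri_percent(beta, 10), 2))  # 1.17
```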
On the Relation Between G-formula and Inverse Probability Weighting Estimators for Generalizing Trial Results
When generalizing inferences from a randomized trial to a target population, two classes of estimators are used: g-formula estimators that depend on modeling the conditional outcome mean among trial participants and inverse probability (IP) weighting estimators that depend on modeling the probability of participation in the trial. In this article, we take a closer look at the relation between these two classes of estimators. We propose IP weighting estimators that combine models for the probability of trial participation and the probability of treatment among trial participants. We show that, when all models are estimated using nonparametric frequency methods, these estimators are finite-sample equivalent to the g-formula estimator. We argue for the use of augmented IP weighting (doubly robust) generalizability estimators when nonparametric estimation is infeasible due to the curse of dimensionality, and examine the finite-sample behavior of different estimators using parametric models in a simulation study.
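The finite-sample equivalence under nonparametric frequency estimation can be checked numerically on a toy dataset. A sketch assuming a binary covariate, treatment, and outcome (not the authors' code; the data are made up):

```python
# Records: (s, x, a, y) — trial participation s, covariate x,
# treatment a, outcome y (a and y are observed only when s == 1).
data = [(1, 0, 1, 1), (1, 0, 0, 0), (0, 0, None, None), (0, 0, None, None),
        (1, 1, 1, 0), (1, 1, 0, 1), (0, 1, None, None), (0, 1, None, None)]
n = len(data)

def g_formula(a):
    """Standardize trial-arm outcome means over the target covariate mix."""
    out = 0.0
    for x in {r[1] for r in data}:
        p_x = sum(r[1] == x for r in data) / n
        ys = [r[3] for r in data if r[0] == 1 and r[1] == x and r[2] == a]
        out += p_x * (sum(ys) / len(ys))
    return out

def ipw(a):
    """Weight trial participants by 1 / (P(S=1|X) * P(A=a|S=1,X))."""
    total = 0.0
    for s, x, a_i, y in data:
        if s == 1 and a_i == a:
            n_x = sum(r[1] == x for r in data)
            n_trial_x = sum(r[0] == 1 and r[1] == x for r in data)
            n_trial_xa = sum(r[0] == 1 and r[1] == x and r[2] == a for r in data)
            total += y / ((n_trial_x / n_x) * (n_trial_xa / n_trial_x))
    return total / n

# With all nuisance models estimated as nonparametric frequencies,
# the two estimators agree exactly in finite samples:
assert abs(g_formula(1) - ipw(1)) < 1e-12
assert abs(g_formula(0) - ipw(0)) < 1e-12
```

The algebra behind the agreement: each trial participant's weight collapses to (stratum size in target) / (stratum-by-arm size in trial), so both estimators reduce to the same covariate-standardized mean.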
Survival Bias in Mendelian Randomization Studies: A Threat to Causal Inference
It has been argued that survival bias may distort results in Mendelian randomization studies in older populations. Through simulations of a simple causal structure, we investigate the degree to which instrumental variable (IV) estimators may become biased in the context of exposures that affect survival. We observed that selecting on survival decreased instrument strength and, for exposures with directionally concordant effects on survival (and outcome), introduced downward bias of the IV estimator when the exposures reduced the probability of survival until study inclusion. Higher ages at study inclusion generally increased this bias, particularly when the true causal effect was non-null. Moreover, the bias in the estimated exposure–outcome relation depended on whether the estimation was conducted in the one- or two-sample setting. Finally, we briefly discuss which statistical approaches might help to alleviate this and other types of selection bias. See video abstract at http://links.lww.com/EDE/B589.
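The mechanism can be reproduced in a few lines: simulate an instrument, an exposure, and an unmeasured cause of the outcome, make survival depend negatively on both the exposure and that unmeasured cause (the directionally concordant case), and compare the Wald-type IV estimate in the full sample with the estimate among survivors. A sketch under assumed parameter values, not the authors' simulation design:

```python
import random
from math import exp

random.seed(1)

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def iv_estimate(g, x, y):
    """Wald-type IV estimate: cov(Y, G) / cov(X, G)."""
    return cov(y, g) / cov(x, g)

beta = 0.5                                                  # true causal effect
n = 20000
g = [1 if random.random() < 0.5 else 0 for _ in range(n)]   # genetic instrument
u = [random.gauss(0, 1) for _ in range(n)]                  # unmeasured cause of Y
x = [0.7 * gi + random.gauss(0, 1) for gi in g]             # exposure
y = [beta * xi + ui for xi, ui in zip(x, u)]                # outcome

# Exposure and U both reduce survival to study inclusion
# (directionally concordant effects on survival and outcome):
alive = [random.random() < 1 / (1 + exp(1.5 * (xi + ui)))
         for xi, ui in zip(x, u)]

est_full = iv_estimate(g, x, y)                             # close to beta
est_selected = iv_estimate(
    [v for v, s in zip(g, alive) if s],
    [v for v, s in zip(x, alive) if s],
    [v for v, s in zip(y, alive) if s])                     # biased downward
```

Conditioning on survival is conditioning on a collider of the exposure and the unmeasured cause, which induces a spurious instrument–confounder association among survivors.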
Censoring for Loss to Follow-up in Time-to-event Analyses of Composite Outcomes or in the Presence of Competing Risks
Background: In time-to-event analyses, there is limited guidance on when persons who are lost to follow-up (LTFU) should be censored.

Methods: We simulated bias in risk estimates for (1) a composite event of measured (outcome only observable in a patient encounter) and captured events (outcome observable outside a patient encounter), and for a (2) measured or (3) captured event in the presence of a competing event of the other type, under three censoring strategies: (i) censor at the last study encounter; (ii) censor when the LTFU definition is met; and (iii) a new, hybrid censoring strategy. We demonstrate the real-world impact of this decision by estimating (1) time to acquired immune deficiency syndrome (AIDS) diagnosis or death, (2) time to initiation of antiretroviral therapy (ART), and (3) time to death before ART initiation among adults engaged in HIV care.

Results: For (1), our hybrid censoring strategy was least biased. In our example, the 5-year risk of AIDS or death was overestimated using last-encounter censoring (25%) and underestimated using LTFU-definition censoring (21%), compared with results from our hybrid approach (24%). Last-encounter censoring was least biased for (2): when estimating the 5-year risk of ART initiation, LTFU-definition censoring underestimated risk (80% vs. 85% using last-encounter censoring). LTFU-definition censoring was least biased for (3): when estimating the 5-year risk of death before ART initiation, last-encounter censoring overestimated risk (5.2% vs. 4.7% using LTFU-definition censoring).

Conclusions: The least biased censoring strategy for time-to-event analyses in the presence of LTFU depends on the event and estimand of interest.
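Why the censoring choice moves the risk estimate can be seen with a toy Kaplan–Meier calculation: last-encounter censoring truncates follow-up earlier, so the same events accrue over less person-time and the estimated risk rises. A sketch on made-up data, not the paper's cohort (the 2-year LTFU window and administrative end at year 5 are assumed for illustration):

```python
def km_risk(times, events, horizon):
    """1 - Kaplan-Meier survival at `horizon`; this toy version assumes
    no ties between event and censoring times."""
    surv = 1.0
    for t, d in sorted(zip(times, events)):
        if t > horizon:
            break
        if d:
            at_risk = sum(ti >= t for ti in times)
            surv *= 1 - 1 / at_risk
    return 1 - surv

# (event_time, last_encounter_time) in years; event_time None = no event.
patients = [(2, 2), (None, 3), (None, 1), (None, 5)]
gap, end = 2, 5   # assumed LTFU window and administrative end of study

def observed(censor_rule):
    times, events = [], []
    for ev, last in patients:
        if ev is not None:
            times.append(ev); events.append(1)
        else:
            times.append(censor_rule(last)); events.append(0)
    return times, events

risk_last = km_risk(*observed(lambda last: last), horizon=end)
risk_ltfu = km_risk(*observed(lambda last: min(last + gap, end)), horizon=end)
# risk_last (1/3) > risk_ltfu (1/4): the censoring rule alone moves the estimate.
```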
Estimation of Natural Indirect Effects Robust to Unmeasured Confounding and Mediator Measurement Error
The use of causal mediation analysis to evaluate the pathways by which an exposure affects an outcome is widespread in the social and biomedical sciences. Recent advances in this area have established formal conditions for identification and estimation of natural direct and indirect effects. However, these conditions typically involve stringent assumptions of no unmeasured confounding and that the mediator has been measured without error. These assumptions may fail to hold in many practical settings where mediation methods are applied. The goal of this article is twofold. First, we formally establish that the natural indirect effect can in fact be identified in the presence of unmeasured exposure–outcome confounding provided there is no additive interaction between the mediator and unmeasured confounder(s). Second, we introduce a new estimator of the natural indirect effect that is robust to both classical measurement error of the mediator and unmeasured confounding of both exposure–outcome and mediator–outcome relations under certain no-interaction assumptions. We provide formal proofs and a simulation study to illustrate our results. In addition, we apply the proposed methodology to data from the Harvard President’s Emergency Plan for AIDS Relief (PEPFAR) program in Nigeria.
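The problem the robust estimator addresses is easy to demonstrate: under classical measurement error, a standard product-method analysis attenuates the mediator–outcome coefficient and hence the indirect effect. A simulation sketch with made-up coefficients (this shows the bias of the naive approach, not the article's proposed estimator):

```python
import random
random.seed(0)

def slope(y, x):
    """Simple-regression slope: cov(y, x) / var(x)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def coef_on_m(y, m, a):
    """Coefficient on m in the two-predictor OLS fit y ~ m + a."""
    n = len(y)
    my, mm, ma = sum(y) / n, sum(m) / n, sum(a) / n
    smm = sum((v - mm) ** 2 for v in m)
    saa = sum((v - ma) ** 2 for v in a)
    sma = sum((u - mm) * (v - ma) for u, v in zip(m, a))
    smy = sum((u - mm) * (v - my) for u, v in zip(m, y))
    say = sum((u - ma) * (v - my) for u, v in zip(a, y))
    return (smy * saa - say * sma) / (smm * saa - sma ** 2)

n = 20000
a = [1 if random.random() < 0.5 else 0 for _ in range(n)]   # exposure
m = [0.8 * ai + random.gauss(0, 1) for ai in a]             # true mediator
y = [0.5 * mi + 0.3 * ai + random.gauss(0, 1) for mi, ai in zip(m, a)]
m_star = [mi + random.gauss(0, 1) for mi in m]              # classical error

def product_nie(mediator):
    return slope(mediator, a) * coef_on_m(y, mediator, a)

nie_true = product_nie(m)        # roughly 0.8 * 0.5 = 0.40
nie_naive = product_nie(m_star)  # attenuated (here toward roughly 0.20)
```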
Mediational E-values: Approximate Sensitivity Analysis for Unmeasured Mediator–Outcome Confounding
Background: Mediation analysis is a powerful tool for understanding mechanisms, but conclusions about direct and indirect effects will be invalid if there is unmeasured confounding of the mediator–outcome relationship. Sensitivity analysis methods allow researchers to assess the extent of this bias but are not always used. One particularly straightforward technique that requires minimal assumptions is nonetheless difficult to interpret, and so would benefit from a more intuitive parameterization.

Methods: We conducted an exhaustive numerical search over simulated mediation effects, calculating the proportion of scenarios in which a bound for unmeasured mediator–outcome confounding held under an alternative parameterization.

Results: In over 99% of cases, the bound for the bias held when we described the strength of confounding directly via the confounder–mediator relationship instead of via the conditional exposure–confounder relationship.

Conclusions: Researchers can conduct sensitivity analysis using a method that describes the strength of the confounder–outcome relationship and the approximate strength of the confounder–mediator relationship that, together, would be required to explain away a direct or indirect effect.
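The bound referenced here is in the spirit of the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unmeasured confounder would need with both the outcome and the other quantity to fully explain away an observed estimate. A minimal sketch of the standard E-value formula (the mediational variant in the article replaces one leg with the confounder–mediator association):

```python
from math import sqrt

def e_value(rr):
    """E-value for an observed risk ratio: the minimum confounder
    association (RR scale) needed to explain the estimate away."""
    if rr < 1:                      # invert protective estimates first
        rr = 1 / rr
    return rr + sqrt(rr * (rr - 1))

print(round(e_value(2.0), 2))       # 3.41
```

For example, an observed RR of 2.0 could be fully explained only by a confounder associated with both quantities at RR ≥ 3.41 each.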
Test-Negative Designs: Differences and Commonalities with Other Case–Control Studies with “Other Patient” Controls
Test-negative studies recruit cases who attend a healthcare facility and test positive for a particular disease; controls are patients undergoing the same tests for the same reasons at the same healthcare facility and who test negative. The design is often used for vaccine efficacy studies, but not exclusively, and has been posited as a separate type of study design, different from case–control studies because the controls are not sampled from a wider source population. However, the design is a special case of a broader class of case–control designs that identify cases and sample “other patient” controls from the same healthcare facilities. Therefore, we consider that new insights into the test-negative design can be obtained by viewing it as a case–control study with “other patient” controls; in this context, we explore differences and commonalities to better define the advantages and disadvantages of the test-negative design in various circumstances. The design has the advantage of similar participation rates, information quality and completeness, referral/catchment areas, initial presentation, diagnostic suspicion tendencies, and preferences by doctors. Under certain assumptions, valid population odds ratios can be estimated with the test-negative design, just as with case–control studies with “other patient” controls. Interestingly, directed acyclic graphs (DAGs) are not completely helpful in explaining why the design works. The use of test-negative designs may not resolve all potential biases, but the test-negative design is a valid study option that will in some circumstances lead to less bias, and will often be the most practical choice.
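In the vaccine setting, the test-negative odds ratio is computed exactly as in any case–control study with “other patient” controls, and vaccine effectiveness is one minus that odds ratio. A toy calculation with made-up counts:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from the 2x2 table of test-positives (cases) and
    test-negatives (controls), cross-classified by exposure."""
    return ((exposed_cases * unexposed_controls)
            / (unexposed_cases * exposed_controls))

# Hypothetical counts: vaccinated vs. unvaccinated among test-positive
# and test-negative patients at the same facilities.
or_tnd = odds_ratio(20, 80, 50, 50)   # 0.25
ve = 1 - or_tnd                       # vaccine effectiveness = 0.75
```

Under the design's assumptions (e.g., vaccination does not affect the risk of the test-negative illnesses), this odds ratio estimates the same population quantity as a conventional case–control analysis.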
