Mathematical and Statistical Assessment of Biomarkers and Surrogate Endpoints in Clinical Trials

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Probability and Statistics".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 8733

Special Issue Editors


Prof. Dr. María del Carmen Pardo
Guest Editor
Department of Statistics and O.R., Complutense University of Madrid, 28040 Madrid, Spain
Interests: survival analysis; assessment of diagnostic markers; joint models for longitudinal and survival data; longitudinal data analysis; evaluation of surrogate markers; statistical analysis with missing data

Prof. Dr. Ying Lu
Guest Editor
Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA 94305, USA
Interests: clinical trials; medical diagnosis; survival analysis; classification; medical decision making

Special Issue Information

Dear Colleagues,

You are kindly invited to contribute to this Special Issue on “Mathematical and statistical assessment of biomarkers and surrogate endpoints in clinical trials” with an original research article or comprehensive review. The submissions will be evaluated through the peer-review system of Mathematics.

Appropriate outcome measures are critical for the validity and efficiency of clinical trials. Clinical endpoints that directly measure how a patient feels, functions, or survives are required by regulators to evaluate trial efficacy and safety. However, clinical endpoints may take time to observe and require large trials to reach conclusions. Biomarkers (including physical signs of disease, laboratory measures, and radiological tests) and intermediate clinical endpoints have the potential to serve as surrogate endpoints, substituting for clinical endpoints or supporting early decisions such as stopping trials for futility. Mathematical and statistical methods are critical for the appropriate use of biomarkers and intermediate clinical endpoints in trials.

The focus of this Special Issue is mainly on new mathematical and statistical methods for assessing whether biomarkers can serve as surrogate endpoints and when they should be used in clinical trials, medical practice, product development, and public health policy.

Prof. Dr. María del Carmen Pardo
Prof. Dr. Ying Lu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • clinical trial
  • biomarker
  • surrogate endpoint
  • validation
  • ROC

Published Papers (5 papers)


Research

13 pages, 1663 KiB  
Article
Improving the Robustness of Variable Selection and Predictive Performance of Regularized Generalized Linear Models and Cox Proportional Hazard Models
by Feng Hong, Lu Tian and Viswanath Devanarayan
Mathematics 2023, 11(3), 557; https://doi.org/10.3390/math11030557 - 20 Jan 2023
Cited by 2 | Viewed by 1342
Abstract
High-dimensional data applications often entail the use of various statistical and machine-learning algorithms to identify an optimal signature based on biomarkers and other patient characteristics that predicts the desired clinical outcome. Both the composition and predictive performance of such biomarker signatures are critical in various biomedical research applications. In the presence of a large number of features, however, a conventional regression analysis approach fails to yield a good prediction model. A widely used remedy is to introduce regularization in fitting the relevant regression model. In particular, an L1 penalty on the regression coefficients is extremely useful, and very efficient numerical algorithms have been developed for fitting such models with different types of responses. This L1-based regularization tends to generate a parsimonious prediction model with promising prediction performance, i.e., feature selection is achieved along with construction of the prediction model. The variable selection, and hence the composition of the signature, as well as the prediction performance of the model depend on the choice of the penalty parameter used in the L1 regularization. The penalty parameter is often chosen by K-fold cross-validation. However, such an algorithm tends to be unstable and may yield very different choices of the penalty parameter across multiple runs on the same dataset. In addition, the predictive performance estimates from the internal cross-validation procedure in this algorithm tend to be inflated. In this paper, we propose a Monte Carlo approach to improve the robustness of regularization parameter selection, along with an additional cross-validation wrapper for objectively evaluating the predictive performance of the final model. We demonstrate the improvements via simulations and illustrate the application via a real dataset.
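
The instability described above can be made concrete with a short sketch. The following Python example (a minimal illustration using scikit-learn on synthetic data; the penalty grid, repeat count, and median aggregation rule are assumptions for exposition, not the authors' implementation) repeats K-fold cross-validation over Monte Carlo splits and aggregates the selected L1 penalty:

    # Monte Carlo repetition of K-fold CV for choosing the L1 penalty.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    X, y = make_classification(n_samples=200, n_features=500,
                               n_informative=10, random_state=0)
    Cs = np.logspace(-2, 1, 20)   # candidate inverse penalties (C = 1/lambda)

    chosen = []
    for rep in range(20):         # Monte Carlo repetitions of the CV split
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=rep)
        scores = [cross_val_score(LogisticRegression(penalty="l1",
                                                     solver="liblinear", C=C),
                                  X, y, cv=cv).mean() for C in Cs]
        chosen.append(Cs[int(np.argmax(scores))])   # best C for this split

    # Aggregate across repetitions for a more stable penalty choice.
    C_star = float(np.median(chosen))
    final_model = LogisticRegression(penalty="l1", solver="liblinear",
                                     C=C_star).fit(X, y)
    print(f"selected C = {C_star:.3g}; "
          f"nonzero coefficients = {(final_model.coef_ != 0).sum()}")

Note that the paper additionally wraps an outer cross-validation around this selection step to obtain an unbiased estimate of predictive performance, which the sketch above omits.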

20 pages, 3316 KiB  
Article
Classification of Alzheimer’s Disease Based on Core-Large Scale Brain Network Using Multilayer Extreme Learning Machine
by Ramesh Kumar Lama, Ji-In Kim and Goo-Rak Kwon
Mathematics 2022, 10(12), 1967; https://doi.org/10.3390/math10121967 - 07 Jun 2022
Cited by 6 | Viewed by 1622
Abstract
Various studies suggest that a network deficit in the default mode network (DMN) is prevalent in Alzheimer's disease (AD) and mild cognitive impairment (MCI). Beyond the DMN, some studies reveal network alterations in the salience network, motor networks, and large-scale networks. In this study, we classified AD and MCI against healthy controls, considering network alterations in the large-scale network and the DMN. We constructed the brain network from functional magnetic resonance imaging (fMRI) data, using Pearson's correlation-based functional connectivity. Graph features of the brain network were converted to feature vectors using the Node2vec graph-embedding technique. Two classifiers, a single-layer extreme learning machine and a multilayer extreme learning machine, were used for classification together with feature selection approaches. We performed the classification test on brain networks of different sizes, including the large-scale brain network, the whole brain network, and the combined brain network. Experimental results showed that the least absolute shrinkage and selection operator (LASSO) feature selection method generates better classification accuracy on the large network size, and that the feature selection with adaptive structure learning (FSAL) technique generates better classification accuracy on the small network size.
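
As a toy illustration of the network-construction step described above, the following Python sketch builds a Pearson-correlation connectivity matrix from region-of-interest (ROI) time series; the synthetic data, parcellation size, and threshold are assumptions for exposition, not values from the paper:

    # Pearson-correlation functional connectivity from ROI time series.
    import numpy as np

    rng = np.random.default_rng(0)
    n_rois, n_timepoints = 90, 200        # e.g., an AAL-style parcellation
    ts = rng.standard_normal((n_rois, n_timepoints))  # stand-in for fMRI data

    fc = np.corrcoef(ts)                  # (n_rois, n_rois) correlation matrix
    np.fill_diagonal(fc, 0.0)             # drop self-connections

    # Threshold to obtain a binary adjacency matrix for graph construction.
    adjacency = (np.abs(fc) > 0.3).astype(int)
    print(adjacency.sum() // 2, "edges in the brain network")

In the paper, the resulting graph is then embedded with Node2vec and classified with extreme learning machines; a real pipeline would start from preprocessed fMRI time series rather than random noise.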

17 pages, 320 KiB  
Article
A Surrogate Measure for Time-Varying Biomarkers in Randomized Clinical Trials
by Rui Zhuang, Fan Xia, Yixin Wang and Ying-Qing Chen
Mathematics 2022, 10(4), 584; https://doi.org/10.3390/math10040584 - 13 Feb 2022
Viewed by 1672
Abstract
Clinical trials with rare or distal outcomes are usually designed to be large and long term. Their resource demands and long duration limit the feasibility and efficiency of such studies. This motivates replacing rare or distal clinical endpoints with reliable surrogate markers, which can be collected earlier and more easily. However, statistical challenges remain in evaluating and ranking potential surrogate markers. In this paper, we define a generalized proportion of treatment effect for survival settings. The measure's definition and estimation do not rely on any model assumption, and it is equipped with a consistent and asymptotically normal non-parametric estimator. Under proper conditions, the measure reflects the proportion of the average treatment effect mediated by the surrogate marker among the group that would survive to the marker measurement time under both the intervention and control arms.
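
For context, a classical model-based ancestor of such measures is the proportion of treatment effect explained of Freedman et al. (1992),

    \[
      \mathrm{PTE} \;=\; 1 - \frac{\beta_S}{\beta},
    \]

where \(\beta\) is the treatment effect on the clinical endpoint in a model without the surrogate and \(\beta_S\) is the residual effect after adjusting for the surrogate. The measure defined in this paper can be read as a model-free, survival-time analogue of this quantity, with identification restricted to the subgroup of patients who would survive to the marker measurement time under either arm.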

18 pages, 372 KiB  
Article
Evaluation of Surrogate Endpoints Using Information-Theoretic Measure of Association Based on Havrda and Charvat Entropy
by María del Carmen Pardo, Qian Zhao, Hua Jin and Ying Lu
Mathematics 2022, 10(3), 465; https://doi.org/10.3390/math10030465 - 31 Jan 2022
Viewed by 1900
Abstract
Surrogate endpoints have been used to assess the efficacy of a treatment and can potentially reduce the duration and/or number of required patients for clinical trials. Using information theory, Alonso et al. (2007) proposed a unified framework based on Shannon entropy, offering a new definition of surrogacy that departs from the hypothesis-testing framework. In this paper, a new family of surrogacy measures under Havrda and Charvat (H-C) entropy is derived that contains Alonso's definition as a particular case. Furthermore, we extend our approach to a new model, based on the information-theoretic measure of association using H-C entropy, for a longitudinally collected continuous surrogate endpoint and a binary clinical endpoint. The new model is illustrated through the analysis of data from a completed clinical trial, demonstrating the advantages of H-C entropy-based surrogacy measures in evaluating the scheduling of longitudinal biomarker visits in a phase 2 randomized controlled trial for the treatment of multiple sclerosis.
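
For reference, the entropy underlying the new family can be written (in one common normalization; Havrda and Charvat's original definition differs only by a constant factor) as

    \[
      H_{\alpha}(Y) \;=\; \frac{1}{\alpha-1}\Bigl(1-\sum_{i} p_i^{\alpha}\Bigr),
      \qquad \alpha>0,\ \alpha\neq 1,
    \]

which converges to the Shannon entropy as \(\alpha \to 1\); Alonso et al.'s Shannon-based surrogacy measure is thus recovered as the limiting member of the H-C family.
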
23 pages, 370 KiB  
Article
Confidence Intervals for Assessing Non-Inferiority with Assay Sensitivity in a Three-Arm Trial with Normally Distributed Endpoints
by Niansheng Tang and Fan Liang
Mathematics 2022, 10(2), 167; https://doi.org/10.3390/math10020167 - 06 Jan 2022
Cited by 1 | Viewed by 1413
Abstract
Various approaches, including hypothesis tests and confidence interval (CI) construction, have been proposed to assess non-inferiority and assay sensitivity via a known fraction or pre-specified margin in three-arm trials with continuous or discrete endpoints. However, little work has been done on constructing the non-inferiority margin from historical data and simultaneous generalized CIs (SGCIs) in a three-arm trial with normally distributed endpoints. Based on the generalized fiducial method and the square-and-add method, we propose two simultaneous CIs for assessing non-inferiority and assay sensitivity in a three-arm trial. For comparison, we also consider the Wald-type Bonferroni simultaneous CI and the parametric bootstrap simultaneous CI. An algorithm for evaluating the optimal sample size for attaining a pre-specified power is given. Simulation studies are conducted to investigate the performance of the proposed CIs in terms of their empirical coverage probabilities. An example taken from a study of mildly asthmatic patients illustrates the proposed simultaneous CIs. Empirical results show that the generalized fiducial method and the square-and-add method perform better than the other two CIs considered.
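
For orientation, a standard retention-fraction formulation for a three-arm trial with experimental (E), reference (R), and placebo (P) arms and normal endpoints (see, e.g., Pigeot et al., 2003) tests

    \[
      H_0:\ \mu_E-\mu_P \le \theta\,(\mu_R-\mu_P)
      \quad\text{versus}\quad
      H_1:\ \mu_E-\mu_P > \theta\,(\mu_R-\mu_P),
      \qquad 0<\theta<1,
    \]

where assay sensitivity additionally requires establishing \(\mu_R-\mu_P>0\), i.e., that the reference is superior to placebo. The simultaneous CIs proposed in the paper cover both comparisons jointly, with the non-inferiority margin constructed from historical data rather than fixed in advance.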