FUNCTION: ahr_blinded
TITLE: Blinded estimation of average hazard ratio
DESCRIPTION: Based on blinded data and assumed hazard ratios in different intervals, compute a blinded estimate of the average hazard ratio (AHR) and the corresponding estimate of statistical information. This function is intended for computing futility bounds based on spending, assuming the input hazard ratio (hr) values for the intervals specified here.
ARGUMENTS:
surv  Input survival object (see \code{\link[survival:Surv]{survival::Surv()}}); note that only status values of 0 (censored) and 1 (event) are supported for \code{\link[survival:Surv]{survival::Surv()}}.
intervals  Vector of positive values indicating interval lengths where the exponential rates are assumed. Note that a final infinite interval is added if any events occur after the final interval specified.
hr  Vector of hazard ratios assumed for each interval.
ratio  Ratio of experimental to control randomization.
EXAMPLE:
ahr_blinded(
  surv = survival::Surv(
    time = simtrial::ex2_delayed_effect$month,
    event = simtrial::ex2_delayed_effect$evntd
  ),
  intervals = c(4, 100),
  hr = c(1, .55),
  ratio = 1
)

FUNCTION: ahr
TITLE: Average hazard ratio under non-proportional hazards
DESCRIPTION: Provides a geometric average hazard ratio under various non-proportional hazards assumptions for either single- or multiple-strata studies. The piecewise exponential distribution allows a simple way to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time.
ARGUMENTS:
enroll_rate  An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}.
fail_rate  A \code{fail_rate} data frame with or without stratum created by \code{\link[=define_fail_rate]{define_fail_rate()}}.
total_duration  Total follow-up from start of enrollment to data cutoff; this can be a single value or a vector of positive numbers.
ratio  Ratio of experimental to control randomization.
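A "geometric average hazard ratio" as described above is, in outline, an exponentiated event-weighted average of the interval-specific log hazard ratios. The Python sketch below illustrates only that averaging step; the event weights here are illustrative inputs, not the expected-event weights that ahr() derives from the enrollment and failure model:

```python
import math

def weighted_geometric_ahr(event_counts, hr):
    """Exponentiated event-weighted mean of interval log hazard ratios.

    event_counts: events per interval (illustrative weights).
    hr: assumed hazard ratio per interval.
    """
    total = sum(event_counts)
    log_ahr = sum(d * math.log(h) for d, h in zip(event_counts, hr)) / total
    return math.exp(log_ahr)

# E.g., 20 events during a null-effect interval (hr = 1) and
# 80 events during a delayed-effect interval (hr = 0.55):
est = weighted_geometric_ahr([20, 80], [1, 0.55])
```

Because the average is taken on the log scale, intervals with hr = 1 pull the estimate toward the null in proportion to their event share.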
EXAMPLE:
# Example 1: default
ahr()
# Example 2: default with multiple analysis times (varying total_duration)
ahr(total_duration = c(15, 30))
# Example 3: stratified population
enroll_rate <- define_enroll_rate(
  stratum = c(rep("Low", 2), rep("High", 3)),
  duration = c(2, 10, 4, 4, 8),
  rate = c(5, 10, 0, 3, 6)
)
fail_rate <- define_fail_rate(
  stratum = c(rep("Low", 2), rep("High", 2)),
  duration = c(1, Inf, 1, Inf),
  fail_rate = c(.1, .2, .3, .4),
  dropout_rate = .001,
  hr = c(.9, .75, .8, .6)
)
ahr(enroll_rate = enroll_rate, fail_rate = fail_rate, total_duration = c(15, 30))

FUNCTION: as_gt
TITLE: Convert summary table of a fixed or group sequential design object to a gt object
DESCRIPTION: Convert the summary table of a fixed or group sequential design object to a gt object.
ARGUMENTS:
x  A summary object of a fixed or group sequential design.
...  Additional arguments (not used).
title  A string to specify the title of the gt table.
footnote  A list containing \code{content}, \code{location}, and \code{attr}. \code{content} is a vector of strings to specify the footnote text; \code{location} is a vector of strings to specify the locations to put the superscript of the footnote index; \code{attr} is a vector of strings to specify the attributes of the footnotes, for example, \code{c("colname", "title", "subtitle", "analysis", "spanner")}. Users can use the functions in the \code{gt} package to further customize the table. To disable footnotes, use \code{footnote = FALSE}.
subtitle  A string to specify the subtitle of the gt table.
colname_spanner  A string to specify the spanner of the gt table.
colname_spannersub  A vector of strings to specify the spanner details of the gt table.
display_bound  A vector of strings specifying the labels of the bounds. The default is \code{c("Efficacy", "Futility")}.
display_columns  A vector of strings specifying the variables to be displayed in the summary table.
display_inf_bound  Logical; whether to display the +/-Inf bounds.
EXAMPLE:
# Fixed design examples ----
library(dplyr)
# Enrollment rate
enroll_rate <- define_enroll_rate(
  duration = 18,
  rate = 20
)
# Failure rates
fail_rate <- define_fail_rate(
  duration = c(4, 100),
  fail_rate = log(2) / 12,
  dropout_rate = .001,
  hr = c(1, .6)
)
# Study duration in months
study_duration <- 36
# Experimental / Control randomization ratio
ratio <- 1
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- 0.1

# Example 1 ----
fixed_design_ahr(
  alpha = alpha, power = 1 - beta,
  enroll_rate = enroll_rate, fail_rate = fail_rate,
  study_duration = study_duration, ratio = ratio
) %>%
  summary() %>%
  as_gt()

# Example 2 ----
fixed_design_fh(
  alpha = alpha, power = 1 - beta,
  enroll_rate = enroll_rate, fail_rate = fail_rate,
  study_duration = study_duration, ratio = ratio
) %>%
  summary() %>%
  as_gt()

\donttest{
# Group sequential design examples ----
library(dplyr)

# Example 1 ----
# The default output
gs_design_ahr() %>%
  summary() %>%
  as_gt()
gs_power_ahr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt()
gs_design_wlr() %>%
  summary() %>%
  as_gt()
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt()
gs_power_combo() %>%
  summary() %>%
  as_gt()
gs_design_rd() %>%
  summary() %>%
  as_gt()
gs_power_rd() %>%
  summary() %>%
  as_gt()

# Example 2 ----
# Usage of title = ..., subtitle = ...
# to edit the title/subtitle
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt(
    title = "Bound Summary",
    subtitle = "from gs_power_wlr"
  )

# Example 3 ----
# Usage of colname_spanner = ..., colname_spannersub = ...
# to edit the spanner and its sub-spanner
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt(
    colname_spanner = "Cumulative probability to cross boundaries",
    colname_spannersub = c("under H1", "under H0")
  )

# Example 4 ----
# Usage of footnote = ...
# to edit the footnote
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt(
    footnote = list(
      content = c(
        "approximate weighted hazard ratio to cross bound.",
        "wAHR is the weighted AHR.",
        "the crossing probability.",
        "this table is generated by gs_power_wlr."
      ),
      location = c("~wHR at bound", NA, NA, NA),
      attr = c("colname", "analysis", "spanner", "title")
    )
  )

# Example 5 ----
# Usage of display_bound = ...
# to show the efficacy bound, the futility bound, or both (default)
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt(display_bound = "Efficacy")

# Example 6 ----
# Usage of display_columns = ...
# to select the columns to display in the summary table
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_gt(display_columns = c("Analysis", "Bound", "Nominal p", "Z", "Probability"))
}

FUNCTION: as_rtf
TITLE: Write summary table of a fixed or group sequential design object to an RTF file
DESCRIPTION: Write the summary table of a fixed or group sequential design object to an RTF file.
ARGUMENTS:
x  A summary object of a fixed or group sequential design.
...  Additional arguments (not used).
title  A string to specify the title of the RTF table.
footnote  A list containing \code{content}, \code{location}, and \code{attr}. \code{content} is a vector of strings to specify the footnote text; \code{location} is a vector of strings to specify the locations to put the superscript of the footnote index; \code{attr} is a vector of strings to specify the attributes of the footnotes, for example, \code{c("colname", "title", "subtitle", "analysis", "spanner")}.
col_rel_width  Column relative widths as a vector, e.g., c(2, 1, 1) refers to a 2:1:1 ratio. Default is NULL for equal column widths.
orientation  Orientation, either 'portrait' or 'landscape'.
text_font_size  Text font size.
To vary text font size by column, use a numeric vector with length equal to the number of columns displayed, e.g., c(9, 20, 40).
file  File path for the output.
subtitle  A string to specify the subtitle of the RTF table.
colname_spanner  A string to specify the spanner of the RTF table.
colname_spannersub  A vector of strings to specify the spanner details of the RTF table.
display_bound  A vector of strings specifying the labels of the bounds. The default is \code{c("Efficacy", "Futility")}.
display_columns  A vector of strings specifying the variables to be displayed in the summary table.
display_inf_bound  Logical; whether to display the +/-Inf bounds.
EXAMPLE:
library(dplyr)
# Enrollment rate
enroll_rate <- define_enroll_rate(
  duration = 18,
  rate = 20
)
# Failure rates
fail_rate <- define_fail_rate(
  duration = c(4, 100),
  fail_rate = log(2) / 12,
  dropout_rate = .001,
  hr = c(1, .6)
)
# Study duration in months
study_duration <- 36
# Experimental / Control randomization ratio
ratio <- 1
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- 0.1

# AHR ----
# under fixed power
x <- fixed_design_ahr(
  alpha = alpha, power = 1 - beta,
  enroll_rate = enroll_rate, fail_rate = fail_rate,
  study_duration = study_duration, ratio = ratio
) %>% summary()
x %>% as_rtf(file = tempfile(fileext = ".rtf"))
x %>% as_rtf(title = "Fixed design", file = tempfile(fileext = ".rtf"))
x %>% as_rtf(
  footnote = "Power computed with average hazard ratio method given the sample size",
  file = tempfile(fileext = ".rtf")
)
x %>% as_rtf(text_font_size = 10, file = tempfile(fileext = ".rtf"))

# FH ----
# under fixed power
fixed_design_fh(
  alpha = alpha, power = 1 - beta,
  enroll_rate = enroll_rate, fail_rate = fail_rate,
  study_duration = study_duration, ratio = ratio
) %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))

\donttest{
# the default output
library(dplyr)
gs_design_ahr() %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_ahr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))
gs_design_wlr() %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_combo() %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))
gs_design_rd() %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))
gs_power_rd() %>%
  summary() %>%
  as_rtf(file = tempfile(fileext = ".rtf"))

# usage of title = ..., subtitle = ...
# to edit the title/subtitle
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(
    title = "Bound Summary",
    subtitle = "from gs_power_wlr",
    file = tempfile(fileext = ".rtf")
  )

# usage of colname_spanner = ..., colname_spannersub = ...
# to edit the spanner and its sub-spanner
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(
    colname_spanner = "Cumulative probability to cross boundaries",
    colname_spannersub = c("under H1", "under H0"),
    file = tempfile(fileext = ".rtf")
  )

# usage of footnote = ...
# to edit the footnote
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(
    footnote = list(
      content = c(
        "approximate weighted hazard ratio to cross bound.",
        "wAHR is the weighted AHR.",
        "the crossing probability.",
        "this table is generated by gs_power_wlr."
      ),
      location = c("~wHR at bound", NA, NA, NA),
      attr = c("colname", "analysis", "spanner", "title")
    ),
    file = tempfile(fileext = ".rtf")
  )

# usage of display_bound = ...
# to show the efficacy bound, the futility bound, or both (default)
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(
    display_bound = "Efficacy",
    file = tempfile(fileext = ".rtf")
  )

# usage of display_columns = ...
# to select the columns to display in the summary table
gs_power_wlr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) %>%
  summary() %>%
  as_rtf(
    display_columns = c("Analysis", "Bound", "Nominal p", "Z", "Probability"),
    file = tempfile(fileext = ".rtf")
  )
}

FUNCTION: define_enroll_rate
TITLE: Define enrollment rate
DESCRIPTION: Define the enrollment rate of subjects for a study as following a piecewise exponential distribution.
ARGUMENTS:
duration  A numeric vector of ordered piecewise study duration intervals.
rate  A numeric vector of enrollment rates in each \code{duration}.
stratum  A character vector of stratum names.
EXAMPLE:
# Define enroll rate without stratum
define_enroll_rate(
  duration = c(2, 2, 10),
  rate = c(3, 6, 9)
)
# Define enroll rate with stratum
define_enroll_rate(
  duration = rep(c(2, 2, 2, 18), 3),
  rate = c((1:4) / 3, (1:4) / 2, (1:4) / 6),
  stratum = c(array("High", 4), array("Moderate", 4), array("Low", 4))
)

FUNCTION: define_fail_rate
TITLE: Define failure rate
DESCRIPTION: Define the subject failure rate for a study with two treatment groups. Also supports stratified designs that have different failure rates in each stratum.
ARGUMENTS:
duration  A numeric vector of ordered piecewise study duration intervals.
fail_rate  A numeric vector of failure rates in each \code{duration} in the control group.
dropout_rate  A numeric vector of dropout rates in each \code{duration}.
hr  A numeric vector of hazard ratios between the treatment and control groups.
stratum  A character vector of stratum names.
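Throughout these examples, control-group failure rates are specified as log(2) / median, using the fact that an exponential distribution with hazard λ has median log(2)/λ. A quick numeric check of that identity (Python, illustrative only):

```python
import math

def exp_median(rate):
    """Median of an exponential distribution with hazard `rate`: solve S(m) = 0.5."""
    return math.log(2) / rate

# Hazard implied by a 12-month median, as used in fail_rate = log(2) / 12
rate = math.log(2) / 12
median = exp_median(rate)  # recovers 12
```

The same conversion applies per interval of a piecewise exponential model, which is why a vector like log(2) / c(9, 18) encodes different medians in different time periods.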
EXAMPLE:
# Define failure rate
define_fail_rate(
  duration = c(3, 100),
  fail_rate = log(2) / c(9, 18),
  hr = c(.9, .6),
  dropout_rate = .001
)
# Define failure rate with stratum
define_fail_rate(
  stratum = c(rep("Low", 2), rep("High", 2)),
  duration = 1,
  fail_rate = c(.1, .2, .3, .4),
  dropout_rate = .001,
  hr = c(.9, .75, .8, .6)
)

FUNCTION: expected_accrual
TITLE: Piecewise constant expected accrual
DESCRIPTION: Computes the expected cumulative enrollment (accrual) given a set of piecewise constant enrollment rates and times.
ARGUMENTS:
time  Times at which enrollment is to be computed.
enroll_rate  An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}.
EXAMPLE:
library(tibble)
# Example 1: default
expected_accrual()
# Example 2: unstratified design
expected_accrual(
  time = c(5, 10, 20),
  enroll_rate = define_enroll_rate(
    duration = c(3, 3, 18),
    rate = c(5, 10, 20)
  )
)
# Example 3: stratified design
expected_accrual(
  time = c(24, 30, 40),
  enroll_rate = define_enroll_rate(
    stratum = c("subgroup", "complement"),
    duration = c(33, 33),
    rate = c(30, 30)
  )
)
# Example 4: expected accrual over time
# Scenario 4.1: the enrollment in the first 3 months
# is exactly 3 * 5 = 15.
expected_accrual(
  time = 3,
  enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Scenario 4.2: the enrollment in the first 6 months
# is exactly 3 * 5 + 3 * 10 = 45.
expected_accrual(
  time = 6,
  enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Scenario 4.3: the enrollment in the first 24 months
# is exactly 3 * 5 + 3 * 10 + 18 * 20 = 405.
expected_accrual(
  time = 24,
  enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Scenario 4.4: the enrollment after 24 months
# is the same as that at 24 months, since enrollment has stopped.
expected_accrual(
  time = 25,
  enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)
# Instead of computing the enrolled subjects one time point at a time,
# we can also compute them all at once.
expected_accrual(
  time = c(3, 6, 24, 25),
  enroll_rate = define_enroll_rate(duration = c(3, 3, 18), rate = c(5, 10, 20))
)

FUNCTION: expected_event
TITLE: Expected events observed under piecewise exponential model
DESCRIPTION: Computes expected events over time and by stratum under the assumption of piecewise constant enrollment rates and piecewise exponential failure and censoring rates. The piecewise exponential distribution allows a simple way to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time. While the main purpose may be to generate a trial that can be analyzed at a single point in time or using group sequential methods, the routine can also be used to simulate an adaptive trial design. The intent is to enable sample size calculations under non-proportional hazards assumptions for stratified populations.
ARGUMENTS:
enroll_rate  An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}.
fail_rate  A \code{fail_rate} data frame with or without stratum created by \code{\link[=define_fail_rate]{define_fail_rate()}}.
total_duration  Total follow-up from start of enrollment to data cutoff.
simple  If default (\code{TRUE}), return the numeric expected number of events; otherwise, a data frame as described below.
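For the simplest configuration (one enrollment period, one failure period), the expected event count described above has a closed form: a subject enrolled at time u, with failure hazard λ and dropout hazard η, has event probability λ/(λ+η) · (1 − e^{−(λ+η)(T−u)}) by cutoff T, integrated over the enrollment period. The Python sketch below is a numeric illustration of this single-period case under those model assumptions, not the package's implementation:

```python
import math

def expected_events_single_period(enroll_rate, enroll_duration,
                                  fail_haz, dropout_haz, total_duration):
    """Expected events for one enrollment/failure period of a piecewise
    exponential model: integrate per-subject event probability over enrollment."""
    mu = fail_haz + dropout_haz
    a = min(enroll_duration, total_duration)
    # Closed form of the integral of (1 - exp(-mu * (T - u))) for u in [0, a]
    integral = a - (math.exp(-mu * (total_duration - a))
                    - math.exp(-mu * total_duration)) / mu
    # lam / mu is the chance the first transition is an event, not a dropout
    return enroll_rate * fail_haz / mu * integral

# Mirrors the "single time period" example below: 10 subjects/month for
# 10 months, 6-month median survival, 1%/month dropout, cutoff at month 22
ev = expected_events_single_period(10, 10, math.log(2) / 6, 0.01, 22)
```

Under these assumptions the count comes to roughly 80 of the 100 enrolled subjects, which is the kind of figure expected_event() reports for this configuration.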
EXAMPLE:
library(gsDesign2)
# Default arguments, simple output (total event count only)
expected_event()
# Event count by time period
expected_event(simple = FALSE)
# Early cutoff
expected_event(total_duration = .5)
# Single time period example
expected_event(
  enroll_rate = define_enroll_rate(duration = 10, rate = 10),
  fail_rate = define_fail_rate(duration = 100, fail_rate = log(2) / 6, dropout_rate = .01),
  total_duration = 22,
  simple = FALSE
)
# Single time period example, multiple enrollment periods
expected_event(
  enroll_rate = define_enroll_rate(duration = c(5, 5), rate = c(10, 20)),
  fail_rate = define_fail_rate(duration = 100, fail_rate = log(2) / 6, dropout_rate = .01),
  total_duration = 22,
  simple = FALSE
)

FUNCTION: expected_time
TITLE: Predict time at which a targeted event count is achieved
DESCRIPTION: \code{expected_time()} is made to match the input format of \code{\link[=ahr]{ahr()}} and to solve for the time at which the expected accumulated events equal an input target. Enrollment and failure rate distributions are specified as follows. The piecewise exponential distribution allows a simple way to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time.
ARGUMENTS:
enroll_rate  An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}.
fail_rate  A \code{fail_rate} data frame with or without stratum created by \code{\link[=define_fail_rate]{define_fail_rate()}}.
target_event  The targeted number of events to be achieved.
ratio  Experimental:Control randomization ratio.
interval  An interval that is presumed to include the time at which the expected event count is equal to \code{target_event}.
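Solving for the target time amounts to inverting the expected-event curve over `interval`: since expected events are nondecreasing in calendar time, any bracketing root finder works. The Python sketch below illustrates that inversion with bisection, using a single-period closed-form event count as a stand-in objective (the package itself works with its full piecewise model and R's root-finding machinery):

```python
import math

def expected_events(t, rate=10, enroll_dur=10, lam=math.log(2) / 6, eta=0.01):
    """Single-period expected events by calendar time t (illustrative stand-in)."""
    mu = lam + eta
    a = min(enroll_dur, t)
    integral = a - (math.exp(-mu * (t - a)) - math.exp(-mu * t)) / mu
    return rate * lam / mu * integral

def expected_time(target, interval=(0.01, 100)):
    """Bisection: find t with expected_events(t) == target.

    Relies on expected_events being nondecreasing in t and the target
    being bracketed by the supplied interval."""
    lo, hi = interval
    for _ in range(100):
        mid = (lo + hi) / 2
        if expected_events(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

t60 = expected_time(60)  # calendar time at which 60 events are expected
```

The `interval` argument of expected_time() plays the same bracketing role as the `(0.01, 100)` tuple here, which is why a poorly chosen interval that excludes the solution causes the search to fail.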
EXAMPLE:
# Example 1 ----
# default
\donttest{
expected_time()
}
# Example 2 ----
# Check that the result matches a finding using ahr()
# Start by deriving an expected event count
enroll_rate <- define_enroll_rate(duration = c(2, 2, 10), rate = c(3, 6, 9) * 5)
fail_rate <- define_fail_rate(
  duration = c(3, 100),
  fail_rate = log(2) / c(9, 18),
  hr = c(.9, .6),
  dropout_rate = .001
)
total_duration <- 20
xx <- ahr(enroll_rate, fail_rate, total_duration)
xx
# Next we check that the function confirms the timing of the final analysis.
\donttest{
expected_time(enroll_rate, fail_rate,
  target_event = xx$event, interval = c(.5, 1.5) * xx$time
)
}
# Example 3 ----
# In this example, we verify `expected_time()` using `ahr()`.
\donttest{
x <- ahr(
  enroll_rate = enroll_rate,
  fail_rate = fail_rate,
  ratio = 1,
  total_duration = 20
)
cat("The number of events by 20 months is ", x$event, ".\n")
y <- expected_time(
  enroll_rate = enroll_rate,
  fail_rate = fail_rate,
  ratio = 1,
  target_event = x$event
)
cat("The time to get ", x$event, " events is ", y$time, " months.\n")
}

FUNCTION: fixed_design
TITLE: Fixed design under non-proportional hazards
DESCRIPTION: Computes fixed design sample size (given power) or power (given sample size) by:
\itemize{
\item \code{\link[=fixed_design_ahr]{fixed_design_ahr()}} - Average hazard ratio method.
\item \code{\link[=fixed_design_fh]{fixed_design_fh()}} - Weighted logrank test with Fleming-Harrington weights (Farrington and Manning, 1990).
\item \code{\link[=fixed_design_mb]{fixed_design_mb()}} - Weighted logrank test with Magirr-Burman weights.
\item \code{\link[=fixed_design_lf]{fixed_design_lf()}} - Lachin-Foulkes method (Lachin and Foulkes, 1986).
\item \code{\link[=fixed_design_maxcombo]{fixed_design_maxcombo()}} - MaxCombo method.
\item \code{\link[=fixed_design_rmst]{fixed_design_rmst()}} - RMST method.
\item \code{\link[=fixed_design_milestone]{fixed_design_milestone()}} - Milestone method.
}
Additionally, \code{\link[=fixed_design_rd]{fixed_design_rd()}} provides a fixed design for a binary endpoint with treatment effect measured by risk difference.
ARGUMENTS:
enroll_rate  Enrollment rates defined by \code{define_enroll_rate()}.
fail_rate  Failure and dropout rates defined by \code{define_fail_rate()}.
alpha  One-sided Type I error (strictly between 0 and 1).
power  Power (\code{NULL} to compute power, or strictly between 0 and \code{1 - alpha} otherwise).
ratio  Experimental:Control randomization ratio.
study_duration  Study duration.
event  A numerical vector specifying the targeted events at each analysis. See details.
rho  A vector of numbers pairing with gamma and tau for the MaxCombo test.
gamma  A vector of numbers pairing with rho and tau for the MaxCombo test.
tau  Test parameter in RMST.
w_max  Test parameter of the Magirr-Burman method.
p_c  A numerical value of the control arm rate.
p_e  A numerical value of the experimental arm rate.
rd0  Risk difference under the null hypothesis; default is 0.
n  Sample size.
If \code{NULL} with \code{power} input, the sample size will be computed to achieve the targeted power.
EXAMPLE:
# AHR method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_ahr(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_ahr(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36
)
x %>% summary()

# WLR test with FH weights ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_fh(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36,
  rho = 1, gamma = 1
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_fh(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36,
  rho = 1, gamma = 1
)
x %>% summary()

# LF method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_lf(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = 100,
    fail_rate = log(2) / 12,
    hr = .7,
    dropout_rate = .001
  ),
  study_duration = 36
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_lf(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = 100,
    fail_rate = log(2) / 12,
    hr = .7,
    dropout_rate = .001
  ),
  study_duration = 36
)
x %>% summary()

# MaxCombo test ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_maxcombo(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36,
  rho = c(0, 0.5), gamma = c(0, 0), tau = c(-1, -1)
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_maxcombo(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36,
  rho = c(0, 0.5), gamma = c(0, 0), tau = c(-1, -1)
)
x %>% summary()

# WLR test with MB weights ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_mb(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36,
  tau = 4, w_max = 2
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_mb(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = c(4, 100),
    fail_rate = log(2) / 12,
    hr = c(1, .6),
    dropout_rate = .001
  ),
  study_duration = 36,
  tau = 4, w_max = 2
)
x %>% summary()

# Milestone method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_milestone(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = 100,
    fail_rate = log(2) / 12,
    hr = .7,
    dropout_rate = .001
  ),
  study_duration = 36,
  tau = 18
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_milestone(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration
    = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = 100,
    fail_rate = log(2) / 12,
    hr = .7,
    dropout_rate = .001
  ),
  study_duration = 36,
  tau = 18
)
x %>% summary()

# Binary endpoint with risk differences ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_rd(
  alpha = 0.025, power = 0.9,
  p_c = .15, p_e = .1, rd0 = 0, ratio = 1
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_rd(
  alpha = 0.025, power = NULL,
  p_c = .15, p_e = .1, rd0 = 0, n = 2000, ratio = 1
)
x %>% summary()

# RMST method ----
library(dplyr)
# Example 1: given power and compute sample size
x <- fixed_design_rmst(
  alpha = .025, power = .9,
  enroll_rate = define_enroll_rate(duration = 18, rate = 1),
  fail_rate = define_fail_rate(
    duration = 100,
    fail_rate = log(2) / 12,
    hr = .7,
    dropout_rate = .001
  ),
  study_duration = 36,
  tau = 18
)
x %>% summary()
# Example 2: given sample size and compute power
x <- fixed_design_rmst(
  alpha = .025,
  enroll_rate = define_enroll_rate(duration = 18, rate = 20),
  fail_rate = define_fail_rate(
    duration = 100,
    fail_rate = log(2) / 12,
    hr = .7,
    dropout_rate = .001
  ),
  study_duration = 36,
  tau = 18
)
x %>% summary()

FUNCTION: gs_b
TITLE: Default boundary generation
DESCRIPTION: \code{gs_b()} is the simplest version of a function to be used with the \code{upper} and \code{lower} arguments in \code{\link[=gs_power_npe]{gs_power_npe()}} and \code{\link[=gs_design_npe]{gs_design_npe()}}, or the \code{upper_bound} and \code{lower_bound} arguments in \code{gs_prob_combo()} and \code{pmvnorm_combo()}. It simply returns the vector of Z-values in the input vector \code{par} or, if \code{k} is specified, \code{par[k]}. Note that if bounds need to change with changing information at analyses, \code{gs_b()} should not be used. For instance, for spending function bounds, use \code{\link[=gs_spending_bound]{gs_spending_bound()}}.
ARGUMENTS:
par  For \code{gs_b()}, this is just the Z-values for the boundaries; can include infinite values.
k  If \code{NULL} (default), return \code{par}; otherwise, return \code{par[k]}.
...  Further arguments passed to or from other methods.
EXAMPLE:
# Simple: enter a vector of length 3 for bound
gs_b(par = 4:2)
# 2nd element of par
gs_b(par = 4:2, k = 2)
# Generate an efficacy bound using a spending function
# Use Lan-DeMets spending approximation of O'Brien-Fleming bound
# at 50%, 75% and 100% of final spending
# Information fraction
IF <- c(.5, .75, 1)
gs_b(par = gsDesign::gsDesign(
  alpha = .025, k = length(IF),
  test.type = 1, sfu = gsDesign::sfLDOF,
  timing = IF
)$upper$bound)

FUNCTION: gs_bound_summary
TITLE: Bound summary table
DESCRIPTION: Summarizes the efficacy and futility bounds for each analysis.
ARGUMENTS:
x  Design object.
digits  Number of digits past the decimal to be printed in the body of the table.
ddigits  Number of digits past the decimal to be printed for the natural parameter delta.
tdigits  Number of digits past the decimal point to be shown for the estimated timing of each analysis.
timename  Text string indicating the time unit.
alpha  Vector of alpha values to compute additional efficacy columns.
EXAMPLE:
library(gsDesign2)
x <- gs_design_ahr(info_frac = c(.25, .75, 1), analysis_time = c(12, 25, 36))
gs_bound_summary(x)
x <- gs_design_wlr(info_frac = c(.25, .75, 1), analysis_time = c(12, 25, 36))
gs_bound_summary(x)
# Report multiple efficacy bounds (only supported for AHR designs)
x <- gs_design_ahr(analysis_time = 1:3 * 12, alpha = 0.0125)
gs_bound_summary(x, alpha = c(0.025, 0.05))

FUNCTION: gs_cp_npe
TITLE: Conditional power computation with non-constant effect size
DESCRIPTION: Conditional power computation with non-constant effect size.
ARGUMENTS:
theta  A vector of length two, which specifies the natural parameter for treatment effect. The first element of \code{theta} is the treatment effect at an interim analysis i.
The second element of \code{theta} is the treatment effect at a future analysis j.
info  A vector of length two, which specifies the statistical information under the treatment effect \code{theta}.
a  Interim z-value at analysis i (scalar).
b  Future target z-value at analysis j (scalar).
EXAMPLE:
library(gsDesign2)
library(dplyr)
# Calculate conditional power under arbitrary theta and info.
# In practice, the values of theta and info commonly come from a design.
# More examples are available in the pkgdown vignettes.
gs_cp_npe(theta = c(.1, .2), info = c(15, 35), a = 1.5, b = 1.96)

FUNCTION: gs_create_arm
TITLE: Create npsurvSS arm object
DESCRIPTION: Create an npsurvSS arm object.
ARGUMENTS:
enroll_rate  Enrollment rates from \code{\link[=define_enroll_rate]{define_enroll_rate()}}.
fail_rate  Failure and dropout rates from \code{\link[=define_fail_rate]{define_fail_rate()}}.
ratio  Experimental:Control randomization ratio.
total_time  Total analysis time.
EXAMPLE:
enroll_rate <- define_enroll_rate(
  duration = c(2, 2, 10),
  rate = c(3, 6, 9)
)
fail_rate <- define_fail_rate(
  duration = c(3, 100),
  fail_rate = log(2) / c(9, 18),
  hr = c(.9, .6),
  dropout_rate = .001
)
gs_create_arm(enroll_rate, fail_rate, ratio = 1)

FUNCTION: gs_design_ahr
TITLE: Calculate sample size and bounds given targeted power and Type I error in a group sequential design using average hazard ratio under non-proportional hazards
DESCRIPTION: Calculate sample size and bounds given targeted power and Type I error in a group sequential design using average hazard ratio under non-proportional hazards.
ARGUMENTS:
enroll_rate  Enrollment rates defined by \code{define_enroll_rate()}.
fail_rate  Failure and dropout rates defined by \code{define_fail_rate()}.
alpha  One-sided Type I error.
beta  Type II error.
info_frac  Targeted information fraction for analyses. See details.
analysis_time  Targeted calendar timing of analyses. See details.
ratio  Experimental:Control randomization ratio.
binding  Indicator of whether the futility bound is binding; the default of \code{FALSE} is recommended.
upper  Function to compute the upper bound.
\itemize{
\item \code{gs_spending_bound()}: alpha-spending efficacy bounds.
\item \code{gs_b()}: fixed efficacy bounds.
}
upar  Parameters passed to \code{upper}.
\itemize{
\item If \code{upper = gs_b}, then \code{upar} is a numerical vector specifying the fixed efficacy bounds per analysis.
\item If \code{upper = gs_spending_bound}, then \code{upar} is a list including
\itemize{
\item \code{sf} for the spending function family.
\item \code{total_spend} for the total alpha spend.
\item \code{param} for the parameter of the spending function.
\item \code{timing}, which specifies spending time if different from information-based spending; see details.
}
}
lower  Function to compute the lower bound, which can be set up similarly to \code{upper}. See \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}.
lpar  Parameters passed to \code{lower}, which can be set up similarly to \code{upar}.
h1_spending  Indicator that the lower bound is to be set by spending under the alternate hypothesis (input \code{fail_rate}) if spending is used for the lower bound. If this is \code{FALSE}, then the lower bound spending is under the null hypothesis. This is for two-sided symmetric or asymmetric testing under the null hypothesis; see \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}.
test_upper  Indicator of which analyses should include an upper (efficacy) bound; a single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound.
test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used. } r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. interval An interval presumed to include the times at which expected event count is equal to targeted event. Normally, this can be ignored by the user as it is set to \code{c(.01, 1000)}.
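The \code{timing} element of \code{upar} can decouple spending from the observed information fraction. A minimal sketch of this usage follows; the analysis times and spending fractions are illustrative assumptions, not defaults:

```r
library(gsDesign2)

# Sketch: force alpha spending at fractions .5 and 1 via `timing`,
# regardless of the information fraction actually attained (illustrative).
gs_design_ahr(
  analysis_time = c(24, 36),
  upper = gs_spending_bound,
  upar = list(
    sf = gsDesign::sfLDOF, total_spend = 0.025,
    param = NULL, timing = c(.5, 1)
  ),
  lower = gs_b,
  lpar = rep(-Inf, 2) # no futility bound
)
```

Leaving \code{timing = NULL} (the default) spends at the information fractions computed by the design.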
EXAMPLE: library(gsDesign) library(gsDesign2) library(dplyr) # Example 1 ---- # call with defaults gs_design_ahr() # Example 2 ---- # Single analysis gs_design_ahr(analysis_time = 40) # Example 3 ---- # Multiple analysis_time gs_design_ahr(analysis_time = c(12, 24, 36)) # Example 4 ---- # Specified information fraction \donttest{ gs_design_ahr(info_frac = c(.25, .75, 1), analysis_time = 36) } # Example 5 ---- # multiple analysis times & info_frac # driven by times gs_design_ahr(info_frac = c(.25, .75, 1), analysis_time = c(12, 25, 36)) # driven by info_frac \donttest{ gs_design_ahr(info_frac = c(1 / 3, .8, 1), analysis_time = c(12, 25, 36)) } # Example 6 ---- # 2-sided symmetric design with O'Brien-Fleming spending \donttest{ gs_design_ahr( analysis_time = c(12, 24, 36), binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), h1_spending = FALSE ) } # 2-sided asymmetric design with O'Brien-Fleming upper spending # Pocock lower spending under H1 (NPH) \donttest{ gs_design_ahr( analysis_time = c(12, 24, 36), binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDPocock, total_spend = 0.1, param = NULL, timing = NULL), h1_spending = TRUE ) } # Example 7 ---- \donttest{ gs_design_ahr( alpha = 0.0125, analysis_time = c(12, 24, 36), upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.0125, param = NULL, timing = NULL), lower = gs_b, lpar = rep(-Inf, 3) ) gs_design_ahr( alpha = 0.0125, analysis_time = c(12, 24, 36), upper = gs_b, upar = gsDesign::gsDesign( k = 3, test.type = 1, n.I = c(.25, .75, 1), sfu = sfLDOF, sfupar = NULL, alpha = 0.0125 )$upper$bound, lower = gs_b, lpar = rep(-Inf, 3) ) } FUNCTION: gs_design_combo TITLE: Group 
sequential design using MaxCombo test under non-proportional hazards DESCRIPTION: Group sequential design using MaxCombo test under non-proportional hazards ARGUMENTS: enroll_rate Enrollment rates defined by \code{define_enroll_rate()}. fail_rate Failure and dropout rates defined by \code{define_fail_rate()}. fh_test A data frame to summarize the test in each analysis. See examples for its data structure. ratio Experimental:Control randomization ratio. alpha One-sided Type I error. beta Type II error. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. upper Function to compute upper bound. \itemize{ \item \code{gs_spending_bound()}: alpha-spending efficacy bounds. \item \code{gs_b()}: fixed efficacy bounds. } upar Parameters passed to \code{upper}. \itemize{ \item If \code{upper = gs_b}, then \code{upar} is a numerical vector specifying the fixed efficacy bounds per analysis. \item If \code{upper = gs_spending_bound}, then \code{upar} is a list including \itemize{ \item \code{sf} for the spending function family. \item \code{total_spend} for total alpha spend. \item \code{param} for the parameter of the spending function. \item \code{timing} specifies spending time if different from information-based spending; see details. } } lower Function to compute lower bound, which can be set up similarly as \code{upper}. See \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. lpar Parameters passed to \code{lower}, which can be set up similarly as \code{upar}. algorithm An object of class \code{\link[mvtnorm]{GenzBretz}}, \code{\link[mvtnorm]{Miwa}} or \code{\link[mvtnorm]{TVPACK}} specifying both the algorithm to be used as well as the associated hyperparameters. n_upper_bound A numeric value for the upper limit of sample size. ... Additional parameters passed to \link[mvtnorm:pmvnorm]{mvtnorm::pmvnorm}.
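For orientation, the simplest \code{fh_test} specifies a single logrank test (\code{rho = 0, gamma = 0}) at every analysis. This sketch mirrors the column structure used in the examples; the analysis times are illustrative:

```r
# Sketch: logrank-only MaxCombo specification, one test per analysis.
fh_test_logrank <- data.frame(
  rho = 0, gamma = 0,            # rho = gamma = 0 gives the plain logrank test
  tau = -1,                      # tau = -1 as in the examples below
  test = 1,                      # a single test
  analysis = 1:3,                # three planned analyses
  analysis_time = c(12, 24, 36)  # illustrative calendar times (months)
)
fh_test_logrank
```

Additional rows with nonzero \code{rho}/\code{gamma} add Fleming-Harrington components to the combination test, as in the examples below.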
EXAMPLE: # The example is slow to run library(dplyr) library(mvtnorm) library(gsDesign) enroll_rate <- define_enroll_rate( duration = 12, rate = 500 / 12 ) fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 15, # median survival 15 month hr = c(1, .6), dropout_rate = 0.001 ) fh_test <- rbind( data.frame( rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3, analysis_time = c(12, 24, 36) ), data.frame( rho = c(0, 0.5), gamma = 0.5, tau = -1, test = 2:3, analysis = 3, analysis_time = 36 ) ) x <- gsSurv( k = 3, test.type = 4, alpha = 0.025, beta = 0.2, astar = 0, timing = 1, sfu = sfLDOF, sfupar = 0, sfl = sfLDOF, sflpar = 0, lambdaC = 0.1, hr = 0.6, hr0 = 1, eta = 0.01, gamma = 10, R = 12, S = NULL, T = 36, minfup = 24, ratio = 1 ) # Example 1 ---- # User-defined boundary \donttest{ gs_design_combo( enroll_rate, fail_rate, fh_test, alpha = 0.025, beta = 0.2, ratio = 1, binding = FALSE, upar = x$upper$bound, lpar = x$lower$bound ) } # Example 2 ---- \donttest{ # Boundary derived by spending function gs_design_combo( enroll_rate, fail_rate, fh_test, alpha = 0.025, beta = 0.2, ratio = 1, binding = FALSE, upper = gs_spending_combo, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), # alpha spending lower = gs_spending_combo, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) # beta spending ) } FUNCTION: gs_design_npe TITLE: Group sequential design computation with non-constant effect and information DESCRIPTION: Derives group sequential design size, bounds and boundary crossing probabilities based on proportionate information and effect size at analyses. It allows a non-constant treatment effect over time, but also can be applied for the usual homogeneous effect size designs. It requires treatment effect and proportionate statistical information at each analysis as well as a method of deriving bounds, such as spending.
The routine enables two things not available in the gsDesign package: \enumerate{ \item non-constant effect; \item more flexibility in boundary selection. } For many applications, the non-proportional-hazards design function \code{gs_design_nph()} will be used; it calls this function. Initial bound types supported are 1) spending bounds, 2) fixed bounds, and 3) Haybittle-Peto-like bounds. The requirement is to have a boundary update method that can compute each bound without knowledge of future bounds. As an example, bounds based on conditional power that require knowledge of all future bounds are not supported by this routine; a more limited conditional power method will be demonstrated. Boundary family designs such as Wang-Tsiatis designs, including the original (non-spending-function-based) O'Brien-Fleming and Pocock designs, are not supported by \code{\link[=gs_power_npe]{gs_power_npe()}}. ARGUMENTS: theta Natural parameter for group sequential design representing expected incremental drift at all analyses; used for power calculation. theta0 Natural parameter used for upper bound spending; if \code{NULL}, this will be set to 0. theta1 Natural parameter used for lower bound spending; if \code{NULL}, this will be set to \code{theta}, which yields the usual beta-spending. If set to 0, spending is 2-sided under the null hypothesis. info Proportionate statistical information at all analyses for input \code{theta}. info0 Proportionate statistical information under null hypothesis, if different than alternative; impacts null hypothesis bound calculation. info1 Proportionate statistical information under alternate hypothesis; impacts null hypothesis bound calculation. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used.
} alpha One-sided Type I error. beta Type II error. upper Function to compute upper bound. upar Parameters passed to the function provided in \code{upper}. lower Function to compute lower bound. lpar Parameters passed to the function provided in \code{lower}. test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale).
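For time-to-event applications, a convenient approximate mapping onto these arguments is \code{theta = -log(AHR)} with statistical information of roughly events / 4 under 1:1 randomization; this Schoenfeld-type mapping is an illustrative assumption here, not something \code{gs_design_npe()} enforces:

```r
library(gsDesign2)

# Sketch: hypothetical average hazard ratios and event counts by analysis.
ahr <- c(.9, .8, .7)
events <- c(100, 200, 300)

# theta = -log(AHR); info approximately events / 4
# (Schoenfeld-type approximation, 1:1 randomization).
gs_design_npe(theta = -log(ahr), info = events / 4)
```

The examples below use the same \code{theta}/\code{info} interface for a difference-of-proportions setting.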
EXAMPLE: library(dplyr) library(gsDesign) # Example 1 ---- # Single analysis # Lachin book p 71 difference of proportions example pc <- .28 # Control response rate pe <- .40 # Experimental response rate p0 <- (pc + pe) / 2 # Ave response rate under H0 # Information per increment of 1 in sample size info0 <- 1 / (p0 * (1 - p0) * 4) info <- 1 / (pc * (1 - pc) * 2 + pe * (1 - pe) * 2) # Result should round up to next even number = 652 # Divide information needed under H1 by information per patient added gs_design_npe(theta = pe - pc, info = info, info0 = info0) # Example 2 ---- # Fixed bound x <- gs_design_npe( alpha = 0.0125, theta = c(.1, .2, .3), info = (1:3) * 80, info0 = (1:3) * 80, upper = gs_b, upar = gsDesign::gsDesign(k = 3, sfu = gsDesign::sfLDOF, alpha = 0.0125)$upper$bound, lower = gs_b, lpar = c(-1, 0, 0) ) x # Same upper bound; this represents non-binding Type I error and will total 0.025 gs_power_npe( theta = rep(0, 3), info = (x %>% filter(bound == "upper"))$info, upper = gs_b, upar = (x %>% filter(bound == "upper"))$z, lower = gs_b, lpar = rep(-Inf, 3) ) # Example 3 ---- # Spending bound examples # Design with futility only at analysis 1; efficacy only at analyses 2, 3 # Spending bound for efficacy; fixed bound for futility # NOTE: test_upper and test_lower DO NOT WORK with gs_b; must explicitly make bounds infinite # test_upper and test_lower DO WORK with gs_spending_bound gs_design_npe( theta = c(.1, .2, .3), info = (1:3) * 40, info0 = (1:3) * 40, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_b, lpar = c(-1, -Inf, -Inf), test_upper = c(FALSE, TRUE, TRUE) ) # one can try `info_scale = "h1_info"` or `info_scale = "h0_info"` here gs_design_npe( theta = c(.1, .2, .3), info = (1:3) * 40, info0 = (1:3) * 30, info_scale = "h1_info", upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_b, lpar = c(-1, -Inf, 
-Inf), test_upper = c(FALSE, TRUE, TRUE) ) # Example 4 ---- # Spending function bounds # 2-sided asymmetric bounds # Lower spending based on non-zero effect gs_design_npe( theta = c(.1, .2, .3), info = (1:3) * 40, info0 = (1:3) * 30, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL) ) # Example 5 ---- # Two-sided symmetric spend, O'Brien-Fleming spending # Typically, 2-sided bounds are binding xx <- gs_design_npe( theta = c(.1, .2, .3), info = (1:3) * 40, binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL) ) xx # Re-use these bounds under alternate hypothesis # Always use binding = TRUE for power calculations gs_power_npe( theta = c(.1, .2, .3), info = (1:3) * 40, binding = TRUE, upper = gs_b, lower = gs_b, upar = (xx %>% filter(bound == "upper"))$z, lpar = -(xx %>% filter(bound == "upper"))$z ) FUNCTION: gs_design_rd TITLE: Group sequential design of binary outcome measured in risk difference DESCRIPTION: Group sequential design of binary outcome measured in risk difference ARGUMENTS: p_c Rate at the control group. p_e Rate at the experimental group. info_frac Statistical information fraction. rd0 Treatment effect under super-superiority designs; the default is 0. alpha One-sided Type I error. beta Type II error. ratio Experimental:Control randomization ratio (not yet implemented). stratum_prev Prevalence of each stratum. If it is an unstratified design then \code{NULL}. Otherwise it is a tibble containing two columns (stratum and prevalence). weight The weighting scheme for stratified population. upper Function to compute upper bound. lower Function to compute lower bound.
upar Parameters passed to \code{upper}. lpar Parameters passed to \code{lower}. test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value of \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used. } binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale). h1_spending Indicator that lower bound to be set by spending under alternate hypothesis (input \code{fail_rate}) if spending is used for lower bound.
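Combining the arguments above, an asymmetric design with spending-based futility under H1 might look like the following sketch; the rates, spending functions, and \code{h1_spending = TRUE} are illustrative choices, not recommendations:

```r
library(gsDesign2)

# Sketch: unstratified design, O'Brien-Fleming efficacy spending,
# Hwang-Shih-DeCani futility spending under H1 (all values illustrative).
gs_design_rd(
  p_c = tibble::tibble(stratum = "All", rate = .2),
  p_e = tibble::tibble(stratum = "All", rate = .15),
  info_frac = c(0.5, 1),
  rd0 = 0, alpha = .025, beta = .1, ratio = 1,
  stratum_prev = NULL, weight = "unstratified",
  upper = gs_spending_bound,
  upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL),
  lower = gs_spending_bound,
  lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL),
  h1_spending = TRUE
)
```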
EXAMPLE: library(gsDesign) # Example 1 ---- # unstratified group sequential design x <- gs_design_rd( p_c = tibble::tibble(stratum = "All", rate = .2), p_e = tibble::tibble(stratum = "All", rate = .15), info_frac = c(0.7, 1), rd0 = 0, alpha = .025, beta = .1, ratio = 1, stratum_prev = NULL, weight = "unstratified", upper = gs_b, lower = gs_b, upar = gsDesign(k = 2, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) y <- gs_power_rd( p_c = tibble::tibble(stratum = "All", rate = .2), p_e = tibble::tibble(stratum = "All", rate = .15), n = tibble::tibble(stratum = "All", n = x$analysis$n, analysis = 1:2), rd0 = 0, ratio = 1, weight = "unstratified", upper = gs_b, lower = gs_b, upar = gsDesign(k = 2, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) # The above two designs share the same power with the same sample size and treatment effect x$bound$probability[x$bound$bound == "upper" & x$bound$analysis == 2] y$bound$probability[y$bound$bound == "upper" & y$bound$analysis == 2] # Example 2 ---- # stratified group sequential design gs_design_rd( p_c = tibble::tibble( stratum = c("biomarker positive", "biomarker negative"), rate = c(.2, .25) ), p_e = tibble::tibble( stratum = c("biomarker positive", "biomarker negative"), rate = c(.15, .22) ), info_frac = c(0.7, 1), rd0 = 0, alpha = .025, beta = .1, ratio = 1, stratum_prev = tibble::tibble( stratum = c("biomarker positive", "biomarker negative"), prevalence = c(.4, .6) ), weight = "ss", upper = gs_spending_bound, lower = gs_b, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lpar = rep(-Inf, 2) ) FUNCTION: gs_design_wlr TITLE: Group sequential design using weighted log-rank test under non-proportional hazards DESCRIPTION: Group sequential design using weighted log-rank test under non-proportional hazards ARGUMENTS: enroll_rate Enrollment rates defined by \code{define_enroll_rate()}.
fail_rate Failure and dropout rates defined by \code{define_fail_rate()}. weight Weight of weighted log-rank test: \itemize{ \item \code{"logrank"} = regular logrank test. \item \code{list(method = "fh", param = list(rho = ..., gamma = ...))} = Fleming-Harrington weighting functions. \item \code{list(method = "mb", param = list(tau = ..., w_max = ...))} = Magirr and Burman weighting functions. } approx Approximate estimation method for Z statistics. \itemize{ \item \code{"event_driven"} = only works under the proportional hazards model with the log-rank test. \item \code{"asymptotic"}. } alpha One-sided Type I error. beta Type II error. ratio Experimental:Control randomization ratio. info_frac Targeted information fraction for analyses. See details. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used. } analysis_time Targeted calendar timing of analyses. See details. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. upper Function to compute upper bound. \itemize{ \item \code{gs_spending_bound()}: alpha-spending efficacy bounds. \item \code{gs_b()}: fixed efficacy bounds. } upar Parameters passed to \code{upper}. \itemize{ \item If \code{upper = gs_b}, then \code{upar} is a numerical vector specifying the fixed efficacy bounds per analysis. \item If \code{upper = gs_spending_bound}, then \code{upar} is a list including \itemize{ \item \code{sf} for the spending function family. \item \code{total_spend} for total alpha spend. \item \code{param} for the parameter of the spending function. \item \code{timing} specifies spending time if different from information-based spending; see details. } } lower Function to compute lower bound, which can be set up similarly as \code{upper}.
See \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. lpar Parameters passed to \code{lower}, which can be set up similarly as \code{upar}. test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. h1_spending Indicator that lower bound to be set by spending under alternate hypothesis (input \code{fail_rate}) if spending is used for lower bound. If this is \code{FALSE}, then the lower bound spending is under the null hypothesis. This is for two-sided symmetric or asymmetric testing under the null hypothesis; see \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. interval An interval presumed to include the times at which expected event count is equal to targeted event. Normally, this can be ignored by the user as it is set to \code{c(.01, 1000)}.
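The argument list above is long, but a Fleming-Harrington-weighted design needs only a few of these arguments. This sketch uses FH(0, 0.5) weighting with efficacy-only spending; the enrollment and failure rates are illustrative assumptions:

```r
library(gsDesign2)

# Illustrative enrollment and failure rates (delayed-effect pattern).
enroll_rate <- define_enroll_rate(duration = 12, rate = 1)
fail_rate <- define_fail_rate(
  duration = c(4, 100),
  fail_rate = log(2) / 15, # median survival 15 months
  hr = c(1, .6),
  dropout_rate = 0.001
)

# Sketch: FH(rho = 0, gamma = 0.5) weighting, O'Brien-Fleming efficacy
# spending, no futility bound.
gs_design_wlr(
  enroll_rate = enroll_rate,
  fail_rate = fail_rate,
  weight = list(method = "fh", param = list(rho = 0, gamma = 0.5)),
  upper = gs_spending_bound,
  upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025),
  lower = gs_b,
  lpar = rep(-Inf, 2),
  analysis_time = c(24, 36)
)
```

The examples below use the Magirr-Burman weighting for the same enrollment and failure assumptions.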
EXAMPLE: library(dplyr) library(mvtnorm) library(gsDesign) library(gsDesign2) # set enrollment rates enroll_rate <- define_enroll_rate(duration = 12, rate = 1) # set failure rates fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 15, # median survival 15 month hr = c(1, .6), dropout_rate = 0.001 ) # Example 1 ---- # Information fraction driven design gs_design_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, ratio = 1, alpha = 0.025, beta = 0.2, weight = list(method = "mb", param = list(tau = Inf, w_max = 2)), upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2), analysis_time = 36, info_frac = c(0.6, 1) ) # Example 2 ---- # Calendar time driven design gs_design_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, ratio = 1, alpha = 0.025, beta = 0.2, weight = list(method = "mb", param = list(tau = Inf, w_max = 2)), upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2), analysis_time = c(24, 36), info_frac = NULL ) # Example 3 ---- # Both calendar time and information fraction driven design gs_design_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, ratio = 1, alpha = 0.025, beta = 0.2, weight = list(method = "mb", param = list(tau = Inf, w_max = 2)), upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2), analysis_time = c(24, 36), info_frac = c(0.6, 1) ) FUNCTION: gs_info_ahr TITLE: Information and effect size based on AHR approximation DESCRIPTION: Based on piecewise enrollment rate, failure rate, and dropout rates computes approximate information and effect size using an average hazard ratio model. ARGUMENTS: enroll_rate Enrollment rates from \code{\link[=define_enroll_rate]{define_enroll_rate()}}. 
fail_rate Failure and dropout rates from \code{\link[=define_fail_rate]{define_fail_rate()}}. ratio Experimental:Control randomization ratio. event Targeted minimum events at each analysis. analysis_time Targeted minimum study duration at each analysis. interval An interval that is presumed to include the time at which expected event count is equal to targeted event. EXAMPLE: library(gsDesign) library(gsDesign2) # Example 1 ---- \donttest{ # Only put in targeted events gs_info_ahr(event = c(30, 40, 50)) } # Example 2 ---- # Only put in targeted analysis times gs_info_ahr(analysis_time = c(18, 27, 36)) # Example 3 ---- \donttest{ # Some analysis times after time at which targeted event accrue # Check that both Time >= input analysis_time and event >= input event gs_info_ahr(event = c(30, 40, 50), analysis_time = c(16, 19, 26)) gs_info_ahr(event = c(30, 40, 50), analysis_time = c(14, 20, 24)) } FUNCTION: gs_info_combo TITLE: Information and effect size for MaxCombo test DESCRIPTION: Information and effect size for MaxCombo test ARGUMENTS: enroll_rate An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}. fail_rate A \code{fail_rate} data frame with or without stratum created by \code{\link[=define_fail_rate]{define_fail_rate()}}. ratio Experimental:Control randomization ratio (not yet implemented). event Targeted events at each analysis. analysis_time Minimum time of analysis. rho Weighting parameters. gamma Weighting parameters. tau Weighting parameters. approx Approximation method. EXAMPLE: gs_info_combo(rho = c(0, 0.5), gamma = c(0.5, 0), analysis_time = c(12, 24)) FUNCTION: gs_info_rd TITLE: Information and effect size under risk difference DESCRIPTION: Information and effect size under risk difference ARGUMENTS: p_c Rate at the control group. p_e Rate at the experimental group. n Sample size. rd0 The risk difference under H0. ratio Experimental:Control randomization ratio. 
weight Weighting method, can be \code{"unstratified"}, \code{"ss"}, or \code{"invar"}. EXAMPLE: # Example 1 ---- # unstratified case with H0: rd0 = 0 gs_info_rd( p_c = tibble::tibble(stratum = "All", rate = .15), p_e = tibble::tibble(stratum = "All", rate = .1), n = tibble::tibble(stratum = "All", n = c(100, 200, 300), analysis = 1:3), rd0 = 0, ratio = 1 ) # Example 2 ---- # unstratified case with H0: rd0 != 0 gs_info_rd( p_c = tibble::tibble(stratum = "All", rate = .2), p_e = tibble::tibble(stratum = "All", rate = .15), n = tibble::tibble(stratum = "All", n = c(100, 200, 300), analysis = 1:3), rd0 = 0.005, ratio = 1 ) # Example 3 ---- # stratified case under sample size weighting and H0: rd0 = 0 gs_info_rd( p_c = tibble::tibble(stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25)), p_e = tibble::tibble(stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19)), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(50, 100, 200, 40, 80, 160, 60, 120, 240) ), rd0 = 0, ratio = 1, weight = "ss" ) # Example 4 ---- # stratified case under inverse variance weighting and H0: rd0 = 0 gs_info_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(50, 100, 200, 40, 80, 160, 60, 120, 240) ), rd0 = 0, ratio = 1, weight = "invar" ) # Example 5 ---- # stratified case under sample size weighting and H0: rd0 != 0 gs_info_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(50, 100, 200, 40, 80, 160, 60, 120, 240) ), rd0 = 0.02, ratio = 1, weight = "ss" ) # Example 6 ---- # stratified case under inverse variance weighting and H0: rd0 != 0 
gs_info_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(50, 100, 200, 40, 80, 160, 60, 120, 240) ), rd0 = 0.02, ratio = 1, weight = "invar" ) # Example 7 ---- # stratified case under inverse variance weighting and H0: rd0 != 0 and # rd0 differs for different strata gs_info_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(50, 100, 200, 40, 80, 160, 60, 120, 240) ), rd0 = tibble::tibble( stratum = c("S1", "S2", "S3"), rd0 = c(0.01, 0.02, 0.03) ), ratio = 1, weight = "invar" ) FUNCTION: gs_info_wlr TITLE: Information and effect size for weighted log-rank test DESCRIPTION: Based on piecewise enrollment rate, failure rate, and dropout rates computes approximate information and effect size using an average hazard ratio model. ARGUMENTS: enroll_rate An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}. fail_rate Failure and dropout rates. ratio Experimental:Control randomization ratio. event Targeted minimum events at each analysis. analysis_time Targeted minimum study duration at each analysis. weight Weight of weighted log-rank test: \itemize{ \item \code{"logrank"} = regular logrank test. \item \code{list(method = "fh", param = list(rho = ..., gamma = ...))} = Fleming-Harrington weighting functions. \item \code{list(method = "mb", param = list(tau = ..., w_max = ...))} = Magirr and Burman weighting functions. } approx Approximate estimation method for Z statistics. \itemize{ \item \code{"event_driven"} = only works under the proportional hazards model with the log-rank test.
\item \code{"asymptotic"}. } interval An interval that is presumed to include the time at which expected event count is equal to targeted event. EXAMPLE: library(gsDesign2) # Set enrollment rates enroll_rate <- define_enroll_rate(duration = 12, rate = 500 / 12) # Set failure rates fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 15, # median survival 15 month hr = c(1, .6), dropout_rate = 0.001 ) # Set the targeted number of events and analysis time event <- c(30, 40, 50) analysis_time <- c(10, 24, 30) gs_info_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = event, analysis_time = analysis_time ) FUNCTION: gs_power_ahr TITLE: Group sequential design power using average hazard ratio under non-proportional hazards DESCRIPTION: Calculate power given the sample size in a group sequential design using average hazard ratio under non-proportional hazards. ARGUMENTS: enroll_rate Enrollment rates defined by \code{define_enroll_rate()}. fail_rate Failure and dropout rates defined by \code{define_fail_rate()}. event A numerical vector specifying the targeted events at each analysis. See details. analysis_time Targeted calendar timing of analyses. See details. upper Function to compute upper bound. \itemize{ \item \code{gs_spending_bound()}: alpha-spending efficacy bounds. \item \code{gs_b()}: fixed efficacy bounds. } upar Parameters passed to \code{upper}. \itemize{ \item If \code{upper = gs_b}, then \code{upar} is a numerical vector specifying the fixed efficacy bounds per analysis. \item If \code{upper = gs_spending_bound}, then \code{upar} is a list including \itemize{ \item \code{sf} for the spending function family. \item \code{total_spend} for total alpha spend. \item \code{param} for the parameter of the spending function. \item \code{timing} specifies spending time if different from information-based spending; see details. } } lower Function to compute lower bound, which can be set up similarly as \code{upper}.
See \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. lpar Parameters passed to \code{lower}, which can be set up similarly as \code{upar}. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. ratio Experimental:Control randomization ratio. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. h1_spending Indicator that lower bound to be set by spending under alternate hypothesis (input \code{fail_rate}) if spending is used for lower bound. If this is \code{FALSE}, then the lower bound spending is under the null hypothesis. This is for two-sided symmetric or asymmetric testing under the null hypothesis; see \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used. } r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user.
interval An interval presumed to include the times at which the expected event count equals the targeted events. Normally, this can be ignored by the user as it is set to \code{c(.01, 1000)}. integer Indicator of whether integer sample size and events are intended. This argument is used when using \code{\link[=to_integer]{to_integer()}}. EXAMPLE: library(gsDesign2) library(dplyr) # Example 1 ---- # The default output of `gs_power_ahr()` is driven by events, # i.e., `event = c(30, 40, 50)`, `analysis_time = NULL` \donttest{ gs_power_ahr(lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.1)) } # Example 2 ---- # 2-sided symmetric O'Brien-Fleming spending bound, driven by analysis time, # i.e., `event = NULL`, `analysis_time = c(12, 24, 36)` gs_power_ahr( analysis_time = c(12, 24, 36), event = NULL, binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025) ) # Example 3 ---- # 2-sided symmetric O'Brien-Fleming spending bound, driven by event, # i.e., `event = c(20, 50, 70)`, `analysis_time = NULL` # Note that this assumes the targeted final event count for the design is 70 events. # If the actual targeted final events were 65, then `timing = c(20, 50, 70) / 65` # would be added to the `upar` and `lpar` lists. # NOTE: at present the computed information fraction in output `analysis` is based # on 70 events rather than 65 events when the `timing` argument is used in this way. # A vignette on this topic will be forthcoming.
\donttest{ gs_power_ahr( analysis_time = NULL, event = c(20, 50, 70), binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025) ) } # Example 4 ---- # 2-sided symmetric O'Brien-Fleming spending bound, # driven by both `event` and `analysis_time`, i.e., # both `event` and `analysis_time` are not `NULL`, # then each analysis is driven by the later of the two, i.e., # Time = max(analysis_time, calculated Time for targeted event) # Events = max(events, calculated events for targeted analysis_time) \donttest{ gs_power_ahr( analysis_time = c(12, 24, 36), event = c(30, 40, 50), h1_spending = FALSE, binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025) ) } FUNCTION: gs_power_combo TITLE: Group sequential design power using MaxCombo test under non-proportional hazards DESCRIPTION: Group sequential design power using MaxCombo test under non-proportional hazards ARGUMENTS: enroll_rate Enrollment rates defined by \code{define_enroll_rate()}. fail_rate Failure and dropout rates defined by \code{define_fail_rate()}. fh_test A data frame to summarize the test in each analysis. See examples for its data structure. ratio Experimental:Control randomization ratio. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. upper Function to compute upper bound. \itemize{ \item \code{gs_spending_bound()}: alpha-spending efficacy bounds. \item \code{gs_b()}: fixed efficacy bounds. } upar Parameters passed to \code{upper}. \itemize{ \item If \code{upper = gs_b}, then \code{upar} is a numerical vector specifying the fixed efficacy bounds per analysis. \item If \code{upper = gs_spending_bound}, then \code{upar} is a list including \itemize{ \item \code{sf} for the spending function family.
\item \code{total_spend} for total alpha spend. \item \code{param} for the parameter of the spending function. \item \code{timing} specifies spending time if different from information-based spending; see details. } } lower Function to compute lower bound, which can be set up similarly as \code{upper}. See \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. lpar Parameters passed to \code{lower}, which can be set up similarly as \code{upar}. algorithm An object of class \code{\link[mvtnorm]{GenzBretz}}, \code{\link[mvtnorm]{Miwa}} or \code{\link[mvtnorm]{TVPACK}} specifying both the algorithm to be used as well as the associated hyperparameters. ... Additional parameters passed to \link[mvtnorm:pmvnorm]{mvtnorm::pmvnorm}. EXAMPLE: library(dplyr) library(mvtnorm) library(gsDesign) library(gsDesign2) enroll_rate <- define_enroll_rate( duration = 12, rate = 500 / 12 ) fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 15, # median survival of 15 months hr = c(1, .6), dropout_rate = 0.001 ) fh_test <- rbind( data.frame(rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3, analysis_time = c(12, 24, 36)), data.frame(rho = c(0, 0.5), gamma = 0.5, tau = -1, test = 2:3, analysis = 3, analysis_time = 36) ) # Example 1 ---- # Minimal information fraction derived bound \donttest{ gs_power_combo( enroll_rate = enroll_rate, fail_rate = fail_rate, fh_test = fh_test, upper = gs_spending_combo, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_combo, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) ) } FUNCTION: gs_power_npe TITLE: Group sequential bound computation with non-constant effect DESCRIPTION: Derives group sequential bounds and boundary crossing probabilities for a design. It allows a non-constant treatment effect over time, but also can be applied for the usual homogeneous effect size designs.
It requires treatment effect and statistical information at each analysis as well as a method of deriving bounds, such as spending. The routine enables two things not available in the gsDesign package: \enumerate{ \item non-constant treatment effect, \item more flexibility in boundary selection. } For many applications, the non-proportional-hazards design function \code{gs_design_nph()} will be used; it calls this function. Initial bound types supported are \enumerate{ \item spending bounds, \item fixed bounds, and \item Haybittle-Peto-like bounds. } The requirement is to have a boundary update method that can compute each bound without knowledge of future bounds. As an example, bounds based on conditional power that require knowledge of all future bounds are not supported by this routine; a more limited conditional power method will be demonstrated. Boundary family designs such as Wang-Tsiatis designs, including the original (non-spending-function-based) O'Brien-Fleming and Pocock designs, are not supported by \code{gs_power_npe()}. ARGUMENTS: theta Natural parameter for group sequential design representing expected incremental drift at all analyses; used for power calculation. theta0 Natural parameter for null hypothesis, if needed for upper bound computation. theta1 Natural parameter for alternate hypothesis, if needed for lower bound computation. info Statistical information at all analyses for input \code{theta}. info0 Statistical information under null hypothesis, if different than \code{info}; impacts null hypothesis bound calculation. info1 Statistical information under the hypothesis used for futility bound calculation if different from \code{info}; impacts futility hypothesis bound calculation. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used.
} upper Function to compute upper bound. upar Parameters passed to \code{upper}. lower Function to compute lower bound. lpar Parameters passed to \code{lower}. test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value of \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale).
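The canonical model behind the \code{theta} and \code{info} arguments can be sanity-checked with a few lines of base R. Under the joint normal distribution assumed by \code{gs_power_npe()}, the Z-statistic at an analysis with drift \code{theta} and statistical information \code{info} is approximately normal with mean \code{theta * sqrt(info)} and unit variance. A minimal single-analysis sketch (values chosen for illustration, not taken from the examples below):

```r
# Single-analysis sanity check: Z ~ N(theta * sqrt(info), 1), so the
# probability of crossing a fixed efficacy bound b is
# P(Z > b) = pnorm(theta * sqrt(info) - b).
theta <- 0.2              # natural parameter (drift per unit sqrt-information)
info  <- 40               # statistical information at the analysis
b     <- qnorm(1 - 0.025) # fixed one-sided bound at alpha = 0.025
power <- pnorm(theta * sqrt(info) - b)
# Under the null (theta = 0), the crossing probability reduces to alpha:
alpha_check <- pnorm(-b)  # 0.025
```

This is the same normal approximation that \code{gs_power_npe()} integrates across multiple analyses to obtain boundary crossing probabilities.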
EXAMPLE: library(gsDesign) library(gsDesign2) library(dplyr) # Default (single analysis; Type I error controlled) gs_power_npe(theta = 0) %>% filter(bound == "upper") # Fixed bound gs_power_npe( theta = c(.1, .2, .3), info = (1:3) * 40, upper = gs_b, upar = gsDesign::gsDesign(k = 3, sfu = gsDesign::sfLDOF)$upper$bound, lower = gs_b, lpar = c(-1, 0, 0) ) # Same fixed efficacy bounds, no futility bound (i.e., non-binding bound), null hypothesis gs_power_npe( theta = rep(0, 3), info = (1:3) * 40, upar = gsDesign::gsDesign(k = 3, sfu = gsDesign::sfLDOF)$upper$bound, lpar = rep(-Inf, 3) ) %>% filter(bound == "upper") # Fixed bound with futility only at analysis 1; efficacy only at analyses 2, 3 gs_power_npe( theta = c(.1, .2, .3), info = (1:3) * 40, upper = gs_b, upar = c(Inf, 3, 2), lower = gs_b, lpar = c(qnorm(.1), -Inf, -Inf) ) # Spending function bounds # Lower spending based on non-zero effect gs_power_npe( theta = c(.1, .2, .3), info = (1:3) * 40, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL) ) # Same bounds, but power under different theta gs_power_npe( theta = c(.15, .25, .35), info = (1:3) * 40, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfHSD, total_spend = 0.1, param = -1, timing = NULL) ) # Two-sided symmetric spend, O'Brien-Fleming spending # Typically, 2-sided bounds are binding x <- gs_power_npe( theta = rep(0, 3), info = (1:3) * 40, binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL) ) # Re-use these bounds under alternate hypothesis # Always use binding = TRUE for power 
calculations gs_power_npe( theta = c(.1, .2, .3), info = (1:3) * 40, binding = TRUE, upar = (x %>% filter(bound == "upper"))$z, lpar = -(x %>% filter(bound == "upper"))$z ) # Different values of `r` and `tol` lead to different numerical accuracy # Larger `r` and smaller `tol` give better accuracy, but lead to slower computation n_analysis <- 5 gs_power_npe( theta = 0.1, info = 1:n_analysis, info0 = 1:n_analysis, info1 = NULL, info_scale = "h0_info", upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_b, lpar = -rep(Inf, n_analysis), test_upper = TRUE, test_lower = FALSE, binding = FALSE, # Try different combinations of (r, tol) with # r in 6, 18, 24, 30, 35, 40, 50, 60, 70, 80, 90, 100 # tol in 1e-6, 1e-12 r = 6, tol = 1e-6 ) FUNCTION: gs_power_rd TITLE: Group sequential design power of binary outcome measured in risk difference DESCRIPTION: Group sequential design power of binary outcome measured in risk difference ARGUMENTS: p_c Rate in the control group. p_e Rate in the experimental group. n Sample size. rd0 Treatment effect under super-superiority designs; the default is 0. ratio Experimental:control randomization ratio. weight Weighting method, can be \code{"unstratified"}, \code{"ss"}, or \code{"invar"}. upper Function to compute upper bound. lower Function to compute lower bound. upar Parameters passed to \code{upper}. lpar Parameters passed to \code{lower}. info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used. } binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended.
test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value of \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale). EXAMPLE: # Example 1 ---- library(gsDesign) # unstratified case with H0: rd0 = 0 gs_power_rd( p_c = tibble::tibble( stratum = "All", rate = .2 ), p_e = tibble::tibble( stratum = "All", rate = .15 ), n = tibble::tibble( stratum = "All", n = c(20, 40, 60), analysis = 1:3 ), rd0 = 0, ratio = 1, upper = gs_b, lower = gs_b, upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) # Example 2 ---- # unstratified case with H0: rd0 != 0 gs_power_rd( p_c = tibble::tibble( stratum = "All", rate = .2 ), p_e = tibble::tibble( stratum = "All", rate = .15 ), n = tibble::tibble( stratum = "All", n = c(20, 40, 60), analysis = 1:3 ), rd0 = 0.005, ratio = 1, upper = gs_b, lower = gs_b, upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) # use spending function gs_power_rd( p_c = tibble::tibble( stratum = "All", rate = .2 ), p_e = tibble::tibble( stratum = "All", rate = .15 ), n = tibble::tibble( stratum = "All", n = c(20, 40, 60), analysis = 1:3 ), rd0 = 0.005, ratio = 1, upper =
gs_spending_bound, lower = gs_b, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lpar = c(qnorm(.1), rep(-Inf, 2)) ) # Example 3 ---- # stratified case under sample size weighting and H0: rd0 = 0 gs_power_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(10, 20, 24, 18, 26, 30, 10, 20, 24) ), rd0 = 0, ratio = 1, weight = "ss", upper = gs_b, lower = gs_b, upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) # Example 4 ---- # stratified case under inverse variance weighting and H0: rd0 = 0 gs_power_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(10, 20, 24, 18, 26, 30, 10, 20, 24) ), rd0 = 0, ratio = 1, weight = "invar", upper = gs_b, lower = gs_b, upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) # Example 5 ---- # stratified case under sample size weighting and H0: rd0 != 0 gs_power_rd( p_c = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(10, 20, 24, 18, 26, 30, 10, 20, 24) ), rd0 = 0.02, ratio = 1, weight = "ss", upper = gs_b, lower = gs_b, upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) # Example 6 ---- # stratified case under inverse variance weighting and H0: rd0 != 0 gs_power_rd( p_c = tibble::tibble( stratum = c("S1", 
"S2", "S3"), rate = c(.15, .2, .25) ), p_e = tibble::tibble( stratum = c("S1", "S2", "S3"), rate = c(.1, .16, .19) ), n = tibble::tibble( stratum = rep(c("S1", "S2", "S3"), each = 3), analysis = rep(1:3, 3), n = c(10, 20, 24, 18, 26, 30, 10, 20, 24) ), rd0 = 0.03, ratio = 1, weight = "invar", upper = gs_b, lower = gs_b, upar = gsDesign(k = 3, test.type = 1, sfu = sfLDOF, sfupar = NULL)$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) FUNCTION: gs_power_wlr TITLE: Group sequential design power using weighted log rank test under non-proportional hazards DESCRIPTION: Group sequential design power using weighted log rank test under non-proportional hazards ARGUMENTS: enroll_rate Enrollment rates defined by \code{define_enroll_rate()}. fail_rate Failure and dropout rates defined by \code{define_fail_rate()}. event A numerical vector specifying the targeted events at each analysis. See details. analysis_time Targeted calendar timing of analyses. See details. binding Indicator of whether futility bound is binding; default of \code{FALSE} is recommended. upper Function to compute upper bound. \itemize{ \item \code{gs_spending_bound()}: alpha-spending efficacy bounds. \item \code{gs_b()}: fixed efficacy bounds. } lower Function to compute lower bound, which can be set up similarly as \code{upper}. See \href{https://merck.github.io/gsDesign2/articles/story-seven-test-types.html}{this vignette}. upar Parameters passed to \code{upper}. \itemize{ \item If \code{upper = gs_b}, then \code{upar} is a numerical vector specifying the fixed efficacy bounds per analysis. \item If \code{upper = gs_spending_bound}, then \code{upar} is a list including \itemize{ \item \code{sf} for the spending function family. \item \code{total_spend} for total alpha spend. \item \code{param} for the parameter of the spending function. \item \code{timing} specifies spending time if different from information-based spending; see details. 
} } lpar Parameters passed to \code{lower}, which can be set up similarly as \code{upar}. test_upper Indicator of which analyses should include an upper (efficacy) bound; single value of \code{TRUE} (default) indicates all analyses; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have an efficacy bound. test_lower Indicator of which analyses should include a lower bound; single value of \code{TRUE} (default) indicates all analyses; single value of \code{FALSE} indicates no lower bound; otherwise, a logical vector of the same length as \code{info} should indicate which analyses will have a lower bound. ratio Experimental:Control randomization ratio. weight Weight of weighted log rank test: \itemize{ \item \code{"logrank"} = regular logrank test. \item \code{list(method = "fh", param = list(rho = ..., gamma = ...))} = Fleming-Harrington weighting functions. \item \code{list(method = "mb", param = list(tau = ..., w_max = ...))} = Magirr and Burman weighting functions. } info_scale Information scale for calculation. Options are: \itemize{ \item \code{"h0_h1_info"} (default): variance under both null and alternative hypotheses is used. \item \code{"h0_info"}: variance under null hypothesis is used. \item \code{"h1_info"}: variance under alternative hypothesis is used. } approx Approximate estimation method for Z statistics. \itemize{ \item \code{"event_driven"} = only works under the proportional hazards model with logrank test. \item \code{"asymptotic"}. } r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally, \code{r} will not be changed by the user. tol Tolerance parameter for boundary convergence (on Z-scale); normally not changed by the user. interval An interval presumed to include the times at which the expected event count equals the targeted events.
Normally, this can be ignored by the user as it is set to \code{c(.01, 1000)}. integer Indicator of whether integer sample size and events are intended. This argument is used when using \code{\link[=to_integer]{to_integer()}}. EXAMPLE: library(gsDesign) library(gsDesign2) # set enrollment rates enroll_rate <- define_enroll_rate(duration = 12, rate = 500 / 12) # set failure rates fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 15, # median survival of 15 months hr = c(1, .6), dropout_rate = 0.001 ) # set the targeted number of events and analysis time target_events <- c(30, 40, 50) target_analysisTime <- c(10, 24, 30) # Example 1 ---- \donttest{ # fixed bounds and calculate the power for targeted number of events gs_power_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = target_events, analysis_time = NULL, upper = gs_b, upar = gsDesign( k = length(target_events), test.type = 1, n.I = target_events, maxn.IPlan = max(target_events), sfu = sfLDOF, sfupar = NULL )$upper$bound, lower = gs_b, lpar = c(qnorm(.1), rep(-Inf, 2)) ) } # Example 2 ---- # fixed bounds and calculate the power for targeted analysis time \donttest{ gs_power_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = NULL, analysis_time = target_analysisTime, upper = gs_b, upar = gsDesign( k = length(target_events), test.type = 1, n.I = target_events, maxn.IPlan = max(target_events), sfu = sfLDOF, sfupar = NULL )$upper$bound, lower = gs_b, lpar = c(qnorm(.1), rep(-Inf, 2)) ) } # Example 3 ---- # fixed bounds and calculate the power for targeted analysis time & number of events \donttest{ gs_power_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = target_events, analysis_time = target_analysisTime, upper = gs_b, upar = gsDesign( k = length(target_events), test.type = 1, n.I = target_events, maxn.IPlan = max(target_events), sfu = sfLDOF, sfupar = NULL )$upper$bound, lower = gs_b, lpar = c(qnorm(.1), rep(-Inf, 2)) ) } # Example 4 ---- # spending bounds
and calculate the power for targeted number of events \donttest{ gs_power_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = target_events, analysis_time = NULL, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) ) } # Example 5 ---- # spending bounds and calculate the power for targeted analysis time \donttest{ gs_power_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = NULL, analysis_time = target_analysisTime, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) ) } # Example 6 ---- # spending bounds and calculate the power for targeted analysis time & number of events \donttest{ gs_power_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, event = target_events, analysis_time = target_analysisTime, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) ) } FUNCTION: gs_spending_bound TITLE: Derive spending bound for group sequential boundary DESCRIPTION: Computes one bound at a time based on spending under given distributional assumptions. While the user specifies \code{gs_spending_bound()} for use with other functions, it is not intended for use on its own. The most important user specifications are made through a list provided to functions using \code{gs_spending_bound()}. The function uses numerical integration and Newton-Raphson iteration to derive an individual bound for a group sequential design that satisfies a targeted boundary crossing probability. The algorithm is a simple extension of that in Chapter 19 of Jennison and Turnbull (2000). ARGUMENTS: k Analysis for which bound is to be computed. par A list with the following items: \itemize{ \item \code{sf} (class spending function).
\item \code{total_spend} (total spend). \item \code{param} (any parameters needed by the spending function \code{sf()}). \item \code{timing} (a vector containing values at which spending function is to be evaluated or \code{NULL} if information-based spending is used). \item \code{max_info} (when \code{timing} is \code{NULL}, this can be input as a positive number to be used with \code{info} for information fraction at each analysis). } hgm1 Subdensity grid from \code{h1()} (k=2) or \code{hupdate()} (k>2) for analysis k-1; if k=1, this is not used and may be \code{NULL}. theta Natural parameter used for lower bound only spending; represents average drift at each time of analysis at least up to analysis k; upper bound spending is always set under null hypothesis (theta = 0). info Statistical information at all analyses, at least up to analysis k. efficacy \code{TRUE} (default) for efficacy bound, \code{FALSE} otherwise. test_bound A logical vector of the same length as \code{info} indicating which analyses will have a bound. r Integer value controlling grid for numerical integration as in Jennison and Turnbull (2000); default is 18, range is 1 to 80. Larger values provide a larger number of grid points and greater accuracy. Normally \code{r} will not be changed by the user. tol Tolerance parameter for convergence (on Z-scale). EXAMPLE: gs_power_ahr( analysis_time = c(12, 24, 36), event = c(30, 40, 50), binding = TRUE, upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL), lower = gs_spending_bound, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL) ) FUNCTION: gs_spending_combo TITLE: Derive spending bound for MaxCombo group sequential boundary DESCRIPTION: Derive spending bound for MaxCombo group sequential boundary ARGUMENTS: par A list with the following items: \itemize{ \item \code{sf} (class spending function). \item \code{total_spend} (total spend).
\item \code{param} (any parameters needed by the spending function \code{sf()}). \item \code{timing} (a vector containing values at which spending function is to be evaluated or \code{NULL} if information-based spending is used). \item \code{max_info} (when \code{timing} is \code{NULL}, this can be input as a positive number to be used with \code{info} for information fraction at each analysis). } info Statistical information at all analyses, at least up to analysis k. EXAMPLE: # alpha-spending par <- list(sf = gsDesign::sfLDOF, total_spend = 0.025) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfLDPocock, total_spend = 0.025) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfHSD, total_spend = 0.025, param = -40) gs_spending_combo(par, info = 1:3 / 3) # Kim-DeMets (power) Spending Function par <- list(sf = gsDesign::sfPower, total_spend = 0.025, param = 1.5) gs_spending_combo(par, info = 1:3 / 3) # Exponential Spending Function par <- list(sf = gsDesign::sfExponential, total_spend = 0.025, param = 1) gs_spending_combo(par, info = 1:3 / 3) # Two-parameter Spending Function Families par <- list(sf = gsDesign::sfLogistic, total_spend = 0.025, param = c(.1, .4, .01, .1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfBetaDist, total_spend = 0.025, param = c(.1, .4, .01, .1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfCauchy, total_spend = 0.025, param = c(.1, .4, .01, .1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfExtremeValue, total_spend = 0.025, param = c(.1, .4, .01, .1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfExtremeValue2, total_spend = 0.025, param = c(.1, .4, .01, .1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfNormal, total_spend = 0.025, param = c(.1, .4, .01, .1)) gs_spending_combo(par, info = 1:3 / 3) # t-distribution Spending Function par <- list(sf = gsDesign::sfTDist, total_spend = 0.025, param
= c(-1, 1.5, 4)) gs_spending_combo(par, info = 1:3 / 3) # Piecewise Linear and Step Function Spending Functions par <- list(sf = gsDesign::sfLinear, total_spend = 0.025, param = c(.2, .4, .05, .2)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfStep, total_spend = 0.025, param = c(1 / 3, 2 / 3, .1, .1)) gs_spending_combo(par, info = 1:3 / 3) # Pointwise Spending Function par <- list(sf = gsDesign::sfPoints, total_spend = 0.025, param = c(.25, .25)) gs_spending_combo(par, info = 1:3 / 3) # Truncated, trimmed and gapped spending functions par <- list(sf = gsDesign::sfTruncated, total_spend = 0.025, param = list(trange = c(.2, .8), sf = gsDesign::sfHSD, param = 1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfTrimmed, total_spend = 0.025, param = list(trange = c(.2, .8), sf = gsDesign::sfHSD, param = 1)) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfGapped, total_spend = 0.025, param = list(trange = c(.2, .8), sf = gsDesign::sfHSD, param = 1)) gs_spending_combo(par, info = 1:3 / 3) # Xi and Gallo conditional error spending functions par <- list(sf = gsDesign::sfXG1, total_spend = 0.025, param = 0.5) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfXG2, total_spend = 0.025, param = 0.14) gs_spending_combo(par, info = 1:3 / 3) par <- list(sf = gsDesign::sfXG3, total_spend = 0.025, param = 0.013) gs_spending_combo(par, info = 1:3 / 3) # beta-spending par <- list(sf = gsDesign::sfLDOF, total_spend = 0.2) gs_spending_combo(par, info = 1:3 / 3) FUNCTION: gs_update_ahr TITLE: Group sequential design using average hazard ratio under non-proportional hazards DESCRIPTION: Group sequential design using average hazard ratio under non-proportional hazards ARGUMENTS: x A design created by either \code{\link[=gs_design_ahr]{gs_design_ahr()}} or \code{\link[=gs_power_ahr]{gs_power_ahr()}}. alpha Type I error for the updated design. 
ustime Default is \code{NULL}, in which case the upper bound spending time is determined by \code{timing}. Otherwise, this should be a vector of length k (total number of analyses) with the spending time at each analysis. lstime Default is \code{NULL}, in which case the lower bound spending time is determined by \code{timing}. Otherwise, this should be a vector of length k (total number of analyses) with the spending time at each analysis. event_tbl A data frame with two columns: (1) analysis and (2) event, which represents the events observed at each analysis per piecewise interval. This can be defined via the \code{pw_observed_event()} function or manually entered. For example, consider a scenario with two intervals in the piecewise model: the first interval lasts 6 months with a hazard ratio (HR) of 1, and the second interval follows with an HR of 0.6. The data frame \code{event_tbl = data.frame(analysis = c(1, 1, 2, 2), event = c(30, 100, 30, 200))} indicates that 30 events were observed during the delayed effect period, 130 events were observed at the IA, and 230 events were observed at the FA.
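The \code{event_tbl} layout described above can be sketched directly in base R (the numbers are taken from the two-interval example in the text: 30 + 100 = 130 events at the IA and 30 + 200 = 230 events at the FA):

```r
# event_tbl splits observed events by piecewise interval within each analysis:
# rows with analysis = 1 belong to the IA, rows with analysis = 2 to the FA;
# within an analysis, the first row is the delayed-effect interval (HR = 1)
# and the second row is the post-delay interval (HR = 0.6).
event_tbl <- data.frame(
  analysis = c(1, 1, 2, 2),
  event    = c(30, 100, 30, 200)
)
# Total events per analysis: 130 at the IA, 230 at the FA
totals <- tapply(event_tbl$event, event_tbl$analysis, sum)
```

Summing \code{event} within \code{analysis} recovers the per-analysis event counts that drive the spending-time updates.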
EXAMPLE: library(gsDesign) library(gsDesign2) library(dplyr) alpha <- 0.025 beta <- 0.1 ratio <- 1 # Enrollment enroll_rate <- define_enroll_rate( duration = c(2, 2, 10), rate = (1:3) / 3) # Failure and dropout fail_rate <- define_fail_rate( duration = c(3, Inf), fail_rate = log(2) / 9, hr = c(1, 0.6), dropout_rate = .0001) # IA and FA analysis time analysis_time <- c(20, 36) # Randomization ratio ratio <- 1 # ------------------------------------------------- # # Two-sided asymmetric design, # beta-spending with non-binding lower bound # ------------------------------------------------- # # Original design x <- gs_design_ahr( enroll_rate = enroll_rate, fail_rate = fail_rate, alpha = alpha, beta = beta, ratio = ratio, info_scale = "h0_info", info_frac = NULL, analysis_time = c(20, 36), upper = gs_spending_bound, upar = list(sf = sfLDOF, total_spend = alpha), test_upper = TRUE, lower = gs_spending_bound, lpar = list(sf = sfLDOF, total_spend = beta), test_lower = c(TRUE, FALSE), binding = FALSE) %>% to_integer() planned_event_ia <- x$analysis$event[1] planned_event_fa <- x$analysis$event[2] # Updated design with 190 events observed at IA, # where 50 events were observed during the delayed effect period. # IA spending = observed events / final planned events; the remaining alpha will be allocated to the FA. gs_update_ahr( x = x, ustime = c(190 / planned_event_fa, 1), lstime = c(190 / planned_event_fa, 1), event_tbl = data.frame(analysis = c(1, 1), event = c(50, 140))) # Updated design with 190 events observed at IA, and 300 events observed at FA, # where 50 events were observed during the delayed effect period. # IA spending = observed events / final planned events; the remaining alpha will be allocated to the FA.
gs_update_ahr( x = x, ustime = c(190 / planned_event_fa, 1), lstime = c(190 / planned_event_fa, 1), event_tbl = data.frame(analysis = c(1, 1, 2, 2), event = c(50, 140, 50, 250))) # Updated design with 190 events observed at IA and 300 events observed at FA, # where 50 events were observed during the delayed effect period. # IA spending = minimum of the planned and actual information fractions gs_update_ahr( x = x, ustime = c(min(190, planned_event_ia) / planned_event_fa, 1), lstime = c(min(190, planned_event_ia) / planned_event_fa, 1), event_tbl = data.frame(analysis = c(1, 1, 2, 2), event = c(50, 140, 50, 250))) # Alpha is updated to 0.05 gs_update_ahr(x = x, alpha = 0.05) FUNCTION: gsDesign2-package TITLE: gsDesign2: Group Sequential Design with Non-Constant Effect DESCRIPTION: The goal of 'gsDesign2' is to enable fixed or group sequential design under non-proportional hazards. To enable highly flexible enrollment, time-to-event, and time-to-dropout assumptions, 'gsDesign2' offers piecewise constant enrollment, failure, and dropout rates for a stratified population. The package includes three design methods: average hazard ratio, weighted logrank tests as in Yung and Liu (2019) (doi:10.1111/biom.13196), and MaxCombo tests. Substantial flexibility beyond what the 'gsDesign' package offers is intended for selecting boundaries. ARGUMENTS: EXAMPLE: FUNCTION: ppwe TITLE: Piecewise exponential cumulative distribution function DESCRIPTION: Computes the cumulative distribution function (CDF) or survival rate for a piecewise exponential distribution. ARGUMENTS: x Times at which distribution is to be computed. duration A numeric vector of time duration. rate A numeric vector of event rate.
lower_tail Indicator of whether lower (\code{TRUE}) or upper tail (\code{FALSE}; default) of CDF is to be computed. EXAMPLE: # Plot a survival function with 2 different sets of time values # to demonstrate plot precision corresponding to input parameters. x1 <- seq(0, 10, 10 / pi) duration <- c(3, 3, 1) rate <- c(.2, .1, .005) survival <- ppwe( x = x1, duration = duration, rate = rate ) plot(x1, survival, type = "l", ylim = c(0, 1)) x2 <- seq(0, 10, .25) survival <- ppwe( x = x2, duration = duration, rate = rate ) lines(x2, survival, col = 2) FUNCTION: pw_info TITLE: Average hazard ratio under non-proportional hazards DESCRIPTION: Provides a geometric average hazard ratio under various non-proportional hazards assumptions for either single or multiple strata studies. The piecewise exponential distribution allows a simple method to specify a distribution and enrollment pattern where the enrollment, failure, and dropout rates change over time. ARGUMENTS: enroll_rate An \code{enroll_rate} data frame with or without stratum created by \code{\link[=define_enroll_rate]{define_enroll_rate()}}. fail_rate A \code{fail_rate} data frame with or without stratum created by \code{\link[=define_fail_rate]{define_fail_rate()}}. total_duration Total follow-up from start of enrollment to data cutoff; this can be a single value or a vector of positive numbers. ratio Ratio of experimental to control randomization.
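The geometric average hazard ratio described above is the expected-event-weighted geometric mean of the interval hazard ratios. A minimal sketch of that averaging, using hypothetical interval HRs and expected event counts (interval-level expected events of this kind are what \code{pw_info()} reports):

```r
# Sketch: geometric AHR as an expected-event-weighted average of log(HR).
# The per-interval HRs and expected event counts below are hypothetical.
hr <- c(1, 0.6)      # hazard ratio in each piecewise interval
event <- c(50, 150)  # expected events in each interval
ahr_est <- exp(sum(event * log(hr)) / sum(event))
ahr_est  # roughly 0.68: pulled toward 0.6 because most events occur there
```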
EXAMPLE: # Example: default pw_info() # Example: default with multiple analysis times (varying total_duration) pw_info(total_duration = c(15, 30)) # Stratified population enroll_rate <- define_enroll_rate( stratum = c(rep("Low", 2), rep("High", 3)), duration = c(2, 10, 4, 4, 8), rate = c(5, 10, 0, 3, 6) ) fail_rate <- define_fail_rate( stratum = c(rep("Low", 2), rep("High", 2)), duration = c(1, Inf, 1, Inf), fail_rate = c(.1, .2, .3, .4), dropout_rate = .001, hr = c(.9, .75, .8, .6) ) # Give results by change-points in the piecewise model ahr(enroll_rate = enroll_rate, fail_rate = fail_rate, total_duration = c(15, 30)) # Same example, give results by strata and time period pw_info(enroll_rate = enroll_rate, fail_rate = fail_rate, total_duration = c(15, 30)) FUNCTION: s2pwe TITLE: Approximate survival distribution with piecewise exponential distribution DESCRIPTION: Converts a discrete set of points from an arbitrary survival distribution to a piecewise exponential approximation. ARGUMENTS: times Positive increasing times at which survival distribution is provided. survival Survival (1 - cumulative distribution function) at specified \code{times}. EXAMPLE: # Example: arbitrary numbers s2pwe(1:9, (9:1) / 10) # Example: lognormal s2pwe(c(1:6, 9), plnorm(c(1:6, 9), meanlog = 0, sdlog = 2, lower.tail = FALSE)) FUNCTION: summary TITLE: Summary for fixed design or group sequential design objects DESCRIPTION: Summary for fixed design or group sequential design objects ARGUMENTS: object A design object returned by fixed_design_xxx() and gs_design_xxx(). ... Additional parameters (not used). analysis_vars The variables to be put at the summary header of each analysis. analysis_decimals The displayed number of digits of \code{analysis_vars}. If the vector is unnamed, it must match the length of \code{analysis_vars}. If the vector is named, you only have to specify the number of digits for the variables you want to be displayed differently than the defaults. 
col_vars The variables to be displayed. col_decimals The decimals to be displayed for the displayed variables in \code{col_vars}. If the vector is unnamed, it must match the length of \code{col_vars}. If the vector is named, you only have to specify the number of digits for the columns you want to be displayed differently than the defaults. bound_names Names for bounds; default is \code{c("Efficacy", "Futility")}. EXAMPLE: library(dplyr) # Enrollment rate enroll_rate <- define_enroll_rate( duration = 18, rate = 20 ) # Failure rates fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 12, hr = c(1, .6), dropout_rate = .001 ) # Study duration in months study_duration <- 36 # Experimental / Control randomization ratio ratio <- 1 # 1-sided Type I error alpha <- 0.025 # Type II error (1 - power) beta <- 0.1 # AHR ---- # under fixed power fixed_design_ahr( alpha = alpha, power = 1 - beta, enroll_rate = enroll_rate, fail_rate = fail_rate, study_duration = study_duration, ratio = ratio ) %>% summary() # FH ---- # under fixed power fixed_design_fh( alpha = alpha, power = 1 - beta, enroll_rate = enroll_rate, fail_rate = fail_rate, study_duration = study_duration, ratio = ratio ) %>% summary() # Design parameters ---- library(gsDesign) library(gsDesign2) library(dplyr) # enrollment/failure rates enroll_rate <- define_enroll_rate( stratum = "All", duration = 12, rate = 1 ) fail_rate <- define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 12, hr = c(1, .6), dropout_rate = .001 ) # Information fraction info_frac <- (1:3) / 3 # Analysis times in months; first 2 will be ignored as info_frac will not be achieved analysis_time <- c(.01, .02, 36) # Experimental / Control randomization ratio ratio <- 1 # 1-sided Type I error alpha <- 0.025 # Type II error (1 - power) beta <- .1 # Upper bound upper <- gs_spending_bound upar <- list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = NULL) # Lower bound lower <- gs_spending_bound lpar <- 
list(sf = gsDesign::sfHSD, total_spend = 0.1, param = 0, timing = NULL) # test in COMBO fh_test <- rbind( data.frame(rho = 0, gamma = 0, tau = -1, test = 1, analysis = 1:3, analysis_time = c(12, 24, 36)), data.frame(rho = c(0, 0.5), gamma = 0.5, tau = -1, test = 2:3, analysis = 3, analysis_time = 36) ) # Example 1 ---- \donttest{ x_ahr <- gs_design_ahr( enroll_rate = enroll_rate, fail_rate = fail_rate, info_frac = info_frac, # Information fraction analysis_time = analysis_time, ratio = ratio, alpha = alpha, beta = beta, upper = upper, upar = upar, lower = lower, lpar = lpar ) x_ahr %>% summary() # Customize the digits to display x_ahr %>% summary(analysis_vars = c("time", "event", "info_frac"), analysis_decimals = c(1, 0, 2)) # Customize the labels of the crossing probability x_ahr %>% summary(bound_names = c("A is better", "B is better")) # Customize the variables to be summarized for each analysis x_ahr %>% summary(analysis_vars = c("n", "event"), analysis_decimals = c(1, 1)) # Customize the digits for the columns x_ahr %>% summary(col_decimals = c(z = 4)) # Customize the columns to display x_ahr %>% summary(col_vars = c("z", "~hr at bound", "nominal p")) # Customize columns and digits x_ahr %>% summary(col_vars = c("z", "~hr at bound", "nominal p"), col_decimals = c(4, 2, 2)) } # Example 2 ---- \donttest{ x_wlr <- gs_design_wlr( enroll_rate = enroll_rate, fail_rate = fail_rate, weight = list(method = "fh", param = list(rho = 0, gamma = 0.5)), info_frac = NULL, analysis_time = sort(unique(x_ahr$analysis$time)), ratio = ratio, alpha = alpha, beta = beta, upper = upper, upar = upar, lower = lower, lpar = lpar ) x_wlr %>% summary() } # Maxcombo ---- \donttest{ x_combo <- gs_design_combo( ratio = 1, alpha = 0.025, beta = 0.2, enroll_rate = define_enroll_rate(duration = 12, rate = 500 / 12), fail_rate = tibble::tibble( stratum = "All", duration = c(4, 100), fail_rate = log(2) / 15, hr = c(1, .6), dropout_rate = .001 ), fh_test = fh_test, upper = gs_spending_combo, 
upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025), lower = gs_spending_combo, lpar = list(sf = gsDesign::sfLDOF, total_spend = 0.2) ) x_combo %>% summary() } # Risk difference ---- \donttest{ gs_design_rd( p_c = tibble::tibble(stratum = "All", rate = .2), p_e = tibble::tibble(stratum = "All", rate = .15), info_frac = c(0.7, 1), rd0 = 0, alpha = .025, beta = .1, ratio = 1, stratum_prev = NULL, weight = "unstratified", upper = gs_b, lower = gs_b, upar = gsDesign::gsDesign( k = 3, test.type = 1, sfu = gsDesign::sfLDOF, sfupar = NULL )$upper$bound, lpar = c(qnorm(.1), rep(-Inf, 2)) ) %>% summary() } FUNCTION: text_summary TITLE: Generates a textual summary of a group sequential design using the AHR method. DESCRIPTION: Generates a textual summary of a group sequential design using the AHR method. ARGUMENTS: x A design object created by \code{\link[=gs_design_ahr]{gs_design_ahr()}} with or without \code{\link[=to_integer]{to_integer()}}. information A logical value indicating whether to include statistical information in the textual summary. Default is FALSE. time_unit A character string specifying the time unit used in the design. Options include "days", "weeks", "months" (default), and "years". 
EXAMPLE: library(gsDesign) # Text summary of a 1-sided design x <- gs_design_ahr(info_frac = 1:3/3, test_lower = FALSE) %>% to_integer() x %>% text_summary() # Text summary of a 2-sided symmetric design x <- gs_design_ahr(info_frac = 1:3/3, upper = gs_spending_bound, lower = gs_spending_bound, upar = list(sf = sfLDOF, total_spend = 0.025), lpar = list(sf = sfLDOF, total_spend = 0.025), binding = TRUE, h1_spending = FALSE) %>% to_integer() x %>% text_summary() # Text summary of an asymmetric 2-sided design with beta-spending and non-binding futility bound x <- gs_design_ahr(info_frac = 1:3/3, alpha = 0.025, beta = 0.1, upper = gs_spending_bound, lower = gs_spending_bound, upar = list(sf = sfLDOF, total_spend = 0.025), lpar = list(sf = sfHSD, total_spend = 0.1, param = -4), binding = FALSE, h1_spending = TRUE) %>% to_integer() x %>% text_summary() # Text summary of an asymmetric 2-sided design with fixed non-binding futility bound x <- gs_design_ahr(info_frac = 1:3/3, alpha = 0.025, beta = 0.1, upper = gs_spending_bound, lower = gs_b, upar = list(sf = sfLDOF, total_spend = 0.025), test_upper = c(FALSE, TRUE, TRUE), lpar = c(-1, -Inf, -Inf), test_lower = c(TRUE, FALSE, FALSE), binding = FALSE, h1_spending = TRUE) %>% to_integer() x %>% text_summary() # If there are more than 5 HR pieces, a brief summary of the HR is provided. gs_design_ahr( fail_rate = define_fail_rate(duration = c(rep(3, 5), Inf), hr = c(0.9, 0.8, 0.7, 0.6, 0.5, 0.4), fail_rate = log(2) / 10, dropout_rate = 0.001), info_frac = 1:3/3, test_lower = FALSE) %>% text_summary() FUNCTION: to_integer TITLE: Round sample size and events DESCRIPTION: Round sample size and events ARGUMENTS: x An object returned by fixed_design_xxx() and gs_design_xxx(). ... Additional parameters (not used). round_up_final The event count at the final analysis is rounded up if \code{TRUE}; otherwise it is simply rounded, unless it is very close to an integer. ratio Positive integer for randomization ratio (experimental:control).
A positive integer will result in a rounded sample size that is a multiple of (ratio + 1). A positive non-integer will result in a rounded sample size that may not be a multiple of (ratio + 1). A negative number will result in an error. EXAMPLE: library(dplyr) library(gsDesign2) # Average hazard ratio \donttest{ x <- fixed_design_ahr( alpha = .025, power = .9, enroll_rate = define_enroll_rate(duration = 18, rate = 1), fail_rate = define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 12, hr = c(1, .6), dropout_rate = .001 ), study_duration = 36 ) x %>% to_integer() %>% summary() # FH x <- fixed_design_fh( alpha = 0.025, power = 0.9, enroll_rate = define_enroll_rate(duration = 18, rate = 20), fail_rate = define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 12, hr = c(1, .6), dropout_rate = .001 ), rho = 0.5, gamma = 0.5, study_duration = 36, ratio = 1 ) x %>% to_integer() %>% summary() # MB x <- fixed_design_mb( alpha = 0.025, power = 0.9, enroll_rate = define_enroll_rate(duration = 18, rate = 20), fail_rate = define_fail_rate( duration = c(4, 100), fail_rate = log(2) / 12, hr = c(1, .6), dropout_rate = .001 ), tau = Inf, w_max = 2, study_duration = 36, ratio = 1 ) x %>% to_integer() %>% summary() } \donttest{ # Example 1: Information fraction based spending gs_design_ahr( analysis_time = c(18, 30), upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL), lower = gs_b, lpar = c(-Inf, -Inf) ) %>% to_integer() %>% summary() gs_design_wlr( analysis_time = c(18, 30), upper = gs_spending_bound, upar = list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL), lower = gs_b, lpar = c(-Inf, -Inf) ) %>% to_integer() %>% summary() gs_design_rd( p_c = tibble::tibble(stratum = c("A", "B"), rate = c(.2, .3)), p_e = tibble::tibble(stratum = c("A", "B"), rate = c(.15, .27)), weight = "ss", stratum_prev = tibble::tibble(stratum = c("A", "B"), prevalence = c(.4, .6)), info_frac = c(0.7, 1), upper = gs_spending_bound, upar =
list(sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL), lower = gs_b, lpar = c(-Inf, -Inf) ) %>% to_integer() %>% summary() # Example 2: Calendar based spending x <- gs_design_ahr( upper = gs_spending_bound, analysis_time = c(18, 30), upar = list( sf = gsDesign::sfLDOF, total_spend = 0.025, param = NULL, timing = c(18, 30) / 30 ), lower = gs_b, lpar = c(-Inf, -Inf) ) %>% to_integer() # The IA nominal p-value is the same as the IA alpha spending x$bound$`nominal p`[1] gsDesign::sfLDOF(alpha = 0.025, t = 18 / 30)$spend } FUNCTION: wlr_weight TITLE: Weight functions for weighted log-rank test DESCRIPTION: \itemize{ \item \code{wlr_weight_fh} is the Fleming-Harrington FH(rho, gamma) weight function. \item \code{wlr_weight_1} is the constant weight for the log-rank test. \item \code{wlr_weight_n} is the Gehan-Breslow and Tarone-Ware weight function. \item \code{wlr_weight_mb} is the Magirr (2021) weight function. } ARGUMENTS: x A vector of numeric values. arm0 An \code{arm} object defined in the npsurvSS package. arm1 An \code{arm} object defined in the npsurvSS package. rho A scalar parameter that controls the type of test. gamma A scalar parameter that controls the type of test. tau A scalar parameter for the cut-off time of the modestly weighted log-rank test. power A scalar parameter that controls the power of the weight function. w_max A scalar parameter for the cut-off weight of the modestly weighted log-rank test.
EXAMPLE: # Shared setup for all weight functions enroll_rate <- define_enroll_rate( duration = c(2, 2, 10), rate = c(3, 6, 9) ) fail_rate <- define_fail_rate( duration = c(3, 100), fail_rate = log(2) / c(9, 18), hr = c(.9, .6), dropout_rate = .001 ) gs_arm <- gs_create_arm(enroll_rate, fail_rate, ratio = 1) arm0 <- gs_arm$arm0 arm1 <- gs_arm$arm1 # Fleming-Harrington weight wlr_weight_fh(1:3, arm0, arm1, rho = 0, gamma = 0, tau = NULL) # Constant (log-rank) weight wlr_weight_1(1:3, arm0, arm1) # Gehan-Breslow / Tarone-Ware weight wlr_weight_n(1:3, arm0, arm1, power = 2) # Modestly weighted log-rank weight wlr_weight_mb(1:3, arm0, arm1, tau = -1, w_max = 1.2)