```mermaid
flowchart TD
    A[Shiny Testing Strategy] --> B[Unit Testing]
    A --> C[Integration Testing]
    A --> D[End-to-End Testing]
    A --> E[Performance Testing]
    B --> F[Reactive Functions]
    B --> G[Data Processing]
    B --> H[Utility Functions]
    C --> I[Reactive Dependencies]
    C --> J[Module Interactions]
    C --> K[Server Logic Flow]
    D --> L[User Workflows]
    D --> M[UI Interactions]
    D --> N[Complete Scenarios]
    E --> O[Load Testing]
    E --> P[Memory Profiling]
    E --> Q[Response Time Analysis]
    style A fill:#e1f5fe
    style B fill:#f3e5f5
    style C fill:#e8f5e8
    style D fill:#fff3e0
    style E fill:#fce4ec
```
Key Takeaways
- Multi-Layer Testing Strategy: Professional Shiny development requires unit tests for individual functions, integration tests for reactive systems, and end-to-end tests for complete user workflows
- Reactive Debugging Mastery: Advanced debugging techniques for reactive programming enable you to trace complex dependency chains and identify performance bottlenecks in real-time applications
- Automated Testing Workflows: Continuous integration and automated testing ensure application reliability across deployments and prevent regressions in complex codebases
- Performance Profiling Excellence: Systematic performance analysis identifies bottlenecks, optimizes resource usage, and ensures scalable applications that perform well under load
- Production Debugging Tools: Professional debugging strategies enable rapid issue identification and resolution in live applications without disrupting user experience
Introduction
Testing and debugging represent the critical difference between applications that work in development and applications that thrive in production environments. While Shiny’s reactive programming model enables powerful interactive applications, it also introduces unique challenges for testing complex reactive dependencies and debugging asynchronous behaviors that don’t exist in traditional R programming.
This comprehensive guide covers the complete spectrum of testing and debugging strategies specifically designed for Shiny applications. You’ll master unit testing for reactive functions, integration testing for complex user workflows, performance profiling for resource optimization, and advanced debugging techniques for identifying issues in live applications. These skills transform your development process from reactive problem-solving to proactive quality assurance.
The testing and debugging techniques you’ll learn are essential for any application that needs to be reliable, maintainable, and scalable. Whether you’re building departmental tools or enterprise-grade platforms, these professional development practices ensure your applications meet the rigorous standards expected in production environments while enabling rapid iteration and continuous improvement.
Understanding Shiny Testing Architecture
Testing Shiny applications requires a multi-layered approach that addresses the unique characteristics of reactive programming, user interface interactions, and asynchronous data processing.
Testing Pyramid for Shiny Applications
Unit Tests (Foundation): Test individual functions, reactive expressions, and data transformations in isolation. These tests run quickly and provide immediate feedback during development.
Integration Tests (Middle Layer): Test how different components work together, including reactive dependencies, module communications, and server-UI interactions.
End-to-End Tests (Top Layer): Test complete user workflows through the browser interface, simulating real user interactions and validating the entire application stack.
Performance Tests (Cross-Cutting): Monitor application performance, resource usage, and scalability across all testing layers.
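To make the foundation layer of the pyramid concrete, here is a minimal sketch of a unit test for logic extracted out of the server so it can be tested without a running app. The `summarize_values()` helper is hypothetical, defined inline only for illustration.

```r
# tests/testthat/test-summarize-values.R
# Minimal foundation-layer unit test; summarize_values() is an
# illustrative helper, not part of the application in this chapter
library(testthat)

summarize_values <- function(x) {
  stopifnot(is.numeric(x))
  list(n = length(x), mean = mean(x, na.rm = TRUE))
}

test_that("summarize_values() handles typical and missing input", {
  res <- summarize_values(c(1, 2, 3, NA))
  expect_equal(res$n, 4)
  expect_equal(res$mean, 2)
  expect_error(summarize_values("not numeric"))
})
```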
Foundation: Unit Testing Framework
Setting Up the Testing Environment
```r
# Essential testing packages
library(testthat)
library(shiny)
library(reactlog)
library(profvis)
library(shinytest2)
library(mockery)

# Create comprehensive testing structure
create_testing_framework <- function(app_path) {
  
  # Create testing directories
  test_dirs <- c(
    file.path(app_path, "tests"),
    file.path(app_path, "tests", "testthat"),
    file.path(app_path, "tests", "testthat", "fixtures"),
    file.path(app_path, "tests", "integration"),
    file.path(app_path, "tests", "e2e"),
    file.path(app_path, "tests", "performance")
  )
  
  for (dir in test_dirs) {
    if (!dir.exists(dir)) {
      dir.create(dir, recursive = TRUE)
    }
  }
  
  # Create test configuration files
  create_test_config(app_path)
  
  # Initialize test fixtures
  create_test_fixtures(app_path)
  
  cat("Testing framework initialized at:", app_path, "\n")
  return(TRUE)
}

# Test configuration setup
create_test_config <- function(app_path) {
  
  # Main test file
  test_main <- '
library(testthat)
library(shiny)

# Source application files
source("../R/app.R", local = TRUE)
source("../R/modules.R", local = TRUE)
source("../R/utils.R", local = TRUE)

# Run all tests
test_check("your_app")
'
  
  writeLines(test_main, file.path(app_path, "tests", "testthat.R"))
  
  # Helper functions for testing
  test_helpers <- '
# Testing helper functions

# Create test data
create_test_data <- function(n = 100) {
  data.frame(
    id = 1:n,
    category = sample(c("A", "B", "C"), n, replace = TRUE),
    value = rnorm(n, 50, 15),
    date = seq.Date(Sys.Date() - n + 1, Sys.Date(), by = "day"),
    stringsAsFactors = FALSE
  )
}

# Mock reactive values
mock_reactive_values <- function(...) {
  values <- list(...)
  structure(values, class = "reactivevalues")
}

# Test session mock
create_test_session <- function() {
  list(
    input = list(),
    output = list(),
    clientData = list(
      url_hostname = "localhost",
      url_port = 3838
    ),
    userData = list()
  )
}

# Capture reactive output
capture_reactive <- function(reactive_expr, input_values = list()) {
  # Set up test environment
  session <- MockShinySession$new()

  # Set input values
  for (name in names(input_values)) {
    session$setInputs(!!name := input_values[[name]])
  }

  # Execute reactive expression
  result <- withMockSession(session, {
    reactive_expr()
  })

  return(result)
}
'
  
  writeLines(test_helpers, file.path(app_path, "tests", "testthat", "helper-functions.R"))
}

# Create test fixtures
create_test_fixtures <- function(app_path) {
  
  fixtures_path <- file.path(app_path, "tests", "testthat", "fixtures")
  
  # Sample data fixtures
  test_data <- data.frame(
    id = 1:50,
    name = paste("Item", 1:50),
    category = sample(c("Type A", "Type B", "Type C"), 50, replace = TRUE),
    value = runif(50, 10, 100),
    date = seq.Date(Sys.Date() - 49, Sys.Date(), by = "day")
  )
  
  saveRDS(test_data, file.path(fixtures_path, "sample_data.rds"))
  
  # Configuration fixtures
  test_config <- list(
    database = list(
      host = "localhost",
      port = 5432,
      name = "test_db"
    ),
    api = list(
      base_url = "https://api.test.com",
      timeout = 30
    )
  )
  
  saveRDS(test_config, file.path(fixtures_path, "test_config.rds"))
}
```
Unit Testing Reactive Functions
```r
# Unit tests for reactive expressions and functions
library(dplyr)  # group_by()/summarise()/sym() used in the summary tests below

test_reactive_functions <- function() {
  
  # Test reactive data processing
  test_that("Data filtering works correctly", {
    
    # Create test data
    test_data <- data.frame(
      category = c("A", "B", "A", "C", "B"),
      value = c(10, 20, 15, 25, 30),
      stringsAsFactors = FALSE
    )
    
    # Test filtering function
    filter_data <- function(data, category_filter) {
      if (is.null(category_filter) || category_filter == "All") {
        return(data)
      } else {
        return(data[data$category == category_filter, ])
      }
    }
    
    # Test cases
    expect_equal(nrow(filter_data(test_data, "A")), 2)
    expect_equal(nrow(filter_data(test_data, "All")), 5)
    expect_equal(nrow(filter_data(test_data, NULL)), 5)
    expect_equal(nrow(filter_data(test_data, "D")), 0)
  })
  
  # Test reactive calculations
  test_that("Statistical calculations are accurate", {
    
    calculate_summary <- function(data, group_var = NULL) {
      
      if (is.null(group_var)) {
        return(data.frame(
          count = nrow(data),
          mean_value = mean(data$value, na.rm = TRUE),
          median_value = median(data$value, na.rm = TRUE),
          sd_value = sd(data$value, na.rm = TRUE)
        ))
      } else {
        data %>%
          group_by(!!sym(group_var)) %>%
          summarise(
            count = n(),
            mean_value = mean(value, na.rm = TRUE),
            median_value = median(value, na.rm = TRUE),
            sd_value = sd(value, na.rm = TRUE),
            .groups = "drop"
          )
      }
    }
    
    test_data <- data.frame(
      category = c("A", "A", "B", "B"),
      value = c(10, 20, 30, 40)
    )
    
    # Test overall summary
    overall <- calculate_summary(test_data)
    expect_equal(overall$count, 4)
    expect_equal(overall$mean_value, 25)
    
    # Test grouped summary
    grouped <- calculate_summary(test_data, "category")
    expect_equal(nrow(grouped), 2)
    expect_equal(grouped$mean_value[grouped$category == "A"], 15)
    expect_equal(grouped$mean_value[grouped$category == "B"], 35)
  })
  
  # Test input validation
  test_that("Input validation works correctly", {
    
    validate_input <- function(value, type = "numeric", min_val = NULL, max_val = NULL) {
      
      errors <- c()
      
      # Type validation
      if (type == "numeric" && !is.numeric(value)) {
        errors <- c(errors, "Value must be numeric")
      }
      if (type == "character" && !is.character(value)) {
        errors <- c(errors, "Value must be character")
      }
      
      # Range validation for numeric values
      if (is.numeric(value)) {
        if (!is.null(min_val) && value < min_val) {
          errors <- c(errors, paste("Value must be at least", min_val))
        }
        if (!is.null(max_val) && value > max_val) {
          errors <- c(errors, paste("Value must be at most", max_val))
        }
      }
      
      return(list(
        valid = length(errors) == 0,
        errors = errors
      ))
    }
    
    # Test valid inputs
    expect_true(validate_input(50, "numeric")$valid)
    expect_true(validate_input("test", "character")$valid)
    expect_true(validate_input(25, "numeric", min_val = 10, max_val = 50)$valid)
    
    # Test invalid inputs
    expect_false(validate_input("abc", "numeric")$valid)
    expect_false(validate_input(5, "numeric", min_val = 10)$valid)
    expect_false(validate_input(100, "numeric", max_val = 50)$valid)
  })
}
```
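These unit tests are plain testthat code, so they run the same way as any package test suite; a quick usage sketch (the file name is illustrative):

```r
# Run a single test file interactively while developing
testthat::test_file("tests/testthat/test-reactive-functions.R")

# Run the full unit-test suite, as CI would
testthat::test_dir("tests/testthat", reporter = "progress")
```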
Testing Shiny Modules
```r
# Comprehensive module testing
test_shiny_modules <- function() {
  
  # Sample module for testing
  data_display_module_ui <- function(id) {
    ns <- NS(id)
    
    tagList(
      selectInput(ns("category"), "Category:", choices = NULL),
      plotOutput(ns("plot")),
      tableOutput(ns("table"))
    )
  }
  
  data_display_module_server <- function(id, data) {
    moduleServer(id, function(input, output, session) {
      
      # Update category choices
      observe({
        req(data())
        choices <- c("All", unique(data()$category))
        updateSelectInput(session, "category", choices = choices)
      })
      
      # Filter data based on selection
      filtered_data <- reactive({
        req(data(), input$category)
        
        if (input$category == "All") {
          data()
        } else {
          data()[data()$category == input$category, ]
        }
      })
      
      # Generate plot
      output$plot <- renderPlot({
        req(filtered_data())
        
        ggplot(filtered_data(), aes(x = category, y = value)) +
          geom_boxplot() +
          theme_minimal() +
          labs(title = "Value Distribution by Category")
      })
      
      # Generate table
      output$table <- renderTable({
        req(filtered_data())
        
        filtered_data() %>%
          group_by(category) %>%
          summarise(
            count = n(),
            mean_value = round(mean(value), 2),
            .groups = "drop"
          )
      })
      
      # Return filtered data for testing
      return(filtered_data)
    })
  }
  
  # Test module functionality
  test_that("Data display module works correctly", {
    
    # Create test application
    test_app <- function() {
      
      # Test data
      test_data <- reactive({
        data.frame(
          category = c("A", "A", "B", "B", "C"),
          value = c(10, 15, 20, 25, 30)
        )
      })
      
      ui <- fluidPage(
        data_display_module_ui("test")
      )
      
      server <- function(input, output, session) {
        filtered_data <- data_display_module_server("test", test_data)
        
        # Store result for testing
        session$userData$filtered_data <- filtered_data
      }
      
      # AppDriver$new() needs a Shiny app object, not a bare list
      shinyApp(ui = ui, server = server)
    }
    
    # Test with shinytest2
    app <- AppDriver$new(test_app())
    
    # Wait for app to load
    app$wait_for_idle()
    
    # Test initial state
    expect_true(app$get_value(input = "test-category") %in% c("All", "A", "B", "C"))
    
    # Test category selection
    app$set_inputs("test-category" = "A")
    app$wait_for_idle()
    
    # Verify filtering worked (would need to access internal state)
    # This is a simplified example - actual implementation would verify outputs
    
    app$stop()
  })
}
```
Advanced Debugging Techniques
Reactive Debugging with reactlog
```r
# Comprehensive reactive debugging setup
setup_reactive_debugging <- function() {
  
  # Enable reactive logging
  options(shiny.reactlog = TRUE)
  
  # Enhanced reactive debugging functions
  debug_reactive_graph <- function(app_function) {
    
    cat("Starting reactive debugging session...\n")
    
    # Clear previous logs
    shiny::reactlogReset()
    
    # Run app with logging
    app <- app_function()
    
    # Instructions for debugging
    cat("Debugging instructions:\n")
    cat("1. Interact with your application\n")
    cat("2. Stop the app\n")
    cat("3. Run shiny::reactlogShow() to view the reactive graph\n")
    cat("4. Use show_reactive_dependencies() to analyze specific reactives\n")
    
    return(app)
  }
  
  # Analyze reactive dependencies
  show_reactive_dependencies <- function(reactive_name = NULL) {
    
    # Get reactive log
    log <- shiny::reactlog()
    
    if (is.null(log)) {
      cat("No reactive log available. Make sure to enable reactlog and interact with your app.\n")
      return(invisible(NULL))
    }
    
    # Analyze dependencies
    if (is.null(reactive_name)) {
      # Show overview
      cat("Reactive Graph Overview:\n")
      cat("======================\n")
      
      # Count different reactive types
      inputs <- sum(sapply(log, function(x) x$type == "input"))
      reactives <- sum(sapply(log, function(x) x$type == "reactive"))
      outputs <- sum(sapply(log, function(x) x$type == "output"))
      
      cat("Inputs:", inputs, "\n")
      cat("Reactives:", reactives, "\n")
      cat("Outputs:", outputs, "\n\n")
      
      # Show execution order
      cat("Recent Reactive Executions:\n")
      recent <- tail(log, 10)
      for (i in seq_along(recent)) {
        item <- recent[[i]]
        cat(sprintf("%d. %s (%s) - %s\n",
                    i, item$label %||% "unlabeled",
                    item$type, item$status))
      }
    } else {
      # Show specific reactive details
      matches <- sapply(log, function(x)
        grepl(reactive_name, x$label %||% "", ignore.case = TRUE))
      
      if (sum(matches) == 0) {
        cat("No reactives found matching:", reactive_name, "\n")
        return(invisible(NULL))
      }
      
      cat("Details for reactive(s) matching '", reactive_name, "':\n", sep = "")
      cat("================================================\n")
      
      matching_items <- log[matches]
      for (item in matching_items) {
        cat("Label:", item$label %||% "unlabeled", "\n")
        cat("Type:", item$type, "\n")
        cat("Status:", item$status, "\n")
        cat("Dependencies:", length(item$deps %||% list()), "\n\n")
      }
    }
  }
  
  # Performance profiling for reactives
  profile_reactive_performance <- function(app_function, duration = 30) {
    
    cat("Starting reactive performance profiling for", duration, "seconds...\n")
    
    # Enable profiling
    profvis::profvis({
      # Run app
      app <- app_function()
      runApp(app, launch.browser = FALSE, port = 3838)
    }, interval = 0.01, prof_output = "reactive_profile.prof")
    
    cat("Profiling complete. View results with profvis output.\n")
  }
  
  return(list(
    debug_graph = debug_reactive_graph,
    show_dependencies = show_reactive_dependencies,
    profile_performance = profile_reactive_performance
  ))
}

# Reactive debugging utilities
reactive_debugging_utils <- function() {
  
  # Add debugging to reactive expressions
  debug_reactive <- function(reactive_expr, label = "reactive") {
    
    reactive({
      cat("Executing reactive:", label, "at", format(Sys.time()), "\n")
      
      # Execute original reactive
      result <- reactive_expr()
      
      cat("Reactive", label, "completed. Result type:", class(result)[1], "\n")
      if (is.data.frame(result)) {
        cat("Data frame dimensions:", nrow(result), "x", ncol(result), "\n")
      } else if (is.vector(result)) {
        cat("Vector length:", length(result), "\n")
      }
      
      return(result)
    })
  }
  
  # Monitor reactive invalidation
  monitor_invalidation <- function(reactive_expr, label = "reactive") {
    
    reactive({
      # Set up invalidation monitoring
      onInvalidate(function() {
        cat("Reactive", label, "invalidated at", format(Sys.time()), "\n")
      })
      
      reactive_expr()
    })
  }
  
  # Trace reactive execution chain
  trace_reactive_chain <- function(session) {
    
    # Store original reactive context
    original_context <- getCurrentReactiveContext()
    
    if (is.null(original_context)) {
      cat("No reactive context available\n")
      return(invisible(NULL))
    }
    
    cat("Current Reactive Context:\n")
    cat("Label:", original_context$label %||% "unlabeled", "\n")
    cat("Type:", class(original_context)[1], "\n")
    
    # Trace parent contexts
    context <- original_context
    level <- 1
    
    while (!is.null(context) && level <= 10) {  # Prevent infinite loops
      cat("Level", level, ":", context$label %||% "unlabeled", "\n")
      
      # Try to get parent context (this is implementation-dependent)
      context <- tryCatch({
        context$.parent
      }, error = function(e) NULL)
      
      level <- level + 1
    }
  }
  
  return(list(
    debug_reactive = debug_reactive,
    monitor_invalidation = monitor_invalidation,
    trace_chain = trace_reactive_chain
  ))
}
```
Browser-Based Debugging
```r
# Advanced browser debugging for Shiny applications
browser_debugging_toolkit <- function() {
  
  # JavaScript debugging integration
  inject_js_debugger <- function() {
    
    js_code <- '
    // Shiny debugging utilities
    window.ShinyDebug = {

      // Monitor input changes
      monitorInputs: function() {
        $(document).on("shiny:inputchanged", function(event) {
          console.log("Input changed:", event.name, "->", event.value);
        });
      },

      // Monitor output updates
      monitorOutputs: function() {
        $(document).on("shiny:value", function(event) {
          console.log("Output updated:", event.name);
        });
      },

      // Track reactive messages
      monitorMessages: function() {
        $(document).on("shiny:message", function(event) {
          console.log("Message received:", event.message);
        });
      },

      // Performance monitoring
      startPerformanceMonitoring: function() {
        this.performanceStart = performance.now();

        $(document).on("shiny:idle", function() {
          if(window.ShinyDebug.performanceStart) {
            const duration = performance.now() - window.ShinyDebug.performanceStart;
            console.log("Reactive cycle completed in:", duration.toFixed(2), "ms");
          }
        });
      },

      // Get current input values
      getCurrentInputs: function() {
        return Shiny.shinyapp.$inputValues;
      },

      // Force reactive flush
      forceReactiveFlush: function() {
        Shiny.shinyapp.$flushReact();
      }
    };

    // Auto-initialize monitoring
    $(document).ready(function() {
      window.ShinyDebug.monitorInputs();
      window.ShinyDebug.monitorOutputs();
      window.ShinyDebug.monitorMessages();
      window.ShinyDebug.startPerformanceMonitoring();
      console.log("Shiny debugging tools initialized");
    });
    '
    
    tags$script(HTML(js_code))
  }
  
  # Server-side debugging helpers
  debug_server_function <- function(server_function) {
    
    function(input, output, session) {
      
      # Add session debugging
      session$onSessionEnded(function() {
        cat("Session ended:", session$token, "at", format(Sys.time()), "\n")
      })
      
      # Monitor input changes
      observe({
        input_names <- names(reactiveValuesToList(input))
        
        for (name in input_names) {
          observeEvent(input[[name]], {
            cat("Input change:", name, "->", input[[name]], "\n")
          }, ignoreInit = TRUE, ignoreNULL = FALSE)
        }
      })
      
      # Execute original server function
      server_function(input, output, session)
    }
  }
  
  # Error handling and logging
  enhanced_error_handler <- function(server_function) {
    
    function(input, output, session) {
      
      # Set up error handling
      options(shiny.error = function() {
        
        # Get stack trace
        calls <- sys.calls()
        
        cat("Shiny Error Occurred:\n")
        cat("====================\n")
        cat("Time:", format(Sys.time()), "\n")
        cat("Session:", session$token, "\n")
        cat("User Agent:", session$clientData$url_search %||% "Unknown", "\n\n")
        
        cat("Stack Trace:\n")
        for (i in seq_along(calls)) {
          call_str <- deparse(calls[[i]])[1]
          if (nchar(call_str) > 80) {
            call_str <- paste0(substr(call_str, 1, 77), "...")
          }
          cat(sprintf("%2d: %s\n", i, call_str))
        }
        
        # Log to file as well
        error_log <- data.frame(
          timestamp = Sys.time(),
          session_id = session$token,
          error_trace = paste(sapply(calls, function(x) deparse(x)[1]), collapse = " -> "),
          stringsAsFactors = FALSE
        )
        
        # Append to error log file
        if (file.exists("error_log.csv")) {
          write.table(error_log, "error_log.csv", append = TRUE,
                      sep = ",", row.names = FALSE, col.names = FALSE)
        } else {
          write.csv(error_log, "error_log.csv", row.names = FALSE)
        }
      })
      
      # Execute server function with error handling
      tryCatch({
        server_function(input, output, session)
      }, error = function(e) {
        cat("Server function error:", e$message, "\n")
        # Could send error to monitoring service here
      })
    }
  }
  
  return(list(
    inject_js_debugger = inject_js_debugger,
    debug_server = debug_server_function,
    enhanced_error_handler = enhanced_error_handler
  ))
}
```
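A short usage sketch for wiring these helpers into an app; the toy slider/text UI is illustrative, but the toolkit functions are the ones defined above:

```r
toolkit <- browser_debugging_toolkit()

ui <- fluidPage(
  toolkit$inject_js_debugger(),   # adds the JS console monitors to the page
  sliderInput("n", "n", min = 1, max = 100, value = 50),
  textOutput("out")
)

# Wrap the server so errors are traced and logged to error_log.csv
server <- toolkit$enhanced_error_handler(function(input, output, session) {
  output$out <- renderText(input$n * 2)
})

shinyApp(ui, server)
```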
Performance Testing and Optimization
Load Testing Framework
```r
# Comprehensive load testing for Shiny applications
create_load_testing_framework <- function() {
  
  # Load testing configuration
  load_test_config <- list(
    base_url = "http://localhost:3838",
    concurrent_users = c(1, 5, 10, 25, 50),
    test_duration = 60,  # seconds
    ramp_up_time = 10,   # seconds
    scenarios = list()
  )
  
  # Define test scenarios
  define_test_scenarios <- function() {
    
    scenarios <- list(
      
      # Basic navigation scenario
      basic_navigation = list(
        name = "Basic Navigation",
        steps = list(
          list(action = "navigate", url = "/"),
          list(action = "wait", duration = 2),
          list(action = "click", selector = "#tab-data"),
          list(action = "wait", duration = 3),
          list(action = "click", selector = "#tab-analysis"),
          list(action = "wait", duration = 5)
        )
      ),
      
      # Data interaction scenario
      data_interaction = list(
        name = "Data Interaction",
        steps = list(
          list(action = "navigate", url = "/"),
          list(action = "select", selector = "#dataset", value = "mtcars"),
          list(action = "wait", duration = 3),
          list(action = "select", selector = "#x_var", value = "mpg"),
          list(action = "select", selector = "#y_var", value = "hp"),
          list(action = "wait", duration = 5),
          list(action = "slider", selector = "#point_size", value = 3),
          list(action = "wait", duration = 2)
        )
      ),
      
      # Heavy computation scenario
      heavy_computation = list(
        name = "Heavy Computation",
        steps = list(
          list(action = "navigate", url = "/"),
          list(action = "select", selector = "#analysis_type", value = "complex"),
          list(action = "click", selector = "#run_analysis"),
          list(action = "wait", duration = 15),
          list(action = "click", selector = "#download_results"),
          list(action = "wait", duration = 3)
        )
      )
    )
    
    return(scenarios)
  }
  
  # Execute load tests using shinyloadtest
  run_load_test <- function(app_url, scenario_name, concurrent_users = 5, duration = 60) {
    
    cat("Starting load test:", scenario_name, "\n")
    cat("URL:", app_url, "\n")
    cat("Concurrent users:", concurrent_users, "\n")
    cat("Duration:", duration, "seconds\n\n")
    
    # Create temporary script for shinyloadtest
    test_script <- sprintf('
library(shinyloadtest)

# Record user session first
record_session(
  target_app_url = "%s",
  output_file = "loadtest_recording.log",
  seed = 12345
)

# Run load test
load_test_result <- load_test(
  recording = "loadtest_recording.log",
  target_app_url = "%s",
  workers = %d,
  duration = %d
)

# Generate report
shinyloadtest_report(
  load_test_result,
  output = "loadtest_report.html"
)
', app_url, app_url, concurrent_users, duration)
    
    writeLines(test_script, "temp_loadtest.R")
    
    # Execute load test
    tryCatch({
      source("temp_loadtest.R")
      cat("Load test completed successfully\n")
      cat("Report saved to: loadtest_report.html\n")
    }, error = function(e) {
      cat("Load test failed:", e$message, "\n")
    }, finally = {
      if (file.exists("temp_loadtest.R")) {
        file.remove("temp_loadtest.R")
      }
    })
  }
  
  # Memory usage monitoring
  monitor_memory_usage <- function(app_function, duration = 300) {
    
    cat("Starting memory monitoring for", duration, "seconds...\n")
    
    # Start monitoring
    memory_log <- data.frame()
    start_time <- Sys.time()
    
    # Memory monitoring function
    monitor_memory <- function() {
      
      while (difftime(Sys.time(), start_time, units = "secs") < duration) {
        # Get memory usage
        memory_info <- gc(verbose = FALSE)
        used_memory <- sum(memory_info[, "used"] * c(8, 8))  # Convert to bytes
        
        # Get system memory if available
        sys_memory <- tryCatch({
          if (Sys.info()["sysname"] == "Linux") {
            system("free -m | grep '^Mem:' | awk '{print $3}'", intern = TRUE)
          } else {
            NA
          }
        }, error = function(e) NA)
        
        # Log memory usage
        memory_entry <- data.frame(
          timestamp = Sys.time(),
          r_memory_mb = used_memory / (1024^2),
          system_memory_mb = ifelse(is.na(sys_memory), NA, as.numeric(sys_memory)),
          stringsAsFactors = FALSE
        )
        
        memory_log <- rbind(memory_log, memory_entry)
        
        Sys.sleep(5)  # Check every 5 seconds
      }
      
      return(memory_log)
    }
    
    # Run app with memory monitoring
    app_process <- callr::r_bg(function(app_func) {
      app_func()
    }, args = list(app_function))
    
    memory_data <- monitor_memory()
    
    # Stop app
    app_process$kill()
    
    # Analyze memory usage
    cat("\nMemory Usage Analysis:\n")
    cat("=====================\n")
    cat("Peak R memory usage:", round(max(memory_data$r_memory_mb), 2), "MB\n")
    cat("Average R memory usage:", round(mean(memory_data$r_memory_mb), 2), "MB\n")
    cat("Memory growth rate:", round(
      (tail(memory_data$r_memory_mb, 1) - head(memory_data$r_memory_mb, 1)) / duration * 60, 2
    ), "MB/minute\n")
    
    # Save memory log
    write.csv(memory_data, "memory_usage_log.csv", row.names = FALSE)
    cat("Memory log saved to: memory_usage_log.csv\n")
    
    return(memory_data)
  }
  
  return(list(
    config = load_test_config,
    scenarios = define_test_scenarios(),
    run_test = run_load_test,
    monitor_memory = monitor_memory_usage
  ))
}

# Performance profiling utilities
performance_profiling_tools <- function() {
  
  # Comprehensive performance analysis
  analyze_app_performance <- function(app_function, interactions = NULL) {
    
    cat("Starting comprehensive performance analysis...\n")
    
    # Default interactions if none provided
    if (is.null(interactions)) {
      interactions <- list(
        list(input = "dataset", value = "mtcars", wait = 2),
        list(input = "x_var", value = "mpg", wait = 1),
        list(input = "y_var", value = "hp", wait = 3),
        list(input = "color", value = "steelblue", wait = 1)
      )
    }
    
    # Profile with profvis
    profile_result <- profvis::profvis({
      
      # Create test session
      session <- MockShinySession$new()
      
      # Initialize app
      app <- app_function()
      server_func <- app$server
      
      # Execute interactions
      for (interaction in interactions) {
        # Set input
        session$setInputs(!!interaction$input := interaction$value)
        
        # Wait
        Sys.sleep(interaction$wait)
        
        # Flush reactive updates
        session$flushReact()
      }
    }, interval = 0.01)
    
    cat("Performance profiling completed\n")
    return(profile_result)
  }
  
  # Reactive performance benchmarking
  benchmark_reactive_expressions <- function(reactive_list, iterations = 100) {
    
    cat("Benchmarking reactive expressions...\n")
    
    results <- data.frame()
    
    for (name in names(reactive_list)) {
      cat("Testing:", name, "\n")
      
      # Benchmark the reactive
      timing <- microbenchmark::microbenchmark(
        reactive_list[[name]](),
        times = iterations,
        unit = "ms"
      )
      
      # Extract summary statistics
      summary_stats <- summary(timing)
      
      result_row <- data.frame(
        reactive_name = name,
        min_ms = summary_stats$min,
        median_ms = summary_stats$median,
        mean_ms = summary_stats$mean,
        max_ms = summary_stats$max,
        iterations = iterations,
        stringsAsFactors = FALSE
      )
      
      results <- rbind(results, result_row)
    }
    
    # Display results
    cat("\nReactive Performance Benchmark Results:\n")
    cat("======================================\n")
    print(results)
    
    # Identify slow reactives
    slow_reactives <- results[results$median_ms > 100, ]
    if (nrow(slow_reactives) > 0) {
      cat("\nSlow reactives (>100ms median):\n")
      print(slow_reactives[, c("reactive_name", "median_ms")])
    }
    
    return(results)
  }
  
  # Database query performance testing
  test_database_performance <- function(db_pool, queries, iterations = 50) {
    
    cat("Testing database query performance...\n")
    
    results <- data.frame()
    
    for (query_name in names(queries)) {
      cat("Testing query:", query_name, "\n")
      
      query <- queries[[query_name]]
      
      # Benchmark query execution
      timing <- microbenchmark::microbenchmark(
        {
          result <- pool::dbGetQuery(db_pool, query$sql, params = query$params)
          nrow(result)  # Force evaluation
        },
        times = iterations,
        unit = "ms"
      )
      
      summary_stats <- summary(timing)
      
      result_row <- data.frame(
        query_name = query_name,
        min_ms = summary_stats$min,
        median_ms = summary_stats$median,
        mean_ms = summary_stats$mean,
        max_ms = summary_stats$max,
        stringsAsFactors = FALSE
      )
      
      results <- rbind(results, result_row)
    }
    
    cat("\nDatabase Query Performance Results:\n")
    cat("==================================\n")
    print(results)
    
    return(results)
  }
  
  return(list(
    analyze_performance = analyze_app_performance,
    benchmark_reactives = benchmark_reactive_expressions,
    test_db_performance = test_database_performance
  ))
}
```
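The benchmarking helper above accepts any list of zero-argument functions, so it can be exercised before wiring it to real reactives; a quick usage sketch (the two summary functions are illustrative stand-ins):

```r
tools <- performance_profiling_tools()

# Benchmark two illustrative computations of different cost
results <- tools$benchmark_reactives(
  list(
    fast_summary = function() summary(rnorm(1e3)),
    slow_summary = function() summary(rnorm(1e6))
  ),
  iterations = 20
)
```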
Integration Testing with shinytest2
End-to-End Testing Framework
```r
# Comprehensive end-to-end testing with shinytest2
create_e2e_testing_framework <- function() {
  
  # Test application workflows
  test_complete_workflows <- function(app_function) {
    
    test_that("Complete data analysis workflow", {
      # Start application
      app <- AppDriver$new(app_function, name = "data-analysis-workflow")
      
      # Wait for initial load
      app$wait_for_idle(timeout = 10000)
      
      # Test initial state
      expect_true(app$get_value(output = "welcome_message") != "")
      
      # Navigate to data section
      app$click("data_tab")
      app$wait_for_idle()
      
      # Select dataset
      app$set_inputs(dataset = "mtcars")
      app$wait_for_idle()
      
      # Verify data loaded
      expect_true(nrow(app$get_value(output = "data_table")) > 0)
      
      # Navigate to analysis section
      app$click("analysis_tab")
      app$wait_for_idle()
      
      # Configure analysis
      app$set_inputs(
        x_variable = "mpg",
        y_variable = "hp",
        analysis_type = "correlation"
      )
      app$wait_for_idle()
      
      # Run analysis
      app$click("run_analysis")
      app$wait_for_idle(timeout = 15000)
      
      # Verify results
      expect_true(app$get_value(output = "analysis_results") != "")
      expect_true(app$get_value(output = "correlation_plot") != "")
      
      # Test export functionality
      app$click("export_results")
      app$wait_for_idle()
      
      # Clean up
      app$stop()
    })
    
    test_that("User permission workflows", {
      # Test different user roles
      user_roles <- c("viewer", "analyst", "admin")
      
      for (role in user_roles) {
        app <- AppDriver$new(app_function, name = paste0("user-role-", role))
        
        # Simulate login
        app$set_inputs(
          username = paste0("test_", role),
          password = "test_password"
        )
        app$click("login_button")
        app$wait_for_idle()
        
        # Test role-specific access
        if (role == "viewer") {
          # Viewer should see data but not edit controls
          expect_true(app$get_value(output = "data_display") != "")
          expect_false(app$get_element("#edit_data")$is_visible())
        } else if (role == "analyst") {
          # Analyst should see analysis tools
          expect_true(app$get_element("#analysis_tools")$is_visible())
          expect_true(app$get_element("#run_analysis")$is_visible())
        } else if (role == "admin") {
          # Admin should see user management
          expect_true(app$get_element("#user_management")$is_visible())
          expect_true(app$get_element("#system_settings")$is_visible())
        }
        
        app$stop()
      }
    })
    
    test_that("Error handling and recovery", {
      app <- AppDriver$new(app_function, name = "error-handling")
      app$wait_for_idle()
      
      # Test invalid input handling
      app$set_inputs(numeric_input = "invalid_number")
      app$wait_for_idle()
      
      # Should show error message
      expect_true(grepl("error", app$get_value(output = "validation_message"), ignore.case = TRUE))
      
      # Test recovery from error
      app$set_inputs(numeric_input = 42)
      app$wait_for_idle()
      
      # Error should be cleared
      expect_false(grepl("error", app$get_value(output = "validation_message"), ignore.case = TRUE))
      
      # Test server error simulation
      app$click("trigger_server_error")
      app$wait_for_idle()
      
      # Should handle gracefully
      expect_true(app$get_element("#error_notification")$is_visible())
      
      app$stop()
    })
  }
  
  # Visual regression testing
  test_visual_regression <- function(app_function) {
    
    test_that("Visual appearance remains consistent", {
      app <- AppDriver$new(app_function, name = "visual-regression")
      app$wait_for_idle()
      
      # Take screenshots of key pages
      pages_to_test <- list(
        "home" = list(tab = NULL, inputs = list()),
        "data_view" = list(tab = "data_tab", inputs = list(dataset = "mtcars")),
        "analysis" = list(tab = "analysis_tab", inputs = list(x_var = "mpg", y_var = "hp")),
        "settings" = list(tab = "settings_tab", inputs = list())
      )
      
      for (page_name in names(pages_to_test)) {
        page_config <- pages_to_test[[page_name]]
        
        # Navigate to page
        if (!is.null(page_config$tab)) {
          app$click(page_config$tab)
          app$wait_for_idle()
        }
        
        # Set inputs
        if (length(page_config$inputs) > 0) {
          app$set_inputs(!!!page_config$inputs)
          app$wait_for_idle()
        }
        
        # Take screenshot
        app$expect_screenshot(
          name = page_name,
          screenshot_args = list(
            selector = "body",
            delay = 1
          )
        )
      }
      
      app$stop()
    })
  }
  
  # Performance testing in browser
  test_browser_performance <- function(app_function) {
    
    test_that("Browser performance meets standards", {
      app <- AppDriver$new(app_function, name = "browser-performance")
      
      # Test initial load time
      start_time <- Sys.time()
      app$wait_for_idle(timeout = 10000)
      load_time <- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
      
      expect_lt(load_time, 5, label = "Initial load time (seconds)")
      
      # Test interaction responsiveness
      interaction_times <- c()
      
      for (i in 1:5) {
        start_time <- Sys.time()
        app$set_inputs(test_slider = runif(1, 1, 100))
        app$wait_for_idle()
        interaction_time <- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
        interaction_times <- c(interaction_times, interaction_time)
      }
      
      avg_interaction_time <- mean(interaction_times)
      expect_lt(avg_interaction_time, 2, label = "Average interaction time (seconds)")
      
      # Test memory usage (if available)
      if (app$get_js("window.performance.memory") != "undefined") {
        initial_memory <- app$get_js("window.performance.memory.usedJSHeapSize")
        
        # Perform memory-intensive operations
        for (i in 1:10) {
          app$set_inputs(dataset = sample(c("mtcars", "iris", "airquality"), 1))
          app$wait_for_idle()
        }
        
        final_memory <- app$get_js("window.performance.memory.usedJSHeapSize")
        memory_growth <- final_memory - initial_memory
        
        expect_lt(memory_growth, 50000000, label = "JS heap growth (bytes)")
      }
      
      app$stop()
    })
  }
  
  return(list(
    test_workflows = test_complete_workflows,
    test_visual = test_visual_regression,
    test_performance = test_browser_performance
  ))
}
```
Automated Testing and CI/CD Integration
Continuous Integration Setup
```r
# Automated testing workflow for CI/CD
create_ci_testing_workflow <- function() {
  
  # GitHub Actions workflow configuration
  github_actions_config <- '
name: Shiny App Testing

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        r-version: [4.2, 4.3]

    steps:
    - uses: actions/checkout@v3

    - name: Set up R ${{ matrix.r-version }}
      uses: r-lib/actions/setup-r@v2
      with:
        r-version: ${{ matrix.r-version }}

    - name: Install system dependencies
      run: |
        sudo apt-get update
        sudo apt-get install -y libcurl4-openssl-dev libssl-dev libxml2-dev

    - name: Install R dependencies
      run: |
        install.packages(c("shiny", "testthat", "shinytest2", "profvis"))
        install.packages("remotes")
        remotes::install_deps(dependencies = TRUE)
      shell: Rscript {0}

    - name: Run unit tests
      run: |
        testthat::test_dir("tests/testthat")
      shell: Rscript {0}

    - name: Run integration tests
      run: |
        source("tests/run_integration_tests.R")
      shell: Rscript {0}

    - name: Run performance tests
      run: |
        source("tests/run_performance_tests.R")
      shell: Rscript {0}

    - name: Upload test results
      uses: actions/upload-artifact@v3
      if: failure()
      with:
        name: test-results
        path: |
          tests/results/
          *.log
'
  
  # Create test runner scripts
  create_test_runners <- function(app_path) {
    
    # Integration test runner
    integration_runner <- '
# Integration Test Runner
library(testthat)
library(shinytest2)

cat("Running integration tests...\\n")

# Set up test environment
Sys.setenv("SHINY_TEST_MODE" = "true")

# Run integration tests
test_results <- test_dir(
  "tests/integration",
  reporter = "summary",
  env = parent.frame()
)

# Check results
if(any(test_results$failed > 0)) {
  cat("Integration tests failed!\\n")
  quit(status = 1)
} else {
  cat("All integration tests passed!\\n")
}
'
    
    writeLines(integration_runner, file.path(app_path, "tests", "run_integration_tests.R"))
    
    # Performance test runner
    performance_runner <- '
# Performance Test Runner
library(profvis)
library(microbenchmark)

cat("Running performance tests...\\n")

# Source app
source("app.R")

# Run performance benchmarks
source("tests/performance/benchmark_reactives.R")
source("tests/performance/memory_usage_test.R")

cat("Performance tests completed!\\n")
'
    
    writeLines(performance_runner, file.path(app_path, "tests", "run_performance_tests.R"))
  }
  
  # Test reporting and notifications
  setup_test_reporting <- function() {
    
    # Test result aggregator
    aggregate_test_results <- function(test_dirs) {
      
      all_results <- list()
      
      for (dir in test_dirs) {
        if (dir.exists(dir)) {
          # Run tests and capture results
          results <- testthat::test_dir(dir, reporter = "silent")
          
          all_results[[basename(dir)]] <- list(
            total = length(results),
            passed = sum(sapply(results, function(x) x$passed)),
            failed = sum(sapply(results, function(x) x$failed)),
            warnings = sum(sapply(results, function(x) x$warning)),
            skipped = sum(sapply(results, function(x) x$skipped))
          )
        }
      }
      
      return(all_results)
    }
    
    # Generate test report
    generate_test_report <- function(results, output_file = "test_report.html") {
      
      # Create HTML report (paste0 is used for string building; R has no "+" operator for strings)
      html_content <- paste0('
<!DOCTYPE html>
<html>
<head>
  <title>Shiny App Test Report</title>
  <style>
    body { font-family: Arial, sans-serif; margin: 20px; }
    .passed { color: green; }
    .failed { color: red; }
    .warning { color: orange; }
    table { border-collapse: collapse; width: 100%; }
    th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
    th { background-color: #f2f2f2; }
  </style>
</head>
<body>
  <h1>Shiny Application Test Report</h1>
  <p>Generated: ', format(Sys.time()), '</p>

  <h2>Test Summary</h2>
  <table>
    <tr>
      <th>Test Suite</th>
      <th>Total</th>
      <th>Passed</th>
      <th>Failed</th>
      <th>Warnings</th>
      <th>Skipped</th>
    </tr>')
      
      for (suite_name in names(results)) {
        suite <- results[[suite_name]]
        
        html_content <- paste0(html_content, '
    <tr>
      <td>', suite_name, '</td>
      <td>', suite$total, '</td>
      <td class="passed">', suite$passed, '</td>
      <td class="failed">', suite$failed, '</td>
      <td class="warning">', suite$warnings, '</td>
      <td>', suite$skipped, '</td>
    </tr>')
      }
      
      html_content <- paste0(html_content, '
  </table>
</body>
</html>')
      
      writeLines(html_content, output_file)
      cat("Test report generated:", output_file, "\n")
    }
    
    return(list(
      aggregate_results = aggregate_test_results,
      generate_report = generate_test_report
    ))
  }
  
  return(list(
    github_config = github_actions_config,
    create_runners = create_test_runners,
    reporting = setup_test_reporting()
  ))
}
```
Common Issues and Solutions
Issue 1: Reactive Testing Challenges
Problem: Testing reactive expressions and complex reactive dependencies is difficult due to asynchronous execution and context requirements.
Solution:
Implement specialized reactive testing utilities:
```r
# Advanced reactive testing framework
reactive_testing_framework <- function() {
  
  # Mock reactive context for testing
  create_mock_reactive_context <- function() {
    
    MockReactiveContext <- R6::R6Class(
      "MockReactiveContext",
      public = list(
        values = NULL,
        invalidated = FALSE,
        
        initialize = function() {
          self$values <- reactiveValues()
        },
        
        set_value = function(name, value) {
          self$values[[name]] <- value
        },
        
        get_value = function(name) {
          self$values[[name]]
        },
        
        invalidate = function() {
          self$invalidated <- TRUE
        },
        
        is_invalidated = function() {
          self$invalidated
        }
      )
    )
    
    return(MockReactiveContext$new())
  }
  
  # Test reactive expressions
  test_reactive_expression <- function(reactive_expr, inputs, expected_outputs) {
    
    # Create mock session
    session <- MockShinySession$new()
    
    # Set up inputs
    for (input_name in names(inputs)) {
      session$setInputs(!!input_name := inputs[[input_name]])
    }
    
    # Execute reactive in mock context
    result <- withMockSession(session, {
      reactive_expr()
    })
    
    # Verify outputs
    for (output_name in names(expected_outputs)) {
      expect_equal(
        result[[output_name]],
        expected_outputs[[output_name]],
        info = paste("Output", output_name, "does not match expected value")
      )
    }
  }
  
  # Test reactive dependencies
  test_reactive_dependencies <- function(reactive_expr, dependency_inputs) {
    
    session <- MockShinySession$new()
    invalidation_count <- 0
    
    # Create reactive with invalidation tracking
    tracked_reactive <- reactive({
      # Track invalidations
      onInvalidate(function() {
        invalidation_count <<- invalidation_count + 1
      })
      
      reactive_expr()
    })
    
    # Test each dependency
    for (input_name in dependency_inputs) {
      initial_count <- invalidation_count
      
      # Change input value
      session$setInputs(!!input_name := runif(1))
      session$flushReact()
      
      # Verify invalidation occurred
      expect_gt(
        invalidation_count,
        initial_count,
        info = paste("Reactive should invalidate when", input_name, "changes")
      )
    }
  }
  
  return(list(
    create_context = create_mock_reactive_context,
    test_expression = test_reactive_expression,
    test_dependencies = test_reactive_dependencies
  ))
}
```
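Shiny also ships a built-in route for this problem: `shiny::testServer()` runs server code inside a simulated session, so reactives and outputs can be inspected directly without a browser. It complements the custom utilities above. A minimal sketch follows; the server function here is illustrative, not taken from the application in this chapter.

```r
library(shiny)
library(testthat)

# Illustrative server logic with one reactive to exercise
server <- function(input, output, session) {
  doubled <- reactive({
    req(input$n)
    input$n * 2
  })
  output$result <- renderText(doubled())
}

test_that("doubled() reacts to input$n", {
  testServer(server, {
    session$setInputs(n = 5)
    expect_equal(doubled(), 10)      # reactives are visible inside testServer()
    session$setInputs(n = 21)
    expect_equal(output$result, "42")  # rendered outputs can be checked too
  })
})
```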
Issue 2: Asynchronous Testing Complexity
Problem: Testing applications with asynchronous operations, file uploads, and external API calls requires special handling.
Solution:
Implement asynchronous testing patterns:
```r
# Asynchronous testing utilities
async_testing_utilities <- function() {
  
  # Test file upload operations
  test_file_upload <- function(app_driver, file_input_id, test_file_path) {
    
    test_that("File upload works correctly", {
      # Upload file
      app_driver$upload_file(file_input_id, test_file_path)
      
      # Wait for processing
      app_driver$wait_for_idle(timeout = 30000)
      
      # Verify upload success
      expect_true(
        app_driver$get_value(output = "upload_status") == "success",
        info = "File upload should succeed"
      )
      
      # Verify file processing
      uploaded_data <- app_driver$get_value(output = "uploaded_data")
      expect_true(
        !is.null(uploaded_data) && nrow(uploaded_data) > 0,
        info = "Uploaded file should be processed and contain data"
      )
    })
  }
  
  # Test API integration
  test_api_integration <- function(app_function, mock_responses) {
    
    test_that("API integration works with various responses", {
      
      # Mock HTTP requests
      with_mock(
        `httr::GET` = function(url, ...) {
          # Return appropriate mock response based on URL
          for (pattern in names(mock_responses)) {
            if (grepl(pattern, url)) {
              return(mock_responses[[pattern]])
            }
          }
          # Default error response
          return(list(status_code = 404))
        },
        {
          app <- AppDriver$new(app_function)
          app$wait_for_idle()
          
          # Test successful API call
          app$click("fetch_data_button")
          app$wait_for_idle(timeout = 10000)
          
          expect_true(
            app$get_value(output = "api_status") == "success",
            info = "API call should succeed with mocked response"
          )
          
          app$stop()
        }
      )
    })
  }
  
  # Test long-running computations
  test_long_running_computation <- function(app_driver, computation_trigger, max_wait = 60) {
    
    test_that("Long-running computation completes successfully", {
      
      # Start computation
      app_driver$click(computation_trigger)
      
      # Monitor progress
      start_time <- Sys.time()
      completed <- FALSE
      
      while (!completed && difftime(Sys.time(), start_time, units = "secs") < max_wait) {
        Sys.sleep(1)
        app_driver$wait_for_idle(timeout = 1000)
        
        # Check if computation completed
        status <- app_driver$get_value(output = "computation_status")
        if (!is.null(status) && status %in% c("completed", "error")) {
          completed <- TRUE
        }
      }
      
      expect_true(completed, info = "Computation should complete within time limit")
      
      final_status <- app_driver$get_value(output = "computation_status")
      expect_equal(final_status, "completed", info = "Computation should complete successfully")
    })
  }
  
  return(list(
    test_file_upload = test_file_upload,
    test_api_integration = test_api_integration,
    test_long_computation = test_long_running_computation
  ))
}
```
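Instead of hand-rolled polling loops, shinytest2 also offers `AppDriver$wait_for_value()`, which blocks until an input, output, or export takes a non-ignored value. A minimal sketch, in which the app path and the input/output IDs are illustrative:

```r
library(shinytest2)
library(testthat)

test_that("slow output eventually arrives", {
  app <- AppDriver$new(app_dir = "path/to/app")  # illustrative app directory

  app$click("run_long_job")  # illustrative actionButton ID

  # Block until the output is populated (or fail after 30 seconds)
  app$wait_for_value(
    output = "job_status",
    ignore = list(NULL, ""),   # keep waiting while the output is missing or empty
    timeout = 30000
  )

  expect_equal(app$get_value(output = "job_status"), "completed")
  app$stop()
})
```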
Issue 3: Performance Debugging and Optimization
Problem: Identifying performance bottlenecks in complex reactive applications with multiple data sources and computations.
Solution:
Implement systematic performance debugging:
```r
# Performance debugging toolkit
performance_debugging_toolkit <- function() {
  
  # Reactive performance profiler
  profile_reactive_performance <- function(app_function, scenario_steps) {
    
    cat("Starting reactive performance profiling...\n")
    
    # Enable reactive logging
    options(shiny.reactlog = TRUE)
    shiny::reactlogReset()
    
    # Profile application
    profile_result <- profvis::profvis({
      
      app <- AppDriver$new(app_function)
      app$wait_for_idle()
      
      # Execute scenario steps
      for (step in scenario_steps) {
        switch(step$action,
          "click"  = app$click(step$selector),
          "input"  = app$set_inputs(!!step$input := step$value),
          "wait"   = Sys.sleep(step$duration),
          "scroll" = app$run_js(paste0("document.querySelector('", step$selector, "').scrollIntoView()"))
        )
        app$wait_for_idle()
      }
      
      app$stop()
    }, interval = 0.005)  # High resolution profiling
    
    # Analyze reactive log
    reactive_summary <- analyze_reactive_log()
    
    # Generate performance report
    performance_report <- list(
      profiling_result = profile_result,
      reactive_analysis = reactive_summary,
      recommendations = generate_performance_recommendations(reactive_summary)
    )
    
    return(performance_report)
  }
  
  # Analyze reactive execution patterns
  analyze_reactive_log <- function() {
    
    log <- shiny::reactlog()
    
    if (is.null(log)) {
      return(list(error = "No reactive log available"))
    }
    
    # Analyze reactive patterns
    reactive_stats <- list(
      total_executions = length(log),
      unique_reactives = length(unique(sapply(log, function(x) x$label %||% "unlabeled"))),
      execution_times = sapply(log, function(x) x$time %||% 0),
      invalidation_chains = analyze_invalidation_chains(log)
    )
    
    # Identify performance issues
    slow_reactives <- identify_slow_reactives(log)
    excessive_invalidations <- identify_excessive_invalidations(log)
    
    return(list(
      stats = reactive_stats,
      slow_reactives = slow_reactives,
      excessive_invalidations = excessive_invalidations
    ))
  }
  
  # Identify slow reactive expressions
  identify_slow_reactives <- function(log, threshold_ms = 100) {
    
    slow_reactives <- list()
    
    for (item in log) {
      execution_time <- item$time %||% 0
      
      if (execution_time > threshold_ms) {
        slow_reactives[[length(slow_reactives) + 1]] <- list(
          label = item$label %||% "unlabeled",
          type = item$type,
          execution_time = execution_time,
          complexity_score = calculate_complexity_score(item)
        )
      }
    }
    
    return(slow_reactives)
  }
  
  # Calculate reactive complexity score
  calculate_complexity_score <- function(reactive_item) {
    
    score <- 0
    
    # Base score from execution time
    score <- score + (reactive_item$time %||% 0) / 10
    
    # Add score for dependencies
    deps_count <- length(reactive_item$deps %||% list())
    score <- score + deps_count * 5
    
    # Add score for type complexity
    type_weights <- list(
      "reactive" = 10,
      "output"   = 15,
      "observer" = 20,
      "input"    = 5
    )
    
    score <- score + (type_weights[[reactive_item$type]] %||% 10)
    
    return(round(score, 2))
  }
  
  # Generate performance recommendations
  generate_performance_recommendations <- function(reactive_analysis) {
    
    recommendations <- list()
    
    # Check for slow reactives
    if (length(reactive_analysis$slow_reactives) > 0) {
      recommendations$slow_reactives <- list(
        issue = "Slow reactive expressions detected",
        count = length(reactive_analysis$slow_reactives),
        suggestions = c(
          "Consider caching expensive computations with reactive values",
          "Break complex reactives into smaller, focused expressions",
          "Use isolate() to prevent unnecessary re-execution",
          "Optimize data processing algorithms"
        )
      )
    }
    
    # Check for excessive invalidations
    if (length(reactive_analysis$excessive_invalidations) > 0) {
      recommendations$excessive_invalidations <- list(
        issue = "Excessive reactive invalidations detected",
        count = length(reactive_analysis$excessive_invalidations),
        suggestions = c(
          "Review reactive dependencies for unnecessary connections",
          "Use debounce() or throttle() for user input processing",
          "Consider using eventReactive() for user-triggered computations",
          "Minimize reactive graph complexity"
        )
      )
    }
    
    # General performance recommendations
    recommendations$general <- list(
      suggestions = c(
        "Profile database queries for optimization opportunities",
        "Implement data pagination for large datasets",
        "Use reactive values for expensive computations",
        "Consider lazy loading for non-critical components"
      )
    )
    
    return(recommendations)
  }
  
  # Memory leak detection
  detect_memory_leaks <- function(app_function, test_cycles = 10) {
    
    cat("Detecting memory leaks over", test_cycles, "cycles...\n")
    
    memory_measurements <- data.frame()
    
    for (cycle in 1:test_cycles) {
      cat("Cycle", cycle, "of", test_cycles, "\n")
      
      # Force garbage collection
      gc(verbose = FALSE)
      
      # Measure initial memory
      initial_memory <- sum(gc(verbose = FALSE)[, "used"] * c(8, 8))
      
      # Run app cycle
      app <- AppDriver$new(app_function)
      app$wait_for_idle()
      
      # Simulate user interaction
      app$set_inputs(test_input = runif(1))
      app$wait_for_idle()
      
      app$stop()
      
      # Measure final memory
      gc(verbose = FALSE)
      final_memory <- sum(gc(verbose = FALSE)[, "used"] * c(8, 8))
      
      # Record measurement
      measurement <- data.frame(
        cycle = cycle,
        initial_memory = initial_memory,
        final_memory = final_memory,
        memory_growth = final_memory - initial_memory,
        timestamp = Sys.time()
      )
      
      memory_measurements <- rbind(memory_measurements, measurement)
      
      Sys.sleep(1)  # Allow cleanup
    }
    
    # Analyze memory growth
    total_growth <- sum(memory_measurements$memory_growth)
    avg_growth_per_cycle <- mean(memory_measurements$memory_growth)
    
    cat("\nMemory Leak Analysis:\n")
    cat("====================\n")
    cat("Total memory growth:", round(total_growth / 1024^2, 2), "MB\n")
    cat("Average growth per cycle:", round(avg_growth_per_cycle / 1024^2, 2), "MB\n")
    
    # Detect potential leaks
    if (avg_growth_per_cycle > 1024^2) {  # More than 1MB per cycle
      cat("⚠️ POTENTIAL MEMORY LEAK DETECTED\n")
      cat("Recommendations:\n")
      cat("- Check for unclosed database connections\n")
      cat("- Review reactive cleanup and onInvalidate handlers\n")
      cat("- Verify proper session cleanup\n")
      cat("- Check for circular references in reactive values\n")
    } else {
      cat("✅ No significant memory leaks detected\n")
    }
    
    return(memory_measurements)
  }
  
  return(list(
    profile_performance = profile_reactive_performance,
    analyze_log = analyze_reactive_log,
    detect_leaks = detect_memory_leaks,
    identify_slow = identify_slow_reactives
  ))
}
```
Integration Testing with shinytest2
End-to-End Testing Framework
# Comprehensive end-to-end testing with shinytest2
<- function() {
create_e2e_testing_framework
# Test application workflows
<- function(app_function) {
test_complete_workflows
test_that("Complete data analysis workflow", {
# Start application
<- AppDriver$new(app_function, name = "data-analysis-workflow")
app
# Wait for initial load
$wait_for_idle(timeout = 10000)
app
# Test initial state
expect_true(app$get_value(output = "welcome_message") != "")
# Navigate to data section
$click("data_tab")
app$wait_for_idle()
app
# Select dataset
$set_inputs(dataset = "mtcars")
app$wait_for_idle()
app
# Verify data loaded
expect_true(nrow(app$get_value(output = "data_table")) > 0)
# Navigate to analysis section
$click("analysis_tab")
app$wait_for_idle()
app
# Configure analysis
$set_inputs(
appx_variable = "mpg",
y_variable = "hp",
analysis_type = "correlation"
)$wait_for_idle()
app
# Run analysis
$click("run_analysis")
app$wait_for_idle(timeout = 15000)
app
# Verify results
expect_true(app$get_value(output = "analysis_results") != "")
expect_true(app$get_value(output = "correlation_plot") != "")
# Test export functionality
$click("export_results")
app$wait_for_idle()
app
# Clean up
$stop()
app
})
test_that("User permission workflows", {
# Test different user roles
<- c("viewer", "analyst", "admin")
user_roles
for(role in user_roles) {
<- AppDriver$new(app_function, name = paste0("user-role-", role))
app
# Simulate login
$set_inputs(
appusername = paste0("test_", role),
password = "test_password"
)$click("login_button")
app$wait_for_idle()
app
# Test role-specific access
if(role == "viewer") {
# Viewer should see data but not edit controls
expect_true(app$get_value(output = "data_display") != "")
expect_false(app$get_element("#edit_data")$is_visible())
else if(role == "analyst") {
} # Analyst should see analysis tools
expect_true(app$get_element("#analysis_tools")$is_visible())
expect_true(app$get_element("#run_analysis")$is_visible())
else if(role == "admin") {
} # Admin should see user management
expect_true(app$get_element("#user_management")$is_visible())
expect_true(app$get_element("#system_settings")$is_visible())
}
$stop()
app
}
})
test_that("Error handling and recovery", {
<- AppDriver$new(app_function, name = "error-handling")
app $wait_for_idle()
app
# Test invalid input handling
$set_inputs(numeric_input = "invalid_number")
app$wait_for_idle()
app
# Should show error message
expect_true(grepl("error", app$get_value(output = "validation_message"), ignore.case = TRUE))
# Test recovery from error
$set_inputs(numeric_input = 42)
app$wait_for_idle()
app
# Error should be cleared
expect_false(grepl("error", app$get_value(output = "validation_message"), ignore.case = TRUE))
# Test server error simulation
$click("trigger_server_error")
app$wait_for_idle()
app
# Should handle gracefully
expect_true(app$get_element("#error_notification")$is_visible())
$stop()
app
})
}
# Visual regression testing
<- function(app_function) {
test_visual_regression
test_that("Visual appearance remains consistent", {
<- AppDriver$new(app_function, name = "visual-regression")
app $wait_for_idle()
app
# Take screenshots of key pages
<- list(
pages_to_test "home" = list(tab = NULL, inputs = list()),
"data_view" = list(tab = "data_tab", inputs = list(dataset = "mtcars")),
"analysis" = list(tab = "analysis_tab", inputs = list(x_var = "mpg", y_var = "hp")),
"settings" = list(tab = "settings_tab", inputs = list())
)
for(page_name in names(pages_to_test)) {
<- pages_to_test[[page_name]]
page_config
# Navigate to page
if(!is.null(page_config$tab)) {
$click(page_config$tab)
app$wait_for_idle()
app
}
# Set inputs
if(length(page_config$inputs) > 0) {
$set_inputs(!!!page_config$inputs)
app$wait_for_idle()
app
}
# Take screenshot
$expect_screenshot(
appname = page_name,
screenshot_args = list(
selector = "body",
delay = 1
)
)
}
$stop()
app
})
}
# Performance testing in browser
<- function(app_function) {
test_browser_performance
test_that("Browser performance meets standards", {
<- AppDriver$new(app_function, name = "browser-performance")
app
# Test initial load time
<- Sys.time()
start_time $wait_for_idle(timeout = 10000)
app<- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
load_time
expect_lt(load_time, 5, info = "Initial load should be under 5 seconds")
# Test interaction responsiveness
<- c()
interaction_times
for(i in 1:5) {
<- Sys.time()
start_time $set_inputs(test_slider = runif(1, 1, 100))
app$wait_for_idle()
app<- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
interaction_time <- c(interaction_times, interaction_time)
interaction_times
}
<- mean(interaction_times)
avg_interaction_time expect_lt(avg_interaction_time, 2, info = "Average interaction time should be under 2 seconds")
# Test memory usage (if available)
if(app$get_js("window.performance.memory") != "undefined") {
<- app$get_js("window.performance.memory.usedJSHeapSize")
initial_memory
# Perform memory-intensive operations
for(i in 1:10) {
$set_inputs(dataset = sample(c("mtcars", "iris", "airquality"), 1))
app$wait_for_idle()
app
}
<- app$get_js("window.performance.memory.usedJSHeapSize")
final_memory <- final_memory - initial_memory
memory_growth
expect_lt(memory_growth, 50000000, info = "Memory growth should be reasonable")
}
$stop()
app
})
}
return(list(
test_workflows = test_complete_workflows,
test_visual = test_visual_regression,
test_performance = test_browser_performance
)) }
Automated Testing and CI/CD Integration
Continuous Integration Setup
# Automated testing workflow for CI/CD
<- function() {
create_ci_testing_workflow
# GitHub Actions workflow configuration
<- '
github_actions_config name: Shiny App Testing
on:
push:
branches: [ main, develop ]
pull_request:
branches: [ main ]
jobs:
test:
runs-on: ubuntu-latest
strategy:
matrix:
r-version: [4.2, 4.3]
steps:
- uses: actions/checkout@v3
- name: Set up R ${{ matrix.r-version }}
uses: r-lib/actions/setup-r@v2
with:
r-version: ${{ matrix.r-version }}
- name: Install system dependencies
run: |
sudo apt-get update
sudo apt-get install -y libcurl4-openssl-dev libssl-dev libxml2-dev
- name: Install R dependencies
run: |
install.packages(c("shiny", "testthat", "shinytest2", "profvis"))
install.packages("remotes")
remotes::install_deps(dependencies = TRUE)
shell: Rscript {0}
- name: Run unit tests
run: |
testthat::test_dir("tests/testthat")
shell: Rscript {0}
- name: Run integration tests
run: |
source("tests/run_integration_tests.R")
shell: Rscript {0}
- name: Run performance tests
run: |
source("tests/run_performance_tests.R")
shell: Rscript {0}
- name: Upload test results
uses: actions/upload-artifact@v3
if: failure()
with:
name: test-results
path: |
tests/results/
*.log
'
# Create test runner scripts
<- function(app_path) {
create_test_runners
# Integration test runner
<- '
integration_runner # Integration Test Runner
library(testthat)
library(shinytest2)
cat("Running integration tests...\\n")
# Set up test environment
Sys.setenv("SHINY_TEST_MODE" = "true")
# Run integration tests
test_results <- test_dir(
"tests/integration",
reporter = "summary",
env = parent.frame()
)
# Check results
if(any(test_results$failed > 0)) {
cat("Integration tests failed!\\n")
quit(status = 1)
} else {
cat("All integration tests passed!\\n")
}
'
writeLines(integration_runner, file.path(app_path, "tests", "run_integration_tests.R"))
# Performance test runner
<- '
performance_runner # Performance Test Runner
library(profvis)
library(microbenchmark)
cat("Running performance tests...\\n")
# Source app
source("app.R")
# Run performance benchmarks
source("tests/performance/benchmark_reactives.R")
source("tests/performance/memory_usage_test.R")
cat("Performance tests completed!\\n")
'
writeLines(performance_runner, file.path(app_path, "tests", "run_performance_tests.R"))
}
  # Test reporting and notifications
  setup_test_reporting <- function() {

    # Test result aggregator
    aggregate_test_results <- function(test_dirs) {

      all_results <- list()

      for(dir in test_dirs) {
        if(dir.exists(dir)) {
          # Run tests and capture results
          results <- testthat::test_dir(dir, reporter = "silent")

          all_results[[basename(dir)]] <- list(
            total = length(results),
            passed = sum(sapply(results, function(x) x$passed)),
            failed = sum(sapply(results, function(x) x$failed)),
            warnings = sum(sapply(results, function(x) x$warning)),
            skipped = sum(sapply(results, function(x) x$skipped))
          )
        }
      }

      return(all_results)
    }
    # Generate test report
    generate_test_report <- function(results, output_file = "test_report.html") {

      # Create HTML report (paste0() builds the string; "+" is not string
      # concatenation in R)
      html_content <- paste0('
<!DOCTYPE html>
<html>
<head>
<title>Shiny App Test Report</title>
<style>
  body { font-family: Arial, sans-serif; margin: 20px; }
  .passed { color: green; }
  .failed { color: red; }
  .warning { color: orange; }
  table { border-collapse: collapse; width: 100%; }
  th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
  th { background-color: #f2f2f2; }
</style>
</head>
<body>
<h1>Shiny Application Test Report</h1>
<p>Generated: ', format(Sys.time()), '</p>

<h2>Test Summary</h2>
<table>
  <tr>
    <th>Test Suite</th>
    <th>Total</th>
    <th>Passed</th>
    <th>Failed</th>
    <th>Warnings</th>
    <th>Skipped</th>
  </tr>')
      for(suite_name in names(results)) {
        suite <- results[[suite_name]]

        html_content <- paste0(html_content, '
  <tr>
    <td>', suite_name, '</td>
    <td>', suite$total, '</td>
    <td class="passed">', suite$passed, '</td>
    <td class="failed">', suite$failed, '</td>
    <td class="warning">', suite$warnings, '</td>
    <td>', suite$skipped, '</td>
  </tr>')
      }

      html_content <- paste0(html_content, '
</table>
</body>
</html>')

      writeLines(html_content, output_file)
      cat("Test report generated:", output_file, "\n")
    }

    return(list(
      aggregate_results = aggregate_test_results,
      generate_report = generate_test_report
    ))
  }

  return(list(
    github_config = github_actions_config,
    create_runners = create_test_runners,
    reporting = setup_test_reporting()
  ))
}
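One possible way to wire these helpers into a project is sketched below. The directory layout and file names are assumptions, not requirements of the generated workflow.

```r
ci <- create_ci_testing_workflow()

# Write the GitHub Actions workflow to its conventional location
dir.create(".github/workflows", recursive = TRUE, showWarnings = FALSE)
writeLines(ci$github_config, ".github/workflows/shiny-tests.yaml")

# Generate the integration and performance runner scripts under tests/
dir.create("tests", showWarnings = FALSE)
ci$create_runners(".")

# Aggregate local results and build the HTML report
results <- ci$reporting$aggregate_results(c("tests/testthat", "tests/integration"))
ci$reporting$generate_report(results, "test_report.html")
```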
Common Issues and Solutions
Issue 1: Reactive Testing Challenges
Problem: Testing reactive expressions and complex reactive dependencies is difficult due to asynchronous execution and context requirements.
Solution:
Implement specialized reactive testing utilities:
# Advanced reactive testing framework
reactive_testing_framework <- function() {

  # Mock reactive context for testing
  create_mock_reactive_context <- function() {

    MockReactiveContext <- R6::R6Class(
      "MockReactiveContext",
      public = list(
        values = NULL,
        invalidated = FALSE,

        initialize = function() {
          self$values <- reactiveValues()
        },

        set_value = function(name, value) {
          self$values[[name]] <- value
        },

        get_value = function(name) {
          self$values[[name]]
        },

        invalidate = function() {
          self$invalidated <- TRUE
        },

        is_invalidated = function() {
          self$invalidated
        }
      )
    )

    return(MockReactiveContext$new())
  }

  # Test reactive expressions
  test_reactive_expression <- function(reactive_expr, inputs, expected_outputs) {

    # Create mock session
    session <- MockShinySession$new()

    # Set up inputs
    for(input_name in names(inputs)) {
      session$setInputs(!!input_name := inputs[[input_name]])
    }

    # Execute reactive in mock context
    result <- withMockSession(session, {
      reactive_expr()
    })

    # Verify outputs
    for(output_name in names(expected_outputs)) {
      expect_equal(
        result[[output_name]],
        expected_outputs[[output_name]],
        info = paste("Output", output_name, "does not match expected value")
      )
    }
  }

  # Test reactive dependencies
  test_reactive_dependencies <- function(reactive_expr, dependency_inputs) {

    session <- MockShinySession$new()
    invalidation_count <- 0

    # Create reactive with invalidation tracking
    tracked_reactive <- reactive({

      # Track invalidations
      onInvalidate(function() {
        invalidation_count <<- invalidation_count + 1
      })

      reactive_expr()
    })

    # Test each dependency
    for(input_name in dependency_inputs) {
      initial_count <- invalidation_count

      # Change input value
      session$setInputs(!!input_name := runif(1))
      session$flushReact()

      # Verify invalidation occurred
      expect_gt(
        invalidation_count,
        initial_count,
        info = paste("Reactive should invalidate when", input_name, "changes")
      )
    }
  }

  return(list(
    create_context = create_mock_reactive_context,
    test_expression = test_reactive_expression,
    test_dependencies = test_reactive_dependencies
  ))
}
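For many reactive-logic tests you can also lean on Shiny's built-in testServer(), which supplies the mock session and reactive context for you. The sketch below uses a made-up server function purely for illustration.

```r
library(shiny)
library(testthat)

# Illustrative server function - not from the tutorial app
server <- function(input, output, session) {
  doubled <- reactive(input$x * 2)
  output$result <- renderText(doubled())
}

test_that("doubled reactive tracks its input", {
  testServer(server, {
    session$setInputs(x = 5)
    expect_equal(doubled(), 10)

    session$setInputs(x = 21)
    expect_equal(doubled(), 42)
  })
})
```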
Issue 2: Asynchronous Testing Complexity
Problem: Testing applications with asynchronous operations, file uploads, and external API calls requires special handling.
Solution:
Implement asynchronous testing patterns:
# Asynchronous testing utilities
async_testing_utilities <- function() {

  # Test file upload operations
  test_file_upload <- function(app_driver, file_input_id, test_file_path) {

    test_that("File upload works correctly", {

      # Upload file
      app_driver$upload_file(file_input_id, test_file_path)

      # Wait for processing
      app_driver$wait_for_idle(timeout = 30000)

      # Verify upload success
      expect_true(
        app_driver$get_value(output = "upload_status") == "success",
        info = "File upload should succeed"
      )

      # Verify file processing
      uploaded_data <- app_driver$get_value(output = "uploaded_data")
      expect_true(
        !is.null(uploaded_data) && nrow(uploaded_data) > 0,
        info = "Uploaded file should be processed and contain data"
      )
    })
  }

  # Test API integration
  test_api_integration <- function(app_function, mock_responses) {

    test_that("API integration works with various responses", {

      # Mock HTTP requests
      with_mock(
        `httr::GET` = function(url, ...) {
          # Return appropriate mock response based on URL
          for(pattern in names(mock_responses)) {
            if(grepl(pattern, url)) {
              return(mock_responses[[pattern]])
            }
          }
          # Default error response
          return(list(status_code = 404))
        },
        {
          app <- AppDriver$new(app_function)
          app$wait_for_idle()

          # Test successful API call
          app$click("fetch_data_button")
          app$wait_for_idle(timeout = 10000)

          expect_true(
            app$get_value(output = "api_status") == "success",
            info = "API call should succeed with mocked response"
          )

          app$stop()
        }
      )
    })
  }

  # Test long-running computations
  test_long_running_computation <- function(app_driver, computation_trigger, max_wait = 60) {

    test_that("Long-running computation completes successfully", {

      # Start computation
      app_driver$click(computation_trigger)

      # Monitor progress
      start_time <- Sys.time()
      completed <- FALSE

      while(!completed && difftime(Sys.time(), start_time, units = "secs") < max_wait) {
        Sys.sleep(1)
        app_driver$wait_for_idle(timeout = 1000)

        # Check if computation completed
        status <- app_driver$get_value(output = "computation_status")
        if(!is.null(status) && status %in% c("completed", "error")) {
          completed <- TRUE
        }
      }

      expect_true(completed, info = "Computation should complete within time limit")

      final_status <- app_driver$get_value(output = "computation_status")
      expect_equal(final_status, "completed", info = "Computation should complete successfully")
    })
  }

  return(list(
    test_file_upload = test_file_upload,
    test_api_integration = test_api_integration,
    test_long_computation = test_long_running_computation
  ))
}
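One way these helpers might be called from a test file is sketched below; the app path, input/output IDs, and fixture file are placeholders.

```r
library(shinytest2)

async_tests <- async_testing_utilities()

app <- AppDriver$new("path/to/app")                          # placeholder path
async_tests$test_file_upload(app, "data_file", "tests/fixtures/sample.csv")
async_tests$test_long_computation(app, "run_model", max_wait = 120)
app$stop()
```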
Common Questions About Shiny Testing and Debugging
How much testing does my Shiny application actually need?
Your testing strategy should match your application’s complexity and criticality:
Simple Analytical Tools: Focus on unit tests for data processing functions and basic integration tests for reactive workflows. Use manual testing for UI interactions.
Business Dashboards: Implement comprehensive unit tests, integration tests for all user workflows, and automated visual regression testing. Include performance testing for data loading scenarios.
Enterprise Applications: Require full testing pyramid with extensive unit tests, integration tests for all modules, end-to-end tests for complete user journeys, performance testing under load, and security testing for authentication and authorization.
Public-Facing Applications: Need all enterprise-level testing plus accessibility testing, cross-browser compatibility testing, and stress testing for high concurrent user loads.
The key is starting with unit tests for critical functions, then adding integration and end-to-end tests as your application grows in complexity and importance.
How do I debug reactive loops that keep re-executing?
Reactive loops typically occur when reactive expressions have circular dependencies. Here’s a systematic debugging approach:
Enable Reactive Logging: Use options(shiny.reactlog = TRUE) and reactlog::reactlogShow() to visualize the reactive graph and identify circular dependencies.
Trace Execution Order: Add debug prints with timestamps to reactive expressions to understand the execution sequence and identify where loops occur.
Use isolate() Strategically: Break circular dependencies by isolating certain reactive reads that shouldn’t trigger invalidation chains.
Implement Conditional Logic: Use req() and conditional statements to prevent reactive execution under certain conditions that might cause loops.
Separate Read and Write Operations: Ensure that reactive expressions that read values don’t also modify the same reactive values they depend on.
The reactive debugger tools in this tutorial provide systematic ways to trace these issues and implement solutions.
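As a concrete illustration of the logging and isolate() steps, the sketch below enables reactive logging and breaks a self-triggering observer; the app code is invented for illustration only.

```r
library(shiny)

options(shiny.reactlog = TRUE)   # enable logging before the app starts

server <- function(input, output, session) {
  counter <- reactiveVal(0)

  # Without isolate(), this observer would depend on counter() and
  # re-run every time it updates counter(), creating a reactive loop.
  observe({
    input$refresh                        # intended dependency
    counter(isolate(counter()) + 1)      # isolate() breaks the circular read
  })

  output$n_refreshes <- renderText(counter())
}

# After interacting with the running app, inspect the graph with:
# shiny::reactlogShow()
```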
What performance benchmarks should my application meet?
Performance targets depend on your user expectations and application complexity:
Response Time Benchmarks:
- Initial page load: < 3 seconds for simple apps, < 5 seconds for complex dashboards
- User interactions: < 1 second for simple updates, < 3 seconds for complex computations
- Data refresh operations: < 5 seconds for typical datasets, < 15 seconds for complex analyses
Resource Usage Benchmarks:
- Memory usage: < 100MB per user session for typical applications
- Memory growth: < 10MB per hour of continuous use
- CPU usage: < 50% during normal operations, < 90% during peak computations
Scalability Benchmarks:
- Concurrent users: 10-20 users per CPU core for typical applications
- Database connections: < 5 connections per user session
- Network bandwidth: < 1MB/minute per user for typical data applications
Monitor these metrics using the profiling tools provided and optimize when performance falls below targets.
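To check an application against these targets, a quick profiling pass with profvis is often enough. A minimal sketch follows; the app file name is a placeholder.

```r
library(profvis)
library(shiny)

# Profile the app while you exercise it in the browser;
# stopping the app opens the interactive flame graph.
profvis({
  runApp("app.R")
})
```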
How do I test applications that depend on databases, APIs, or other external systems?
External dependencies require special testing strategies to ensure reliability and maintainability:
Database Dependencies: Use test databases with known data sets, implement database mocking for unit tests, and use transaction rollbacks to ensure test isolation.
API Dependencies: Mock external API calls with predictable responses, test error handling with various failure scenarios, and implement offline testing capabilities.
File System Dependencies: Use temporary directories for test files, mock file operations in unit tests, and ensure proper cleanup after tests complete.
Authentication Systems: Mock authentication responses for different user types, test permission boundaries thoroughly, and implement test user accounts for integration testing.
Best Practices: Use dependency injection to make external dependencies mockable, implement circuit breaker patterns for resilience, and maintain separate test configurations that don’t affect production systems.
The testing framework provided includes patterns for mocking these dependencies effectively.
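For example, the file-system isolation idea can be as simple as writing to a temporary directory that is removed when the test finishes; the file names here are illustrative.

```r
library(testthat)

test_that("export writes a CSV that can be read back", {
  tmp_dir <- tempfile("shiny-test-")
  dir.create(tmp_dir)
  on.exit(unlink(tmp_dir, recursive = TRUE), add = TRUE)   # clean up afterwards

  out_file <- file.path(tmp_dir, "export.csv")
  write.csv(mtcars, out_file, row.names = FALSE)

  expect_true(file.exists(out_file))
  expect_equal(nrow(read.csv(out_file)), nrow(mtcars))
})
```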
Test Your Understanding
You’re building a Shiny application for financial analysis that processes sensitive data, serves 50+ concurrent users, and integrates with multiple external APIs. Which testing approach provides the most appropriate coverage?
- A) Unit tests for calculations + manual UI testing
- B) Integration tests for workflows + basic performance testing
- C) Complete testing pyramid: unit + integration + E2E + performance + security testing
- D) End-to-end tests only with comprehensive user scenarios
Hints:
- Consider the application’s criticality (financial data)
- Think about the scale requirements (50+ concurrent users)
- Remember the complexity (multiple external APIs)
- Consider the regulatory and security requirements
Correct Answer: C) Complete testing pyramid: unit + integration + E2E + performance + security testing
Financial applications with high user loads require comprehensive testing:
Unit Tests: Essential for financial calculations where accuracy is critical and errors could have serious consequences.
Integration Tests: Necessary to verify API integrations work correctly and data flows properly between components.
End-to-End Tests: Required to validate complete user workflows and ensure the application works as users expect.
Performance Tests: Mandatory for 50+ concurrent users to ensure the application remains responsive under load.
Security Tests: Critical for financial applications to protect sensitive data and ensure compliance with financial regulations.
This comprehensive approach provides the confidence needed for production financial applications.
Complete this reactive debugging function to trace execution order and identify performance bottlenecks:
debug_reactive_chain <- function(reactive_expr, label) {

  reactive({
    # Log start time
    start_time <- ______

    # Log execution start
    cat("Starting", label, "at", format(start_time), "\n")

    # Set up invalidation monitoring
    onInvalidate(function() {
      cat("Reactive", label, "______ at", format(Sys.time()), "\n")
    })

    # Execute reactive expression
    result <- ______

    # Log completion time
    end_time <- Sys.time()
    execution_time <- difftime(______, ______, units = "secs")

    cat("Completed", label, "in", round(execution_time, 3), "seconds\n")

    return(result)
  })
}
- What function gets the current system time?
- What word describes what happens when a reactive becomes invalid?
- How do you call the original reactive expression?
- What parameters does difftime() need to calculate execution time?
debug_reactive_chain <- function(reactive_expr, label) {

  reactive({
    # Log start time
    start_time <- Sys.time()

    # Log execution start
    cat("Starting", label, "at", format(start_time), "\n")

    # Set up invalidation monitoring
    onInvalidate(function() {
      cat("Reactive", label, "invalidated at", format(Sys.time()), "\n")
    })

    # Execute reactive expression
    result <- reactive_expr()

    # Log completion time
    end_time <- Sys.time()
    execution_time <- difftime(end_time, start_time, units = "secs")

    cat("Completed", label, "in", round(execution_time, 3), "seconds\n")

    return(result)
  })
}
Key concepts:
- Sys.time() captures current timestamp for performance measurement
- “invalidated” describes when a reactive becomes invalid and needs re-execution
- reactive_expr() calls the original reactive expression with parentheses
- difftime(end_time, start_time, units = "secs") calculates execution duration
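You can see the same timing pattern outside of a reactive context with a quick console experiment:

```r
start_time <- Sys.time()
Sys.sleep(0.25)                                  # stand-in for real work
end_time <- Sys.time()
difftime(end_time, start_time, units = "secs")   # roughly 0.25 seconds
```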
You notice your Shiny application becomes slow when processing large datasets. Design a systematic approach to identify and resolve the performance bottleneck. What steps should you take in order?
- A) 1. Profile with profvis 2. Check database queries 3. Optimize algorithms 4. Test load performance
- B) 1. Add more server resources 2. Optimize all code 3. Test performance 4. Profile execution
- C) 1. Enable reactive logging 2. Profile with profvis 3. Identify bottlenecks 4. Optimize specific issues 5. Verify improvements
- D) 1. Rewrite application 2. Use faster frameworks 3. Add caching 4. Test again
Hints:
- Consider the systematic approach to performance debugging
- Think about measuring before optimizing
- Remember to verify that optimizations actually help
- Consider the reactive-specific debugging tools available
Correct Answer: C) 1. Enable reactive logging 2. Profile with profvis 3. Identify bottlenecks 4. Optimize specific issues 5. Verify improvements
This systematic approach follows performance debugging best practices:
Enable Reactive Logging: First step is to capture detailed information about reactive execution patterns and dependencies.
Profile with profvis: Get comprehensive performance data showing where time is actually spent in the application.
Identify Bottlenecks: Analyze profiling data to pinpoint specific functions, reactive expressions, or operations causing slowdowns.
Optimize Specific Issues: Target optimizations to the actual bottlenecks rather than guessing or optimizing everything.
Verify Improvements: Measure performance again to confirm optimizations actually improved performance and didn’t introduce new issues.
This methodical approach ensures you solve the real performance problems rather than wasting time on premature optimization.
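For the final verification step, comparing the old and new implementations directly keeps the optimization honest. A sketch using microbenchmark is shown below; the two functions are stand-ins for your code before and after optimization.

```r
library(microbenchmark)

# Stand-in implementations: replace with your pre- and post-optimization code
old_summary <- function() aggregate(mpg ~ cyl, data = mtcars, FUN = mean)
new_summary <- function() tapply(mtcars$mpg, mtcars$cyl, mean)

# Compare median timings over repeated runs
microbenchmark(
  old = old_summary(),
  new = new_summary(),
  times = 50
)
```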
Conclusion
Mastering testing and debugging for Shiny applications transforms your development process from reactive problem-solving to proactive quality assurance. The comprehensive testing strategies and advanced debugging techniques you’ve learned provide the foundation for building reliable, maintainable applications that perform well under real-world conditions and scale to meet growing user demands.
The multi-layered testing approach you’ve implemented—from unit tests for individual functions to end-to-end tests for complete user workflows—ensures that your applications work correctly at every level. The reactive debugging tools and performance profiling techniques enable you to identify and resolve complex issues that are unique to Shiny’s reactive programming model.
These testing and debugging skills are essential for any application that needs to be reliable in production environments. Whether you’re building departmental tools or enterprise-grade platforms, the professional development practices you’ve mastered ensure your applications meet the rigorous standards expected in production while enabling rapid iteration and continuous improvement.
Next Steps
Based on your comprehensive testing and debugging knowledge, here are recommended paths for implementing these practices and advancing your Shiny development skills:
Immediate Implementation Steps (Complete These First)
- Code Organization and Project Structure - Implement proper project structure to support comprehensive testing workflows
- Version Control with Git - Set up version control systems that integrate with your testing and CI/CD workflows
- Practice Exercise: Implement the complete testing framework in an existing Shiny project, including unit tests, integration tests, and performance monitoring
Advanced Testing and Quality Assurance (Choose Your Focus)
For Production Deployment:
- Production Deployment and Monitoring
- Scaling and Long-term Maintenance

For Enterprise Development:
- Security Best Practices
- Documentation and Maintenance

For Continuous Improvement:
- Documentation and Maintenance
- Accessibility and Performance
Long-term Development Excellence (2-4 Weeks)
- Implement comprehensive testing pipelines with automated CI/CD workflows
- Establish performance monitoring and alerting systems for production applications
- Create testing standards and documentation for your development team
- Build automated testing tools and utilities specific to your application domain
Citation
@online{kassambara2025,
author = {Kassambara, Alboukadel},
title = {Testing and {Debugging} {Shiny} {Applications:} {Complete}
{Guide} to {Reliable} {Development}},
date = {2025-05-23},
url = {https://www.datanovia.com/learn/tools/shiny-apps/interactive-features/testing-debugging.html},
langid = {en}
}