Database Connectivity and Data Persistence: Build Data-Driven Applications

Master Professional Database Integration for Scalable Shiny Applications

Learn to connect Shiny applications to databases with secure, scalable patterns including connection pooling, CRUD operations, transaction management, and data persistence strategies. Master PostgreSQL, MySQL, and other database integrations for enterprise applications.

Published

May 23, 2025

Modified

June 19, 2025

Keywords

shiny database connection, shiny SQL integration, data persistence shiny, shiny PostgreSQL MySQL, database pooling shiny, CRUD operations shiny

Key Takeaways

Tip
  • Enterprise Data Architecture: Database integration transforms Shiny applications from static analysis tools into dynamic, data-driven platforms that serve entire organizations
  • Scalable Performance: Connection pooling and optimized query patterns enable applications that handle thousands of concurrent users while maintaining responsive performance
  • Data Security Excellence: Proper authentication, parameterized queries, and encryption ensure enterprise-grade security that meets compliance requirements and protects sensitive information
  • Persistent State Management: Database-backed applications maintain user preferences, session data, and application state across sessions, creating professional user experiences
  • Real-Time Data Integration: Live database connections enable applications that reflect current business conditions and support collaborative workflows with immediate data synchronization

Introduction

Database connectivity represents the evolution from standalone analytical applications to enterprise-grade systems that integrate seamlessly with organizational data infrastructure. While file-based Shiny applications work well for individual analysis, professional applications require persistent data storage, user management, audit trails, and real-time synchronization with business systems that only database integration can provide.



This comprehensive guide covers the complete spectrum of database integration, from basic connection patterns to sophisticated enterprise architectures with connection pooling, transaction management, and security implementations. You’ll master the techniques that enable Shiny applications to serve as front-ends for complex business systems while retaining the analytical power and development efficiency that make Shiny well suited to data-driven applications.

The database integration patterns you’ll learn bridge the gap between analytical prototypes and production business systems, enabling applications that scale from departmental tools to organization-wide platforms while providing the data persistence, security, and performance characteristics required for mission-critical business applications.

Understanding Database Integration Architecture

Database-connected Shiny applications create a multi-tier architecture where the application server manages user interactions while database systems handle data persistence, concurrent access control, and business logic enforcement.

flowchart TD
    A["Shiny Application Server"] --> B["Connection Pool Manager"]
    B --> C["Database Connections"]
    
    C --> D["PostgreSQL"]
    C --> E["MySQL/MariaDB"]
    C --> F["SQL Server"]
    C --> G["SQLite"]
    
    H["Application Layers"] --> I["Presentation Layer (UI)"]
    H --> J["Business Logic Layer"]
    H --> K["Data Access Layer"]
    H --> L["Database Layer"]
    
    M["Security Components"] --> N["Authentication"]
    M --> O["Authorization"]
    M --> P["Encryption"]
    M --> Q["Audit Logging"]
    
    R["Performance Features"] --> S["Connection Pooling"]
    R --> T["Query Optimization"]
    R --> U["Caching Strategies"]
    R --> V["Transaction Management"]
    
    style A fill:#e1f5fe
    style H fill:#f3e5f5
    style M fill:#e8f5e8
    style R fill:#fff3e0

Core Database Integration Concepts

Connection Management: Efficient database connections through pooling and lifecycle management that support concurrent users while minimizing resource usage.

Data Access Patterns: Structured approaches to database operations that ensure data integrity, performance, and maintainability across complex applications.

Security Implementation: Comprehensive security measures including authentication, authorization, SQL injection prevention, and data encryption.

Transaction Control: Proper transaction management that ensures data consistency and enables complex business operations with rollback capabilities.
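
The sketch below shows these four concepts in miniature, using an in-memory SQLite database so it runs without any server setup; the rest of this guide applies the same ideas to PostgreSQL with pooled connections.

# Minimal, self-contained sketch of the core concepts (illustrative only)
library(DBI)
library(RSQLite)

# Connection management: open a connection (later sections use pooled connections)
con <- dbConnect(RSQLite::SQLite(), ":memory:")

dbExecute(con, "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

# Data access + security: parameterized queries keep user input out of SQL text
dbExecute(con, "INSERT INTO users (name) VALUES (?)", params = list("alice"))

# Transaction control: both statements succeed together or neither is applied
dbWithTransaction(con, {
  dbExecute(con, "INSERT INTO users (name) VALUES (?)", params = list("bob"))
  dbExecute(con, "UPDATE users SET name = ? WHERE name = ?", params = list("alice_v2", "alice"))
})

dbGetQuery(con, "SELECT * FROM users")
dbDisconnect(con)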

Foundation Database Connections

Basic Database Connection Patterns

Start with fundamental connection patterns that demonstrate secure, efficient database integration:

library(shiny)
library(DBI)
library(RPostgres)    # PostgreSQL driver with parameterized query support (or RMySQL, RSQLite, etc.)
library(pool)
library(dplyr)
library(dbplyr)

# Database configuration management
create_database_config <- function() {
  
  # Environment-based configuration
  config <- list(
    
    # Development configuration
    development = list(
      driver = "PostgreSQL",
      host = Sys.getenv("DB_HOST", "localhost"),
      port = as.integer(Sys.getenv("DB_PORT", "5432")),
      dbname = Sys.getenv("DB_NAME", "shiny_dev"),
      user = Sys.getenv("DB_USER", "shiny_user"),
      password = Sys.getenv("DB_PASSWORD", ""),
      
      # Pool configuration
      minSize = 1,
      maxSize = 5,
      idleTimeout = 60000,
      validationQuery = "SELECT 1"
    ),
    
    # Production configuration
    production = list(
      driver = "PostgreSQL", 
      host = Sys.getenv("PROD_DB_HOST"),
      port = as.integer(Sys.getenv("PROD_DB_PORT", "5432")),
      dbname = Sys.getenv("PROD_DB_NAME"),
      user = Sys.getenv("PROD_DB_USER"),
      password = Sys.getenv("PROD_DB_PASSWORD"),
      
      # Production pool settings
      minSize = 2,
      maxSize = 20,
      idleTimeout = 300000,
      validationQuery = "SELECT 1",
      
      # SSL configuration for production
      sslmode = "require"
    )
  )
  
  # Get environment
  env <- Sys.getenv("SHINY_ENV", "development")
  
  if(!env %in% names(config)) {
    stop("Unknown environment: ", env)
  }
  
  return(config[[env]])
}

# Secure connection pool creation
create_connection_pool <- function(config = NULL) {
  
  if(is.null(config)) {
    config <- create_database_config()
  }
  
  # Validate required configuration
  required_fields <- c("host", "dbname", "user", "password")
  
  missing_fields <- required_fields[!required_fields %in% names(config)]
  
  if(length(missing_fields) > 0) {
    stop("Missing required database configuration: ", paste(missing_fields, collapse = ", "))
  }
  
  tryCatch({
    
    # Create connection pool
    pool <- pool::dbPool(
      drv = RPostgres::Postgres(),
      host = config$host,
      port = config$port %||% 5432,
      dbname = config$dbname,
      user = config$user,
      password = config$password,
      
      # Pool configuration
      minSize = config$minSize %||% 1,
      maxSize = config$maxSize %||% 5,
      idleTimeout = config$idleTimeout %||% 60000,
      validationQuery = config$validationQuery %||% "SELECT 1"
    )
    
    # Test connection
    test_query <- "SELECT current_timestamp as server_time, version() as server_version"
    test_result <- pool::dbGetQuery(pool, test_query)
    
    cat("Database connection established successfully\n")
    cat("Server time:", as.character(test_result$server_time), "\n")
    cat("Server version:", substr(test_result$server_version, 1, 50), "...\n")
    
    return(pool)
    
  }, error = function(e) {
    
    cat("Database connection failed:", e$message, "\n")
    
    # Provide helpful debugging information
    cat("Connection details:\n")
    cat("  Host:", config$host, "\n")
    cat("  Port:", config$port %||% 5432, "\n")
    cat("  Database:", config$dbname, "\n")
    cat("  User:", config$user, "\n")
    
    stop("Failed to create database connection pool: ", e$message)
  })
}

# Basic CRUD operations
create_database_operations <- function(pool) {
  
  list(
    
    # Create operations
    insert_user = function(username, email, password_hash, role = "user") {
      
      query <- "
        INSERT INTO users (username, email, password_hash, role, created_at)
        VALUES ($1, $2, $3, $4, CURRENT_TIMESTAMP)
        RETURNING user_id, username, email, role, created_at
      "
      
      tryCatch({
        
        result <- pool::dbGetQuery(
          pool, query, 
          params = list(username, email, password_hash, role)
        )
        
        return(list(success = TRUE, data = result))
        
      }, error = function(e) {
        
        return(list(success = FALSE, error = e$message))
      })
    },
    
    # Read operations
    get_user = function(user_id) {
      
      query <- "
        SELECT user_id, username, email, role, created_at, last_login
        FROM users 
        WHERE user_id = $1
      "
      
      result <- pool::dbGetQuery(pool, query, params = list(user_id))
      
      if(nrow(result) == 0) {
        return(NULL)
      }
      
      return(result[1, ])
    },
    
    get_users = function(limit = 50, offset = 0, role_filter = NULL) {
      
      base_query <- "
        SELECT user_id, username, email, role, created_at, last_login
        FROM users
      "
      
      conditions <- c()
      params <- list()
      
      if(!is.null(role_filter)) {
        conditions <- c(conditions, "role = $" %+% (length(params) + 1))
        params <- append(params, role_filter)
      }
      
      if(length(conditions) > 0) {
        base_query <- paste(base_query, "WHERE", paste(conditions, collapse = " AND "))
      }
      
      query <- paste(
        base_query,
        "ORDER BY created_at DESC",
        "LIMIT $" %+% (length(params) + 1),
        "OFFSET $" %+% (length(params) + 2)
      )
      
      params <- append(params, list(limit, offset))
      
      result <- pool::dbGetQuery(pool, query, params = params)
      
      return(result)
    },
    
    # Update operations
    update_user = function(user_id, username = NULL, email = NULL, role = NULL) {
      
      updates <- c()
      params <- list()
      
      if(!is.null(username)) {
        updates <- c(updates, "username = $" %+% (length(params) + 1))
        params <- append(params, username)
      }
      
      if(!is.null(email)) {
        updates <- c(updates, "email = $" %+% (length(params) + 1))
        params <- append(params, email)
      }
      
      if(!is.null(role)) {
        updates <- c(updates, "role = $" %+% (length(params) + 1))
        params <- append(params, role)
      }
      
      if(length(updates) == 0) {
        return(list(success = FALSE, error = "No updates specified"))
      }
      
      query <- paste(
        "UPDATE users SET",
        paste(updates, collapse = ", "),
        ", updated_at = CURRENT_TIMESTAMP",
        "WHERE user_id = $" %+% (length(params) + 1),
        "RETURNING user_id, username, email, role, updated_at"
      )
      
      params <- append(params, user_id)
      
      tryCatch({
        
        result <- pool::dbGetQuery(pool, query, params = params)
        
        if(nrow(result) == 0) {
          return(list(success = FALSE, error = "User not found"))
        }
        
        return(list(success = TRUE, data = result[1, ]))
        
      }, error = function(e) {
        
        return(list(success = FALSE, error = e$message))
      })
    },
    
    # Delete operations
    delete_user = function(user_id) {
      
      query <- "
        DELETE FROM users 
        WHERE user_id = $1
        RETURNING user_id, username
      "
      
      tryCatch({
        
        result <- pool::dbGetQuery(pool, query, params = list(user_id))
        
        if(nrow(result) == 0) {
          return(list(success = FALSE, error = "User not found"))
        }
        
        return(list(success = TRUE, message = paste("User", result$username, "deleted")))
        
      }, error = function(e) {
        
        return(list(success = FALSE, error = e$message))
      })
    },
    
    # Authentication operations
    authenticate_user = function(username, password_hash) {
      
      query <- "
        SELECT user_id, username, email, role, active
        FROM users 
        WHERE username = $1 AND password_hash = $2 AND active = TRUE
      "
      
      result <- pool::dbGetQuery(
        pool, query, 
        params = list(username, password_hash)
      )
      
      if(nrow(result) == 0) {
        return(NULL)
      }
      
      # Update last login
      update_query <- "
        UPDATE users 
        SET last_login = CURRENT_TIMESTAMP 
        WHERE user_id = $1
      "
      
      pool::dbExecute(pool, update_query, params = list(result$user_id))
      
      return(result[1, ])
    },
    
    # Activity logging
    log_activity = function(user_id, action, details = NULL) {
      
      query <- "
        INSERT INTO activity_log (user_id, action, details, timestamp)
        VALUES ($1, $2, $3, CURRENT_TIMESTAMP)
      "
      
      tryCatch({
        
        pool::dbExecute(
          pool, query,
          params = list(user_id, action, details)
        )
        
        return(TRUE)
        
      }, error = function(e) {
        
        cat("Failed to log activity:", e$message, "\n")
        return(FALSE)
      })
    }
  )
}

# Helper functions
`%||%` <- function(x, y) if(is.null(x)) y else x
`%+%` <- function(x, y) paste0(x, y)
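
A minimal usage sketch of the helpers above follows. It assumes the DB_* environment variables point at a reachable PostgreSQL instance and that the users and activity_log tables from the next section already exist; the example values are illustrative.

# Example usage (assumes DB_* environment variables and the schema defined below)
pool <- create_connection_pool()
db_ops <- create_database_operations(pool)

new_user <- db_ops$insert_user(
  username = "jdoe",
  email = "jdoe@example.com",
  password_hash = digest::digest("change-me", algo = "sha256")
)

if(new_user$success) {
  db_ops$log_activity(new_user$data$user_id, "user_created")
}

print(db_ops$get_users(limit = 10))

# Release all pooled connections when the application shuts down
pool::poolClose(pool)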

Database Schema Creation

# Database schema setup for Shiny applications
create_application_schema <- function(pool) {
  
  # Users table
  users_schema <- "
    CREATE TABLE IF NOT EXISTS users (
      user_id SERIAL PRIMARY KEY,
      username VARCHAR(50) UNIQUE NOT NULL,
      email VARCHAR(255) UNIQUE NOT NULL,
      password_hash VARCHAR(255) NOT NULL,
      role VARCHAR(20) DEFAULT 'user' CHECK (role IN ('admin', 'user', 'viewer')),
      active BOOLEAN DEFAULT TRUE,
      created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      last_login TIMESTAMP WITH TIME ZONE
    );
    
    CREATE INDEX IF NOT EXISTS idx_users_username ON users(username);
    CREATE INDEX IF NOT EXISTS idx_users_email ON users(email);
    CREATE INDEX IF NOT EXISTS idx_users_role ON users(role);
  "
  
  # Sessions table for user session management
  sessions_schema <- "
    CREATE TABLE IF NOT EXISTS user_sessions (
      session_id VARCHAR(255) PRIMARY KEY,
      user_id INTEGER REFERENCES users(user_id) ON DELETE CASCADE,
      created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      expires_at TIMESTAMP WITH TIME ZONE NOT NULL,
      ip_address INET,
      user_agent TEXT,
      active BOOLEAN DEFAULT TRUE
    );
    
    CREATE INDEX IF NOT EXISTS idx_sessions_user_id ON user_sessions(user_id);
    CREATE INDEX IF NOT EXISTS idx_sessions_expires_at ON user_sessions(expires_at);
  "
  
  # Activity log table
  activity_log_schema <- "
    CREATE TABLE IF NOT EXISTS activity_log (
      log_id SERIAL PRIMARY KEY,
      user_id INTEGER REFERENCES users(user_id) ON DELETE SET NULL,
      session_id VARCHAR(255),
      action VARCHAR(100) NOT NULL,
      details JSONB,
      timestamp TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      ip_address INET
    );
    
    CREATE INDEX IF NOT EXISTS idx_activity_log_user_id ON activity_log(user_id);
    CREATE INDEX IF NOT EXISTS idx_activity_log_timestamp ON activity_log(timestamp);
    CREATE INDEX IF NOT EXISTS idx_activity_log_action ON activity_log(action);
  "
  
  # Application data table (example)
  app_data_schema <- "
    CREATE TABLE IF NOT EXISTS application_data (
      data_id SERIAL PRIMARY KEY,
      user_id INTEGER REFERENCES users(user_id) ON DELETE CASCADE,
      data_name VARCHAR(255) NOT NULL,
      data_content JSONB NOT NULL,
      data_type VARCHAR(50) DEFAULT 'general',
      is_public BOOLEAN DEFAULT FALSE,
      created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      
      CONSTRAINT unique_user_data_name UNIQUE(user_id, data_name)
    );
    
    CREATE INDEX IF NOT EXISTS idx_app_data_user_id ON application_data(user_id);
    CREATE INDEX IF NOT EXISTS idx_app_data_type ON application_data(data_type);
    CREATE INDEX IF NOT EXISTS idx_app_data_public ON application_data(is_public);
  "
  
  # User preferences table
  preferences_schema <- "
    CREATE TABLE IF NOT EXISTS user_preferences (
      preference_id SERIAL PRIMARY KEY,
      user_id INTEGER REFERENCES users(user_id) ON DELETE CASCADE,
      preference_key VARCHAR(100) NOT NULL,
      preference_value JSONB NOT NULL,
      created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,
      
      CONSTRAINT unique_user_preference UNIQUE(user_id, preference_key)
    );
    
    CREATE INDEX IF NOT EXISTS idx_preferences_user_id ON user_preferences(user_id);
    CREATE INDEX IF NOT EXISTS idx_preferences_key ON user_preferences(preference_key);
  "
  
  # Execute schema creation
  schemas <- list(
    "users" = users_schema,
    "sessions" = sessions_schema,
    "activity_log" = activity_log_schema,
    "application_data" = app_data_schema,
    "preferences" = preferences_schema
  )
  
  tryCatch({
    
    for(schema_name in names(schemas)) {
      
      cat("Creating schema:", schema_name, "\n")
      
      pool::dbExecute(pool, schemas[[schema_name]])
      
      cat("✓ Schema", schema_name, "created successfully\n")
    }
    
    cat("All database schemas created successfully\n")
    return(TRUE)
    
  }, error = function(e) {
    
    cat("Error creating database schema:", e$message, "\n")
    return(FALSE)
  })
}
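
A typical startup sequence creates the pool once, initializes the schema, and registers cleanup for application shutdown; a brief sketch (e.g., at the top of app.R or in global.R) is shown below.

# Application startup: one pool, one schema check, one cleanup handler
db_pool <- create_connection_pool()

if(!create_application_schema(db_pool)) {
  stop("Database schema could not be initialized")
}

# Return all pooled connections when the application stops
shiny::onStop(function() pool::poolClose(db_pool))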

Advanced Database Operations with Transactions

# Transaction management for complex operations
create_transaction_manager <- function(pool) {
  
  # Execute multiple operations inside a single database transaction.
  # Each operation is either a function(conn, results), where `results`
  # holds the values returned by earlier operations, or a list containing
  # `query` and optional `params` elements.
  execute_transaction <- function(operations) {
      
      conn <- pool::poolCheckout(pool)
      
      tryCatch({
        
        # Begin transaction
        DBI::dbBegin(conn)
        
        results <- list()
        
        # Execute each operation
        for(i in seq_along(operations)) {
          
          operation <- operations[[i]]
          
          if(is.function(operation)) {
            # Pass earlier results so later steps can use returned values (e.g., user_id)
            result <- operation(conn, results)
          } else if(is.list(operation) && !is.null(operation$query)) {
            result <- DBI::dbGetQuery(
              conn, 
              operation$query, 
              params = operation$params %||% list()
            )
          } else {
            stop("Invalid operation format")
          }
          
          results[[i]] <- result
        }
        
        # Commit transaction
        DBI::dbCommit(conn)
        
        return(list(success = TRUE, results = results))
        
      }, error = function(e) {
        
        # Rollback on error
        DBI::dbRollback(conn)
        
        return(list(success = FALSE, error = e$message))
        
      }, finally = {
        
        pool::poolReturn(conn)
      })
  }
  
  list(
    
    execute_transaction = execute_transaction,
    
    # Create user with initial data
    create_user_with_data = function(username, email, password_hash, 
                                    initial_data = NULL, preferences = NULL) {
      
      operations <- list(
        
        # Insert user
        function(conn, results) {
          query <- "
            INSERT INTO users (username, email, password_hash)
            VALUES ($1, $2, $3)
            RETURNING user_id
          "
          
          result <- DBI::dbGetQuery(
            conn, query,
            params = list(username, email, password_hash)
          )
          
          return(result$user_id[1])
        },
        
        # Insert initial data if provided
        if(!is.null(initial_data)) {
          function(conn, results) {
            
            user_id <- results[[1]]  # Get user_id from previous operation
            
            query <- "
              INSERT INTO application_data (user_id, data_name, data_content, data_type)
              VALUES ($1, $2, $3, $4)
            "
            
            DBI::dbExecute(
              conn, query,
              params = list(
                user_id,
                initial_data$name,
                jsonlite::toJSON(initial_data$content),
                initial_data$type %||% "initial"
              )
            )
            
            return(TRUE)
          }
        },
        
        # Insert preferences if provided  
        if(!is.null(preferences)) {
          function(conn, results) {
            
            user_id <- results[[1]]
            
            for(pref_key in names(preferences)) {
              
              query <- "
                INSERT INTO user_preferences (user_id, preference_key, preference_value)
                VALUES ($1, $2, $3)
              "
              
              DBI::dbExecute(
                conn, query,
                params = list(
                  user_id,
                  pref_key,
                  jsonlite::toJSON(preferences[[pref_key]])
                )
              )
            }
            
            return(TRUE)
          }
        }
      )
      
      # Remove NULL operations
      operations <- operations[!sapply(operations, is.null)]
      
      return(execute_transaction(operations))
    },
    
    # Bulk data operations
    bulk_insert = function(table_name, data_frame, chunk_size = 1000) {
      
      if(nrow(data_frame) == 0) {
        return(list(success = TRUE, rows_affected = 0))
      }
      
      conn <- pool::poolCheckout(pool)
      
      tryCatch({
        
        DBI::dbBegin(conn)
        
        total_rows <- 0
        
        # Process in chunks
        for(i in seq(1, nrow(data_frame), by = chunk_size)) {
          
          end_idx <- min(i + chunk_size - 1, nrow(data_frame))
          chunk <- data_frame[i:end_idx, ]
          
          # Use dbWriteTable for efficient bulk insert
          DBI::dbWriteTable(
            conn, 
            table_name, 
            chunk, 
            append = TRUE,
            row.names = FALSE
          )
          
          total_rows <- total_rows + nrow(chunk)
          
          cat("Inserted", total_rows, "of", nrow(data_frame), "rows\n")
        }
        
        DBI::dbCommit(conn)
        
        return(list(success = TRUE, rows_affected = total_rows))
        
      }, error = function(e) {
        
        DBI::dbRollback(conn)
        return(list(success = FALSE, error = e$message))
        
      }, finally = {
        
        pool::poolReturn(conn)
      })
    }
  )
}
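
The following sketch shows how the transaction manager might be used; the user details and preference values are illustrative.

# Example: create a user plus initial data and preferences atomically
tx_manager <- create_transaction_manager(db_pool)

result <- tx_manager$create_user_with_data(
  username = "jdoe",
  email = "jdoe@example.com",
  password_hash = digest::digest("change-me", algo = "sha256"),
  initial_data = list(name = "welcome_dataset", content = list(rows = 0), type = "dataset"),
  preferences = list(theme = "dark", page_size = 25)
)

if(!result$success) {
  message("Transaction rolled back: ", result$error)
}

# Bulk-load a data frame (columns must match the target table) in chunks
# tx_manager$bulk_insert("application_data", new_records, chunk_size = 500)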


Production Database Integration Patterns

Complete Database-Driven Application

# Complete Shiny application with database integration
create_database_driven_app <- function() {
  
  # Initialize database connection
  db_config <- create_database_config()
  db_pool <- create_connection_pool(db_config)
  db_ops <- create_database_operations(db_pool)
  tx_manager <- create_transaction_manager(db_pool)
  
  # Initialize database schema
  schema_created <- create_application_schema(db_pool)
  
  if(!schema_created) {
    stop("Failed to create database schema")
  }
  
  ui <- fluidPage(
    
    titlePanel("Database-Driven Shiny Application"),
    
    # Navigation
    navbarPage(
      "Data Management System",
      
      # User Management Tab
      tabPanel("User Management",
        
        fluidRow(
          column(4,
            wellPanel(
              h4("<i class='bi bi-person-plus-fill'></i> Add New User"),
              
              textInput("new_username", "Username:", 
                       placeholder = "Enter username"),
              
              textInput("new_email", "Email:", 
                       placeholder = "user@example.com"),
              
              selectInput("new_role", "Role:",
                         choices = c("User" = "user", 
                                   "Admin" = "admin", 
                                   "Viewer" = "viewer"),
                         selected = "user"),
              
              passwordInput("new_password", "Password:",
                           placeholder = "Enter password"),
              
              br(),
              
              actionButton("create_user", "Create User", 
                          class = "btn-primary", 
                          style = "width: 100%;"),
              
              br(), br(),
              
              div(id = "user_creation_status")
            )
          ),
          
          column(8,
            wellPanel(
              h4("<i class='bi bi-people-fill'></i> User Directory"),
              
              fluidRow(
                column(6,
                  selectInput("role_filter", "Filter by Role:",
                             choices = c("All Roles" = "",
                                       "Admin" = "admin",
                                       "User" = "user", 
                                       "Viewer" = "viewer"),
                             selected = "")
                ),
                
                column(6,
                  div(style = "margin-top: 25px;",
                    actionButton("refresh_users", "Refresh", 
                                class = "btn-info")
                  )
                )
              ),
              
              DT::dataTableOutput("users_table")
            )
          )
        )
      ),
      
      # Data Management Tab
      tabPanel("Data Management",
        
        fluidRow(
          column(6,
            wellPanel(
              h4("<i class='bi bi-database-fill-add'></i> Save Data"),
              
              textInput("data_name", "Data Name:",
                       placeholder = "Enter data name"),
              
              selectInput("data_type", "Data Type:",
                         choices = c("Analysis" = "analysis",
                                   "Report" = "report",
                                   "Dataset" = "dataset",
                                   "Configuration" = "config")),
              
              textAreaInput("data_content", "Data Content (JSON):",
                           rows = 8,
                           placeholder = '{"key": "value", "number": 123}'),
              
              checkboxInput("is_public", "Make Public", FALSE),
              
              br(),
              
              actionButton("save_data", "Save Data", 
                          class = "btn-success",
                          style = "width: 100%;")
            )
          ),
          
          column(6,
            wellPanel(
              h4("<i class='bi bi-database-fill-gear'></i> Saved Data"),
              
              selectInput("data_type_filter", "Filter by Type:",
                         choices = c("All Types" = "",
                                   "Analysis" = "analysis",
                                   "Report" = "report", 
                                   "Dataset" = "dataset",
                                   "Configuration" = "config")),
              
              br(),
              
              DT::dataTableOutput("saved_data_table"),
              
              br(),
              
              fluidRow(
                column(6,
                  actionButton("load_selected_data", "Load Selected", 
                              class = "btn-info")
                ),
                
                column(6,
                  actionButton("delete_selected_data", "Delete Selected", 
                              class = "btn-danger")
                )
              )
            )
          )
        ),
        
        fluidRow(
          column(12,
            wellPanel(
              h4("<i class='bi bi-code-square'></i> Loaded Data Preview"),
              
              div(id = "loaded_data_preview",
                div(class = "alert alert-info",
                    "Select and load data to see preview here.")
              )
            )
          )
        )
      ),
      
      # Analytics Tab
      tabPanel("Analytics",
        
        fluidRow(
          column(4,
            wellPanel(
              h4("<i class='bi bi-graph-up'></i> Analytics Settings"),
              
              selectInput("analytics_data_source", "Data Source:",
                         choices = c()),
              
              selectInput("analysis_type", "Analysis Type:",
                         choices = c("Summary Statistics" = "summary",
                                   "Time Series" = "timeseries",
                                   "Correlation" = "correlation",
                                   "Distribution" = "distribution")),
              
              conditionalPanel(
                condition = "input.analysis_type == 'timeseries'",
                dateRangeInput("date_range", "Date Range:",
                              start = Sys.Date() - 30,
                              end = Sys.Date())
              ),
              
              conditionalPanel(
                condition = "input.analysis_type == 'correlation'",
                numericInput("correlation_threshold", "Correlation Threshold:",
                            value = 0.5, min = 0, max = 1, step = 0.1)
              ),
              
              br(),
              
              actionButton("run_analysis", "Run Analysis", 
                          class = "btn-primary",
                          style = "width: 100%;"),
              
              br(), br(),
              
              checkboxInput("save_analysis_results", "Save Results", FALSE)
            )
          ),
          
          column(8,
            tabsetPanel(
              
              tabPanel("Results",
                verbatimTextOutput("analysis_results")
              ),
              
              tabPanel("Visualization",
                plotOutput("analysis_plot", height = "500px")
              ),
              
              tabPanel("Export",
                wellPanel(
                  h5("Export Options"),
                  
                  fluidRow(
                    column(6,
                      downloadButton("download_results_csv", "Download CSV", 
                                   class = "btn-info")
                    ),
                    
                    column(6,
                      downloadButton("download_plot_png", "Download Plot", 
                                   class = "btn-info")
                    )
                  )
                )
              )
            )
          )
        )
      ),
      
      # Activity Log Tab
      tabPanel("Activity Log",
        
        fluidRow(
          column(12,
            wellPanel(
              h4("<i class='bi bi-journal-text'></i> System Activity Log"),
              
              fluidRow(
                column(3,
                  dateInput("log_start_date", "Start Date:",
                           value = Sys.Date() - 7)
                ),
                
                column(3,
                  dateInput("log_end_date", "End Date:",
                           value = Sys.Date())
                ),
                
                column(3,
                  selectInput("log_action_filter", "Action Filter:",
                             choices = c("All Actions" = "",
                                       "Login" = "login",
                                       "Create" = "create",
                                       "Update" = "update",
                                       "Delete" = "delete",
                                       "Analysis" = "analysis"))
                ),
                
                column(3,
                  div(style = "margin-top: 25px;",
                    actionButton("refresh_log", "Refresh Log", 
                                class = "btn-info")
                  )
                )
              ),
              
              br(),
              
              DT::dataTableOutput("activity_log_table")
            )
          )
        )
      ),
      
      # System Status Tab
      tabPanel("System Status",
        
        fluidRow(
          column(6,
            wellPanel(
              h4("<i class='bi bi-database-check'></i> Database Status"),
              
              verbatimTextOutput("database_status")
            ),
            
            wellPanel(
              h4("<i class='bi bi-speedometer2'></i> Performance Metrics"),
              
              verbatimTextOutput("performance_metrics")
            )
          ),
          
          column(6,
            wellPanel(
              h4("<i class='bi bi-gear-fill'></i> System Configuration"),
              
              verbatimTextOutput("system_config")
            ),
            
            wellPanel(
              h4("<i class='bi bi-tools'></i> Database Tools"),
              
              actionButton("test_connection", "Test Connection", 
                          class = "btn-info"),
              
              br(), br(),
              
              actionButton("cleanup_sessions", "Cleanup Old Sessions", 
                          class = "btn-warning"),
              
              br(), br(),
              
              actionButton("vacuum_database", "Optimize Database", 
                          class = "btn-secondary"),
              
              br(), br(),
              
              div(id = "db_tools_output")
            )
          )
        )
      )
    )
  )
  
  server <- function(input, output, session) {
    
    # Record session start time for duration logging on disconnect
    session_start_time <- Sys.time()
    
    # Reactive values for application state
    app_state <- reactiveValues(
      current_user = NULL,
      saved_data = data.frame(),
      analysis_results = NULL,
      activity_log = data.frame()
    )
    
    # Session initialization
    observe({
      
      # Log session start
      db_ops$log_activity(
        user_id = 1,  # Default user for demo
        action = "session_start",
        details = jsonlite::toJSON(list(
          session_id = session$token,
          hostname = session$clientData$url_hostname
        ))
      )
    })
    
    # User Management Functions
    
    # Create new user
    observeEvent(input$create_user, {
      
      req(input$new_username, input$new_email, input$new_password)
      
      # Simple password hashing (use proper hashing in production)
      password_hash <- digest::digest(input$new_password, algo = "sha256")
      
      result <- db_ops$insert_user(
        username = input$new_username,
        email = input$new_email,
        password_hash = password_hash,
        role = input$new_role
      )
      
      if(result$success) {
        
        showNotification("User created successfully!", type = "success")
        
        # Clear form
        updateTextInput(session, "new_username", value = "")
        updateTextInput(session, "new_email", value = "")
        updateTextInput(session, "new_password", value = "")
        
        # Refresh users table
        refresh_users_table()
        
        # Log activity
        db_ops$log_activity(
          user_id = 1,
          action = "create_user",
          details = jsonlite::toJSON(list(
            created_username = input$new_username,
            created_role = input$new_role
          ))
        )
        
      } else {
        showNotification(paste("Error creating user:", result$error), 
                        type = "error")
      }
    })
    
    # Users table
    output$users_table <- DT::renderDataTable({
      
      users_refresh()  # take a dependency on the refresh trigger to re-query the database
      
      role_filter <- if(input$role_filter == "") NULL else input$role_filter
      
      users_data <- db_ops$get_users(role_filter = role_filter)
      
      if(nrow(users_data) == 0) {
        return(data.frame("No users found" = character(0)))
      }
      
      # Format data for display
      display_data <- users_data %>%
        mutate(
          created_at = format(created_at, "%Y-%m-%d %H:%M"),
          last_login = ifelse(is.na(last_login), "Never", 
                             format(last_login, "%Y-%m-%d %H:%M"))
        ) %>%
        select(user_id, username, email, role, created_at, last_login)
      
      DT::datatable(
        display_data,
        options = list(
          pageLength = 15,
          scrollX = TRUE,
          dom = 'Bfrtip',
          buttons = c('copy', 'csv')
        ),
        extensions = 'Buttons',
        selection = 'single',
        rownames = FALSE,
        colnames = c("ID", "Username", "Email", "Role", "Created", "Last Login")
      )
    })
    
    # Reactive trigger used to force the users table to re-query the database
    users_refresh <- reactiveVal(0)
    
    refresh_users_table <- function() {
      users_refresh(users_refresh() + 1)
    }
    
    observeEvent(input$refresh_users, {
      refresh_users_table()
    })
    
    # Data Management Functions
    
    # Save data
    observeEvent(input$save_data, {
      
      req(input$data_name, input$data_content)
      
      # Validate JSON
      tryCatch({
        
        json_content <- jsonlite::fromJSON(input$data_content)
        
        # Save to database
        query <- "
          INSERT INTO application_data (user_id, data_name, data_content, data_type, is_public)
          VALUES ($1, $2, $3, $4, $5)
          ON CONFLICT (user_id, data_name) 
          DO UPDATE SET 
            data_content = EXCLUDED.data_content,
            data_type = EXCLUDED.data_type,
            is_public = EXCLUDED.is_public,
            updated_at = CURRENT_TIMESTAMP
        "
        
        pool::dbExecute(
          db_pool, query,
          params = list(
            1,  # Default user ID
            input$data_name,
            input$data_content,
            input$data_type,
            input$is_public
          )
        )
        
        showNotification("Data saved successfully!", type = "success")
        
        # Refresh saved data table
        refresh_saved_data_table()
        
        # Log activity
        db_ops$log_activity(
          user_id = 1,
          action = "save_data",
          details = jsonlite::toJSON(list(
            data_name = input$data_name,
            data_type = input$data_type
          ))
        )
        
      }, error = function(e) {
        showNotification(paste("Error saving data:", e$message), type = "error")
      })
    })
    
    # Saved data table
    output$saved_data_table <- DT::renderDataTable({
      
      saved_data_refresh()  # re-query when data is saved or deleted
      
      query <- "
        SELECT data_id, data_name, data_type, is_public, created_at, updated_at
        FROM application_data
        WHERE user_id = $1
      "
      
      if(input$data_type_filter != "") {
        query <- paste(query, "AND data_type = $2")
        params <- list(1, input$data_type_filter)
      } else {
        params <- list(1)
      }
      
      query <- paste(query, "ORDER BY updated_at DESC")
      
      saved_data <- pool::dbGetQuery(db_pool, query, params = params)
      
      if(nrow(saved_data) == 0) {
        return(data.frame("No saved data" = character(0)))
      }
      
      # Format for display
      display_data <- saved_data %>%
        mutate(
          created_at = format(created_at, "%Y-%m-%d %H:%M"),
          updated_at = format(updated_at, "%Y-%m-%d %H:%M"),
          is_public = ifelse(is_public, "Yes", "No")
        )
      
      DT::datatable(
        display_data,
        options = list(
          pageLength = 10,
          scrollX = TRUE
        ),
        selection = 'single',
        rownames = FALSE,
        colnames = c("ID", "Name", "Type", "Public", "Created", "Updated")
      )
    })
    
    # Reactive trigger used to force the saved data table to re-query
    saved_data_refresh <- reactiveVal(0)
    
    refresh_saved_data_table <- function() {
      saved_data_refresh(saved_data_refresh() + 1)
    }
    
    # Load selected data
    observeEvent(input$load_selected_data, {
      
      req(input$saved_data_table_rows_selected)
      
      selected_row <- input$saved_data_table_rows_selected
      
      # Get data details
      query <- "
        SELECT data_name, data_content, data_type
        FROM application_data
        WHERE user_id = $1
        ORDER BY updated_at DESC
        LIMIT 1 OFFSET $2
      "
      
      data_details <- pool::dbGetQuery(
        db_pool, query,
        params = list(1, selected_row - 1)
      )
      
      if(nrow(data_details) > 0) {
        
        # Display loaded data
        output$loaded_data_preview <- renderUI({
          
          div(
            h5(paste("Loaded:", data_details$data_name)),
            h6(paste("Type:", data_details$data_type)),
            
            pre(
              style = "background: #f8f9fa; padding: 15px; border-radius: 5px;",
              jsonlite::prettify(data_details$data_content)
            )
          )
        })
        
        showNotification("Data loaded successfully!", type = "success")
        
        # Log activity
        db_ops$log_activity(
          user_id = 1,
          action = "load_data",
          details = jsonlite::toJSON(list(
            data_name = data_details$data_name
          ))
        )
      }
    })
    
    # Delete selected data
    observeEvent(input$delete_selected_data, {
      
      req(input$saved_data_table_rows_selected)
      
      selected_row <- input$saved_data_table_rows_selected
      
      # Get data ID
      query <- "
        SELECT data_id, data_name
        FROM application_data
        WHERE user_id = $1
        ORDER BY updated_at DESC
        LIMIT 1 OFFSET $2
      "
      
      data_info <- pool::dbGetQuery(
        db_pool, query,
        params = list(1, selected_row - 1)
      )
      
      if(nrow(data_info) > 0) {
        
        # Confirm deletion
        showModal(modalDialog(
          title = "Confirm Deletion",
          paste("Are you sure you want to delete:", data_info$data_name, "?"),
          
          footer = tagList(
            modalButton("Cancel"),
            actionButton("confirm_delete", "Delete", class = "btn-danger")
          )
        ))
        
        # Store data ID for confirmation
        app_state$delete_data_id <- data_info$data_id
        app_state$delete_data_name <- data_info$data_name
      }
    })
    
    # Confirm deletion
    observeEvent(input$confirm_delete, {
      
      req(app_state$delete_data_id)
      
      query <- "DELETE FROM application_data WHERE data_id = $1"
      
      pool::dbExecute(db_pool, query, params = list(app_state$delete_data_id))
      
      showNotification("Data deleted successfully!", type = "success")
      
      # Log activity
      db_ops$log_activity(
        user_id = 1,
        action = "delete_data",
        details = jsonlite::toJSON(list(
          data_name = app_state$delete_data_name
        ))
      )
      
      # Refresh table
      refresh_saved_data_table()
      
      # Close modal
      removeModal()
      
      # Clear loaded data preview
      output$loaded_data_preview <- renderUI({
        div(class = "alert alert-info",
            "Select and load data to see preview here.")
      })
    })
    
    # Analytics Functions
    
    # Update data source choices
    observe({
      
      query <- "
        SELECT DISTINCT data_name
        FROM application_data
        WHERE user_id = $1 AND data_type IN ('dataset', 'analysis')
        ORDER BY data_name
      "
      
      data_sources <- pool::dbGetQuery(db_pool, query, params = list(1))
      
      choices <- setNames(data_sources$data_name, data_sources$data_name)
      
      updateSelectInput(session, "analytics_data_source", 
                       choices = c("Select data source..." = "", choices))
    })
    
    # Run analysis
    observeEvent(input$run_analysis, {
      
      req(input$analytics_data_source, input$analysis_type)
      
      # Get data
      query <- "
        SELECT data_content
        FROM application_data
        WHERE user_id = $1 AND data_name = $2
      "
      
      data_result <- pool::dbGetQuery(
        db_pool, query,
        params = list(1, input$analytics_data_source)
      )
      
      if(nrow(data_result) == 0) {
        showNotification("Selected data not found!", type = "error")
        return()
      }
      
      tryCatch({
        
        # Parse JSON data
        analysis_data <- jsonlite::fromJSON(data_result$data_content)
        
        # Perform analysis based on type
        analysis_result <- switch(
          input$analysis_type,
          
          "summary" = {
            if(is.data.frame(analysis_data)) {
              summary(analysis_data)
            } else {
              "Data is not in tabular format for summary statistics"
            }
          },
          
          "correlation" = {
            if(is.data.frame(analysis_data)) {
              numeric_cols <- sapply(analysis_data, is.numeric)
              if(sum(numeric_cols) >= 2) {
                cor(analysis_data[numeric_cols], use = "complete.obs")
              } else {
                "Insufficient numeric columns for correlation analysis"
              }
            } else {
              "Data is not in tabular format for correlation analysis"
            }
          },
          
          "distribution" = {
            if(is.data.frame(analysis_data)) {
              numeric_cols <- analysis_data[sapply(analysis_data, is.numeric)]
              if(ncol(numeric_cols) > 0) {
                lapply(numeric_cols, function(x) {
                  list(
                    mean = mean(x, na.rm = TRUE),
                    median = median(x, na.rm = TRUE),
                    sd = sd(x, na.rm = TRUE),
                    min = min(x, na.rm = TRUE),
                    max = max(x, na.rm = TRUE)
                  )
                })
              } else {
                "No numeric columns found for distribution analysis"
              }
            } else {
              "Data is not in tabular format for distribution analysis"
            }
          },
          
          "Analysis type not implemented"
        )
        
        app_state$analysis_results <- analysis_result
        
        # Save results if requested
        if(input$save_analysis_results) {
          
          results_name <- paste0("analysis_", input$analysis_type, "_", 
                                format(Sys.time(), "%Y%m%d_%H%M%S"))
          
          query <- "
            INSERT INTO application_data (user_id, data_name, data_content, data_type)
            VALUES ($1, $2, $3, 'analysis_result')
          "
          
          pool::dbExecute(
            db_pool, query,
            params = list(
              1,
              results_name,
              jsonlite::toJSON(analysis_result, auto_unbox = TRUE)
            )
          )
          
          showNotification("Analysis completed and results saved!", type = "success")
        } else {
          showNotification("Analysis completed!", type = "success")
        }
        
        # Log activity
        db_ops$log_activity(
          user_id = 1,
          action = "analysis",
          details = jsonlite::toJSON(list(
            analysis_type = input$analysis_type,
            data_source = input$analytics_data_source,
            saved = input$save_analysis_results
          ))
        )
        
      }, error = function(e) {
        showNotification(paste("Analysis error:", e$message), type = "error")
      })
    })
    
    # Display analysis results
    output$analysis_results <- renderPrint({
      
      if(!is.null(app_state$analysis_results)) {
        
        cat("Analysis Results:\n")
        cat("================\n\n")
        
        if(is.matrix(app_state$analysis_results)) {
          print(app_state$analysis_results)
        } else if(is.list(app_state$analysis_results)) {
          str(app_state$analysis_results)
        } else {
          print(app_state$analysis_results)
        }
        
      } else {
        cat("No analysis results available.\nRun an analysis to see results here.")
      }
    })
    
    # Activity Log Functions
    
    # Activity log table
    output$activity_log_table <- DT::renderDataTable({
      
      log_refresh()  # re-query when the refresh button is clicked
      
      query <- "
        SELECT al.timestamp, u.username, al.action, al.details
        FROM activity_log al
        LEFT JOIN users u ON al.user_id = u.user_id
        WHERE al.timestamp >= $1 AND al.timestamp <= $2
      "
      
      params <- list(
        as.POSIXct(paste(input$log_start_date, "00:00:00")),
        as.POSIXct(paste(input$log_end_date, "23:59:59"))
      )
      
      if(input$log_action_filter != "") {
        query <- paste(query, "AND al.action = $3")
        params <- c(params, input$log_action_filter)
      }
      
      query <- paste(query, "ORDER BY al.timestamp DESC LIMIT 1000")
      
      activity_data <- pool::dbGetQuery(db_pool, query, params = params)
      
      if(nrow(activity_data) == 0) {
        return(data.frame("No activity found" = character(0)))
      }
      
      # Format for display
      display_data <- activity_data %>%
        mutate(
          timestamp = format(timestamp, "%Y-%m-%d %H:%M:%S"),
          username = ifelse(is.na(username), "System", username),
          details = ifelse(is.na(details), "", 
                          substr(details, 1, 100))  # Truncate long details
        )
      
      DT::datatable(
        display_data,
        options = list(
          pageLength = 25,
          scrollX = TRUE,
          order = list(list(0, 'desc'))  # Sort by timestamp descending
        ),
        rownames = FALSE,
        colnames = c("Timestamp", "User", "Action", "Details")
      )
    })
    
    # Reactive trigger used to force the activity log to re-query
    log_refresh <- reactiveVal(0)
    
    observeEvent(input$refresh_log, {
      log_refresh(log_refresh() + 1)
    })
    
    # System Status Functions
    
    # Database status
    output$database_status <- renderPrint({
      
      tryCatch({
        
        # Server-side connection statistics from pg_stat_activity
        pool_info <- pool::dbGetQuery(db_pool, "
          SELECT 
            COUNT(*) FILTER (WHERE state = 'active') AS active,
            COUNT(*) FILTER (WHERE state = 'idle') AS idle,
            COUNT(*) AS total
          FROM pg_stat_activity
          WHERE datname = current_database()
        ")
        
        # Database version and stats
        db_info <- pool::dbGetQuery(db_pool, "
          SELECT 
            version() as db_version,
            current_database() as db_name,
            current_user as db_user,
            inet_server_addr() as server_ip,
            inet_server_port() as server_port
        ")
        
        # Table counts
        table_counts <- pool::dbGetQuery(db_pool, "
          SELECT 
            (SELECT COUNT(*) FROM users) as users_count,
            (SELECT COUNT(*) FROM application_data) as data_count,
            (SELECT COUNT(*) FROM activity_log) as log_count,
            (SELECT COUNT(*) FROM user_sessions) as session_count
        ")
        
        cat("Database Connection Status:\n")
        cat("==========================\n")
        cat("Status: Connected ✓\n")
        cat("Database:", db_info$db_name, "\n")
        cat("User:", db_info$db_user, "\n")
        cat("Server:", paste0(db_info$server_ip, ":", db_info$server_port), "\n\n")
        
        cat("Connection Pool:\n")
        cat("Active connections:", pool_info$active, "\n")
        cat("Idle connections:", pool_info$idle, "\n")
        cat("Total connections:", pool_info$total, "\n\n")
        
        cat("Database Statistics:\n")
        cat("Users:", table_counts$users_count, "\n")
        cat("Data records:", table_counts$data_count, "\n")
        cat("Activity logs:", table_counts$log_count, "\n")
        cat("Active sessions:", table_counts$session_count, "\n")
        
      }, error = function(e) {
        
        cat("Database Status: ERROR\n")
        cat("======================\n")
        cat("Error:", e$message, "\n")
      })
    })
    
    # Performance metrics
    output$performance_metrics <- renderPrint({
      
      tryCatch({
        
        # Query performance stats
        perf_stats <- pool::dbGetQuery(db_pool, "
          SELECT 
            pg_database_size(current_database()) as db_size_bytes,
            (SELECT COUNT(*) FROM pg_stat_activity WHERE state = 'active') as active_queries,
            (SELECT MAX(query_start) FROM pg_stat_activity) as last_query_time
        ")
        
        # Connection timing test
        start_time <- Sys.time()
        pool::dbGetQuery(db_pool, "SELECT 1")
        query_time <- as.numeric(difftime(Sys.time(), start_time, units = "secs"))
        
        cat("Performance Metrics:\n")
        cat("===================\n")
        cat("Database size:", round(perf_stats$db_size_bytes / 1024 / 1024, 2), "MB\n")
        cat("Active queries:", perf_stats$active_queries, "\n")
        cat("Query response time:", round(query_time * 1000, 2), "ms\n")
        cat("Last query:", format(perf_stats$last_query_time, "%H:%M:%S"), "\n")
        
      }, error = function(e) {
        
        cat("Performance Metrics: ERROR\n")
        cat("=========================\n")
        cat("Error:", e$message, "\n")
      })
    })
    
    # System configuration
    output$system_config <- renderPrint({
      
      cat("System Configuration:\n")
      cat("====================\n")
      cat("R version:", R.version.string, "\n")
      cat("Shiny version:", packageVersion("shiny"), "\n")
      cat("DBI version:", packageVersion("DBI"), "\n")
      cat("Pool version:", packageVersion("pool"), "\n")
      cat("Environment:", Sys.getenv("SHINY_ENV", "development"), "\n")
      cat("Session start:", format(Sys.time(), "%Y-%m-%d %H:%M:%S"), "\n")
    })
    
    # Database tools
    observeEvent(input$test_connection, {
      
      result <- tryCatch({
        
        test_result <- pool::dbGetQuery(db_pool, "SELECT 'Connection OK' as status, CURRENT_TIMESTAMP as test_time")
        
        output$db_tools_output <- renderUI({
          div(class = "alert alert-success",
              strong("Connection Test: PASSED"),
              br(),
              "Test time:", format(test_result$test_time, "%Y-%m-%d %H:%M:%S"))
        })
        
      }, error = function(e) {
        
        output$db_tools_output <- renderUI({
          div(class = "alert alert-danger",
              strong("Connection Test: FAILED"),
              br(),
              "Error:", e$message)
        })
      })
    })
    
    observeEvent(input$cleanup_sessions, {
      
      tryCatch({
        
        # Clean up expired sessions
        cleanup_result <- pool::dbExecute(db_pool, "
          DELETE FROM user_sessions 
          WHERE expires_at < CURRENT_TIMESTAMP OR active = FALSE
        ")
        
        output$db_tools_output <- renderUI({
          div(class = "alert alert-info",
              strong("Session Cleanup: COMPLETED"),
              br(),
              paste("Removed", cleanup_result, "expired sessions"))
        })
        
        # Log activity
        db_ops$log_activity(
          user_id = 1,
          action = "cleanup_sessions",
          details = jsonlite::toJSON(list(sessions_removed = cleanup_result))
        )
        
      }, error = function(e) {
        
        output$db_tools_output <- renderUI({
          div(class = "alert alert-danger",
              strong("Session Cleanup: FAILED"),
              br(),
              "Error:", e$message)
        })
      })
    })
    
    observeEvent(input$vacuum_database, {
      
      tryCatch({
        
        # Note: VACUUM cannot be run inside a transaction in PostgreSQL
        # This is a simplified example
        pool::dbExecute(db_pool, "ANALYZE")
        
        output$db_tools_output <- renderUI({
          div(class = "alert alert-success",
              strong("Database Optimization: COMPLETED"),
              br(),
              "Database statistics updated successfully")
        })
        
        # Log activity
        db_ops$log_activity(
          user_id = 1,
          action = "optimize_database",
          details = jsonlite::toJSON(list(operation = "analyze"))
        )
        
      }, error = function(e) {
        
        output$db_tools_output <- renderUI({
          div(class = "alert alert-danger",
              strong("Database Optimization: FAILED"),
              br(),
              "Error:", e$message)
        })
      })
    })
    
    # Session cleanup on disconnect
    session$onSessionEnded(function() {
      
      # Log session end
      db_ops$log_activity(
        user_id = 1,
        action = "session_end",
        details = jsonlite::toJSON(list(
          session_id = session$token,
          duration_minutes = as.numeric(difftime(Sys.time(), 
                                                session_start_time, 
                                                units = "mins"))
        ))
      )
    })
  }
  
  # Cleanup function for application shutdown
  onStop(function() {
    cat("Closing database connection pool...\n")
    pool::poolClose(db_pool)
    cat("Database connections closed.\n")
  })
  
  return(list(ui = ui, server = server))
}

Common Issues and Solutions

Issue 1: Connection Pool Exhaustion

Problem: Application runs out of database connections under high load, causing errors and timeouts.

Solution:

Implement proper connection pool management and monitoring:

# Advanced connection pool configuration
create_robust_connection_pool <- function(config) {
  
  pool <- pool::dbPool(
    drv = RPostgreSQL::PostgreSQL(),
    host = config$host,
    port = config$port,
    dbname = config$dbname,
    user = config$user,
    password = config$password,
    
    # Optimized pool settings (`%||%` requires R >= 4.4 or the rlang package)
    minSize = config$minSize %||% 2,
    maxSize = config$maxSize %||% 10,
    idleTimeout = 600000,  # 10 minutes
    validationQuery = "SELECT 1",
    
    # Connection retry settings
    onActivate = function(conn) {
      # Set connection-specific parameters
      DBI::dbExecute(conn, "SET application_name = 'shiny_app'")
      DBI::dbExecute(conn, "SET statement_timeout = '30s'")
    },
    
    onPassivate = function(conn) {
      # Clean up connection before returning to pool
      DBI::dbExecute(conn, "RESET ALL")
    }
  )
  
  # Add pool monitoring
  # NOTE: dbPool_info() below is illustrative; the pool package does not export
  # a helper with this name, so substitute the introspection your pool version
  # provides (for example, printing the Pool object).
  pool_monitor <- function() {
    
    pool_info <- pool::dbPool_info(pool)
    
    # Alert if pool utilization is high
    utilization <- pool_info$active / (pool_info$active + pool_info$idle)
    
    if(utilization > 0.8) {
      warning("High database pool utilization: ", round(utilization * 100, 1), "%")
    }
    
    # Log pool statistics
    cat("Pool status - Active:", pool_info$active, 
        "Idle:", pool_info$idle, 
        "Total:", pool_info$total, "\n")
  }
  
  # Schedule periodic monitoring: later::later() fires only once, so the
  # wrapper reschedules itself to run roughly every minute
  schedule_monitor <- function() {
    pool_monitor()
    later::later(schedule_monitor, delay = 60)
  }
  later::later(schedule_monitor, delay = 60)
  
  return(pool)
}
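
A quick usage sketch for this helper, assuming the connection settings are supplied through environment variables (the variable names below are illustrative):

# Hypothetical configuration built from environment variables
db_config <- list(
  host     = Sys.getenv("DB_HOST", "localhost"),
  port     = as.integer(Sys.getenv("DB_PORT", "5432")),
  dbname   = Sys.getenv("DB_NAME", "shiny_app"),
  user     = Sys.getenv("DB_USER"),
  password = Sys.getenv("DB_PASSWORD"),
  minSize  = 2,
  maxSize  = 10
)

db_pool <- create_robust_connection_pool(db_config)

# Release all pooled connections when the application stops
shiny::onStop(function() pool::poolClose(db_pool))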

Issue 2: SQL Injection Vulnerabilities

Problem: Dynamic SQL queries with user input create security vulnerabilities.

Solution:

Always use parameterized queries and input validation:

# Secure query patterns
create_secure_query_functions <- function(pool) {
  
  list(
    
    # Secure search with user input
    search_data = function(search_term, data_type = NULL, user_id) {
      
      # Input validation
      if(!is.character(search_term) || nchar(search_term) == 0) {
        stop("Invalid search term")
      }
      
      if(nchar(search_term) > 100) {
        stop("Search term too long")
      }
      
      # Escape LIKE wildcards (% and _) so they match literally, then wrap in a pattern
      safe_search_term <- paste0("%", gsub("([%_])", "\\\\\\1", search_term), "%")
      
      # Build parameterized query
      base_query <- "
        SELECT data_id, data_name, data_type, created_at
        FROM application_data
        WHERE user_id = $1 
        AND (data_name ILIKE $2 OR data_content::text ILIKE $2)
      "
      
      params <- list(user_id, safe_search_term)
      
      # Add optional type filter
      if(!is.null(data_type)) {
        
        if(!data_type %in% c("analysis", "report", "dataset", "config")) {
          stop("Invalid data type")
        }
        
        base_query <- paste(base_query, "AND data_type = $3")
        params <- append(params, data_type)
      }
      
      query <- paste(base_query, "ORDER BY created_at DESC LIMIT 50")
      
      tryCatch({
        
        result <- pool::dbGetQuery(pool, query, params = params)
        return(list(success = TRUE, data = result))
        
      }, error = function(e) {
        
        # Log security event
        cat("Search query error:", e$message, "\n")
        return(list(success = FALSE, error = "Search failed"))
      })
    },
    
    # Secure dynamic filtering
    filter_data = function(filters, user_id) {
      
      # Validate filters structure
      if(!is.list(filters) || length(filters) == 0) {
        stop("Invalid filters")
      }
      
      # Allowed filter columns (whitelist approach)
      allowed_columns <- c("data_type", "is_public", "created_at")
      
      base_query <- "
        SELECT data_id, data_name, data_type, is_public, created_at
        FROM application_data
        WHERE user_id = $1
      "
      
      conditions <- c()
      params <- list(user_id)
      
      for(filter_name in names(filters)) {
        
        # Validate column name
        if(!filter_name %in% allowed_columns) {
          warning("Invalid filter column:", filter_name)
          next
        }
        
        filter_value <- filters[[filter_name]]
        
        # Type-specific validation
        if(filter_name == "data_type") {
          
          if(!filter_value %in% c("analysis", "report", "dataset", "config")) {
            warning("Invalid data_type value:", filter_value)
            next
          }
          
          conditions <- c(conditions, paste0("data_type = $", length(params) + 1))
          params <- append(params, filter_value)
          
        } else if(filter_name == "is_public") {
          
          if(!is.logical(filter_value)) {
            warning("Invalid is_public value:", filter_value)
            next
          }
          
          conditions <- c(conditions, paste0("is_public = $", length(params) + 1))
          params <- append(params, filter_value)
          
        } else if(filter_name == "created_at") {
          
          if(!inherits(filter_value, "Date") && !inherits(filter_value, "POSIXct")) {
            warning("Invalid created_at value:", filter_value)
            next
          }
          
          conditions <- c(conditions, paste0("created_at >= $", length(params) + 1))
          params <- append(params, filter_value)
        }
      }
      
      # Build final query
      if(length(conditions) > 0) {
        query <- paste(base_query, "AND", paste(conditions, collapse = " AND "))
      } else {
        query <- base_query
      }
      
      query <- paste(query, "ORDER BY created_at DESC LIMIT 100")
      
      result <- pool::dbGetQuery(pool, query, params = params)
      
      return(result)
    }
  )
}
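
A brief usage sketch (db_pool is assumed to be the application's connection pool, and the user ID is a placeholder):

secure_queries <- create_secure_query_functions(db_pool)

# Search within a single user's records
search_result <- secure_queries$search_data("quarterly revenue", user_id = 42)

# Apply whitelisted filters for the same user
filtered <- secure_queries$filter_data(
  filters = list(data_type = "report", is_public = TRUE),
  user_id = 42
)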

Issue 3: Database Transaction Management

Problem: Complex operations fail partway through, leaving data in an inconsistent state.

Solution:

Implement comprehensive transaction management:

# Robust transaction management
create_transaction_wrapper <- function(pool) {
  
  execute_with_transaction <- function(operations, isolation_level = "READ COMMITTED") {
    
    conn <- pool::poolCheckout(pool)
    
    tryCatch({
      
      # Set transaction isolation level
      DBI::dbExecute(conn, paste("SET TRANSACTION ISOLATION LEVEL", isolation_level))
      
      # Begin transaction
      DBI::dbBegin(conn)
      
      results <- list()
      
      # Execute operations
      for(i in seq_along(operations)) {
        
        operation <- operations[[i]]
        
        # Execute operation with error context
        tryCatch({
          
          if(is.function(operation)) {
            
            # Function-based operation
            result <- operation(conn)
            
          } else if(is.list(operation)) {
            
            # Query-based operation
            if(!is.null(operation$query)) {
              
              if(!is.null(operation$params)) {
                result <- DBI::dbGetQuery(conn, operation$query, params = operation$params)
              } else {
                result <- DBI::dbGetQuery(conn, operation$query)
              }
              
            } else if(!is.null(operation$execute)) {
              
              if(!is.null(operation$params)) {
                result <- DBI::dbExecute(conn, operation$execute, params = operation$params)
              } else {
                result <- DBI::dbExecute(conn, operation$execute)
              }
              
            } else {
              stop("Invalid operation format")
            }
            
          } else {
            stop("Unsupported operation type")
          }
          
          results[[i]] <- result
          
        }, error = function(e) {
          
          # Add context to error
          stop(paste("Operation", i, "failed:", e$message))
        })
      }
      
      # Commit transaction
      DBI::dbCommit(conn)
      
      return(list(success = TRUE, results = results))
      
    }, error = function(e) {
      
      # Rollback transaction
      tryCatch({
        DBI::dbRollback(conn)
      }, error = function(rollback_error) {
        warning("Rollback failed:", rollback_error$message)
      })
      
      return(list(success = FALSE, error = e$message))
      
    }, finally = {
      
      # Always return connection to pool
      pool::poolReturn(conn)
    })
  }
  
  return(execute_with_transaction)
}

# Example usage of transaction wrapper
transfer_data_between_users <- function(from_user_id, to_user_id, data_id, tx_manager) {
  
  operations <- list(
    
    # Verify source user owns the data
    list(
      query = "
        SELECT user_id FROM application_data 
        WHERE data_id = $1 AND user_id = $2
      ",
      params = list(data_id, from_user_id)
    ),
    
    # Create audit log entry
    list(
      execute = "
        INSERT INTO activity_log (user_id, action, details)
        VALUES ($1, 'data_transfer_start', $2)
      ",
      params = list(
        from_user_id,
        jsonlite::toJSON(list(
          data_id = data_id,
          to_user = to_user_id,
          timestamp = Sys.time()
        ))
      )
    ),
    
    # Transfer ownership
    list(
      execute = "
        UPDATE application_data 
        SET user_id = $1, updated_at = CURRENT_TIMESTAMP
        WHERE data_id = $2
      ",
      params = list(to_user_id, data_id)
    ),
    
    # Log completion
    list(
      execute = "
        INSERT INTO activity_log (user_id, action, details)
        VALUES ($1, 'data_transfer_complete', $2)
      ",
      params = list(
        to_user_id,
        jsonlite::toJSON(list(
          data_id = data_id,
          from_user = from_user_id,
          timestamp = Sys.time()
        ))
      )
    )
  )
  
  result <- tx_manager(operations)
  
  return(result)
}
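
A short usage sketch of the transaction wrapper (the pool and IDs below are placeholders). Note that the first operation in transfer_data_between_users only reads the ownership row; in a production version you would verify that it returned a row (for example, in a function-based operation that stops otherwise) before the transfer proceeds.

tx_manager <- create_transaction_wrapper(db_pool)

transfer_result <- transfer_data_between_users(
  from_user_id = 101,
  to_user_id   = 102,
  data_id      = 555,
  tx_manager   = tx_manager
)

if (transfer_result$success) {
  cat("Transfer completed\n")
} else {
  cat("Transfer failed:", transfer_result$error, "\n")
}
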
Database Security Best Practices

Always use parameterized queries, implement proper authentication and authorization, encrypt sensitive data both in transit and at rest, regularly audit database access, and implement proper backup and recovery procedures for production systems.
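
As one concrete illustration, here is a minimal sketch of keeping credentials out of application code with the config package; the config.yml layout and field names are assumptions:

# Assumed config.yml entry, kept out of version control:
# default:
#   db:
#     host: "db.example.com"
#     port: 5432
#     dbname: "shiny_app"
#     user: "app_user"
#     password: "secret"

db_settings <- config::get("db")
db_pool <- create_robust_connection_pool(db_settings)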

Common Questions About Database Integration

Which database should I choose for my Shiny application?

Consider these factors when choosing a database:

PostgreSQL - Best for:

  • Complex analytical queries and advanced data types
  • High concurrent user loads and enterprise applications
  • Applications requiring ACID compliance and data integrity
  • Advanced features like JSON support and full-text search

MySQL/MariaDB - Best for:

  • Web applications with moderate complexity
  • Environments where MySQL expertise already exists
  • Applications with high read/write ratios
  • Cost-conscious deployments

SQLite - Best for:

  • Development and testing environments
  • Single-user or low-concurrency applications
  • Embedded scenarios with minimal setup requirements
  • Prototyping and proof-of-concept applications

SQL Server - Best for:

  • Microsoft-centric enterprise environments
  • Applications requiring integration with other Microsoft tools
  • Organizations with existing SQL Server infrastructure

Choose based on your scalability needs, team expertise, infrastructure constraints, and budget considerations.
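
Because DBI abstracts the driver layer, switching databases is mostly a matter of swapping the drv argument; a minimal sketch (connection details are placeholders):

# SQLite for local development: file-based, zero setup
dev_pool <- pool::dbPool(
  drv    = RSQLite::SQLite(),
  dbname = "dev_database.sqlite"
)

# PostgreSQL for production
prod_pool <- pool::dbPool(
  drv      = RPostgreSQL::PostgreSQL(),
  host     = "db.example.com",
  port     = 5432,
  dbname   = "shiny_app",
  user     = "app_user",
  password = Sys.getenv("DB_PASSWORD")
)

Keep in mind that SQL dialects differ: PostgreSQL-specific syntax used elsewhere in this guide (such as ILIKE or $1 placeholders) may need adjusting for SQLite.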

Should I use direct database connections or connection pooling?

Direct Connections:

  • Each operation opens/closes a database connection
  • Simple but inefficient for multiple operations
  • Higher latency due to connection overhead
  • Not suitable for concurrent users

Connection Pooling:

  • Maintains a pool of reusable connections
  • Significantly better performance under load
  • Supports concurrent users efficiently
  • Requires proper configuration and monitoring

Implementation comparison:

# Direct connection (avoid in production)
direct_query <- function(query) {
  conn <- DBI::dbConnect(RPostgreSQL::PostgreSQL(), ...)
  result <- DBI::dbGetQuery(conn, query)
  DBI::dbDisconnect(conn)
  return(result)
}

# Connection pooling (recommended)
pooled_query <- function(query, pool) {
  result <- pool::dbGetQuery(pool, query)
  return(result)
}

Always use connection pooling for production applications with multiple users or frequent database operations.

How do I manage database schema changes as my application evolves?

Implement a structured migration system:

# Database migration framework
create_migration_system <- function(pool) {
  
  # Create migrations table
  setup_migrations_table <- function() {
    
    migration_table_sql <- "
      CREATE TABLE IF NOT EXISTS schema_migrations (
        migration_id SERIAL PRIMARY KEY,
        version VARCHAR(50) UNIQUE NOT NULL,
        description TEXT,
        applied_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        checksum VARCHAR(64)
      )
    "
    
    pool::dbExecute(pool, migration_table_sql)
  }
  
  # Apply migration
  apply_migration <- function(version, description, sql_commands) {
    
    # Check if already applied
    existing <- pool::dbGetQuery(
      pool,
      "SELECT version FROM schema_migrations WHERE version = $1",
      params = list(version)
    )
    
    if(nrow(existing) > 0) {
      cat("Migration", version, "already applied\n")
      return(TRUE)
    }
    
    # Apply migration in transaction
    conn <- pool::poolCheckout(pool)
    
    tryCatch({
      
      DBI::dbBegin(conn)
      
      # Execute migration commands
      for(sql in sql_commands) {
        DBI::dbExecute(conn, sql)
      }
      
      # Record migration
      DBI::dbExecute(
        conn,
        "INSERT INTO schema_migrations (version, description) VALUES ($1, $2)",
        params = list(version, description)
      )
      
      DBI::dbCommit(conn)
      
      cat("Migration", version, "applied successfully\n")
      return(TRUE)
      
    }, error = function(e) {
      
      DBI::dbRollback(conn)
      stop("Migration failed: ", e$message)
      
    }, finally = {
      
      pool::poolReturn(conn)
    })
  }
  
  return(list(
    setup = setup_migrations_table,
    apply = apply_migration
  ))
}

# Example migration usage
migration_system <- create_migration_system(db_pool)
migration_system$setup()

# Apply a migration
migration_system$apply(
  version = "001_add_user_preferences",
  description = "Add user preferences table",
  sql_commands = c(
    "CREATE TABLE user_preferences (...)",
    "CREATE INDEX idx_preferences_user_id ON user_preferences(user_id)"
  )
)

How should I handle database errors in production applications?

Use comprehensive error handling patterns:

# Robust error handling wrapper
create_error_handler <- function(pool) {
  
  safe_db_operation <- function(operation_name, query_function) {
    
    tryCatch({
      
      # Log operation start
      cat("Starting operation:", operation_name, "\n")
      
      # Execute query function
      result <- query_function()
      
      # Log success
      cat("Operation completed:", operation_name, "\n")
      
      return(list(
        success = TRUE,
        data = result,
        operation = operation_name,
        timestamp = Sys.time()
      ))
      
    }, error = function(e) {
      
      # Categorize error types
      error_type <- if(grepl("connection", e$message, ignore.case = TRUE)) {
        "connection_error"
      } else if(grepl("syntax", e$message, ignore.case = TRUE)) {
        "sql_syntax_error"
      } else if(grepl("duplicate", e$message, ignore.case = TRUE)) {
        "constraint_violation"
      } else if(grepl("timeout", e$message, ignore.case = TRUE)) {
        "timeout_error"
      } else {
        "general_error"
      }
      
      # Log error with context
      cat("Operation failed:", operation_name, "\n")
      cat("Error type:", error_type, "\n")
      cat("Error message:", e$message, "\n")
      
      # Return structured error
      return(list(
        success = FALSE,
        error = e$message,
        error_type = error_type,
        operation = operation_name,
        timestamp = Sys.time()
      ))
    })
  }
  
  return(safe_db_operation)
}

# Usage example
safe_operation <- create_error_handler(db_pool)

result <- safe_operation("user_creation", function() {
  pool::dbGetQuery(
    db_pool,
    "INSERT INTO users (username, email) VALUES ($1, $2) RETURNING user_id",
    params = list("newuser", "user@example.com")
  )
})

if(result$success) {
  cat("User created with ID:", result$data$user_id, "\n")
} else {
  cat("Failed to create user:", result$error, "\n")
}

How can I optimize database performance for Shiny applications?

Key performance strategies include:

Query Optimization:

  • Use EXPLAIN ANALYZE to understand query performance
  • Create appropriate indexes for frequently queried columns
  • Limit result sets with LIMIT and pagination (see the sketch below)
  • Use parameterized queries to enable query plan caching
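
A minimal sketch of parameterized pagination against the application_data table used in earlier examples (page size and column choices are illustrative):

# Hypothetical paginated query: LIMIT/OFFSET bound as parameters
fetch_page <- function(pool, user_id, page = 1, page_size = 25) {
  
  offset <- (page - 1) * page_size
  
  pool::dbGetQuery(
    pool,
    "
      SELECT data_id, data_name, created_at
      FROM application_data
      WHERE user_id = $1
      ORDER BY created_at DESC
      LIMIT $2 OFFSET $3
    ",
    params = list(user_id, page_size, offset)
  )
}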

Connection Management:

  • Size connection pools appropriately (typically 2-3x the number of CPU cores)
  • Monitor pool utilization and adjust as needed
  • Use connection validation to handle stale connections
  • Implement connection retry logic for temporary failures (see the sketch below)
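
A minimal retry sketch for transient failures (this helper is an assumption, not part of the pool package):

# Retry a query a few times when the error looks transient
query_with_retry <- function(pool, query, params = NULL, max_attempts = 3) {
  
  for (attempt in seq_len(max_attempts)) {
    
    result <- tryCatch(
      pool::dbGetQuery(pool, query, params = params),
      error = function(e) e
    )
    
    if (!inherits(result, "error")) {
      return(result)
    }
    
    # Give up on errors that do not look transient, or once attempts are exhausted
    transient <- grepl("connection|timeout", conditionMessage(result), ignore.case = TRUE)
    if (!transient || attempt == max_attempts) {
      stop(result)
    }
    
    Sys.sleep(2^attempt)  # Exponential backoff before retrying
  }
}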

Caching Strategies:

# Implement query result caching
create_query_cache <- function(ttl_seconds = 300) {
  
  cache <- list()
  
  cached_query <- function(query, params = NULL, pool) {
    
    # Create cache key
    cache_key <- digest::digest(list(query = query, params = params))
    
    # Check cache
    if(cache_key %in% names(cache)) {
      
      cached_entry <- cache[[cache_key]]
      
      # Check if still valid
      if(difftime(Sys.time(), cached_entry$timestamp, units = "secs") < ttl_seconds) {
        return(cached_entry$result)
      }
    }
    
    # Execute query
    result <- pool::dbGetQuery(pool, query, params = params)
    
    # Cache result
    cache[[cache_key]] <<- list(
      result = result,
      timestamp = Sys.time()
    )
    
    return(result)
  }
  
  return(cached_query)
}

Memory Management:

  • Process large datasets in chunks (see the sketch below)
  • Use streaming for bulk operations
  • Clear unused reactive values and objects
  • Monitor memory usage in production
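
A minimal sketch of chunked processing using DBI's incremental fetch interface (the query and chunk size are illustrative):

# Stream a large result set in chunks instead of loading it all at once
process_in_chunks <- function(pool, chunk_size = 10000) {
  
  conn <- pool::poolCheckout(pool)
  on.exit(pool::poolReturn(conn))
  
  res <- DBI::dbSendQuery(conn, "SELECT * FROM activity_log")
  on.exit(DBI::dbClearResult(res), add = TRUE, after = FALSE)
  
  total_rows <- 0
  
  while (!DBI::dbHasCompleted(res)) {
    chunk <- DBI::dbFetch(res, n = chunk_size)
    total_rows <- total_rows + nrow(chunk)
    # ... summarize or write out the chunk here, then let it be garbage collected ...
  }
  
  total_rows
}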

Test Your Understanding

You’re deploying a Shiny application that expects 50 concurrent users performing database operations. What’s the recommended connection pool configuration?

pool <- pool::dbPool(
  drv = RPostgreSQL::PostgreSQL(),
  # ... connection parameters ...
  minSize = ?,
  maxSize = ?,
  idleTimeout = ?
)
  A) minSize = 1, maxSize = 50, idleTimeout = 30000
  B) minSize = 5, maxSize = 15, idleTimeout = 300000
  C) minSize = 50, maxSize = 100, idleTimeout = 60000
  D) minSize = 2, maxSize = 200, idleTimeout = 600000

Hints:
  • Consider that not all users will be active simultaneously
  • Think about database server resource limitations
  • Remember that connections consume memory on both client and server
  • Consider typical user behavior patterns in web applications

Correct answer: B) minSize = 5, maxSize = 15, idleTimeout = 300000

Reasoning:

  • minSize = 5: Ensures immediate availability without cold start delays
  • maxSize = 15: Typically 20-30% of concurrent users is sufficient since not all users perform database operations simultaneously
  • idleTimeout = 300000 (5 minutes): Balances resource conservation with responsiveness

Why other options are problematic:

  • Option A: maxSize too low for 50 concurrent users, idleTimeout too short
  • Option C: Excessive connections that would overwhelm the database server
  • Option D: Extremely high maxSize wastes resources, though timeout is reasonable

Additional considerations:

# Production-ready pool configuration
pool <- pool::dbPool(
  drv = RPostgreSQL::PostgreSQL(),
  # ... connection parameters ...
  minSize = 5,
  maxSize = 15,
  idleTimeout = 300000,  # 5 minutes
  validationQuery = "SELECT 1",
  
  # Additional production settings
  onActivate = function(conn) {
    DBI::dbExecute(conn, "SET application_name = 'shiny_app'")
  }
)

Rule of thumb: Connection pool size should be 2-3x the number of CPU cores on your database server, adjusted for your specific usage patterns.

You need to create a search function that allows users to search through their saved data. Which implementation is secure against SQL injection?

Option A:

search_data <- function(search_term, pool) {
  query <- paste0("SELECT * FROM data WHERE content LIKE '%", search_term, "%'")
  pool::dbGetQuery(pool, query)
}

Option B:

search_data <- function(search_term, pool) {
  query <- "SELECT * FROM data WHERE content LIKE $1"
  pool::dbGetQuery(pool, query, params = list(paste0("%", search_term, "%")))
}

Option C:

search_data <- function(search_term, pool) {
  safe_term <- gsub("'", "''", search_term)
  query <- paste0("SELECT * FROM data WHERE content LIKE '%", safe_term, "%'")
  pool::dbGetQuery(pool, query)
}

Option D:

search_data <- function(search_term, pool) {
  query <- "SELECT * FROM data WHERE content LIKE ?"
  pool::dbGetQuery(pool, query, params = list(search_term))
}

Hints:

  • Consider how user input is incorporated into the SQL query
  • Think about parameterized queries vs string concatenation
  • Remember that simple escaping is not sufficient protection
  • Consider which approach separates SQL structure from data

Correct answer: B) Parameterized query with proper parameter binding

search_data <- function(search_term, pool) {
  # Input validation
  if (!is.character(search_term) || nchar(search_term) == 0) {
    stop("Invalid search term")
  }
  
  if (nchar(search_term) > 100) {
    stop("Search term too long")
  }
  
  # Parameterized query with LIKE pattern
  query <- "SELECT * FROM data WHERE content LIKE $1"
  
  # Safely construct LIKE pattern
  pattern <- paste0("%", search_term, "%")
  
  pool::dbGetQuery(pool, query, params = list(pattern))
}

Why this is secure:

  • Uses parameterized queries ($1) instead of string concatenation
  • Database driver handles proper escaping and quoting
  • SQL structure is separate from user data
  • Includes input validation for additional safety

Why other options are wrong:

  • Option A: Direct string concatenation - classic SQL injection vulnerability
  • Option C: Manual escaping is insufficient and error-prone
  • Option D: Uses ? placeholder which may not work with all R database drivers

Enhanced secure version:

secure_search_data <- function(search_term, user_id, pool) {
  
  # Comprehensive input validation
  if (!is.character(search_term) || nchar(search_term) == 0) {
    return(list(success = FALSE, error = "Invalid search term"))
  }
  
  if (nchar(search_term) > 100) {
    return(list(success = FALSE, error = "Search term too long"))
  }
  
  # Sanitize for LIKE pattern (escape special LIKE characters)
  safe_pattern <- gsub("([%_\\\\])", "\\\\\\1", search_term)
  like_pattern <- paste0("%", safe_pattern, "%")
  
  # Secure parameterized query with user authorization
  query <- "
    SELECT data_id, data_name, data_type, created_at
    FROM application_data
    WHERE user_id = $1 
    AND (data_name ILIKE $2 OR data_content::text ILIKE $2)
    ORDER BY created_at DESC
    LIMIT 50
  "
  
  tryCatch({
    
    result <- pool::dbGetQuery(
      pool, query,
      params = list(user_id, like_pattern)
    )
    
    return(list(success = TRUE, data = result))
    
  }, error = function(e) {
    
    # Log error without exposing details to user
    cat("Search error:", e$message, "\n")
    return(list(success = FALSE, error = "Search failed"))
  })
}

You need to implement a user registration process that creates a user record, initializes their preferences, and logs the activity. If any step fails, all changes should be rolled back. What’s the correct approach?

  A) Execute each operation separately and handle errors individually
  B) Use a database transaction to wrap all operations
  C) Create the user first, then handle preferences and logging separately
  D) Use separate database connections for each operation

Hints:

  • Consider data consistency requirements
  • Think about what happens if an operation fails partway through
  • Remember ACID properties (Atomicity, Consistency, Isolation, Durability)
  • Consider the user experience if partial operations succeed

Correct answer: B) Use a database transaction to wrap all operations

register_user_with_transaction <- function(username, email, password_hash, initial_prefs, pool) {
  
  conn <- pool::poolCheckout(pool)
  
  tryCatch({
    
    # Begin transaction
    DBI::dbBegin(conn)
    
    # Step 1: Create user
    user_result <- DBI::dbGetQuery(
      conn,
      "INSERT INTO users (username, email, password_hash) VALUES ($1, $2, $3) RETURNING user_id",
      params = list(username, email, password_hash)
    )
    
    user_id <- user_result$user_id[1]
    
    # Step 2: Initialize preferences
    for (pref_key in names(initial_prefs)) {
      
      DBI::dbExecute(
        conn,
        "INSERT INTO user_preferences (user_id, preference_key, preference_value) VALUES ($1, $2, $3)",
        params = list(user_id, pref_key, jsonlite::toJSON(initial_prefs[[pref_key]]))
      )
    }
    
    # Step 3: Log registration activity
    DBI::dbExecute(
      conn,
      "INSERT INTO activity_log (user_id, action, details) VALUES ($1, $2, $3)",
      params = list(
        user_id,
        "user_registration",
        jsonlite::toJSON(list(
          username = username,
          registration_time = Sys.time(),
          preferences_count = length(initial_prefs)
        ))
      )
    )
    
    # Commit transaction - all operations succeed together
    DBI::dbCommit(conn)
    
    return(list(
      success = TRUE,
      user_id = user_id,
      message = "User registered successfully"
    ))
    
  }, error = function(e) {
    
    # Rollback transaction - all operations fail together
    DBI::dbRollback(conn)
    
    return(list(
      success = FALSE,
      error = paste("Registration failed:", e$message)
    ))
    
  }, finally = {
    
    # Always return connection to pool
    pool::poolReturn(conn)
  })
}

Why transactions are essential:

Atomicity: All operations succeed or all fail together
Consistency: Database remains in valid state
Isolation: Other users don’t see partial results
Durability: Committed changes are permanent

Why other approaches fail:

  • Option A: Risk of partial completion leaving inconsistent data
  • Option C: User could exist without preferences, breaking application assumptions
  • Option D: Separate connections can’t participate in same transaction

Alternative pattern for complex transactions:

# Transaction manager pattern
create_user_registration_transaction <- function(pool) {
  
  operations <- list(
    
    create_user = function(conn, username, email, password_hash) {
      DBI::dbGetQuery(
        conn,
        "INSERT INTO users (username, email, password_hash) VALUES ($1, $2, $3) RETURNING user_id",
        params = list(username, email, password_hash)
      )
    },
    
    create_preferences = function(conn, user_id, preferences) {
      for (key in names(preferences)) {
        DBI::dbExecute(
          conn,
          "INSERT INTO user_preferences (user_id, preference_key, preference_value) VALUES ($1, $2, $3)",
          params = list(user_id, key, jsonlite::toJSON(preferences[[key]]))
        )
      }
      return(length(preferences))
    },
    
    log_registration = function(conn, user_id, username) {
      DBI::dbExecute(
        conn,
        "INSERT INTO activity_log (user_id, action, details) VALUES ($1, $2, $3)",
        params = list(user_id, "registration", jsonlite::toJSON(list(username = username)))
      )
      return(TRUE)
    }
  )
  
  return(operations)
}

Conclusion

Database connectivity transforms Shiny applications from isolated analytical tools into integrated business systems that persist data, manage users, and coordinate workflows across entire organizations. Through mastering connection pooling, secure query patterns, transaction management, and performance optimization, you’ve gained the ability to build enterprise-grade applications that handle real-world data requirements while maintaining the analytical power that makes Shiny unique.

The database integration patterns you’ve learned provide the foundation for applications that scale from individual projects to organization-wide platforms, supporting hundreds of concurrent users while maintaining data integrity, security, and performance. This combination of R’s analytical capabilities with robust data persistence creates powerful business intelligence platforms that compete with commercial solutions while providing superior flexibility and cost-effectiveness.

Your understanding of production database patterns, security implementations, and performance optimization positions you to build mission-critical applications that organizations depend on for daily operations, strategic decision-making, and long-term data management.

Next Steps

Based on your mastery of database integration, here are the recommended paths for continuing your advanced Shiny development journey:

Immediate Next Steps (Complete These First)

  • User Authentication and Security - Secure your database-connected applications with proper user management and access control
  • Testing and Debugging Strategies - Implement comprehensive testing for database-driven applications including transaction testing and data integrity validation
  • Practice Exercise: Build a complete multi-user application with user registration, data sharing, and audit logging using your database integration skills

Building on Your Foundation (Choose Your Path)

For Production Systems:

For Advanced Development:

For Enterprise Applications:

Long-term Goals (2-4 Weeks)

  • Build a complete multi-tenant application with user isolation, role-based access control, and comprehensive audit trails
  • Implement a real-time collaboration platform with database synchronization and conflict resolution
  • Create a scalable analytics platform that handles millions of records with optimized query performance
  • Develop a comprehensive backup and disaster recovery strategy for production database applications

Citation

BibTeX citation:
@online{kassambara2025,
  author = {Kassambara, Alboukadel},
  title = {Database {Connectivity} and {Data} {Persistence:} {Build}
    {Data-Driven} {Applications}},
  date = {2025-05-23},
  url = {https://www.datanovia.com/learn/tools/shiny-apps/advanced-concepts/database-connections.html},
  langid = {en}
}
For attribution, please cite this work as:
Kassambara, Alboukadel. 2025. “Database Connectivity and Data Persistence: Build Data-Driven Applications.” May 23, 2025. https://www.datanovia.com/learn/tools/shiny-apps/advanced-concepts/database-connections.html.