Statistical Analysis Tools
Perform comprehensive statistical analyses directly in your browser with no installation required. StatFusion’s analysis tools combine powerful statistical methods with intuitive interfaces and educational content to help you understand your data.
Statistical Testing Categories
StatFusion offers a wide range of statistical tests organized by category. Each tool includes interactive analysis capabilities, assumption checking, visualization options, and detailed result interpretation.
Our collection of statistical tools is continuously expanding. Subscribe to our updates to be notified when new tools become available.
Descriptive Statistics
Essential tools for summarizing and understanding the basic properties of your data.
Basic Summaries
Understand the central tendencies and distributions in your data:
- Summary Statistics - Compute means, medians, standard deviations and more
- Frequency Analysis - Create frequency tables and distributions
- Percentile Calculator - Calculate percentiles and quartiles
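To show the kind of output these tools produce, here is a minimal Python sketch using pandas (an assumption on our part; StatFusion itself runs in the browser with no installation) on made-up data:

```python
import pandas as pd

# Hypothetical example data (not from StatFusion)
df = pd.DataFrame({
    "score": [4.2, 5.1, 3.8, 4.9, 5.5, 4.1, 3.9, 5.0],
    "group": ["A", "A", "B", "B", "A", "B", "A", "B"],
})

# Summary statistics: count, mean, std, min, quartiles, max
print(df["score"].describe())

# Frequency table for a categorical variable
print(df["group"].value_counts())

# Percentiles and quartiles
print(df["score"].quantile([0.25, 0.5, 0.75, 0.9]))
```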
Relationship Measures
Analyze relationships between variables:
- Correlation Analysis - Measure and test variable relationships
- Covariance Calculator - Compute covariance between variables
- Contingency Tables - Analyze relationships between categorical variables
Inferential Statistics
Statistical tests to make inferences about populations based on sample data.
Mean Comparisons
Compare means between groups or against reference values:
- Independent Samples t-Test - Compare means between two independent groups
- Paired Samples t-Test - Compare paired measurements
- One-Sample t-Test - Compare a sample mean to a known value
- One-Way ANOVA - Compare means across multiple groups
- Repeated Measures ANOVA - Analyze repeated measurements
- Factorial ANOVA - Test effects of multiple factors and interactions
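For orientation, the sketch below shows how the most common mean-comparison tests look in code, using scipy on hypothetical data (scipy is our assumption here, not something StatFusion requires):

```python
from scipy import stats

# Hypothetical measurements for three independent groups
g1 = [5.1, 4.9, 5.6, 5.0, 5.3]
g2 = [4.4, 4.8, 4.2, 4.6, 4.5]
g3 = [5.8, 6.1, 5.7, 6.0, 5.9]

# Independent samples t-test (two groups)
t, p = stats.ttest_ind(g1, g2)
print(f"t = {t:.2f}, p = {p:.4f}")

# Paired samples t-test (e.g., before/after measurements)
before, after = [5.0, 5.2, 4.8, 5.1], [5.4, 5.6, 5.1, 5.5]
print(stats.ttest_rel(before, after))

# One-way ANOVA across the three groups
F, p = stats.f_oneway(g1, g2, g3)
print(f"F = {F:.2f}, p = {p:.4f}")
```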
Non-Parametric Tests
Distribution-free alternatives when parametric assumptions aren’t met:
- Mann-Whitney U Test - Non-parametric alternative to independent t-test
- Wilcoxon Signed-Rank Test - Non-parametric alternative to paired t-test
- Kruskal-Wallis Test - Non-parametric alternative to one-way ANOVA
- Friedman Test - Non-parametric alternative to repeated measures ANOVA
Categorical Data Analysis
Tests for analyzing categorical and count data:
- Chi-Square Test - Test independence between categorical variables
- Fisher’s Exact Test - Alternative to Chi-Square for small samples
- McNemar’s Test - Compare paired proportions
- Proportion Test - Compare proportions between groups
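As an illustration, a chi-square test and Fisher's exact test on a hypothetical 2x2 contingency table can be sketched in Python with scipy (our choice of library for these examples):

```python
from scipy import stats

# Hypothetical 2x2 table of counts: treatment vs. outcome
table = [[20, 10],
         [12, 18]]

# Chi-square test of independence
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")

# Fisher's exact test, preferred when expected counts are small
odds_ratio, p_exact = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_exact:.4f}")
```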
Correlation & Association Tests
Tests for measuring relationships between variables:
- Pearson Correlation - Test linear relationship between variables
- Spearman Correlation - Test monotonic relationship between variables
- Point-Biserial Correlation - Correlation between binary and continuous variables
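The three correlation tests listed above look like this in a minimal scipy sketch on made-up paired observations:

```python
from scipy import stats

# Hypothetical paired observations
x = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]
y = [2.3, 2.9, 3.6, 4.8, 5.2, 6.4]

# Pearson correlation: strength of the linear relationship
r, p = stats.pearsonr(x, y)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")

# Spearman correlation: strength of the monotonic relationship (rank-based)
rho, p = stats.spearmanr(x, y)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")

# Point-biserial correlation: binary vs. continuous variable
binary = [0, 0, 0, 1, 1, 1]
r_pb, p = stats.pointbiserialr(binary, y)
print(f"point-biserial r = {r_pb:.2f}, p = {p_pb:.4f}" if False else (r_pb, p))
```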
Distribution & Variance Tests
Tests for analyzing data distributions and variances:
- Normality Tests - Test if data follows a normal distribution
- Levene’s Test - Test homogeneity of variances
- F-Test - Compare variances between two groups
- Bartlett’s Test - Test homogeneity of variances across multiple groups
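For illustration, the normality and variance checks above map directly onto scipy calls (hypothetical data again):

```python
from scipy import stats

# Hypothetical samples from three groups
g1 = [5.1, 4.9, 5.6, 5.0, 5.3, 5.2]
g2 = [4.4, 4.8, 4.2, 4.6, 4.5, 4.9]
g3 = [5.8, 6.1, 5.7, 6.0, 5.9, 6.2]

# Shapiro-Wilk normality test on one sample
print(stats.shapiro(g1))

# Levene's test: robust check for equal variances across groups
print(stats.levene(g1, g2, g3))

# Bartlett's test: equal variances, but assumes normality
print(stats.bartlett(g1, g2, g3))
```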
Post-Hoc & Multiple Comparison Tests
Follow-up tests that control the error rate after a significant overall effect:
- Tukey’s HSD - Compare all possible pairs of means
- Bonferroni Correction - Control family-wise error rate
- Scheffé’s Method - Test all possible contrasts
- Dunnett’s Test - Compare each group to a control
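One way to run these follow-up comparisons in code is with statsmodels (an assumption for these sketches), shown here on made-up data:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.stats.multitest import multipletests

# Hypothetical data: one value column and a group label per observation
values = np.array([5.1, 4.9, 5.6, 4.4, 4.8, 4.2, 5.8, 6.1, 5.7])
groups = np.array(["A", "A", "A", "B", "B", "B", "C", "C", "C"])

# Tukey's HSD: all pairwise group comparisons with family-wise error control
print(pairwise_tukeyhsd(endog=values, groups=groups, alpha=0.05))

# Bonferroni correction applied to a set of raw p-values
reject, p_adj, _, _ = multipletests([0.01, 0.04, 0.20], method="bonferroni")
print(reject, p_adj)
```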
Regression Analysis
Methods for modeling relationships between variables and making predictions.
Linear Regression
Model linear relationships between variables:
- Simple Linear Regression - Model relationship between two variables
- Multiple Linear Regression - Model using multiple predictors
- Polynomial Regression - Model non-linear relationships
- ANCOVA - Combine ANOVA with regression to compare group means while adjusting for covariates
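A simple linear regression can be sketched with statsmodels' OLS on simulated data; this is only an illustration of the method, not StatFusion's own implementation:

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical predictor and response with a known linear trend plus noise
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 0.8 * x + rng.normal(scale=1.0, size=50)

# Simple linear regression: add an intercept column, then fit ordinary least squares
X = sm.add_constant(x)
model = sm.OLS(y, X).fit()
print(model.summary())   # coefficients, R-squared, p-values
print(model.params)      # intercept and slope estimates
```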
Generalized Linear Models
Models for various types of dependent variables:
- Logistic Regression - Model binary outcomes
- Poisson Regression - Model count data
- Ordinal Regression - Model ordered categorical outcomes
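As a sketch of the generalized linear model idea, logistic regression can be fit as a binomial GLM with a logit link (statsmodels, simulated data; assumptions on our part):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical binary outcome driven by one predictor
rng = np.random.default_rng(1)
x = rng.normal(size=200)
p_true = 1 / (1 + np.exp(-(0.5 + 1.2 * x)))
y = rng.binomial(1, p_true)

# Logistic regression via a binomial GLM
X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print(fit.summary())
print(np.exp(fit.params))  # exponentiated coefficients, i.e., odds ratios
```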
Advanced Statistical Methods
Sophisticated techniques for complex data analysis problems.
Multivariate Analysis
Techniques for analyzing multiple variables simultaneously:
- Principal Component Analysis - Reduce dimensionality while preserving variance
- Factor Analysis - Identify underlying factors in data
- Discriminant Analysis - Classify observations into groups
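To illustrate dimensionality reduction, here is a minimal PCA sketch with scikit-learn on simulated correlated variables (library and data are our assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: 100 observations of 5 correlated variables
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.hstack([base, base @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(100, 3))])

# Standardize, then project onto the first two principal components
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
scores = pca.fit_transform(X_std)
print(pca.explained_variance_ratio_)  # share of variance captured by each component
```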
Cluster Analysis
Methods for finding natural groupings in data:
- K-Means Clustering - Partition data into k clusters
- Hierarchical Clustering - Build a hierarchy of clusters
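Both clustering approaches can be sketched on simulated two-dimensional data, using scikit-learn for k-means and scipy for the hierarchical tree:

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 2-D data with two loose groups
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])

# K-means: partition the observations into k = 2 clusters
km = KMeans(n_clusters=2, n_init=10, random_state=3).fit(X)
print(km.labels_[:10])
print(km.cluster_centers_)

# Hierarchical clustering: build a tree with Ward linkage, then cut into 2 clusters
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels[:10])
```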
Time Series Analysis
Methods for analyzing time-ordered data:
- Time Series Decomposition - Separate trend, seasonality, and residuals
- Autocorrelation Analysis - Measure serial correlation
- Forecasting Models - Predict future values based on historical data
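Decomposition and autocorrelation can be illustrated with statsmodels on a simulated monthly series (again, the library and data are assumptions for the sketch):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose
from statsmodels.graphics.tsaplots import plot_acf

# Hypothetical monthly series: trend + yearly seasonality + noise
rng = np.random.default_rng(7)
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
y = pd.Series(0.5 * np.arange(48) + 5 * np.sin(2 * np.pi * np.arange(48) / 12)
              + rng.normal(scale=1.0, size=48), index=idx)

# Separate trend, seasonal, and residual components
result = seasonal_decompose(y, model="additive", period=12)
print(result.trend.dropna().head())

# Autocorrelation plot to inspect serial correlation
plot_acf(y, lags=24)
```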
Survival Analysis
Methods for analyzing time-to-event data:
- Kaplan-Meier Estimator - Estimate survival function
- Cox Proportional Hazards - Model influence of covariates on survival
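One possible code sketch for the Kaplan-Meier estimator uses the third-party lifelines package (our choice for illustration, not part of StatFusion) on made-up follow-up times:

```python
from lifelines import KaplanMeierFitter

# Hypothetical time-to-event data: follow-up times and event indicators
durations = [5, 8, 12, 14, 20, 22, 25, 30, 33, 40]   # e.g., months of follow-up
observed  = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]           # 1 = event occurred, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed)

print(kmf.median_survival_time_)   # estimated median survival time
print(kmf.survival_function_)      # estimated survival curve S(t)
```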
Study Design & Power Analysis
Tools to plan studies and ensure adequate statistical power.
Sample Size Calculation
Determine the sample size required for adequately powered studies:
- t-Test Sample Size - Calculate sample size for t-tests
- ANOVA Sample Size - Calculate sample size for ANOVA
- Proportion Sample Size - Calculate sample size for proportion tests
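For a sense of the calculation behind these tools, here is a sketch using statsmodels' power module (our assumption; StatFusion's calculators need no code):

```python
from statsmodels.stats.power import TTestIndPower, FTestAnovaPower

# Sample size per group for an independent t-test detecting a medium effect
# (Cohen's d = 0.5) with 80% power at alpha = 0.05 (roughly 64 per group)
n_ttest = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_ttest))

# Total sample size for a one-way ANOVA with 3 groups and a medium effect (f = 0.25)
n_anova = FTestAnovaPower().solve_power(effect_size=0.25, alpha=0.05,
                                        power=0.80, k_groups=3)
print(round(n_anova))
```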
Effect Size Calculators
Quantify the magnitude of effects in your data:
- Cohen’s d Calculator - Effect size for t-tests
- Eta-squared Calculator - Effect size for ANOVA
- Odds Ratio Calculator - Effect size for categorical data
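Cohen's d and an odds ratio can also be computed by hand; the plain-numpy sketch below, on made-up numbers, mirrors what these calculators report:

```python
import numpy as np

# Hypothetical two-group data
g1 = np.array([5.1, 4.9, 5.6, 5.0, 5.3, 5.2])
g2 = np.array([4.4, 4.8, 4.2, 4.6, 4.5, 4.9])

# Cohen's d using the pooled standard deviation
n1, n2 = len(g1), len(g2)
pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
cohens_d = (g1.mean() - g2.mean()) / np.sqrt(pooled_var)
print(f"Cohen's d = {cohens_d:.2f}")

# Odds ratio from a hypothetical 2x2 table of counts
#            event   no event
# exposed      20        10
# unexposed    12        18
odds_ratio = (20 / 10) / (12 / 18)
print(f"odds ratio = {odds_ratio:.2f}")
```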
How to Choose the Right Statistical Test
Selecting the appropriate statistical test is crucial for valid analysis. Here’s a guide to help you choose:
Based on Research Question
- Comparing two independent groups: Independent Samples t-Test
- Comparing paired measurements: Paired Samples t-Test
- Comparing a sample to a known value: One-Sample t-Test
- Comparing multiple groups: One-Way ANOVA
- Examining relationship between variables: Correlation/Regression
- Analyzing categorical data: Chi-Square Test
Based on Data Type
- Two continuous variables: Correlation, Regression
- One categorical, one continuous variable: t-Test, ANOVA
- Two categorical variables: Chi-Square Test, Fisher’s Exact Test
- Ordered categorical data: Spearman Correlation, Ordinal Regression
- Count data: Poisson Regression, Chi-Square Test
- Time-to-event data: Survival Analysis
Not sure which test to use?
Our interactive Statistical Test Selector can help you choose the right statistical test based on your research question and data characteristics.
Statistical Analysis Workflow
For most analyses in StatFusion, we recommend following this general workflow:
1. Data Preparation
   - Check for missing values and outliers
   - Verify data types and structure
   - Consider necessary transformations
2. Exploratory Analysis
   - Calculate descriptive statistics
   - Create visualizations to understand distributions
   - Examine relationships between variables
3. Assumption Checking
   - Verify the assumptions of your planned statistical test
   - Consider alternative tests if assumptions are violated
4. Hypothesis Testing
   - Run the appropriate statistical test
   - Interpret the p-value and effect size
   - Consider practical significance alongside statistical significance
5. Post-Hoc Analysis
   - Conduct follow-up tests as needed
   - Explore unexpected findings
6. Report Results
   - Present findings with appropriate visualizations
   - Include test statistics, p-values, and effect sizes
   - Provide confidence intervals when applicable
Each tool in StatFusion guides you through this workflow with specific instructions for that analysis.
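To make the workflow concrete, here is a compact end-to-end sketch for a two-group comparison. It is illustrative only: it assumes pandas and scipy (version 1.10 or later for the confidence-interval step) and uses simulated data.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical two-group dataset
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["A"] * 30 + ["B"] * 30,
    "value": np.concatenate([rng.normal(5.0, 1.0, 30), rng.normal(5.6, 1.0, 30)]),
})

# 1-2. Data preparation and exploratory analysis
print(df.isna().sum())                          # missing values per column
print(df.groupby("group")["value"].describe())  # descriptive statistics by group

# 3. Assumption checking: normality per group and equal variances
a = df.loc[df.group == "A", "value"]
b = df.loc[df.group == "B", "value"]
print(stats.shapiro(a).pvalue, stats.shapiro(b).pvalue)
print(stats.levene(a, b).pvalue)

# 4. Hypothesis test and effect size (Cohen's d; equal group sizes here)
t, p = stats.ttest_ind(a, b)
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (a.mean() - b.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")

# 6. Report: 95% confidence interval for the mean difference (scipy >= 1.10)
ci = stats.ttest_ind(a, b).confidence_interval()
print(ci.low, ci.high)
```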
Common Statistical Questions
What is the difference between parametric and non-parametric tests?
Parametric tests (like t-tests and ANOVA) assume that your data follows a specific distribution (usually normal) and work with parameters like means and standard deviations. Non-parametric tests (like Mann-Whitney U and Kruskal-Wallis) don't make assumptions about the underlying distribution and often work with ranks rather than raw values. Non-parametric tests are generally somewhat less powerful when parametric assumptions hold, but more robust when those assumptions are violated.
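A quick illustration of the contrast, using scipy on hypothetical skewed data:

```python
from scipy import stats

# Hypothetical data in two independent groups; note the outlier in g1
g1 = [1.2, 1.5, 1.7, 2.0, 2.1, 2.4, 9.8]
g2 = [2.2, 2.6, 2.8, 3.1, 3.3, 3.5, 3.9]

# Parametric: compares means, assumes roughly normal data
print(stats.ttest_ind(g1, g2))

# Non-parametric: rank-based, far less sensitive to the outlier
print(stats.mannwhitneyu(g1, g2))
```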
How can I check whether my data are normally distributed?
You can assess normality using:
- Visual methods: histograms, Q-Q plots
- Statistical tests: Shapiro-Wilk test, Kolmogorov-Smirnov test
Remember that with large sample sizes, minor deviations from normality usually don’t affect parametric tests due to the Central Limit Theorem.
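Both approaches are easy to sketch in Python (scipy and matplotlib assumed, simulated sample):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical sample
rng = np.random.default_rng(5)
x = rng.normal(loc=10, scale=2, size=80)

# Statistical test: Shapiro-Wilk (null hypothesis: data are normal)
print(stats.shapiro(x))

# Visual check: Q-Q plot against the normal distribution
fig, ax = plt.subplots()
stats.probplot(x, dist="norm", plot=ax)
plt.show()
```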
What p-value threshold should I use?
The conventional threshold is p < 0.05, meaning there's less than a 5% chance of observing results at least as extreme as yours if the null hypothesis is true. However, this threshold is somewhat arbitrary. In some fields, more stringent thresholds (p < 0.01 or p < 0.001) are used, especially for multiple comparisons. Consider your research context, sample size, and potential implications when interpreting p-values.
Why report effect sizes in addition to p-values?
P-values tell you whether an effect exists but not how large or meaningful it is. Effect sizes quantify the magnitude of the effect, helping you assess practical significance. With very large samples, even tiny, practically meaningless differences can be statistically significant. Effect sizes provide context for interpreting your results.
When should I use a one-tailed versus a two-tailed test?
- Two-tailed tests (more common) detect effects in either direction. Use when you're interested in whether there's a difference, regardless of direction.
- One-tailed tests only detect effects in one specific direction. Use only when you have a strong theoretical reason to expect an effect in only one direction and would consider any effect in the opposite direction as equivalent to no effect.
Note that one-tailed tests should be specified before seeing the data to avoid misuse.
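In code, the distinction is often just a parameter; for example, scipy's t-test exposes it as the alternative argument (hypothetical data below):

```python
from scipy import stats

# Hypothetical groups where, before collecting data, we expected g1 > g2
g1 = [5.6, 5.8, 5.4, 5.9, 5.7]
g2 = [5.1, 5.3, 5.0, 5.2, 5.4]

# Two-tailed (default): is there a difference in either direction?
print(stats.ttest_ind(g1, g2, alternative="two-sided"))

# One-tailed: is the mean of g1 greater than the mean of g2?
print(stats.ttest_ind(g1, g2, alternative="greater"))
```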
Statistical Analysis Resources
For more information on statistical methods and best practices, we recommend these resources:
- Introduction to Statistical Learning
- Statistics How To
- UCLA’s Statistics Consulting Resources
- Journal of Statistics Education
Citation
@online{kassambara2025,
author = {Kassambara, Alboukadel},
title = {Statistical {Analysis} {Tools} \textbar{} {StatFusion}},
date = {2025-04-10},
url = {https://www.datanovia.com/apps/statfusion/analysis/index.html},
langid = {en}
}