Statistics Dictionary

Negative Binomial Experiment
A negative binomial experiment is a statistical experiment that has the following properties:

The experiment consists of x repeated trials.
Each trial can result in just two possible outcomes. We call one of these outcomes a success and the other a failure.
The probability of success, denoted by p, is the same on every trial.
The trials are independent; that is, the outcome on one trial does not affect the outcome on other trials.
The experiment continues until r successes are observed, where r is specified in advance.
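The five properties above can be sketched as a short simulation. This is a minimal illustration, not part of the original entry; the function name is mine.

```python
import random

def negative_binomial_trials(p, r, rng=random):
    """Run independent success/failure trials, each with the same success
    probability p, until r successes occur; return the number of trials."""
    successes = 0
    trials = 0
    while successes < r:          # experiment continues until r successes
        trials += 1               # one more repeated trial
        if rng.random() < p:      # two outcomes; same p, independent trials
            successes += 1
    return trials
```

Note that the number of trials is not fixed in advance; only the number of successes r is, which is what distinguishes this from a binomial experiment.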
Consider the following statistical experiment. You flip a coin repeatedly and count the number of times the coin lands on heads. You continue flipping until the coin has landed on heads 5 times. This is a negative binomial experiment because:

The experiment consists of repeated trials. We flip a coin repeatedly until it has landed on heads 5 times.
Each trial can result in just two possible outcomes - heads or tails.
The probability of success is constant - 0.5 on every trial.
The trials are independent; that is, getting heads on one trial does not affect whether we get heads on other trials.
The experiment continues until a fixed number of successes has occurred; in this case, 5 heads.
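One way to check the coin example numerically (a sketch I am adding, not part of the original entry) is to simulate it many times. With p = 0.5 and r = 5, the average number of flips needed should come out close to r/p = 10.

```python
import random

def flips_until_five_heads(rng):
    """Flip a fair coin until the 5th head; return the number of flips."""
    heads = flips = 0
    while heads < 5:                 # stop at the 5th head
        flips += 1
        if rng.random() < 0.5:       # fair coin: P(heads) = 0.5
            heads += 1
    return flips

rng = random.Random(42)
runs = [flips_until_five_heads(rng) for _ in range(100_000)]
print(sum(runs) / len(runs))         # close to r/p = 5 / 0.5 = 10
```

Every simulated run takes at least 5 flips (you cannot see 5 heads in fewer), and the sample mean settles near 10 as the number of runs grows.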