Statistics Dictionary

z-score
A z-score (also known as a standard score) indicates how many
standard deviations an element is from the mean. A z-score can be
calculated from the following formula:

z = (X - μ) / σ

where z is the z-score, X is the value of the element, μ is the population
mean, and σ is the population standard deviation.
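The formula translates directly into code. A minimal sketch in Python (the values 70, 60, and 5 below are made-up illustrations, not from the entry):

```python
def z_score(x, mu, sigma):
    """Return the z-score of x: how many standard deviations x lies from mu."""
    return (x - mu) / sigma

# Hypothetical example: an element of 70, population mean 60, standard deviation 5.
z = z_score(70, 60, 5)
print(z)  # 2.0: the element is two standard deviations above the mean
```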

Here is how to interpret z-scores:

- A z-score less than 0 represents an element less than the mean.
- A z-score greater than 0 represents an element greater than the mean.
- A z-score equal to 0 represents an element equal to the mean.
- A z-score equal to 1 represents an element that is 1 standard
  deviation greater than the mean; a z-score equal to 2, 2
  standard deviations greater than the mean; and so on.
- A z-score equal to -1 represents an element that is 1 standard
  deviation less than the mean; a z-score equal to -2, 2
  standard deviations less than the mean; and so on.
- If the elements are approximately normally distributed, about 68% of
  them have a z-score between -1 and 1; about 95% have a
  z-score between -2 and 2; and about 99.7% have a z-score between
  -3 and 3.
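Those percentages can be checked empirically by simulation; here is a sketch that draws from a standard normal distribution (the sample size and seed are arbitrary choices):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
draws = [random.gauss(0, 1) for _ in range(100_000)]

def fraction_within(k):
    """Fraction of draws whose z-score lies within k standard deviations of 0."""
    return sum(abs(z) < k for z in draws) / len(draws)

# Each fraction should land close to 0.68, 0.95, and 0.997 respectively.
print(fraction_within(1), fraction_within(2), fraction_within(3))
```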
Here is another way to think about z-scores: a z-score is the value of a
normal random variable drawn from a standard normal distribution.