Oxford University Press - Online Resource Centres

Davis & Pecar: Quantitative Methods for Decision Making using Excel

Online glossary

The glossary terms are arranged alphabetically. Access the hyperlinks below to 'jump' to various sections in the glossary. You can also access the full version in PDF format.

[A] [B] [C] [D] [E] [F] [G] [H] [I] [J] [K] [L] [M] [N] [O] [P] [Q] [R] [S] [T] [U] [V] [W] [X] [Y] [Z]

A

a priori probability
Method assumes that you already know what the probability should be from a theoretical understanding of the experiment being conducted.

Abstractiveness
Availability of appropriate data and information.

Acceptance sampling
Acceptance sampling is used by industries worldwide for assuring the quality of incoming and outgoing goods.

Acceptance sampling process
Acceptance sampling uses statistical sampling to determine whether to accept or reject a production lot of material.

Accuracy
The difference between the average, or mean, of a number of readings and the true, or target, value.

Activities
Specific jobs or tasks that are components of a project.

Activity cost function
A methodology that measures the cost of activities, resources, and cost objects.

Activity on arrow
A network diagram showing sequence of activities, in which each activity is represented by an arrow, with a circle representing a node or event at each end.

Activity on node
A network where activities are represented by a box or a node linked by dependencies.

Addition law
The addition rule is a result used to determine the probability that event A or event B occurs or both occur.

Additive model
A model from the classical decomposition method that assumes that the components are related in an additive way.

Aggregate price indices
Monitor the way that a set of prices changes over time.

Algebra
Use of symbols to represent variables and describe the relationships between them.

Allocating resources
Apportioning shared resources, e.g. men, machines, time, and finance, among the users of those resources, usually in a measured economic fashion, for example to reduce cost.

Alpha, α
Alpha refers to the probability that the true population parameter lies outside the confidence interval. In a time series context, i.e. exponential smoothing, alpha is the smoothing constant.

Alternative hypothesis
The hypothesis that is accepted when the null hypothesis is rejected.

Annuity
Amount invested to give a fixed income over some period.

Answer report
A report obtainable from using Solver indicating the solution variables and their values.

Arithmetic
Calculations with numbers.

Arithmetic mean
The average of a set of numbers.

ARIMA
Auto Regressive Integrated Moving Average. A type of stochastic model that describes a time series using both autoregressive and moving average components.

Assignable causes
Also known as ‘special cause’. An assignable cause is an identifiable, specific cause of variation in a given process or measurement.

Autocorrelation
Correlation between the time series with itself, but where the observations are shifted by a certain number of periods. The series of these autocorrelation coefficients for every time shift produces the autocorrelation function.
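As a minimal illustration (not from the book; the sales figures are made up), the lag-k autocorrelation coefficient can be computed as:

```python
# Lag-k autocorrelation: correlation of a series with itself shifted
# by k periods (illustrative sketch; data values are hypothetical).
def autocorr(series, lag):
    n = len(series)
    mean = sum(series) / n
    num = sum((series[t] - mean) * (series[t - lag] - mean) for t in range(lag, n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

sales = [10, 12, 14, 13, 15, 17, 16, 18]
r1 = autocorr(sales, 1)   # autocorrelation at lag 1
```

Computing `autocorr` for every lag produces the autocorrelation function mentioned above.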

Average range
Average range calculated to monitor process variability.

Axes
Rectangular scales for drawing graphs.

[Back to top]

B

Backward pass
Part of the PERT/CPM procedure that involves moving backwards through the network to determine the latest start and latest finish times for each activity.

Bar chart
A diagram that represents the frequency of observations in a class by the length of a bar.

Base period
The fixed point of reference for an index.

Bayes’ theorem
Bayes’ theorem is a result that allows new information to be used to update the conditional probability of an event.
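A minimal sketch of updating a conditional probability with Bayes' theorem (the prevalence and test accuracy figures are illustrative, not from the book):

```python
# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B), where
# P(B) = P(B|A)P(A) + P(B|not A)P(not A).
def bayes(p_a, p_b_given_a, p_b_given_not_a):
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# A test that is 99% sensitive and 95% specific, for a condition
# with 1% prevalence (hypothetical numbers):
posterior = bayes(0.01, 0.99, 0.05)
```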

Beta distribution
The Beta distribution models events which are constrained to take place within an interval defined by a minimum and maximum value.

Beta, β
Beta refers to the probability of making a Type II error, i.e. failing to reject the null hypothesis when it is false.

Biased
A systematic error in a sample.

Binomial distribution
A binomial distribution can be used to model a range of discrete random data variables.

Binomial probability distribution
Is the discrete probability distribution of the number of successes in a sequence of n independent yes/no experiments, each of which yields success with probability p.

Binomial probability function
The binomial distribution is a type of discrete probability distribution. It shows the probability of achieving ‘d’ successes in a sample of ‘n’ taken from an ‘infinite’ population where the probability of a success is ‘p’.
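The binomial probability function can be evaluated directly from the formula P(X = d) = C(n, d) pᵈ (1 − p)ⁿ⁻ᵈ; a short sketch with illustrative numbers:

```python
from math import comb

# Binomial probability function: probability of exactly d successes
# in n independent trials with success probability p.
def binomial_pmf(d, n, p):
    return comb(n, d) * p**d * (1 - p)**(n - d)

# Probability of exactly 2 defectives in a sample of 10 when p = 0.1
# (hypothetical quality-control example):
prob = binomial_pmf(2, 10, 0.1)
```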

Box and whisker plot
Is a way of summarizing a set of data measured on an interval scale.

[Back to top]

C

Capacity planning
Is the process of determining the production capacity needed by an organization to meet changing demands for its products.

Category
A set of data is said to be categorical if the values or observations belonging to it can be sorted according to category.

Central Limit Theorem
States that whenever a random sample of size n is taken from any distribution with mean µ and variance σ², then the sample mean will be approximately normally distributed with mean µ and variance σ²/n. The larger the value of the sample size n, the better the approximation to the normal.
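A quick simulation sketch of the theorem (sample size and number of repetitions are arbitrary choices): means of samples drawn from a uniform, i.e. non-normal, population cluster around the population mean with variance close to σ²/n.

```python
import random

random.seed(42)                      # reproducible illustration
n = 30                               # sample size
means = [sum(random.uniform(0, 1) for _ in range(n)) / n
         for _ in range(2000)]       # 2000 sample means

grand_mean = sum(means) / len(means)              # close to mu = 0.5
var_of_means = (sum((m - grand_mean) ** 2 for m in means)
                / len(means))        # close to sigma^2/n = (1/12)/30
```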

Central tendency
Measures the location of the middle or the centre of a distribution.

Changing cell
In a Solver model, one or more cells that feed into a target cell whose value is to be determined by the Solver. Changing cells will always be numerical values and will never contain formulae.

Chart
Used to refer to any form of graphical display.

Chebyshev’s theorem
A rule that applies to any distribution and enables us to identify outliers: at least 1 − 1/k² of the values lie within k standard deviations of the mean.

Chi-squared distribution
With k degrees of freedom is the distribution of a sum of the squares of k independent standard normal random variables. It is useful because, under reasonable assumptions, easily calculated quantities can be proven to have distributions that approximate to the chi-squared distribution if the null hypothesis is true.

Class
Range or entry in a frequency distribution.

Classical decomposition method
Approach to forecasting that decomposes a time series into constituent components (trend, cyclical, seasonal, and random component), makes estimates of every component and then recomposes the time series and extrapolates the values into the future.

Coefficient of correlation
A value that indicates the strengths of relationship between two variables.

Coefficient of variation
Measures the spread of a set of data as a proportion of its mean. It is often expressed as a percentage.

Coefficient of determination
The ratio of the explained and the total variance. It provides a measure of how well the model describes the data set.

Coefficients of objective
The variable multipliers in the objective function.

Common causes
The common cause variation arises from a multitude of small factors that invariably affect any process and will conform to a normal distribution, or a distribution that is closely related to the normal distribution.

Common logarithms
Logarithm to the base 10.

Compound Annual Growth Rate (CAGR)
The year-over-year growth rate of an investment over a specified period of time.
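CAGR follows directly from the compound growth formula, CAGR = (end/start)^(1/years) − 1; a short sketch with illustrative figures:

```python
# Compound Annual Growth Rate (values are hypothetical).
def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1 / years) - 1

# An investment growing from 10,000 to 16,105.10 over 5 years:
growth = cagr(10000, 16105.10, 5)   # roughly 10% per year
```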

Compound interest
Arises when interest is added to the principal, so that, from that moment on, the interest that has been added also earns interest.
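The compounding described above gives the familiar future-value formula FV = P(1 + r)ⁿ; a minimal sketch (principal and rate are illustrative):

```python
# Compound interest: interest earned in each period itself earns
# interest in later periods (hypothetical figures).
def compound(principal, rate, periods):
    return principal * (1 + rate) ** periods

fv = compound(1000, 0.05, 10)   # 1,000 at 5% per year for 10 years
```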

Conditional probability
Allows event probability values to be updated when new information is made available.

Confidence interval
A confidence interval specifies a range of values within which the unknown population parameter may lie, e.g. mean.
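For the common case of a mean with known σ, the interval is mean ± z·σ/√n; a sketch with illustrative data (z = 1.96 for a 95% level):

```python
from math import sqrt

# 95% confidence interval for a population mean, sigma known
# (sample figures are hypothetical).
def ci_mean(mean, sigma, n, z=1.96):
    half_width = z * sigma / sqrt(n)
    return mean - half_width, mean + half_width

low, high = ci_mean(mean=50, sigma=8, n=64)
```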

Confidence level
The confidence level is the probability value (1 − α) associated with a confidence interval.

Conservative approach (or pessimistic approach)
Would be used by a conservative decision maker who identifies the minimum payoff for each decision and then selects the maximum of these minimum payoffs (Wald’s maximin criterion rule).

Constraints
Restrictions or limitations imposed on a problem. In a Solver model, the limitations upon the values that can be adopted by a target cell and/or changing cells that reflect the logical requirements of the model.

Continuous probability distribution
If a random variable is a continuous variable, its probability distribution is called a continuous probability distribution.

Continuous quality improvement (CQI)
The ongoing betterment of products, services, or processes through incremental and breakthrough enhancements.

Continuous variable
A set of data is said to be continuous if the values belong to a continuous interval of real values.

Contribution
The value of each variable’s contribution within the linear programming objective function and constraints.

Control chart
Control charts are used to monitor the output of a process. They generally monitor the process mean, the process variation, or a combination of both.

Convex polyhedron
Geometrically, the linear constraints define a convex polyhedron, which is called the feasible region.

Coordinate geometry
Values of x and y that define a point on Cartesian axes.

Crash cost
Cost of crashing a project.

Crash time
Time to crash a project.

Crashing
The shortening of activity times by adding resources and hence usually increasing cost.

Crashing a network
The shortening of activity times by adding resources and hence usually increasing cost.

Crashing a project
Shortening a project time by spending money to reduce critical activities, for example, working overtime.

Critical path
The longest path in a project network.

Critical path activities
The activities on the critical path.

Critical path analysis
Critical path analysis (and PERT) are powerful tools to help schedule and manage complex projects.

Critical path method (CPM)
A network-based project scheduling procedure.

Critical test statistic
The critical value for a hypothesis test is a limit at which the value of the sample test statistic is judged to be such that the null hypothesis may be rejected.

Cumulative frequency distribution
The cumulative frequency for a value x is the total number of scores that are less than or equal to x.

Curve
A curved line that visualises a nonlinear function.

Cyclical component
One of the components from the classical time series analysis that will define the long-term regularities (more than a year) of the movements of the time series.

Cyclical variations
Variations around the trend line that take place over a larger number of years.

[Back to top]

D

Data collection
The gathering of facts that can then be used to make decisions.

Data presentation
The method used to present findings in numerical and graphical form.

Decimal fraction
Part of the whole described by a number following a decimal point, such as 0.2, 0.34.

Decision alternatives
The decision maker develops a finite number of alternative decisions available when the decision model is developed.

Decision criteria
Simple rules that recommend an alternative for decisions with uncertainty.

Decision making
Making decisions based upon the data and information available.

Decision making process with risk
Implies a degree of uncertainty and an inability to fully control the outcomes or consequences of such an action.

Decision making under uncertainty
The chance of an event (or events) occurring in the decision making process is unknown or difficult to assess.

Decision making with certainty
The decision maker knows for sure (that is, with certainty) the outcome or consequence of every decision alternative.

Decision node
Points in a decision tree where decisions are made.

Decision tree
Diagram that represents a series of alternatives and events by the branches of a tree.

Decision variables
A controllable set of inputs for a linear programming model.

Definite integral
Evaluation of the indefinite integral at two points to find the difference.

Degrees of freedom
Refers to the number of independent observations in a sample minus the number of population parameters that must be estimated from sample data.

Depreciation
Refers to the decrease in the value of assets.

Deseasonalized
Values of the variable from which any seasonal influences have been removed.

Deterministic
Describing a situation of certainty.

Deterministic model
A deterministic model will give the same result output(s) based upon the same input(s).

Deviation
A difference between the actual values in the series and the trend that approximates the time series.

Differentiation
Algebraic process to calculate the instantaneous rate of change of one variable.

Directed numbers
The numbers which have a direction and a size are called directed numbers.

Directional test
Implies a direction for the implied hypothesis (one tailed test).

Discount rate
Value of r when discounting to present value.

Discounting
Value of (1 + r)⁻ⁿ when discounting to present value.

Discrete data
Where the values or observations belonging to it are distinct and separate.

Discrete probability distribution
If a random variable is a discrete variable, its probability distribution is called a discrete probability distribution.

Dual price
The improvement in the value of the optimal solution per unit increase in a constraint right-hand side value.

Dummy activities
A dummy activity is a simulated activity of sorts, one that is of zero duration and is created for the sole purpose of demonstrating a specific relationship and path of action on the arrow diagramming method.

[Back to top]

E

Earliest start time (EST)
The earliest time an activity may begin.

Earliest finish time (EFT)
The earliest time an activity may be completed.

Effective annual interest rate
Is the interest rate on a loan or financial product restated from the nominal interest rate as an interest rate with annual compound interest payable in arrears.

Empirical probability
Method whereby the value of a probability is based on measurement.

EOQ
The optimal level of an inventory item that should be ordered periodically so that all relevant costs are minimized.
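The standard EOQ formula is √(2DS/H), where D is annual demand, S the cost per order, and H the annual holding cost per unit; a sketch with illustrative figures:

```python
from math import sqrt

# Economic Order Quantity: order size that minimizes the total of
# ordering and holding costs (demand and cost figures are hypothetical).
def eoq(demand, order_cost, holding_cost):
    return sqrt(2 * demand * order_cost / holding_cost)

q = eoq(demand=1200, order_cost=50, holding_cost=6)
```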

Equation
Algebraic formula that shows the relationship between variables, saying that the value of one expression equals the value of the second expression.

Equation of the line
The equation of a straight line, of the form y = mx + c.

Error
A difference between the actual and predicted value (in the context of time series).

Estimate
An estimate is an indication of the value of an unknown quantity based on observed data.

Event
An instantaneous occurrence that changes the state of the system in a model.

Excel solver
Is an Excel add-in that can solve for the values of one or more unknowns in a spreadsheet model.

Expected activity time
The time that an activity is expected to take, an average from pessimistic, optimistic and most likely times.

Expected loss
In this case we are looking to minimize the expected loss by choosing the alternative with the smallest expected loss.

Expected monetary value of perfect information
The expected value of information that would tell the decision maker exactly which state of nature is going to occur (i.e. perfect information).

Expected time
The average activity time.

Expected value
In a mixed strategy game, a value computed by multiplying each payoff by its probability and summing. It can be interpreted as the long-run average payoff for the mixed strategy.

Expected value
The mean of a probability distribution is called the expected value and can be found by multiplying each payoff value by its associated probability.
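The calculation described above (payoff times probability, summed) can be sketched in a couple of lines; the payoffs and probabilities here are illustrative:

```python
# Expected monetary value: sum of each payoff weighted by its
# probability (hypothetical payoff table).
def expected_value(payoffs, probabilities):
    return sum(x * p for x, p in zip(payoffs, probabilities))

emv = expected_value([200, 50, -30], [0.3, 0.5, 0.2])
```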

Explained variations
Also called the regression sum of squares (SSR). This is the portion of the variations between the actual and modelled data that can be explained by the model. Together with SSE, this produces the Total Sum of Squares (SST), or the total variations from the model.

Exponential smoothing method
Forecasting method that uses a constant (or several constants) to ‘smooth’ the past values in the series. The effect of this constant decreases exponentially as older observations are taken into the calculation.

Ex-post forecasts
The forecasts that are approximating the past (actual) values of the variable, as opposed to the true future forecasts. The ex-post forecasts are also called the fitted values.

[Back to top]

F

F distribution
A continuous statistical distribution which arises in the testing of whether two observed samples have the same variance.

Factor
A coefficient or the value that indicates the rate of change of variable (multiplier).

Feasible region
The set of all feasible solutions.

Feasible solution
A decision alternative or solution that satisfies all constraints.

Finite population
Population with a fixed number of items.

Finite population correction factor
It is common practice to use finite population correction factors in estimating variances when sampling from a finite population.

First derivative of y on x
Result of differentiating a function of the form y = f(x).

Five data summary
Consists of using the minimum and maximum values together with the quartiles to summarize data.

Five-number summary
A 5-number summary is especially useful when we have so many data that it is sufficient to present a summary of the data rather than the whole data set. It consists of 5 values: the most extreme values in the data set (maximum and minimum values), the lower and upper quartiles, and the median.

Fixed costs
The portion of the total cost that does not depend on the volume; this cost remains the same no matter how much is produced.

Float
The amount of time available for a task to slip before it results in a delay of the project end date. It is the difference between the task’s early and late start dates.

Forecasting errors
A series of differences between every actual and its corresponding forecasted value in the time series.

Forecasting horizon
A number of future observations that are to be extrapolated.

Forecasting
A method of predicting the future value of a variable, usually represented as the time series values.

Formulation
Formulating the linear programming equations based upon the information provided.

Forward pass
Part of the PERT/CPM procedure that involves moving forward through the project network to determine the earliest start and earliest finish times for each activity.

Four data summary
The four data summary exploits a measure of dispersion together with quartiles that can also represent skewness.

Fraction
Part of the whole expressed as the ratio of a numerator over a denominator.

Frequency distribution
Diagram showing the number of observations in each class.

Function
(as per P152) in this context a subprogram or a mini routine that converts a formula or an algorithm into a single value, or an array of values.

Future value
Value of an investment after a period of time.

Future value of an ordinary annuity (see sinking fund)
A fund that receives regular payments so that a specified sum is available at some point in the future.

[Back to top]

G

Gantt chart
A bar chart that depicts a schedule of activities and milestones. Generally activities are listed along the left side of the chart and the time line along the top or bottom. The activities are shown as horizontal bars of a length equivalent to the duration of the activity. Gantt charts may be annotated with dependency relationships and other schedule-related information.

General addition law
The addition rule is a result used to determine the probability that event A or event B occurs or both occur.

Goal programming
A linear programming approach to multicriteria decision problems whereby the objective function is designed to minimize the deviations from goals.

Gradient
A measure of how steeply a function is changing (dy/dx).

Gradient of the line
Gradient of the line of the form y = mx + c.

Graph
Used to refer to any form of graphical display.

Graphical method
A method for solving linear programming problems with two decision variables on a two-dimensional graph.

Grouped data
Raw data already divided into classes.

Grouped frequency distribution
Data arranged in intervals to show the frequency with which the possible values of a variable occur.

[Back to top]

H

Histogram
Frequency distribution for continuous data.

Holding costs
The cost of holding a given level of inventory over the period of the production run. It is assumed to depend upon the average stock level, the purchase cost of the item, and a percentage of this cost representing the opportunity cost of the funds tied up.

Homoscedasticity
Also called homogeneity or uniformity of variance. Implies that the variance is finite, i.e. the variance around the regression line is constant for all values of the predictor.

Hurwicz criterion rule
Is a weighted average method that is a compromise between the optimistic and pessimistic decisions.

Hypothesis test procedure
A series of steps to determine whether to accept or reject a null hypothesis, based on sample data.

[Back to top]

I

Immediately preceding activities
The activities that must be completed immediately prior to the start of a given activity.

Increasing the sum invested
Adding to the amount deposited at the end of each year to increase the sum invested.

Indefinite integral
The reverse of differentiation.

Independent events
Are events that do not influence each other.

Index or index number
A number that compares the value of a variable at any point in time with its value in a base period.

Infeasible solution
A decision alternative or solution that does not satisfy one or more constraints, i.e. a solution for which at least one constraint is violated.

Instantaneous gradient
Gradient of a curve at a single point.

Integer programming
A linear programme with the additional requirement that one or more of the variables must be integer.

Integration
The reverse of differentiation.

Intercept
A point at which the function crosses the vertical y-axis.

Interest
Amount paid to lenders as reward for using their money.

Internal rate of return
Is a rate of return used in capital budgeting to measure and compare the profitability of investments – also called the discounted cash flow rate of return or the rate of return.

Interquartile range (IQR)
The inter-quartile range is a measure of the spread of or dispersion within a data set.

Interval
An interval scale is a scale of measurement where the distance between any two adjacent units of measurement (or ‘intervals’) is the same but the zero point is arbitrary.

Interval estimate
Is a range of values within which, we believe, the true population parameter lies with high probability.

Inventories
Items such as raw materials, components, finished, and semi-finished goods that are essential inputs in the process of manufacturing the firm’s end products.

Inventory replacement
Is the rate at which the required amount of new inventory can be added to existing stock.

Irregular variations
Any variations around the trend line that are neither cyclical nor seasonal. Also used as another expression for errors or residuals.

[Back to top]

K

Kurtosis
Is a measure of the ‘peakedness’ of the distribution.

[Back to top]

L

Laplace equally likely criterion
Finds the decision alternative with the highest average payoff.

Laspeyres’ index
Base-weighted index.

Latest finish time (LFT)
The latest time an activity may be completed without increasing the project completion time.

Latest start time (LST)
The latest time an activity may begin without increasing the project completion time.

Lead time
The length of time between placing an inventory order and receipt of the delivery from the supplier. A lead time of zero implies immediate delivery, although this may either be in full or in batches.

Least squares method
A method used to estimate parameters that will enable the fitting of a curve to a series of historical data. The criterion is that the sum of all squared differences between the fitted and actual data has to be the minimum value.
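For the simplest case, fitting a straight line y = a + bx, the least squares estimates can be written out directly; the data points below are illustrative:

```python
# Least squares fit of y = a + b*x: choose a and b so that the sum of
# squared differences between fitted and actual values is minimized.
def least_squares(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

a, b = least_squares([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])   # hypothetical data
```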

Limits report
A report obtainable from using Solver. Range over which decision variables can change without breaking constraints while keeping one fixed.

Linear programming
A mathematical model with a linear objective function, a set of linear constraints, and non-negative variables.

Linear regression
A method that models relationships between a response variable and a predictor, or explanatory variable. This relationship has to be linear, i.e. described by a linear model.

Linear relationship
A relationship between two variables of the form y = ax + c, giving a straight line graph.

Logarithm
The value of n when a number is represented in the logarithmic format bⁿ = x.

[Back to top]

M

Mathematical programming (MP)
A family of techniques to optimize objectives when the resources are constrained.

Maximax criterion rule
The optimistic decision maker chooses the largest possible payoff.

Maximizing
The greatest value in the domain of a variable is its maximum.

Mean
The average of a set of numbers.

Mean error
The mean value of all the differences between the actual and forecasted values in the time series.

Mean square error
The mean value of all the squared differences between the actual and forecasted values in the time series.

Measure of dispersion (or spread)
Showing how widely data is dispersed about its centre.

Measure of location
Showing the typical value for a set of data values.

Median
The middle value of a set of numbers.

Microsoft Project
Computer software to plan and control projects.

Minimum criterion rule
The optimistic decision maker chooses the lowest cost.

Minimizing
The smallest value in the domain of a variable is its minimum.

Mode
The most frequent value in a set of numbers.

Model
An abstraction of reality, i.e. a simplified way of representing reality.

Modified internal rate of return
Is a modification of the internal rate of return which is a financial measure of an investment’s attractiveness.

Monte Carlo simulation method
Is a computerized mathematical technique that allows people to account for risk in quantitative analysis and decision making.

Mortgage
Is a loan secured on the purchase of a property.

Most likely time
The most likely activity time under normal conditions.

Moving averages
A series of averages calculated for a rolling, or a moving, number of periods. Every subsequent interval excludes the first observation from the previous interval and includes the next observation following the previous period.
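The rolling calculation described above can be sketched in a few lines (the window length and data are illustrative):

```python
# Moving averages: each average drops the oldest observation from the
# previous window and adds the next one (hypothetical series).
def moving_average(series, window):
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

smoothed = moving_average([12, 15, 11, 14, 16, 13], 3)   # 3-period averages
```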

Multiplication rule of probability
The multiplication rule is a result used to determine the probability that two events, A and B, both occur.

Multiplicative model
A model from the classical decomposition method that assumes that the components are related in a multiplicative way.

Mutually exclusive
Events where only one can happen, but not both.

[Back to top]

N

Natural logarithms
Logarithm to the base e.

Net present value
Is the difference between the present value of cash inflows and the present value of cash outflows.
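NPV discounts each cash flow by (1 + r)ᵗ and sums; a minimal sketch, with a hypothetical project (initial outlay followed by three equal annual inflows):

```python
# Net present value at discount rate r; cash_flows[0] is the initial
# outlay at time 0 (figures are illustrative).
def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

value = npv(0.10, [-1000, 450, 450, 450])   # positive, so worth pursuing
```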

Network analysis
Network analysis is an important project management tool used during the planning phase of a project. It demonstrates activities against fixed time frames and helps in controlling the sequence of activities in a project.

Network diagram
A graphic tool for depicting the sequence and relationships between tasks in a project.

Node
An intersection or junction point of an influence diagram or a decision tree.

Nominal data
Data for which there is no useful quantitative measure.

Non-linear programming
An optimization problem that contains at least one non-linear term in the objective function, or a constraint.

Non-linear regression
A method that models a relationship between a response variable and a predictor, or explanatory variable. This relationship has to be non-linear, i.e. described by a non-linear model.

Non-linear relationship
A relationship between y and x that is not of the form y = mx + c but depends upon the power of x, e.g. y = x² + 4.

Non-negativity constraints
A set of constraints that requires all variables to be non-negative.

Non-parametric tests (also called distribution-free tests)
Are often used in face of their parametric counterparts when certain assumptions about the underlying population are questionable.

Normal cost
The project cost under normal conditions.

Normal distribution
Is a continuous probability distribution that has a bell-shaped probability density function.

Normal probability plot
Graphical technique to assess whether the data is normally distributed.

Normal time
The project time under normal conditions.

Null hypothesis
The original hypothesis that is being tested.

Numbering the nodes
Of a network to identify points in the project, useful for project control as well as identifying the end and start of activities.

Number of combinations
Number of ways of selecting r items from n, when the order of selection does not matter.

[Back to top]

O

Objectives
What we are trying to achieve or optimise towards, e.g. maximizing profit.

Objective function
A mathematical expression that describes the problem’s objective.

One sample test
Is a statistical hypothesis test which uses one sample from the population.

One sample t-test
A one sample t-test is a hypothesis test for answering questions about the mean where the data are a random sample of independent observations from an underlying normal distribution N(µ, σ²), where σ² is unknown.

One-tail test
A one tailed test is a statistical hypothesis test in which the values for which we can reject the null hypothesis, H0, are located entirely in one tail of the probability distribution.

Operating characteristic curve
The OC curve is used in sampling inspection. It plots the probability of accepting a batch of items against the quality level of the batch.

Operational
These are everyday decisions used to support tactical decisions. Their impact is immediate, short-term, short range, and usually has a low impact cost to the organization.

Optimal solution
A best feasible solution according to the objective function.

Optimizing
The specific decision-variable value(s) that provide the ‘best’ output for the model.

Optimistic approach
Would be used by an optimistic decision maker where the largest possible payoff is chosen (maximax criterion rule).

Optimistic time
An activity time estimate based on the assumption that the activity will progress in an ideal manner.

Ordering costs
The cost of ordering a delivery of inventory from the supplier.

Ordinal data
Data that cannot be precisely measured, but that can be ranked or ordered.

Origin
The point where x and y Cartesian axes cross.

Outcome
The result of an experiment.

Outlier
In statistics, an outlier is an observation that is numerically distant from the rest of the data.

Output cells
The cells in a spreadsheet that provide output that depends on the changing cells.

[Back to top]

P

Paasche index
Current-weighted index.

Parameter
A single value (quantity) used to multiply a variable in an equation.

Parametric
Any statistic computed by procedures that assume the data were drawn from a particular distribution.

Parametric test
Hypothesis test that concerns the value of a parameter.

Pareto analysis
A statistical method of identifying the extent to which the majority of stoppages in production can be attributed to stockouts in relatively few items.

Path
A sequence of connected nodes that leads from the Start node to the Finish node.

Pattern
A recurring and predictable sequence of values that look identical or similar.

Payoff table (or matrix)
Table that shows the outcomes for each combination of alternatives and events in a decision.

Percentage
Fraction expressed as a part of 100.

Percentile
Are values that divide a sample of data into one hundred groups containing (as far as possible) equal numbers of observations.

Permissible region
A region inside the confidence interval that implies that certain values are not a result of random variations.

Permutation
Number of ways of selecting r items from n, when the order of selection is important.
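
For example, the count nPr = n! / (n − r)! can be computed directly in Python:

```python
import math

# Number of ordered selections of r items from n: nPr = n! / (n - r)!
n, r = 5, 3
print(math.perm(n, r))                             # → 60
print(math.factorial(n) // math.factorial(n - r))  # same result from the formula
```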

Perpetual annuity
Is an annuity that has no end, or a stream of cash payments that continues forever.

Pessimistic (or conservative) approach
Would be used by a pessimistic decision maker to minimize payoff for each decision and then select the maximum of these minimum payoffs (Wald’s maximin criterion rule).

Pessimistic time
An activity time estimate based on the assumption that the most unfavourable conditions apply.

Pie chart
Diagram that represents the frequency of the observations in a class by the area of a circle.

Point estimate
A point estimate (or estimator) is any quantity calculated from the sample data that is used to provide information about the population.

Point estimators
In statistics, point estimation involves the use of sample data to calculate a single value which is to serve as a ‘best guess’ of an unknown population parameter.

Poisson distribution
Poisson distributions model a range of discrete random data variables.

Poisson probability distribution
Is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event.
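
The probability mass function P(X = k) = e^(−λ) λ^k / k! can be evaluated with a few lines of Python (the arrival rate is illustrative):

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) when events occur independently at a known average rate lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

# e.g. an average of 2 arrivals per minute: probability of exactly 3 arrivals
print(round(poisson_pmf(3, 2.0), 4))   # → 0.1804
```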

Polynomial
Equations containing a variable raised to some power.

Population
Is any entire collection of people, animals, plants or things from which we may collect data.

Population mean (µ)
Mean value when taking into account the entire population of data values.

Population standard deviation (σ)
Standard deviation value when taking into account the entire population of data values.

Posterior probability
Is the new probability value based upon the original conditions and the new information identified.

Power
Value of n when a number is written as xⁿ (x to the power n).

Precedence table
Table of predecessor tasks.

Precision
Is a measure of how close an estimator is expected to be to the true value of a parameter.

Predecessor task
A task (or activity) that must be started or finished before another task or milestone can be performed.

Prediction interval
An interval that will, given a certainty level or level of confidence, define the upper and lower value of the forecast within which the actual value is likely to fluctuate.

Present value
Is the current worth of a future sum of money or stream of cash flows given a specified rate of return.
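
A short sketch of the discounting calculation PV = FV / (1 + r)ⁿ, with illustrative figures:

```python
# Present value: what a future sum FV is worth today at discount rate r per period.
def present_value(fv, r, n):
    return fv / (1 + r) ** n

# £1,000 received in 3 years, discounted at 5% per annum
print(round(present_value(1000, 0.05, 3), 2))   # → 863.84
```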

Present value of a stream of earnings
Is the current worth of a future sum of money or stream of cash flows given a specified rate of return.

Present value of an ordinary annuity (See trust fund)
Is the value of a stream of expected or promised future payments that have been discounted to a single equivalent value today.

Primary data
Is data collected by the user.

Principal
Amount originally borrowed for a loan.

Prior probability
Is the original probability value before new information is identified.

Probabilistic (or stochastic)
Containing uncertainty that is measured by probabilities.

Probabilistic demand
A situation in which the estimated demand for the end product, and therefore the inventory item, is subject to random variation rather than being known for certain.

Probability
Provides a number value to the likely occurrence of a particular event.

Probability distribution
A description of the relative frequency of observations.

Probability sampling
The concept of probability sampling assumes some form of random selection of the data and it is assumed that such data, even though random, represents the population.

Probability value (p-value)
Of a statistical hypothesis test is the probability of getting a value of the test statistic as extreme as or more extreme than that observed by chance alone, if the null hypothesis H0 is true.

Process control
Is the manipulation of the conditions of a process to bring about a desired change in the output characteristics of the process.

Program Evaluation and Review Technique (PERT)
A method of Critical Path Analysis that is designed for uncertain activity times.

Project planning
A series of steps or actions to complete the project given the stated activity times and costs.

P-value
The p-value is the probability of getting a value of the test statistic as extreme as or more extreme than that observed by chance alone, if the null hypothesis is true.

[Back to top]

Q

Quadratic equation
Equation with the general form y = ax² + bx + c.
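
Setting y = 0, the roots follow from the quadratic formula; a minimal sketch (real roots assumed):

```python
import math

# Roots of ax^2 + bx + c = 0 via the quadratic formula.
def quadratic_roots(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)   # discriminant assumed non-negative here
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

print(quadratic_roots(1, -5, 6))   # → (3.0, 2.0)
```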

Qualitative variable
A variable whose values are descriptive or categorical rather than numerical.

Quality assurance
Having confidence that a product or service will meet its quality requirements.

Quality control
Actions or methods that are used to ensure the quality of a product or service.

Quantitative variable
A variable whose values are numerical.

Quartile
Quartiles are values that divide a sample of data into four groups containing (as far as possible) equal numbers of observations.
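
Python's standard library can compute the three cut points directly (the data set is illustrative; the `method='inclusive'` option matches the textbook convention of including the extremes):

```python
import statistics

data = [2, 4, 4, 5, 7, 8, 9, 11, 12]
# quantiles(..., n=4) returns the three cut points Q1, Q2 (the median), Q3
q1, q2, q3 = statistics.quantiles(data, n=4, method='inclusive')
print(q1, q2, q3)   # → 4.0 7.0 9.0
```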

Quartile range
Difference in value between the third and first quartiles.

Questionnaire
Set of questions used to collect data.

[Back to top]

R

R chart
Is a type of control chart used to monitor variable data when samples are collected at regular intervals from a business process.

Random
A property implying that future values cannot be predicted precisely. A random time series shows no trend and no regularity. This is the shape, and the property, that a series of errors should have after fitting a forecasting model to a time series and subtracting the corresponding fitted values.

Random experiment
Is an experiment whose outcome cannot be predicted with certainty in advance, although the complete set of possible outcomes is known. Repeating the experiment under identical conditions may therefore produce different outcomes.

Range
Difference between largest and smallest values in a data set.

Rate
Interest rate per annum.

Ratio
Ratio data are continuous data where both differences and ratios are interpretable and have a natural zero.

Raw data
Raw facts that are processed to give information.

Redundant constraint
A constraint that does not affect the feasible region.

Region of rejection
The range of values that leads to rejection of the null hypothesis.

Regression
A generic name for a method that describes and predicts movements of one variable with another related variable.

Relative frequency
Relative frequency is another term for proportion; it is the value calculated by dividing the number of times an event occurs by the total number of times an experiment is carried out.
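
As a quick illustration (the coin-toss record is hypothetical):

```python
# Relative frequency = number of times the event occurred / total number of trials.
tosses = ['H', 'T', 'H', 'H', 'T', 'H', 'T', 'H', 'H', 'T']
rel_freq_heads = tosses.count('H') / len(tosses)
print(rel_freq_heads)   # → 0.6
```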

Re-order stock level
The level of current inventory that triggers an order for more stock.

Re-order time
The time at which an order for more inventory must be placed. This will depend upon the usage rate, the lead time, the replenishment rate, and the initial amount of stock that was received.

Replenishment rate
The rate at which inventory that has been ordered is delivered by the supplier. This may be in full or in lots or batches over a period of time.

Residuals
The same as errors, i.e. the differences between the actual and predicted values. Also used for unexplained variations after fitting a regression model.

Resource allocation
Resources allocated to complete the tasks in a project.

Resource histogram
A block diagram representing the sum of the activity resources at a particular point in time.

Return-to-risk ratio
The observed average return divided by the standard deviation of returns. This is the simplest measure of return to risk trade-off and can be used to compare portfolio returns.
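
A sketch of the comparison, using made-up monthly returns for two hypothetical portfolios:

```python
import statistics

# Return-to-risk ratio = mean return / standard deviation of returns.
def return_to_risk(returns):
    return statistics.mean(returns) / statistics.stdev(returns)

a = [2.0, 1.5, 2.5, 1.0, 3.0]    # steady portfolio
b = [4.0, -2.0, 5.0, -1.0, 4.0]  # volatile portfolio with the same mean return
print(return_to_risk(a) > return_to_risk(b))   # → True: a earns more per unit of risk
```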

Risk averter
Implies that the decision maker would be cautious of investing any more funds within this investment.

Risk neutral
Implies that for every extra unit of currency provided the change in utility would be constant.

Risk seeker
Implies that the decision maker is prepared to take increased risks to make larger profits.

[Back to top]

S

Safety stock
The amount of inventory held in excess of the EOQ to prevent random variations in demand from causing a stockout.

Sample
A group of units collected from a larger group (the population).

Sample from a normal population
Collection of a sample from a normal population.

Sample mean
An estimator available for estimating the population mean.

Sample proportion
Is the number of elements in the sample that possess the characteristic of interest divided by the sample size.

Sample space
An exhaustive list of all the possible outcomes of an experiment.

Sample standard deviation
Is a measure of the spread of or dispersion within a set of sample data.

Sample variance
Is a measure of the spread of or dispersion within a set of sample data.

Sampling distribution
The sampling distribution describes probabilities associated with a statistic when a random sample is drawn from a population.

Sampling distribution of the mean
Distribution of the mean of samples from the population.

Sampling error
Sampling error refers to the error that results from taking one sample rather than taking a census of the entire population.

Sampling frame
A list of every member of the population.

Sampling is without replacement
Sample item not replaced.

Savage minimax regret
The regret of an outcome is the difference between the value of that outcome and the maximum value of all the possible outcomes, in the light of the particular chance event that actually occurred. The decision maker should choose the alternative that minimizes the maximum regret he could suffer.
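
The rule can be sketched on a small hypothetical payoff table (rows are alternatives, columns are states of nature; the figures are invented):

```python
payoffs = [
    [50, 20, 10],   # alternative A
    [30, 40, 25],   # alternative B
    [10, 30, 60],   # alternative C
]
# Regret = best payoff for that state of nature minus the payoff actually received.
col_best = [max(col) for col in zip(*payoffs)]
regrets = [[best - p for p, best in zip(row, col_best)] for row in payoffs]
max_regrets = [max(row) for row in regrets]
# Choose the alternative whose worst-case regret is smallest.
choice = max_regrets.index(min(max_regrets))
print(choice, max_regrets)   # → 1 [50, 35, 40]  (alternative B)
```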

Scatter plot
Graph of a set of points (x, y).

Seasonal component
One of the components from the classical time series analysis that will define the short-term regularities (less than a year) of the movements of the time series.

Seasonal indices
Are the values that indicate by how much a time series deviates from the trend in every seasonal period. As the deviation is typically multiplied by 100, the values are called indices.

Seasonal variations
Variations around the trend line that take place within one year.

Second derivative of y on x
The result of differentiating the first derivative (d²y/dx²).

Secondary data
Is data collected by someone other than the user.

Semi-interquartile range
Half the interquartile range.

Sensitivity
The size of a response value to changes of one of the input values.

Sensitivity analysis
The study of how changes in the coefficients of a linear programming problem affect the optimal solution.

Sensitivity Report
A report obtainable from using Solver indicating the ranges over which solution variables may hold true. For example, the sensitivity values of a linear programme.

Serial correlation
Same as autocorrelation, i.e. a relationship between a variable and itself over different time intervals. The term 'serial correlation' is used mainly in regression analysis to describe a relationship between errors over different time intervals.

Shadow price
Is the change in the objective function value that would arise if the level of the constraint were relaxed by one unit, with everything else unchanged.

Significance
A percentage that indicates whether some event is random or the result of some pattern. The most frequently used levels of significance are 1%, 5%, and 10%. They are thresholds on the probability beyond which an event or relationship is judged to be random (due to chance).

Significance level, α
The significance level of a statistical hypothesis test is a fixed probability of wrongly rejecting the null hypothesis, H0, if it is in fact true.

Sinking fund
Is a sum set aside periodically from the income of a government or a business and allowed to accumulate in order, ultimately, to pay off a debt.

Simple equations
Equations containing only one unknown quantity, and that quantity only in the first degree.

Simple index
Is a number that expresses the relative change in price, quantity, or value from one period to another.

Simple interest
Interest paid on only the initial deposit, but not on interest already earned.
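
The calculation I = P × r × t is a one-liner; the deposit, rate, and term below are illustrative:

```python
# Simple interest: interest on the principal only, never on accumulated interest.
def simple_interest(principal, rate, years):
    return principal * rate * years

# £500 at 4% per annum for 3 years
print(simple_interest(500, 0.04, 3))   # → 60.0
```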

Simple random sample
Is a basic sampling technique where we select a group of subjects (a sample) for study from a larger group (a population). Each individual is chosen entirely by chance and each member of the population has an equal chance of being included in the sample. Every possible sample of a given size has the same chance of selection; i.e. each member of the population is equally likely to be chosen at any stage in the sampling process.

Simple tables
A table consisting of an ordered arrangement of rows and columns that allow data and information to be accessible in a visual form.

Simplex method
An algebraic procedure for solving linear programming problems. The simplex method uses elementary row operations to iterate from one basic feasible solution (extreme point) to another until the optimal solution is reached.

Sinking fund (See future value of an ordinary annuity)
A fund that receives regular payments so that a specified sum is available at some point in the future.
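
The accumulated value of those regular payments follows the future value of an ordinary annuity, FV = PMT × ((1 + r)ⁿ − 1) / r; a sketch with illustrative figures:

```python
# Future value of an ordinary annuity: payments made at the end of each period.
def annuity_fv(payment, rate, periods):
    return payment * ((1 + rate) ** periods - 1) / rate

# £200 deposited at the end of each year for 5 years at 6% per annum
print(round(annuity_fv(200, 0.06, 5), 2))   # → 1127.42
```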

Skewness
Skewness is defined as asymmetry in the distribution of the data values.

Slack
The length of time an activity can be delayed without affecting the project completion time.

Smoothing constant alpha
A constant used in exponential smoothing to predict the future values by ‘smoothing’ or ‘discounting’ the influence that all the past values in the series have on this future value. The influence decreases exponentially the further in the past we go.
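
A minimal sketch of simple exponential smoothing (the demand series is invented; seeding the forecast with the first observation is one common convention):

```python
# New forecast = alpha * latest actual + (1 - alpha) * previous forecast.
# alpha near 1 weights recent values heavily; alpha near 0 smooths more strongly.
def exp_smooth(series, alpha):
    forecast = series[0]            # seed with the first observed value
    for actual in series[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

demand = [100, 110, 105, 115, 120]
print(round(exp_smooth(demand, 0.3), 2))   # → 110.91
```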

Solver
A software package for solving certain types of mathematical models, such as linear programming models.

Spectrum of management decisions
Explains the three levels of management decision making: operational, tactical, and strategic.

Spread
The extent to which the observations are dispersed around the mean.

Square of a number
Is that number multiplied by itself.

Square root of a number
The square root of n (√n) is the number that is multiplied by itself to give n.

Standard deviation
A measure of the data spread, equal to the square root of variance.
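
Python's standard library computes both the population and sample versions (the data set is illustrative):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
# Population standard deviation: square root of the mean squared deviation.
print(statistics.pstdev(data))            # → 2.0
# Sample standard deviation divides by (n - 1) instead of n, so it is slightly larger.
print(round(statistics.stdev(data), 3))   # → 2.138
```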

Standard error
Standard deviation of the actual values around the regression line. Calculated by dividing the error sum of squares (unexplained variations) by the degrees of freedom.

Standard error of the mean (SEM)
Standard deviation of the sampling error of the mean.
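
The SEM is the sample standard deviation divided by the square root of the sample size; a sketch with made-up data:

```python
import math
import statistics

# SEM = s / sqrt(n): how much sample means are expected to vary
# around the population mean.
sample = [12, 15, 11, 14, 13, 16, 12, 15]
sem = statistics.stdev(sample) / math.sqrt(len(sample))
print(round(sem, 3))   # → 0.627
```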

Standard form (or scientific notation)
A number written in the form A × 10ⁿ.

Standard normal distribution
Is a normal distribution with zero mean and unit standard deviation.

Stated limits
The lower and upper limits of a class interval.

States of nature
A mutually exclusive and collectively exhaustive list of all possible future events that the decision maker includes in the decision model.

Statistic
A statistic is a quantity that is calculated from a sample of data.

Statistical inference
Concerns the problem of inferring properties of an unknown distribution from data generated by that distribution.

Statistical process control
Statistical process control involves using statistical methods to monitor processes.

Stochastic model (or probabilistic)
Containing uncertainty that is measured by probabilities.

Stockout
A situation in which there is no stock left and output still to be produced.

Straight line graph
A plot of y against x for a linear equation such as y = 2x + 3, which produces a straight line through the coordinate points (x, y).

Strategic
Strategic decisions are the highest level with decisions focusing on general direction and long-term business goals.

Structured
Structured decisions are the decisions which are made under the established situation.

Student’s t distribution
Is a family of continuous probability distributions that arises when estimating the mean of a normally distributed population in situations where the sample size is small and population standard deviation is unknown.

Student’s t test
Is any statistical hypothesis test in which the test statistic follows a Student’s t distribution if the null hypothesis is supported.

Subjective probability
A subjective probability describes an individual’s personal judgement about how likely a particular event is to occur.

Symmetrical
A data set is symmetrical when the data values are distributed in the same way above and below the middle value.

[Back to top]

T

T value
A calculated value that is compared with the critical t-value from the Student's t-distribution to decide whether a hypothesis will be accepted or rejected.

Tactical
Tactical decisions support strategic decisions and tend to be medium range, with medium significance, and with moderate consequences.

Target cell
A cell in a worksheet that is to be minimized, maximized or made equal to a defined value by the Solver on the basis of one or more defined changing cells in the same or another linked worksheet. It must always contain a formula that is linked to the changing cells.

Target cell value
Is the value that the target cell is required to adopt. This can be a maximum, minimum or specified value.

Three data summary
The mean, median, and mode are all measures of central tendency, since all three indicate where the majority of the data values are concentrated.

Time
Time period of the investment/loan in years.

Time series
A set of data points (x, y) where variable x represents the time point.

Total enumeration
Tracking and evaluating all paths in a project during crashing.

Total float
Slack time or float time of an activity.

Transportation problem
A network flow problem that often involves minimizing the cost of shipping goods from a set of origins to a set of destinations; it can be formulated and solved as a linear programme by including a variable for each arc and a constraint for each node.

Tree diagram
Is a useful visual aid to help map out alternative outcomes of an experiment and associated probabilities.

Trend
Either a component in the classical time series, or a generic expression for the general direction and the shape of the time series movements.

Trend component
The trend component is the long term movements in the mean.

Triang
A triangular distribution; often used when the minimum, maximum, and most likely values are known.

Triangular distribution
Is a continuous probability distribution with lower limit a, upper limit c and mode b, where a ≤ b ≤ c.

Trust fund (See present value of an ordinary annuity)
Is an arrangement that allows individuals to create sustained benefits for another individual or entity.

Turning points
Maxima and minima on a graph.

Two data summary
The mean is the most common indicator of central tendency while the standard deviation is a popular and normalized measure of dispersion.

Two sample test
Is a statistical hypothesis test which uses two samples from the population.

Two-tail tests
A two tailed test is a statistical hypothesis test in which the values for which we can reject the null hypothesis, H0, are located in both tails of the probability distribution.

Type 1 error
Finding a result statistically significant when in fact it is not.

Type I error, α
A type I error occurs when the null hypothesis is rejected when it is in fact true; that is, H0 is wrongly rejected.

Type 2 error
When a survey finds that a result is not significant, though in fact it is.

Type II error, β
A type II error occurs when the null hypothesis H0 is not rejected when it is in fact false.

Typical seasonal index
A single value for every seasonal period that represents all corresponding seasonal periods.

[Back to top]

U

UK consumer price index (CPI)
The consumer price index is the official measure of inflation of consumer prices in the United Kingdom, published monthly by the Office for National Statistics. The CPI calculates the average price increase as a percentage for a basket of 600 different goods and services. Unlike the RPI, which uses the arithmetic mean, the CPI uses the geometric mean of prices to aggregate items at the lowest levels; this means that the CPI will generally be lower than the RPI.

UK retail consumer price index (RPI)
Is a separate measure of inflation published monthly by the Office for National Statistics. The RPI uses the arithmetic mean and in general will be higher than the CPI.

Unbiased
When the mean of the sampling distribution of a statistic is equal to a population parameter, that statistic is said to be an unbiased estimator of the parameter.

Uncertain activity times
Degree of uncertainty within the completion time for certain activities.

Uncertainty (or strict uncertainty)
Situation in which we can list possible events for a decision, but cannot give them probabilities.

Unexplained variations
Also called the Residual (Error) Sum of Squares (SSE). This is the portion of the variations between the actual and modelled data that cannot be explained by the model. Together with SSR, this produces the Total Sum of Squares (SST), or the total variations from the model.

Uniform distribution
All values within the range have the same constant probability density; outside the minimum and maximum values the density is zero.

Unstructured
Unstructured decisions are the decisions which are made under the emergent situation.

Usage rate
The periodic rate at which inventory is used up by the production process.

Utility
A measure that shows the real value of money to a decision maker.

[Back to top]

V

Variable
A variable is a symbol that can take on any of a specified set of values.

Variance
The difference between estimated cost, duration, or effort and the actual result of performance. In addition, it can be the difference between the initial or baseline product scope and the actual product delivered.

Variation
A difference between the regression line values and the actual time series values that the regression line approximates.

Venn diagrams
A diagram that represents probabilities as circles that may or may not overlap.

Vertex
A point or corner of the feasible region where two or more constraint boundaries intersect.

[Back to top]

W

Wald’s maximin criterion rule
Would be used by a conservative (or pessimistic) decision maker to minimize payoff for each decision and then select the maximum of these minimum payoffs.

Weighted index
A price index that takes into account both prices and the importance of items.

What-if analysis
The use of several different sets of values in one or more formulas to explore the various possible results.

[Back to top]

Y

Y-intercept
The point at which the line of a linear equation (y = mx + c) crosses the y-axis, i.e. where x = 0.

[Back to top]

Z

Zero float
A condition where there is no excess time between activities. An activity with zero float is considered a critical activity.

Z-score
In statistics, a standard score indicates how many standard deviations an observation is above or below the mean value.
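
The calculation z = (x − μ) / σ is straightforward; the exam-mark figures below are illustrative:

```python
# z-score: how many standard deviations x lies above (+) or below (-) the mean.
def z_score(x, mu, sigma):
    return (x - mu) / sigma

# An exam mark of 70 when the mean is 60 and the standard deviation is 8
print(z_score(70, 60, 8))   # → 1.25
```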

[Back to top]