SOFTWARE INFORMATION

Risk Simulator
Real Options SLS
Modeling Toolkit
PEAT
ESO Valuation
ROV BizStats
Risk Simulator Runtime
ROV Compiler
ROV Extractor
ROV Dashboard
ROV Webmodels

PEAT – PROJECT ECONOMICS ANALYSIS TOOL

OVERVIEW OF ROV PEAT

After installing PEAT – Project Economics Analysis Tool, start the software and a splash screen menu will appear. PEAT comes in 7 languages (English, Chinese Simplified, Chinese Traditional, Korean, Portuguese, Russian, and Spanish) and has several main modules, briefly described below. A wealth of resources is available to get you started, including Online Getting Started Videos, User Manuals, Case Studies, White Papers, Help Files, and Hands-on Exercises (these are installed with the software and available on the software DVD, as well as on the website: www.rovusa.com).

PEAT – Project Economics Analysis Tool software was developed to perform a comprehensive Integrated Risk Management analysis on capital investments, discounted cash flow, cost and schedule risk project management, oil and gas applications, healthcare analytics, and enterprise risk management. This tool helps you set up a series of projects or capital investment options, model their cash flows, run advanced risk simulations, perform business intelligence analytics, run forecasting and prediction modeling, optimize your investment portfolio subject to budgetary and other resource and qualitative constraints, and generate automated reports and charts, all within a single easy-to-use integrated software suite.

  • Corporate Investments (Dynamic Discounted Cash Flow)
  • Corporate Investments (Lease versus Buy)
  • Enterprise Risk Management
  • Goals Analytics (Sales Force Automation)
  • Healthcare Economics (HEAT and REJ)
  • Oil and Gas (Oil Field Reserves, Oil Recovery Analysis, Well-Type Curves)
  • Project Management (Cost and Schedule Risk)
  • Public Sector Analysis (Knowledge Value Added)
  • ROV Compiled Models

ROV PEAT takes all of our advanced risk and decision analytical methodologies and incorporates them into a simple-to-use and step-by-step integrated software application suite. It simplifies the risk-based decision analysis process and empowers the decision maker with insights from powerful analytics. If you already perform discounted cash flow modeling or enterprise risk management in Excel, why do you still need PEAT? Because PEAT’s integrated advanced analytical techniques extend the analysis you have already performed, and do so in a simple-to-use, simple-to-understand, and automated format, thus generating valuable insights that would be impossible without such advanced methods. PEAT allows you to scale and replicate your analysis, archive and encrypt your models and data, create automated reports, and customize your own PEAT modules.

MAIN MODULES IN PEAT

Corporate Investments (Dynamic Discounted Cash Flow): With a few simple assumptions, auto-generate cash flow statements of multiple projects, obtain key performance indicators and financial metrics (NPV, IRR, MIRR, PP, DPP, ROI), run risk simulations on uncertainty inputs, generate static Tornado sensitivity analysis, run dynamic sensitivities, simultaneously compare multiple projects within a portfolio, perform forecasts of future revenues and cash flow, draw multiple strategic investment pathways and options, model and value these strategic paths, compute and optimize the best projects within a portfolio subject to multiple constraints and restrictions, view results in management dashboards, encrypt your model and data, and auto-generate analysis reports.
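
To make the headline metrics concrete, here is a minimal Python sketch (an illustration only, not PEAT code) that computes NPV at an assumed 10% discount rate and backs out IRR by bisection for a hypothetical cash flow stream:

    # NPV of a hypothetical cash flow stream: a year-0 outlay followed by inflows
    flows = [-1000.0, 300.0, 400.0, 500.0, 300.0]

    def npv(rate, flows):
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

    print("NPV @ 10%:", round(npv(0.10, flows), 2))   # ~183.87

    # IRR: the rate at which NPV crosses zero; npv() is decreasing in rate,
    # so simple bisection on a bracketing interval converges quickly.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
    print("IRR ~", round(mid, 4))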

Corporate Investments (Lease versus Buy): Run a lease versus buy analysis, compare capital and operating leases with interest payments and tax advantages, value the lease contract from the point of view of the lessee and lessor, and generate the complete cash flow analysis to obtain the net advantage to leasing.

Enterprise Risk Management (ERM): Perform traditional qualitative ERM with Risk Registers but also enhance the analysis with more quantitative analysis. This ERM module comes with an online Web version as well as a module within PEAT, where users can enter and save multiple Risk Registers to generate Key Risk Indicators (KRI) by Risk Divisions and Risk Taxonomy (Geographic, Operations, Products, Activity or Process, and Department), assign risk items to different Risk Managers by performing Risk Mapping of Risk Categories to different Risk Divisions, create Risk Dashboards of the results, enter Risk Elements within multiple customizable Risk Engagements, draw Risk Diagrams, perform and run Risk Controls on KRIs to see if certain risks are within control or out of control, perform Risk Forecasts, check if certain Risk Mitigation projects do, indeed, work or are statistically ineffective, perform Risk Sensitivity on KRIs, perform Risk Scenarios on quantitative risk metrics, run Risk Simulations on risk metrics, generate Risk Reports, and encrypt your data and files for the purposes of Risk Security.

Goals Analytics (Sales Force Automation): Develop and maintain corporate sales goals. A Web-based SaaS and desktop-based PEAT module, it focuses on the creation and use of goals that help make goal-setting more accurate and sustainable by any company seeking to improve its sales performance (sales goal forecasting, probability of hitting corporate revenues, sales pipeline analysis, and other sales-based metrics analysis).

Healthcare Economics (HEAT and REJ): Run the economics of various options available under the U.S. Affordable Care Act (Obamacare) for corporations providing employer-sponsored healthcare by loading employee-census data (healthcare economics analysis tool, HEAT), or perform rapid economic justification (REJ) of each option by simulating its high-level inputs.

Oil and Gas (Oil Field Reserves, Oil Recovery, Well-Type Curves): Perform oil and gas industry models on analyzing the economics of oil field reserves and available oil recovery based on uncertainty and risks, as well as generate oil-well-specific type curves and economics.

Project Management (Cost and Schedule Risk): Draw your own project pathways (simple linear project tasks versus complex parallel and recombining projects), then click a button to auto-generate the model. Enter the cost and schedule estimates as well as their spreads, then run a risk simulation on the model to determine the probability of cost-schedule overruns, cost-schedule buffers at various probabilities of completion, critical path identification, and sensitivity analysis.

Public Sector Analysis (Knowledge Value Added): Model government and nonprofit organizations’ value, value to society, or intangible value via Knowledge Value Added utilizing market comparables to identify and monetize such projects and assets.

ROV Compiled Models: With the compiler software, users can compile their existing Excel models into license-controlled executable EXE files. ROV’s patented methods can be used to encrypt and lock up the intellectual property and mathematical algorithms of the model, and issue hardware-controlled and timed licenses to the purchaser’s own users or customers.

PEAT CUSTOMIZATION, TRAINING, AND CONSULTING

Companies such as Northrop Grumman, Saudi Aramco, ARCO, Paychex, and others have their own tried-and-true models that have been incorporated into PEAT and customized to their specific needs, encrypted for their proprietary use only, and updated as required over time. We also perform customized and generalized training on PEAT modules, as well as provide consulting services for the purposes of getting our clients started quickly with their customized modules.

RISK ANALYTICS SECTIONS IN PEAT

Applied Analytics: Run Tornado Analysis and Scenario Analysis.

Methods include Dynamic Sensitivity Analysis, Comprehensive Reports, Distributional Probabilities (PDF, CDF, ICDF), Hypothesis Testing, Overlay Charts, Pareto Analysis, Sensitivity Analysis, Scenario Analysis, Statistical Analytics, Tornado Analysis, Seasonality Test, Detrending, ROV Decision Trees (Bayes’ Analysis, Risk Simulation on Dynamic Decision Trees, Sensitivity and Scenario Analysis, Utility Functions), ROV Strategic Trees, Applied Business Statistics, and Risk Analytics, totaling over 160 models, each with easy-to-use detailed reports and superfast computations.

Risk Simulation: Set Monte Carlo Risk Simulation Input Assumptions, Run and View Simulation Results, Compare and Overlay Simulation Results, Run Analysis of Alternatives, and Perform Dynamic Sensitivity.

Methods include running simulations on 50 Probability Distributions, Confidence and Percentiles of Simulated Results, Probabilities of Success and Failure, Value at Risk, Risk-Adjusted Returns, Probabilistic Overlay and Comparison of Multiple Options and Investment Strategies, and Dynamic Sensitivity of Results.

Options Strategies: Create Strategy Trees and Run Dynamic Decision Trees.

Options Valuation: Model and Value Strategic Real Options Strategies.

Methods include Customized American, Bermudan, and European Options on Abandonment, Contraction, Expansion, and Multiple Phased Complex Sequential Compound Options coupled with Monte Carlo Risk Simulation and Dynamic Sensitivities.

Forecast Prediction: Perform Forecast Prediction and Modeling.

Methods include Box-Jenkins ARIMA, Auto ARIMA, Basic Econometrics, Auto Econometrics, Cubic Spline, Custom Distributions, GARCH, J Curve, S Curve, Markov Chain, Maximum Likelihood, Limited Dependent Variables (Logit, Probit, and Tobit), Multiple Regression, Nonlinear Extrapolation, Stochastic Processes, Time-Series Decomposition, and Multivariate Trendlines.

Portfolio Optimization: Create Optimization Models and Assumption Settings, Run Optimization Routines, and Create Customized Optimization Models.

Methods include Static and Dynamic Optimization with Continuous and Integer Decision Variables; Efficient Frontier; Project Portfolio Selection; Linear and Nonlinear Optimization; and Project Selection and Portfolio Optimization subject to Budgetary, Schedule, and Other Resource and Subjective Constraints.

Dashboard: Create, Run, and Save Management Results Dashboards.

Knowledge Center: Quick Getting Started Guides and Videos.

DETAILED ADVANCED ANALYTICS

1. Monte Carlo Risk Simulation. You can set up probabilistic input Assumptions (45 probability distributions) and output Forecasts, run tens or hundreds of thousands of simulated trials, obtain the resulting probability distributions of your outputs, generate reports, extract simulated data statistics, create scenarios, and generate dynamic charts (histograms, S curves, PDF/CDF charts).
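
To illustrate the mechanics (independent of PEAT), the Python sketch below runs a simple risk simulation with hypothetical triangular and normal input assumptions and extracts statistics, percentiles, and probabilities from the output forecast:

    import numpy as np

    rng = np.random.default_rng(42)
    trials = 100_000

    # Hypothetical input assumptions: uncertain revenue (triangular)
    # and uncertain cost (normal)
    revenue = rng.triangular(left=80, mode=100, right=130, size=trials)
    cost = rng.normal(loc=60, scale=5, size=trials)

    profit = revenue - cost                      # the output forecast

    print("mean:", profit.mean(), "stdev:", profit.std())
    print("90% confidence interval:", np.percentile(profit, [5, 95]))
    print("P(profit < 20):", (profit < 20).mean())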

2. Forecasting and Prediction Analytics. Using historical data or subject matter estimates, you can run forecast models on time-series or cross-sectional data by applying advanced forecast analytics such as ARIMA, Auto ARIMA, Auto Econometrics, Basic Econometrics, Cubic Splines, Fuzzy Logic, GARCH (8 variations), Exponential J Curves, Logistic S Curves, Markov Chains, Generalized Linear Models (Logit, Probit, Tobit), Multivariate Regressions (Linear and Nonlinear), Neural Network, Stochastic Processes (Brownian Motion, Mean-Reversion, Jump-Diffusion), Time-Series Predictions, and Trendlines.

3. Optimization. This module helps you to optimize and find the best Decision variables (which projects to execute, stock portfolio allocation, human resource and budget allocation, pricing levels, and many other applications) subject to Constraints (budget, time, risk, cost, schedule, resources) to minimize or maximize some Objective (profit, risk, revenue, investment return, cost). You can enhance the analysis with Genetic Algorithms, Goal Seek, Efficient Frontier, Dynamic Optimization, and Stochastic Optimization.
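
Conceptually, the simplest case is binary project selection under a budget constraint. The sketch below uses hypothetical project NPVs and costs and a brute-force search (standing in for PEAT's actual solvers) to show how decision variables, a constraint, and an objective fit together:

    from itertools import product

    # Hypothetical projects: name -> (NPV, cost); budget is the constraint
    projects = {"A": (120, 80), "B": (90, 60), "C": (60, 30), "D": (40, 25)}
    budget = 120

    best_value, best_pick = 0.0, ()
    for pick in product([0, 1], repeat=len(projects)):   # binary go/no-go decisions
        chosen = [n for n, p in zip(projects, pick) if p]
        cost = sum(projects[n][1] for n in chosen)
        value = sum(projects[n][0] for n in chosen)
        if cost <= budget and value > best_value:        # feasibility, then objective
            best_value, best_pick = value, tuple(chosen)

    print(best_pick, best_value)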

4. Analytical Tools. These tools are very valuable to analysts working in the realm of risk analysis, from running sensitivity, scenario, and tornado analyses, to distributional fitting to find the best-fitting probability distributions, creating simulation reports and charts, diagnosing data, testing for reliability of your models, computing the exact statistical probabilities of various distributions, finding the statistical properties of your data, testing for precision, and setting correlations among input assumptions.

5. ROV BizStats. Comprises over 170 business intelligence and business statistics methods. Types of analysis include Charts (2D/3D Area/Bar/Line/Point, Box-Whisker, Control, Pareto, Q-Q, Scatter); Distributional Fitting (Akaike, Anderson-Darling, Chi-Square, Kolmogorov-Smirnov, Kuiper’s Statistic, Schwarz/Bayes Criterion); Generalized Linear Models (GLM, Logit, Probit, Tobit); Data Diagnostics (Autocorrelation ACF/PACF, Heteroskedasticity, Descriptive Statistics); Forecast Prediction (ARIMA, Cubic Spline, Econometrics, Fuzzy Logic, GARCH [E/M/T/GJR], Multiple Regression, J/S Curves, Neural Network, Seasonality Tests, Time-Series Decomposition, Trendlines, Yield Curves); Hypothesis Tests (Parametric T/F/Z, Nonparametric: Friedman’s, Kruskal-Wallis, Lilliefors, Runs, Wilcoxon); and Group Tests (ANOVA, Principal Component Analysis, Segmentation Clustering).

6. ROV Decision Tree. This module is used to create and value decision tree models. Additional advanced methodologies and analytics are also included: Decision Tree Models, Monte Carlo Risk Simulation, Sensitivity Analysis, Scenario Analysis, Bayesian (Joint and Posterior Probability Updating), Expected Value of Information, MINIMAX, MAXIMIN, and Risk Profiles.

FORECASTING: QUICK OVERVIEW

There are 18 forecast prediction methodologies available in PEAT, and they can be accessed through the PEAT | Forecasting menu. Each methodology is briefly discussed below:

1. ARIMA. Autoregressive Integrated Moving Average is used for forecasting time-series data using its own historical data, either by itself or together with exogenous variables.

2. Auto Econometrics. Runs some common combinations of Basic Econometrics and returns the best models.

3. Basic Econometrics. Applicable for forecasting time-series and cross-sectional data and for modeling relationships among variables, and allows you to create custom multiple regression models.

4. Combinatorial Fuzzy Logic. Applies fuzzy logic algorithms for forecasting time-series data by combining other forecast methods to create an optimized model.

5. Cubic Spline Curves. Interpolates missing values of a time-series dataset and extrapolates values of future forecast periods using nonlinear curves.

6. Custom Distributions. Expert opinions can be collected and a customized distribution generated from them. This forecasting technique comes in handy when the dataset is small or when distributional fitting yields a poor goodness of fit. It can be accessed through PEAT | Set Input Assumption | Custom Distribution.

7. GARCH. The Generalized Autoregressive Conditional Heteroskedasticity model is used to model historical volatility and forecast future volatility levels of a time series of raw price levels of a marketable security (e.g., stock prices, commodity prices, and oil prices). GARCH first converts the prices into relative returns, and then runs an internal optimization to fit the historical data to a mean-reverting volatility term structure, while assuming that the volatility is heteroskedastic in nature (it changes over time according to some econometric characteristics). Several variations of this methodology are available in PEAT, including EGARCH, EGARCH-T, GARCH-M, GJR-GARCH, GJR-GARCH-T, IGARCH, and T-GARCH.
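
The core GARCH(1,1) recursion can be written in a few lines; the omega, alpha, and beta values below are hypothetical stand-ins for fitted parameters, not PEAT output:

    import numpy as np

    # GARCH(1,1): var[t] = omega + alpha * r[t-1]^2 + beta * var[t-1]
    omega, alpha, beta = 0.00001, 0.08, 0.90        # hypothetical fitted values

    prices = np.array([100.0, 101.2, 100.5, 102.0, 101.1, 103.4])
    returns = np.diff(np.log(prices))               # prices -> relative (log) returns

    var = np.empty(len(returns))
    var[0] = returns.var()                          # seed with the sample variance
    for t in range(1, len(returns)):
        var[t] = omega + alpha * returns[t - 1] ** 2 + beta * var[t - 1]

    print("annualized volatility estimate:", np.sqrt(var[-1] * 252))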

8. J Curve. This function models exponential growth, where the value of the next period depends on the current period’s level and the increase is exponential. Over time, the values will increase significantly from one period to another. This model is typically used in forecasting biological growth and chemical reactions over time.

9. Markov Chain. Models the probability of a future state as depending only on the previous state; linked together, these transitions form a chain that reverts to a long-run steady-state level. It is typically used to forecast the market share of two competitors.
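
A minimal sketch of the two-competitor case, with a hypothetical switching matrix, shows how repeated transitions converge to the long-run steady state:

    import numpy as np

    # Hypothetical switching matrix: 90% of A's customers stay with A,
    # 10% switch to B; 20% of B's customers switch to A, 80% stay.
    P = np.array([[0.90, 0.10],
                  [0.20, 0.80]])

    share = np.array([0.50, 0.50])        # initial market shares
    for _ in range(200):                  # iterate the chain toward steady state
        share = share @ P

    print("long-run market shares:", share)   # -> approximately [0.667, 0.333]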

10. Maximum Likelihood/Generalized Linear Models (Logit, Probit, Tobit). Generalized Linear Models (GLM) are used to forecast the probability of something occurring given some independent variables (e.g., predicting if a credit line will default given the obligor’s characteristics such as age, salary, credit card debt levels, or the probability a patient will have lung cancer based on age and number of cigarettes smoked monthly, and so forth). The dependent variable is limited (i.e., binary 1 and 0 for default/cancer, or limited to integer values 1, 2, 3, etc.). Traditional regression analysis will not work as the predicted probability is usually less than zero or greater than one, and many of the required regression assumptions are violated (e.g., independence and normality of the errors).
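
The logit link shows why GLMs keep predicted probabilities within (0, 1), which traditional regression cannot guarantee. In the sketch below, the coefficients are hypothetical stand-ins for a fitted model:

    import math

    # Logit link: P(y = 1) = 1 / (1 + exp(-(b0 + b1*x1 + b2*x2)))
    b0, b_age, b_debt = -4.0, 0.03, 0.00008    # hypothetical fitted coefficients

    def default_probability(age, debt):
        z = b0 + b_age * age + b_debt * debt
        return 1.0 / (1.0 + math.exp(-z))      # always strictly between 0 and 1

    print(default_probability(age=40, debt=25_000))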

11. Multivariate Regression. Multivariate regression is used to model the relationship structure and characteristics of a certain dependent variable as it depends on other independent exogenous variables. Using the modeled relationship, we can forecast the future values of the dependent variable. The accuracy and goodness of fit for this model can also be determined. Linear and nonlinear models can be fitted in the multiple regression analysis.

12. Neural Network. While the term originally referred to a network or circuit of biological neurons, modern usage often refers to artificial neural networks comprising artificial neurons, or nodes, recreated in a software environment. Such networks attempt to mimic the neurons in the human brain in ways of thinking and identifying patterns and, in our situation, identifying patterns for the purposes of forecasting time-series data.

13. Nonlinear Extrapolation. The underlying structure of the data to be forecasted is assumed to be nonlinear over time. For instance, a dataset such as 1, 4, 9, 16, 25 is considered to be nonlinear (these data points are from a squared function).

14. S Curve. The S curve, or logistic growth curve, starts off like a J curve, with exponential growth rates. Over time, the environment becomes saturated (e.g., market saturation, competition, overcrowding), the growth slows, and the forecast value eventually ends up at a saturation or maximum level. This model is typically used in forecasting market share or sales growth of a new product from market introduction until maturity and decline, population dynamics, and other naturally occurring phenomena.
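
A minimal sketch of the logistic growth formula, with a hypothetical saturation level, growth rate, and inflection point:

    import math

    # Logistic (S) curve: value(t) = L / (1 + exp(-k * (t - t0)))
    # L = saturation level, k = growth rate, t0 = inflection point
    L, k, t0 = 1_000_000, 0.6, 8               # hypothetical parameters

    for t in range(0, 17, 4):
        print(t, round(L / (1 + math.exp(-k * (t - t0)))))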

15. Stochastic Processes. Sometimes variables cannot be readily predicted using traditional means, and these variables are said to be stochastic. Nonetheless, most financial, economic, and naturally occurring phenomena (e.g., motion of molecules through the air) follow a known mathematical law or relationship. Although the resulting values are uncertain, the underlying mathematical structure is known and can be simulated using Monte Carlo risk simulation. The processes supported include Brownian motion random walk, mean-reversion, jump-diffusion, and mixed processes useful for forecasting nonstationary time-series variables.
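
For example, a geometric Brownian motion (random walk) path can be simulated directly from its closed-form solution; the drift and volatility below are hypothetical assumptions:

    import numpy as np

    rng = np.random.default_rng(7)

    # GBM: dS = mu*S*dt + sigma*S*dW, with hypothetical drift and volatility
    mu, sigma, S0, dt, steps = 0.05, 0.20, 100.0, 1 / 252, 252

    shocks = rng.standard_normal(steps)
    path = S0 * np.exp(np.cumsum((mu - 0.5 * sigma**2) * dt
                                 + sigma * np.sqrt(dt) * shocks))

    print("simulated year-end value:", path[-1])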

16. Time-Series Analysis and Decomposition. In well-behaved time-series data (e.g., sales revenues and cost structures of large corporations), the values tend to have up to three elements: a base value, trend, and seasonality. Time-series analysis uses these historical data and decomposes them into these three elements, and recomposes them into future forecasts. In other words, this forecasting method, like some of the others described, first performs a back-fitting (backcast) of historical data before it provides estimates of future values (forecasts).
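
A rough sketch of the idea, using a simplified trailing (rather than centered) moving average for the trend and hypothetical quarterly data:

    import numpy as np

    # Hypothetical quarterly sales with a base level, trend, and seasonality
    sales = np.array([112, 128, 141, 104, 120, 137, 150, 113,
                      129, 146, 160, 122], dtype=float)
    period = 4

    # Trend: trailing moving average over one full season
    trend = np.convolve(sales, np.ones(period) / period, mode="valid")

    # Seasonal indices: average deviation of each quarter from the trend
    detrended = sales[period - 1:] - trend
    seasonal = [detrended[q::period].mean() for q in range(period)]
    print("seasonal indices:", np.round(seasonal, 1))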

17. Trendlines. Trendlines can be used to determine if a set of time-series data follows any appreciable trend. Trends can be linear or nonlinear (such as exponential, logarithmic, moving average, power, or polynomial).

OPTIMIZATION: QUICK OVERVIEW

In most simulation models, there are variables over which you have control, such as how much to charge for a product, how much to invest in a project, or which projects to select or invest in, all the while being subject to some Constraints or limitations (e.g., budget, time, schedule, cost, and resource constraints). These controlled variables are called Decision variables. Finding the optimal values for decision variables can make the difference between reaching an important goal or Objective and missing that goal. These Objective, Constraint, and Decision (OCD) variables have to be set up via the PEAT | Optimization menu before an optimization can be run.

1. Objective. The output we care about, which we wish to Maximize (e.g., profits, revenue, returns) or Minimize (e.g., cost, risk).

2. Decisions. The variables you have control over and that can be continuous (e.g., % budget allocation), binary (e.g., go or no-go on projects), or discrete integers (e.g., number of light bulbs to manufacture).

3. Constraints. Limitations in the model, such as budget, time, schedule, or other resource constraints.

4. Efficient Frontier. The Efficient Frontier optimization procedure applies the concepts of marginal increments and shadow pricing in optimization. That is, what would happen to the results of the optimization if one of the constraints were relaxed slightly? This is the concept of the Markowitz efficient frontier in investment finance.

5. Static Optimization. PEAT can be used to run a Static Optimization, that is, an optimization that is run on a static model, where no simulations are run and all the inputs in the model are static and unchanging, and it is applicable when it is assumed that no uncertainties exist. A static optimization is often run first to determine if there exist solutions to the optimization problem before a more protracted analysis is performed.

6. Dynamic Optimization. Monte Carlo simulation is first run, the results of the simulation are applied in the model, and then optimization is run (i.e., Simulation-Optimization). A simulation is run for N trials, and then an optimization process is run for M iterations until the optimal results are obtained or an infeasible set is found. You can choose which forecast and assumption statistics to use and replace in the model after the simulation is run; these forecast statistics can then be applied in the optimization process.

ROV DECISION TREE: QUICK OVERVIEW

1. Decision Tree. This is the main tab of the ROV Decision Tree, used to create and value decision tree models. You can very quickly draw a decision tree complete with Decision Variables (square), Uncertainty Events (circle), and Terminal Nodes (triangles). You can also change the properties of these shapes (color, font, labels, size, and connection lines), create and save your own styles, and set single point input values or set simulation assumptions in the uncertainty and terminal event nodes.

2. Monte Carlo Risk Simulation. Runs Monte Carlo Risk Simulation on the decision tree. It allows you to set probability distributions as input assumptions for running simulations. You can either set an assumption for the selected node or set a new assumption and use this new assumption (or use previously created assumptions) in a numerical equation or formula.

3. Bayesian Analysis. Used on any two uncertainty events that are linked along a path, and computes the joint, marginal, and Bayesian posterior updated probabilities by entering the prior probabilities and reliability conditional probabilities; or reliability probabilities can be computed when you have posterior updated conditional probabilities.
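
The arithmetic behind the joint, marginal, and posterior probabilities is compact. In the sketch below, the prior and reliability (conditional) probabilities are hypothetical:

    # Prior: 30% chance the market is strong. A survey reports favorably
    # 80% of the time when the market is strong and 25% when it is weak.
    p_strong = 0.30
    p_fav_given_strong, p_fav_given_weak = 0.80, 0.25

    # Marginal probability of a favorable report (sum of the joints)
    p_fav = p_fav_given_strong * p_strong + p_fav_given_weak * (1 - p_strong)

    # Posterior via Bayes' rule: P(strong | favorable)
    posterior = p_fav_given_strong * p_strong / p_fav
    print(round(p_fav, 4), round(posterior, 4))    # 0.415 and ~0.5783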

4. EVPI, Minimax, Risk Profile. Computes the Expected Value of Perfect Information (EVPI) and MINIMAX and MAXIMIN Analysis, as well as the Risk Profile and the Value of Imperfect Information.

5. Sensitivity and Scenario Analysis. Sensitivity analysis is run on the input probabilities to determine the impact of inputs on the values of decision paths. Scenario tables help determine the output values given some changes to the input.

6. Utility Functions. Utility functions, or U(x), are sometimes used in place of expected values of terminal payoffs in a decision tree. They can be modeled for a decision maker who is risk-averse (downsides are more disastrous or painful than an equal upside potential), risk-neutral (upsides and downsides have equal attractiveness), or risk-loving (upside potential is more attractive).
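
A common concrete form is the exponential utility function; in the sketch below, the risk tolerance R and the payoffs are hypothetical:

    import math

    # Exponential utility U(x) = 1 - exp(-x / R); smaller R = more risk-averse
    R = 100.0

    def utility(x):
        return 1 - math.exp(-x / R)

    # Gamble: 50/50 chance of +150 or -50, versus a sure +40
    eu_gamble = 0.5 * utility(150) + 0.5 * utility(-50)
    eu_sure = utility(40)
    print(eu_gamble, eu_sure)   # the sure amount wins despite a lower expected value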

ANALYTICAL TOOLS: QUICK OVERVIEW

1. Create Forecast Statistics Table. After a simulation is run, you can extract the output forecasts main statistics as a comprehensive table.

2. Create Report. After a simulation is run, you can generate a report of the assumptions and forecasts used in the simulation run, as well as the results obtained during the simulation run.

3. Data Deseasonalization and Detrending. Removes any seasonal and trending components in your original data.

4. Distributional Analysis. Computes the probability density function (PDF), where given some distribution and its parameters, you can determine the probability of occurrence given some outcome x. The cumulative distribution function (CDF) is also computed, which is the sum of the PDF values up to this x value. Finally, the inverse cumulative distribution function (ICDF) is used to compute the value x given the cumulative probability of occurrence. Works for all 45 probability distributions available in PEAT.
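
The same three quantities can be illustrated with SciPy for a normal distribution with hypothetical parameters:

    from scipy.stats import norm

    dist = norm(loc=10, scale=2)     # hypothetical mean 10, stdev 2

    x = 12.0
    print("PDF  f(x):", dist.pdf(x))              # density at x
    print("CDF  F(x):", dist.cdf(x))              # P(X <= x), ~0.8413
    print("ICDF F^-1(p):", dist.ppf(0.8413))      # recovers x from the probability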

5. Distributions Charts and Tables. Used to compare different parameters of the same distribution (e.g., the shapes and PDF, CDF, ICDF values of a Weibull distribution with Alpha and Beta of [2, 2], [3, 5], and [3.5, 8]) and to overlay them on top of one another.

6. Distributional Designer. Allows you to create custom distributions by entering or pasting in existing data. Data can be in a single column or two columns (unique values and their respective frequencies such that the probabilities of occurrence sum to 100%).

7. Distributional Fitting (Single Variable). Determines which distribution to use for a particular input variable in a model and the relevant distributional parameters. Advanced algorithms are employed such as Akaike Information Criterion, Anderson-Darling, Chi-Square, Kolmogorov-Smirnov, Kuiper’s Statistic, and Schwarz/Bayes’ Information Criterion.
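
A minimal sketch of the fitting idea, using SciPy's maximum likelihood fit and the Kolmogorov-Smirnov statistic (one of the criteria named above) on hypothetical data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.lognormal(mean=1.0, sigma=0.4, size=500)   # hypothetical sample

    # Fit each candidate by maximum likelihood, then score it with the
    # Kolmogorov-Smirnov statistic (smaller statistic = better fit)
    for dist in (stats.lognorm, stats.norm, stats.gamma):
        params = dist.fit(data)
        ks_stat, p_value = stats.kstest(data, dist.name, args=params)
        print(dist.name, round(ks_stat, 4), round(p_value, 4))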

8. Distributional Fitting (Multi-Variable). Runs distributional fitting on multiple variables at once, and captures their correlations as well as computing the relevant statistical significance of the correlations and the fit.

9. Distributional Fitting (Percentiles). Uses an alternate method of entry (percentiles and first/second moment combinations) to find the best-fitting parameters of a specified distribution without the need for having raw data. This method is suitable for use when there are insufficient data, when only percentiles and moments are available, or as a means to recover the entire distribution with only two or three data points but the distribution type needs to be assumed or known.

10. Hypothesis Testing. Tests the means and variances of two distributions to determine if they are statistically identical or statistically different from one another; that is, whether the differences are based on random chance or if they are, in fact, statistically significant.
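
For illustration, the same two questions (do the means differ, and do the variances differ?) can be posed with SciPy on hypothetical samples:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    before = rng.normal(100, 10, 50)   # hypothetical samples, e.g., two
    after = rng.normal(104, 10, 50)    # simulated forecasts to compare

    # Welch's two-sample t-test on the means (no equal-variance assumption)
    t_stat, p_mean = stats.ttest_ind(before, after, equal_var=False)

    # Levene's test on the variances
    w_stat, p_var = stats.levene(before, after)

    print("means differ?     p =", round(p_mean, 4))
    print("variances differ? p =", round(p_var, 4))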

11. Overlay Charts. Used to compare different distributions (theoretical input assumptions and empirically simulated output forecasts) and to overlay them on top of one another for a visual comparison.

12. Principal Component Analysis. Identifies patterns in data and recasts the data in such a way as to highlight their similarities and differences. Patterns of data are very difficult to find in high dimensions when multiple variables exist, and higher dimensional graphs are very difficult to represent and interpret. Once the patterns in the data are found, they can be compressed, and the number of dimensions is reduced.
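
A bare-bones PCA via the singular value decomposition shows the mechanics on a hypothetical five-variable dataset:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 5))             # hypothetical 5-variable dataset
    X[:, 1] = 0.9 * X[:, 0] + 0.1 * X[:, 1]   # make two variables nearly collinear

    Xc = X - X.mean(axis=0)                   # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

    explained = s**2 / np.sum(s**2)           # variance share of each component
    print("explained variance ratios:", np.round(explained, 3))
    # Components with tiny ratios can be dropped, reducing the dimensions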

13. Scenario Analysis. Run multiple scenarios of your existing model quickly and effortlessly by changing one or two input parameters to determine the output of a variable.

14. Seasonality Test. Many time-series data exhibit seasonality where certain events repeat themselves after some time period or seasonality period (e.g., ski resorts revenues are higher in winter than in summer, and this predictable cycle will repeat itself every winter).

15. Segmentation Clustering. From an original dataset, algorithms (a combination of k-means, hierarchical clustering, and other moment-based methods) are run to find the best-fitting groups or natural statistical clusters, statistically dividing, or segmenting, the original dataset into different groups or segments.

16. Sensitivity Analysis. While tornado analysis (tornado charts and spider charts) applies static perturbations before a simulation run, sensitivity analysis applies dynamic perturbations created after the simulation run.

17. Statistical Analysis. Determines the statistical properties of the data. The diagnostics run include checking the data for various statistical properties, from basic descriptive statistics to testing for and calibrating the stochastic properties of the data.

18. Structural Break Test. A time-series dataset is divided into two subsets and the algorithm is used to test each subset individually and on one another and on the entire dataset to statistically determine if, indeed, there is a break starting at a particular time period.

19. Tornado Analysis. A powerful analytical tool that captures the static impacts of each variable on the outcome of the model. That is, the tool automatically perturbs each variable in the model a preset amount, captures the fluctuation in the model’s forecast or final result, and lists the resulting perturbations ranked from the most significant to the least.
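
The perturbation-and-ranking loop at the heart of a tornado chart can be sketched in a few lines; the model and base-case values below are hypothetical:

    # Perturb each input +/-10% around its base case, record the swing in the
    # output, and rank inputs by swing (the tornado chart's bar lengths)
    base = {"price": 10.0, "units": 1000.0, "unit_cost": 6.0, "fixed_cost": 1500.0}

    def model(v):
        return v["price"] * v["units"] - v["unit_cost"] * v["units"] - v["fixed_cost"]

    swings = {}
    for name in base:
        lo, hi = dict(base), dict(base)
        lo[name] *= 0.9
        hi[name] *= 1.1
        swings[name] = abs(model(hi) - model(lo))

    for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
        print(f"{name:>10}: {swing:,.0f}")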