…causality. If correlation does not prove causation, how can regression be admitted as relevant under Federal Rules of Evidence (FRE) 401?¹⁷ Answer: FRE 401 requires only that relevant evidence tend to make a consequential fact more or less probable (in Bayesian parlance, more or less believable¹⁸) than it would be without the evidence.¹⁹ "One variable could cause the other, or they could both be effects of a third cause; for prediction, it does not matter."²⁰ Failure to identify the precise cause of a correlation does not diminish the predictive usefulness of the regression formula.²¹ Causation, in other words, is not the only cause of relevance.

Either way, Damages Expert's trial testimony suggests that Plaintiff's costs and revenues were separate effects of unobservable common "inputs," which, in turn, were effects of Plaintiff's latent revenue expectations. Damages Expert testified as follows:

[T]he number of miles the planes fly, the destinations that the planes fly to, the number of seats, the type of aircraft, that all affects revenue. Revenue...in the airline industry is what drives the cost. So, as the revenues change, your costs are going to change. The input here in this particular case are going to be planes, seats, cost of filling the seats, cost of training the pilots, cost of training the flight attendants and the crew, etc. So, as the revenues change, those costs associated with it are going to change.²²

This sounds more like the causal network in Figure 3 than the linear formula in Equation 1.²³

FIGURE 3: NETWORK MODEL OF PLAINTIFF'S LOST PROFITS
Arrows represent causal relationships between nodes (a.k.a. variables). Origin (input) nodes are causes (parents) of destination (child) nodes. (Nodes: Revenue Expectation, Rev Passenger Miles, Lost Revenue, Avoided Cost, Lost Profits.)

In Figure 3, Rev Passenger Miles is the parent or common cause of Lost Revenue and Avoided Cost and, as such, is their "confounder,"²⁴ inducing statistical correlation between them despite no direct causal relation.²⁵ For prediction, their correlation is just as good as if they were causally linked.

17. Fed. R. Evid. 401 ("evidence is relevant if: (a) it has any tendency to make a fact more or less probable than it would be without the evidence; and (b) the fact is of consequence in determining the action").
18. See Kurt S. Schulzke and Gerlinde Berger-Walliser, Toward a Unified Theory of Materiality in Securities Law, 56 Colum. J. Transnat'l L. 6, 51 (2017) (stating that Bayesian "probability is a quantitative measure of subjective belief based on objective evidence"); Reference Manual on Scientific Evidence 241 (contrasting frequentist/objectivist and Bayesian probability theories).
19. See Schulzke and Berger-Walliser at 21 (discussing Matrixx Initiatives, Inc. v. Siracusano, 563 U.S. 27 (2011), blessing, inter alia, inference of causal relationships with or without "statistical significance").
20. Pearl and Mackenzie at 29, 62.
21. Pearl and Mackenzie at 30 ("The owl can be a good hunter without understanding why the rat always goes from point A to point B.").
22. Trial Transcript p. 915.
23. Probabilities differentiate causal Bayesian networks from SEMs. See Judea Pearl, On Structural Equations versus Causal Bayes Networks (Dec. 7, …), …equations-versus-causal-bayes-networks/.
24. See Pearl and Mackenzie at 114; Catalog of Bias, Confounding, …
25. Ibid.
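To see why a common cause makes correlation "just as good" for prediction, consider a minimal R sketch (R being the tool the article's figures reference). The data below are simulated for illustration only; they are not the case data.

```r
# A confounder (rpm) drives both rev and cost; no arrow links rev to cost.
set.seed(1)
rpm  <- rnorm(100, mean = 500, sd = 50)   # common cause (parent node)
rev  <- 0.6 * rpm + rnorm(100, sd = 10)   # child 1
cost <- 0.5 * rpm + rnorm(100, sd = 10)   # child 2

cor(rev, cost)                        # strong correlation, no direct causation
fit <- lm(cost ~ rev)                 # regression still yields a usable
predict(fit, data.frame(rev = 300))   # *predictive* formula for cost
```

The fitted line predicts cost from rev even though neither causes the other, which is all that predictive relevance requires.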
Unfortunately for Plaintiff, when Damages Expert said, "revenue drives the cost," Rebuttal Expert and the appellate court conjured economic heresy.²⁶ Damages Expert could have avoided this trap by regressing revenues on costs, thereby nodding in the direction of Rebuttal Expert and the court, followed by a dash of light algebra.²⁷

26. The court found that Damages Expert's "most glaring error" was reversing what, in the court's mistaken view, was an irreversible and causal relationship between cost (which simply must be X) and revenue (which can only be Y). At oral argument, the author of the court's opinion spent 11 minutes on the regression analysis, demanding an answer to the question, "How can revenues drive costs?"
27. Regressing Revenue on Cost² + Trend produces this regression formula: Rev = 98.0659 + 0.0037131982 × Cost² − 9.7384457969 × Trend, or, solved for Cost: Cost = √[(Rev − 98.0659482635 + 9.7384457969 × Trend) / 0.0037131982]. Thus, if Rev = 286.5 and Trend = 11, then Cost = 282.1, and net profit = $4.371 million.

P-Values and R²

Damages Expert wrote: "[T]he p-value explains how much of the data is explained by chance rather than by the relationship of the variables, which means, in this case, that greater than 99.999 percent of the data is explained by the relationship of total costs to total revenue."²⁸ This is a fundamental misinterpretation of "p-value." Plaintiff's counsel was similarly confused.²⁹ In general, a p-value is the probability of observing a test statistic³⁰ at least as extreme as the one at hand, assuming that the related null hypothesis is true.³¹ Stated differently, it is the probability of seeing the data we see (represented in Table 4 by the "test statistics" t or F), given that the null hypothesis is true. Symbolically, p-value = p(data | H0 = true). Drilling down a bit further, Table 4 matches the p-values and test statistics in Table 2 with their respective null hypotheses:

TABLE 4: P-VALUES AND NULL HYPOTHESES (H0) IN DAMAGES EXPERT'S LINEAR REGRESSION TABLE 2 OUTPUT

| Item | Scope | Null hypothesis | Test statistic | Table 2 p-value |
|------|-------|-----------------|----------------|-----------------|
| 1 | Intercept term | H01: intercept = 0 | t: 1.96605 | 0.08486 |
| 2 | Revenue coefficient | H02: Revenue coefficient = 0 | t: 12.78481 | 1.3211 × 10⁻⁶ |
| 3 | Whole model | H03: coefficients of all predictors = 0 | F: 163.4512 | 1.3211 × 10⁻⁶ |

Thus, p-value #2 says that the probability of observing t ≥ 12.78481 is 0.0000013211,³² given that Revenue is not associated with Cost. Symbolically:

p-value = p(t ≥ 12.78481 | Rev coefficient = 0) = 0.0000013211    (Equation 2)

Therefore, assuming a "significance" benchmark of 0.05 or even 0.00001, we must "reject the null hypothesis" because 0.0000013211 is smaller than 0.00001. Which means exactly nothing of consequence.³³ It does not mean "there is a less than five percent chance Revenue is not associated with Cost," nor "there is a ninety-five percent+ chance that Revenue is associated with Cost," nor even "we are ninety-five percent+ confident that Revenue is associated with Cost."³⁴ By tradition, and because the alternative is too horrible to contemplate, we "behave as if" Revenue is associated with Cost.

28. Damages Expert Report at 67.
29. Plaintiff's Response Memo, para. 62 ("Damages Expert's p-value...meant that the probability of getting a similar result of $253,843,094 by pure chance was less than one hundredth thousand [sic] of one percent").
30. Table 1 reports t and F statistics. t statistic = estimate/standard error. For Revenue, t = 0.77960930/0.06097936 = 12.78481. See Pardoe et al. With a single predictor, as here, F = t²Rev = 163.4512.
31. See Norman Fenton and Martin Neil, Risk Assessment and Decision Analysis with Bayesian Networks 373–374; accord, Federal Judicial Center, Reference Manual on Scientific Evidence (RMSE) at 250 ("p-value merely gives the chance of getting evidence against the null hypothesis as strong as or stronger than the evidence at hand"); but see RMSE at 320 (misinterpreting p-values as "the probability that a coefficient of this magnitude or larger could have occurred by chance if the null hypothesis were true").
32. P-value #3 = p-value #2 because Revenue is the only predictor and is, therefore, the whole model. P-value #1, for the intercept, is typically ignored.
33. See Stephen T. Ziliak and Deirdre N. McCloskey, The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives 4–5 (2008) ("[T]he so-called test of statistical significance does not in fact answer a quantitative, scientific question...It is a philosophical, qualitative test. It does not ask how much. It asks 'whether.'").
34. Fenton and Neil at 373.
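The arithmetic behind Table 4 can be reproduced in R. The sketch below assumes a hypothetical data frame df holding the Cost and Rev series; the point is only that a regression p-value is computed from the t statistic under the null hypothesis, nothing more.

```r
fit   <- lm(Cost ~ Rev, data = df)          # df is hypothetical
coefs <- summary(fit)$coefficients          # estimate, std. error, t, p
t_rev <- coefs["Rev", "t value"]            # t = estimate / standard error
# Two-sided p-value: chance of a t at least this extreme, GIVEN that the
# true Rev coefficient is zero (the null hypothesis)
p_rev <- 2 * pt(-abs(t_rev), df = fit$df.residual)
all.equal(p_rev, coefs["Rev", "Pr(>|t|)"])  # reproduces the reported p-value
```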
At trial, Damages Expert also appeared to misfire on R², giving it p-value-like properties: "My R squared was 0.956...So my significance test is based upon economically plausible variables of 0.95."³⁵ This rendition of p-values comes closer to R², which can be explained as "the percentage of variation in the target explained by variation in the predictors."³⁶ As a side note, in contrast to R², Adj-R² "penalizes" models for "overfitting" (i.e., using too many predictor variables) and should typically be used for multiple-predictor models. This is because a model that tracks its own data too closely typically makes bad predictions on "out-of-sample" data (i.e., data that were not used in developing the regression model).

35. Trial Transcript, Vol. 6: 979.
36. See The Coefficient of Determination, …

Point Estimates and Confidence Intervals

Point estimates (coefficients, variances, and target values) offer limited information because point estimates vary and the probability of any single one is zero.³⁷ Hence, many use confidence or prediction intervals (CIs or PIs)³⁸ in an attempt to bracket the uncertainty: "An X percent confidence interval for a parameter θ is an interval generated by a confidence procedure that, in repeated sampling, has at least an X percent probability of containing the true value of θ for all possible values of θ."³⁹ Damages Expert said nothing about CIs or PIs.⁴⁰ Rebuttal Expert objected and prescribed (perhaps not so correctly) a forecast interval as the "sure" cure for uncertainty: "[O]ne can be ninety-five percent sure" that the true 2008 cost lies within the ninety-five percent [forecast] interval.⁴¹ Damages Expert erred in ignoring uncertainty, but Rebuttal Expert's confidence in PIs was misplaced. While popular, CIs and PIs are no predictive panacea.

Consistent with Jerzy Neyman's guidance since 1937, Richard Morey convincingly demonstrates that: (a) a CI does not identify an interval in which the true value (e.g., Cost) can be found with X percent (e.g., ninety-five percent) confidence; (b) the width of a CI does not indicate the precision of the estimated value; and (c) values outside the CI are no less likely to be "true" than those inside.⁴² In other words, CIs could be excluded under FRE 702(a) or (b) because they are not the product of reliable principles and they convey no useful information; therefore, they are incapable of helping the trier of fact.⁴³ CIs are called "confidence" intervals only because confidence procedures generate them, not because they confidently demark the plausible value range of a point estimate.⁴⁴ Thus, they can neither confirm nor deny the plausibility of Damages Expert's point estimate. For a plaintiff (or defendant) carrying the burden of proving (or disproving) damages, CIs and PIs are useless.

37. See, e.g., Fenton and Neil at 301.
38. Forecast (a.k.a. prediction) intervals, or PIs, bracket a single point estimate (more variance), whereas CIs bracket a population mean (less variance). Thus, PIs tend to be wider than CIs.
39. See Richard D. Morey et al., The fallacy of placing confidence in confidence intervals, 23 Psychon. Bull. Rev. 103, 104 (2015) (citing Jerzy Neyman, 1937).
40. Damages Expert Report, Tab 20.
41. Rebuttal Expert Declaration at 12.
42. See Morey et al. at 104–106 (declaring that CIs are nothing more than output of "confidence procedures" that contain the true value in a fixed proportion of samples); accord, Robert Nau, Statistical …
43. See FRE 702 and Daubert.
44. See Richard D. Morey, Example 1: The Lost Submarine, …org/papers/confidenceIntervalsFallacy/lostsub.html (interactive simulation of CI flaws).
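In R, the two intervals of footnote 38 come from the same predict() call. A sketch, again assuming a hypothetical df, shows why PIs run wider than CIs: the former brackets a single new Cost value, the latter only the mean response.

```r
fit <- lm(Cost ~ Rev, data = df)                 # df is hypothetical
new <- data.frame(Rev = 286.5)
predict(fit, new, interval = "confidence", level = 0.95)  # brackets the mean response
predict(fit, new, interval = "prediction", level = 0.95)  # brackets one new value (wider)
```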
Model Misspecification

George Box wrote, "[T]he question you need to ask is not 'Is the model true?' (it never is) but 'Is the model good enough for this particular application?'"⁴⁵ Models can be misspecified in three primary ways: under-specification (omission of variables or transformations), over-specification (inclusion of redundant variables), or extra-specification (inclusion of extraneous variables).⁴⁶

Damages Expert did not assess misspecification. Rebuttal Expert, using Ramsey's RESET test,⁴⁷ concluded that variables were missing and added a time proxy, which he called "Trend."⁴⁸ While RESET clearly can detect omitted nonlinear (e.g., exponential or logarithmic) transformations of variables already in the model,⁴⁹ its ability to detect other omitted variables is disputed.⁵⁰ In this connection, neither expert mentioned residual plots (Figure 4, R: car package). These practically shout, "linearity violation: need exponent!"⁵¹

FIGURE 4: DAMAGES EXPERT (LEFT) AND REBUTTAL EXPERT (RIGHT) RESIDUALS VS. FITTED VALUES PLOTS

45. George E. P. Box, Alberto Luceño, and Maria del Carmen Paniagua-Quinones, Statistical Control by Monitoring and Adjustment (2009).
46. See Iain Pardoe, Laura Simon, and Derek Young, STAT 501, Regression …
47. Rebuttal Expert Deposition, p. 42–43.
48. Rebuttal Expert Declaration, p. 10. RESET does not identify or prioritize candidate variables. Rebuttal Expert also modeled Cost ~ Rev + Trend + Trend². His Trend² and Trend models ranked #17 and #18 of 31 models, with Cp = 39.5 and 43.1, respectively. For pedagogical purposes, the Trend model (Model 9) is discussed here, while the Trend² model is not.
49. See, e.g., Jeffrey M. Wooldridge, Introductory Econometrics: A Modern Approach 306 (5th ed. 2013) (RESET tests for functional form misspecification); accord, SAS 9.2 User Guide (recommending RESET as a test for omitted nonlinear transformations of already-included predictors), …htm#etsug_autoreg_sect025.htm.
50. See Wooldridge (2013) at 307 (stating that the RESET test "has no power for detecting omitted variables whenever they have expectations that are linear in the included independent variables in the model" and that "RESET is a functional form test, and nothing more"); but see Simon Peters, On the use of the RESET test in micro-econometric models, 7 Applied Economics Letters 361, 364 (2000) (suggesting that RESET may detect omitted variables); Peter Kennedy, A Guide to Econometrics 79–80 (4th ed. 1998) (RESET tests for omitted variables).
51. See Pardoe et al., Identifying Specific Problems Using Residual Plots, … At deposition, Rebuttal Expert noted functional form misspecification as a possibility. Rebuttal Expert Deposition, p. 44, 47.
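Both diagnostics are one-liners in R. A minimal sketch, assuming the same hypothetical df (resettest() is from the lmtest package; residualPlots() is from the car package referenced in Figure 4):

```r
library(lmtest)   # resettest()
library(car)      # residualPlots()

fit <- lm(Cost ~ Rev, data = df)   # df is hypothetical
# Ramsey RESET: adds powers of the fitted values and tests their joint
# significance; a small p-value flags functional form misspecification
# but does not say which variable or transformation is missing
resettest(fit, power = 2:3, type = "fitted")
residualPlots(fit)                 # curvature here "shouts" for an exponent
```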
To find the missing term, we can quickly create a portfolio of all possible models by running an all subsets regression (R: olsrr package) on Cost ~ Rev + Rev² + Rev³ + Trend + Trend². This procedure generates Adj-R², AIC, and Mallows Cp⁵² metrics for each model in the portfolio. Table 5 rank-orders the top ten of these models, with Model 16 (with Rev³) ranked number one on all three metrics⁵³ and capturing the bias- and variance-minimizer crown as well.⁵⁴

TABLE 5: OLSRR ALL SUBSETS REGRESSION OUTPUT
Acceptable candidates have small Cp (i.e., Cp < p), where p = n + 1 (n = number of predictors); lowest Cp wins.

| Rank | Model | N Predictors | Predictors | Adj-R² | Cp | AIC |
|------|-------|--------------|------------|--------|-----|-----|
| 1 | 16 | 3 | Rev Rev³ Trend | 0.998292 | 2.046858 | 56.74539 |
| 2 | 17 | 3 | Rev Rev² Trend | 0.998008 | 2.721485 | 58.28721 |
| 3 | 26 | 4 | Rev Rev³ Trend Trend² | 0.99797 | 4.008702 | 58.65065 |
| 4 | 27 | 4 | Rev Rev² Rev³ Trend | 0.997951 | 4.046758 | 58.74514 |
| 5 | 18 | 3 | Rev Rev² Trend² | 0.997329 | 4.329458 | 61.21812 |
| 6 | 28 | 4 | Rev Rev² Trend Trend² | 0.99776 | 4.42284 | 59.6338 |
| 7 | 19 | 3 | Rev Rev³ Trend² | 0.997076 | 4.928158 | 62.12192 |
| 8 | 31 | 5 | Rev Rev² Rev³ Trend Trend² | 0.997468 | 6 | 60.62892 |
| 9 | 29 | 4 | Rev Rev² Rev³ Trend² | 0.996804 | 6.311544 | 63.18978 |
| 10 | 20 | 3 | Rev² Rev³ Trend | 0.995722 | 8.13808 | 65.92896 |

Table 6 confirms the "statistical significance" of Model 16 and its predictors.

TABLE 6: MODEL 16: COST ~ REV + REV³ + TREND

52. Large Adj-R², and small Cp and AIC, are generally best.
53. The models in the Damages Expert Report and Rebuttal Expert Report appear as Models 1 and 9, respectively. For parsimony here, the controversial "hierarchy principle," which some statisticians claim requires inclusion in the model of all predictors with exponents smaller than the largest, is disregarded.
54. See Pardoe et al., Best Subsets Regression, Adjusted R-Sq, Mallows Cp …

Model 16 predictions and PIs appear in Table 7 for two revenue numbers: Damages Expert's estimate and Plaintiff's 2008 $339 million budget. The latter predicts a Cost of $312 million, yielding $27 million in net profits. Notably, the ninety-five percent Cost PI does not bracket Revenue.

TABLE 7: PREDICTED COSTS AND PIs FOR TABLE 5 MODEL 16

| Revenue | Pred Cost | Lower (95% PI) | Upper (95% PI) |
|---------|-----------|----------------|----------------|
| 286.500 | 286.306 | 275.200 | 297.413 |
| 339.000 | 312.166 | 301.166 | 322.407 |
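Tables 5 and 7 can be generated with a few lines of R, again assuming a hypothetical df; column names and print layout follow the olsrr package and may vary by version.

```r
library(olsrr)
# Candidate pool from the article: Cost ~ Rev + Rev^2 + Rev^3 + Trend + Trend^2
# (I() protects the powers inside the formula)
full    <- lm(Cost ~ Rev + I(Rev^2) + I(Rev^3) + Trend + I(Trend^2), data = df)
subsets <- ols_step_all_possible(full)   # fits all 31 predictor subsets
subsets                                  # prints Adj-R2, Mallows Cp, AIC per model

# Model 16 predictions with 95% PIs, as in Table 7 (Trend = 11 is the
# 2008 value per footnote 27)
m16 <- lm(Cost ~ Rev + I(Rev^3) + Trend, data = df)
predict(m16, data.frame(Rev = c(286.5, 339), Trend = c(11, 11)),
        interval = "prediction", level = 0.95)
```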
Model 16 looks promising. However, the fan pattern in Figure 5 signals trouble. With data so sparse, drawing a firm conclusion is difficult; still, the residual "fan" could be disqualifying because it appears to violate the homoscedasticity (a.k.a. "equal variance") assumption that linear regression requires.

FIGURE 5: MODEL 16 PLOTS OF RESIDUALS VS. FITTED VALUES
The fan pattern signals non-constant variance, violating the "E" (equal variance) LINE assumption.

Influential Outliers

Influential outliers can "pull" the regression line off track, potentially skewing results. Neither expert addressed outlier influence, though Damages Expert repeatedly flagged 2007's high expenses and low profits (see Fig. 1) as an anomaly.⁵⁵ Figures 6 and 7, DFbeta plots quantifying the impact of influential outliers on each formula coefficient,⁵⁶ show Obs. 8 (2005) significantly moving the Rev coefficient in Model 1 (Damages Expert) and Model 9 (Rebuttal Expert).

FIGURE 6: MODEL 1 REV DFbetas
FIGURE 7: MODEL 9 REV DFbetas

Similarly, Figure 8 (four panels) shows Model 16 DFbeta plots for each parameter, revealing that Obs. 7 (2004) influences Model 16 but, based on the shorter Y axes, less so than Obs. 8 in Models 1 and 9 (Figs. 6 and 7).

FIGURE 8: MODEL 16 DFbetas SHOWING OBS. 7 AS MOST INFLUENTIAL

A proper response to outliers (drop, amend, or let be) is context dependent. Outliers may point to real features of the cost- or income-producing process, or to measurement or recording errors. Rerunning the model after correcting errors may move the influence metrics, especially where, as here, the data are sparse. Observations may be dropped if, in context, a persuasive case can be made that the dropped observations are not representative of the population and that the remaining data are sufficient to support the model.

55. See, e.g., Damages Expert Report at 65.
56. See …intro.html#dfbetas-panel.
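The DFbeta diagnostics in Figures 6 through 8 are equally scriptable. A sketch under the same hypothetical df:

```r
library(car)                   # dfbetasPlots()
m16 <- lm(Cost ~ Rev + I(Rev^3) + Trend, data = df)   # df is hypothetical
dfbetas(m16)                   # one row per observation, one column per coefficient
# A common flag: |DFBETAS| > 2/sqrt(n) marks observations that move a coefficient
influence.measures(m16)        # DFBETAS plus DFFITS, Cook's distance, hat values
dfbetasPlots(m16)              # per-coefficient plots like Figures 6 through 8
```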
PRACTICE POINTERS

This case offers important lessons for experts who wish to use linear regression. For the wrong reasons, the court was right that Damages Expert's regression does not "do anything" admissible under FRE 401 and 702. Damages Expert's regression failed, in part, because of operator error. But regression itself is also to blame. Linear regression is technically delicate, cause-blind, and incapable of accurately or effectively quantifying and communicating uncertainty. The fact that a widely respected federal appellate judge was so confused is strong evidence of the hazards of regression and of the opportunity for educator-experts to make a difference. This case suggests five central takeaways.

First, given the delicacy of the required LINE assumptions, and with point estimates, PIs, and p-values discredited, linear regression is easy prey for rebuttal. Experts who choose regression should prepare for a vigorous counterattack at trial and on appeal. In part, this requires properly and transparently addressing uncertainty.

Second, attorneys and judges, even the most statistics-savvy, know, at best, only enough regression to be dangerous. Thus, in their reports and testimony, experts must carefully lay the groundwork for appeal by proactively dispelling popular regression myths. No judge or jury should be left thinking that the equals operator in a regression formula signifies causation, that a single-point estimate provides an adequate factual basis for a damages award, that p-value "significance" benchmarks are chosen scientifically, or that p-values represent anything more than the probability of the evidence given the truth of the related null hypothesis.

Third, experts must learn linear regression well enough to fulfill their essential (re)education role.

Fourth, experts should explicitly cite AICPA FVS practice aids and the Federal Judicial Center's Reference Manual on Scientific Evidence to support their reports and testimony. In this case, Damages Expert cited neither, thereby giving the court license to discredit his model. While Damages Expert swayed the trial judge and jury, the appeals court was not convinced.

Finally, as the court wrote, linear regression was unnecessary in this case and, more generally, is overused by experts. Experts should consider alternatives. Bayesian networks, a specimen of which appears in Figure 9, deserve a closer look.

FIGURE 9: ILLUSTRATIVE BAYESIAN NETWORK MODEL
Plaintiff's Damages Linked to Revenue ~ Cost Regression Formula

As we shall see in Part II, Figure 9's interactive Bayesian network model offers a user-friendly, transparent causal structure that can easily be built by a subject-area expert. Bayesian networks work with big data, small data, or expert knowledge alone. Consulting or testifying with the aid of such models, experts can confidently educate counsel, judges, juries, arbitrators, and negotiating counterparties about the model's uncertainty, node by node. Bayesian networks are ideally suited to deal gracefully and transparently with scant-data scenarios like the one encountered in this case. They can work with or without linear regression and its LINE assumptions, p-values, and PIs. A hallmark of Bayesian networks is their transparent, engaging presentation of full probability distributions over plausible values of the target variable and related predictors. Bayesian networks make zero-probability point estimates obsolete, empower factfinders to reliably estimate values based on available evidence, and are more likely to produce fair settlements and appeal-proof jury awards.
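Part II develops this model in detail. As a preview, the sketch below builds a toy two-node network in R with the bnlearn package (the article does not name a tool, so bnlearn is an assumption) and reads off a full distribution over Cost given revenue evidence, rather than a zero-probability point estimate.

```r
library(bnlearn)
# Toy structure echoing Figure 9: Rev is the parent of Cost
dag <- model2network("[Rev][Cost|Rev]")
fit <- bn.fit(dag, data = df[, c("Rev", "Cost")])   # df is hypothetical
# Sample the distribution of Cost given evidence that Rev is near 286.5
sims <- cpdist(fit, nodes = "Cost", evidence = (Rev > 280) & (Rev < 293))
quantile(sims$Cost, c(0.025, 0.5, 0.975))  # a full value range, not a point
```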
Kurt Schulzke, JD, CPA, CFE, teaches forensic accounting and audit analytics at the University of North Georgia. He has published on materiality, expert witnessing, revenue recognition, and Benford's Law in the Columbia Journal of Transnational Law, Vanderbilt Journal of Transnational Law, Tennessee Journal of Business Law, Journal of Forensic Accounting Research, and the Columbia Law Blue Sky Blog. With an MS in applied statistics, he is equally adept as counsel, expert witness, or neutral in valuation-related matters.