Wednesday March 28, 2018 3:00 p.m. Yost 306
Title: Diagnostic Tools for Forecasts
Student: Nana Ama Baffoe
Advisor: Dr. Jenny Brynjarsdottir
Abstract: Forecasting is an important aspect of Statistics, and as a result it is important that our forecasts reflect our uncertainties. Most importantly, our forecasts should be as accurate as possible. But how can forecasters tell whether their probabilistic forecast distributions are as good, or almost as good, as the true distribution, which is unknown to the forecaster most of the time (if not all of the time)? We need a diagnostic tool that tells us how close our probabilistic forecast distributions are to the truth. The verification rank histogram and the probability integral transform (PIT) histogram are the most common diagnostic tools for determining whether probabilistic forecast distributions and observations are well calibrated in the univariate setting. Calibration, in a nutshell, means how compatible the probabilistic forecasts and the observations are. The purpose of this study is to compare the sensitivity to misspecification of four calibration metrics/multivariate ranking methods: the multivariate ranking method, the minimum spanning tree ranking method, the band depth ranking method, and the average ranking method. A simulation study and a case study of the Orbiting Carbon Observatory-2 (OCO-2) data were used.
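To illustrate the univariate diagnostics mentioned in the abstract, the following is a minimal sketch (not taken from the talk) of a verification rank histogram and a PIT histogram, assuming a toy setting with Gaussian observations, a 20-member ensemble forecast, and a Gaussian forecast distribution; the sizes, distributions, and variable names are illustrative only. Under calibration, both histograms should look approximately flat.

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical setup: 1000 forecast cases, each with a 20-member ensemble.
n_cases, n_members = 1000, 20
truth = rng.normal(loc=0.0, scale=1.0, size=n_cases)            # observations
ensemble = rng.normal(loc=0.0, scale=1.0,                       # calibrated ensemble forecasts
                      size=(n_cases, n_members))

# Verification rank histogram: rank of each observation within its ensemble.
# For a calibrated ensemble the rank is uniform on {1, ..., n_members + 1}.
ranks = (ensemble < truth[:, None]).sum(axis=1) + 1

# PIT values for a Gaussian forecast distribution N(0, 1):
# u_i = F_i(y_i) should be Uniform(0, 1) when the forecast is calibrated.
pit = norm.cdf(truth, loc=0.0, scale=1.0)

fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].hist(ranks, bins=np.arange(0.5, n_members + 2), edgecolor="k")
axes[0].set_title("Verification rank histogram")
axes[1].hist(pit, bins=10, edgecolor="k")
axes[1].set_title("PIT histogram")
plt.tight_layout()
plt.show()
```

A U-shaped histogram would suggest an underdispersed forecast, a hump-shaped one an overdispersed forecast; the multivariate ranking methods compared in the talk generalize this idea beyond the univariate setting.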