A framework for evaluating epidemic forecasts

dc.contributor.author: Tabataba, Farzaneh Sadat
dc.contributor.author: Chakraborty, Prithwish
dc.contributor.author: Ramakrishnan, Naren
dc.contributor.author: Venkatramanan, Srinivasan
dc.contributor.author: Chen, Jiangzhuo
dc.contributor.author: Lewis, Bryan L.
dc.contributor.author: Marathe, Madhav V.
dc.contributor.department: Computer Science
dc.date.accessioned: 2017-08-03T20:01:25Z
dc.date.available: 2017-08-03T20:01:25Z
dc.date.issued: 2017-05-15
dc.date.updated: 2017-08-03T10:58:56Z
dc.description.abstract: Background: Over the past few decades, numerous forecasting methods have been proposed in the field of epidemic forecasting. Such methods can be classified into categories such as deterministic vs. probabilistic and comparative vs. generative. In some of the more popular comparative methods, researchers compare observed epidemiological data from the early stages of an outbreak with the output of proposed models to forecast the future trend and prevalence of the pandemic. A significant problem in this area is the lack of standard, well-defined evaluation measures for selecting the best algorithm among competing alternatives, as well as the best configuration for a particular algorithm. Results: In this paper we present an evaluation framework that allows different features, error measures, and ranking schema to be combined when evaluating forecasts. We describe the epidemic features (Epi-features) used to characterize the output of forecasting methods and provide suitable error measures for evaluating the accuracy of the methods with respect to these Epi-features. We focus on long-term predictions rather than short-term forecasting and demonstrate the utility of the framework by evaluating six forecasting methods for predicting influenza in the United States. Our results demonstrate that different error measures lead to different rankings even for a single Epi-feature. Further, our experimental analyses show that no single method dominates the rest in predicting all Epi-features when evaluated across error measures. As an alternative, we provide various Consensus Ranking schema that summarize the individual rankings and thus account for different error measures. Since each Epi-feature presents a different aspect of the epidemic, multiple methods need to be combined to provide a comprehensive forecast. We therefore call for a more nuanced approach to evaluating epidemic forecasts, and we believe that a comprehensive evaluation framework, as presented in this paper, will add value to the computational epidemiology community.
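To make the abstract's workflow concrete, the sketch below illustrates one plausible reading of it: extract an Epi-feature (here, peak incidence) from each method's forecast, score it under several error measures, rank the methods per measure, and form a consensus ranking by mean rank. The method names, the choice of feature and error measures, and the mean-rank consensus are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of feature-wise evaluation with a mean-rank consensus.
import numpy as np

def peak_value(curve):
    """Epi-feature: peak incidence of an epidemic curve."""
    return float(np.max(curve))

# Error measures applied to a scalar Epi-feature (observed vs. predicted).
error_measures = {
    "absolute_error": lambda obs, pred: abs(obs - pred),
    "squared_error":  lambda obs, pred: (obs - pred) ** 2,
    "percent_error":  lambda obs, pred: abs(obs - pred) / obs * 100.0,
}

def rank_methods(observed_curve, forecasts):
    """Rank forecasting methods per error measure, then take a mean-rank consensus."""
    obs_feat = peak_value(observed_curve)
    names = list(forecasts)

    per_measure_ranks = {}
    for m_name, m_fn in error_measures.items():
        errs = {n: m_fn(obs_feat, peak_value(c)) for n, c in forecasts.items()}
        ordered = sorted(names, key=lambda n: errs[n])   # rank 1 = smallest error
        per_measure_ranks[m_name] = {n: r + 1 for r, n in enumerate(ordered)}

    # Consensus: average each method's rank across measures (lower is better).
    consensus = {n: float(np.mean([per_measure_ranks[m][n] for m in error_measures]))
                 for n in names}
    return per_measure_ranks, consensus

# Synthetic weekly incidence curves; a fuller evaluation would repeat this for
# other Epi-features (peak time, onset week, total attack rate, ...).
weeks = np.arange(30)
observed = 1000 * np.exp(-0.5 * ((weeks - 15) / 4) ** 2)
forecasts = {
    "method_A": 900  * np.exp(-0.5 * ((weeks - 15) / 4) ** 2),
    "method_B": 1100 * np.exp(-0.5 * ((weeks - 17) / 5) ** 2),
    "method_C": 1000 * np.exp(-0.5 * ((weeks - 13) / 3) ** 2),
}
ranks, consensus = rank_methods(observed, forecasts)
print(ranks)
print(consensus)
```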
dc.description.version: Published version
dc.format.mimetype: application/pdf
dc.identifier.citation: BMC Infectious Diseases. 2017 May 15;17(1):345
dc.identifier.doi: https://doi.org/10.1186/s12879-017-2365-1
dc.identifier.uri: http://hdl.handle.net/10919/78637
dc.language.iso: en
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.holder: The Author(s)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: A framework for evaluating epidemic forecasts
dc.title.serial: BMC Infectious Diseases
dc.type: Article - Refereed
dc.type.dcmitype: Text

Files

Original bundle
Name: 12879_2017_Article_2365.pdf
Size: 5.18 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 1.5 KB
Format: Item-specific license agreed upon to submission