ABSTRACT

Imagine a meteorologist preparing a weather forecast. In addition to years of experience and a vast store of domain knowledge, the forecaster has access to satellite images, to computer-generated weather models and programs to display them in a variety of ways, and to an assortment of special-purpose tools that provide additional task-relevant data. There is no shortage of data, yet despite this array of resources, the task remains very challenging. One source of complexity is the uncertainty inherent in these data, uncertainty that takes many forms. Why are two weather models making different predictions? Are the models based on many observations or just a few? Are there enough observations in a given model to trust it? Is one model more reliable than another in certain circumstances, and if so, what are they? Which one, if either, should be believed? How long ago were these data collected? How have things changed since the data were originally displayed? What is the real location of this front, and how is it affected by other changing variables, such as wind direction and speed, which may also have changed?