Once the cameras are calibrated, objects are tracked across the network. Tracking algorithms typically estimate the location and/or the shape of objects over time. Object tracking is a challenging task due to appearance and view changes, nonrigid structures, and occlusions (Polat and Ozden 2006, Qu and Schonfeld 2007).
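To illustrate the kind of recursive state estimation such trackers perform, the following is a minimal sketch of a constant-velocity Kalman filter that tracks an object's image-plane position from noisy detections. It is not any of the cited methods; the motion model, noise covariances, and example detections are illustrative assumptions.

```python
import numpy as np

# Minimal constant-velocity Kalman filter for 2D image-plane tracking.
# State: [x, y, vx, vy]; measurement: detected centroid [x, y].
# All noise parameters below are illustrative assumptions.

dt = 1.0  # time between frames (in frame units)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity motion model
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only position is observed
Q = 0.01 * np.eye(4)   # process noise covariance (assumed)
R = 4.0 * np.eye(2)    # measurement noise covariance (assumed, pixels^2)

def kalman_step(x, P, z):
    """One predict/update cycle given state x, covariance P, detection z."""
    # Predict forward one frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new detection.
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Example: filter a short detection sequence drifting to the lower right.
x = np.array([100.0, 50.0, 0.0, 0.0])       # initial state estimate
P = 10.0 * np.eye(4)
for z in [np.array([102.0, 53.0]), np.array([105.0, 55.0])]:
    x, P = kalman_step(x, P, z)
print(x[:2])  # filtered position estimate
```

The same predict/update structure generalizes to richer state vectors (e.g., adding shape parameters), which is why variants of it appear so often as the estimation core of single-camera trackers.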

Moreover, there is the challenge of establishing object correspondence across cameras (Yinghao et al. 2007). Although early trackers relied primarily on color and appearance (Nummiaro et al. 2003), cameras may have different photometric properties (e.g., brightness and contrast settings), which reduces the effectiveness of these features in the matching process. Varying lighting conditions degrade their reliability further.

In the case of nonoverlapping cameras, the expected transition times across sensors may be used to associate objects across the network (Javed et al. 2003). More recently, trajectory-based object correspondence approaches have been proposed, in which each camera tracks objects independently and correspondence is then established using learned motion features (Sheikh and Shah 2008, Kayumbi et al. 2008). These approaches also simplify the extrapolation of object motion in unobserved regions.
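To make the appearance-matching step concrete, here is a minimal sketch, not Nummiaro et al.'s actual color-based particle filter, that compares normalized RGB histograms of two object crops using the Bhattacharyya coefficient. The patch sizes, bin count, and synthetic brightness offset are assumptions; the point is that a photometric shift between cameras alone lowers the similarity score, which is exactly the failure mode noted above.

```python
import numpy as np

def color_histogram(patch, bins=8):
    """Normalized joint RGB histogram of an HxWx3 uint8 image patch."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3),
                             bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def bhattacharyya(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.sum(np.sqrt(h1 * h2)))

# Example: the same object seen by two cameras, the second one brighter.
rng = np.random.default_rng(0)
patch_a = rng.integers(0, 200, size=(32, 16, 3), dtype=np.uint8)
patch_b = np.clip(patch_a.astype(int) + 40, 0, 255).astype(np.uint8)  # brightness offset

score_same = bhattacharyya(color_histogram(patch_a), color_histogram(patch_a))
score_shifted = bhattacharyya(color_histogram(patch_a), color_histogram(patch_b))
print(score_same, score_shifted)  # the photometric shift lowers the similarity
```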
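For nonoverlapping views, one simple way to exploit expected transition times, in the spirit of Javed et al. (2003) but not their exact formulation, is to score candidate matches by how well the observed exit-to-entry delay fits a learned per-camera-pair delay distribution. The Gaussian model, camera names, and statistics below are assumptions.

```python
import math

# Learned transition-time statistics (seconds) between camera pairs;
# the values here are illustrative assumptions, not learned from real data.
TRANSITION_MODEL = {("cam1", "cam2"): (8.0, 2.0)}  # (mean, std dev)

def transition_likelihood(cam_exit, cam_entry, delay):
    """Gaussian likelihood that an object exiting cam_exit reappears
    in cam_entry after `delay` seconds."""
    mu, sigma = TRANSITION_MODEL[(cam_exit, cam_entry)]
    return math.exp(-0.5 * ((delay - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def best_match(exit_event, entry_events):
    """Pick the entry event whose delay best fits the learned model.
    Events are (object_id, timestamp) tuples; the camera pair is fixed here."""
    return max(entry_events,
               key=lambda e: transition_likelihood("cam1", "cam2",
                                                   e[1] - exit_event[1]))

# An object leaves cam1 at t=100 s; two candidates appear in cam2.
print(best_match(("obj_a", 100.0), [("obj_x", 103.0), ("obj_y", 109.0)]))
# -> ("obj_y", 109.0): a 9 s delay fits the learned 8 s mean better than 3 s.
```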
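Finally, a sketch of trajectory-based correspondence along the lines of, though not identical to, the cited approaches: image trajectories from each camera are mapped into a common ground-plane frame with known homographies and matched by mean point-to-point distance. The homographies and example tracks are illustrative assumptions standing in for real calibration data and learned motion features.

```python
import numpy as np

def to_ground_plane(traj_px, H):
    """Map an Nx2 image trajectory to ground-plane coordinates
    using a 3x3 homography H (assumed known from calibration)."""
    pts = np.hstack([traj_px, np.ones((len(traj_px), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                   # dehomogenize

def trajectory_distance(t1, t2):
    """Mean point-to-point distance between two time-aligned trajectories."""
    n = min(len(t1), len(t2))
    return float(np.mean(np.linalg.norm(t1[:n] - t2[:n], axis=1)))

# Illustrative homographies: camera 1 already in ground coordinates,
# camera 2 shifted by 5 units along x (a stand-in for real calibration).
H1 = np.eye(3)
H2 = np.array([[1.0, 0.0, 5.0],
               [0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])

track_cam1 = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
track_cam2 = np.array([[-5.0, 0.0], [-4.0, 0.5], [-3.0, 1.0]])  # same path, cam 2 frame

g1 = to_ground_plane(track_cam1, H1)
g2 = to_ground_plane(track_cam2, H2)
print(trajectory_distance(g1, g2))  # ~0: the two tracks correspond
```

Once trajectories live in a shared ground-plane frame, extrapolating motion through unobserved regions reduces to extending the ground-plane track, which is the simplification the paragraph above refers to.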