ABSTRACT

Recent studies have focused on single-occupant and coarse-grained activity recognition (AR), while fine-grained multi-occupancy AR has received little attention. This paper presents a semantic-enabled approach to fine-grained multi-occupancy AR and to estimating the AR confidence level (ARCL). The approach leverages ontology modeling techniques for generic and personalized descriptions of activities of daily living (ADL), incremental semantic reasoning, and belief-based importance values for estimating the confidence level. To calculate ARCL, the segmented sensor observations and the candidate activity class are analyzed at both coarse- and fine-grained levels, and importance values are specified for each key action and context of a given ADL. ARCL is calculated in two parts: a coarse-grained confidence level (CCL) and a fine-grained confidence level (FCL). The CCL algorithm extracts and weighs the location, key objects, and time interval of the candidate activity, whereas the FCL algorithm inspects the occupant’s interactions with objects to detect fine-grained actions against predefined thresholds. In addition, a multi-occupancy AR (MAR) approach is proposed to detect, identify, and associate occupants’ actions with sensor observations. MAR leverages time window analysis and a location-based approach to detect multi-occupancy activities, and fingerprint sensors to identify occupants and associate their object interactions. Finally, a fusion of ambient and dense sensors for non-invasive and non-obtrusive data collection is proposed and applied to a kitchen application scenario to illustrate its use.
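
To make the two-level confidence calculation concrete, the minimal Python sketch below shows one plausible way CCL, FCL, and their combination into ARCL could be computed. All identifiers, importance values, thresholds, and the convex-combination rule are illustrative assumptions, not the paper's formulation.

```python
"""Illustrative sketch of a two-level confidence calculation (CCL + FCL).
All names, weights, thresholds, and the combination rule are assumptions
made for illustration; they do not reproduce the paper's definitions."""

# Hypothetical belief-based importance values for a "MakeTea" ADL.
COARSE_IMPORTANCE = {            # coarse-grained evidence -> importance
    "location:kitchen": 0.3,
    "object:kettle": 0.4,
    "time:morning": 0.3,
}
FINE_IMPORTANCE = {              # fine-grained action -> (importance, min duration in s)
    "fill_kettle": (0.5, 3.0),
    "pour_water": (0.5, 2.0),
}

def coarse_confidence(observed: set[str]) -> float:
    """CCL: sum the importance of the matched location, key objects, and time interval."""
    return sum(w for item, w in COARSE_IMPORTANCE.items() if item in observed)

def fine_confidence(interactions: dict[str, float]) -> float:
    """FCL: credit a fine-grained action only if its object-interaction
    duration exceeds the predefined threshold."""
    score = 0.0
    for action, (weight, threshold) in FINE_IMPORTANCE.items():
        if interactions.get(action, 0.0) >= threshold:
            score += weight
    return score

def arcl(observed: set[str], interactions: dict[str, float], alpha: float = 0.5) -> float:
    """Combine the two levels; a simple convex combination is assumed here."""
    return alpha * coarse_confidence(observed) + (1 - alpha) * fine_confidence(interactions)

if __name__ == "__main__":
    obs = {"location:kitchen", "object:kettle", "time:morning"}
    acts = {"fill_kettle": 4.2, "pour_water": 1.1}  # observed interaction durations (s)
    print(f"ARCL = {arcl(obs, acts):.2f}")  # 0.75: full CCL, partial FCL
```

Gating each fine-grained action on a duration threshold mirrors the abstract's use of predefined thresholds to detect fine-grained actions from object interactions.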
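
Similarly, the following sketch illustrates the kind of time-window analysis and fingerprint-based association that MAR is described as performing. The event fields, window size, and the last-identity-per-location matching rule are assumptions for illustration only.

```python
"""Illustrative sketch of multi-occupancy association: fixed time windows,
location grouping, and fingerprint readings used to bind object
interactions to occupants. All field names and rules are assumptions."""

from dataclasses import dataclass

@dataclass
class Event:
    t: float                      # timestamp (s)
    location: str                 # room where the sensor fired
    sensor: str                   # e.g. "fridge" or "fingerprint:fridge"
    occupant: str | None = None   # set only on fingerprint-sensor readings

def windows(events: list[Event], size: float = 30.0):
    """Group events into fixed, non-overlapping time windows."""
    events = sorted(events, key=lambda e: e.t)
    if not events:
        return
    start, bucket = events[0].t, []
    for e in events:
        if e.t - start >= size:
            yield bucket
            bucket, start = [], e.t
        bucket.append(e)
    if bucket:
        yield bucket

def associate(window: list[Event]) -> dict[str, list[str]]:
    """Assign each object interaction to the occupant most recently
    identified by a fingerprint sensor in the same location."""
    last_identity: dict[str, str] = {}       # location -> occupant
    assignments: dict[str, list[str]] = {}
    for e in window:
        if e.occupant is not None:            # fingerprint reading identifies someone
            last_identity[e.location] = e.occupant
        else:                                 # plain object-interaction sensor
            who = last_identity.get(e.location, "unknown")
            assignments.setdefault(who, []).append(e.sensor)
    # Interactions attributed to distinct occupants within one window
    # suggest concurrent, multi-occupancy activities.
    return assignments

if __name__ == "__main__":
    stream = [
        Event(0.0, "kitchen", "fingerprint:fridge", occupant="alice"),
        Event(2.0, "kitchen", "fridge"),
        Event(3.0, "lounge", "fingerprint:tv", occupant="bob"),
        Event(5.0, "lounge", "tv"),
    ]
    for w in windows(stream):
        print(associate(w))  # {'alice': ['fridge'], 'bob': ['tv']}
```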