Abstract: In this paper, a toolkit for designing vision-based gesture interactions is developed. First, an abstract model for non-contact devices is proposed. Then, based on a data flow diagram method and an interactive learning approach, the IEToolkit is presented. The toolkit is designed around the attributes of vision-based interaction and shields developers from the underlying details of the computer vision algorithms. It has three main characteristics: a scalable interface that allows developers to add new classifiers, a unified management mechanism that provides dynamic configuration for all classifiers, and a visual user interface that supports the definition of high-level semantic gestures. Finally, several prototypes are presented. Experimental results show that the IEToolkit provides a unified platform and a general solution for vision-based hand gesture games.
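To make the first two characteristics concrete, the sketch below shows one way a classifier plug-in interface with unified, dynamically configurable management could be structured. This is a minimal illustration only, not the IEToolkit's actual API; all names here (Classifier, ClassifierRegistry, FistClassifier) are hypothetical.

```python
# Hypothetical sketch of an extensible classifier interface with a unified
# registry that supports dynamic (runtime) configuration. Names are invented
# for illustration and do not reflect the IEToolkit's real API.
from abc import ABC, abstractmethod
from typing import Any, Dict, Type


class Classifier(ABC):
    """Base class a developer subclasses to add a new gesture classifier."""

    @abstractmethod
    def classify(self, frame: Any) -> str:
        """Map a camera frame to a gesture label."""

    def configure(self, params: Dict[str, Any]) -> None:
        """Accept runtime configuration, e.g. detection thresholds."""
        for key, value in params.items():
            setattr(self, key, value)


class ClassifierRegistry:
    """Unified manager: registers classifiers and configures them at runtime."""

    def __init__(self) -> None:
        self._classifiers: Dict[str, Classifier] = {}

    def register(self, name: str, cls: Type[Classifier]) -> None:
        self._classifiers[name] = cls()

    def configure(self, name: str, params: Dict[str, Any]) -> None:
        self._classifiers[name].configure(params)

    def classify_all(self, frame: Any) -> Dict[str, str]:
        return {name: c.classify(frame) for name, c in self._classifiers.items()}


class FistClassifier(Classifier):
    """Example plug-in; a real one would run a vision algorithm on the frame."""

    threshold: float = 0.5

    def classify(self, frame: Any) -> str:
        return "fist" if frame.get("fist_score", 0.0) > self.threshold else "none"


registry = ClassifierRegistry()
registry.register("fist", FistClassifier)
registry.configure("fist", {"threshold": 0.7})      # dynamic configuration
print(registry.classify_all({"fist_score": 0.8}))   # {'fist': 'fist'}
```

Under this kind of design, adding a new gesture only requires subclassing the base classifier and registering it, while the registry gives the toolkit a single point of control for configuring every classifier, consistent with the abstract's description of a scalable interface and unified management mechanism.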