Current 3D printed graphics can convey only limited information through their shapes and textures. We are developing tools that enable users to interact with 3D prints using gestures.
Our previous work explored how to add multiple hotspots to 3D printed models; in this project, we investigate how to make 3D prints more interactive using computer vision.
The current toolkit consists of a tracker and a software program that processes the video stream. A model designer adds a single tracker with fiducial tags to a model and provides content for it. Once the tracker is in place, our system requires only an RGB camera, so it can be easily deployed on many devices, including mobile phones, laptops, and smart glasses. A user accesses the content associated with the model's hotspots by exploring the model with gestures.
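The hotspot-lookup step of such a pipeline can be sketched as follows. This is a minimal illustration, not the toolkit's actual implementation: the hotspot names, coordinates, and content strings are hypothetical, and we assume the fiducial tracker has already been detected and used to transform the user's fingertip position into the model's coordinate frame.

```python
import math

# Hypothetical hotspot table: model-space coordinates -> content.
# Names, positions, and content are illustrative only.
HOTSPOTS = {
    "handle": ((2.0, 0.5, 1.0), "Audio: description of the handle"),
    "spout":  ((5.5, 3.0, 0.2), "Audio: description of the spout"),
}

def lookup_hotspot(fingertip, radius=1.0):
    """Return the content of the hotspot nearest `fingertip`, if it
    lies within `radius` (model units), else None.  `fingertip` is a
    3D point already expressed in the model's coordinate frame, i.e.
    after applying the pose estimated from the fiducial tags."""
    best, best_dist = None, radius
    for name, (center, content) in HOTSPOTS.items():
        d = math.dist(fingertip, center)
        if d <= best_dist:
            best, best_dist = content, d
    return best

print(lookup_hotspot((2.1, 0.4, 1.0)))  # fingertip near "handle"
print(lookup_hotspot((9.0, 9.0, 9.0)))  # no hotspot within radius -> None
```

In a full system, the fingertip position would come from a hand-tracking model running on the RGB video stream, and the lookup would trigger playback of the associated content.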