Using Visual Language Models to Control Bionic Hands: ICAT 2025

We investigate visual language models (VLMs) to enhance object perception and grasp inference for bionic hands. An onboard camera streams visual context to an edge-deployed model, which returns interpretable grasp suggestions with rationales, supporting both control strategies and user interaction.
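One way the "interpretable grasp suggestions" step might look in code is a small parser that turns a VLM's free-text reply into a structured command for the hand controller. This is a hypothetical sketch: the grasp taxonomy, the `grasp: …; rationale: …` response format, and all names here are assumptions for illustration, not the project's actual protocol.

```python
from dataclasses import dataclass

# Hypothetical grasp taxonomy; a real system would match the hand's firmware.
GRASP_TYPES = {"power", "precision", "lateral", "tripod"}

@dataclass
class GraspSuggestion:
    grasp: str       # grasp type to command
    rationale: str   # model's explanation, surfaced to the user

def parse_vlm_response(text: str) -> GraspSuggestion:
    """Parse an assumed reply format: 'grasp: <type>; rationale: <why>'."""
    pairs = (chunk.split(":", 1) for chunk in text.split(";") if ":" in chunk)
    parts = {k.strip().lower(): v.strip() for k, v in pairs}
    grasp = parts.get("grasp", "").lower()
    if grasp not in GRASP_TYPES:
        grasp = "power"  # conservative fallback when the reply is malformed
    return GraspSuggestion(grasp, parts.get("rationale", ""))
```

Structuring the reply this way keeps the VLM's rationale available for display to the user while the grasp field alone drives the controller.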
