HandJam is an "instrument" played by showing American Sign Language (ASL) digits, 0-9, to a camera; each digit triggers a corresponding note. I created this project with a team of two others, Michael Chou and Elliot Friesen. We used a NUCLEO-L432KC as our embedded system, which meant scaling everything to fit on an STM32: shrinking our ML model to fit on the device and downscaling the live camera frames to 64 × 64 single-byte pixels so the model could run in real time. The model was trained on a dataset of around 2,000 images of ASL digits and achieved 90-100% accuracy under the right conditions, namely a clear view of the hand against a solid background. Once the model was trained, we played the frequency corresponding to the digit the user's hand was showing to the camera. All the code for this project can be found at the GitHub link below.
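To give a rough sense of how a model like this gets small enough for the L432KC (256 KB flash, 64 KB SRAM), here is a minimal TensorFlow sketch, not our exact architecture: a tiny CNN over 64 × 64 grayscale input, shrunk with post-training quantization via the TFLite converter. The layer sizes, variable names, and training call are illustrative assumptions.

```python
# Illustrative sketch only, not the exact HandJam architecture.
# A tiny CNN over 64x64 single-byte (grayscale) frames, with 10 output
# classes for ASL digits 0-9, shrunk with post-training quantization
# so the resulting .tflite blob can fit in microcontroller flash.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),                # 64x64, 1 byte/pixel
    tf.keras.layers.Rescaling(1.0 / 255),             # bytes -> [0, 1]
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # ASL digits 0-9
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_x, train_y, epochs=20)  # ~2000 labeled ASL-digit images

# Post-training quantization (dynamic-range here; full-integer
# quantization, which some MCU runtimes require, would also need a
# representative_dataset callback on the converter).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
with open("handjam.tflite", "wb") as f:
    f.write(converter.convert())
```

On the device, the quantized model would run under an interpreter (presumably something like TensorFlow Lite for Microcontrollers given the TensorFlow + C stack), and the predicted digit is mapped to a note frequency for the speaker in the C firmware; see the repository linked below for the actual implementation.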
- Github: HandJam
- Hardware: NUCLEO-L432KC, 8Ω Speaker with PAM8302 Amplifier, 7-segment display
- Stack: TensorFlow, Python, C, PlatformIO
