Thank you for the opportunity to share details about my capstone project exploring innovations that push the boundaries of existing technologies and methods. This final undertaking for my degree has allowed me to combine my passions for research, engineering, and making meaningful contributions.
My project centers around developing an affordable and accessible prosthetic arm controlled using electromyography (EMG) signals from the residual limb. Current EMG prosthetics on the market can cost tens of thousands of dollars, putting them out of reach for many who could benefit. I set out to create a prototype under $1,000 that is open-source so others may build upon the work.
The most significant innovation lies in the EMG sensor array and the machine learning algorithms that translate muscle contractions into discrete commands. Existing products typically use one to eight surface EMG sensors placed in a sock or sleeve over targeted residual limb muscles. While functional, the placement and number of sensors limit the range of simultaneous motions that can be intuitively controlled.
For my design, I 3D printed a flexible sensor array containing 32 miniature EMG sensors arranged in a high-density grid. This allows for detecting subtle contractions across a much larger area of the residual limb. I then applied advanced machine learning techniques to map complex patterns of muscle activation to simultaneous movements like gripping while rotating the wrist. Preliminary testing shows the array can distinguish over a dozen unique gestures with high accuracy.
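The core of that pipeline is slicing the multi-channel EMG stream into short windows, extracting per-channel features, and classifying each window as a gesture. The sketch below illustrates the idea on synthetic data, assuming a 1 kHz sampling rate and 200 ms windows (plausible but hypothetical parameters); it stands in a simple nearest-centroid decoder for the project's actual learned model, which the write-up does not specify in detail.

```python
import numpy as np

FS = 1000          # assumed sampling rate in Hz (hypothetical)
WINDOW_MS = 200    # assumed analysis window length (hypothetical)
N_CHANNELS = 32    # sensors in the printed grid

def window_features(emg, fs=FS, window_ms=WINDOW_MS):
    """Slice a (channels, samples) EMG recording into windows and
    compute one root-mean-square value per channel per window."""
    n = int(fs * window_ms / 1000)
    n_windows = emg.shape[1] // n
    feats = []
    for w in range(n_windows):
        seg = emg[:, w * n:(w + 1) * n]
        feats.append(np.sqrt(np.mean(seg ** 2, axis=1)))
    return np.array(feats)  # shape: (n_windows, channels)

class CentroidClassifier:
    """Stand-in gesture decoder: assign each window to the nearest
    class centroid in feature space."""
    def fit(self, X, y):
        self.labels = np.unique(y)
        self.centroids = np.array(
            [X[y == c].mean(axis=0) for c in self.labels])
        return self

    def predict(self, X):
        d = np.linalg.norm(
            X[:, None, :] - self.centroids[None, :, :], axis=2)
        return self.labels[np.argmin(d, axis=1)]

# Demo on synthetic data: two fake "gestures" that each activate a
# different subset of channels.
rng = np.random.default_rng(0)

def fake_gesture(active):
    emg = rng.standard_normal((N_CHANNELS, FS)) * 0.05  # baseline noise
    emg[list(active)] += rng.standard_normal((len(active), FS))
    return emg

X_a = window_features(fake_gesture(range(0, 8)))
X_b = window_features(fake_gesture(range(8, 16)))
X = np.vstack([X_a, X_b])
y = np.array([0] * len(X_a) + [1] * len(X_b))
clf = CentroidClassifier().fit(X, y)
pred = clf.predict(window_features(fake_gesture(range(0, 8))))
print(pred)
```

The same window-then-classify structure carries over to a learned model; only the feature extractor and decoder change.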
Another novel aspect is using deep neural networks for real-time, on-board gesture classification instead of cloud-based processing. Most EMG prosthetics transmit raw sensor data over Bluetooth to an accompanying mobile app or computer for gesture decoding. While this offloads computing demands, it introduces latency from wireless transmission and requires an always-available external device.
My prototype collects EMG signals through a small microcontroller and runs inferences using a trained convolutional neural network model stored locally in memory. This embedded processing enables snappy, imperceptible latency between intent and response. It's also ideal for applications demanding strict privacy, such as medical or defense use cases. Early tests clock gesture classification at under 30 ms, which meets or exceeds clinical standards.
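Measuring per-inference latency is the basic check behind a number like that. The sketch below is a desktop stand-in, not the microcontroller firmware: it times a tiny dense-layer classifier (a hypothetical stand-in for the trained CNN, with assumed dimensions of 32 features and 12 gestures) by averaging over many forward passes.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in model: one dense layer mapping 32 per-channel
# features to 12 gesture classes (dimensions are assumptions).
W = rng.standard_normal((32, 12)).astype(np.float32)
b = np.zeros(12, dtype=np.float32)

def classify(features):
    """Single forward pass; argmax picks the gesture index."""
    return int(np.argmax(features @ W + b))

# Average per-inference latency over many repeated runs.
x = rng.standard_normal(32).astype(np.float32)
N = 1000
t0 = time.perf_counter()
for _ in range(N):
    classify(x)
latency_ms = (time.perf_counter() - t0) / N * 1000
print(f"mean inference latency: {latency_ms:.3f} ms")
```

On real hardware the same idea applies with a hardware timer around the model's invoke call; averaging over many windows smooths out scheduler jitter.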
The proof-of-concept prototype demonstrates that fully self-contained EMG sensing, processing, and actuation is achievable using affordable off-the-shelf components. Total project costs came in just under $800 without manufacturing optimizations. For the control interface, I simplified the gesture-mapping process by designing an augmented reality application that visually guides users through calibration.
The design's main current limitation is battery life, as continuous neural network inference and actuation drain power quickly. Future work will focus on implementing ultra-low-power machine learning techniques and efficient embedded software to extend runtime towards clinical viability. Other next steps include full systems integration, additional training data collection to enlarge the gesture dictionary, and testing with potential end-users to gather meaningful feedback.
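One standard low-power technique the future work could draw on is post-training weight quantization: storing model weights as 8-bit integers plus a scale factor cuts memory and allows cheaper integer arithmetic on a microcontroller. The sketch below shows symmetric per-tensor int8 quantization on a random weight matrix; it is an illustration of the general technique, not the project's implementation.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor 8-bit quantization: store int8 weights
    plus one float scale, cutting weight memory roughly 4x versus
    float32."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for comparison."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.standard_normal((32, 12)).astype(np.float32)  # assumed layer shape
q, scale = quantize_int8(w)
err = np.max(np.abs(dequantize(q, scale) - w))
print(f"memory: {w.nbytes} B -> {q.nbytes} B, max abs error {err:.4f}")
```

The worst-case rounding error per weight is half the scale factor, which is usually negligible next to EMG signal noise; frameworks for embedded inference apply the same idea per layer.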
This capstone project brought together high-resolution EMG sensing, on-device gesture classification using deep learning, and intuitive augmented reality calibration in an accessible low-cost prototype. These innovations open up opportunities to expand prosthetic access globally and push the field of intuitive, affordable myoelectric control further. I hope sharing details of this work provides some useful context into how final year projects can pursue meaningful technical challenges that break new ground.
