
CASE STUDY

Fitness Companion

On-device exercise analysis with ML-powered form assessment — no data leaves your phone.

82.4% avg form accuracy · 89% on squat · 200 external test videos · 0 network calls

React Native · TypeScript · ONNX Runtime · MediaPipe Pose · expo-sqlite · Python · Keras · scikit-learn · FFmpeg

What It Is

A cross-platform mobile app that records your workout, classifies the exercise, counts reps, and tells you whether your form is safe — all running entirely on your Android phone with zero cloud dependency. Built as a final-year capstone at NUS with a team of four.

My Role

I owned the analytical engine and integrated all four subsystems into a working product. My pipeline is the single service call that connects the vision models, the mobile UI, the database, and my own analysis code into one post-recording workflow.

What I Built

  • A five-stage TypeScript analytical pipeline running on-device: pose validation → temporal smoothing → angle computation → FSM rep counting → ML form assessment
  • Five exercise-specific form quality models (DNN for squat, SVM for bench press, curl, deadlift, lat pulldown) trained in Python, deployed on-device via ONNX Runtime
  • Singleton session management for ONNX Runtime to prevent native memory exhaustion on mobile hardware
  • Five typed repository adapters providing the integration boundary between all subsystems through a shared SQLite database
  • Background Processing Service orchestrating the complete pipeline with stage-level error handling and graceful degradation
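The FSM rep-counting stage can be sketched as a two-state machine with hysteresis. The state names, joint, and angle thresholds below are illustrative assumptions, not the app's actual values:

```typescript
// Minimal rep-counting FSM sketch (illustrative thresholds, not the app's real values).
// A rep is counted on the transition bottom -> top: one full descent and ascent.
type RepState = "top" | "bottom";

function countReps(
  kneeAngles: number[], // smoothed joint angles in degrees, one per frame
  downThreshold = 100,  // below this angle the lifter counts as "at the bottom"
  upThreshold = 160,    // above this angle the lifter counts as "at the top"
): number {
  let state: RepState = "top";
  let reps = 0;
  for (const angle of kneeAngles) {
    if (state === "top" && angle < downThreshold) {
      state = "bottom"; // descent detected
    } else if (state === "bottom" && angle > upThreshold) {
      state = "top";    // full ascent completed: count one rep
      reps += 1;
    }
    // The gap between the two thresholds acts as hysteresis, so landmark
    // jitter around a single threshold cannot double-count a rep.
  }
  return reps;
}
```

The hysteresis band is the design point: with a single threshold, pose-estimation noise near the boundary would toggle the state every frame.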
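The singleton session management can be sketched as a promise-caching map. `Session` and `loadModel` are self-contained stand-ins for ONNX Runtime's session API (in the real app this would be something like `InferenceSession.create`):

```typescript
// Sketch of a per-model singleton session cache. `Session` and `loadModel`
// are stand-ins so the pattern is runnable without native ONNX bindings.
interface Session {
  modelPath: string;
  run(input: number[]): Promise<number[]>;
}

// Stand-in for the expensive native session load.
async function loadModel(modelPath: string): Promise<Session> {
  return { modelPath, run: async (input) => input.map((x) => x * 2) };
}

const sessions = new Map<string, Promise<Session>>();

// Returns the one session per model file. Caching the *promise* rather than
// the resolved session means concurrent callers share a single in-flight
// load, so a memory-heavy native session is never created twice.
function getSession(modelPath: string): Promise<Session> {
  let pending = sessions.get(modelPath);
  if (!pending) {
    pending = loadModel(modelPath);
    sessions.set(modelPath, pending);
  }
  return pending;
}
```

Caching the promise instead of the session is what closes the race: two callers who both miss the cache before the first load resolves would otherwise each trigger a native allocation.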
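Stage-level error handling with graceful degradation can be sketched as a wrapper that converts a stage failure into a recorded warning instead of aborting the whole analysis. The stage bodies here are illustrative stubs:

```typescript
// Sketch: each pipeline stage runs inside a wrapper that captures failures,
// so downstream results (e.g. the rep count) survive an ML-stage crash.
type StageResult<T> = { ok: true; value: T } | { ok: false; error: string };

async function runStage<T>(name: string, fn: () => Promise<T>): Promise<StageResult<T>> {
  try {
    return { ok: true, value: await fn() };
  } catch (e) {
    // Record the failure; the orchestrator decides what can still be reported.
    return { ok: false, error: `${name}: ${(e as Error).message}` };
  }
}

async function analyze(angles: number[]) {
  const reps = await runStage("repCounting", async () => angles.length); // stub stage
  const form = await runStage("formAssessment", async () => {
    throw new Error("model unavailable"); // simulate an ONNX failure
  });
  // Graceful degradation: the rep count is still returned when form scoring fails.
  return {
    reps: reps.ok ? reps.value : null,
    formScore: form.ok ? form.value : null,
    warnings: [reps, form]
      .filter((r) => !r.ok)
      .map((r) => (r as { ok: false; error: string }).error),
  };
}
```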

The Hardest Problem

I migrated the entire pipeline from a Python FastAPI server to on-device TypeScript between semesters, after a privacy analysis showed that streaming pose landmarks over a network constitutes biometric data transmission. The algorithms transferred; the architecture didn't. Every component was reimplemented from scratch in a different language, runtime, and inference framework, with validated numerical equivalence to guard against training–serving skew.
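The kind of equivalence check this implies can be sketched as an element-wise tolerance comparison between the TypeScript port's outputs and reference values exported from the Python pipeline; the tolerance value is an assumption:

```typescript
// Sketch of a numerical-equivalence gate: compare the ported stage's output
// against reference outputs exported from the Python pipeline for the same
// input, within an absolute tolerance (1e-5 is illustrative).
function allClose(actual: number[], reference: number[], atol = 1e-5): boolean {
  return (
    actual.length === reference.length &&
    actual.every((x, i) => Math.abs(x - reference[i]) <= atol)
  );
}
```

Running every stage against fixtures like this is what catches training–serving skew: a model trained on Python-computed features silently misbehaves if the TypeScript features drift even slightly.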

Key Results

  • 82.4% — average form classification accuracy
  • 200 — independently recorded external test videos
  • 0 — network calls; all inference runs on-device