A machine learning system for gesture-controlled interaction with holographic interfaces. Uses MediaPipe for hand tracking and TensorFlow for real-time gesture recognition.
Advanced hand tracking and gesture recognition for next-gen interfaces.
- Real-time tracking of 21 hand landmarks with precise x, y, and z coordinates using MediaPipe (see the tracking sketch after this list).
- TensorFlow/Keras-based gesture classification with an automated training pipeline (see the classifier sketch below).
- Simultaneous tracking and recognition for multiple hands in the frame.
- Color-coded depth feedback for intuitive understanding of hand position in 3D space.
- Low-latency gesture recognition for seamless interactive experiences.
- Built-in capability to record and save gesture data for model training (demonstrated in the tracking sketch below).
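The features above map closely onto MediaPipe's hand solution, which returns 21 normalized landmarks per detected hand, with `z` giving depth relative to the wrist. The loop below is a minimal sketch of that pipeline, not the project's actual `main.py`: the webcam index, the depth-to-color scaling factor, and the `gesture_data.csv` filename are assumptions for illustration.

```python
import csv
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed: default webcam
with mp_hands.Hands(max_num_hands=2,
                    min_detection_confidence=0.7,
                    min_tracking_confidence=0.5) as hands, \
     open("gesture_data.csv", "a", newline="") as f:  # hypothetical recording file
    writer = csv.writer(f)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                mp_draw.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
                h, w, _ = frame.shape
                for lm in hand.landmark:
                    # lm.z is depth relative to the wrist; more negative = closer
                    # to the camera. The factor of 5 is an illustrative scaling
                    # choice mapping depth to a green (far) to red (near) color.
                    closeness = min(max(-lm.z * 5, 0.0), 1.0)
                    color = (0, int(255 * (1 - closeness)), int(255 * closeness))
                    cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 5, color, -1)
                # Record the flattened 21 x (x, y, z) vector for later training.
                writer.writerow([c for lm in hand.landmark for c in (lm.x, lm.y, lm.z)])
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```

Note that `lm.x` and `lm.y` are normalized to [0, 1], so they are multiplied by the frame dimensions before drawing.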
Built with industry-standard ML and computer vision tools.
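On the classification side, the project names TensorFlow/Keras but the exact architecture isn't documented here, so the following is a plausible minimal sketch: a small dense network over the flattened 63-value landmark vector. `NUM_CLASSES` and the random stand-in training data are placeholders; in practice the rows would come from the recorded CSV above.

```python
import numpy as np
import tensorflow as tf

NUM_LANDMARKS = 21   # MediaPipe hand landmarks
NUM_CLASSES = 5      # hypothetical number of gestures

# A small dense network over the flattened (x, y, z) landmark vector.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_LANDMARKS * 3,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: rows of 63 landmark coordinates, y: integer gesture labels.
# Random stand-in data shown here purely to illustrate the shapes.
X = np.random.rand(1000, NUM_LANDMARKS * 3).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=1000)
model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2)
```

A model this small keeps per-frame inference cheap, which is what makes the low-latency goal realistic.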
To get started you'll need Python 3, pip, and a webcam. Follow the steps below to get the gesture recognition system running locally.
1. Clone the repository:

   ```bash
   git clone https://github.com/Neeraj-x0/gestureControl.git
   ```

2. Navigate to the project directory:

   ```bash
   cd gestureControl
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Run the application:

   ```bash
   python main.py
   ```

This project aspires to evolve into a sophisticated system where users can intuitively control and manipulate digital objects using natural gestures, fully integrating with holographic environments to create an unparalleled interactive experience.
Next Steps: Expanding the gesture dataset, integrating with Unity 3D for holographic prototypes, and implementing multi-gesture sequence recognition.
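For the multi-gesture sequence recognition mentioned above, one possible direction (a sketch, not a committed design) is a recurrent model over a sliding window of landmark frames, so gestures are classified from motion rather than a single pose. `SEQ_LEN` and `NUM_SEQ_CLASSES` below are illustrative.

```python
import tensorflow as tf

SEQ_LEN = 30          # frames per gesture sequence (assumed window size)
FEATURES = 21 * 3     # landmark coordinates per frame
NUM_SEQ_CLASSES = 4   # hypothetical sequence vocabulary

# An LSTM consumes the window of per-frame landmark vectors and
# predicts which gesture sequence it represents.
seq_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_SEQ_CLASSES, activation="softmax"),
])
seq_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```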
Help build the next generation of gesture-controlled interfaces.