MLOps Engineer

🏢 Rayvector Technologies  •  📍 India

Job Description

About the Role

We are looking for an MLOps Engineer with 3–5 years of experience (including 0–3 years in AI/ML engineering) to help build, deploy, and maintain our machine learning solutions. This role focuses on deploying AI models to edge devices and integrating them with our C++ backend systems. You will work closely with ML, C++, and embedded engineering teams to ensure smooth and reliable production operations.

Key Responsibilities

- Deploy and optimize ML models on edge devices (ARM boards, Jetson, etc.).
- Convert and package models for C++ integration (ONNX, TensorRT, TFLite).
- Automate ML workflows including training, testing, and deployment.
- Build and maintain CI/CD pipelines for Python and C++ components.
- Monitor model performance on devices: latency, drift, accuracy, and resource usage.
- Support data pipelines for collecting field data from edge devices.
- Manage model versioning, tracking, documentation, and release processes.

Required Skills

- Strong Python skills and working knowledge of C++ integration.
- Experience deploying models on edge/embedded hardware.
- Familiarity with ONNX Runtime, TensorRT, TFLite, or similar optimization tools.
- Experience with CI/CD tools (GitHub Actions, GitLab CI, Jenkins).
- Understanding of ML workflows and the model lifecycle.
- Good communication and documentation skills.
- Experience with real-time systems or embedded environments.
- Knowledge of lightweight orchestrators.
- Experience building telemetry/monitoring for edge devices.

What You Will Achieve

- Reliable, fast, hardware-optimized ML deployments on edge devices.
- Smooth integration of ML models with C++ backend systems.
- Automated, repeatable, and stable ML operations across the full lifecycle.
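
As a rough illustration of the model-conversion work listed under Key Responsibilities, here is a minimal sketch of exporting a small PyTorch model to ONNX and verifying it with ONNX Runtime (the same runtime a C++ backend can embed). The model architecture, file name, and input shape are placeholders for illustration, not details from this posting.

```python
# Minimal sketch: export a small PyTorch model to ONNX, then verify the exported
# file with ONNX Runtime. The model, file name, and input shape are placeholders.
import numpy as np
import torch
import onnxruntime as ort

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3, padding=1),
    torch.nn.ReLU(),
)
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=17,
)

# Load the exported model and run one inference to confirm the conversion worked.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": dummy_input.numpy().astype(np.float32)})
print("ONNX Runtime output shape:", outputs[0].shape)
```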