OneForma

ML Ops Infrastructure Engineer


20h ago

$150k | DevOps | United States | Himalayas
MLOps, DevOps, Infrastructure-Engineering, AI-Infrastructure, Platform-Engineering, Senior

Job Description

About Centific

Centific is a frontier AI data foundry that curates diverse, high-quality data, using our purpose-built technology platforms to empower the Magnificent Seven and our enterprise clients with safe, scalable AI deployment. Our team includes more than 150 PhDs and data scientists, along with more than 4,000 AI practitioners and engineers. We harness the power of an integrated solution ecosystem, comprising industry-leading partnerships and 1.8 million vertical domain experts in more than 230 markets, to create contextual, multilingual, pre-trained datasets; fine-tuned, industry-specific LLMs; and RAG pipelines supported by vector databases. Our zero-distance innovation™ solutions for GenAI can reduce GenAI costs by up to 80% and bring solutions to market 50% faster.

Our mission is to bridge the gap between AI creators and industry leaders by bringing best practices in GenAI to unicorn innovators and enterprise customers. We aim to help these organizations unlock significant business value by deploying GenAI at scale, helping to ensure they stay at the forefront of technological advancement and maintain a competitive edge in their respective markets.

About the Role

Our Vision AI platform runs where the data is generated: on-premises, inside government facilities, and at the network edge, not in a hyperscaler cloud. That means the infrastructure has to be bulletproof: GPU clusters provisioned correctly, Kubernetes workloads scheduled efficiently across heterogeneous compute, storage performing at the throughput AI training and inference demand, and a network capable of handling high-bandwidth, low-latency sensor data at scale.

As our MLOps / AI Infrastructure Engineer, you will own all of it.
You will rack, configure, and operate the on-premises compute and GPU infrastructure that powers the platform; build and maintain the Kubernetes clusters that orchestrate AI workloads; design the networking fabric that ties edge nodes to core compute; and implement the MLOps pipelines that take models from development to production. You will work directly with our AI/ML engineers, the Lead Architect, and on-site client technical teams to ensure the platform runs reliably in environments that are often air-gapped, physically secured, and subject to strict government compliance requirements.

Key Responsibilities

GPU Compute & Hardware Infrastructure

- Deploy, configure, and maintain on-premises GPU servers (primarily NVIDIA H200 and A100 nodes), including driver management, CUDA toolkit versioning, NVLink/NVSwitch topology, and firmware updates.
- Implement and tune NVIDIA-specific tooling: DCGM (Data Center GPU Manager) for health monitoring and telemetry, MIG (Multi-Instance GPU) partitioning for multi-tenant workloads, and the NVIDIA Container Toolkit for GPU-aware containerization.
- Manage bare-metal provisioning workflows (iPXE, PXE, or tools such as MAAS/Foreman) to enable repeatable, auditable server builds at client sites.
- Monitor hardware health, capacity utilization, and thermal/power envelopes; define alerting thresholds and respond to hardware failures with minimal service disruption.

Kubernetes & Container Orchestration

- Build, upgrade, and maintain production-grade Kubernetes clusters (kubeadm or Rancher RKE2) on bare-metal infrastructure, with GPU node pools configured via the NVIDIA GPU Operator.
- Design and operate cluster networking using CNI plugins appropriate for high-throughput AI workloads: Calico, Cilium, or SR-IOV for RDMA-capable networking where required.
- Configure and manage MetalLB or equivalent bare-metal load balancing, ingress controllers, and service mesh components (Istio or Linkerd) for secure intra-cluster communication.
- Implement resource quotas, LimitRanges, PriorityClasses, and node affinity/taints to ensure AI training jobs, inference services, and platform workloads coexist without resource contention.
- Maintain cluster security posture: RBAC policies, Pod Security Admission, network policies, secrets management (HashiCorp Vault or Sealed Secrets), and CIS Kubernetes Benchmark compliance.

MLOps Pipelines & AI Workload Management

- Deploy and operate MLOps platforms (MLflow, Kubeflow, or equivalent) for experiment tracking, model versioning, and pipeline orchestration across training and inference workloads.
- Configure and manage NVIDIA Triton Inference Server for multi-model serving, dynamic batching, and model ensemble execution on GPU nodes.
- Build CI/CD pipelines for model deployment (GitOps with Argo CD or Flux), including automated model validation, canary rollouts, and rollback mechanisms.
- Optimize GPU utilization for both batch training jobs (Volcano or Kueue scheduler) and latency-sensitive inference services, tracking efficiency metrics via DCGM and Prometheus.
- Manage model artifact storage and versioning using software-defined storage backends (Ceph RBD/CephFS or MinIO) integrated with the MLOps toolchain.

Networking & Storage Architecture

- Design and implement the high-bandwidth network fa
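To give a concrete flavor of the workload-isolation responsibilities described above (GPU resource requests, taints/tolerations, node selection, priority classes), here is a minimal Kubernetes Pod sketch. All names, labels, taint keys, and image tags are illustrative assumptions, not part of the actual platform:

```yaml
# Hypothetical inference Pod; names, labels, and taint keys are illustrative only.
apiVersion: v1
kind: Pod
metadata:
  name: triton-inference        # hypothetical name
  namespace: inference          # hypothetical namespace
spec:
  priorityClassName: inference-high   # assumes this PriorityClass has been created
  tolerations:
    - key: nvidia.com/gpu       # matches a taint applied to dedicated GPU nodes
      operator: Exists
      effect: NoSchedule
  nodeSelector:
    nvidia.com/gpu.present: "true"    # label typically set by the GPU Operator's feature discovery
  containers:
    - name: triton
      image: nvcr.io/nvidia/tritonserver:24.08-py3   # example tag
      resources:
        limits:
          nvidia.com/gpu: 1     # extended resource exposed by the NVIDIA device plugin
```

Tainting GPU nodes and requiring an explicit toleration is a common way to keep CPU-only platform services from landing on (and starving) scarce GPU capacity.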
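The GitOps model-deployment responsibility above could be realized with an Argo CD Application resource along these lines; the repository URL, paths, and names below are placeholders, not details from the posting:

```yaml
# Hypothetical Argo CD Application; repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: model-serving           # hypothetical name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deploy.git   # placeholder repository
    targetRevision: main
    path: inference/triton      # directory holding the serving manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: inference
  syncPolicy:
    automated:
      prune: true
      selfHeal: true            # reverts manual drift, keeping the cluster auditable against Git
```

With `automated` sync, rollback becomes a Git revert, which fits the air-gapped, compliance-heavy environments the role describes.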