Senior Support Engineer
LLM.co is seeking a Senior Support Engineer to provide advanced technical support for enterprise AI deployments, including on-premise and hybrid large language model (LLM) systems. This role involves troubleshooting complex infrastructure, collaborating with engineering teams, and ensuring a seamless customer experience across regulated industries.
About LLM.co
LLM.co builds private, secure LLM solutions for enterprises in highly regulated industries, including law, healthcare, finance, and government. We provide organizations with compliant, on-premise or hybrid AI deployments tailored to their unique workflows and data needs.
Position Overview
We're looking for a highly skilled Senior Support Engineer to lead our customer success efforts through technical troubleshooting, hands-on solution delivery, and close collaboration with our AI, DevOps, and product teams. You'll be the go-to expert for ensuring clients can deploy, use, and scale our platform reliably across cloud, on-premise, and hybrid environments.
This role blends Tier 2/3 support, technical guidance, and direct client interaction. You’ll help enterprise customers troubleshoot LLM deployment issues, optimize model performance, resolve system configuration problems, and provide white-glove technical support throughout the customer lifecycle.
Key Responsibilities
- Provide advanced technical support for LLM deployments, integrations, and configurations
- Diagnose, troubleshoot, and resolve complex customer issues across networking, infrastructure, and AI model usage
- Serve as an escalation point for support engineers and work directly with enterprise customers
- Collaborate with engineering to relay bugs, feature requests, and deployment challenges
- Write and maintain internal knowledge bases and external customer-facing documentation
- Build diagnostic tools, scripts, and logging utilities to improve support workflows and resolution times
- Participate in client onboarding, system validation, and incident response
- Champion customer success by ensuring timely resolution and clear communication at every stage
Required Qualifications
- 5+ years in a technical support, DevOps, or engineering role with customer-facing experience
- Deep troubleshooting expertise across Linux environments, containerized systems (Docker/Kubernetes), and cloud infrastructure (AWS, GCP, or Azure)
- Strong scripting skills (Python, Bash, or similar)
- Familiarity with AI/ML system architecture, APIs, inference tuning, or vector databases
- Experience working with enterprise clients and managing complex environments
- Strong verbal and written communication skills, including technical documentation
- Ability to balance multiple tickets and projects in a fast-paced setting
Preferred Qualifications
- Experience supporting or deploying LLMs, RAG pipelines, or generative AI systems
- Familiarity with Hugging Face, LangChain, LlamaIndex, or similar frameworks
- Exposure to air-gapped, on-premise deployments or highly regulated environments
- Past experience in legaltech, fintech, or healthtech
- Background in SRE, DevSecOps, or platform engineering
What We Offer
- Competitive salary and equity
- Fully remote flexibility
- A collaborative team working at the intersection of AI and enterprise infrastructure
- Direct access to cutting-edge AI deployments and GPU-powered systems
- Opportunity to help shape the support function in a high-growth, high-impact space
Join us in powering the next generation of secure AI.