
Machine Learning Operations (MLOps) Architect - GCP - (US Remote)
Rackspace
Posted 6/5/2025

Job Summary
We are seeking a seasoned Machine Learning Operations (MLOps) Architect to build and optimize ML inference platforms on Google Cloud Platform (GCP). The ideal candidate will have expertise in machine learning engineering and infrastructure, with experience building and scaling ML inference systems. They will collaborate with cross-functional teams to translate business objectives into robust engineering solutions and provide technical leadership to foster a high-performing engineering team. This remote position requires strong communication skills, independence, and innovative problem-solving.

The successful candidate will have hands-on experience with leading deep learning frameworks such as TensorFlow, Keras, or Spark MLlib, and a solid foundation in machine learning algorithms, natural language processing, and statistical modeling. They will also be familiar with public cloud services, particularly GCP and Vertex AI, and have expertise in applying model optimization techniques to production environments. The role offers flexible remote work options, a $4,000/year travel stipend, and equity in a fast-growing company. We value curiosity, ownership, and a drive to improve, and we will shape the role around your strengths.
Job Description
What you will be doing:
- Architect and optimize our existing data infrastructure to support cutting-edge machine learning and deep learning models.
- Collaborate closely with cross-functional teams to translate business objectives into robust engineering solutions.
- Own the end-to-end development and operation of high-performance, cost-effective inference systems for a diverse range of models, including state-of-the-art LLMs.
- Provide technical leadership and mentorship to foster a high-performing engineering team.
Requirements:
- Proven track record in designing and implementing cost-effective and scalable ML inference systems.
- Hands-on experience with leading deep learning frameworks such as TensorFlow, Keras, or Spark MLlib.
- Solid foundation in machine learning algorithms, natural language processing, and statistical modeling.
- Strong grasp of fundamental computer science concepts including algorithms, distributed systems, data structures, and database management.
- Ability to tackle complex challenges and devise effective solutions, using critical thinking to approach problems from multiple angles and propose innovative solutions.
- Experience working effectively in a remote setting, with strong written and verbal communication skills; able to collaborate with team members and stakeholders to ensure a clear understanding of technical requirements and project goals.
- Proven experience with the Apache Hadoop ecosystem (Oozie, Pig, Hive, MapReduce).
- Expertise in public cloud services, particularly in GCP and Vertex AI.
Must have:
- Proven expertise in applying model optimization techniques (distillation, quantization, hardware acceleration) to production environments.
- Proficiency and recent experience in Java.
- In-depth understanding of LLM architectures, parameter scaling, and deployment trade-offs.
- Technical degree: a Bachelor's degree in Computer Science with at least 10 years of relevant industry experience, or
- a Master's degree in Computer Science with at least 8 years of relevant industry experience.
- A specialization in Machine Learning is preferred.
Travel
- Travel as needed per business requirements
Sponsorship
- This role is not eligible for sponsorship.
- Candidates must be legally able to work in the US for any employer.