
DevOps/Platform Engineer II

Pacific Northwest National Laboratory · Seattle, WA (via LinkedIn)
Posted March 27, 2026
Requirements
  Infrastructure & Automation Fundamentals
  • Working proficiency in Python with foundational knowledge of at least one additional language (Bash, Go, C#, JavaScript/TypeScript) for scripting and automation tasks
  • Understanding of Infrastructure as Code principles with exposure to tools like Terraform, CloudFormation, or Ansible and ability to write basic infrastructure configurations
  • Familiarity with version control workflows (Git) including branching, commits, pull requests, and collaborative development practices with willingness to learn CI/CD pipeline concepts and contribute to build automation
  • Eagerness to learn and apply AI-assisted tools (e.g., GitHub Copilot, Claude, ChatGPT) to accelerate learning, generate infrastructure code, troubleshoot issues, and improve automation script quality
  MLOps & Machine Learning Infrastructure
  • Foundational knowledge of machine learning concepts including model training, evaluation, and deployment with exposure to frameworks (PyTorch, TensorFlow, scikit-learn)
  • Basic understanding of the ML lifecycle and MLOps principles including experiment tracking, model versioning, and monitoring with willingness to learn tools like MLflow, Weights & Biases, or Kubeflow
  • Exposure to or willingness to learn about ML model serving, inference APIs, and supporting infrastructure for training and deployment pipelines
  • Interest in supporting LLM applications, agent-based frameworks, and ML workloads on cloud platforms or Kubernetes with eagerness to grow expertise through hands-on projects
  Cloud Platforms & Container Technologies
  • Basic knowledge of cloud computing principles and familiarity with services within AWS, Azure, or GCP (compute, storage, networking, IAM)
  • Exposure to containerization with Docker and foundational understanding of container orchestration concepts (Kubernetes) with willingness to learn pod management, deployments, and services
  • Understanding of basic networking concepts including DNS, load balancing, and firewalls with awareness of RESTful API principles and microservice architecture patterns
  • Familiarity with monitoring and logging tools (CloudWatch, Prometheus, Grafana, ELK Stack) and willingness to learn observability practices
  Data Infrastructure & Pipeline Support
  • Awareness of cloud-native data pipeline concepts and ETL/ELT principles with exposure to services like AWS S3, Lambda, Glue, or equivalent Azure/GCP services
  • Basic knowledge of cloud-based data storage systems (S3, PostgreSQL, MongoDB) and understanding of differences between relational and NoSQL databases
  • Foundational understanding of distributed computing and streaming concepts with exposure to frameworks like Spark, Kafka, or Ray through coursework or personal projects
  • Knowledge of common data formats (JSON, CSV, Parquet, Avro) with basic understanding of schema design, data validation, and data quality considerations
  Collaboration & Professional Growth
  • Ability to collaborate effectively within DevOps, platform engineering, and cross-functional teams while actively seeking mentorship and learning opportunities
  • Developing communication skills to document infrastructure configurations, write clear runbooks, and articulate technical challenges through team discussions and written documentation
  • Enthusiastic participation in code reviews and infrastructure design discussions with openness to constructive feedback and eagerness to learn best practices
  • Demonstrated ability to incorporate feedback, learn from operational incidents, and continuously improve through peer collaboration, self-study, and hands-on experience
  National Interest Project Examples
  • Detecting and preventing smuggling of drugs and contraband at ports of entry [Link]
  • Applying big data solutions to national security problems [Link]
  • Applying image classification for nuclear forensics analysis [Link]
  • Developing capabilities for scalable geospatial analytics [Link]
  • MS/MA, OR BS/BA and 2 years of relevant experience
  • Exposure to infrastructure automation, deployment pipelines, or cloud platform management through coursework, personal projects, labs, or internship experience
  • Basic scripting or programming experience with Python, Bash, or similar languages demonstrated through academic projects or personal automation initiatives
  • This position requires the ability to obtain and maintain a federal security clearance.
  • In addition, applicants must be able to demonstrate non-use of illegal drugs, including marijuana, for the 12 consecutive months preceding completion of the requisite Questionnaire for National Security Positions (QNSP).
  • A security clearance cannot be granted by the Department of Energy if non-use of illegal drugs, including marijuana, for 12 months cannot be demonstrated.
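To make the scripting and data-quality bullets above concrete, here is a minimal sketch of the kind of automation work the posting describes: a Python snippet (standard library only; the schema and records are invented for illustration) that loads records from JSON and CSV and validates them against a simple schema.

```python
import csv
import io
import json

# Hypothetical schema: each record must carry these fields with these types.
SCHEMA = {"id": int, "name": str, "score": float}

def validate(record):
    """Return a list of schema violations for one record (empty = valid)."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"{field}: expected {expected.__name__}")
    return errors

def load_json(text):
    """JSON already carries types; parse and return the records as-is."""
    return json.loads(text)

def load_csv(text):
    """CSV values arrive as strings, so coerce each to the schema's type."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        coerced = {}
        for field, expected in SCHEMA.items():
            try:
                coerced[field] = expected(row[field])
            except (KeyError, ValueError):
                coerced[field] = row.get(field)  # leave invalid value for validate() to flag
        rows.append(coerced)
    return rows

json_records = load_json('[{"id": 1, "name": "a", "score": 0.5}]')
csv_records = load_csv("id,name,score\n2,b,0.75\n")
all_ok = all(not validate(r) for r in json_records + csv_records)
```

The same validate-after-coerce pattern extends naturally to formats like Parquet or Avro via their respective libraries; only the loaders change.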
Preferred Skills
  • Degree in computer science, software engineering, or related technical field
  • Experience with containerization (Docker) through personal projects, coursework, or labs with interest in learning Kubernetes
  • Strong problem-solving abilities demonstrated through technical challenges, troubleshooting exercises, or course projects
  • Active engagement in learning cloud technologies, automation, MLOps, or modern infrastructure practices (e.g., coursework, certifications, or technical projects)
  • Demonstrated commitment to professional growth in platform or DevOps engineering through mentorship, training, or technical skill development
  • Participation in relevant communities, online courses (Coursera, Udemy, A Cloud Guru), or technical forums demonstrating commitment to continuous learning
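For candidates wondering what "basic scripting or programming experience … through personal automation initiatives" can look like in practice, here is a small, hypothetical sketch: a retry-with-exponential-backoff helper in Python, a pattern that recurs in deployment pipelines and cloud automation (the "deploy" function and its failure behavior are invented for illustration).

```python
import functools
import time

def retry(attempts=3, base_delay=0.01):
    """Retry a flaky operation with exponential backoff between attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts - 1:
                        raise  # out of attempts: surface the failure
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

calls = {"n": 0}

@retry(attempts=3)
def flaky_deploy():
    # Hypothetical operation that fails twice before succeeding.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "deployed"

result = flaky_deploy()
```

In real infrastructure code the bare `except Exception` would usually be narrowed to the transient error types of the API being called (e.g., timeouts or throttling responses).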
Education
  • PhD; OR MS/MA; OR BS/BA and 2 years of relevant experience
  • Degree in computer science, software engineering, or related technical field (preferred, not required)