Mid-Senior level
Posted April 3, 2026
Outstanding contract opportunity! A well-known Financial Services Company is looking for a Cloud Platform Engineer in Charlotte, NC or Iselin, NJ.
Work with the brightest minds at one of the largest financial institutions in the world. This is a long-term contract opportunity that includes a competitive benefits package! Our client has been around for over 150 years and is continuously innovating in today's digital age. If you want to work for a company that is not only a household name, but also truly cares about satisfying customers' financial needs and helping people succeed financially, apply today.
Contract Duration: 24 Months
Primary Role: Build and maintain secure, scalable infrastructure and services.
What You Will Be Doing
- Support a highly available and scalable infrastructure comprising object storage, OpenShift, Spark, Iceberg, YuniKorn, and Trino; monitor for configuration drift and enforce infrastructure policies.
- Configure and monitor Big Data ecosystem components alongside various BI and observability tools.
- Build automated regression and performance test suites that health-check all components of the platform.
- Monitor system health and enforce runtime policies.
- Implement and manage security protocols, including OAuth authentication, TLS encryption, and role-based access control (RBAC).
- Conduct regular maintenance, including cluster scaling, and perform regular security audits.
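To illustrate the kind of automated health check the responsibilities above describe, here is a minimal Python sketch that probes a set of service endpoints and reports their status. The service names, URLs, and timeout are hypothetical placeholders, not details from the posting.

```python
# Minimal platform health-check sketch. Endpoint URLs below are
# illustrative placeholders for services like Trino or Spark.
from urllib.request import urlopen
from urllib.error import URLError

SERVICES = {
    "trino": "http://trino.example.internal:8080/v1/info",
    "spark-history": "http://spark.example.internal:18080/api/v1/applications",
}

def check_service(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the service answers with HTTP 200 within the timeout."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError):
        return False

def run_health_checks(services: dict[str, str]) -> dict[str, bool]:
    """Probe every configured service and map each name to its health."""
    return {name: check_service(name, url) for name, url in services.items()}
```

In practice a suite like this would run on a schedule (e.g. from Airflow or a cron job) and feed failures into the platform's observability stack.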
Required Skills
Programming & Scripting
- Languages: Python, Bash, Shell, SQL, Java (basic), Scala (for big data; good to have)
- Automation & Scripting: Python scripting for automation, Linux shell scripting

Operating Systems & Containers
- System programming, performance tuning, networking
- OCP (OpenShift Container Platform), Kubernetes (K8s), Helm, Terraform, container orchestration and deployment

Big Data & Data Engineering
- Frameworks: NexusOne, Apache Spark, Hadoop, Hive, Trino, Iceberg
- ETL Tools: Apache Airflow, NiFi (good to have)
- Data Pipelines: Batch and streaming (Kafka, Flink)
- Object Storage: S3, NetApp StorageGRID
- Data Formats: Parquet, Avro, ORC, JSON, CSV

AI/ML & MTC (Model Training & Consumption) (nice to have)
- Frameworks for ML or LLM modeling
- Model Ops: MLflow, Kubeflow, SageMaker
- Data Science: Feature engineering, model deployment, inference pipelines

Security & Access Control
- Access Models: RBAC (Role-Based Access Control), ABAC (Attribute-Based Access Control)
- Data Protection: Encryption at rest and in transit, TLS/SSL, KMS (Key Management Services)
- Compliance: GDPR, HIPAA (if applicable), IAM policies

System Design & Architecture (good to have, at least at a conceptual level)
- Design Principles: Microservices, event-driven architecture, serverless
- Scalability: Load balancing, caching (Redis, Memcached), horizontal scaling
- High Availability: Failover strategies, disaster recovery, monitoring (Prometheus, Grafana)
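The RBAC model listed under Security & Access Control can be sketched in a few lines: roles map to permission sets, and an access decision resolves a user's roles against a requested permission. The role and permission names here are invented for the example.

```python
# Minimal role-based access control (RBAC) sketch.
# Role and permission names are illustrative placeholders.
ROLE_PERMISSIONS = {
    "platform-admin": {"cluster:scale", "policy:write", "audit:read"},
    "data-engineer": {"pipeline:run", "table:read", "table:write"},
    "analyst": {"table:read"},
}

def is_allowed(user_roles: list[str], permission: str) -> bool:
    """Grant access if any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)
```

Real deployments would express the same idea through platform-native policies (e.g. Kubernetes RBAC or Ranger-style policies for the data layer) rather than application code, but the role-to-permission mapping is the same.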
Posted By: Natalie DeWitt