DATA OPERATIONS ENGINEER

80.000.000 - 120.000.000


Company: Hyqoo (formerly ClikSource)
Job Title: Data Operations Engineer
Location: Remote (Colombia)
Experience Level: Entry to mid-level (3–5 years)

Introduction:
We are seeking a highly skilled and detail-oriented Data Operations Engineer to join our growing data platform team. In this role, you will play a crucial part in designing, building, and optimizing the data infrastructure that supports our next-generation products and data-driven decision-making. You will develop scalable, secure, and high-performance data pipelines while ensuring effective integration with cloud platforms, big data tools, and CI/CD automation frameworks. The ideal candidate is technically strong, has hands-on experience with AWS, big data processing, and automation, and collaborates easily with cross-functional teams.

Roles and Responsibilities:

1. Data Pipeline Building and Optimization:
- Build and maintain scalable data pipeline architecture using Spark and orchestration tools such as Airflow or AWS Step Functions (an illustrative orchestration sketch follows this list).
- Manage large, complex datasets to meet business and performance needs.
- Identify, design, and implement process improvements to automate data ingestion, transformation, and loading.
- Enhance reliability through pipeline monitoring, failure detection, and automated recovery.
- Leverage Snowflake and dbt for optimized data warehousing and transformation workflows.

2. Data Platform and Self-Service Infrastructure:
- Design and implement robust infrastructure using AWS services such as EMR, S3, Lambda, Glue, RDS, and VPC.
- Create abstractions and internal services that allow partners to work more autonomously with data.
- Implement and maintain metadata, dependency management, and lifecycle processes for datasets.
- Manage data access and security using IAM and role-based access control across cloud tools.

3. CI/CD and Infrastructure as Code:
- Build and maintain CI/CD pipelines using tools such as Jenkins, GitHub Actions, or GitLab CI.
- Automate provisioning and configuration of infrastructure using Terraform or AWS CloudFormation (an infrastructure-as-code sketch follows this list).
- Ensure seamless and repeatable deployments for data applications and infrastructure components.

4. Monitoring and Observability:
- Implement infrastructure and application monitoring using tools such as CloudWatch, Datadog, the ELK stack, or Prometheus (an alerting sketch follows this list).
- Configure dashboards and alerts to support proactive performance tuning and troubleshooting.

5. Scripting and Programming:
- Write modular, reusable scripts in Python, Bash, or Shell for data automation and orchestration (a job-automation sketch follows this list).
- Develop and maintain tools to support internal analytics and platform services.

6. (Nice to Have) Containerization and Orchestration:
- Familiarity with Docker and Kubernetes for deploying and managing containerized workloads.
- Experience with ECR/ECS or other container orchestration platforms is a plus.
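As an illustrative sketch of the orchestration work in responsibility 1, assuming an Airflow 2.x deployment; the DAG ID, task names, schedule, and extract/transform/load bodies are hypothetical placeholders rather than anything specified in this posting:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    """Placeholder: pull raw records from a source system (e.g. an S3 drop zone)."""


def transform(**context):
    """Placeholder: clean and reshape the extracted records (e.g. with Spark or dbt)."""


def load(**context):
    """Placeholder: load the transformed records into the warehouse (e.g. Snowflake)."""


# Retries with a delay give the pipeline basic automated recovery on transient failures.
default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="daily_sales_ingest",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain: extract -> transform -> load.
    extract_task >> transform_task >> load_task
```

The same dependency chain could just as well be expressed as an AWS Step Functions state machine; Airflow is used here only because it keeps the sketch in Python.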
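For the infrastructure-as-code work in responsibility 3, a minimal sketch using the AWS CDK for Python, which synthesizes a CloudFormation template (the posting names Terraform or CloudFormation; the CDK appears here only to keep the example in Python). The stack name and bucket logical ID are hypothetical:

```python
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct


class DataLakeStack(Stack):
    """Hypothetical stack that provisions a versioned, encrypted S3 bucket for raw data."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        s3.Bucket(
            self,
            "RawZoneBucket",                      # hypothetical logical ID
            versioned=True,                       # keep prior object versions
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.RETAIN,  # never delete data on stack teardown
        )


app = App()
DataLakeStack(app, "DataLakeStack")
app.synth()  # writes the CloudFormation template to cdk.out/
```

Running cdk deploy against this app would create or update the bucket through CloudFormation, which is what makes the provisioning repeatable.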
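For the alerting side of responsibility 4, a sketch using boto3 to create a CloudWatch alarm on Lambda errors; the function name, SNS topic ARN, and account ID are placeholders:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Hypothetical names: substitute the real Lambda function and SNS topic.
FUNCTION_NAME = "nightly-ingest-handler"
ALERT_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:data-platform-alerts"

# Raise an alarm whenever the function reports one or more errors in a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName=f"{FUNCTION_NAME}-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION_NAME}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[ALERT_TOPIC_ARN],  # notify the on-call channel via SNS
)
```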
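And for the scripting work in responsibility 5, a small automation sketch that starts an AWS Glue job with boto3 and waits for it to finish; the job name is hypothetical:

```python
import sys
import time

import boto3

glue = boto3.client("glue")

JOB_NAME = "transform-orders"  # hypothetical Glue job name


def run_glue_job(job_name: str) -> str:
    """Start a Glue job run and poll until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)  # poll every 30 seconds


if __name__ == "__main__":
    final_state = run_glue_job(JOB_NAME)
    print(f"{JOB_NAME} finished with state {final_state}")
    sys.exit(0 if final_state == "SUCCEEDED" else 1)  # nonzero exit lets CI/CD flag failures
```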
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field, or equivalent hands-on experience.
- 3–5 years of experience in data engineering and cloud-based development using Python and AWS services.
- Proficiency with AWS services such as EMR, S3, Lambda, Glue, RDS, IAM, and VPC.
- Strong expertise with Apache Spark and data pipeline orchestration using Airflow or AWS Step Functions.
- Experience with Snowflake for warehousing, including performance tuning and user/role management.
- Hands-on experience with dbt and cloud-native tools for data transformation and modeling.
- Strong understanding of CI/CD pipelines and IaC tools such as Terraform or CloudFormation.
- Proficiency in scripting with Python, Bash, or Shell and writing automation scripts for data workflows.
- Solid experience with monitoring, logging, and alerting frameworks for cloud infrastructure.
- (Nice to have) Exposure to containerized environments and orchestration using Docker and Kubernetes.
- Excellent communication, analytical, and problem-solving skills with a collaborative mindset.
- Strong sense of ownership, with the ability to work independently and manage multiple priorities.

Tools and Technologies:
- AWS Services: EMR, S3, Lambda, Glue, RDS, IAM, VPC
- Data Pipeline Tools: Apache Spark, Airflow, AWS Step Functions
- Data Warehousing: Snowflake
- Data Transformation: dbt
- CI/CD Tools: Jenkins, GitHub Actions, GitLab CI
- IaC Tools: Terraform, AWS CloudFormation
- Scripting Languages: Python, Bash, Shell
- (Nice to Have) Containerization: Docker, Kubernetes, ECR/ECS

This position offers the opportunity to make a significant impact on our data-driven initiatives and contribute to the success of our organization. If you are passionate about data operations and eager to work in a dynamic environment, we encourage you to apply.

Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology
Industries: IT Services and IT Consulting
