We are seeking an experienced Data Engineer to lead and deliver technical solutions for large-scale digital transformation projects.

Your Key Responsibilities
- Work closely with clients to understand complex business challenges and architect data solutions that create real business value.
- Translate business and technical requirements into system designs and implement scalable data pipelines and platforms.
- Design, develop, and deliver end-to-end data solutions, including data ingestion, transformation, storage, and access layers.
- Apply your expertise in cloud-based data architectures to build and manage modern data systems.
- Optimize and automate data platform operations and monitor system performance post-deployment.
- Conduct feasibility assessments and support estimation of timelines and resources for project planning.

Your Qualifications
- Proven experience designing and implementing data pipelines in enterprise environments using Python for data transformation.
- Proficiency with cloud-based data services such as Amazon S3, Glue, Redshift, Kinesis, EMR, Athena, and Lambda, among others.
- Experience with both structured and unstructured data, including columnar, NoSQL, and relational databases.
- Familiarity with ETL/ELT tools and frameworks such as PySpark, Databricks, and Spark Streaming.
- Solid understanding of data modeling concepts, including dimensional modeling and star/snowflake schema design.
- Comfortable working in code-driven environments and using tools for version control and continuous integration.
- Effective communication skills and the ability to collaborate with cross-functional teams.

Additional Requirements
- Experience with real-time streaming data integrations.
- Background in performance tuning and optimization of data queries and processes.
- Familiarity with DevOps practices in data engineering workflows.
- Proficiency with additional cloud platforms (Azure or GCP).

Work type: Remote.