DATA ENGINEER

40.000.000 - 80.000.000


At Neostella, we take a customer-centric approach and use cutting-edge technologies to deliver solutions that meet the unique needs of our clients' businesses. Our offerings include Neodeluxe Legal Solutions, Work-Relay process and workflow solutions for Salesforce, Robotic Process Automation, and Application Integration. To continue our growth, we are seeking a Data Engineer! By joining our team, you'll work in a fast-paced, rapidly growing startup environment.

We are looking for a Data Engineer experienced in SQL and Python, with expertise in data warehouses and data migrations. The ideal candidate will have experience with a range of ETL tools and with developing ETL processes that ensure data quality, accuracy, and consistency, as well as experience working with large datasets, optimizing SQL queries, and improving database performance. The growth potential and opportunities here are endless, and we want you to be part of our journey. Curious what your day would look like as a Data Engineer? Check out the details below!

Key Responsibilities
- Develop and maintain complex SQL scripts and queries to extract, transform, and load data into AWS-based data warehouses.
- Design, implement, and optimize data models and database schemas for efficient data storage and retrieval on AWS.
- Perform data migrations and integrations between various data sources and systems using Python.
- Use AWS services such as Amazon S3, AWS Glue, and Amazon Redshift for data storage, processing, and analytics.
- Collaborate with cross-functional teams to understand business requirements and translate them into scalable, AWS-based solutions.
- Analyze, troubleshoot, and resolve issues related to data quality, integrity, and performance.
- Develop and maintain documentation of AWS-based data architecture, data flows, and data lineage.
- Ensure compliance with data governance policies, standards, and best practices in an AWS environment.
- Continuously improve and optimize AWS-based data pipelines for performance and scalability.

Requirements
- 1+ years of experience in SQL development and data modeling for AWS data warehousing.
- Strong proficiency in Python programming, particularly in building data processing and automation scripts.
- Experience with ETL tools and AWS services such as AWS Glue, Amazon Redshift, and AWS Lambda.
- Familiarity with Agile software development methodologies.
- Excellent problem-solving skills and the ability to work independently and in a team.
- Strong communication and interpersonal skills to collaborate effectively with technical and non-technical stakeholders.

Benefits
- An indefinite-term contract and a fast-growing career path.
- Pre-paid health insurance coverage with Sura for you and one additional family member.
- Flex time and the flexibility to work from home or in the office.
- A yearly ophthalmological health bonus.
- The opportunity to improve your English by working with international teams and projects, plus fully personalized English classes, and more.
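To give a flavor of the day-to-day work described above, here is a minimal, hypothetical ETL sketch in Python: extract rows from a source table, apply data-quality rules in the transform step, and load the clean rows into a target table. The table names, columns, and rules are invented for illustration, and sqlite3 stands in for a warehouse such as Amazon Redshift; a real pipeline would use the warehouse's own driver or an orchestrated service like AWS Glue.

```python
import sqlite3

def extract(conn):
    # Pull raw order rows from the source system.
    return conn.execute(
        "SELECT order_id, amount, country FROM raw_orders"
    ).fetchall()

def transform(rows):
    # Data-quality rules: reject rows with missing or non-positive amounts,
    # and normalize country codes to stripped upper case.
    clean = []
    for order_id, amount, country in rows:
        if amount is None or amount <= 0:
            continue  # drop bad records instead of loading them
        clean.append((order_id, round(amount, 2), country.strip().upper()))
    return clean

def load(conn, rows):
    # Bulk-insert the cleaned rows into the target table.
    conn.executemany("INSERT INTO orders_clean VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(conn, transform(extract(conn)))

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (order_id INTEGER, amount REAL, country TEXT)")
    conn.execute("CREATE TABLE orders_clean (order_id INTEGER, amount REAL, country TEXT)")
    conn.executemany(
        "INSERT INTO raw_orders VALUES (?, ?, ?)",
        [(1, 19.99, " co "), (2, None, "US"), (3, -5.0, "MX")],
    )
    run_pipeline(conn)
    print(conn.execute("SELECT * FROM orders_clean").fetchall())
```

Only the first sample row survives the quality checks here; the other two are rejected for a missing and a negative amount, which is the kind of consistency guarantee the role's ETL work is expected to enforce.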

trabajosonline.net © 2017–2021