About the Role
Join our team to support the post-deployment operations of data pipelines, machine learning workflows, and product integrations. This entry-level role is ideal for individuals looking to grow their skills in data infrastructure and AI technologies.

Key Responsibilities:
- Monitor and maintain data pipelines using tools such as Airflow and DBT.
- Assist in resolving data integration issues across internal systems and external APIs.
- Support the setup and operation of experiments involving large language models (LLMs), including prompt evaluation and fine-tuning.
- Document standard procedures and recurring issues to support knowledge sharing.
- Collaborate with cross-functional teams, including data engineers, ML researchers, and product teams.
- Escalate complex technical challenges to senior team members for resolution.

Qualifications:
- BS in Computer Science, Engineering, or equivalent practical experience.
- 3+ years of experience in similar roles.
- Basic understanding of SQL and data transformation tools such as DBT.
- Exposure to data orchestration tools such as Airflow or Prefect.
- Interest in AI and LLMs; familiarity with platforms like OpenAI, Hugging Face, Ollama, or LangChain is a plus.
- Strong analytical and collaboration skills.
- Enthusiasm for continuous learning in a dynamic, research-driven environment.

Work in a Way That Works for You:
We support a healthy work/life balance and offer flexible working arrangements. Our initiatives include shared parental leave, study assistance, sabbaticals, and wellness programs to help you thrive both personally and professionally.

Working for You:
- Prepaid Medical/Dental Plan
- Gas Voucher
- Life Insurance
- Meal/Grocery Voucher