About Transform

Transform is an AI-native platform designed to manage and orchestrate change across complex enterprise systems landscapes. We give consultants and companies the ability to capture and organize knowledge, turn it into actionable artifacts and work plans, and accelerate project delivery, simplifying how organizations manage change.

We're a well-funded startup led by a successful repeat founder and backed by top investors. With an ambitious vision, an experienced North American founding team, and a growing global footprint, our Bogotá hub will be a cornerstone of our future, and we're building something extraordinary from day one.

The Role

We're hiring an AI Engineer focused on Data Ingestion and Processing to lead the buildout of our ingest layer: the foundational pipeline that connects to enterprise tools and transforms unstructured chaos into structured insight. This engineer will design and own the process that pulls in data across formats, orchestrates ETL pipelines, applies AI/LLM processing to extract and classify insights, and ultimately builds a master database of business requirements and operational knowledge.

This is an in-person role in Bogotá, starting remotely while our office is finalized. The AI Engineer will work closely with our U.S.-based engineering and AI teams and help define the data architecture of our platform from the ground up.

- Architect and build data ingestion pipelines for multi-modal sources (audio, video, text, documents)
- Integrate with APIs and SDKs from tools like Zoom, Microsoft Teams, Google Drive, SharePoint, Dropbox, OneDrive, and Notion
- Design processes to normalize, parse, classify, and structure content from unstructured formats (e.g., .docx, .pptx, .pdf, .srt, .mp3, .csv)
- Apply LLMs and other AI models to extract meeting insights, action items, business requirements, and structured metadata
- Build and manage a secure, scalable architecture for processing and storing enterprise knowledge artifacts
- Collaborate cross-functionally with product, UX, and platform engineering teams
- Define and enforce best practices in data pipeline quality, latency, and reliability

What We're Looking For

- 5+ years of experience in data engineering, ML engineering, or AI infrastructure roles
- Expertise in Python and modern data tooling (e.g., Airflow, dbt, Spark, Pandas, PyPDF, LangChain)
- Experience working with unstructured and semi-structured data (transcripts, documents, audio, video)
- Strong understanding of modern LLM pipelines (e.g., embedding extraction, chunking strategies, RAG, prompt engineering)
- Proven experience building scalable ETL/ELT pipelines from third-party APIs
- Fluent in English (you'll work directly with U.S.-based teams)
- Based in Bogotá, Colombia, and excited to work in person

Bonus Points

- Experience working with enterprise file systems (e.g., Google Workspace, M365, SharePoint)
- Familiarity with ASR (automated speech recognition), NLP, and summarization techniques
- Knowledge of vector databases and retrieval-augmented generation (RAG) workflows
- Experience building internal tools or data products for business or consulting teams
- Exposure to metadata frameworks or document classification systems

Why Join Us

- Join a high-caliber team solving some of the hardest technical challenges in enterprise AI
- Help build a global product from the ground up, starting with Bogotá
- Work on meaningful problems with deep business impact and cutting-edge technology
- Competitive compensation and benefits (via Employer of Record)
- A rare opportunity to shape infrastructure, architecture, and platform from day one