About SME Work

SME is a platform that bridges subject-matter experts with AI projects, enabling them to contribute their knowledge to improve AI models. It offers flexible opportunities to work on tasks such as data labeling, quality assurance, and domain-specific problem-solving while earning competitive pay.

About the Role

We're hiring a Code Reviewer with deep Dart expertise to review evaluations completed by data annotators assessing AI-generated Dart code responses. You will ensure that annotators follow strict quality guidelines covering instruction-following, factual correctness, and code functionality.

Responsibilities

- Review and audit annotator evaluations of AI-generated Dart code.
- Assess whether the Dart code follows the prompt instructions and is functionally correct and secure.
- Validate code snippets using a proof-of-work methodology.
- Identify inaccuracies in annotator ratings or explanations.
- Provide constructive feedback to maintain high annotation standards.
- Work within Project Atlas guidelines for evaluation integrity and consistency.

Requirements

- 5–7+ years of experience in Dart development, QA, or code review.
- Strong knowledge of Dart syntax, debugging, edge cases, and testing.
- Comfort with code execution environments and testing tools.
- Excellent written communication and documentation skills.
- Experience with structured QA or annotation workflows.
- English proficiency at B2, C1, C2, or Native level.

Beneficial Prior Experience

- Experience in AI training, LLM evaluation, or model alignment.
- Familiarity with annotation platforms.
- Exposure to RLHF (Reinforcement Learning from Human Feedback) pipelines.

Why This Opportunity?

This role offers remote flexibility, milestone-based delivery, and competitive compensation. You will join a high-impact team working at the intersection of AI and software development, directly influencing the accuracy, safety, and clarity of AI-generated code.