Prompt engineering has evolved from a curiosity into a serious engineering discipline. The AI Prompt Engineer designs, tests, iterates on and maintains the instruction systems that make LLM-powered products behave reliably — combining linguistic precision, software engineering rigour and a deep understanding of model behaviour to turn capable foundation models into consistent, trustworthy production systems.

Role & Responsibilities:

• Design system prompts and prompt architectures for production AI features: customer-facing assistants, internal knowledge tools, document processing pipelines and agentic workflows
• Apply advanced prompting techniques: chain-of-thought reasoning, few-shot examples, structured output formatting, tool use specification, context management and multi-turn conversation design
• Build and maintain prompt evaluation frameworks: defining test suites, running automated evaluations, measuring quality metrics and detecting prompt regressions
• Optimise prompts for reliability, cost and latency: reducing unnecessary tokens, improving output consistency and designing graceful fallback behaviours
• Work across multiple model providers — Anthropic Claude, OpenAI GPT-4, Google Gemini — understanding the behavioural differences and optimal prompting patterns for each
• Design prompt security measures: defending against prompt injection, jailbreaking attempts and adversarial inputs in production
• Collaborate with AI engineers on RAG prompt architecture: designing prompts that make effective use of retrieved context and cite sources appropriately
• Document prompt design decisions, version-control prompt templates and maintain prompt libraries

Required Skills & Experience:

• 2+ years of experience in a prompt engineering, AI engineering or applied AI role — not just personal experimentation
• Deep working knowledge of at least two major LLM families: Anthropic Claude, OpenAI GPT-4, Google Gemini or Llama
• Experience building evaluation frameworks for LLM outputs: RAGAS, DeepEval, LLM-as-judge patterns or custom pipelines
• Strong Python skills for prompt automation, evaluation pipelines and API integration
• Experience with production prompt challenges: context window management, hallucination reduction and output consistency
• Understanding of LLM inference parameters — context windows, temperature, top-p — and how they affect output
• Production experience with the Anthropic API, OpenAI API or Azure OpenAI

What We Offer:

• A specialist role at the frontier of applied AI engineering
• Salary of $80,000–$110,000, based on experience and location
• Remote-first, with worldwide applicants welcome
• Direct exposure to cutting-edge model capabilities

Prompt engineering here is systematic, evidence-based engineering of model behaviour — with version control, evaluation pipelines and production monitoring. If you have built prompt systems that work reliably at scale and defended them against adversarial inputs, we want to hear from you.
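To make the evaluation-framework responsibility concrete, here is a minimal sketch of the test-suite-plus-regression-check pattern the role describes. Everything in it is a hypothetical stand-in: `call_model` is a deterministic stub where a real harness would call a provider API, and the test cases and canned responses are illustrative only.

```python
"""Minimal sketch of an automated prompt-evaluation harness with
regression detection. call_model() is a stub; swap in a real
Anthropic/OpenAI API call in practice."""

SYSTEM_PROMPT = (
    "You are a support assistant. Answer concisely and cite the "
    "source document for every claim."
)

def call_model(system: str, user: str) -> str:
    # Stub standing in for a provider API call (hypothetical data).
    canned = {
        "How do I reset my password?":
            "Go to Settings > Security and click Reset. [source: help/account.md]",
        "What is the refund window?":
            "Refunds are accepted within 30 days. [source: help/billing.md]",
    }
    return canned.get(user, "I don't know.")

# Each case pairs an input with substrings the output must contain,
# e.g. a "[source:" citation marker to catch uncited answers.
TEST_SUITE = [
    {"input": "How do I reset my password?", "must_contain": ["Reset", "[source:"]},
    {"input": "What is the refund window?", "must_contain": ["30 days", "[source:"]},
]

def run_eval(system_prompt: str) -> float:
    """Return the fraction of test cases that pass all checks."""
    passed = 0
    for case in TEST_SUITE:
        output = call_model(system_prompt, case["input"])
        if all(s in output for s in case["must_contain"]):
            passed += 1
    return passed / len(TEST_SUITE)

# Pass rate recorded for the last accepted prompt version; a drop
# below it flags a prompt regression before the change ships.
BASELINE_PASS_RATE = 1.0

score = run_eval(SYSTEM_PROMPT)
assert score >= BASELINE_PASS_RATE, f"Prompt regression: pass rate {score:.0%}"
```

In a real pipeline the baseline pass rate would be stored per prompt version alongside the version-controlled template, and the assertion would run in CI on every prompt change.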
Remote · Worldwide | $80,000–$110,000