The Responsible AI Officer is an emerging executive-adjacent role that owns the ethical framework, policy and oversight processes governing how an organisation develops and deploys AI. As the EU AI Act comes into force, as AI failures attract regulatory scrutiny and as public trust in AI becomes a competitive factor, this role is transitioning from aspirational to essential. The Responsible AI Officer is not a compliance manager who checks boxes. They are a senior professional who combines genuine AI technical literacy with deep ethical reasoning, policy expertise and the stakeholder management skills to make responsible AI practices actually change how AI is built and deployed, not just what the documents say.

Role & Responsibilities:
• Own the enterprise responsible AI framework: defining principles, translating them into concrete engineering and product standards, and ensuring those standards are actually followed
• Lead EU AI Act compliance: classifying AI systems by risk tier, implementing conformity assessment processes, maintaining technical documentation and supporting notified body audits for high-risk AI systems
• Build and chair the AI ethics review board: defining the review process, determining which AI systems require ethics review, facilitating reviews and escalating to executive leadership where needed
• Design and deliver AI ethics training: building literacy across engineering, product and business teams so that responsible AI becomes a shared responsibility rather than a centralised gatekeeping function
• Define bias and fairness standards for AI systems: specifying measurement approaches, acceptable thresholds and remediation requirements, and working with data scientists and ML engineers to implement them
• Manage responsible AI stakeholder engagement: responding to civil society concerns, engaging with regulators on AI policy development and representing the organisation in industry standards bodies
• Produce responsible AI reporting: internal governance metrics, external transparency reports, regulatory submissions and investor ESG reporting on AI
• Stay ahead of the regulatory curve: monitoring EU AI Act implementing acts, UK AISI guidance, US executive orders and sector-specific AI regulations globally

Required Skills & Experience:
• 8+ years of experience in technology policy, AI governance, ethics or a closely related field
• Genuine AI technical literacy: you understand how LLMs work, what bias in ML means technically and how AI systems are built, in enough depth to engage credibly with engineering teams
• Deep knowledge of AI regulation: EU AI Act structure, risk categories, obligations and enforcement mechanisms
• Experience building and running governance processes: ethics review boards, policy development and standards-setting
• Strong stakeholder management: you can operate credibly with NGOs, regulators, media, board members and engineering teams simultaneously
• A legal or policy qualification is advantageous; a background in philosophy, social science or another discipline that informs ethical reasoning is also valued
• Experience with bias auditing, algorithmic impact assessment or AI fairness tooling is a strong advantage

What We Offer:
• Senior governance role with real authority to shape AI development practices
• Salary £85,000–£115,000 based on experience
• Remote-first, with travel for stakeholder engagement
• Direct partnership with the General Counsel, CAIO and board risk committee

The Responsible AI Officer is the role that determines whether AI ethics in an organisation is real or cosmetic. If you have the technical credibility to challenge engineering decisions, the policy expertise to navigate regulation and the influence to change how AI is actually built, this role is yours.
Remote · UK / Europe | £85,000–£115,000