AI horizon scanner

Timeline of AI playbooks, reports, regulations and product releases to help product managers working in the UK public, higher education and third sectors consider trends in AI technology, safety and regulation.

Title | Description | Link | Type | Author | Date
Announcing the Agent2Agent ProtocolA new open protocol called Agent2Agent (A2A) enabling agents to interoperate with each other, even if they were built by different vendors or in a different framework, to increase autonomy and multiply productivity gains, while lowering long-term costs.https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/Product releaseGoogleApril, 2025
Anthropic Education Report: How University Students Use ClaudeThe first large-scale study of real-world AI usage patterns in higher education, analyzing one million anonymized student conversations on Claude.ai.https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claudeReportAnthropicApril, 2025
Khan Academy’s Framework for Responsible AI in EducationResponsible AI Framework, its core principles, and how we integrate it into our product development processes to create meaningful, safe, and ethical learning experiences.https://blog.khanacademy.org/khan-academys-framework-for-responsible-ai-in-education/FrameworkKhan AcademyApril, 2025
LSE partners with Anthropic to shape the future of AI in educationLSE has announced a new partnership with leading Artificial Intelligence (AI) safety and research company Anthropic, to provide all LSE students with access to its ‘Claude’ AI technology.https://www.lse.ac.uk/News/Latest-news-from-LSE/2025/d-April/LSE-partners-with-Anthropic-to-shape-the-future-of-AI-in-educationNewsLondon School of Economics and Political ScienceApril, 2025
SoA day of action following allegations of Meta’s mass theft of authors’ workA day of action to protest against Meta’s alleged theft of copyright-protected works and to showcase the strength of feeling amongst creators, with SoA members protesting outside Meta’s London offices.https://societyofauthors.org/2025/04/01/soa-day-of-action-following-allegations-of-metas-mass-theft-of-authors-work/EventPolicy team, Society of AuthorsApril, 2025
Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AIThe question of how copyright law applies to the training of AI models has rightly attracted much debate. Data are the lifeblood of artificial intelligence, and large language models – such as ChatGPT – have mainly been trained on vast amounts of publicly available data sets containing content scraped from the internet. This has caused friction between rights holders and AI developers. Currently, United Kingdom copyright law provides insufficient clarity for creators, rights holders, developers and consumer groups, impeding innovation while failing to address creator concerns about consent and compensation.https://institute.global/insights/tech-and-digitalisation/rebooting-copyright-how-the-uk-can-be-a-global-leader-in-the-arts-and-aiReportTony Blair InstituteApril, 2025
Close encounters of the AI kindReport into the adoption of AI in personal life.https://imaginingthedigitalfuture.org/reports-and-publications/close-encounters-of-the-ai-kind/close-encounters-of-the-ai-kind-main-report/ReportImagining a digital future, Elon UniversityMarch, 2025
Introducing 4o Image GenerationUnlocking useful and valuable image generation with a natively multimodal model capable of precise, accurate, photorealistic outputs.https://openai.com/index/introducing-4o-image-generation/Product releaseOpen AIMarch, 2025
How the UK tech secretary uses ChatGPT for policy adviceFreedom of information laws were used to obtain the ChatGPT records of Peter Kyle, the UK’s technology secretary, in what is believed to be a world-first use of such legislation.https://www.newscientist.com/article/2472068-revealed-how-the-uk-tech-secretary-uses-chatgpt-for-policy-advice/NewsChris Stokel-Walker, New ScientistMarch, 2025
AI Tools – What about data protection?Data protection assessment of Gen AI providers.https://www.vischer.com/en/knowledge/blog/part-25-ai-tools-what-about-data-protection/ToolHunger & Baeriswyl, VischerMarch, 2025
Microsoft / OpenAI partnership merger inquiry closedThe CMA has decided that Microsoft’s partnership with OpenAI does not qualify for investigation under the merger provisions of the Enterprise Act 2002.https://www.gov.uk/cma-cases/microsoft-slash-openai-partnership-merger-inquiryNewsCompetition and Markets AuthorityMarch, 2025
AI Energy LeaderboardEstablishing a standardized framework for reporting AI models’ energy efficiency, thereby enhancing transparency across the field.https://huggingface.co/spaces/AIEnergyScore/LeaderboardToolHugging FaceFebruary, 2025
The Children’s Manifesto for the Future of AISets out children’s and young people’s priorities and what they want world leaders at the Paris AI Action Summit to know about children’s hopes and worries about AI.https://www.turing.ac.uk/sites/default/files/2025-02/childrens_manifesto_for_the_future_of_ai.pdfReportChildren and AI team, The Alan Turing InstituteFebruary, 2025
Artificial Intelligence Playbook for the UK GovernmentThe AI Playbook will support the public sector in better understanding what AI can and cannot do, and how to mitigate the risks it brings. It will help ensure that AI technologies are deployed in responsible and beneficial ways, safeguarding the security, wellbeing, and trust of the public we serve.https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-htmlPlaybookUK GovernmentFebruary, 2025
Harnessing AI for environmental justicePrinciples and practices to guide climate justice and digital rights campaigners in the responsible use of AI.https://policy.friendsoftheearth.uk/reports/harnessing-ai-environmental-justicePrinciplesFriends of the EarthFebruary, 2025
AI. 8 Realities, 8 Billion Reasons to Regulate.A report into why the UK needs an AI Regulation Bill.https://lordchrisholmes.com/wp-content/uploads/2025/02/AI-Regulation-Report.pdfReportLord Holmes of Richmond MBE.February, 2025
Charity AI Task ForceCAST and Zoe Amar Digital announced the launch of a new UK charity task force, set up to champion the responsible, inclusive and collaborative use of AI across the social sector, for maximum impact and collective benefit.https://www.wearecast.org.uk/our-work/how-we-work-with-funders-and-partners/charity-ai-task-force/EventCASTFebruary, 2025
Artificial intelligence action summitParis hosted numerous events aimed at strengthening international action towards artificial intelligence serving the general interest.https://www.elysee.fr/en/sommet-pour-l-action-sur-l-iaEventFrench governmentFebruary, 2025
AI Playbook for charitiesThis playbook helps charities use AI thoughtfully and effectively, drawing from decades of experience working both within charities and alongside them as consultants.https://www.aiplaybookforcharities.com/PlaybookEdd Baldry & Suzanne BegleyFebruary, 2025
Introducing GPT-4.5We’re releasing a research preview of GPT‑4.5—our largest and best model for chat yet. GPT‑4.5 is a step forward in scaling up pre-training and post-training.https://openai.com/index/introducing-gpt-4-5/Product releaseOpen AIFebruary, 2025
Biggest Market Loss In History: Nvidia Stock Sheds Nearly $600 Billion As DeepSeek Shakes AI DarlingNvidia set a dubious Wall Street record Monday, as the stock at the forefront of the U.S.-led artificial intelligence revolution got a scare from DeepSeek, the Chinese AI company which developed a ChatGPT rival at a fraction of the reported cost of its American peers.https://www.forbes.com/sites/dereksaul/2025/01/27/biggest-market-loss-in-history-nvidia-stock-sheds-nearly-600-billion-as-deepseek-shakes-ai-darling/NewsDerek Saul, ForbesJanuary, 2025
DeepSeek-R1 ReleasePerformance on par with OpenAI-o1.https://api-docs.deepseek.com/news/news250120Product releaseDeepSeekJanuary, 2025
Defra AI SDLC PlaybookThis playbook provides guidance on best practices for integrating AI into the Software Development Lifecycle (SDLC), specifically tailored to Defra’s needs and challenges. It serves as a living document, continuously updated to reflect emerging practices and lessons learned.https://defra.github.io/defra-ai-sdlc/PlaybookDEFRAJanuary, 2025
Humanity’s Last ExamHumanity’s Last Exam is a multi-modal benchmark at the frontier of human knowledge designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. The dataset consists of 2,700 challenging questions across over a hundred subjects.https://agi.safe.ai/ToolPhan, Han & AdesaraJanuary, 2025
Block Open Source Introduces “codename goose” — an Open Framework for AI AgentsBlock’s Open Source Program Office announced the launch of codename goose, an interoperable AI agent framework that enables users to connect large language models (LLMs) to real-world actions.https://block.xyz/inside/block-open-source-introduces-codename-gooseProduct releaseBlockJanuary, 2025
Friends for sale: the rise and risks of AI companionsWhat are the possible long-term effects of AI companions on individuals and society?https://www.adalovelaceinstitute.org/blog/ai-companions/ReportJamie Bernardi, Ada Lovelace InstituteJanuary, 2025
A learning curve?A landscape review of AI and education in the UK.https://www.adalovelaceinstitute.org/report/a-learning-curve/ReportRenate Samson & Kruakae Pothong, Ada Lovelace InstituteJanuary, 2025
Generative AI: product safety expectationsGuidance on the safety expectations for using generative AI products and systems in educational settings.https://www.gov.uk/government/publications/generative-ai-product-safety-expectationsGuidanceDepartment for EducationJanuary, 2025
AI Opportunities Action PlanAI capabilities are developing at an extraordinary pace. If this continues, artificial intelligence (AI) could be the government’s single biggest lever to deliver its five missions, especially the goal of kickstarting broad-based economic growth.https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-planDepartment for Science, Innovation and TechnologyJanuary, 2025
OpenAI o3-miniPushing the frontier of cost-effective reasoning.https://openai.com/index/openai-o3-mini/Product releaseOpen AIJanuary, 2025
Introducing OperatorA research preview of an agent that can use its own browser to perform tasks for you.https://openai.com/index/introducing-operator/Product releaseOpen AIJanuary, 2025
AI C.A.R.E. Cards for LeadersThis is a set of open-ended questions meant to start conversations around AI from a leadership perspective.https://www.alignmentedu.com/resourcesToolAlignment Edu
Sora is hereSharing initial research progress on world simulation in Sora, a model that can create realistic videos from text.https://openai.com/index/sora-is-here/Product releaseOpen AIDecember, 2024
AI Generated Business: The Rise of AGI and the Rush to Find a Working Revenue ModelThis report set out to investigate and elucidate the business models behind the generative AI companies that are drawing hundreds of billions of dollars in investment.https://ainowinstitute.org/general/ai-generated-businessReportBrian Merchant, AI Now InstituteDecember, 2024
OpenAI o1 and new tools for developersIntroducing OpenAI o1, Realtime API improvements, a new fine-tuning method and more for developers.https://openai.com/index/o1-and-new-tools-for-developers/Product releaseOpen AIDecember, 2024
Introducing the Model Context ProtocolOpen-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.https://www.anthropic.com/news/model-context-protocolProduct releaseAnthropicNovember, 2024
Introducing ChatGPT searchGet fast, timely answers with links to relevant web sources.https://openai.com/index/introducing-chatgpt-search/Product releaseOpen AIOctober, 2024
Governing AI for Humanity: Final ReportRecommendations to advance a holistic vision for a globally networked, agile and flexible approach to governing AI for humanity.https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdfReportUnited NationsSeptember, 2024
The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and SchoolProvides additional data and support for those who are developing educational, research, and policy initiatives to better understand and represent the interests of middle and high school students and their parents or guardians at a time of great debate over the integration of artificial intelligence technologies in schools.https://www.commonsensemedia.org/sites/default/files/research/report/2024-the-dawn-of-the-ai-era_final-release-for-web.pdfReportCommon Sense MediaSeptember, 2024
GenAI academic prompt bankA dynamic resource designed to support thoughtful and ethical use of GenAI as a tool for learning.https://www.sheffield.ac.uk/study-skills/digital/generative-ai/prompt-bankToolUniversity of SheffieldSeptember, 2024
Generative AI Guidelines CanvasA canvas to help organise key pieces of information into a unified framework for AI adoption unique to your specific context.https://johnnash.notion.site/Generative-AI-Guidelines-Canvas-v-2-d174ba11a62a41f59526f9271b9db8a6ToolJohn Nash
Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence ProfileCross-sectoral profile of and companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden’s Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence.https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdfFrameworkNational Institute of Standards and Technology (NIST)July, 2024
EU AI Act launchedThe AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe.https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689RegulationEuropean UnionJune, 2024
The AI Bill ProjectThis paper sets out the background to the TUC Artificial Intelligence (Employment and Regulation) Bill (“the Bill”), the multi-stakeholder process behind the drafting, why the Bill is needed, and how it could improve the rights of working people.https://www.tuc.org.uk/research-analysis/reports/ai-bill-projectReportMary Towers, Policy officer – employment rights, TUCApril, 2024
Advances model fairness analysis summary reportInternal assessment of a machine-learning programme used to vet claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud.https://www.whatdotheyknow.com/request/ai_strategy_information/response/2748592/attach/6/Advances%20Fairness%20Analysis%20February%2024%20redacted%201.pdf?cookie_passthrough=1ReportIntegrated risk and intelligence serviceApril, 2024
Power and Governance in the Age of AIExperts Reflect on Artificial Intelligence and the Public Good.https://www.newamerica.org/planetary-politics/briefs/power-governance-ai-public-good/ReportGordon LaForge, Allison Stanger, Sarah Myers West, Bruce Schneier, Stephanie Forrest, and Nazli Choucri, New AmericaMarch, 2024
AI nationalism(s): Global industrial policy approaches to AICollection of essays exploring the nationalist narratives and emergent industrial policies being proposed by governments with differing economic and geopolitical motivations.https://ainowinstitute.org/ai-nationalismsReportAI NowMarch, 2024
BBC AI PrinciplesBBC AI Principles are at the heart of our approach to using AI responsibly and apply to all use of AI at the BBC. They underpin the BBC’s public commitments about how we will use Generative AI.https://www.bbc.co.uk/supplying/working-with-us/ai-principles/PrinciplesBritish Broadcasting CorporationFebruary, 2024
Generative AI Framework for HMGThe Generative AI Framework for HMG is guidance on using generative AI safely and securely for civil servants and people working in government organisations.https://www.gov.uk/government/publications/generative-ai-framework-for-hmgFrameworkCabinet Office, Government Digital Service and Central Digital and Data OfficeJanuary, 2024
ISO/IEC 42001:2023ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems.https://www.iso.org/standard/81230.htmlStandardInternational Organization for StandardizationDecember, 2023
New York Times sues Microsoft and OpenAI for ‘billions’US news organisation the New York Times is suing ChatGPT-owner OpenAI over claims its copyright was infringed to train the system.https://www.bbc.co.uk/news/technology-67826601NewsBBCDecember, 2023
Introducing GPTsCreate custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills.https://openai.com/index/introducing-gpts/Product releaseOpen AINovember, 2023
AI Safety Summit 2023The AI Safety Summit 2023 is a major global event that will take place on the 1 and 2 November at Bletchley Park, Buckinghamshire.https://www.gov.uk/government/topical-events/ai-safety-summit-2023EventDepartment for Science, Innovation and TechnologyNovember, 2023
Foundation models in the public sectorAn overview of foundation models and their potential use cases in central and local government in the UK. Considers their risks and opportunities in the public sector, as highlighted by public-sector leaders, researchers and civil society, and explores the principles, regulations and practices, such as impact assessments, monitoring and public involvement, necessary to deploy foundation models in the public sector safely, ethically and equitably.https://www.adalovelaceinstitute.org/wp-content/uploads/2023/10/Foundation-models-in-the-public-sector-Oct-2023.pdfReportAda Lovelace InstituteOctober, 2023
Hiroshima Process International Guiding Principles for Advanced AI systemThe International Guiding Principles for Organizations Developing Advanced AI Systems aim to promote safe, secure, and trustworthy AI worldwide and will provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems.https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-systemPrinciplesDigital strategy, European UnionOctober, 2023
Emerging processes for frontier AI safetyOverview of emerging frontier AI safety processes and associated practices.https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safetyReportDepartment for Science, Innovation and TechnologyOctober, 2023
How do people feel about AI?A nationally representative survey of public attitudes to artificial intelligence in Britain.https://www.adalovelaceinstitute.org/report/public-attitudes-ai/ReportRoshni Modhvadia, Ada Lovelace InstituteJune, 2023
AI regulation: a pro-innovation approachWhite paper details the UK government’s plans for implementing a pro-innovation approach to AI regulation.https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approachRegulationDepartment for Science, Innovation and Technology and Office for Artificial IntelligenceMarch, 2023
Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including PromptsThis paper provides guidance for using AI to quickly and easily implement evidence-based teaching strategies that instructors can integrate into their teaching. We discuss five teaching strategies that have proven value but are hard to implement in practice due to time and effort constraints.https://dx.doi.org/10.2139/ssrn.4391243Academic paperMollick & Mollick, Wharton SchoolMarch, 2023
Introducing Khanmigo!AI education tool for teachers, learners and parents.https://blog.khanacademy.org/harnessing-ai-so-that-all-students-benefit-a-nonprofit-approach-for-equal-access/Product releaseKhan AcademyMarch, 2023
Artificial Intelligence (AI) and Digital Healthcare Technologies Capability frameworkA clear need for our healthcare workforce is to continually adapt to meet the needs of the society it serves. Health Education England (HEE) commissioned the University of Manchester to perform a learning needs analysis and develop a framework outlining the skills and capabilities to ensure our health and care professionals can work in a digitally enhanced environment.https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/ai-and-digital-healthcare-technologiesFrameworkNHS EnglandFebruary, 2023
Looking before we leapExpanding ethical review processes for AI and data science research.https://www.adalovelaceinstitute.org/report/looking-before-we-leap/ReportAda Lovelace Institute, the University of Exeter’s Institute for Data Science and Artificial Intelligence, and the Alan Turing InstituteDecember, 2022
Introducing ChatGPTChatGPT interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.https://openai.com/index/chatgpt/Product releaseOpen AINovember, 2022
Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultationThe Government sought evidence and views on a range of options on how AI should be dealt with in the patent and copyright systems. This response sets out the conclusions of that process.https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents/outcome/artificial-intelligence-and-intellectual-property-copyright-and-patents-government-response-to-consultationReportIntellectual Property OfficeJune, 2022
The Scottish AI PlaybookScotland’s AI Support and Resource Hub for Business.https://www.scottishaiplaybook.com/PlaybookScottish AI AllianceMarch, 2022
Algorithmic impact assessment: a case study in healthcareThis report sets out the first-known detailed proposal for the use of an algorithmic impact assessment for data access in a healthcare context.https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ReportLara Groves, Ada Lovelace InstituteFebruary, 2022
Awful AIA curated list to track current scary usages of AI – hoping to raise awareness to its misuses in society.https://github.com/daviddao/awful-ai/tree/v1.0.0ToolDavid Dao, et alJanuary, 2022
Technical methods for regulatory inspection of algorithmic systemsA survey of auditing methods for use in regulatory inspections of online harms in social media platforms.https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/ReportAda Lovelace InstituteDecember, 2021
Recommendation on the Ethics of Artificial IntelligenceThe first-ever global standard on AI ethics is applicable to all 194 member states of UNESCO.https://www.unesco.org/en/artificial-intelligence/recommendation-ethicsGuidanceUNESCONovember, 2021
National AI StrategyThe National AI Strategy builds on the UK’s strengths but also represents the start of a step-change for AI in the UK, recognising the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors.https://www.gov.uk/government/publications/national-ai-strategyStrategyDepartment for Science, Innovation and Technology, Office for Artificial Intelligence, Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial StrategySeptember, 2021
Algorithmic accountability for the public sectorLearning from the first wave of policy implementation.https://www.adalovelaceinstitute.org/report/algorithmic-accountability-public-sector/ReportAda Lovelace Institute, AI Now Institute, and Open Government PartnershipsAugust, 2021
Scotland’s AI strategyThe Strategy marks a new chapter in Scotland’s relationship with artificial intelligence. It is the result of an extensive consultation and engagement programme involving academia, industry, the public sector and the people of Scotland who were generous with their time, contributing ideas, insights and opinion on AI.https://www.scotlandaistrategy.com/the-strategyStrategyScottish AI AllianceMarch, 2021
The Ethical Framework for AI in EducationThe Ethical Framework for AI in Education is grounded in a shared vision of ethical AI in education and will help to enable all learners to benefit optimally from AI in education, whilst also being protected against the risks this technology presents. The Framework is aimed at those making procurement and application decisions relevant to AI in education.https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdfFrameworkInstitute for Ethical AI in EducationMarch, 2021
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?This paper considers the possible risks associated with LLM technology and what paths are available for mitigating those risks, and provides recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, and carrying out pre-development exercises.https://dl.acm.org/doi/pdf/10.1145/3442188.3445922Academic paperEmily Bender, Timnit Gebru, Angelina McMillan-Major & Shmargaret ShmitchellMarch, 2021
Inspecting algorithms in social media platformsJoint briefing with Reset, giving insights and recommendations towards a practical route forward for regulatory inspection of algorithms.https://www.adalovelaceinstitute.org/report/inspecting-algorithms-in-social-media-platforms/ReportAda Lovelace InstituteNovember, 2020
Transparency mechanisms for UK public-sector algorithmic decision-making systemsExisting UK mechanisms for transparency and their relation to the implementation of algorithmic decision-making systems.https://www.adalovelaceinstitute.org/report/transparency-mechanisms-for-uk-public-sector-algorithmic-decision-making-systems/ReportAda Lovelace InstituteOctober, 2020
Examining the Black BoxIdentifying common language for algorithm audits and impact assessments.https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/ReportAda Lovelace InstituteApril, 2020
Planning and preparing for artificial intelligence implementationGuidance to help you plan and prepare for implementing artificial intelligence (AI).https://www.gov.uk/guidance/planning-and-preparing-for-artificial-intelligence-implementationGuidanceDepartment for Science, Innovation and Technology, Office for Artificial Intelligence and Centre for Data Ethics and InnovationJune, 2019
Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector.The most comprehensive guidance on the topic of AI ethics and safety in the public sector to date. It identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. The guide stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems.https://www.turing.ac.uk/sites/default/files/2019-08/understanding_artificial_intelligence_ethics_and_safety.pdfGuidanceProfessor David Leslie, Alan Turing InstituteJune, 2019
OECD AI PrinciplesPromote use of AI that is innovative and trustworthy and that respects human rights and democratic values.https://oecd.ai/en/ai-principlesPrinciplesOrganisation for Economic Co-operation and DevelopmentMay, 2019
The AI PlaybookThe step-by-step guide to taking advantage of AI in your business.https://www.ai-playbook.com/introductionPlaybookMMC Ventures, in partnership with BarclaysJanuary, 2019
Google’s AI PrinciplesApproach to developing and harnessing the potential of AI is grounded in our founding mission — to organize the world’s information and make it universally accessible and useful — and it is shaped by our commitment to improve the lives of as many people as possible.https://ai.google/responsibility/principles/PrinciplesGoogle AIJune, 2018
Introducing OpenAIOpenAI launches as a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole.https://openai.com/index/introducing-openai/EventOpen AIDecember, 2015

If you think of anything I should add, let me know.

Roadmap