AI horizon scanner
Timeline of AI playbooks, reports, regulations and product releases to help product managers working in the UK public, higher education and third sectors consider trends in AI technology, safety and regulation.
Title | Description | Link | Type | Author | Date |
---|---|---|---|---|---|
Announcing the Agent2Agent Protocol | A new open protocol called Agent2Agent (A2A) enabling agents to interoperate with each other, even if they were built by different vendors or in different frameworks, to increase autonomy and multiply productivity gains while lowering long-term costs (a minimal discovery sketch follows the table). | https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/ | Product release | Google | April, 2025 |
Anthropic Education Report: How University Students Use Claude | The first large-scale study of real-world AI usage patterns in higher education, analyzing one million anonymized student conversations on Claude.ai. | https://www.anthropic.com/news/anthropic-education-report-how-university-students-use-claude | Report | Anthropic | April, 2025 |
Khan Academy’s Framework for Responsible AI in Education | Introduces Khan Academy’s Responsible AI Framework, its core principles, and how it is integrated into product development processes to create meaningful, safe, and ethical learning experiences. | https://blog.khanacademy.org/khan-academys-framework-for-responsible-ai-in-education/ | Framework | Khan Academy | April, 2025 |
LSE partners with Anthropic to shape the future of AI in education | LSE has announced a new partnership with leading Artificial Intelligence (AI) safety and research company Anthropic, to provide all LSE students with access to its ‘Claude’ AI technology. | https://www.lse.ac.uk/News/Latest-news-from-LSE/2025/d-April/LSE-partners-with-Anthropic-to-shape-the-future-of-AI-in-education | News | London School of Economics and Political Science | April, 2025 |
SoA day of action following allegations of Meta’s mass theft of authors’ work | A day of action to protest against Meta’s alleged theft of copyright-protected works and to showcase the strength of feeling amongst creators, with SoA members protesting outside Meta’s London offices. | https://societyofauthors.org/2025/04/01/soa-day-of-action-following-allegations-of-metas-mass-theft-of-authors-work/ | Event | Policy team, Society of Authors | April, 2025 |
Rebooting Copyright: How the UK Can Be a Global Leader in the Arts and AI | The question of how copyright law applies to the training of AI models has rightly attracted much debate. Data are the lifeblood of artificial intelligence, and large language models – such as ChatGPT – have mainly been trained on vast amounts of publicly available data sets containing content scraped from the internet. This has caused friction between rights holders and AI developers. Currently, United Kingdom copyright law provides insufficient clarity for creators, rights holders, developers and consumer groups, impeding innovation while failing to address creator concerns about consent and compensation. | https://institute.global/insights/tech-and-digitalisation/rebooting-copyright-how-the-uk-can-be-a-global-leader-in-the-arts-and-ai | Report | Tony Blair Institute | April, 2025 |
Close encounters of the AI kind | Report into the adoption of AI in personal life. | https://imaginingthedigitalfuture.org/reports-and-publications/close-encounters-of-the-ai-kind/close-encounters-of-the-ai-kind-main-report/ | Report | Imagining the Digital Future Center, Elon University | March, 2025 |
Introducing 4o Image Generation | Unlocking useful and valuable image generation with a natively multimodal model capable of precise, accurate, photorealistic outputs. | https://openai.com/index/introducing-4o-image-generation/ | Product release | OpenAI | March, 2025 |
How the UK tech secretary uses ChatGPT for policy advice | Freedom of information laws were used to obtain the ChatGPT records of Peter Kyle, the UK’s technology secretary, in what is believed to be a world-first use of such legislation | https://www.newscientist.com/article/2472068-revealed-how-the-uk-tech-secretary-uses-chatgpt-for-policy-advice/ | News | Chris Stokel-Walker, New Scientist | March, 2025 |
AI Tools – What about data protection? | A data protection assessment of generative AI providers. | https://www.vischer.com/en/knowledge/blog/part-25-ai-tools-what-about-data-protection/ | Tool | Hunger & Baeriswyl, Vischer | March, 2025 |
Microsoft / OpenAI partnership merger inquiry closed | The CMA has decided that Microsoft’s partnership with OpenAI does not qualify for investigation under the merger provisions of the Enterprise Act 2002. | https://www.gov.uk/cma-cases/microsoft-slash-openai-partnership-merger-inquiry | News | Competition and Markets Authority | March, 2025 |
AI Energy Leaderboard | Establishing a standardized framework for reporting AI models’ energy efficiency, thereby enhancing transparency across the field. | https://huggingface.co/spaces/AIEnergyScore/Leaderboard | Tool | Hugging Face | February, 2025 |
The Children’s Manifesto for the Future of AI | Sets out children’s and young people’s priorities and what they want world leaders at the Paris AI Action Summit to know about children’s hopes and worries about AI. | https://www.turing.ac.uk/sites/default/files/2025-02/childrens_manifesto_for_the_future_of_ai.pdf | Report | Children and AI team, The Alan Turing Institute | February, 2025 |
Artificial Intelligence Playbook for the UK Government | The AI Playbook will support the public sector in better understanding what AI can and cannot do, and how to mitigate the risks it brings. It will help ensure that AI technologies are deployed in responsible and beneficial ways, safeguarding the security, wellbeing, and trust of the public we serve. | https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-html | Playbook | UK Government | February, 2025 |
Harnessing AI for environmental justice | Principles and practices to guide climate justice and digital rights campaigners in the responsible use of AI. | https://policy.friendsoftheearth.uk/reports/harnessing-ai-environmental-justice | Principles | Friends of the Earth | February, 2025 |
AI. 8 Realities, 8 Billion Reasons to Regulate. | A report into why the UK needs an AI Regulation Bill. | https://lordchrisholmes.com/wp-content/uploads/2025/02/AI-Regulation-Report.pdf | Report | Lord Holmes of Richmond MBE. | February, 2025 |
Charity AI Task Force | CAST and Zoe Amar Digital announced the launch of a new UK charity task force, set up to champion the responsible, inclusive and collaborative use of AI across the social sector, for maximum impact and collective benefit. | https://www.wearecast.org.uk/our-work/how-we-work-with-funders-and-partners/charity-ai-task-force/ | Event | CAST | February, 2025 |
Artificial Intelligence Action Summit | Paris hosted numerous events aimed at strengthening international action towards artificial intelligence serving the general interest. | https://www.elysee.fr/en/sommet-pour-l-action-sur-l-ia | Event | French government | February, 2025 |
AI Playbook for charities | This playbook helps charities use AI thoughtfully and effectively, drawing on decades of experience working both within charities and alongside them as consultants. | https://www.aiplaybookforcharities.com/ | Playbook | Edd Baldry & Suzanne Begley | February, 2025 |
Introducing GPT-4.5 | We’re releasing a research preview of GPT‑4.5—our largest and best model for chat yet. GPT‑4.5 is a step forward in scaling up pre-training and post-training. | https://openai.com/index/introducing-gpt-4-5/ | Product release | OpenAI | February, 2025 |
Biggest Market Loss In History: Nvidia Stock Sheds Nearly $600 Billion As DeepSeek Shakes AI Darling | Nvidia set a dubious Wall Street record Monday, as the stock at the forefront of the U.S.-led artificial intelligence revolution got a scare from DeepSeek, the Chinese AI company which developed a ChatGPT rival at a fraction of the reported cost of its American peers. | https://www.forbes.com/sites/dereksaul/2025/01/27/biggest-market-loss-in-history-nvidia-stock-sheds-nearly-600-billion-as-deepseek-shakes-ai-darling/ | News | Derek Saul, Forbes | January, 2025 |
DeepSeek-R1 Release | Performance on par with OpenAI-o1 | https://api-docs.deepseek.com/news/news250120 | Product release | DeepSeek | January, 2025 |
Defra AI SDLC Playbook | This playbook provides guidance on best practices for integrating AI into the Software Development Lifecycle (SDLC), specifically tailored to Defra’s needs and challenges. It serves as a living document, continuously updated to reflect emerging practices and lessons learned. | https://defra.github.io/defra-ai-sdlc/ | Playbook | DEFRA | January, 2025 |
Humanity’s Last Exam | Humanity’s Last Exam is a multi-modal benchmark at the frontier of human knowledge designed to be the final closed-ended academic benchmark of its kind with broad subject coverage. The dataset consists of 2,700 challenging questions across over a hundred subjects. | https://agi.safe.ai/ | Tool | Phan, Han & Adesara | January, 2025 |
Block Open Source Introduces “codename goose” — an Open Framework for AI Agents | Block’s Open Source Program Office announced the launch of codename goose, an interoperable AI agent framework that enables users to connect large language models (LLMs) to real-world actions. | https://block.xyz/inside/block-open-source-introduces-codename-goose | Product release | Block | January, 2025 |
Friends for sale: the rise and risks of AI companions | What are the possible long-term effects of AI companions on individuals and society? | https://www.adalovelaceinstitute.org/blog/ai-companions/ | Report | Jamie Bernardi, Ada Lovelace Institute | January, 2025 |
A learning curve? | A landscape review of AI and education in the UK. | https://www.adalovelaceinstitute.org/report/a-learning-curve/ | Report | Renate Samson & Kruakae Pothong, Ada Lovelace Institute | January, 2025 |
Generative AI: product safety expectations | Guidance on the safety expectations for using generative AI products and systems in educational settings. | https://www.gov.uk/government/publications/generative-ai-product-safety-expectations | Guidance | Department for Education | January, 2025 |
AI Opportunities Action Plan | AI capabilities are developing at an extraordinary pace. If this continues, artificial intelligence (AI) could be the government’s single biggest lever to deliver its five missions, especially the goal of kickstarting broad-based economic growth. | https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan | Strategy | Department for Science, Innovation & Technology | January, 2025 |
OpenAI o3-mini | Pushing the frontier of cost-effective reasoning. | https://openai.com/index/openai-o3-mini/ | Product release | OpenAI | January, 2025 |
Introducing Operator | A research preview of an agent that can use its own browser to perform tasks for you. | https://openai.com/index/introducing-operator/ | Product release | OpenAI | January, 2025 |
AI C.A.R.E. Cards for Leaders | This is a set of open-ended questions meant to start conversations around AI from a leadership perspective. | https://www.alignmentedu.com/resources | Tool | Alignment Edu | — |
Sora is here | Sharing initial research progress on world simulation in Sora, a model that can create realistic videos from text. | https://openai.com/index/sora-is-here/ | Product release | OpenAI | December, 2024 |
AI Generated Business: The Rise of AGI and the Rush to Find a Working Revenue Model | This report set out to investigate and elucidate the business models behind the generative AI companies that are drawing hundreds of billions of dollars in investment. | https://ainowinstitute.org/general/ai-generated-business | Report | Brian Merchant, AI Now Institute | December, 2024 |
OpenAI o1 and new tools for developers | Introducing OpenAI o1, Realtime API improvements, a new fine-tuning method and more for developers. | https://openai.com/index/o1-and-new-tools-for-developers/ | Product release | OpenAI | December, 2024 |
Introducing the Model Context Protocol | Open-sourcing the Model Context Protocol (MCP), a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses (a minimal server sketch follows the table). | https://www.anthropic.com/news/model-context-protocol | Product release | Anthropic | November, 2024 |
Introducing ChatGPT search | Get fast, timely answers with links to relevant web sources. | https://openai.com/index/introducing-chatgpt-search/ | Product release | OpenAI | October, 2024 |
Governing AI for Humanity: Final Report | Recommendations to advance a holistic vision for a globally networked, agile and flexible approach to governing AI for humanity. | https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf | Report | United Nations | September, 2024 |
The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School | Provides additional data and support for those developing educational, research, and policy initiatives to better understand and represent the interests of middle and high school students and their parents or guardians at a time of great debate over the integration of artificial intelligence technologies in schools. | https://www.commonsensemedia.org/sites/default/files/research/report/2024-the-dawn-of-the-ai-era_final-release-for-web.pdf | Report | Common Sense Media | September, 2024 |
GenAI academic prompt bank | A dynamic resource designed to support thoughtful and ethical use of GenAI as a tool for learning. | https://www.sheffield.ac.uk/study-skills/digital/generative-ai/prompt-bank | Tool | University of Sheffield | September, 2024 |
Generative AI Guidelines Canvas | A canvas to help organise key pieces of information into a unified framework for AI adoption unique to your specific context. | https://johnnash.notion.site/Generative-AI-Guidelines-Canvas-v-2-d174ba11a62a41f59526f9271b9db8a6 | Tool | John Nash | — |
Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile | Cross-sectoral profile of and companion resource for the AI Risk Management Framework (AI RMF 1.0) for Generative AI, pursuant to President Biden’s Executive Order (EO) 14110 on Safe, Secure, and Trustworthy Artificial Intelligence. | https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf | Framework | National Institute of Standards and Technology (NIST) | July, 2024 |
EU AI Act adopted | The AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence) is the first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe. | https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 | Regulation | European Union | June, 2024 |
The AI Bill Project | This paper sets out the background to the TUC Artificial Intelligence (Employment and Regulation) Bill (“the Bill”), the multi-stakeholder process behind the drafting, why the Bill is needed, and how it could improve the rights of working people. | https://www.tuc.org.uk/research-analysis/reports/ai-bill-project | Report | Mary Towers, Policy officer – employment rights, TUC | April, 2024 |
Advances model fairness analysis summary report | An internal assessment of a machine-learning programme used to vet claims for universal credit payments across England found that it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud. | https://www.whatdotheyknow.com/request/ai_strategy_information/response/2748592/attach/6/Advances%20Fairness%20Analysis%20February%2024%20redacted%201.pdf?cookie_passthrough=1 | Report | Integrated Risk and Intelligence Service, Department for Work and Pensions | April, 2024 |
Power and Governance in the Age of AI | Experts Reflect on Artificial Intelligence and the Public Good. | https://www.newamerica.org/planetary-politics/briefs/power-governance-ai-public-good/ | Report | Gordon LaForge, Allison Stanger, Sarah Myers West, Bruce Schneier, Stephanie Forrest, and Nazli Choucri, New America | March, 2024 |
AI nationalism(s): Global industrial policy approaches to AI | Collection of essays exploring the nationalist narratives and emergent industrial policies being proposed by governments with differing economic and geopolitical motivations. | https://ainowinstitute.org/ai-nationalisms | Report | AI Now | March, 2024 |
BBC AI Principles | BBC AI Principles are at the heart of our approach to using AI responsibly and apply to all use of AI at the BBC. They underpin the BBC’s public commitments about how we will use Generative AI. | https://www.bbc.co.uk/supplying/working-with-us/ai-principles/ | Principles | British Broadcasting Corporation | February, 2024 |
Generative AI Framework for HMG | The Generative AI Framework for HMG is guidance on using generative AI safely and securely for civil servants and people working in government organisations. | https://www.gov.uk/government/publications/generative-ai-framework-for-hmg | Framework | Cabinet Office, Government Digital Service and Central Digital and Data Office | January, 2024 |
ISO/IEC 42001:2023 | ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. It is designed for entities providing or utilizing AI-based products or services, ensuring responsible development and use of AI systems. | https://www.iso.org/standard/81230.html | Standard | International Organization for Standardization | December, 2023 |
New York Times sues Microsoft and OpenAI for ‘billions’ | US news organisation the New York Times is suing ChatGPT-owner OpenAI over claims its copyright was infringed to train the system. | https://www.bbc.co.uk/news/technology-67826601 | News | BBC | December, 2023 |
Introducing GPTs | Create custom versions of ChatGPT that combine instructions, extra knowledge, and any combination of skills. | https://openai.com/index/introducing-gpts/ | Product release | OpenAI | November, 2023 |
AI Safety Summit 2023 | The AI Safety Summit 2023 was a major global event that took place on 1 and 2 November at Bletchley Park, Buckinghamshire. | https://www.gov.uk/government/topical-events/ai-safety-summit-2023 | Event | Department for Science, Innovation and Technology | November, 2023 |
Foundation models in the public sector | An overview of foundation models and their potential use cases in central and local government in the UK. Considers their risks and opportunities in the public sector, as highlighted by public-sector leaders, researchers and civil society, and explores the principles, regulations and practices, such as impact assessments, monitoring and public involvement, necessary to deploy foundation models in the public sector safely, ethically and equitably. | https://www.adalovelaceinstitute.org/wp-content/uploads/2023/10/Foundation-models-in-the-public-sector-Oct-2023.pdf | Report | Ada Lovelace Institute | October, 2023 |
Hiroshima Process International Guiding Principles for Advanced AI system | The International Guiding Principles for Organizations Developing Advanced AI Systems aim to promote safe, secure, and trustworthy AI worldwide and provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems. | https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system | Principles | Digital strategy, European Union | October, 2023 |
Emerging processes for frontier AI safety | Overview of emerging frontier AI safety processes and associated practices. | https://www.gov.uk/government/publications/emerging-processes-for-frontier-ai-safety | Report | Department for Science, Innovation and Technology | October, 2023 |
How do people feel about AI? | A nationally representative survey of public attitudes to artificial intelligence in Britain | https://www.adalovelaceinstitute.org/report/public-attitudes-ai/ | Report | Roshni Modhvadia, Ada Lovelace Institute | June, 2023 |
AI regulation: a pro-innovation approach | White paper details the UK government’s plans for implementing a pro-innovation approach to AI regulation. | https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach | Regulation | Department for Science, Innovation and Technology and Office for Artificial Intelligence | March, 2023 |
Using AI to Implement Effective Teaching Strategies in Classrooms: Five Strategies, Including Prompts | This paper provides guidance for using AI to quickly and easily implement evidence-based teaching strategies that instructors can integrate into their teaching. We discuss five teaching strategies that have proven value but are hard to implement in practice due to time and effort constraints. | https://dx.doi.org/10.2139/ssrn.4391243 | Academic paper | Mollick & Mollick, Wharton School | March, 2023 |
Introducing Khanmigo! | AI education tool for teachers, learners and parents. | https://blog.khanacademy.org/harnessing-ai-so-that-all-students-benefit-a-nonprofit-approach-for-equal-access/ | Product release | Khan Academy | March, 2023 |
Artificial Intelligence (AI) and Digital Healthcare Technologies Capability framework | The healthcare workforce needs to continually adapt to meet the needs of the society it serves. Health Education England (HEE) commissioned the University of Manchester to perform a learning needs analysis and develop a framework outlining the skills and capabilities needed for health and care professionals to work in a digitally enhanced environment. | https://digital-transformation.hee.nhs.uk/building-a-digital-workforce/dart-ed/horizon-scanning/ai-and-digital-healthcare-technologies | Framework | NHS England | February, 2023 |
Looking before we leap | Expanding ethical review processes for AI and data science research. | https://www.adalovelaceinstitute.org/report/looking-before-we-leap/ | Report | Ada Lovelace Institute, the University of Exeter’s Institute for Data Science and Artificial Intelligence, and the Alan Turing Institute | December, 2022 |
Introducing ChatGPT | ChatGPT interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. | https://openai.com/index/chatgpt/ | Product release | OpenAI | November, 2022 |
Artificial Intelligence and Intellectual Property: copyright and patents: Government response to consultation | The Government sought evidence and views on a range of options on how AI should be dealt with in the patent and copyright systems. This response sets out the conclusions of that process. | https://www.gov.uk/government/consultations/artificial-intelligence-and-ip-copyright-and-patents/outcome/artificial-intelligence-and-intellectual-property-copyright-and-patents-government-response-to-consultation | Report | Intellectual Property Office | June, 2022 |
The Scottish AI Playbook | Scotland’s AI Support and Resource Hub for Business. | https://www.scottishaiplaybook.com/ | Playbook | Scottish AI Alliance | March, 2022 |
Algorithmic impact assessment: a case study in healthcare | This report sets out the first-known detailed proposal for the use of an algorithmic impact assessment for data access in a healthcare context. | https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ | Report | Lara Groves, Ada Lovelace Institute | February, 2022 |
Awful AI | A curated list to track current scary usages of AI – hoping to raise awareness to its misuses in society. | https://github.com/daviddao/awful-ai/tree/v1.0.0 | Tool | David Dao, et al | January, 2022 |
Technical methods for regulatory inspection of algorithmic systems | A survey of auditing methods for use in regulatory inspections of online harms in social media platforms. | https://www.adalovelaceinstitute.org/report/technical-methods-regulatory-inspection/ | Report | Ada Lovelace Institute | December, 2021 |
Recommendation on the Ethics of Artificial Intelligence | The first-ever global standard on AI ethics is applicable to all 194 member states of UNESCO. | https://www.unesco.org/en/artificial-intelligence/recommendation-ethics | Guidance | UNESCO | November, 2021 |
National AI Strategy | The National AI Strategy builds on the UK’s strengths but also represents the start of a step-change for AI in the UK, recognising the power of AI to increase resilience, productivity, growth and innovation across the private and public sectors. | https://www.gov.uk/government/publications/national-ai-strategy | Strategy | Department for Science, Innovation and Technology, Office for Artificial Intelligence, Department for Digital, Culture, Media & Sport and Department for Business, Energy & Industrial Strategy | September, 2021 |
Algorithmic accountability for the public sector | Learning from the first wave of policy implementation. | https://www.adalovelaceinstitute.org/report/algorithmic-accountability-public-sector/ | Report | Ada Lovelace Institute, AI Now Institute, and Open Government Partnerships | August, 2021 |
Scotland’s AI strategy | The Strategy marks a new chapter in Scotland’s relationship with artificial intelligence. It is the result of an extensive consultation and engagement programme involving academia, industry, the public sector and the people of Scotland who were generous with their time, contributing ideas, insights and opinion on AI. | https://www.scotlandaistrategy.com/the-strategy | Strategy | Scottish AI Alliance | March, 2021 |
The Ethical Framework for AI in Education | The Ethical Framework for AI in Education is grounded in a shared vision of ethical AI in education and will help to enable all learners to benefit optimally from AI in education, whilst also being protected against the risks this technology presents. The Framework is aimed at those making procurement and application decisions relevant to AI in education. | https://www.buckingham.ac.uk/wp-content/uploads/2021/03/The-Institute-for-Ethical-AI-in-Education-The-Ethical-Framework-for-AI-in-Education.pdf | Framework | Institute for Ethical AI in Education | March, 2021 |
On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? | This paper considers the possible risks associated with LLM technology and what paths are available for mitigating those risks, and provides recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, and carrying out pre-development exercises. | https://dl.acm.org/doi/pdf/10.1145/3442188.3445922 | Academic paper | Emily Bender, Timnit Gebru, Angelina McMillan-Major and Shmargaret Shmitchell | March, 2021 |
Inspecting algorithms in social media platforms | Joint briefing with Reset, giving insights and recommendations towards a practical route forward for regulatory inspection of algorithms. | https://www.adalovelaceinstitute.org/report/inspecting-algorithms-in-social-media-platforms/ | Report | Ada Lovelace Institute | November, 2020 |
Transparency mechanisms for UK public-sector algorithmic decision-making systems | Existing UK mechanisms for transparency and their relation to the implementation of algorithmic decision-making systems. | https://www.adalovelaceinstitute.org/report/transparency-mechanisms-for-uk-public-sector-algorithmic-decision-making-systems/ | Report | Ada Lovelace Institute | October, 2020 |
Examining the Black Box | Identifying common language for algorithm audits and impact assessments. | https://www.adalovelaceinstitute.org/report/examining-the-black-box-tools-for-assessing-algorithmic-systems/ | Report | Ada Lovelace Institute | April, 2020 |
Planning and preparing for artificial intelligence implementation | Guidance to help you plan and prepare for implementing artificial intelligence (AI). | https://www.gov.uk/guidance/planning-and-preparing-for-artificial-intelligence-implementation | Guidance | Department for Science, Innovation and Technology, Office for Artificial Intelligence and Centre for Data Ethics and Innovation | June, 2019 |
Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. | The most comprehensive guidance on the topic of AI ethics and safety in the public sector to date. It identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. The guide stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. | https://www.turing.ac.uk/sites/default/files/2019-08/understanding_artificial_intelligence_ethics_and_safety.pdf | Guidance | Professor David Leslie, Alan Turing Institute | June, 2019 |
OECD AI Principles | Promote use of AI that is innovative and trustworthy and that respects human rights and democratic values. | https://oecd.ai/en/ai-principles | Principles | Organisation for Economic Co-operation and Development | May, 2019 |
The AI Playbook | The step-by-step guide to taking advantage of AI in your business. | https://www.ai-playbook.com/introduction | Playbook | MMC Ventures, in partnership with Barclays | January, 2019 |
Google’s AI Principles | Google’s approach to developing and harnessing the potential of AI is grounded in its founding mission to organize the world’s information and make it universally accessible and useful, and is shaped by a commitment to improve the lives of as many people as possible. | https://ai.google/responsibility/principles/ | Principles | Google AI | June, 2018 |
Introducing OpenAI | OpenAI launches as a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole. | https://openai.com/index/introducing-openai/ | Event | OpenAI | December, 2015 |
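
For the Agent2Agent (A2A) entry above, a minimal discovery sketch in Python. A2A agents advertise their identity and skills through a public JSON "Agent Card" served at a well-known URL; the path, field names and agent URL below follow the draft spec as announced in April 2025 and are assumptions that may differ in later revisions.

```python
import requests

# Hypothetical A2A agent endpoint; replace with a real deployment.
AGENT_BASE_URL = "https://agent.example.com"

# Fetch the Agent Card, the JSON document an A2A agent publishes to
# describe itself to other agents (well-known path per the draft spec;
# later revisions may use a different path such as agent-card.json).
card = requests.get(f"{AGENT_BASE_URL}/.well-known/agent.json", timeout=10).json()

print(card.get("name"), "-", card.get("description"))
for skill in card.get("skills", []):
    # Each skill describes a discrete capability a client agent can request.
    print("skill:", skill.get("id"), "-", skill.get("name"))
```

Task exchange itself then happens over JSON-RPC between the client and remote agent; the card is only the starting point of the handshake.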
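
For the Model Context Protocol entry above, a minimal server sketch, assuming the official `mcp` Python SDK and its FastMCP helper (installed with `pip install mcp`); the tool and resource shown are illustrative examples, not part of the protocol itself.

```python
from mcp.server.fastmcp import FastMCP

# Name the server; MCP-capable assistants list it under this label.
mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers (an illustrative tool the model may call)."""
    return a + b

@mcp.resource("greeting://{name}")
def greeting(name: str) -> str:
    """An illustrative read-only resource exposed to the assistant."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    # Serves over stdio by default, the transport local MCP clients expect.
    mcp.run()
```

Pointing an MCP-capable client (such as Claude Desktop) at this script makes the tool and resource available to the model during a conversation.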
If you think of anything I should add, let me know.