# Auto-Update News
News items discovered by the auto-update pipeline, and how they were routed to wiki pages: 200 items across 9 runs, all 200 rated high-relevance, 40 routed to pages.
Live data from wiki-server
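The counts above imply a filter-then-route flow: items are scored for relevance, high-relevance items are kept, and each kept item either matches a wiki page or is marked "not routed". A minimal sketch of that flow — the threshold value, names, and matcher interface are all hypothetical, not the pipeline's actual code:

```python
from dataclasses import dataclass
from typing import Callable, Optional

RELEVANCE_THRESHOLD = 90  # hypothetical cutoff; observed scores in the table run 90-95

@dataclass
class NewsItem:
    title: str
    source: str
    score: int
    routed_to: Optional[str] = None  # wiki page name, or None if not routed

def route_items(items: list, page_matcher: Callable) -> tuple:
    """Keep high-relevance items and attach a target page where one matches.

    `page_matcher` maps an item to a wiki page name, or None for no match.
    Returns (routed, unrouted) lists of the surviving items.
    """
    routed, unrouted = [], []
    for item in items:
        if item.score < RELEVANCE_THRESHOLD:
            continue  # low-relevance items are dropped entirely
        item.routed_to = page_matcher(item)
        (routed if item.routed_to else unrouted).append(item)
    return routed, unrouted
```

In this sketch an item can appear in at most one of the two output lists, matching how each table row shows either a page name or "not routed".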
## News Items
| Score | Title | Source | Published | Routed To | Run |
|---|---|---|---|---|---|
| 95 | **Taking a responsible path to AGI** We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | Wed, 02 Apr 2025 13:31:00 +0000 | not routed | 2026-03-17 |
| 95 | **Introducing OpenAI** OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g… | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | not routed | 2026-03-17 |
| 95 | **Concrete AI safety problems** We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around… | openai-blog | Tue, 21 Jun 2016 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | **Why responsible AI development needs cooperation on safety** We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and bene… | openai-blog | Wed, 10 Jul 2019 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | **Safety Gym** We’re releasing Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training. | openai-blog | Thu, 21 Nov 2019 08:00:00 GMT | not routed | 2026-03-17 |
| 95 | **OpenAI Microscope** We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision “model organisms” which are often studied in interpretability. Microscope make… | openai-blog | Tue, 14 Apr 2020 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | **Governance of superintelligence** Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-17 |
| 95 | **Superalignment Fast Grants** We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, a… | openai-blog | Thu, 14 Dec 2023 08:00:00 GMT | not routed | 2026-03-17 |
| 95 | **Preparing for future AI risks in biology** Advanced AI can transform biology and medicine—but also raises biosecurity risks. We’re proactively assessing capabilities and implementing safeguards to prevent misuse. | openai-blog | Wed, 18 Jun 2025 10:00:00 GMT | Bioweapons Attack Chain Model (standard) | 2026-03-17 |
| 95 | **Concrete AI safety problems** We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around… | openai-blog | Tue, 21 Jun 2016 07:00:00 GMT | AI Accident Risk Cruxes (standard) | 2026-03-15 |
| 95 | **Our approach to alignment research** We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alig… | openai-blog | Wed, 24 Aug 2022 07:00:00 GMT | not routed | 2026-03-15 |
| 95 | **Planning for AGI and beyond** Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | Fri, 24 Feb 2023 08:00:00 GMT | not routed | 2026-03-15 |
| 95 | **Governance of superintelligence** Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-15 |
| 95 | **Frontier risk and preparedness** To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-15 |
| 95 | **Introducing OpenAI** OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g… | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | not routed | 2026-03-14 |
| 95 | **Lessons learned on language model safety and misuse** We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models. | openai-blog | Thu, 03 Mar 2022 08:00:00 GMT | Large Language Models (standard) | 2026-03-14 |
| 95 | **Planning for AGI and beyond** Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | Fri, 24 Feb 2023 08:00:00 GMT | not routed | 2026-03-14 |
| 95 | **Governance of superintelligence** Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-14 |
| 95 | **OpenAI’s Approach to Frontier Risk** An Update for the UK AI Safety Summit | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | Existential Risk from AI (standard) | 2026-03-14 |
| 95 | **OpenAI and Anthropic share findings from a joint safety evaluation** OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more—highli… | openai-blog | Wed, 27 Aug 2025 10:00:00 GMT | not routed | 2026-03-14 |
| 95 | **Navigating AI Risks — Homepage** A Substack publication offering **news and analysis about the governance of transformative AI risks**, aimed at policymakers, tech enthusiasts, and engaged citizens. | navigating-ai-risks | 2026-03-13 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | **Taking a responsible path to AGI** We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | Wed, 02 Apr 2025 13:31:00 +0000 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | **Introducing OpenAI** OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g… | openai-blog | Fri, 11 Dec 2015 08:00:00 GMT | not routed | 2026-03-13 |
| 95 | **Aligning language models to follow instructions** We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment researc… | openai-blog | Thu, 27 Jan 2022 08:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | **Lessons learned on language model safety and misuse** We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models. | openai-blog | Thu, 03 Mar 2022 08:00:00 GMT | Large Language Models (standard) | 2026-03-13 |
| 95 | **Frontier risk and preparedness** To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | **OpenAI’s Approach to Frontier Risk** An Update for the UK AI Safety Summit | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | **Estimating worst case frontier risks of open weight LLMs** In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as ca… | openai-blog | Tue, 05 Aug 2025 00:00:00 GMT | not routed | 2026-03-13 |
| 95 | **Advancing independent research on AI alignment** OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | Thu, 19 Feb 2026 10:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | **Planning for AGI and beyond** Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | Fri, 24 Feb 2023 08:00:00 GMT | not routed | 2026-03-10 |
| 95 | **Governance of superintelligence** Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | Mon, 22 May 2023 07:00:00 GMT | not routed | 2026-03-10 |
| 95 | **Detecting and reducing scheming in AI models** Apollo Research and OpenAI developed evaluations for hidden misalignment (“scheming”) and found behaviors consistent with scheming in controlled tests across frontier models. The team shared concrete… | openai-blog | Wed, 17 Sep 2025 00:00:00 GMT | Why Alignment Might Be Hard (standard) | 2026-03-10 |
| 95 | **Advancing independent research on AI alignment** OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | Thu, 19 Feb 2026 10:00:00 GMT | not routed | 2026-03-10 |
| 95 | **OpenAI technical goals** OpenAI’s mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible. | openai-blog | Mon, 20 Jun 2016 07:00:00 GMT | AI Safety Solution Cruxes (standard) | 2026-03-09 |
| 95 | **AI safety needs social scientists** We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems… | openai-blog | Tue, 19 Feb 2019 08:00:00 GMT | Why Alignment Might Be Hard (standard) | 2026-03-09 |
| 92 | **AI Safety Newsletter #69: Department of War, Anthropic, and National Security** Also, Anthropic Removes a Core Safety Commitment | cais-newsletter | Fri, 13 Mar 2026 14:15:54 GMT | not routed | 2026-03-15 |
| 92 | **OpenAI’s Approach to Frontier Risk** An Update for the UK AI Safety Summit | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-15 |
| 92 | **Frontier risk and preparedness** To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-14 |
| 90 | **Last Week in AI #338 - Anthropic sues Trump, xAI starting over, Iran AI Fakes** Anthropic sues Trump administration in AI dispute with Pentagon, ‘Not built right the first time’ — Musk’s xAI is starting over again, again, Cascade of A.I. Fakes About War Wi… | last-week-in-ai | Mon, 16 Mar 2026 04:18:14 GMT | not routed | 2026-03-17 |
| 90 | **AI Safety Newsletter #69: Department of War, Anthropic, and National Security** Also, Anthropic Removes a Core Safety Commitment | cais-newsletter | Fri, 13 Mar 2026 14:15:54 GMT | not routed | 2026-03-17 |
| 90 | **Let 2026 Be the Year the World Comes Together for AI Safety** *Nature* (Published December 29, 2025) An editorial from *Nature* calling for global coordination on AI safety governance. The authorities in China are taking AI regulation extremely seriously, as ar… | aisafety-news-search | 2026-03-17 | not routed | 2026-03-17 |
| 90 | **Updating the Frontier Safety Framework** Our next iteration of the FSF sets out stronger security protocols on the path to AGI | deepmind-blog | Tue, 04 Feb 2025 16:41:00 +0000 | not routed | 2026-03-17 |
| 90 | **OpenAI technical goals** OpenAI’s mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible. | openai-blog | Mon, 20 Jun 2016 07:00:00 GMT | not routed | 2026-03-17 |
| 90 | — | openai-blog | Thu, 21 Nov 2019 08:00:00 GMT | not routed | 2026-03-17 |
| 90 | **Improving verifiability in AI development** We’ve contributed to a multi-stakeholder report by 58 co-authors at 30 organizations, including the Centre for the Future of Intelligence, Mila, Schwartz Reisman Institute for Technology and Society,… | openai-blog | Thu, 16 Apr 2020 07:00:00 GMT | not routed | 2026-03-17 |
| 90 | **Our approach to alignment research** We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alig… | openai-blog | Wed, 24 Aug 2022 07:00:00 GMT | not routed | 2026-03-17 |
| 90 | **Our approach to AI safety** Ensuring that AI systems are built, deployed, and used safely is critical to our mission. | openai-blog | Wed, 05 Apr 2023 07:00:00 GMT | not routed | 2026-03-17 |
| 90 | **Frontier risk and preparedness** To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | Thu, 26 Oct 2023 07:00:00 GMT | not routed | 2026-03-17 |
| 90 | **Weak-to-strong generalization** We present a new research direction for superalignment, together with promising initial results: can we leverage the generalization properties of deep learning to control strong models with weak super… | openai-blog | Thu, 14 Dec 2023 00:00:00 GMT | not routed | 2026-03-17 |
| 90 | **Stargate Infrastructure** OpenAI, and our strategic partners, are thrilled about our shared vision for the Infrastructure of AGI. We are energized by the challenges we face and are excited by the prospect of partnering with fi… | openai-blog | Tue, 21 Jan 2025 13:30:00 GMT | not routed | 2026-03-17 |
## Configured Sources
25 sources configured (22 enabled). Edit `data/auto-update/sources.yaml` to change the list.
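A hypothetical entry in that file, with field names inferred from the table columns below — the actual schema of `sources.yaml` is not shown on this page, so treat every key as an assumption:

```yaml
sources:
  - name: OpenAI Blog
    type: rss            # rss | web-search
    frequency: daily     # daily | weekly
    reliability: high    # high | medium
    categories: [ai-labs, models, safety, policy]
    enabled: true        # OFF rows in the table would correspond to enabled: false
```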
| Status | Name | Type | Frequency | Reliability | Categories | Last Fetched |
|---|---|---|---|---|---|---|
| ON | OpenAI Blog | rss | daily | high | ai-labs, models, safety, policy | — |
| ON | Anthropic Blog / Research | web-search | daily | high | ai-labs, safety, models, interpretability | — |
| ON | Google DeepMind Blog | rss | daily | high | ai-labs, models, safety, research | — |
| ON | Meta AI Blog | web-search | daily | high | ai-labs, models, open-source | — |
| ON | Alignment Forum | rss | daily | high | safety, alignment, research | — |
| ON | LessWrong | rss | daily | medium | safety, alignment, rationality, research | — |
| ON | EA Forum | rss | daily | medium | safety, policy, governance, funding | — |
| ON | AI Safety Policy News | web-search | daily | medium | policy, governance, regulation | — |
| ON | AI Executive Orders & Legislation | web-search | daily | medium | policy, governance, regulation | — |
| ON | ML Safety Newsletter | rss | daily | high | safety, alignment, research | — |
| ON | AI Safety Newsletter (CAIS) | rss | daily | high | safety, alignment, policy, research | — |
| ON | Last Week in AI | rss | daily | medium | ai-labs, models, industry, research | — |
| ON | Navigating AI Risks | web-search | daily | medium | safety, governance, policy, risk | — |
| ON | arXiv cs.AI (Artificial Intelligence) | rss | daily | high | research, safety, alignment, models, interpretability | — |
| ON | arXiv cs.CL (Computation and Language) | rss | daily | high | research, models, interpretability, capabilities | — |
| ON | arXiv cs.LG (Machine Learning) | rss | daily | high | research, models, safety, alignment | — |
| ON | AI Industry News | web-search | daily | medium | compute, industry, funding | — |
| ON | Jeffrey Epstein AI Researcher Connections | web-search | weekly | medium | safety, funding, history, governance | — |
| ON | AI Policy in Congress | web-search | daily | medium | policy, governance, legislation | — |
| ON | AI PAC & Election Spending | web-search | weekly | medium | policy, funding, governance | — |
| ON | State AI Legislation | web-search | weekly | medium | policy, governance, legislation | — |
| ON | Biosecurity Policy | web-search | weekly | medium | policy, governance, biosecurity | — |
| OFF | Import AI Newsletter | rss | daily | high | ai-labs, models, policy, research | — |
| OFF | The Gradient | rss | daily | high | research, models, safety | — |
| OFF | Zvi Mowshowitz (Don't Worry About the Vase) | rss | daily | high | safety, policy, models, ai-labs, governance | — |
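Fetch cadence is driven by each source's status and `frequency` field. A minimal due-for-fetch check, hypothetical in every detail beyond the daily/weekly values and ON/OFF status shown above:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical intervals for the two frequencies that appear in the table.
INTERVALS = {"daily": timedelta(days=1), "weekly": timedelta(weeks=1)}

def is_due(source: dict, now: datetime) -> bool:
    """True if an enabled source should be fetched on this run.

    `source` mirrors the table columns: keys `enabled` (bool),
    `frequency` ("daily" or "weekly"), and `last_fetched`
    (datetime, or None when the table shows a dash).
    """
    if not source["enabled"]:
        return False  # OFF sources are never fetched
    last = source.get("last_fetched")
    if last is None:
        return True  # never fetched yet
    return now - last >= INTERVALS[source["frequency"]]
```

On this sketch's assumptions, every enabled source in the table above is due, since none has a recorded last-fetch time.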