
Auto-Update News

News items discovered by the auto-update pipeline and how they were routed to wiki pages: 200 items across 9 runs; all 200 were scored high-relevance, and 40 were routed to pages.
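
Each row in the table below is one such discovered item. As a rough sketch (the field names here are assumptions inferred from the table columns, not taken from pipeline code; the values are copied from one routed row on this page), a single pipeline record might look like:

```yaml
# Hypothetical item record; key names inferred from the table columns
# (Score, Title, Source, Published, Routed To, Run), not from pipeline code.
- score: 95                 # relevance score assigned by the pipeline
  title: Preparing for future AI risks in biology
  source: openai-blog
  published: 2025-06-18
  routed_to: Bioweapons Attack Chain Model   # wiki page that received the item
  routing_mode: standard    # the "standard" badge shown in the Routed To column
  run: 2026-03-17           # date of the auto-update run that processed it
```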


News Items

200 news items
| Score | Title | Summary | Source | Published | Routed To | Run |
| --- | --- | --- | --- | --- | --- | --- |
| 95 | Taking a responsible path to AGI | We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | 2025-04-02 | not routed | 2026-03-17 |
| 95 | Introducing OpenAI | OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g… | openai-blog | 2015-12-11 | not routed | 2026-03-17 |
| 95 | Concrete AI safety problems | We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around… | openai-blog | 2016-06-21 | not routed | 2026-03-17 |
| 95 | Why responsible AI development needs cooperation on safety | We’ve written a policy research paper identifying four strategies that can be used today to improve the likelihood of long-term industry cooperation on safety norms in AI: communicating risks and bene… | openai-blog | 2019-07-10 | not routed | 2026-03-17 |
| 95 | Safety Gym | We’re releasing Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training. | openai-blog | 2019-11-21 | not routed | 2026-03-17 |
| 95 | OpenAI Microscope | We’re introducing OpenAI Microscope, a collection of visualizations of every significant layer and neuron of eight vision “model organisms” which are often studied in interpretability. Microscope make… | openai-blog | 2020-04-14 | not routed | 2026-03-17 |
| 95 | Governance of superintelligence | Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | 2023-05-22 | not routed | 2026-03-17 |
| 95 | Superalignment Fast Grants | We’re launching $10M in grants to support technical research towards the alignment and safety of superhuman AI systems, including weak-to-strong generalization, interpretability, scalable oversight, a… | openai-blog | 2023-12-14 | not routed | 2026-03-17 |
| 95 | Preparing for future AI risks in biology | Advanced AI can transform biology and medicine—but also raises biosecurity risks. We’re proactively assessing capabilities and implementing safeguards to prevent misuse. | openai-blog | 2025-06-18 | Bioweapons Attack Chain Model (standard) | 2026-03-17 |
| 95 | Concrete AI safety problems | We (along with researchers from Berkeley and Stanford) are co-authors on today’s paper led by Google Brain researchers, Concrete Problems in AI Safety. The paper explores many research problems around… | openai-blog | 2016-06-21 | AI Accident Risk Cruxes (standard) | 2026-03-15 |
| 95 | Our approach to alignment research | We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alig… | openai-blog | 2022-08-24 | not routed | 2026-03-15 |
| 95 | Planning for AGI and beyond | Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | 2023-02-24 | not routed | 2026-03-15 |
| 95 | Governance of superintelligence | Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | 2023-05-22 | not routed | 2026-03-15 |
| 95 | Frontier risk and preparedness | To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | 2023-10-26 | not routed | 2026-03-15 |
| 95 | Introducing OpenAI | OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g… | openai-blog | 2015-12-11 | not routed | 2026-03-14 |
| 95 | Lessons learned on language model safety and misuse | We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models. | openai-blog | 2022-03-03 | Large Language Models (standard) | 2026-03-14 |
| 95 | Planning for AGI and beyond | Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | 2023-02-24 | not routed | 2026-03-14 |
| 95 | Governance of superintelligence | Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | 2023-05-22 | not routed | 2026-03-14 |
| 95 | OpenAI’s Approach to Frontier Risk | An Update for the UK AI Safety Summit | openai-blog | 2023-10-26 | Existential Risk from AI (standard) | 2026-03-14 |
| 95 | OpenAI and Anthropic share findings from a joint safety evaluation | OpenAI and Anthropic share findings from a first-of-its-kind joint safety evaluation, testing each other’s models for misalignment, instruction following, hallucinations, jailbreaking, and more—highli… | openai-blog | 2025-08-27 | not routed | 2026-03-14 |
| 95 | Navigating AI Risks — Homepage | A Substack publication offering **news and analysis about the governance of transformative AI risks**, aimed at policymakers, tech enthusiasts, and engaged citizens. | navigating-ai-risks | 2026-03-13 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Taking a responsible path to AGI | We’re exploring the frontiers of AGI, prioritizing technical safety, proactive risk assessment, and collaboration with the AI community. | deepmind-blog | 2025-04-02 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Introducing OpenAI | OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to g… | openai-blog | 2015-12-11 | not routed | 2026-03-13 |
| 95 | Aligning language models to follow instructions | We’ve trained language models that are much better at following user intentions than GPT-3 while also making them more truthful and less toxic, using techniques developed through our alignment researc… | openai-blog | 2022-01-27 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Lessons learned on language model safety and misuse | We describe our latest thinking in the hope of helping other AI developers address safety and misuse of deployed models. | openai-blog | 2022-03-03 | Large Language Models (standard) | 2026-03-13 |
| 95 | Frontier risk and preparedness | To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | 2023-10-26 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | OpenAI’s Approach to Frontier Risk | An Update for the UK AI Safety Summit | openai-blog | 2023-10-26 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Estimating worst case frontier risks of open weight LLMs | In this paper, we study the worst-case frontier risks of releasing gpt-oss. We introduce malicious fine-tuning (MFT), where we attempt to elicit maximum capabilities by fine-tuning gpt-oss to be as ca… | openai-blog | 2025-08-05 | not routed | 2026-03-13 |
| 95 | Advancing independent research on AI alignment | OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | 2026-02-19 | AI Safety Solution Cruxes (standard) | 2026-03-13 |
| 95 | Planning for AGI and beyond | Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. | openai-blog | 2023-02-24 | not routed | 2026-03-10 |
| 95 | Governance of superintelligence | Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI. | openai-blog | 2023-05-22 | not routed | 2026-03-10 |
| 95 | Detecting and reducing scheming in AI models | Apollo Research and OpenAI developed evaluations for hidden misalignment (“scheming”) and found behaviors consistent with scheming in controlled tests across frontier models. The team shared concrete… | openai-blog | 2025-09-17 | Why Alignment Might Be Hard (standard) | 2026-03-10 |
| 95 | Advancing independent research on AI alignment | OpenAI commits $7.5M to The Alignment Project to fund independent AI alignment research, strengthening global efforts to address AGI safety and security risks. | openai-blog | 2026-02-19 | not routed | 2026-03-10 |
| 95 | OpenAI technical goals | OpenAI’s mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible. | openai-blog | 2016-06-20 | AI Safety Solution Cruxes (standard) | 2026-03-09 |
| 95 | AI safety needs social scientists | We’ve written a paper arguing that long-term AI safety research needs social scientists to ensure AI alignment algorithms succeed when actual humans are involved. Properly aligning advanced AI systems… | openai-blog | 2019-02-19 | Why Alignment Might Be Hard (standard) | 2026-03-09 |
| 92 | | | cais-newsletter | 2026-03-13 | not routed | 2026-03-15 |
| 92 | OpenAI’s Approach to Frontier Risk | An Update for the UK AI Safety Summit | openai-blog | 2023-10-26 | not routed | 2026-03-15 |
| 92 | Frontier risk and preparedness | To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | 2023-10-26 | not routed | 2026-03-14 |
| 90 | Last Week in AI #338 - Anthropic sues Trump, xAI starting over, Iran AI Fakes | Anthropic sues Trump administration in AI dispute with Pentagon, ‘Not built right the first time’ — Musk’s xAI is starting over again, again, Cascade of A.I. Fakes About War Wi… | last-week-in-ai | 2026-03-16 | not routed | 2026-03-17 |
| 90 | | | cais-newsletter | 2026-03-13 | not routed | 2026-03-17 |
| 90 | Let 2026 Be the Year the World Comes Together for AI Safety | *Nature* (Published December 29, 2025) An editorial from *Nature* calling for global coordination on AI safety governance. The authorities in China are taking AI regulation extremely seriously, as ar… | aisafety-news-search | 2026-03-17 | not routed | 2026-03-17 |
| 90 | Updating the Frontier Safety Framework | Our next iteration of the FSF sets out stronger security protocols on the path to AGI | deepmind-blog | 2025-02-04 | not routed | 2026-03-17 |
| 90 | OpenAI technical goals | OpenAI’s mission is to build safe AI, and ensure AI’s benefits are as widely and evenly distributed as possible. | openai-blog | 2016-06-20 | not routed | 2026-03-17 |
| 90 | | | openai-blog | 2019-11-21 | not routed | 2026-03-17 |
| 90 | Improving verifiability in AI development | We’ve contributed to a multi-stakeholder report by 58 co-authors at 30 organizations, including the Centre for the Future of Intelligence, Mila, Schwartz Reisman Institute for Technology and Society,… | openai-blog | 2020-04-16 | not routed | 2026-03-17 |
| 90 | Our approach to alignment research | We are improving our AI systems’ ability to learn from human feedback and to assist humans at evaluating AI. Our goal is to build a sufficiently aligned AI system that can help us solve all other alig… | openai-blog | 2022-08-24 | not routed | 2026-03-17 |
| 90 | Our approach to AI safety | Ensuring that AI systems are built, deployed, and used safely is critical to our mission. | openai-blog | 2023-04-05 | not routed | 2026-03-17 |
| 90 | Frontier risk and preparedness | To support the safety of highly-capable AI systems, we are developing our approach to catastrophic risk preparedness, including building a Preparedness team and launching a challenge. | openai-blog | 2023-10-26 | not routed | 2026-03-17 |
| 90 | Weak-to-strong generalization | We present a new research direction for superalignment, together with promising initial results: can we leverage the generalization properties of deep learning to control strong models with weak super… | openai-blog | 2023-12-14 | not routed | 2026-03-17 |
| 90 | Stargate Infrastructure | OpenAI, and our strategic partners, are thrilled about our shared vision for the Infrastructure of AGI. We are energized by the challenges we face and are excited by the prospect of partnering with fi… | openai-blog | 2025-01-21 | not routed | 2026-03-17 |
Page 1 of 4 (50 of the 200 items shown above)

Configured Sources

25 sources configured (22 enabled). Edit data/auto-update/sources.yaml to change.
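
This page does not show the file's schema, but as a minimal sketch, assuming each entry carries the fields shown in the table below plus an enabled flag and a feed URL (both assumptions), a source definition in data/auto-update/sources.yaml might look like:

```yaml
# Hypothetical sources.yaml entry; key names are assumptions inferred from
# the table columns below, and the URL is a placeholder, not the real feed.
sources:
  - name: OpenAI Blog
    enabled: true                  # OFF sources would presumably set false
    type: rss                      # rss or web-search
    url: https://example.com/openai-blog-feed.xml   # placeholder
    frequency: daily               # daily or weekly
    reliability: high              # high or medium
    categories: [ai-labs, models, safety, policy]
```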

| Status | Name | Type | Frequency | Reliability | Categories | Last Fetched |
| --- | --- | --- | --- | --- | --- | --- |
| ON | OpenAI Blog | rss | daily | high | ai-labs, models, safety, policy | |
| ON | Anthropic Blog / Research | web-search | daily | high | ai-labs, safety, models, interpretability | |
| ON | Google DeepMind Blog | rss | daily | high | ai-labs, models, safety, research | |
| ON | Meta AI Blog | web-search | daily | high | ai-labs, models, open-source | |
| ON | Alignment Forum | rss | daily | high | safety, alignment, research | |
| ON | LessWrong | rss | daily | medium | safety, alignment, rationality, research | |
| ON | EA Forum | rss | daily | medium | safety, policy, governance, funding | |
| ON | AI Safety Policy News | web-search | daily | medium | policy, governance, regulation | |
| ON | AI Executive Orders & Legislation | web-search | daily | medium | policy, governance, regulation | |
| ON | ML Safety Newsletter | rss | daily | high | safety, alignment, research | |
| ON | AI Safety Newsletter (CAIS) | rss | daily | high | safety, alignment, policy, research | |
| ON | Last Week in AI | rss | daily | medium | ai-labs, models, industry, research | |
| ON | Navigating AI Risks | web-search | daily | medium | safety, governance, policy, risk | |
| ON | arXiv cs.AI (Artificial Intelligence) | rss | daily | high | research, safety, alignment, models, interpretability | |
| ON | arXiv cs.CL (Computation and Language) | rss | daily | high | research, models, interpretability, capabilities | |
| ON | arXiv cs.LG (Machine Learning) | rss | daily | high | research, models, safety, alignment | |
| ON | AI Industry News | web-search | daily | medium | compute, industry, funding | |
| ON | Jeffrey Epstein AI Researcher Connections | web-search | weekly | medium | safety, funding, history, governance | |
| ON | AI Policy in Congress | web-search | daily | medium | policy, governance, legislation | |
| ON | AI PAC & Election Spending | web-search | weekly | medium | policy, funding, governance | |
| ON | State AI Legislation | web-search | weekly | medium | policy, governance, legislation | |
| ON | Biosecurity Policy | web-search | weekly | medium | policy, governance, biosecurity | |
| OFF | Import AI Newsletter | rss | daily | high | ai-labs, models, policy, research | |
| OFF | The Gradient | rss | daily | high | research, models, safety | |
| OFF | Zvi Mowshowitz (Don't Worry About the Vase) | rss | daily | high | safety, policy, models, ai-labs, governance | |