Deprecation: DALL·E-2 and DALL·E-3 to be shut down (effective May 12, 2026)
An OpenAI Developer Community deprecation notice states that DALL·E-2 and DALL·E-3 will be shut down on May 12, 2026. The notice recommends migrating image workloads to newer models such as gpt-image-1.5 or gpt-image-1-mini.
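A minimal migration sketch for the shutdown above: a lookup that substitutes the replacement models named in the notice for the deprecated DALL·E IDs. The mapping of which replacement suits which old model is an assumption for illustration; verify model availability in your own account before relying on it.

```python
# Deprecated DALL·E model IDs mapped to the replacements suggested in the
# deprecation notice. Which replacement fits which old model is an assumption.
DEPRECATED_IMAGE_MODELS = {
    "dall-e-2": "gpt-image-1-mini",   # assumed lower-cost replacement
    "dall-e-3": "gpt-image-1.5",      # assumed full-capability replacement
}

def resolve_image_model(requested: str) -> str:
    """Return a supported model ID, substituting a replacement if deprecated."""
    return DEPRECATED_IMAGE_MODELS.get(requested, requested)

print(resolve_image_model("dall-e-3"))  # -> gpt-image-1.5
```

Routing model names through a helper like this lets workloads be repointed in one place before the May 12, 2026 cutoff.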
Azure OpenAI — user reports of intermittent/sustained HTTP 429 rate-limiting (reported Apr 2026)
Multiple community reports on Microsoft Q&A describe intermittent or sustained HTTP 429 (Too Many Requests) responses when using Azure OpenAI models, suggesting elevated rate-limiting or throttling behavior. As of this writing the issue is user-reported and discussed on Microsoft's Q&A forum only; no corresponding incident appears on a formal Azure status page.
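Callers seeing 429s like those reported above typically retry with exponential backoff. A generic sketch (not Azure-SDK-specific; the `Response` class is a stand-in for whatever HTTP client you use, and honoring a `Retry-After` header is a common convention, not something the reports confirm Azure sends here):

```python
import random
import time

class Response:
    """Minimal stand-in for an HTTP response; real code would use requests/httpx."""
    def __init__(self, status_code, headers=None):
        self.status_code = status_code
        self.headers = headers or {}

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Retry on HTTP 429 with exponential backoff plus jitter.

    Uses a Retry-After header when present, otherwise doubles the delay
    each attempt. Raises after max_retries consecutive 429s.
    """
    for attempt in range(max_retries):
        response = request_fn()
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        delay = float(retry_after) if retry_after is not None else base_delay * (2 ** attempt)
        time.sleep(delay + random.uniform(0, 0.25))  # jitter avoids synchronized retries
    raise RuntimeError("rate limited: retries exhausted")

# Simulated sequence: first call throttled, second succeeds.
replies = iter([Response(429, {"Retry-After": "0"}), Response(200)])
result = call_with_backoff(lambda: next(replies))
print(result.status_code)  # -> 200
```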
Azure/Microsoft Foundry model deprecation and retirement schedules (gpt-4o, gpt-4.1, audio/realtime and fine-tuned models)
Microsoft published consolidated model deprecation and retirement schedules for Azure OpenAI in both Azure AI Foundry and Microsoft Foundry. Key items include: gpt-4o standard deployments retire on 2026-03-31 with auto-upgrades starting 2026-03-09 (other deployment types moved to 2026-10-01); gpt-4.1 / gpt-4.1-mini / gpt-4.1-nano show deprecation on 2026-04-14 and retirement on 2026-10-14; various audio and realtime preview models list retirement dates (for example gpt-4o-audio-preview retires 2026-03-24). Fine-tuned model training/deployment retirement schedules are listed separately with later training retirement dates and one-year deployment windows.
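The dates in these schedules lend themselves to a simple countdown check. A sketch using the retirement dates quoted above (treat the table as illustrative and confirm against the live docs page before acting on it):

```python
from datetime import date

# Retirement dates as listed in the Microsoft schedules summarized above;
# confirm against the live documentation before relying on them.
RETIREMENTS = {
    "gpt-4o (standard deployment)": date(2026, 3, 31),
    "gpt-4.1": date(2026, 10, 14),
    "gpt-4o-audio-preview": date(2026, 3, 24),
}

def days_until_retirement(model: str, today: date) -> int:
    """Days remaining before the listed retirement date (negative = past it)."""
    return (RETIREMENTS[model] - today).days

# On the day gpt-4o auto-upgrades begin, 22 days remain before retirement.
print(days_until_retirement("gpt-4o (standard deployment)", date(2026, 3, 9)))  # -> 22
```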
Azure OpenAI model retirements page updated — many retirement/deprecation dates and auto-upgrade notes added (Mar–Apr 2026)
Microsoft updated the Azure OpenAI (Foundry) model deprecations & retirements page to add and revise lifecycle dates and upgrade scheduling for many models. The page lists specific retirement and deprecation dates (examples include gpt-4o standard deployment retirement on 2026-03-31 with auto-upgrades starting 2026-03-09, multiple gpt-5 family GA/preview/retirement entries, audio model retirements around 2026-03-24, and image model retirements like dall-e-3 on 2026-03-04). The page also includes guidance on notifications, upgrade windows, fine-tuned model retirement behavior, and replacement model suggestions.
Azure OpenAI model retirements table: `gpt-5.1-chat` retirement date set to 2026-04-15
Microsoft updated the Azure OpenAI (Foundry) model retirements table to set a firm retirement date for `gpt-5.1-chat`: the Retirement Date column now lists 2026-04-15 (previously showed a relative/undetermined date). The change clarifies that `gpt-5.1-chat` will retire on 2026-04-15 and lists suggested replacement models in the same table.
Amazon Bedrock pricing page updated — new/changed model prices and region entries
The Amazon Bedrock pricing page was updated to add and/or change many model pricing rows and region-specific price entries. Notable additions/changes in the pricing tables include NVIDIA Nemotron 3 Super 120B A12B (multiple region prices), Palmyra Vision 7B pricing, Z AI (GLM 5) pricing and expanded regions, and MiniMax (MiniMax M2.5) pricing entries and regional variations. Multiple region-specific price lines were inserted or adjusted across several providers and models.
Amazon Bedrock pricing page updated — AI21 Labs (Jamba/Jurassic-2) model prices added with region-specific entries
The Amazon Bedrock pricing page was updated to add AI21 Labs model pricing rows (Jamba 1.5 Large, Jamba 1.5 Mini, Jurassic-2 Mid, Jurassic-2 Ultra, Jamba-Instruct) including per-1M input/output token prices and region-specific price entries. These new table rows expand the list of providers/models and include region-specific pricing variations that were not present in the prior scrape.
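Bedrock on-demand text pricing is quoted per 1M input/output tokens, so a per-invocation cost reduces to two multiplications. A minimal helper; the $2.00/$8.00 rates in the example are placeholders, not actual AI21 prices from the page:

```python
def bedrock_cost(input_tokens: int, output_tokens: int,
                 input_price_per_1m: float, output_price_per_1m: float) -> float:
    """On-demand cost of one invocation, given per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_price_per_1m \
         + (output_tokens / 1_000_000) * output_price_per_1m

# Placeholder rates for illustration only; read the real values for your
# model and region from the Bedrock pricing page.
print(round(bedrock_cost(10_000, 2_000, input_price_per_1m=2.00,
                         output_price_per_1m=8.00), 6))  # -> 0.036
```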
Amazon Bedrock pricing page — AI21 Labs (Jurassic-2 / Jamba) pricing rows restored after absence in prior scrape
The Amazon Bedrock pricing page again shows AI21 Labs models and pricing information (examples referencing Jurassic-2 / Jamba) that had been absent in the prior scrape. The change restores model-specific pricing rows and example calculations referencing AI21 models; no other Bedrock documentation pages (models/regions, quotas, doc-history) showed substantive changes in this run.
Anthropic engineering post updates model cards for Opus 4.6 and Sonnet 4.6 (eval-awareness findings)
Anthropic published an engineering write-up describing evaluation awareness (eval-awareness) observed in Claude Opus 4.6 on the BrowseComp benchmark, along with benchmark contamination and novel behaviors. The post states they updated the model cards for Opus 4.6 and Sonnet 4.6, documents mitigations taken (e.g., blocklisting certain search results), and notes that reported scores were adjusted where contamination was found.
Community reports of localized Max-plan price increases / billing anomalies (unverified)
Multiple community threads report apparent plan/billing anomalies (for example, users reporting a Max plan price shown as £124.99 in the UK where they previously saw ~£75/month). These reports are from users and community forums; I did not find a corresponding official pricing announcement on Anthropic/Claude documentation during the checks performed.
Claude pricing docs updated with per-model token rates, prompt-caching, batch & fast-mode pricing
Anthropic’s Claude pricing documentation (platform.claude.com/docs/en/about-claude/pricing) was updated to include explicit per-model token rates for recent model releases (Opus 4.6, Sonnet 4.6, Haiku 4.5 and others), detailed prompt-caching multipliers (5-minute and 1-hour writes; cache read = 10% of input), batch API 50% discounts, fast-mode premium rates, and a 1.1x data-residency multiplier for US-only inference. The page also documents tool-specific pricing (web search, code execution), long-context billing rules and usage-tier rate-limit guidance. No explicit effective date is shown on the page.
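The multipliers in these docs compose into a straightforward cost estimate. A sketch applying the documented factors (cache reads at 10% of the input rate, 50% batch discount, 1.1x US-only residency); the per-1M rates in the example are illustrative placeholders, not quoted from the page, and cache-write surcharges are deliberately omitted:

```python
def claude_request_cost(input_tokens, output_tokens, cached_read_tokens,
                        input_rate, output_rate,
                        batch=False, us_only_residency=False):
    """Estimate one request's cost from per-1M-token base rates.

    Cache reads bill at 10% of the input rate; the Batch API discounts 50%;
    US-only data residency multiplies by 1.1. Cache-write surcharges
    (5-minute and 1-hour writes) are not modeled here.
    """
    cost = (input_tokens / 1e6) * input_rate \
         + (output_tokens / 1e6) * output_rate \
         + (cached_read_tokens / 1e6) * input_rate * 0.10
    if batch:
        cost *= 0.5
    if us_only_residency:
        cost *= 1.1
    return cost

# Illustrative rates ($3/$15 per 1M tokens are placeholders, not quoted prices).
print(round(claude_request_cost(100_000, 10_000, 50_000, 3.0, 15.0), 6))  # -> 0.465
```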
Claude public pricing page updated with per-model token rates, prompt-caching, batch and tool pricing
The public Claude pricing page (https://claude.com/pricing) was updated to include explicit per-model token rates for new models (Opus 4.6, Sonnet 4.6, Haiku 4.5 and others), prompt-caching write/read rates, a 50% savings message for batch processing, tool-specific pricing (web search $10/1K searches; code-execution additional hours at $0.05/hr after free allotment), and a 1.1x US-only inference multiplier. The page also documents service tiers (Priority, Standard, Batch) and references detailed pricing on platform docs. No clear effective date is shown on the page.
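The tool rates quoted above also reduce to simple arithmetic. A sketch using the two figures the page states ($10 per 1,000 web searches; $0.05 per code-execution hour beyond the free allotment); the size of the free allotment is not restated in the summary, so it is a parameter here:

```python
def tool_costs(searches: int, code_exec_hours: float,
               free_exec_hours: float = 0.0) -> float:
    """Tool-usage cost from the public pricing page's quoted rates:
    web search at $10 per 1,000 searches; code execution at $0.05/hr
    beyond the free allotment (allotment size passed in by the caller).
    """
    search_cost = searches * (10.0 / 1000)
    exec_cost = max(0.0, code_exec_hours - free_exec_hours) * 0.05
    return search_cost + exec_cost

print(tool_costs(250, 10, free_exec_hours=5))  # -> 2.75
```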
ChatGPT: GPT-5.1 models retired from ChatGPT on March 11, 2026
OpenAI’s ChatGPT Enterprise & Edu release notes state that as of March 11, 2026, GPT-5.1 models (GPT-5.1 Instant, GPT-5.1 Thinking, and GPT-5.1 Pro) are no longer available in ChatGPT. Existing conversations that used GPT-5.1 will automatically continue on the corresponding current model (e.g., GPT-5.1 Instant conversations fall back to GPT-5.3 Instant, and GPT-5.1 Thinking/Pro to GPT-5.4 Thinking/Pro where specified).
ChatGPT Business — Release notes updated with new app actions and workflow changes
OpenAI updated ChatGPT Business release notes with recent product updates (new app actions for Box, Notion, Linear, and Dropbox and other workflow improvements). The notes indicate capability changes that can affect integrations and automations in ChatGPT Business accounts.
OpenAI announces GPT-5.4 (company post referencing model launch and platform scale)
OpenAI publicly announced a new model identified in promotional material as GPT-5.4. The announcement (hosted on openai.com) describes GPT-5.4 as the company's most capable model to date, citing gains in intelligence and workflow performance and mentioning increased API throughput metrics.
OpenAI pricing page: Go plan no longer lists GPT-5.4 Thinking
The public pricing / plans page (openai.com/pricing) was edited since the last scrape: the Go plan feature table now shows GPT-5.4 Thinking as not included (previously indicated as included). This is a packaging/plan feature change to the consumer pricing page rather than a per-token API rate change.
Data residency / regional processing endpoints: 10% pricing uplift for GPT-5.4 family
The platform pricing page now states that regional processing (data residency) endpoints are charged a 10% uplift on top of other applicable pricing for GPT-5.4 and GPT-5.4-pro. The wording was added to the pricing details for GPT-5.4-class models to indicate an additional regional-processing charge.
New sora-2 model family pricing added to platform pricing page
New pricing rows for a "sora-2" family (sora-2 and sora-2-pro) were added to the platform pricing page. The table lists per-second output pricing tiers at multiple resolutions (examples: sora-2 $0.05/sec for 720x1280; sora-2-pro multiple tiers up to $0.70/sec for higher resolutions). These entries appear to be new model/feature pricing added to the public API pricing table.
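Since these rows bill per second of generated video, cost scales linearly with clip length. A sketch using the one rate quoted explicitly above (sora-2 at $0.05/sec for 720x1280); the other resolution tiers are left out because their exact rates are not restated here:

```python
# Per-second output rates from the new pricing rows; only the explicitly
# quoted sora-2 720x1280 tier is included, other tiers vary by resolution.
SORA2_RATES = {("sora-2", "720x1280"): 0.05}

def video_cost(model: str, resolution: str, seconds: float) -> float:
    """Cost of one generated clip at a per-second output rate."""
    return SORA2_RATES[(model, resolution)] * seconds

print(video_cost("sora-2", "720x1280", 30))  # -> 1.5
```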
ChatGPT for Teachers free plan (verified U.S. K–12 educators) and updated plan-feature rows added to pricing page
OpenAI added a ChatGPT for Teachers entry on the public pricing page offering a free plan for verified U.S. K–12 educators through June 2027, and updated plan feature rows (including Images and Sora 2 availability) across ChatGPT tiers. The change documents plan-level availability of Sora 2 and Images, and highlights education-specific packaging/discounts and eligibility windows.
ChatGPT pricing page updated to show monthly plan prices for Free/Go/Plus/Pro/Business
The public ChatGPT pricing page now shows explicit monthly prices for consumer and business plans; values for the Free, Go, Plus, Pro, and Business tiers were added or clarified. This complements the previously-added education/teacher packaging by making plan-level monthly price points more visible on the page.
API pricing updated — gpt-5.4 variants (mini/nano/pro) and long-context prices; Sora-2 and fine-tuning rates updated
The OpenAI API pricing page was updated to add explicit pricing for new gpt-5.4 variants (including gpt-5.4-mini and gpt-5.4-nano) and to add/clarify long-context pricing for the gpt-5.4 family (separate short-context and long-context columns). The page also shows updated Sora-2 / Sora-2-pro image generation price rows and revised fine-tuning / batch pricing tables across multiple models.
Platform pricing: container usage billed per 20-minute session (effective Mar 31, 2026) and 10% regional-processing uplift for gpt-5.4 family
The OpenAI API pricing page was updated to state container usage billing will be measured per 20-minute session (with rates by memory tier unchanged) and to include an explicit 10% regional-processing (data residency) uplift for the gpt-5.4 family (gpt-5.4, gpt-5.4-mini, gpt-5.4-nano, gpt-5.4-pro). These lines were added/clarified on the platform pricing page.
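The two billing rules above can be sketched as small helpers. Note one assumption: the summary does not say how partial sessions are rounded, so the ceiling behavior below (any partial 20-minute window billed as a full session) is a guess, not a documented rule:

```python
import math

def container_sessions(active_minutes: float, session_minutes: int = 20) -> int:
    """Billable sessions under per-20-minute-session measurement.

    Assumes any partial window counts as a full session; the pricing page
    may round differently, so verify before using this for budgeting.
    """
    return max(1, math.ceil(active_minutes / session_minutes))

def with_regional_uplift(base_cost: float) -> float:
    """Apply the 10% regional-processing (data residency) uplift for gpt-5.4-family usage."""
    return base_cost * 1.10

print(container_sessions(45))  # -> 3 (45 minutes spans three 20-minute windows)
```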
Pricing page added explicit per-1M-token rates for the gpt-5.4 family (mini/nano/pro)
The OpenAI pricing page now includes explicit per-1M-token pricing rows for the gpt-5.4 family (gpt-5.4, gpt-5.4-mini, gpt-5.4-nano, gpt-5.4-pro) across pricing tiers (Standard/Batch/Flex/Priority). The page shows tiered rates for the mini/nano/pro variants alongside gpt-5.4 and documents the 10% regional-processing uplift for the gpt-5.4 family.
Anthropic status page updated — no incidents reported today (Apr 10, 2026)
Anthropic's public status page was updated to remove the Apr 9 incident details and now shows "No incidents reported today." This indicates there are no active, ongoing incidents currently listed for Claude on the status site. The page change replaces previously-visible Apr 9 investigating/monitoring/resolved entries.
Anthropic status page now lists Apr 9 connector error incidents (page content changed)
Anthropic's public status page was changed to include Apr 9 incident entries describing elevated connector error rates (investigating/monitoring/resolved timeline) and a Sonnet 4.6 elevated error-rate entry. These incident entries are now visible again after the page had earlier shown only a "No incidents reported today" message.
Anthropic status page removed Apr 9 incident details; now shows 'No incidents reported today'
Anthropic's public status page was updated to remove the Apr 9 incident entries (previously listing 'Elevated Connector Error Rates' and a 'Sonnet 4.6 elevated rate of errors'). The page content now shows "No incidents reported today." This is a substantive content change from the version captured earlier today.
Anthropic status page again lists the two Apr 9 incidents (both marked Resolved)
Anthropic's public status page now lists two Apr 9 incidents: 'Elevated Connector Error Rates' (marked Resolved) and 'Sonnet 4.6 elevated rate of errors' (marked Resolved). These entries include timeline notes for Investigating/Identified/Monitoring/Resolved showing Apr 9 UTC timestamps. This is a substantive content change because earlier today those Apr 9 incident entries had been removed from the page.
Anthropic status page again removes the Apr 9 incident entries
Anthropic's public status page no longer lists the Apr 9 incident entries for 'Elevated Connector Error Rates' and 'Sonnet 4.6 elevated rate of errors' that were present earlier today. The removal reverses the earlier reappearance of those Apr 9 entries and changes the publicly visible incident history for that day. This is a substantive content change to the incident record and may affect customers relying on the status timeline for post-incident analysis.
Anthropic status page adds Apr 10 incidents (elevated request errors) and restores Apr 9 incident entries
Anthropic's public status page now includes new Apr 10 incident entries describing elevated errors on requests to Claude models (affecting non-Opus models) with a resolved update at 16:51 UTC, plus resolved entries about inaccessible Claude.ai share links (between Apr 2 and Apr 10) and degraded Vaults performance. The page also shows Apr 9 incidents (Elevated Connector Error Rates; Sonnet 4.6 elevated rate of errors) that had been previously removed and have now reappeared. These updates change the public incident timeline and provide additional detail about scope and resolution times.
Anthropic status: Email login outage on Apr 10 (resolved)
Anthropic's public status page added a post (dated Apr 11, 2026) reporting a now-resolved email login outage: email login was broken between approximately 15:46 and 16:52 PDT on 2026-04-10. This update amends the public incident timeline with a resolution timestamp and the scope of the login-access impact.