πŸ”­ Get these alerts in real-time. Subscribe to Change Radar β€” Free during beta.
πŸ” Google πŸš€ Model Release

Gemini 3.1 Flash Live preview added (audio-to-audio low-latency model)

Google added a Gemini 3.1 Flash Live preview entry describing a low-latency audio-to-audio model (gemini-3.1-flash-live-preview) and published associated input/output pricing for text, audio, and image/video usage. The doc includes a short feature description and links to try the model in Google AI Studio.

πŸ“‘ Amazon (Bedrock) πŸš€ Model Release

Amazon Bedrock model-region support table updated (Mar 25, 2026)

AWS updated the Amazon Bedrock 'Model support by AWS Region' documentation on 2026-03-25, refreshing the table that maps specific foundation models to AWS Regions and inference-profile availability. The update includes many provider/model entries (Amazon Nova variants, Titan embeddings, Anthropic Claude versions, Mistral, Meta Llama variants, and others) and clarifies cross-region inference-profile notes.

πŸ” Google model_availability

Community reports: Gemini Pro access restricted to paid subscriptions starting Mar 25, 2026 (unconfirmed)

Community threads and third-party posts (a GitHub discussion, Reddit, and some news/aggregator pages) report a service update stating that, beginning March 25, 2026, Gemini Pro models will be limited to paid subscriptions, with free-tier users restricted to Gemini Flash models. These reports surfaced in developer forums and a few aggregator articles, but as of this check no authoritative Google announcement (docs or blog) confirms the change.

πŸ€– OpenAI Integration / SDK Change

openai-agents-python v0.13.1 (2026-03-25) β€” any-LLM adapter, realtime/MCP/tool-call handling and stability fixes

openai-agents-python published v0.13.1 (2026-03-25). This release adds an any-LLM adapter (Any-LLM extension), improves MCP/tool-call continuity (reasoning-content replay opt-in), preserves static MCP metadata, hardens realtime sequencing (wait for response.done before follow-up response.create), and fixes handling of cancelled single function tools so they are treated as tool failures rather than silent cancellations.
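
The realtime sequencing fix above (wait for response.done before a follow-up response.create) can be sketched as a small state machine. This is an illustrative model of the ordering rule, not the SDK's internal code; the event names follow the OpenAI Realtime API:

```python
# Minimal sketch of the sequencing rule: a follow-up "response.create"
# must not be sent while a previous response is still in flight.
class RealtimeSequencer:
    def __init__(self):
        self.in_flight = False  # a response.create was sent, no response.done yet
        self.pending = []       # follow-ups deferred until the turn finishes
        self.sent = []          # events actually emitted, in order

    def request_response(self):
        """Ask for a model response; defer if a turn is still running."""
        if self.in_flight:
            self.pending.append("response.create")
        else:
            self._send("response.create")

    def on_event(self, event_type):
        """Feed server events; response.done releases one deferred follow-up."""
        if event_type == "response.done":
            self.in_flight = False
            if self.pending:
                self._send(self.pending.pop(0))

    def _send(self, event_type):
        self.sent.append(event_type)
        self.in_flight = True
```

In this toy model, a second request arriving mid-turn is queued and only emitted once the server signals response.done.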

πŸ€– OpenAI Integration / SDK Change

openai-agents-js v0.8.1 (2026-03-25) β€” realtime/streaming and MCP stability fixes (non-breaking)

openai-agents-js published v0.8.1 (2026-03-25). This minor/patch release fixes several realtime/streaming and MCP-related stability issues (hide ignored handoffs without breaking managed continuations, stabilize streamable HTTP reconnect retry tests, hide streamed final output after guardrail failures, defer response.create until prior turn finishes, and omit empty computer safety checks on replay). No breaking changes were announced.

🧠 Anthropic Integration / SDK Change

Support release notes updated β€” Computer-use preview (Mar 23) and Interactive apps (Mar 25, 2026)

Anthropic updated the Claude support release notes with new feature entries: March 23 (computer-use research preview in Cowork and Claude Code plus Dispatch improvements) and March 25 (interactive apps support in Claude mobile). The release-notes page was updated to document these capabilities and link to detailed articles.

🧠 Anthropic Incident / Outage

Elevated errors on claude.ai (Mar 25, 2026)

Anthropic reported elevated errors impacting claude.ai beginning March 25, 2026; community reports and status-aggregator entries surfaced the incident as well. Engineering migrated affected workloads to healthy infrastructure and restored normal service by March 27. The incident affected availability and caused elevated error rates for some Claude models and surfaces.

πŸ€– OpenAI Integration / SDK Change

Large pastes (>5k chars) converted to attachments in ChatGPT β€” Mar 25, 2026

ChatGPT now converts large pasted content (>5,000 characters) into an attachment rather than inserting it directly into the text composer. This behavior is rolling out to Plus, Pro, and Business users and preserves composer usability and context-window capacity; users can move attachment content back into the message with a β€˜Show in text field’ action.
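
As a rough sketch, the routing rule amounts to a single length check (the function name and return shape here are hypothetical, used only to illustrate the documented 5,000-character threshold):

```python
# Sketch of the paste-handling rule described above: pastes over the
# 5,000-character threshold become attachments instead of inline text.
PASTE_ATTACHMENT_THRESHOLD = 5_000

def route_paste(text: str) -> dict:
    """Decide whether pasted text stays in the composer or becomes an attachment."""
    if len(text) > PASTE_ATTACHMENT_THRESHOLD:
        return {"kind": "attachment", "chars": len(text)}
    return {"kind": "inline", "chars": len(text)}
```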

πŸ€– OpenAI πŸš€ Model Release

GPT-5.3 Instant rolling out; GPT-5.4 Thinking/Pro availability and usage limits documented β€” Mar 25, 2026

OpenAI updated the Help Center article β€œGPT-5.3 and GPT-5.4 in ChatGPT” to announce that GPT-5.3 Instant is rolling out to all ChatGPT users and that GPT-5.4 Thinking (and GPT-5.4 Pro) are available per tier. The article specifies per-tier usage limits (e.g., Free: 10 GPT-5.3 messages per 5 hours; Plus/Go: 160 per 3 hours; weekly limits for Thinking), context-window sizes for Instant and Thinking, and notes legacy model retirements. It also documents automatic switching behavior, tool support, and guardrails/limits for Business/Pro accounts.
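
The per-tier message caps quoted above behave like rolling-window rate limits. A minimal sketch of that accounting, with only the Free-tier numbers (10 GPT-5.3 messages per 5 hours) taken from the article, the limiter itself being illustrative:

```python
from collections import deque

class RollingWindowLimit:
    """Allow at most `limit` messages per `window_seconds` rolling window."""
    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.stamps = deque()  # timestamps of accepted messages

    def allow(self, now: float) -> bool:
        # Drop timestamps that have aged out of the window.
        while self.stamps and now - self.stamps[0] >= self.window:
            self.stamps.popleft()
        if len(self.stamps) < self.limit:
            self.stamps.append(now)
            return True
        return False

# Free tier per the article: 10 GPT-5.3 messages per 5 hours.
free_tier = RollingWindowLimit(limit=10, window_seconds=5 * 3600)
```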

πŸ€– OpenAI πŸš€ Model Release

ChatGPT adds richer shopping features (product cards, image matching, side-by-side comparisons) β€” Mar 24, 2026

OpenAI updated the ChatGPT release notes on March 24, 2026 to announce improved shopping features. ChatGPT now returns more visually rich product results, supports browsing and refining product results in chat, accepts images to find similar items, and offers side-by-side product comparisons with details such as price, reviews, and features. The announcement notes improved product data coverage, freshness, and speed and states these improvements rely on the Agentic Commerce Protocol (ACP) and are rolling out to users that week.

πŸ“‘ Vercel / Chat SDK Integration / SDK Change

Chat SDK adds configurable concurrent message handling

The Chat SDK gained a new concurrency control option for message handling (the Chat constructor now accepts a concurrency config with strategies like queue, drop, debounce, and concurrent). This changes the runtime behavior when a new message arrives while a previous message is still being processed.
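
The queue and drop strategies can be sketched with a toy synchronous model (the real Chat SDK option shape and semantics may differ; this only illustrates the documented behaviors):

```python
class MessagePipeline:
    """Toy model of per-message concurrency strategies: 'queue' defers new
    messages until the current one finishes; 'drop' discards them."""
    def __init__(self, strategy: str):
        assert strategy in {"queue", "drop"}
        self.strategy = strategy
        self.busy = False
        self.queued = []
        self.processed = []

    def submit(self, msg: str):
        if self.busy:
            if self.strategy == "queue":
                self.queued.append(msg)
            # 'drop': silently discard while busy
            return
        self.busy = True
        self.processed.append(msg)

    def finish(self):
        """Signal that the in-flight message completed."""
        self.busy = False
        if self.queued:
            self.submit(self.queued.pop(0))
```

With "queue", a message arriving mid-run is processed after the current one finishes; with "drop", it is discarded.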

🦜 LangChain Integration / SDK Change

LangChain core 1.2.22 β€” prompt.save/load validation and method deprecations

LangChain released langchain-core==1.2.22 (published 2026-03-24). The release notes indicate validation added for paths in prompt.save and load_prompt and that some methods were deprecated. The change appears to be aimed at preventing invalid path usage and cleaning up older APIs.

🌬️ Mistral AI πŸš€ Model Release

Mistral releases Voxtral TTS β€” open-weights text-to-speech model

Mistral published Voxtral TTS, an open-weights text-to-speech model that supports multilingual generation, fast inference, and zero-shot voice cloning from a small audio sample. The release is documented on Mistral's news page and model artifacts are available in public model hubs and docs.

πŸ€– OpenAI Integration / SDK Change

openai-agents-js v0.8.0 (2026-03-23) β€” realtime default model update and MCP/runtime stability fixes

The openai-agents-js repository released v0.8.0 on 2026-03-23. The release does not introduce breaking changes but upgrades the default realtime model to gpt-realtime-1.5 and includes MCP/runtime stability fixes (improved recovery of segmented assistant output, resource wrappers/streamable session ids), documentation updates (tracing/streaming guidance), and tooling updates.

πŸ“‘ Amazon (Bedrock) Integration / SDK Change

Nova Forge SDK announced to customize Nova models (Mar 2026)

AWS announced the Nova Forge SDK (called out in the Mar 23, 2026 weekly roundup) as a new SDK to customize Amazon Nova models for enterprise AI, enabling streamlined fine-tuning/customization and deployment of Nova models directly within Amazon Bedrock.

πŸ” Google policy_terms_change

Pricing pages and Terms of Service updated (March 23, 2026)

Google updated its pricing/terms pages and noted an update to the Terms of Service (March 23, 2026). The pricing page and related docs now reference the new Prepay/Postpay options, changes to tier qualification, and billing-related terms (credit expiry, refund behavior when switching to Postpay, and eligibility notes for welcome credits).

πŸ” Google πŸ’° Pricing

Gemini API/AI Studio introduces Prepay and Postpay billing; Prepay credits, expiry, and service suspension rules

Google rolled out new Prepay and Postpay billing plans for Gemini API / AI Studio and documented billing behavior changes (March 23, 2026). Key details: Prepay accounts must purchase credits (minimum $10); Prepay credits are consumed for Gemini API usage only and expire after 12 months; they are non-refundable except when switching to Postpay; and when a Prepay account's credit balance hits $0, all Gemini API services across projects linked to that billing account stop. The pages also describe billing tiers and billing-account caps, and note that enforcement of tier spend caps begins April 1, 2026.
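
The documented Prepay rules (minimum $10 purchase, 12-month credit expiry, service stopping at a $0 balance) can be captured in a small illustrative model. This is a sketch of the stated policy, not Google's billing implementation:

```python
from datetime import datetime, timedelta

# Sketch of the documented Prepay rules: minimum $10 purchase, credits
# expire 12 months after purchase, and API access stops at a $0 balance.
MIN_PURCHASE_USD = 10
CREDIT_LIFETIME = timedelta(days=365)

class PrepayAccount:
    def __init__(self):
        self.lots = []  # (expiry, remaining_usd) per purchase, oldest first

    def purchase(self, usd: float, when: datetime):
        if usd < MIN_PURCHASE_USD:
            raise ValueError("Prepay purchases have a $10 minimum")
        self.lots.append((when + CREDIT_LIFETIME, usd))

    def balance(self, now: datetime) -> float:
        # Expired lots no longer count toward the usable balance.
        return sum(amt for exp, amt in self.lots if exp > now)

    def service_active(self, now: datetime) -> bool:
        # At $0, all Gemini API services on the billing account stop.
        return self.balance(now) > 0
```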

🧠 Anthropic Incident / Outage

Elevated errors on Claude.ai (retroactive incident, Mar 23, 2026)

Anthropic published a retroactive incident report for elevated errors on Claude.ai on 2026-03-23 (resolved); the timeline shows a brief period of elevated error rates before resolution. The status page indicates the Claude API was not affected during that incident window.

πŸ“‘ Amazon (Bedrock) Integration / SDK Change

Amazon Bedrock monitoring/metrics docs updated (Mar 22, 2026)

The Amazon Bedrock 'Monitoring the performance of Amazon Bedrock' documentation was updated (Mar 22, 2026) to enumerate CloudWatch metrics and monitoring guidance for Bedrock (InvocationThrottles, InputTokenCount, OutputTokenCount, TimeToFirstToken, and EstimatedTPMQuotaUsage, among others) and to recommend using CloudWatch, CloudTrail, and EventBridge for runtime monitoring and alarms.
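
As an example of acting on those metrics, the parameters below are shaped for boto3's cloudwatch.put_metric_alarm call. The AWS/Bedrock namespace and InvocationThrottles metric come from the doc; the alarm name, threshold, and period are made-up examples:

```python
# Alarm parameters for Bedrock throttling, shaped to be passed as
# cloudwatch.put_metric_alarm(**alarm_params) with a boto3 client.
alarm_params = {
    "AlarmName": "bedrock-invocation-throttles",  # example name
    "Namespace": "AWS/Bedrock",
    "MetricName": "InvocationThrottles",
    "Statistic": "Sum",
    "Period": 300,                                # 5-minute buckets (example)
    "EvaluationPeriods": 1,
    "Threshold": 10,                              # example threshold
    "ComparisonOperator": "GreaterThanThreshold",
}
```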

πŸ“‘ Amazon (Bedrock) Quota / Rate Limit Change

Amazon Bedrock quota increase instructions updated (Mar 22, 2026)

The Amazon Bedrock 'Request an increase for Amazon Bedrock quotas' documentation was updated (Mar 22, 2026) to clarify which quotas can be increased, the process (via the Service Quotas workflow), and that requesting an increase for the 'Cross-Region InvokeModel tokens per minute for ${model}' quota is the way to request bundled increases for related quotas. The page also notes priority is given to customers consuming their existing quota.

πŸ€– OpenAI Quota / Rate Limit Change

Foundry Quotas & Limits updated (2026-03-21): introduces Quota Tiers, auto-upgrades, and opt-out API

Microsoft updated the Azure OpenAI (Foundry) Quotas and Limits documentation on 2026-03-21 to introduce Quota Tiers. The page describes seven tiers (Free, 1–6), automatic tier upgrades based on consumption and customer relationship, an opt-out preview flag (NoAutoUpgrade) configurable via a PATCH management API, and preserves previously approved quota increases. It also includes detailed RPM/TPM tables per model and per-tier, guidance for requesting increases, and regional capacity/capacity-api guidance.
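
The opt-out flow can be sketched by constructing a PATCH request. Only the NoAutoUpgrade flag and the PATCH mechanism come from the doc; the URL placeholders and payload shape below are hypothetical, not the real management-API contract:

```python
import json
import urllib.request

# Hypothetical sketch of opting out of automatic quota-tier upgrades.
# "NoAutoUpgrade" and the PATCH mechanism come from the Foundry doc;
# the URL and payload shape here are placeholders.
subscription = "<subscription-id>"        # placeholder
resource = "<account-resource-path>"      # placeholder
url = f"https://management.azure.com/{subscription}/{resource}?api-version=<ver>"

body = json.dumps({"properties": {"NoAutoUpgrade": True}}).encode()
req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",
    headers={"Content-Type": "application/json"},
)
# The request is constructed but intentionally not sent here.
```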

πŸ“‘ LangGraph Integration / SDK Change

LangGraph CLI release: langgraph-cli 0.4.19 β€” new deploy revisions list and dependency bumps

LangGraph published a new CLI release, langgraph-cli==0.4.19 (release entry dated Mar 20, 22:12). Changes include a new deploy revisions list command and several dependency bumps across CLI packages and examples.

πŸ€– OpenAI Model Deprecation

Azure OpenAI model retirements page updated β€” added retirement dates and auto-upgrade guidance (Mar–Apr 2026)

Microsoft updated the Azure OpenAI (Foundry) model retirements documentation to add and revise lifecycle dates for many models and to document automatic upgrade behavior for some deployment types. The page includes specific retirement dates (examples in March–April 2026 for several audio, image, and GPT-family models), guidance on notifications and upgrade windows, fine-tuned model retirement behavior, and suggested replacement models.

🧠 Anthropic Incident / Outage

Elevated errors observed on Claude Opus 4.6 (2026-03-19)

Community and automated status posts reported elevated errors on Claude Opus 4.6 on 2026-03-19, indicating transient reliability issues affecting Opus 4.6 responses during that window.

πŸ€– OpenAI Model Deprecation

Legacy deep research mode in ChatGPT scheduled for removal (Mar 26, 2026)

OpenAI added a March 19, 2026 release-notes entry announcing that the legacy "deep research" mode in ChatGPT will be removed on March 26, 2026. The announcement states this affects only the legacy mode; the current deep research experience will remain available and historical conversations/results will remain accessible. A link to the Deep research help article is provided for migration/reference.

πŸ” Google integration_sdk_change

Gemini API changelog: OpenAI-compat support for gemini-3-pro-image-preview and /v1/videos (veo-3.1)

Google updated the Gemini API changelog (Mar 19, 2026) to add OpenAI compatibility support for the gemini-3-pro-image-preview model and to support the /v1/videos endpoint with the veo-3.1-generate-preview model. This is an API/integration change enabling OpenAI-compatible endpoints to access these preview capabilities.
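
A request against the OpenAI-compatible videos endpoint might be constructed as below. The base URL follows Google's OpenAI-compatibility docs; the body fields other than model are illustrative, and the request is built but not sent:

```python
import json
import urllib.request

# Sketch of calling the OpenAI-compatible videos endpoint with the
# veo-3.1 preview model. Fields besides "model" are illustrative.
BASE = "https://generativelanguage.googleapis.com/v1beta/openai"
url = f"{BASE}/videos"

body = json.dumps({
    "model": "veo-3.1-generate-preview",
    "prompt": "A timelapse of clouds over mountains",  # example prompt
}).encode()
req = urllib.request.Request(
    url,
    data=body,
    method="POST",
    headers={
        "Authorization": "Bearer <GEMINI_API_KEY>",  # placeholder key
        "Content-Type": "application/json",
    },
)
```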

🦜 LangChain Integration / SDK Change

LangChain release: langchain 1.2.13 β€” LangSmith metadata and agents Runtime export

LangChain published langchain==1.2.13 which includes changes that add LangSmith integration metadata to create_agent and init_chat_model, and export Runtime from agents.middleware. There are also CI and dependency bumps and small fixes to integrations (e.g., OpenAI Responses API typing). No explicit deprecation or breaking-change notice appears in the release notes.

🧠 Anthropic Incident / Outage

Elevated errors on Claude Opus 4.6 (Mar 19, 2026)

Anthropic posted a status incident on Mar 19, 2026 reporting elevated error rates affecting Claude Opus 4.6 across web and API surfaces (claude.ai, api.anthropic.com, platform.claude.com, and Claude Code). The status timeline shows Investigating at 15:59 UTC and Resolved at 16:14 UTC after fixes were implemented and monitoring continued.

πŸ€– OpenAI Model Deprecation

Legacy deep research mode scheduled for removal (deprecation notice posted Mar 19, 2026; removal on Mar 26, 2026)

OpenAI posted a deprecation notice for the legacy deep research mode on March 19, 2026, stating it will be removed on March 26, 2026. This affects only the legacy deep research mode; the current deep research experience remains available and historical conversations/results will remain accessible.