The Agents SDK now exports an MCPConnectionState enum that provides better type safety and consistency when working with MCP server connection states.
Connection states are now available as enum constants instead of string literals:
```ts
import { MCPConnectionState } from "agents";

// Check connection states using enum constants
if (server.state === MCPConnectionState.READY) {
  console.log("Server is ready to use");
}
```

Available states:

- `MCPConnectionState.AUTHENTICATING` - Waiting for OAuth authorization to complete
- `MCPConnectionState.CONNECTING` - Establishing transport connection to MCP server
- `MCPConnectionState.DISCOVERING` - Discovering server capabilities (tools, resources, prompts)
- `MCPConnectionState.READY` - Fully connected and ready to use
- `MCPConnectionState.FAILED` - Connection failed at some point

If you were using string literals to check connection states, update your code:
if (server.state === "ready") { // Do something}import { MCPConnectionState } from "agents";
if (server.state === MCPConnectionState.READY) { // Do something}MCP client connections now use fail-fast behavior with Promise.all instead of Promise.allSettled, providing quicker feedback when server capability discovery fails. The SDK also now only attempts to register capabilities that the server advertises, improving reliability and reducing unnecessary errors.
The connection state machine has been refined:
- `CONNECTING` → `DISCOVERING` → `READY`
- `AUTHENTICATING` → (callback) → `CONNECTING` → `DISCOVERING` → `READY`
- `FAILED` on error

The state is set to `FAILED` before errors are thrown during capability discovery, ensuring proper state management. For more information, refer to the McpClient API reference.
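As a quick illustration of these states, here is a minimal sketch that maps each enum value to a status message; the `server` object is assumed to expose the same `state` field used in the examples above.

```ts
import { MCPConnectionState } from "agents";

// Minimal sketch: map each connection state to a human-readable status.
// Assumes `server` exposes the `state` field shown in the examples above.
function describeState(server: { state: MCPConnectionState }): string {
  switch (server.state) {
    case MCPConnectionState.AUTHENTICATING:
      return "Waiting for OAuth authorization to complete";
    case MCPConnectionState.CONNECTING:
      return "Establishing transport connection to the MCP server";
    case MCPConnectionState.DISCOVERING:
      return "Discovering tools, resources, and prompts";
    case MCPConnectionState.READY:
      return "Ready to use";
    case MCPConnectionState.FAILED:
      return "Connection failed";
    default:
      return "Unknown state";
  }
}
```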
Containers and Sandboxes pricing for CPU time is now based on active usage only, instead of provisioned resources.
This means that you now pay less for Containers and Sandboxes.
Imagine running the standard-2 instance type for one hour, which can use up to 1 vCPU,
but on average you use only 20% of your CPU capacity.
CPU-time is priced at $0.00002 per vCPU-second.
Previously, you would be charged for the CPU allocated to the instance multiplied by the time it was active, in this case 1 hour.
CPU cost would have been: $0.072 — 1 vCPU * 3600 seconds * $0.00002
Now, since you are only using 20% of your CPU capacity, your CPU cost is cut to 20% of the previous amount.
CPU cost is now: $0.0144 — 1 vCPU * 3600 seconds * $0.00002 * 20% utilization
This can significantly reduce costs for Containers and Sandboxes.
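For reference, the same arithmetic as a small sketch; the price and utilization figures are the ones quoted above, not live pricing.

```ts
// Rough sketch of the worked example above: one hour on a standard-2
// instance (up to 1 vCPU) at 20% average utilization.
const PRICE_PER_VCPU_SECOND = 0.00002; // $ per vCPU-second
const vcpus = 1;
const seconds = 3600;
const utilization = 0.2; // 20% average CPU usage

const previousCost = vcpus * seconds * PRICE_PER_VCPU_SECOND; // ~$0.072, billed on provisioned CPU
const currentCost = previousCost * utilization; // ~$0.0144, billed on active usage only

console.log({ previousCost, currentCost });
```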
See the documentation to learn more about Containers, Sandboxes, and associated pricing.
Until now, if a Worker had previously been deployed via the Cloudflare Dashboard ↗, a subsequent deployment made with the Cloudflare Workers CLI, Wrangler (through the deploy command), would let you override the Worker's dashboard settings without detailing which dashboard settings would be lost.
Now instead, wrangler deploy presents a helpful representation of the differences between the local configuration
and the remote dashboard settings, and offers to update your local configuration file for you.
See example below showing a before and after for wrangler deploy when a local configuration is expected to override a Worker's dashboard settings:
Before

After

If Wrangler instead detects that a deployment would change remote dashboard settings only in an additive way, without modifying or removing any of them, it simply proceeds with the deployment without requesting any user interaction.
Update to Wrangler v4.50.0 or greater to take advantage of this improved deploy flow.
AI Search now supports custom HTTP headers for website crawling, solving a common problem where valuable content behind authentication or access controls could not be indexed.
Previously, AI Search could only crawl publicly accessible pages, leaving knowledge bases, documentation, and other protected content out of your search results. With custom headers support, you can now include authentication credentials that allow the crawler to access this protected content.
This is particularly useful for indexing protected content such as internal knowledge bases and documentation.
To add custom headers when creating an AI Search instance, select Parse options. In the Extra headers section, you can add up to five custom headers per Website data source.

For example, to crawl a site protected by Cloudflare Access, you can add service token credentials as custom headers:
```txt
CF-Access-Client-Id: your-token-id.access
CF-Access-Client-Secret: your-token-secret
```

The crawler will automatically include these headers in all requests, allowing it to access protected pages that would otherwise be blocked.
Learn more about configuring custom headers for website crawling in AI Search.
To facilitate significant enhancements to our submission processes, the Final Disposition column of the Team Submissions > Reclassifications page inside the Email Security Zero Trust application will be temporarily removed.
The column displaying the final disposition status for submitted email misses will no longer be visible on the specified page.
This temporary change is required as we revamp and integrate a more powerful backend infrastructure for processing these security-critical submissions. This update is designed to make even more effective use of the data you provide to improve our detection capabilities. We assure you that your submissions are continuing to be addressed at an even greater rate than before, fueling faster and more accurate security improvements.
Rest assured, the ability to submit email misses and the underlying analysis work remain fully operational. We are committed to reintroducing a refined, more valuable status update feature once the new infrastructure is completed.
The Zero Trust dashboard and navigation is receiving significant and exciting updates. The dashboard is being restructured to better support common tasks and workflows, and various pages have been moved and consolidated.
There is a new guided experience on login detailing the changes, and you can use the Zero Trust dashboard search to find product pages by both their new and old names, as well as your created resources. To replay the guided experience, you can find it in Overview > Get Started.

Notable changes

No changes to our API endpoint structure or to any backend services have been made as part of this effort.
This week highlights enhancements to detection signatures improving coverage for vulnerabilities in DELMIA Apriso, linked to CVE-2025-6205.
Key Findings
This vulnerability allows unauthenticated attackers to gain privileged access to the application. The latest update provides enhanced detection logic for resilient protection against exploitation attempts.
Impact
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | | N/A | DELMIA Apriso - Auth Bypass - CVE:CVE-2025-6205 | Log | Block | This is a new detection. |
| Cloudflare Managed Ruleset | | N/A | PHP Wrapper Injection - Body | N/A | Disabled | Rule metadata description refined. Detection unchanged. |
| Cloudflare Managed Ruleset | | N/A | PHP Wrapper Injection - URI | N/A | Disabled | Rule metadata description refined. Detection unchanged. |
| Announcement Date | Release Date | Release Behavior | Legacy Rule ID | Rule ID | Description | Comments |
|---|---|---|---|---|---|---|
| 2025-11-17 | 2025-11-24 | Log | N/A | | PHP Wrapper Injection - Body - Beta | This is a beta detection and will replace the action on original detection "PHP Wrapper Injection - Body" (ID: |
| 2025-11-17 | 2025-11-24 | Log | N/A | | PHP Wrapper Injection - URI - Beta | This is a beta detection and will replace the action on original detection "PHP Wrapper Injection - URI" (ID: |
| 2025-11-17 | 2025-11-24 | Log | N/A | | FortiWeb - Authentication Bypass via CGIINFO Header - CVE:CVE-2025-64446 | This is a new detection |
| 2025-11-17 | 2025-11-24 | Log | N/A | | XSS - JS Context Escape - Beta | This is a beta detection and will replace the action on original detection "PHP Wrapper Injection - URI" (ID: |
You can now stay on top of your SaaS security posture with the new CASB Weekly Digest notification. This opt-in email digest is delivered to your inbox every Monday morning and provides a high-level summary of your organization's Cloudflare API CASB findings from the previous week.
This allows security teams and IT administrators to get proactive, at-a-glance visibility into new risks and integration health without having to log in to the dashboard.
To opt in, navigate to Manage Account > Notifications in the Cloudflare dashboard to configure the CASB Weekly Digest alert type.
The CASB Weekly Digest notification is available to all Cloudflare users today.
We've resolved a bug in Log Explorer that caused inconsistencies between the custom SQL date field filters and the date picker dropdown. Previously, users attempting to filter logs based on a custom date field via a SQL query sometimes encountered unexpected results or mismatching dates when using the interactive date picker.
This fix ensures that the custom SQL date field filters now align correctly with the selection made in the date picker dropdown, providing a reliable and predictable filtering experience for your log data. This is particularly important for users creating custom log views based on time-sensitive fields.
We've significantly enhanced Log Explorer by adding support for 14 additional Cloudflare product datasets.
This expansion enables Operations and Security Engineers to gain deeper visibility and telemetry across a wider range of Cloudflare services. By integrating these new datasets, users can now access full context to efficiently investigate security incidents, troubleshoot application performance issues, and correlate logged events across different layers (like application and network) within a single interface. This capability is crucial for a complete and cohesive understanding of event flows across your Cloudflare environment.
The newly supported datasets include:
- Dns_logs
- Nel_reports
- Page_shield_events
- Spectrum_events
- Zaraz_events
- Audit Logs
- Audit_logs_v2
- Biso_user_actions
- DNS firewall logs
- Email_security_alerts
- Magic Firewall IDS
- Network Analytics
- Sinkhole HTTP
- ipsec_logs

You can now use Log Explorer to query and filter with each of these datasets. For example, you can identify an IP address exhibiting suspicious behavior in the FW_event logs, and then instantly pivot to the Network Analytics logs or Access logs to see its network-level traffic profile or if it bypassed a corporate policy.
To learn more and get started, refer to the Log Explorer documentation and the Cloudflare Logs documentation.
Digital Experience Monitoring (DEX) provides visibility into WARP device metrics, connectivity, and network performance across your Cloudflare SASE deployment.
We've released four new WARP and DEX device data sets that can be exported via Cloudflare Logpush. These Logpush data sets can be exported to R2, a cloud bucket, or a SIEM to build a customized logging and analytics experience.
To create a new DEX or WARP Logpush job, customers can go to the account level of the Cloudflare dashboard > Analytics & Logs > Logpush to get started.

You can now perform more powerful queries directly in Workers Analytics Engine ↗ with a major expansion of our SQL function library.
Workers Analytics Engine allows you to ingest and store high-cardinality data at scale (such as custom analytics) and query your data through a simple SQL API.
Today, we've expanded Workers Analytics Engine's SQL capabilities with several new functions:
- `countIf()` - count the number of rows which satisfy a provided condition
- `sumIf()` - calculate a sum from rows which satisfy a provided condition
- `avgIf()` - calculate an average from rows which satisfy a provided condition

New date and time functions: ↗

- `toYear()`
- `toMonth()`
- `toDayOfMonth()`
- `toDayOfWeek()`
- `toHour()`
- `toMinute()`
- `toSecond()`
- `toStartOfYear()`
- `toStartOfMonth()`
- `toStartOfWeek()`
- `toStartOfDay()`
- `toStartOfHour()`
- `toStartOfFifteenMinutes()`
- `toStartOfTenMinutes()`
- `toStartOfFiveMinutes()`
- `toStartOfMinute()`
- `today()`
- `toYYYYMM()`

Whether you're building usage-based billing systems, customer analytics dashboards, or other custom analytics, these functions let you get the most out of your data. Get started with Workers Analytics Engine and explore all available functions in our SQL reference documentation.
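As a quick illustration, here is a sketch of a query combining the new functions, sent to the Analytics Engine SQL API; the dataset name `MY_DATASET` and the way `blob1`/`double1` are used are assumptions for the example, so adjust them to your own schema.

```ts
// Minimal sketch: query the Analytics Engine SQL API using the new
// conditional aggregation and date/time functions. Assumes a dataset named
// MY_DATASET where blob1 holds a status string and double1 a numeric value.
const ACCOUNT_ID = "<account_id>";
const API_TOKEN = "<api_token>";

const sql = `
  SELECT
    toStartOfHour(timestamp) AS hour,
    countIf(blob1 = 'error') AS error_rows,
    sumIf(double1, blob1 = 'ok') AS ok_total
  FROM MY_DATASET
  WHERE timestamp > NOW() - INTERVAL '1' DAY
  GROUP BY hour
  ORDER BY hour
`;

const response = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/analytics_engine/sql`,
  {
    method: "POST",
    headers: { Authorization: `Bearer ${API_TOKEN}` },
    body: sql,
  },
);

console.log(await response.text());
```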
A new GA release for the Windows WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
Changes and improvements
Known issues
For Windows 11 24H2 users, Microsoft has confirmed a regression that may lead to performance issues like mouse lag, audio cracking, or other slowdowns. Cloudflare recommends users experiencing these issues upgrade to a minimum Windows 11 24H2 KB5062553 or higher for resolution.
Devices using WARP client 2025.4.929.0 and up may experience Local Domain Fallback failures if a fallback server has not been configured. To configure a fallback server, refer to Route traffic to fallback server.
Devices with KB5055523 installed may receive a warning about Win32/ClickFix.ABA being present in the installer. To resolve this false positive, update Microsoft Security Intelligence to version 1.429.19.0 or later.
DNS resolution may be broken when the following conditions are all true:
To work around this issue, reconnect the WARP client by toggling off and back on.
A new GA release for the macOS WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
Changes and improvements
Known issues
A new GA release for the Linux WARP client is now available on the stable releases downloads page.
This release contains minor fixes, improvements, and new features including Path Maximum Transmission Unit Discovery (PMTUD). When PMTUD is enabled, the client will dynamically adjust packet sizing to optimize connection performance. There is also a new connection status message in the GUI to inform users that the local network connection may be unstable. This will make it easier to diagnose connectivity issues.
WARP client version 2025.8.779.0 introduced an updated public key for Linux packages. The public key must be updated if it was installed before September 12, 2025 to ensure the repository remains functional after December 4, 2025. Instructions to make this update are available at pkg.cloudflareclient.com.
Changes and improvements
Starting February 2, 2026, the cloudflared proxy-dns command will be removed from all new cloudflared releases.
This change is being made to enhance security and address a potential vulnerability in an underlying DNS library. This vulnerability is specific to the proxy-dns command and does not affect any other cloudflared features, such as the core Cloudflare Tunnel service.
The proxy-dns command, which runs a client-side DNS-over-HTTPS (DoH) proxy, has been officially undocumented for several years. Equivalent functionality is fully and securely supported by our actively developed products.
Versions of cloudflared released before this date will not be affected and will continue to operate. However, note that our official support policy for any cloudflared release is one year from its release date.
We strongly advise users of this undocumented feature to migrate to one of the following officially supported solutions before February 2, 2026, to continue benefiting from secure DNS-over-HTTPS.
The preferred method for enabling DNS-over-HTTPS on user devices is the Cloudflare WARP client. The WARP client automatically secures and proxies all DNS traffic from your device, integrating it with your organization's Zero Trust policies and posture checks.
For scenarios where installing a client on every device is not possible (such as servers, routers, or IoT devices), we recommend using the WARP Connector.
Instead of running cloudflared proxy-dns on a machine, you can install the WARP Connector on a single Linux host within your private network. This connector will act as a gateway, securely routing all DNS and network traffic from your entire subnet to Cloudflare for filtering and logging.
We're excited to announce a quality-of-life improvement for Log Explorer users. You can now resize the custom SQL query window to accommodate longer and more complex queries.
Previously, if you were writing a long custom SQL query, the fixed-size window required excessive scrolling to view the full query. This update allows you to easily drag the bottom edge of the query window to make it taller. This means you can view your entire custom query at once, improving the efficiency and experience of writing and debugging complex queries.
To learn more and get started, refer to the Log Explorer documentation.
We’re excited to introduce Logpush Health Dashboards, giving customers real-time visibility into the status, reliability, and performance of their Logpush jobs. Health dashboards make it easier to detect delivery issues, monitor job stability, and track performance across destinations. The dashboards are divided into two sections:
- Upload Health: See how much data was successfully uploaded, where drops occurred, and how your jobs are performing overall. This includes data completeness, success rate, and upload volume.
- Upload Reliability: Diagnose issues impacting stability, retries, or latency, and monitor key metrics such as retry counts, upload duration, and destination availability.

Health Dashboards can be accessed from the Logpush page in the Cloudflare dashboard at the account or zone level, under the Health tab. For more details, refer to our Logpush Health Dashboards documentation, which includes a comprehensive troubleshooting guide to help interpret and resolve common issues.
AI Crawl Control now supports per-crawler drilldowns with an extended actions menu and status code analytics. Drill down into Metrics, Cloudflare Radar, and Security Analytics, or export crawler data for use in WAF custom rules, Redirect Rules, and robots.txt files.
The Metrics tab includes a status code distribution chart showing HTTP response codes (2xx, 3xx, 4xx, 5xx) over time. Filter by individual crawler, category, operator, or time range to analyze how specific crawlers interact with your site.

Each crawler row includes a three-dot menu with per-crawler actions:

Learn more about AI Crawl Control.
This week’s release introduces new detections for Prototype Pollution across three common vectors: URI, Body, and Header/Form.
Key Findings
Impact
Exploitation may allow attackers to change internal logic or cause unexpected behavior in applications using JavaScript or Node.js frameworks. Developers should sanitize input keys and avoid merging untrusted data structures.
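As an illustration of the mitigation mentioned above (not part of Cloudflare's ruleset), a minimal sketch of merging untrusted input while skipping prototype-polluting keys might look like this:

```ts
// Minimal sketch: merge untrusted input into an object while dropping keys
// that could pollute the prototype chain.
const BLOCKED_KEYS = new Set(["__proto__", "constructor", "prototype"]);

function safeMerge(
  target: Record<string, unknown>,
  source: unknown,
): Record<string, unknown> {
  if (typeof source !== "object" || source === null) return target;

  for (const [key, value] of Object.entries(source)) {
    if (BLOCKED_KEYS.has(key)) continue; // drop dangerous keys

    if (typeof value === "object" && value !== null && !Array.isArray(value)) {
      // Recurse into nested plain objects, sanitizing their keys as well.
      const existing = target[key];
      target[key] =
        typeof existing === "object" && existing !== null
          ? safeMerge(existing as Record<string, unknown>, value)
          : safeMerge({}, value);
    } else {
      target[key] = value;
    }
  }
  return target;
}
```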
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | | N/A | Generic Rules - Prototype Pollution - URI | Log | Disabled | This is a new detection |
| Cloudflare Managed Ruleset | | N/A | Generic Rules - Prototype Pollution - Body | Log | Disabled | This is a new detection |
| Cloudflare Managed Ruleset | | N/A | Generic Rules - Prototype Pollution - Header - Form | Log | Disabled | This is a new detection |
Enable automatic tracing on your Workers, giving you detailed metadata and timing information for every operation your Worker performs.

Tracing helps you identify performance bottlenecks, resolve errors, and understand how your Worker interacts with other services on the Workers platform. You can now answer questions like:
You can now:
{ "observability": { "tracing": { "enabled": true, }, },}We have previously added new application categories to better reflect their content and improve HTTP traffic management: refer to Changelog. While the new categories are live now, we want to ensure you have ample time to review and adjust any existing rules you have configured against old categories. The remapping of existing applications into these new categories will be completed by January 30, 2026. This timeline allows you a dedicated period to:
Applications being remapped
| Application Name | Existing Category | New Category |
|---|---|---|
| Google Photos | File Sharing | Photography & Graphic Design |
| Flickr | File Sharing | Photography & Graphic Design |
| ADP | Human Resources | Business |
| Greenhouse | Human Resources | Business |
| myCigna | Human Resources | Health & Fitness |
| UnitedHealthcare | Human Resources | Health & Fitness |
| ZipRecruiter | Human Resources | Business |
| Amazon Business | Human Resources | Business |
| Jobcenter | Human Resources | Business |
| Jobsuche | Human Resources | Business |
| Zenjob | Human Resources | Business |
| DocuSign | Legal | Business |
| Postident | Legal | Business |
| Adobe Creative Cloud | Productivity | Photography & Graphic Design |
| Airtable | Productivity | Development |
| Autodesk Fusion360 | Productivity | IT Management |
| Coursera | Productivity | Education |
| Microsoft Power BI | Productivity | Business |
| Tableau | Productivity | Business |
| Duolingo | Productivity | Education |
| Adobe Reader | Productivity | Business |
| AnpiReport | Productivity | Travel |
| ビズリーチ | Productivity | Business |
| doda (デューダ) | Productivity | Business |
| 求人ボックス | Productivity | Business |
| マイナビ2026 | Productivity | Business |
| Power Apps | Productivity | Business |
| RECRUIT AGENT | Productivity | Business |
| シフトボード | Productivity | Business |
| スタンバイ | Productivity | Business |
| Doctolib | Productivity | Health & Fitness |
| Miro | Productivity | Photography & Graphic Design |
| MyFitnessPal | Productivity | Health & Fitness |
| Sentry Mobile | Productivity | Travel |
| Slido | Productivity | Photography & Graphic Design |
| Arista Networks | Productivity | IT Management |
| Atlassian | Productivity | Business |
| CoderPad | Productivity | Business |
| eAgreements | Productivity | Business |
| Vmware | Productivity | IT Management |
| Vmware Vcenter | Productivity | IT Management |
| AWS Skill Builder | Productivity | Education |
| Microsoft Office 365 (GCC) | Productivity | Business |
| Microsoft Exchange Online (GCC) | Productivity | Business |
| Canva | Sales & Marketing | Photography & Graphic Design |
| Instacart | Shopping | Food & Drink |
| Wawa | Shopping | Food & Drink |
| McDonald's | Shopping | Food & Drink |
| Vrbo | Shopping | Travel |
| American Airlines | Shopping | Travel |
| Booking.com | Shopping | Travel |
| Ticketmaster | Shopping | Entertainment & Events |
| Airbnb | Shopping | Travel |
| DoorDash | Shopping | Food & Drink |
| Expedia | Shopping | Travel |
| EasyPark | Shopping | Travel |
| UEFA Tickets | Shopping | Entertainment & Events |
| DHL Express | Shopping | Business |
| UPS | Shopping | Business |
For more information on creating HTTP policies, refer to Applications and app types.
You can now set a jurisdiction when creating a D1 database to guarantee where your database runs and stores data. Jurisdictions can help you comply with data localization regulations such as GDPR. Supported jurisdictions include eu and fedramp.
A jurisdiction can only be set at database creation time via Wrangler, the REST API, or the UI, and cannot be added or updated after the database exists.
```sh
npx wrangler@latest d1 create db-with-jurisdiction --jurisdiction eu
```

```sh
curl -X POST "https://api.cloudflare.com/client/v4/accounts/<account_id>/d1/database" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"name": "db-with-jurisdiction", "jurisdiction": "eu"}'
```

To learn more, visit D1's data location documentation.
Permissions for managing Logpush jobs related to Zero Trust datasets (Access, Gateway, and DEX) have been updated to improve data security and enforce appropriate access controls.
To view, create, update, or delete Logpush jobs for Zero Trust datasets, users must now have both of the following permissions:
This week’s emergency release introduces a new detection signature that enhances coverage for a critical vulnerability in the React Native Metro Development Server, tracked as CVE-2025-11953.
Key Findings
The Metro Development Server exposes an HTTP endpoint that is vulnerable to OS command injection (CWE-78). An unauthenticated network attacker can send a crafted request to this endpoint and execute arbitrary commands on the host running Metro. The vulnerability affects Metro/cli-server-api builds used by React Native Community CLI in pre-patch development releases.
Impact
Successful exploitation of CVE-2025-11953 may result in remote command execution on developer workstations or CI/build agents, leading to credential and secret exposure, source tampering, and potential lateral movement into internal networks. Administrators and developers are strongly advised to apply the vendor's patches and restrict Metro’s network exposure to reduce this risk.
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | | N/A | React Native Metro - Command Injection - CVE:CVE-2025-11953 | N/A | Block | This is a new detection |
Workers VPC Services is now available, enabling your Workers to securely access resources in your private networks, without having to expose them on the public Internet.
```js
export default {
  async fetch(request, env, ctx) {
    // Perform application logic in Workers here

    // Sample call to an internal API running on ECS in AWS using the binding
    const response = await env.AWS_VPC_ECS_API.fetch("https://internal-host.example.com");

    // Additional application logic in Workers
    return new Response();
  },
};
```

Set up a Cloudflare Tunnel, create a VPC Service, add service bindings to your Worker, and access private resources securely. Refer to the documentation to get started.
We're excited to announce that Log Explorer users can now cancel queries that are currently running.
This new feature addresses a common pain point: waiting for a long, unintended, or misconfigured query to complete before you can submit a new, correct one. With query cancellation, you can immediately stop the execution of any undesirable query, allowing you to quickly craft and submit a new query, significantly improving your investigative workflow and productivity within Log Explorer.
We're excited to announce a new feature in Log Explorer that significantly enhances how you analyze query results: the Query results distribution chart.
This new chart provides a graphical distribution of your results over the time window of the query. Immediately after running a query, you will see the distribution chart above your result table. This visualization allows Log Explorer users to quickly spot trends, identify anomalies, and understand the temporal concentration of log events that match their criteria. For example, you can visually confirm if a spike in traffic or errors occurred at a specific time, allowing you to focus your investigation efforts more effectively. This feature makes it faster and easier to extract meaningful insights from your vast log data.
The chart will dynamically update to reflect the logs matching your current query.
This week highlights enhancements to detection signatures improving coverage for vulnerabilities in Adobe Commerce and Magento Open Source, linked to CVE-2025-54236.
Key Findings
This vulnerability allows unauthenticated attackers to take over customer accounts through the Commerce REST API and, in certain configurations, may lead to remote code execution. The latest update provides enhanced detection logic for resilient protection against exploitation attempts.
Impact
| Ruleset | Rule ID | Legacy Rule ID | Description | Previous Action | New Action | Comments |
|---|---|---|---|---|---|---|
| Cloudflare Managed Ruleset | | 100774C | Adobe Commerce - Remote Code Execution - CVE:CVE-2025-54236 | Log | Block | This is an improved detection. |
The Brand Protection logo queries dashboard now includes a Report to Cloudflare button, letting you submit an abuse report directly from the dashboard. While you could previously report new domains impersonating your brand, you can now do the same for websites found to be using your logo without your permission. The abuse report is prefilled, and you only need to validate a few fields before clicking submit, after which our team will process your request.
Ready to start? Check out the Brand Protection docs.
Workers, including those using Durable Objects and Browser Rendering, may now process WebSocket messages up to 32 MiB in size. Previously, this limit was 1 MiB.
This change allows Workers to handle use cases requiring large message sizes, such as processing Chrome Devtools Protocol messages.
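For context, here is a minimal sketch of a Worker that accepts a WebSocket and echoes messages back, each of which may now be up to 32 MiB; the size logging is illustrative only.

```ts
// Minimal sketch: a Worker WebSocket echo endpoint.
export default {
  async fetch(request: Request): Promise<Response> {
    if (request.headers.get("Upgrade") !== "websocket") {
      return new Response("Expected a WebSocket upgrade", { status: 426 });
    }

    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);

    server.accept();
    server.addEventListener("message", (event) => {
      const size =
        typeof event.data === "string" ? event.data.length : event.data.byteLength;
      console.log(`Received ${size} bytes`); // messages may now be up to 32 MiB
      server.send(event.data);
    });

    return new Response(null, { status: 101, webSocket: client });
  },
};
```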
For more information, please see the Durable Objects startup limits.
We've raised the Cloudflare Workflows account-level limits for all accounts on the Workers paid plan:
These increases mean you can create new instances up to 10x faster, and have more workflow instances concurrently executing. To learn more and get started with Workflows, refer to the getting started guide.
If your application requires a higher limit, fill out the Limit Increase Request Form or contact your account team. Please refer to Workflows pricing for more information.
Two-factor authentication (2FA) is one of the best ways to protect your account from the risk of account takeover. Cloudflare has offered phishing-resistant 2FA options, including hardware-based keys (for example, a YubiKey) and app-based TOTP (time-based one-time password) options that use apps like Google Authenticator or Microsoft Authenticator. Unfortunately, while these solutions are very secure, access can be lost if you misplace the hardware key or lose the phone with the app. The result is that users sometimes get locked out of their accounts and need to contact support.
Today, we are announcing the addition of email as a 2FA factor for all Cloudflare accounts. Email 2FA is in wide use across the industry as a least common denominator for 2FA because it is low friction, loss resistant, and still improves security over username/password login only. We also know that most commercial email providers already require 2FA, so your email address is usually well protected already.
You can now enable email 2FA on the Cloudflare dashboard:
Cloudflare is critical infrastructure, and you should protect it as such. Review the following best practices and make sure you are doing your part to secure your account:
As Cloudflare's platform has grown, so has the need for precise, role-based access control. We’ve redesigned the Member Management experience in the Dashboard to help administrators more easily discover, assign, and refine permissions for specific principals.
Refreshed member invite flow
We overhauled the Invite Members UI to simplify inviting users and assigning permissions.

Refreshed Members Overview Page
We've updated the Members Overview Page to clearly display:

New Member Permission Policies Details View
We've created a new member details screen that shows all permission policies associated with a member, including policies inherited from group associations, making it easier to understand a member's effective permissions.

Improved Member Permission Workflow
We redesigned the permission management experience to make it faster and easier for administrators to review roles and grant access.

Account-scoped Policies Restrictions Relaxed
Previously, customers could only associate a single account-scoped policy with a member. We've relaxed this restriction: administrators can now assign multiple account-scoped policies to the same member, bringing policy assignment behavior in line with user groups and providing greater flexibility in managing member permissions.
Cloudflare now provides two new request fields in the Ruleset engine that let you make decisions based on whether a request used TCP and the measured TCP round-trip time between the client and Cloudflare. These fields help you understand protocol usage across your traffic and build policies that respond to network performance. For example, you can distinguish TCP from QUIC traffic or route high latency requests to alternative origins when needed.
| Field | Type | Description |
|---|---|---|
| `cf.edge.client_tcp` | Boolean | Indicates whether the request used TCP. A value of true means the client connected using TCP instead of QUIC. |
| `cf.timings.client_tcp_rtt_msec` | Number | Reports the smoothed TCP round-trip time between the client and Cloudflare in milliseconds. For example, a value of 20 indicates roughly twenty milliseconds of RTT. |
Example filter expression:
```txt
cf.edge.client_tcp && cf.timings.client_tcp_rtt_msec < 100
```

More information can be found in the Rules language fields reference.
You can now access preview URLs directly from the build details page, making it easier to test your changes when reviewing builds in the dashboard.

What's new
Cloudflare Access for private hostname applications can now secure traffic on all ports and protocols.
Previously, applying Zero Trust policies to private applications required the application to use HTTPS on port 443 and support Server Name Indication (SNI).
This update removes that limitation. As long as the application is reachable via a Cloudflare off-ramp, you can now enforce your critical security controls — like single sign-on (SSO), MFA, device posture, and variable session lengths — to any private application. This allows you to extend Zero Trust security to services like SSH, RDP, internal databases, and other non-HTTPS applications.

For example, you can now create a self-hosted application in Access for ssh.testapp.local running on port 22. You can then build a policy that only allows engineers in your organization to connect after they pass an SSO/MFA check and are using a corporate device.
This feature is generally available across all plans.
AI Search now supports reranking for improved retrieval quality and allows you to set the system prompt directly in your API requests.
You can now enable reranking to reorder retrieved documents based on their semantic relevance to the user’s query. Reranking helps improve accuracy, especially for large or noisy datasets where vector similarity alone may not produce the optimal ordering.
You can enable and configure reranking in the dashboard or directly in your API requests:
```js
const answer = await env.AI.autorag("my-autorag").aiSearch({
  query: "How do I train a llama to deliver coffee?",
  model: "@cf/meta/llama-3.3-70b-instruct-fp8-fast",
  reranking: {
    enabled: true,
    model: "@cf/baai/bge-reranker-base",
  },
});
```

Previously, system prompts could only be configured in the dashboard. You can now define them directly in your API requests, giving you per-query control over behavior. For example:
```js
// Dynamically set query and system prompt in AI Search
async function getAnswer(query, tone) {
  const systemPrompt = `You are a ${tone} assistant.`;

  const response = await env.AI.autorag("my-autorag").aiSearch({
    query: query,
    system_prompt: systemPrompt,
  });

  return response;
}

// Example usage
const query = "What is Cloudflare?";
const tone = "friendly";

const answer = await getAnswer(query, tone);
console.log(answer);
```

Learn more about Reranking and System Prompt in AI Search.
Cloudflare CASB (Cloud Access Security Broker) now supports two new granular roles to provide more precise access control for your security teams:
These new roles help you better enforce the principle of least privilege. You can now grant specific members access to CASB security findings without assigning them broader permissions, such as the Super Administrator or Administrator roles.
To enable Data Loss Prevention (DLP) scans in CASB, account members will need the Cloudflare Zero Trust role.
You can find these new roles when inviting members or creating API tokens in the Cloudflare dashboard under Manage Account > Members.
To learn more about managing roles and permissions, refer to the Manage account members and roles documentation.
To give you precision and flexibility while creating policies to block unwanted traffic, we are introducing new, more granular application categories in the Gateway product.
We have added the following categories to provide more precise organization and allow for finer-grained policy creation, designed around how users interact with different types of applications:
The new categories are live now, but we are providing a transition period for existing applications to be fully remapped to these new categories.
The full remapping will be completed by January 30, 2026.
We encourage you to use this time to:
For more information on creating HTTP policies, refer to Applications and app types.
Logpush now supports integration with Microsoft Sentinel ↗. The new Azure Sentinel Connector, built on Microsoft's Codeless Connector Framework (CCF), is now available. This solution replaces the previous Azure Functions-based connector, offering significant improvements in security, data control, and ease of use for customers. Logpush customers can send logs to Azure Blob Storage and configure this new Sentinel Connector to ingest those logs directly into Microsoft Sentinel.
This upgrade significantly streamlines log ingestion, improves security, and provides greater control:
Find the new solution here ↗ and refer to Cloudflare's developer documentation ↗ for more information on the connector, including setup steps, supported logs, and Microsoft's resources.
Radar now introduces Top-Level Domain (TLD) insights, providing visibility into popularity based on the DNS magnitude metric, detailed TLD information including its type, manager, DNSSEC support, RDAP support, and WHOIS data, and trends such as DNS query volume and geographic distribution observed by the 1.1.1.1 DNS resolver.
The following dimensions were added to the Radar DNS API, specifically, to the /dns/summary/{dimension} and /dns/timeseries_groups/{dimension} endpoints:
- `tld`: Top-level domain extracted from DNS queries; can also be used as a filter.
- `tld_dns_magnitude`: Top-level domain ranking by DNS magnitude.

And the following endpoints were added:

- `/tlds` - Lists all TLDs.
- `/tlds/{tld}` - Retrieves information about a specific TLD.
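As a quick sketch of querying the new dimension (assuming an API token with Radar read access and the standard v4 API base URL):

```ts
// Minimal sketch: fetch a DNS summary grouped by the new `tld` dimension.
const API_TOKEN = "<api_token>";

const res = await fetch(
  "https://api.cloudflare.com/client/v4/radar/dns/summary/tld",
  { headers: { Authorization: `Bearer ${API_TOKEN}` } },
);

console.log(await res.json());
```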
Learn more about the new Radar DNS insights in our blog post ↗, and check out the new Radar page ↗.
The Requests for Information (RFI) dashboard now shows users the number of tokens used by each submitted RFI to better understand usage of tokens and how they relate to each request submitted.

What’s new:
- Strategic Threat Research request type.

Cloudforce One subscribers can try it now in Application Security > Threat Intelligence > Requests for Information ↗.
Previously, if you wanted to develop or deploy a worker with attached resources, you'd have to first manually create the desired resources. Now, if your Wrangler configuration file includes a KV namespace, D1 database, or R2 bucket that does not yet exist on your account, you can develop locally and deploy your application seamlessly, without having to run additional commands.
Automatic provisioning is launching as an open beta, and we'd love to hear your feedback to help us make improvements! It currently works for KV, R2, and D1 bindings. You can disable the feature using the --no-x-provision flag.
To use this feature, update to wrangler@4.45.0 and add bindings to your config file without resource IDs e.g.:
{ "kv_namespaces": [{ "binding": "MY_KV" }], "d1_databases": [{ "binding": "MY_DB" }], "r2_buckets": [{ "binding": "MY_R2" }],}wrangler dev will then automatically create these resources for you locally, and on your next run of wrangler deploy, Wrangler will call the Cloudflare API to create the requested resources and link them to your Worker.
Though resource IDs will be automatically written back to your Wrangler config file after resource creation, resources will stay linked across future deploys even without adding the resource IDs to the config file. This is especially useful for shared templates, which now no longer need to include account-specific resource IDs when adding a binding.
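For context, here is a small sketch of using these provisioned bindings inside a Worker; the binding names match the config above, and the Env interface mirrors what `wrangler types` would generate.

```ts
interface Env {
  MY_KV: KVNamespace;
  MY_DB: D1Database;
  MY_R2: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // KV: record the time of this request.
    await env.MY_KV.put("last-visit", new Date().toISOString());

    // D1: run a trivial query against the provisioned database.
    const { results } = await env.MY_DB.prepare("SELECT 1 AS ok").all();

    // R2: read an object if it exists.
    const object = await env.MY_R2.get("hello.txt");

    return Response.json({
      lastVisit: await env.MY_KV.get("last-visit"),
      d1: results,
      r2: object ? await object.text() : null,
    });
  },
};
```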
The Cloudflare Vite plugin now supports TanStack Start ↗ apps. Get started with new or existing projects.
Create a new TanStack Start project that uses the Cloudflare Vite plugin via the create-cloudflare CLI:
```sh
npm create cloudflare@latest -- my-tanstack-start-app --framework=tanstack-start
```

```sh
yarn create cloudflare my-tanstack-start-app --framework=tanstack-start
```

```sh
pnpm create cloudflare@latest my-tanstack-start-app --framework=tanstack-start
```

Migrate an existing TanStack Start project to use the Cloudflare Vite plugin:

1. Install `@cloudflare/vite-plugin` and `wrangler`:

   ```sh
   npm i -D @cloudflare/vite-plugin wrangler
   ```

   ```sh
   yarn add -D @cloudflare/vite-plugin wrangler
   ```

   ```sh
   pnpm add -D @cloudflare/vite-plugin wrangler
   ```

2. Add the Cloudflare plugin to your Vite config:

   ```ts
   import { defineConfig } from "vite";
   import { tanstackStart } from "@tanstack/react-start/plugin/vite";
   import viteReact from "@vitejs/plugin-react";
   import { cloudflare } from "@cloudflare/vite-plugin";

   export default defineConfig({
     plugins: [
       cloudflare({ viteEnvironment: { name: "ssr" } }),
       tanstackStart(),
       viteReact(),
     ],
   });
   ```

3. Add a Wrangler configuration file (JSON or TOML):

   ```jsonc
   {
     "$schema": "./node_modules/wrangler/config-schema.json",
     "name": "my-tanstack-start-app",
     "compatibility_date": "2025-10-11",
     "compatibility_flags": ["nodejs_compat"],
     "main": "@tanstack/react-start/server-entry"
   }
   ```

   ```toml
   name = "my-tanstack-start-app"
   compatibility_date = "2025-10-11"
   compatibility_flags = ["nodejs_compat"]
   main = "@tanstack/react-start/server-entry"
   ```

4. Update your `package.json` scripts:

   ```json
   {
     "scripts": {
       "dev": "vite dev",
       "build": "vite build && tsc --noEmit",
       "start": "node .output/server/index.mjs",
       "preview": "vite preview",
       "deploy": "npm run build && wrangler deploy",
       "cf-typegen": "wrangler types"
     }
   }
   ```

See the TanStack Start framework guide for more info.
Developers can now programmatically retrieve a list of all file formats supported by the Markdown Conversion utility in Workers AI.
You can use the env.AI binding:
```js
await env.AI.toMarkdown().supported()
```

Or call the REST API:

```sh
curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/tomarkdown/supported \
  -H 'Authorization: Bearer {API_TOKEN}'
```

Both return a list of file formats that users can convert into Markdown:

```json
[
  {
    "extension": ".pdf",
    "mimeType": "application/pdf"
  },
  {
    "extension": ".jpeg",
    "mimeType": "image/jpeg"
  },
  ...
]
```

Learn more about our Markdown Conversion utility.
AI Crawl Control now includes a Robots.txt tab that provides insights into how AI crawlers interact with your robots.txt files.
The Robots.txt tab allows you to:
- View the status of your robots.txt files across all your hostnames, including HTTP status codes, and identify hostnames that need a robots.txt file.
- Monitor requests to each robots.txt file, with breakdowns of successful versus unsuccessful requests.
- Check whether your robots.txt files contain Content Signals ↗ directives for AI training, search, and AI input.
- Identify crawlers that violate your robots.txt directives, including the crawler name, operator, violated path, specific directive, and violation count.
- Filter robots.txt request data by crawler, operator, category, and custom time ranges.

When you identify non-compliant crawlers, you can:
To get started, go to AI Crawl Control > Robots.txt in the Cloudflare dashboard. Learn more in the Track robots.txt documentation.
Admins can now create scheduled DNS policies directly from the Zero Trust dashboard, without using the API. You can configure policies to be active during specific, recurring times, such as blocking social media during business hours or gaming sites on school nights.
You can see the flow in the demo GIF:

This update makes time-based DNS policies accessible to all Gateway customers, removing the technical barrier of the API.
You can now generate on-demand security reports directly from the Cloudflare dashboard. This new feature provides a comprehensive overview of your email security posture, making it easier than ever to demonstrate the value of Cloudflare’s Email security to executives and other decision makers.
These reports offer several key benefits:

This feature is available across the following Email security packages: