Fundamentals·8 min read

Patched Chromium browsers explained

How scrapers patch Chromium's source code to hide automation fingerprints — and why navigator.webdriver checks no longer catch patched browsers.

What is a patched Chromium browser?

A patched Chromium browser is a build of Chromium whose source code has been modified to remove the fingerprints that identify it as an automated browser. The changes are compiled in. By the time the binary runs, the signs that would normally give it away are already gone: injected variables, telltale flags, unusual runtime properties.

This is different from stealth plugins like puppeteer-extra-plugin-stealth, which run JavaScript at page load to overwrite those properties after the fact. Runtime shims leave their own traces. Patched builds don't, because the thing that would have been shimmed was never there.

The most widely used patched projects are undetected-chromedriver, rebrowser-patches, Patchright (a Playwright fork), nodriver, and, on the Firefox side, camoufox. They differ in what they patch and how aggressively, but the shared premise is the same: fix the leak in C++, not JavaScript.

Detection used to be easier. Before Chrome unified its headful and headless codebases in November 2022, you could catch most scrapers just by looking for the `HeadlessChrome` user-agent string (Castle, 2025). That check still ships in a lot of bot-detection code. It catches almost nothing now.
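The legacy check amounts to a one-line substring match. A minimal sketch, with example user-agent strings abbreviated for illustration:

```javascript
// Legacy headless detection: match the old "HeadlessChrome" UA token.
// It only catches the pre-unification headless builds; the unified
// headless mode reports an ordinary "Chrome/..." user agent.
function looksHeadlessLegacy(userAgent) {
  return /HeadlessChrome/.test(userAgent);
}

// Old headless mode gave itself away:
looksHeadlessLegacy('Mozilla/5.0 ... HeadlessChrome/108.0.0.0 Safari/537.36'); // true
// The unified codebase does not:
looksHeadlessLegacy('Mozilla/5.0 ... Chrome/120.0.0.0 Safari/537.36');         // false
```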

Why vanilla headless Chrome gets caught

If you launch Puppeteer or Selenium with default settings against a site with real detection, you lose. The reasons are mundane and well documented.

The first is `navigator.webdriver`. The WebDriver spec requires it to be `true` when a browser is being controlled by automation, and any page JavaScript can read it in one line. Selenium sets it, Playwright sets it, Puppeteer sets it.
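The check is a single property read. A sketch, factored over a navigator-like object so the logic runs anywhere (in page code it is just `navigator.webdriver`):

```javascript
// The WebDriver spec requires navigator.webdriver === true under
// automation; detection is one strict comparison.
function isAutomated(nav) {
  return nav.webdriver === true;
}

isAutomated({ webdriver: true });  // true  — default Selenium/Playwright/Puppeteer
isAutomated({ webdriver: false }); // false — a real user's browser
```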

The second is `$cdc_asdjflasutopfhvcZLmcfl_`. ChromeDriver injects this variable into every page it drives as part of how Selenium talks to the browser (DataDome, 2024). The string is fixed. A single regex catches it.
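The scan is one pass over the document's own property names. A sketch using a generic `$cdc_`/`$wdc_` prefix pattern, since the exact suffix varies by ChromeDriver build:

```javascript
// ChromeDriver historically injects a document variable whose name
// starts with "$cdc_" (older Selenium wire protocols used "$wdc_").
// One regex over the document's own keys catches an unpatched build.
function hasDriverVariable(doc) {
  return Object.getOwnPropertyNames(doc).some((key) => /^\$[cw]dc_/.test(key));
}

hasDriverVariable({ $cdc_asdjflasutopfhvcZLmcfl_: {} }); // true
hasDriverVariable({ title: 'normal page' });             // false
```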

The third is the `HeadlessChrome` substring in the user agent, which old headless mode still ships if nobody remembers to override it. The fourth is the pile of runtime properties that differ between a real browser and a controlled one: missing plugins, zero-length languages, the wrong `screen.colorDepth`, no `window.chrome` object. None of these individually proves automation. All of them together do.
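Combining the weak signals into a score might look like the sketch below. The properties are the real ones listed above; the tally-and-threshold logic is an illustration, not a production heuristic:

```javascript
// Each check is weak alone; detection code tallies them and flags
// sessions that trip several at once.
function fingerprintScore(win) {
  let score = 0;
  if (win.navigator.webdriver === true) score++;   // spec-mandated flag
  if (win.navigator.plugins.length === 0) score++; // headless ships no plugins
  if (win.navigator.languages.length === 0) score++;
  if (!win.chrome) score++;                        // missing window.chrome object
  return score;
}

const vanillaHeadless = {
  navigator: { webdriver: true, plugins: [], languages: [] },
};
fingerprintScore(vanillaHeadless); // 4 — every tell fires at once
```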

Fixing each one is a one-line change. Which is why a patched browser takes about an hour to assemble if you know what you're doing, and why most scraper tutorials on the public internet stop at this level. The problem is that this only gets you past the checks a 2019 bot-detection system would run. Modern systems look somewhere else entirely.

The real signal lives in the Chrome DevTools Protocol

Puppeteer, Playwright, and Selenium all talk to Chromium through the same channel: the Chrome DevTools Protocol, or CDP. It's a WebSocket interface the browser exposes for debugging, and automation frameworks use it to drive clicks, execute scripts, and read the DOM. Roughly 95% of automated actions end up running through `Page.evaluate`, which is CDP's way of running JavaScript inside the page (Rebrowser, 2024).
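On the wire, a CDP command is a small JSON message sent over that WebSocket. The shape below is the real protocol format; the id and expression are arbitrary, and Puppeteer's `Page.evaluate` ultimately becomes a `Runtime.evaluate` (or `Runtime.callFunctionOn`) call like this one:

```javascript
// A CDP command is JSON over the DevTools WebSocket: an id to match
// the eventual response, a "Domain.method" name, and parameters.
const command = JSON.stringify({
  id: 1,
  method: 'Runtime.evaluate',
  params: { expression: 'document.title', returnByValue: true },
});
// ws.send(command) — against ws://localhost:9222/devtools/page/<targetId>
```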

Running `Page.evaluate` requires enabling the CDP `Runtime` domain. Enabling it emits events the page can observe. For years, the standard way bot-detection code spotted automation was to plant an error object with a custom getter on `.stack`, call `console.debug()` on it, and watch for the getter to trigger. A firing getter meant the Runtime domain was serializing the object across the WebSocket.
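A sketch of the classic probe, with the console serializer passed in so both outcomes can be shown; in a real page the sink is `console.debug` and the serialization happens inside the browser when a Runtime session is attached:

```javascript
// Plant an object whose .stack property has a side-effecting getter,
// hand it to the console, and check whether anything read the stack.
// With a CDP Runtime session attached, the browser serializes the
// object to preview it for the remote client, firing the getter.
function probeRuntimeDomain(consoleSink) {
  let stackRead = false;
  const bait = { name: 'Error' };
  Object.defineProperty(bait, 'stack', {
    get() { stackRead = true; return ''; },
  });
  consoleSink(bait);
  return stackRead;
}

// Simulated CDP-attached console: building the preview reads .stack.
probeRuntimeDomain((obj) => JSON.stringify({ preview: obj.stack })); // true
// No debugger attached: the object is never serialized.
probeRuntimeDomain(() => {});                                        // false
```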

Then, in May 2025, two V8 commits quietly killed the trick. One titled "Avoid error side effects in DevTools" landed on May 7; "Apply getter guard throughout error preview" followed on May 9. Together they stopped the browser from running user-defined getters during error preview (Castle, 2025). The console.debug check went from reliable to useless, and most vendors didn't notice for months.

"Even the simplest bots, the ones that used to get caught, don't trigger the CDP signal anymore," Castle's engineers wrote about the fallout (Castle, 2025).

If your detection stack depends on that specific signal, you're running blind against any automation built after May 2025. And CDP is where modern patched browsers put most of their effort.

How the patches actually work

The patched-browser ecosystem splits into three generations, each targeting a different layer of the automation stack.

The first generation patches the fingerprint surface directly. Undetected-chromedriver modifies the Selenium ChromeDriver binary to rename `$cdc_` variables and strip the obvious Selenium tells. Puppeteer and Playwright users run rebrowser-patches, which modifies the library source so it doesn't enable the CDP Runtime domain on every page load. These patches are applied once to your automation install and persist across releases until the upstream code changes enough to break them.

The second generation patches the protocol itself. Patchright (a Playwright fork maintained as `patchright` and `patchright-python`) runs JavaScript through `Page.createIsolatedWorld` instead of the default Runtime domain, so the page side never sees the events detection code looks for. The patches are written at the C++ level, not as JavaScript wrappers, because JavaScript wrappers introduce their own detectable side effects.

The third generation walks away from CDP altogether. Nodriver and its async fork zendriver re-implement browser control using OS-level input. They move the mouse through operating-system APIs and type through keyboard events, so no CDP session exists for a detector to notice. Camoufox does the equivalent for Firefox. It patches the Juggler automation protocol at the C++ layer before compilation, so `navigator.hardwareConcurrency`, WebGL renderer strings, and screen geometry all match what a real browser would report.

Castle summarized the shift this way: "Modern anti-detect frameworks are more targeted. Rather than applying dozens of evasions, they focus on a smaller set of powerful, low-level signals, primarily those introduced by browser automation protocols" (Castle, 2025).

The ecosystem is moving down the stack. Every time a detection signal gets published, the patches move one layer closer to the bare metal.

The cost of running a fork

Most scrapers don't maintain their own Chromium fork, even though it would give them the cleanest bypass. The reason is cadence. Chrome ships a new milestone every four weeks (Chromium project). Each release moves code around, renames internals, and introduces new detection surfaces that didn't exist in the previous version. A fork that isn't rebased onto the new release quickly stops building at all.

The patches themselves are small, often a handful of lines in a specific Runtime or Page handler, but rebasing them onto each new Chromium is not. Every four weeks the fork owner has to reapply their changes against new source, retest them against live bot-detection vendors, and ship a new binary before the old one ages out. Camoufox's long maintenance gap through most of 2025, during which the project barely shipped updates, is a good illustration of what happens when one person carries the weight.

Which is why the commercial market fills the vacuum. Bright Data, Browserless, and similar services run patched Chromium as a hosted API: you point Playwright at their WebSocket endpoint and they handle the rebasing. The tradeoff is that these services have their own tells. Independent benchmarks find that the CDP automation flag is still readable from inside the browser in every session they serve (ScrapeOps benchmark, 2026).

Managed services make patched browsers affordable, but they also make them legible to anyone who knows where to look.

What still catches patched browsers

Headless fingerprint detection is easy to defeat. Behavioral detection is not.

The signals that survive patching are the ones that depend on how the browser is being used, not what the browser looks like at page load. Mouse paths that travel in straight lines between elements. Identical typing cadences across sessions. Form completion that finishes faster than a human could read the form. Session shapes that repeat at request-count intervals the real user population never produces. None of these are visible in a single request, which is why page-load fingerprinting misses them entirely.
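One signal from that list — straight-line mouse travel — reduces to comparing total path length with net displacement. A sketch with an illustrative threshold, not a tuned value:

```javascript
// Ratio of straight-line displacement to actual path length.
// Human pointer traces curve and overshoot (ratio well below 1);
// naive automation interpolates straight segments (ratio ~= 1.0).
function pathLinearity(points) {
  let pathLen = 0;
  for (let i = 1; i < points.length; i++) {
    pathLen += Math.hypot(points[i].x - points[i - 1].x,
                          points[i].y - points[i - 1].y);
  }
  const first = points[0];
  const last = points[points.length - 1];
  const displacement = Math.hypot(last.x - first.x, last.y - first.y);
  return pathLen === 0 ? 1 : displacement / pathLen;
}

const botPath   = [{ x: 0, y: 0 }, { x: 50, y: 50 }, { x: 100, y: 100 }];
const humanPath = [{ x: 0, y: 0 }, { x: 60, y: 10 }, { x: 90, y: 70 }, { x: 100, y: 100 }];
pathLinearity(botPath) >= 0.99;  // true — perfectly straight travel
pathLinearity(humanPath) < 0.99; // true — curved, human-like travel
```

A single trace proves nothing on its own; the signal comes from seeing near-1.0 linearity repeat across a whole session.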

The scale of automation on the modern web forces this shift. Cloudflare measured roughly 30% of all internet traffic as bots in 2025, and during the first week of March 2025 over 94% of authentication requests came from bots (Cloudflare, 2025). Imperva observes more than 10,000 unique IPs per day running headless Chrome for scraping and carding, split roughly half-and-half between headless and headful mode (Imperva, 2024). At that volume, a detection system that only looks at one request in isolation is scoring noise.

By early 2025, puppeteer-extra-plugin-stealth — once the default way to mask automation — no longer reliably bypassed Cloudflare, and maintainers stopped actively updating it (ZenRows, 2025; Castle, 2025). The tools that replaced it all share one property: they hide page-load signals but can't hide a session that acts like a bot.

The detection that works now looks at behavior over time, not a flag at page load.

What this means for your detection stack

If you're defending a site, the practical takeaway is narrow. Stop treating `navigator.webdriver` and user-agent substring checks as bot detection. They catch old tooling and nothing else. Patched Chromium builds are cheap, widely available, and hide exactly the signals those checks look for.

What still works is multi-signal detection that combines request-level fingerprints with behavioral patterns across a session, the kind that doesn't hinge on a single flag being true or false. As Castle put it bluntly in August 2025, even the simplest bots — the ones that used to get caught — no longer trigger the CDP signal (Castle, 2025). A detector that depends on a single signal is already behind.

Centinel detects and blocks AI crawlers and automation traffic at the request level, combining runtime signals with behavioral analysis built for exactly the patched-browser class of traffic described above. When 94% of authentication requests across the internet are automated (Cloudflare, 2025), the question isn't whether to bother detecting them. It's which signals you trust to tell them apart.

See what's crawling your site right now

Run a free audit and get a detailed report of which AI crawlers are accessing your content — in 48 hours.

Get your free audit