The idea is tantalizing. Imagine opening a menu, clicking a single button, and watching Mozilla Firefox—your humble daily driver browser—crawl every accessible page of a domain, download all the HTML, CSS, JS, and assets, and package it neatly into a local folder. No command line. No wget flags. No httrack configuration.
Install it from addons.mozilla.org. Configure it to save as “Complete HTML file” and enable “Save deferred images.”
But that doesn’t mean Firefox is powerless. In fact, when you combine its native DevTools, a few strategic extensions, and some underrated internal features, Firefox becomes one of the most ethical, flexible, and user-controlled tools for offline archiving. This post is the long-form guide to what “siteripping” means in the Firefox ecosystem—what works, what doesn’t, and how to do it right without breaking the law or your sanity.
Firefox is the scalpel. Siterip tools are the chainsaw. Use the right one.
Beyond the Save Button: A Deep Dive into Firefox’s Siterip Capabilities (And Why It’s Not What You Think)
Firefox is great here because you can already be logged in. Unlike wget, Firefox handles cookies, sessions, and WebSockets natively. Extensions like “SingleFile” will save the authenticated view. This is how you archive your own Slack history, Notion pages, or internal wikis (with permission).
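If you do want to hand an authenticated session over to a command-line tool, one common bridge is exporting Firefox’s cookie jar into the Netscape cookies.txt format that wget accepts via --load-cookies. Below is a hedged Python sketch: the moz_cookies column names match recent Firefox profiles but may differ in yours, and the function name is my own.

```python
import sqlite3

def export_firefox_cookies(sqlite_path, out_path):
    """Read a Firefox cookies.sqlite and write a Netscape-format cookies.txt.

    Note: Firefox locks cookies.sqlite while running, so copy the file out
    of the profile directory first (or close the browser).
    """
    con = sqlite3.connect(sqlite_path)
    rows = con.execute(
        "SELECT host, path, isSecure, expiry, name, value FROM moz_cookies"
    )
    with open(out_path, "w") as f:
        f.write("# Netscape HTTP Cookie File\n")
        for host, path, secure, expiry, name, value in rows:
            # A leading dot on the host means the cookie applies to subdomains.
            subdomains = "TRUE" if host.startswith(".") else "FALSE"
            f.write("\t".join([
                host,
                subdomains,
                path,
                "TRUE" if secure else "FALSE",
                str(expiry),
                name,
                value,
            ]) + "\n")
    con.close()
```

The resulting file can then be fed to a mirroring pass with `wget --load-cookies cookies.txt`, which lets the chainsaw reuse the session the scalpel established.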
Save Page As (choose “Web Page, complete”) is the classic. It saves the current page’s HTML plus a _files folder containing CSS, JS, and images. It’s not recursive—it won’t follow links—but for a single page, it’s perfect.
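To see what “recursive” would actually add, here is a minimal, illustrative Python sketch (helper names are hypothetical) of the one step Save Page Complete never performs: extracting a page’s same-host links so a crawler could follow them.

```python
import urllib.parse
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from <a> tags in a fetched page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def same_host_links(base_url, html):
    """Resolve relative hrefs against base_url, keep only same-host URLs."""
    parser = LinkCollector()
    parser.feed(html)
    base_host = urllib.parse.urlparse(base_url).netloc
    out = []
    for href in parser.links:
        absolute = urllib.parse.urljoin(base_url, href)
        if urllib.parse.urlparse(absolute).netloc == base_host:
            out.append(absolute)
    return out
```

A real recursive tool repeats this on every page it saves, downloading each discovered URL in turn; that queue-and-repeat loop is exactly the boundary between “save this page” and “rip this site.”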