For sure. I tried to set up a collaboration environment for a customer years ago using WebDAV over SSL in lieu of Dropbox. Everything worked great (authenticating to Active Directory, NTFS ACLs, IP address restrictions in IIS policy where necessary, auditing access in the Windows security log and IIS logs, no client to install), but the Windows client experience was hideously slow. People hated it for that and it got no traction.
> While writing this article I came across an interesting project under development, Altmount. This would allow you to "mount" published content on Usenet and access it directly without downloading it... super interesting considering I can get multi-gigabit access to Usenet pretty easily.
If operating systems had just put a bit more time into the clients and not stopped all work around 2010, WebDAV could have been much more, covering many use cases of FUSE. Unfortunately, the Mac WebDAV client and Finder's outdated architecture in particular make this just too painful.
This seems like another article where they never define the acronym they use and expect everyone to have seen it already.
WebDAV (Web Distributed Authoring and Versioning) is a set of extensions to the Hypertext Transfer Protocol (HTTP), which allows user agents to collaboratively author contents directly in an HTTP web server by providing facilities for concurrency control and namespace operations, thus allowing the Web to be viewed as a writeable, collaborative medium and not just a read-only medium.[1] WebDAV is defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).
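As a sketch of what those namespace operations look like on the wire, here is a hedged example using only Python's standard library: a minimal PROPFIND request body, and parsing the kind of 207 Multi-Status response a server might return. The URL, file name, and property values are made up for illustration.

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # WebDAV's XML namespace, per RFC 4918

# A minimal PROPFIND body asking for each resource's type and size.
propfind_body = """<?xml version="1.0"?>
<propfind xmlns="DAV:">
  <prop><resourcetype/><getcontentlength/></prop>
</propfind>"""

# A typical 207 Multi-Status response a server might return (made up).
multistatus = """<?xml version="1.0"?>
<multistatus xmlns="DAV:">
  <response>
    <href>/files/notes.txt</href>
    <propstat>
      <prop><getcontentlength>42</getcontentlength></prop>
      <status>HTTP/1.1 200 OK</status>
    </propstat>
  </response>
</multistatus>"""

def listing(xml_text):
    """Extract (href, size) pairs from a multistatus document."""
    root = ET.fromstring(xml_text)
    out = []
    for resp in root.findall(f"{DAV}response"):
        href = resp.findtext(f"{DAV}href")
        size = resp.findtext(f"{DAV}propstat/{DAV}prop/{DAV}getcontentlength")
        out.append((href, size))
    return out

print(listing(multistatus))  # → [('/files/notes.txt', '42')]
```

A PROPFIND against a collection with `Depth: 1` is essentially WebDAV's directory listing, which is why clients can present a server as a mounted drive.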
Relatedly, is there a good way to expose a directory of files via the S3 API? I could only find alpha quality things like rclone serve s3 and things like garage which have their own on disk format rather than regular files.
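For what it's worth, `rclone serve s3` (marked beta) does serve a plain directory of regular files over the S3 API. A sketch from memory of the rclone docs; verify the current flags with `rclone serve s3 --help`, and note the key pair here is a placeholder:

```sh
# Serve ./public over the S3 API on localhost:8080
rclone serve s3 ./public \
  --addr localhost:8080 \
  --auth-key ACCESS_KEY_ID,SECRET_ACCESS_KEY
```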
> Lots of tools support it: [...] Windows Explorer (Map Network Drive, Connect to a Web site...)
Not sure he ever tried supporting that. We once did and it was a nightmare. People couldn't handle it at all even with screenshotted manuals.
My personal experience says that even the dumbest user is able to use FileZilla successfully, and therefore SFTP, while people just don't get the built-in WebDAV support of the OSes.
I also vaguely recall that WebDAV in Windows had quite a bit of randomly appearing problems and performance issues. But this was all a while ago, might have improved since then.
I wonder how much better WebDAV must have gotten with newer versions of the HTTP stack. I only used it briefly in HTTP mode but found the clients to all be rather slow, barely using tricks like pipelining to make requests go a little faster.
It's a shame the protocol never found much use in commercial services. There would be little need for official clients running in compatibility layers like you see with tools like Gqdrive and OneDrive on Linux. Frankly, except for the lack of standardised random writes, the protocol is still one of the better solutions in this space.
I have no idea how S3 managed to win as the "standard" API for so many file storage solutions. WebDAV has always been right there.
It's HTTP, of course there's an extension for that?
Sabre-DAV's implementation seems relatively solid. It's supported in webdavfs, for example. Here are some example headers one might attach to a PATCH request:
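A hedged sketch of such a request, built (not sent) with Python's standard library: as I understand the sabre/dav PartialUpdate plugin, it accepts PATCH with a vendor content type and an `X-Update-Range` header. The URL and byte offsets here are made up.

```python
import urllib.request

# Sketch of a SabreDAV-style partial update request. The header names
# follow the sabre/dav PartialUpdate plugin; the URL is a placeholder.
req = urllib.request.Request(
    "https://dav.example.com/files/notes.txt",
    data=b"hello",  # 5 bytes to write
    method="PATCH",
)
req.add_header("Content-Type", "application/x-sabredav-partialupdate")
req.add_header("X-Update-Range", "bytes=100-104")  # write at offset 100
```

The server applies the body to just that byte range rather than replacing the whole resource, which is the random-write capability core WebDAV lacks.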
Another example is this expired draft. I don't love it, but it uses PATCH+Content-Range. There are some other neat ideas in here, and it shows the versatility & open possibility (even if I don't love re-using this header this way). https://www.ietf.org/archive/id/draft-wright-http-patch-byte...
That's some wishful thinking. I understand the case for JMAP above IMAP, I understand how "it makes sense" to NIH the rest of cal/cardDAV, but I'm not sure what the sales pitch for file transfer is, especially when the ecosystem is pretty much nonexistent.
I'm using WebDAV to sync files from my phone to my NAS. There weren't any good alternatives, really. SMB is a non-starter on the public Internet (SMB-over-QUIC might change that eventually), SFTP is even crustier, rsync requires SSH to work.
Syncthing is great, but it does file sync, not file sharing, so it's not ideal when, say, you want to share a big media library with your laptop but not necessarily load everything onto it.
And yet, I can never seem to find a decent java lib for webdav/caldav/carddav. Every time I look for one, I end up wanting to write my own instead. Then it just seems like the juice isn't worth the squeeze.
This blog post didn't convince me. I must assume the default for most web devs in 2025 is hosting on a Linux VM and/or mounting the static files into a Docker container. SFTP is already there and Apache is too.
The last time I had to deal with WebDAV was for a crusty old CMS nobody liked using many years ago. The support on dev machines running Windows and Mac was a bit sketchy and would randomly have files skipped during bulk uploads. Linux support was a little better with davfs2, but then VSCode would sometimes refuse to recognize the mount without restarting.
None of that workflow made sense. It was hard to know what version of a file was uploaded and doing any manual file management just seemed silly. The project later moved to GitLab. A CI job now simply SFTPs files upon merge into the main branch. This is a much more familiar workflow to most web devs today and there's no weird jank.
I wrote both the WebDAV client (backend) for rclone and the WebDAV server. This means you can sync to and from WebDAV servers or mount them just fine. You can also expose your filesystem as a WebDAV server (or your S3 bucket or Google Drive etc).
The RFCs for WebDAV are better than those for FTP, but there is still an awful lot of under-specified behaviour that servers and clients choose to handle differently, which leads to lots of workarounds.
The protocol doesn't let you set modification times by default, which is important for a sync tool, but popular implementations like ownCloud and Nextcloud do. Likewise with hashes.
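A hedged sketch of how that extension works, as I understand the ownCloud/Nextcloud behaviour: the client sends an `X-OC-Mtime` header (Unix epoch seconds) alongside a PUT, and the server applies it as the file's modification time. This builds the request without sending it; the URL is a placeholder.

```python
import urllib.request

# X-OC-Mtime is an ownCloud/Nextcloud vendor extension, not core WebDAV.
def build_put(url: str, data: bytes, mtime: int) -> urllib.request.Request:
    req = urllib.request.Request(url, data=data, method="PUT")
    req.add_header("X-OC-Mtime", str(mtime))  # Unix epoch seconds
    return req

req = build_put(
    "https://cloud.example.com/remote.php/dav/files/me/notes.txt",
    b"hello",
    1700000000,
)
```

Servers that don't recognise the header simply ignore it, which is why sync tools have to probe for support rather than rely on it.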
However, the protocol is very fast, much faster than SFTP with its homebrew packetisation, as it's based on well-optimised web tech: HTTP, TLS, etc.
I wonder how you would compare it to nfs (which I believe can be TCP based, and probably encrypted)
Not that it is a good comparison. NFS isn't super popular; macOS can do it, but I don't think Windows can. Both Windows and macOS can do WebDAV, though.
NFS is much slower, unless perhaps you deploy it with RDMA. I believe even NFSv4.2 doesn't really support asynchronous calls, or has some significant limitations around them - I've commonly seen a single large write of a few gigs starve all other operations, including lstat, for minutes.
Also, it's borderline impossible to tune NFS to go above 30 Gbps or so consistently; with WebDAV it's a matter of adding a bunch more streams and you're past 200 Gbps pretty easily.
> I should have titled this post "I hate S3".
Use it where it makes sense. And S3 does not necessarily equate to using Amazon. I like the Garage S3 project, which is interesting for smaller-scale uses and self-hosted systems. The project is funded with EU Horizon grants via NLnet.
https://garagehq.deuxfleurs.fr/
I should write a related article: "I hate that the AWS S3 SDK has become a de facto web protocol"
You hate that there is a standard, or aspects of this one? (Or that it's a de facto standard, not clearly specified for example what's required and what just happens to be in AWS' implementation?)
> In fact, you're already using WebDAV and you just don't realize it.
Tailscale's drive share feature is implemented as a WebDAV share (connect to http://100.100.100.100:8080). You can also connect to Fastmail's file storage over WebDAV.
WebDAV is neat.
I use it all the time to mount my CopyParty instance. Works great!
I wish 9p would be more generally available.
Both Windows and Mac have 9p support built in, and both have it locked away from the end user. Windows has it exclusively for communication with WSL. macOS has 9p, but it's exclusively for communication with its virtualization system. It would be amazing if I could just mount 9p from the UI.
I feel like WebDAV will have staying power for a simple reason: it’s easy to understand and implement. My company has a cloud platform for people to share files and I am working on a feature to allow it to work as a drive through WebDAV. We may support other protocols later on but WebDAV made the most sense to start off with because we already have all the infrastructure we need to deliver files over HTTP. The amount of additional complexity to support WebDAV was near-zero and the amount to support other protocols would be a lot more.
Fully agree: it's boring technology (tm), and that's usually the way to go (instead of relying on the next big thing). Also, it's an open standard.
On the same topic, and because I too believe that WebDAV is not dead, far from it: I recently published a work in progress, part of a broader project, that is an nginx module providing a WebDAV file server compatible with Nextcloud sync clients, desktop & Android. It can be used with Gnome Online Accounts too, as well as with Nautilus (and probably others), as a WebDAV server.
Have a look there: https://codeberg.org/lunae/dav-next
/!\ it's a WIP, thus not packaged anywhere yet, no binary release, etc… but all feedback welcome
"FTP is dead" - shared web hosting would like a word. Quite a few web hosts still talk about using FTP to upload websites to the hosting server. Yes, these days you can upload SSH keys and possibly use SFTP, but the docs still talk about tools like FileZilla and basic FTP.
Exhibit A: https://help.ovhcloud.com/csm/en-ie-web-hosting-ftp-storage-...
I haven't used old school FTP in probably 15 years. Surely we're not talking about using that unencrypted protocol in 2025?
From that link:
Well, maybe we are. I'd cross that provider off my list right there.

They mention that the "FTP" service includes SFTP, which is file transfer over SSH (not actually related to classic FTP), which is perfectly secure and supported by most FTP clients like FileZilla.
The premium "SSH connection" you mentioned seems to refer to shell access via SSH, which is a separate thing.
They also support FTP without the SSH transport, and it's not FTPS either. Various IP cameras still support FTP as a way to write files out periodically; I use this to provide a "stream" from a camera (8 seconds per frame because reasons) to the world. Actual streaming via RTSP is also available, but I could never get a stable stream to a video host (like YT or Twitch) from the camera (partially because of a poor quality network connection that can't be upgraded easily). So, FTP + credentials -> walled off directory that's not under the web root -> PHP script in web root -> web browser.
FTP still works great and encryption is a non-priority for 100% of users.
It should be priority for hosting companies though since leaked credentials and websites hosting malware is a problem.
Transport encryption should be a huge priority for everyone. It's completely unacceptable to continue using unencrypted protocols over the public internet.
Especially for the use case of transferring files to and from the backend of a web host. Not using it in that scenario is freely handing over control over your backend to everything in between you and the host, putting everyone at risk in the process.
I've used FTP for static sites for decades by this point. Credentials have never been leaked, transfers have never been interfered with.
> It's completely unacceptable to continue using unencrypted protocols over the public internet.
That is nonsense. The reality is that most data simply is not sensitive, and there is no valid reason to encrypt it. I wouldn't use insecure FTP because credentials, but there's no good reason to encrypt your blog or something.
I'd argue that most people like knowing that what they receive is what the original server sent (and vice versa), but maybe you enjoy ads enough to prefer having your ISP put more of them on the websites you use?
Jokes aside, HTTPS is as much about privacy as it is about reducing the chance you receive data that has been tampered with. You shouldn't avoid FTP only because of credentials, but also because of embedded malware you didn't put there yourself.
I, for one, would like to see an ISP dedicated enough and technically able to inject ads in my FTP stream. :)
Agree but also wonder if ISPs bother with this anymore, now that almost all websites are https.
Didn't we already go through this 10 years ago and then Firesheep got created and thoroughly debunked it?
Firesheep was built to demonstrate how easy HTTP session hijacking was (it was a Firefox extension).
on HN https://news.ycombinator.com/item?id=1827928
Not true. Your hosting provider already has physical access to the computer you're connecting to.
Whether or not the connection you're using is encrypted doesn't really matter because the ISP and hosting provider are legally obligated to prevent unauthorized access.
(It's different if you're the NSA or some other state-level actor, but you're not.)
Shared hosting is dying, but not yet dead; FTP is dying with it - it's really the last big use case for FTP now that software distribution and academia have moved away from FTP. As shared hosting continues to decline in popularity, FTP is going along with it.
Like you, I will miss the glory days of FTP :'(
I think the true death of FTP was Amazon S3 deciding to use its own protocol instead of FTP, as S3 occupies basically the same niche.
FTP does not even come close to supporting the use cases of S3, especially now.
Shared hosting is in decline in much the same way as it was in 2015. Aka everyone involved is still making money hand over fist despite continued reports of its death right around the corner.
The number of shared hosting providers has drastically declined since the 2000s. I would posit that things like Squarespace and hosted WordPress took the lion's share, with the advent of $5-10 VPSes filling the remaining niches.
The remaining hosting companies certainly still make a lot of money, a shared hosting business is basically on autopilot once set up (I used to own one, hence why I still track the market) and they can be overcommitted like crazy.
> The number of shared hosting providers has drastically declined since the 2000s
Yeah, there’s definitely been some wild consolidation. I’ve actually been involved in quite a few acquisitions myself over the last decade in one form or another.
> (I used to own one, hence why I still track the market)
I’m still in the industry, though in a very different segment now. I do still keep a small handful of legacy customers, folks I’ve known for years, on shared setups, but it’s more of a “you scratch my back, I’ll scratch yours” kind of thing now. It’s not really a profit play, more a mix of nostalgia and habit.
Source on the number of providers declining?
No, not at all the case. There has been continued consolidation of the shared hosting space, plus consumer interest in "a website" has declined sharply now that small businesses just feel that they need an instagram to get started. Combine that with site builders eating at shared hosting's market share, and it's not looking good for the future of the "old school" shared hosting industry that you are thinking of.
Seems short sighted, a lot of older people and privacy conscious people of all ages avoid social media. But I guess if they are sustaining a business on only Instagram, good for them.
> There has been continued consolidation of the shared hosting space
That’s been happening, at least from my own memory, since at least the mid-2000s.
> plus consumer interest in "a website" has declined sharply now that small businesses just feel that they need an instagram to get started.
Ah yes, the 2020s version of “just start a Facebook page.” The more things change, the more they stay the same I suppose.
> Combine that with site builders eating at shared hosting's market share
I remember hearing that for the first time in I wanna say...2006? It sure did cause a panic for at least a little while.
> and it's not looking good for the future of the "old school" shared hosting industry that you are thinking of.
Yes, I've heard this one more times than I can count too.
The funny thing is, I’ve been hearing this same “shared hosting is dying” narrative for nearly two decades now. Yet, in that time, I’ve seen multiple companies launch, thrive, and sell for multi-million dollar exits.
But sure, this time it’s definitely the death knell. Meanwhile, I assure you, the bigger players in the space are still making money hand over fist.
https://www.mordorintelligence.com/industry-reports/web-host...
> By hosting type, shared hosting led with 37.5% of the web hosting market share in 2024
I was in the space from the late '90s, acquired ~30 brands, was the largest private consolidator of shared hosting, and sold to a Fortune 500 in 2015. Sounds like you had a similar experience to mine. There's no way you can deny that the glory days of shared hosting are over - while there is still a little money to be made by setting up a VPS with cPanel, and money to be made if you are WebPros or Newfold, the market is contracting and has been for years due to the factors I listed. The Cheval list used to be the hottest marketplace on the planet and now is just a shell of its former self, unfortunately.
I built a simple WebDAV server with Sabre to sync Devonthink databases. WebDAV was the only option that synced between users of multiple iCloud accounts, worked anywhere in the world and didn’t require a Dropbox subscription. It’s a faster sync than CloudKit. I don’t have other WebDAV use cases but I expect this one to run without much maintenance or cost for years. Useful protocol.
DevonThink's WebDAV sync on iOS has been reliable, fast, maintained, and non-subscription, and the app includes a web scraper. Good for saving LLM chatbot markdown.
Author seems to conflate S3 API with S3 itself. Most vendors are now including S3 API compatibility into their product because people are so used to using that as a model
They do mention S3-compatible servers later in the post. It really seems to be about the protocol itself.
More like an attempt at S3 API compatibility...
I was about to make a very similar comment.
There really is nothing wrong with the S3 API and the complaints about Minio and S3 are basically irrelevant. It’s an API that dozens of solutions implement.
One interesting use of WebDAV is Sysinternals (a collection of tools for Windows); it's accessible from Windows Explorer via WebDAV by going to \\live.sysinternals.com\Tools
Isn't that SMB, not webdav?
I guess the "\\$HOSTNAME\$DIR" URL syntax in Windows Explorer also works for WebDAV. Is it safe to have SMB over WAN?
I just tried https://live.sysinternals.com/Tools in Windows Explorer, and it also lists the files, identical to how it would show the contents of any directory.
Even running "dir \\live.sysinternals.com\Tools", or starting a program from the command prompt like "\\live.sysinternals.com\Tools\tcpview64" works.
"\\server\share" is called a UNC path, which can be served by SMB, WebDAV or another type of server.
(old ref, but the architecture hasn't changed AFAIK)
Ref: https://learn.microsoft.com/en-us/previous-versions/windows/...
IIRC, Windows for a while had native WebDAV support in Explorer, but setting it up was very non-obvious. Not sure if it still does, since I've moved fully to Linux.
I use WebDAV for serving media over Tailscale to Infuse when I'm on the move. SMB did not play nicely at all, and NFS is not supported.
Go has quite a good one in the quasi-stdlib golang.org/x/net/webdav package that just works with only a small bit of wrapping in a main() etc.
Although I've since written one in Elixir that seems to handle my traffic better.
(You can also mount them on macOS and browse with Finder / shell etc, which is pretty nice.)
Do you happen to have the source code open somewhere? I was just looking into WebDAV via Elixir.
Recently set up WebDAV for my Paperless-ngx instance so my scanner can directly upload scans to Paperless. I wish Caddy supported WebDAV out of the box; I had to use this extension: https://github.com/mholt/caddy-webdav
Which scanner, if you don’t mind me asking? I’ve got a decade+ old ix500 that had cloud support but not local SMB.
EPSON WorkForce ES-580W. Got it from eBay with "damaged packaging" (not really) from the Epson Outlet Store in my country. With a discount code I only paid 324 €. There is also an official promotion by Epson (in Europe only, maybe?) where you get 75 € cashback for this scanner, so effectively 249 €, which is a VERY good price. Also supports SMB, but I'm running Paperless on my VPS, hence I used WebDAV (if you do this: the scanner will do a GET request to the WebDAV URL first, which must be answered with a 200 OK or it will never try WebDAV).
I debated between this scanner and the Brother ADS-1800W, but the Brother has a slow UI and no output tray (the thingy where the paper lands when it's done scanning).
Thank you!
If you need SFTP independent of Unix auth, there is SFTPGo.
SFTPGo also supports WebDAV, but for the use cases in the article SFTP is just better.
I was surprised, then not really surprised, when I found out this week that Tailscale's native file sharing feature, Taildrive, is implemented as a WebDAV server in the network.
https://tailscale.com/kb/1369/taildrive
What else would you expect, just out of curiosity? SMB? NFS? SSHFS?
A proprietary binary patented protocol...
and do what, implement virtual filesystem driver for every OS ?
Only if adding that complexity locks in more subscribers for premium features and support.
The built-in WebDAV client in Windows Explorer is embarrassingly slow. Pretty much unusable for anything serious.
For sure. I tried to set up a collaboration environment for a customer years ago using WebDAV over SSL in lieu of Dropbox. Everything worked great (authenticating to Active Directory, NTFS ACLs, IP address restrictions in IIS policy where necessary, auditing access in the Windows security log and IIS logs, no client to install), but the Windows client experience was hideously slow. People hated it for that and it got no traction.
In my experience, WebDAV has always been slow, no matter which platform.
Can WebDAV be made fast?
OTOH the gio-based WebDAV access built into Nautilus and Thunar is something I use daily, and it works quite well for a FUSE-based filesystem.
Unlike NFS or SMB, WebDAV mounts do not get stuck for a minute when the connection becomes unstable.
Just like the author, I use WebDAV for Joplin, also Zotero. Just love them so much.
We need to keep using open protocols such as WebDAV instead of depending on proprietary APIs like the S3 API.
OmniFocus also supports WebDAV for folks that prefer to self-host - https://support.omnigroup.com/documentation/omnifocus/univer...
Kudos to Omni Group for supporting open-standard on-prem sync.
Copyparty has WebDAV and SMB support (among others), which makes it a good candidate to combine with a Kodi client, perhaps?
> While writing this article I came across an interesting project under development, Altmount. This would allow you to "mount" published content on Usenet and access it directly without downloading it... super interesting considering I can get multi-gigabit access to Usenet pretty easily.
There is also NzbDav for this too, https://github.com/nzbdav-dev/nzbdav
FTP is not dead. A huge percentage of wind turbines use FTP for data transfer.
If operating systems had just put a bit more time into the clients and not stopped all work around 2010, WebDAV could have been much more, covering many use cases of FUSE. Unfortunately, the macOS WebDAV client and Finder's outdated architecture in particular make this just too painful.
I feel the pain when you refer to MinIO. I ended up using a pre-15 version in order to keep all the previous features, but that sucks. I will try this.
This seems like another article where they never define the acronym they use and expect everyone to have seen it already.
WebDAV (Web Distributed Authoring and Versioning) is a set of extensions to the Hypertext Transfer Protocol (HTTP), which allows user agents to collaboratively author contents directly in an HTTP web server by providing facilities for concurrency control and namespace operations, thus allowing the Web to be viewed as a writeable, collaborative medium and not just a read-only medium.[1] WebDAV is defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).
https://en.wikipedia.org/wiki/WebDAV
Relatedly, is there a good way to expose a directory of files via the S3 API? I could only find alpha quality things like rclone serve s3 and things like garage which have their own on disk format rather than regular files.
consider versitygw or s3proxy
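If the rclone route is acceptable despite its alpha status, the invocation mentioned above is short. A sketch (the directory path and the key pair are placeholders):

```shell
# Expose ./files over the S3 API with rclone's (alpha-quality) S3 server;
# ACCESS_KEY,SECRET_KEY is a placeholder credential pair for v4 auth
rclone serve s3 ./files --addr :9000 --auth-key ACCESS_KEY,SECRET_KEY
```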
> FTP is dead
Says who?
A lot of apps support WebDAV. It seems to be better supported than SFTP?
You can run a WebDAV server using caddy easily.
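A sketch of that setup, assuming the third-party mholt/caddy-webdav module (directive names per its README; WebDAV is not built into stock Caddy, so it needs to be compiled in):

```shell
# Build Caddy with the WebDAV module (needs xcaddy)
xcaddy build --with github.com/mholt/caddy-webdav

# Minimal Caddyfile: serve /srv/dav read-write over WebDAV on :8080
cat > Caddyfile <<'EOF'
{
    order webdav last
}
:8080 {
    root * /srv/dav
    webdav
}
EOF
./caddy run --config Caddyfile
```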
> Lots of tools support it: [...] Windows Explorer (Map Network Drive, Connect to a Web site...)
Not sure he ever tried supporting that. We once did and it was a nightmare. People couldn't handle it at all even with screenshotted manuals.
My personal experience says that even the dumbest user is able to use FileZilla successfully, and therefore SFTP, while people just don't get the built-in WebDAV support of the OSes.
I also vaguely recall that WebDAV in Windows had quite a bit of randomly appearing problems and performance issues. But this was all a while ago, might have improved since then.
I wonder how much better WebDAV must have gotten with newer versions of the HTTP stack. I only used it briefly in HTTP mode but found the clients to all be rather slow, barely using tricks like pipelining to make requests go a little faster.
It's a shame the protocol never found much use in commercial services. There would be little need for official clients running in compatibility layers like you see with tools like Google Drive and OneDrive on Linux. Frankly, except for the lack of standardised random writes, the protocol is still one of the better solutions in this space.
I have no idea how S3 managed to win as the "standard" API for so many file storage solutions. WebDAV has always been right there.
Beautiful.
> FTP is dead (yay),
Hahahaha, haha, ha, no. And it's probably (still) more used than WebDAV.
pls send help
Yeah, that must have been wishful thinking.
FTP is such a clunky protocol, it is peculiar it has had such staying power.
No random writes is the nail in the coffin for me
It's HTTP, of course there's an extension for that?
SabreDAV's partial-update extension seems to be relatively well designed. It's supported in webdavfs, for example. Here are some example headers one might attach to a PATCH request:
https://sabre.io/dav/http-patch/
https://github.com/miquels/webdavfs

Another example is this expired draft. I don't love it, but it uses PATCH+Content-Range. There are some other neat ideas in here, and it shows the versatility & open possibility (even if I don't love re-using this header this way). https://www.ietf.org/archive/id/draft-wright-http-patch-byte...
Apache has a PUT with Content-Range: https://github.com/miquels/webdav-handler-rs/blob/master/doc...
Great write-up on the rclone forum about trying to support partial updates: https://forum.rclone.org/t/support-putstream-for-webdav-serv...
It would be great to see a proper extension formalized here! But there are options.
It has been 16 years since I started this webdav client for Java:
https://github.com/lookfirst/sardine
Still going.
Sardine is great. I recently used it to automate some backups from a webdav share. No complaints whatsoever :-)
JMAP will eventually replace WebDAV.
That's some wishful thinking. I understand the case for JMAP over IMAP, I understand how "it makes sense" to NIH the rest of cal/cardDAV, but I'm not sure what the sales pitch for file transfer is, especially when the ecosystem is pretty much nonexistent.
I'm using WebDAV to sync files from my phone to my NAS. There weren't any good alternatives, really. SMB is a non-starter on the public Internet (SMB-over-QUIC might change that eventually), SFTP is even crustier, rsync requires SSH to work.
What else?
Syncthing is pretty nice for that sort of thing.
Syncthing is great but it does file sync, not file sharing, so not ideal when you say want to share a big media library with your laptop but not necessarily load everything on it
That moves the goalpost. The user I was replying to wanted sync and didn't seem to be using other functionality like that.
I have just tried to run their unofficial apps, but I couldn't make them work.
>It's broadly available as you can see
And yet, I can never seem to find a decent Java lib for WebDAV/CalDAV/CardDAV. Every time I look for one, I end up wanting to write my own instead. Then it just seems like the juice isn't worth the squeeze.
This blog post didn't convince me. I must assume the default for most web devs in 2025 is hosting on a Linux VM and/or mounting the static files into a Docker container. SFTP is already there and Apache is too.
The last time I had to deal with WebDAV was for a crusty old CMS nobody liked using many years ago. The support on dev machines running Windows and Mac was a bit sketchy and would randomly have files skipped during bulk uploads. Linux support was a little better with davfs2, but then VSCode would sometimes refuse to recognize the mount without restarting.
None of that workflow made sense. It was hard to know what version of a file was uploaded and doing any manual file management just seemed silly. The project later moved to GitLab. A CI job now simply SFTPs files upon merge into the main branch. This is a much more familiar workflow to most web devs today and there's no weird jank.