"Bandwidth efficiency" for RSS has come and gone as an issue, but it will come again; the improvements from the last pass were only linear, so as more people come online the problem will rear its head again. And next time, the low-hanging fruit will be gone.
There are two fundamental problems with the current system:
- When an RSS file changes, the entire file is transferred. This puts a hard floor on how much bandwidth can be saved by any technique that avoids downloading the file unless necessary, such as using ETags. For instance, Instapundit's RSS 1.0 file is 10KB; Scripting News's RSS file is 15KB. Start doing the bandwidth math and that's a lot of transfer. Many sites are even less efficient and have multi-hundred-KB RSS files, many without knowing it. Every time the site changes, everybody gets a whole new copy. Very inefficient.
- There is only one source for the changes. When Scripting News changes, everybody has to hammer Scripting.com.
- Update: A third problem is that the only way to scale up right now is to spend more on bandwidth, money the blogger may not have. See this later posting.
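To see why ETags can only go so far, here is a minimal sketch of the server-side conditional-GET logic (the function names and the MD5-based tag are illustrative assumptions, not any particular server's implementation). It saves the transfer only when the feed hasn't changed; the moment one item is added, the client gets the whole file again.

```python
import hashlib

def etag_for(body):
    # Illustrative validator: a hash of the feed body.
    return '"%s"' % hashlib.md5(body).hexdigest()

def serve_feed(body, if_none_match):
    """Return (status, payload) as a conditional GET would.

    304 means "unchanged, reuse your cached copy": only headers
    cross the wire. Any change at all falls back to the full file.
    """
    tag = etag_for(body)
    if if_none_match == tag:
        return 304, b""   # nothing to transfer
    return 200, body      # the entire RSS file, every time it changes

# First fetch: no cached ETag, so the client gets the whole file.
feed = b"<rss>...15KB of items...</rss>"
status, payload = serve_feed(feed, None)        # 200, full body
tag = etag_for(feed)

# Later polls: unchanged feed costs almost nothing...
status, payload = serve_feed(feed, tag)         # 304, empty body

# ...but add one item and the full 15KB flows again.
feed2 = feed + b"<item>new post</item>"
status, payload = serve_feed(feed2, tag)        # 200, full body
```

This is exactly the linear improvement described above: conditional GET eliminates the no-change polls, but the per-change cost is still the whole file to every subscriber.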
Ideally, to keep the RSS system from imploding as more people come online, we need to reduce the number of bytes flowing per update, and we need to partially decentralize the system.