Thanks for explaining, that's an interesting process, but I can't imagine it's really any faster than individual images. I mean, how long can it take to load a 16x16 image?
Remember that EVERY file request has the overhead of handshaking. The browser asking "do you have this file?", the server saying "yes, I have that file, it's dated blah blah blah", the browser going "ok, that's newer than my cache (if any), send me it", then "ok, it's on its way" -- all before a single blasted byte of actual data is transferred.
The above is a gross oversimplification. EACH of those steps can basically take the same amount of time as a "ping". So if ping time to the server is 300ms, you're looking at 1.2 SECONDS of overhead per file.
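To put numbers on it, here's a back-of-napkin sketch -- nothing more than that multiplication spelled out, assuming the simplified four-step handshake described above:

```ts
// Rough arithmetic only: several handshake steps, each costing roughly one
// ping, before a single byte of payload moves. Numbers are examples.
const pingMs = 300;    // example round-trip time to the server
const roundTrips = 4;  // request, "I have it", cache check, "here it comes"
const overheadPerFileMs = pingMs * roundTrips;

console.log(`${overheadPerFileMs}ms before byte one of actual data`); // 1200ms
```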
Thanks to HTTP parallelism, the real-world average is closer to 200ms of total overhead per file. Partly due to real-world ping being 30 to 50ms, partly because of the browser's and server's ability to send multiple files in parallel -- beneficial when the data packets can take multiple different routes between server and client.
It is thus the raw total number of files you have that can add massive overhead to your page load time, regardless of how large the files are! In fact, a dozen 10k files can take many times longer to transfer than a single 100k file. The faster the connection, the more pronounced that speed penalty for "more files" is.
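A quick hedged comparison -- made-up but plausible numbers, ignoring parallel overlap -- just to show how the per-request overhead swamps the actual payload:

```ts
// Naive serial estimate: per-request handshake overhead vs. raw transfer time.
// Both constants are assumptions for illustration, not measurements.
const overheadMs = 200;      // average per-request handshake overhead
const bandwidthKBps = 500;   // assumed ~4 Mbit/s of effective throughput

const transferMs = (kb: number) => (kb / bandwidthKBps) * 1000;

const oneBigFile = overheadMs + transferMs(100);          // 1 x 100k  => ~400ms
const dozenSmall = 12 * overheadMs + transferMs(12 * 10); // 12 x 10k  => ~2640ms

console.log({ oneBigFile, dozenSmall }); // the dozen tiny files take ~6x longer
```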
That's why icon fonts are popular: it's one file. It's why CSS sprites came into existence. If you're throttled by connection limits, or not living in the magical fantasy-land of fiber connections, a page using 40 or 50 separate files to do the job of seven or eight can take 30 seconds or more of extra overhead JUST because it's all a bunch of separate files!
That's why HTTP parallelism matters. That's why sharting endless <img> or <script> into the markup for presentation is dumbass. It's why breaking up your CSS into a half dozen separate files is dumbass. It's why things like "preload" or HTTP/2 "push" can provide benefits, bypassing the handshake mechanism and allowing control of the load order.
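On the preload side, here's a minimal sketch of one way to hand the browser that list up front: preload hints sent as a Link header alongside the HTML response (the same header some HTTP/2 servers used to trigger push). The asset paths and port are made up; a static <link rel="preload"> in the head does the same job.

```ts
// Minimal Node sketch (hypothetical asset paths): preload hints as a Link
// header on the HTML response, so the hint travels with the page itself.
import { createServer } from "node:http";

createServer((_req, res) => {
  res.setHeader("Link", [
    "</css/screen.css>; rel=preload; as=style",
    "</fonts/icons.woff2>; rel=preload; as=font; crossorigin",
  ]);
  res.setHeader("Content-Type", "text/html; charset=utf-8");
  res.end("<!DOCTYPE html><html><!-- page markup here --></html>");
}).listen(8080);
```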
My article on push/preload touches on a lot of this.
https://medium.com/codex/http-parallelism-push-preload-and-why-markup-bloat-is-the-enemy-ec043ed0733e

It's why I use the formula (total # of files - 8) / 5 == seconds for my load time estimates. Because "but it's fast for me" means dick shit. You basically get 8 files "free" in the same overlapping handshake, then each file past the first eight averages 200ms. Might be faster than that for you or me, but for normal people on cheap throttled shared connections? That's what you should be counting.
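In code form, the same estimate, with the "first eight free" and "200ms apiece after that" numbers from above baked in as assumptions:

```ts
// Load-time overhead estimate: ~8 requests overlap "for free", then each
// additional file averages ~200ms of handshake overhead (5 files per second).
function estimateOverheadSeconds(totalFiles: number): number {
  const freeFiles = 8;       // roughly overlapped in the same handshake window
  const filesPerSecond = 5;  // 200ms average overhead per extra file
  return Math.max(0, totalFiles - freeFiles) / filesPerSecond;
}

console.log(estimateOverheadSeconds(50)); // a 50-file page => ~8.4s of overhead alone
```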
Bottom line? If those were all separate images, it could add as much as a minute to the load time for many users. EVEN if they're tiny little files.
File size often has less to do with load times than the sheer number of separate files.

It's also why keeping the HTML as small as possible pays benefits. No secondary files -- EVER, not even with preload or push -- can start until the HTML has finished downloading AND being parsed. EVER! So if you shit presentation and static scripting into the markup, you're not just missing a caching opportunity on subpages that share that same style, you're also taking a steaming dump on the start of parallel downloads of those sub-media.