It works fine now. The program curl is very handy:
I've always found it a bit crippled -- as evidenced by its spitting out the fallback markup that should be ignored, instead of obeying the ACTUAL redirect in the Location header.
I mean, I get it that curl isn't a UA, but its behavior is a bit wonky compared to wget's. You end up having to dick around with extra cryptic command-line flags to get it to show anything meaningful, and even then it leaves far too much in the caller's hands.
I mean, what is the option again? -v? -i? --head? Starts reminding me of DrossDOS where you have to type in two lines of flags ending in --please to have the "ls" command not erase the partition.
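For the record, those flags all do different things: -i includes the response headers above the body, -I (a.k.a. --head) sends an actual HEAD request, -v dumps the whole exchange, and -L is the one that makes curl follow the redirect like a browser would. A quick sketch, with example.com standing in as a placeholder host:

```shell
# Show just the response headers, so the 301 status and its
# Location: header are visible (-s silences the progress meter):
curl -sI http://example.com/

# Or tell curl to actually follow the redirect chain and
# print the final page instead of the fallback markup:
curl -sL http://example.com/
```

So the "extra cryptic BS" boils down to one flag: curl's default is to show you exactly what the server sent, and -L opts in to the browser-like behavior.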
What was a bad '80s joke is a 21st-century reality.
This is how I do the HTTP to HTTPS redirection using Nginx:
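A minimal redirect block of that sort, sketched here with a placeholder domain, usually looks something like:

```nginx
# Catch all plain-HTTP requests and answer with a permanent redirect
# to the HTTPS version of the same URL. example.com is a placeholder.
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}
```

Using `return 301` in a dedicated port-80 server block is generally preferred over a rewrite rule, since nginx can answer it without evaluating regexes.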
Good to know if I ever use nginx, which seems unlikely. I've never had anything I deploy be bloated enough, request-heavy enough, or poorly thought out enough for the difference between it and Apache to make any difference.
I mean, I've dealt with it on a few clients' servers, but the limitations in what it can do just don't seem to justify the MINOR difference in speed... and in the cases where it does make a difference -- serving static files -- having a static domain running lighttpd blows both out of the water.
More true now that PHP-FPM is commonplace.