# Are SSL (HTTPS) Downloads Slower Than HTTP?



## nunim (Jan 2, 2016)

Pretty much as the topic says: are SSL (HTTPS) file downloads slower than normal HTTP? If so, is there anything I can do to speed them up, or any sort of compression or something I should have enabled in NGINX?


----------



## perennate (Jan 2, 2016)

SSL itself adds very little to the amount of traffic transmitted. If the data is highly compressible (e.g. text files), you can enable gzip compression over SSL to reduce it further, but compressing over an encrypted connection has some known security issues and isn't recommended unless absolutely needed.
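If you do decide to turn gzip on for compressible text in NGINX anyway, a minimal sketch looks something like this (directive values are illustrative, not tuned recommendations, and the security caveat about compression over TLS still applies):

```nginx
# Compress text-like responses only; already-compressed binaries gain nothing.
gzip on;
gzip_types text/plain text/css application/json application/javascript application/xml;
gzip_min_length 1024;   # skip tiny responses where gzip overhead isn't worth it
gzip_comp_level 5;      # 1-9; higher levels burn more CPU for little extra gain
```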


It's more likely that the computational overhead of the cryptographic operations will impact performance (though only if the client has a high-throughput connection, and only for large files). If CPU is the bottleneck, then compression won't help; in fact, it'll make things worse, since compression also has a high computational cost compared to just dumping the stream to a file. There's not much tuning you can do on the client side unless you have access to the client. On the server, you can make sure hardware AES acceleration (AES-NI) is available and being used.
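A quick way to check for hardware AES on a Linux server (the commands are standard, though the exact output format varies by distro and OpenSSL version):

```shell
# Does the CPU advertise the AES-NI instruction set? (flag is "aes" on Linux)
if grep -qw aes /proc/cpuinfo; then
    echo "AES-NI: available"
else
    echo "AES-NI: not found (AES will run in software)"
fi

# Benchmark AES throughput the way OpenSSL (and hence most TLS stacks) uses it
openssl speed -seconds 1 -evp aes-128-gcm 2>/dev/null | tail -n 3
```

With AES-NI you'll typically see multiple GB/s per core, which is far above most network links, so the cipher itself is rarely the bottleneck on modern hardware.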


Obviously the first thing to do is to test a download over HTTP and then over HTTPS, see whether SSL has any performance impact at all, and if it does, work out where the bottleneck is.
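A simple way to run that comparison is curl's timing variables (the URLs below are placeholders; point them at the same test file served over both protocols on your own box):

```shell
# time_appconnect is when the TLS handshake finished (0.000000 for plain HTTP),
# so the HTTP vs HTTPS lines show exactly where any extra time goes.
for url in "http://your-server/testfile" "https://your-server/testfile"; do
    curl -s -o /dev/null \
         -w "%{url_effective}\n  tls done: %{time_appconnect}s\n  total:    %{time_total}s\n  speed:    %{speed_download} bytes/s\n" \
         "$url"
done
```

If `speed_download` is nearly identical for both runs and only `time_appconnect` differs, the handshake is your whole SSL cost.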


----------



## KuJoe (Jan 2, 2016)

Some ISPs are known to throttle encrypted traffic, so depending on the ISP, SSL downloads can be significantly slower. When I was using Comcast almost 3 years ago, an HTTPS connection would run at about 1/10th the speed of HTTP (this was somewhat resolved when they upgraded everybody to 100Mbps).


----------



## SkyNetHosting (Feb 19, 2016)

The initial response time will be slower for HTTPS than HTTP because of the handshake, but the download itself should run at about the same speed. Overall there shouldn't be any significant difference.


----------



## GlideServers (Feb 24, 2016)

Downloads are slower over HTTPS than HTTP, however the difference is very, very small.


It is slower because HTTPS has to do a TLS handshake first, which costs an extra round trip or two before data starts flowing. So the delay depends on your latency to the server; against a nearby server it's only a few milliseconds.


For the download itself, there is no significant difference.
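For what it's worth, the handshake cost is easy to measure with curl: `time_appconnect` minus `time_connect` is roughly the TLS setup time (example.com here is just a convenient public test host):

```shell
# time_connect    = TCP connection established
# time_appconnect = TLS handshake finished; the gap between them is the TLS cost
curl -s -o /dev/null \
     -w "tcp connect:  %{time_connect}s\ntls complete: %{time_appconnect}s\n" \
     https://example.com/
```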


----------



## drmike (Feb 24, 2016)

Depends.  


The bigger the data payload, the more overhead: more retransmits, more errors, more CPU. It's fine and fast on modern CPUs, roughly 2010 onward.


Gzip should be enabled across the board by default. It compresses very well and very fast. Ideally it's used in conjunction with a backend cache so you aren't gzipping the same stuff over and over.
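As a rough illustration of why gzip pays off for text (the exact numbers will vary; this just generates a compressible file locally and compares sizes):

```shell
# Generate a highly compressible text file (~580 KB of numbers, one per line)
seq 1 100000 > sample.txt

# Compress a copy at the default level, keeping the original (-k)
gzip -k -f sample.txt

# Compare sizes: the .gz copy should be a small fraction of the original
echo "original: $(wc -c < sample.txt) bytes, gzipped: $(wc -c < sample.txt.gz) bytes"
```

Binary formats like JPEG, MP4, or ZIP are already compressed, so running them through gzip mostly just burns CPU.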


I'd experiment with what you have and use something like webpagetest.org to get outside, real-world views of the different options.


----------

