HTTPS: Developer Fundamentals

By Kellen 

HTTPS is slowly becoming ubiquitous. In this article, we'll take a look at what it protects you from, how it keeps you secure, how (and whether) it impacts performance, and what's on the horizon.

When you visit a site that is protected by HTTPS, you are sending HTTP over an encrypted Secure Sockets Layer/Transport Layer Security (SSL/TLS) connection. To achieve HTTPS, you — as the visiting client — perform a three-step dance known as the SSL/TLS handshake with the hosting server.

Each time you access a hostname over HTTPS, the following public key exchange plays out in fractions of a second:

1. The client sends a ClientHello announcing the protocol versions and cipher suites it supports.
2. The server replies with a ServerHello selecting a cipher suite and presents its certificate, which contains its public key.
3. The client verifies the certificate, the two sides exchange key material to derive a shared session key, and each side confirms with a Finished message.

From then on, ordinary HTTP flows inside the encrypted session. It is Secure HTTP: HTTPS.

If you are crafting an application that contains sensitive user data, the importance of security is clear. Jumping into application security, though, raises many questions.

Without further ado, we will get you some answers.

Nasty, Wild Things

You may feel that having a site with no login portal and no sensitive information means that you do not require encryption. Alas, this is not the case. Even if you are hosting a simple blog or static page, if you choose not to encrypt then you are putting your visitors at risk.

In 2015, Google, along with the University of California, Berkeley, discovered that more than 5% of unique daily IP addresses visiting Google had been affected by an intrusion technique known as ad injection.

This is frightening!

Through unencrypted pages, an attacker can inject misleading Buy Now links to harvest credit card numbers or falsified Register Here buttons to steal personal information from your visitors. Whether your open HTTP packets are travelling through public Wi-Fi networks, the routers of your ISP, or the last stretch before your datacenter, they are ripe for exploitation.

Worst of all, in addition to being vulnerable to dangerous content injection, HTTP packets travel in plaintext: any Person-in-the-Middle between the client and the server can read the data exactly as it was sent.
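To make the plaintext problem concrete, here is a small sketch of what a hypothetical login request looks like on the wire over plain HTTP; the host, path, and session cookie are invented for illustration:

```shell
# A hypothetical HTTP request, byte-for-byte as it travels the network
REQUEST=$'GET /login HTTP/1.1\r\nHost: example.com\r\nCookie: session=abc123\r\n\r\n'

# Anyone on the path -- public Wi-Fi, an ISP router, a backbone tap --
# can pluck the session cookie straight out of the stream:
printf '%s' "$REQUEST" | grep -o 'session=abc123'
```

Over HTTPS, those same bytes are encrypted before they leave the client; an observer sees only the destination and the TLS record framing.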

Encrypting your site, no matter the complexity, protects your application and your users while helping to improve the overall safety of the greater Internet. You can consider the vaccine metaphor: just as the general public is safer when individuals inoculate against contagious pathogens, the overall web is safer when every site is encrypted.

HTTPSecure: The Goodness

Let us look at some of the key advantages you receive by encrypting your application traffic:

- Confidentiality: traffic between the client and the server is encrypted, so eavesdroppers cannot read it.
- Integrity: data cannot be altered in transit without detection, which blocks content injection.
- Authentication: the certificate assures visitors they are talking to your server and not an impostor.

Above all, HTTPS is vital for a healthy Internet. Applying it can be time-consuming. In addition, there are a few myths around performance that dampen SSL/TLS enthusiasm, despite the clear benefits. What are some of the performance drawbacks that prevent HTTPS from being applied to every site? Let us take a look.

Looking for the easiest way to get HTTPS? We looked at three of the easiest ways to get HTTPS running on your pages for free. You can read that post here.

HTTPS, Server Performance

In recent history, HTTPS has been maligned for allegedly taking a bite out of application performance. There is nuance to performance related claims; we need to qualify what type of performance is being impacted. First, we will take a look at claims of the computational burden placed on hosting servers. After that, we will look at visitor rendering times.

Encryption requires your servers to do additional work. Between computing the ciphers and accommodating the extra requests within the handshake, some assume that the added cost and residual slow-down make HTTPS not worth the trouble. The CPU performance tax of SSL/TLS was once contentious; since 2010, when Google migrated the popular mail application Gmail to HTTPS, the debate has all but vanished.

Google found that their production front-end servers spent less than 1% of their CPU load and less than 2% of their network overhead on SSL/TLS. They observed that their production hardware was capable of a mighty 1,500 SSL/TLS handshakes per second, per core, before optimization.
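To put Google's pre-optimization figure in perspective, here is a back-of-envelope sketch of handshake capacity; the core count is an assumed example:

```shell
# Handshake capacity = cores × handshakes per second per core
CORES=8            # assumption: a modest 8-core front-end server
HS_PER_CORE=1500   # Google's pre-optimization figure
echo "$(( CORES * HS_PER_CORE )) handshakes/second"   # 12000 handshakes/second
```

Even before tuning, a small server can absorb thousands of fresh handshakes each second.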

Do note, though, that Google applied 1024-bit keys. Using a beefier 2048-bit or 4096-bit SSL key will increase the relative CPU load.
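You can gauge that relative cost on your own hardware; OpenSSL's built-in benchmark reports private-key signatures per second at each key size (figures vary by machine, and an installed openssl binary is assumed):

```shell
# Benchmark RSA private-key operations -- the expensive step of the handshake --
# at 2048-bit and 4096-bit key sizes; expect far fewer signs/second at 4096
openssl speed -seconds 1 rsa2048 rsa4096 2>/dev/null | grep 'rsa'
```

The gap between the two rows is the "beefier key" tax in action.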

HTTPS, Page Rendering Speeds

While claims of CPU drain are exaggerated, the impact on page rendering times is more prickly. Rendering time is a significant factor in providing a pleasant user experience. In one of many supporting data points, Amazon discovered that every 100ms of added latency led to a 1% reduction in sales.

To look into rendering time, we will run a quick experiment. We can use curl to measure how long it takes to form a standard TCP connection compared with an HTTPS connection. The HTTPS connection requires three additional roundtrips, as we witnessed within the opening public key exchange.

goodroot@flyio ~$ curl -kso /dev/null -w "tcp:%{time_connect}, ssldone:%{time_appconnect}\n" https://example.com

tcp:0.033, ssldone:0.132

It is important to note that the SSL/TLS rendering penalty is from the packets’ “wrapper” only. Your application data inside the secure wrapper will ultimately determine whether your overall rendering times are performant.

We see that a regular TCP connection had a round-trip time of 0.033 seconds while the SSL connection took 0.132 seconds. That is an increase of 0.099 seconds per SSL/TLS request, a 300% increase! If your application makes multiple secure requests during the user experience, the time adds up quickly.
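The arithmetic behind that figure, run through awk since the shell lacks floating point (the timings are those measured above):

```shell
# SSL/TLS overhead from the curl timings: absolute seconds and relative increase
tcp=0.033
ssl=0.132
awk -v t="$tcp" -v s="$ssl" \
    'BEGIN { printf "overhead: %.3fs (+%.0f%%)\n", s - t, (s - t) / t * 100 }'
# prints: overhead: 0.099s (+300%)
```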

Luckily, there are advancements on the horizon that should result in general improvement.

HTTP/2 and TLS V1.3: New Speedy

HTTP/2 introduces a dizzying array of improvements to the HTTP protocol. Let us look at three useful adjustments within HTTP/2 that improve the performance of HTTPS:

1. Multiplexing: all requests and responses share a single connection, so the SSL/TLS handshake is paid for once rather than once per connection.
2. Header compression: HPACK shrinks the repetitive headers on every request, reducing the bytes that must be encrypted and shipped.
3. Server push: the server can send assets the page will need before the client asks, eliminating extra roundtrips.

The improvements are significant enough that an encrypted site served over HTTP/2 will still outperform an unencrypted site using HTTP/1.x. Instead of being punished for securing your application, modern HTTPS over HTTP/2 can make your application faster. Good thing, too: modern browsers like Chrome and Firefox only support HTTP/2 when HTTPS is present. If you want a short crash-course on HTTP/2 for developers, give our article a read.

Along with the improvements within HTTP/2, the TLS protocol itself is soon to be updated from 1.2 to 1.3. TLS V1.3 requires one less roundtrip during the public key exchange, and it introduces the concept of 0-RTT Data: when a user returns to a site where they have already completed the exchange handshake, they do not need to repeat it.
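Sticking with this article's three-roundtrip handshake count, the savings compound; the 50ms per-roundtrip latency here is an assumed figure:

```shell
# Handshake time = roundtrips × per-roundtrip latency
RTT=50   # assumed milliseconds per roundtrip
echo "TLS V1.2 (3 roundtrips):         $(( 3 * RTT ))ms"
echo "TLS V1.3 (one roundtrip fewer):  $(( 2 * RTT ))ms"
echo "TLS V1.3 0-RTT (return visitor): 0ms"
```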

Unfortunately, TLS V1.3 is not here yet and with HTTP/2 there is still a rendering penalty for SSL/TLS connections; the penalty increases based on visitor distance from the server. Never fear, though, you can mitigate the impact by terminating SSL connections as close to the visitor as possible.

Remember: speeding up the handshake and compressing the secure headers will only provide so much relief; you will always need to optimize your application itself. Efficient SSL/TLS is but one ingredient of a well-performing application.

Shortening SSL/TLS Termination

If you have a server within Chicago and a user from Frankfurt wants to visit your application, rendering times inflate due to distance. Each roundtrip becomes slower. Given that the SSL/TLS handshake requires three roundtrips, this can cripple an experience!

For example, consider that each roundtrip to Chicago takes 200ms: the three-roundtrip handshake alone adds 600ms before any application data moves…

What seemed a relatively small amount of overhead ballooned over distance, resulting in a molasses-like impact.

To hedge the expense of the handshake, you may consider using a Content Delivery Network (CDN); the CDN will have edge-servers located around the world to provide SSL termination. SSL termination is where SSL/TLS is decrypted, sending lighter, thus quicker, unencrypted data upstream. By terminating closer to the user, we shorten the route and reduce the roundtrip time.

For example, suppose your CDN has a Frankfurt edge-server and each roundtrip to it takes 40ms: the handshake completes in 120ms instead of the full 600ms of travelling all the way to Chicago.
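The two scenarios differ only in per-roundtrip latency, so the comparison reduces to a single multiplication; the 200ms and 40ms figures are the assumed latencies from the example:

```shell
# Total handshake time = 3 roundtrips × latency to the terminating server
for pair in "Chicago origin:200" "Frankfurt edge:40"; do
  name=${pair%:*}   # text before the colon
  rtt=${pair#*:}    # milliseconds after the colon
  echo "$name: $(( 3 * rtt ))ms to complete the handshake"
done
```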

These intercepting CDN edge-servers terminate early, then send unencrypted data to wherever your application server is located. This is perilous. While rendering performance is enhanced, your data is exposed as it travels from within the CDN to your application!


Documents leaked in 2013 revealed a sassy demonstration of how the NSA targeted traffic ‘beyond the Load Balancer’, after SSL termination, to sniff unprotected data within Google data centers. Spooky!


While Google has the capability to apply sophisticated solutions and encrypt within their data centers, most applications do not have the resources to maintain a secure configuration post-SSL-termination.

To solve this problem, we crafted the open source utility Wormhole.

If you add Wormhole to the content delivery equation, it looks like this...

When a visitor makes a request to your application, it arrives at the closest edge-server. From there, packets travel over an encrypted tunnel between our edge-servers and your application. In a nutshell, it creates a secure link between two end-points.

You terminate SSL/TLS closer to the user, avoid compounding SSL/TLS rendering penalties, and have an encrypted tunnel open directly to your application. Data arrives at the application encrypted, is never exposed within the datacenter, and does not bounce around a far-reaching CDN.


Using HTTPS makes the Internet safer for everyone. Unfortunately, securing your application has been a daunting task. Protocol improvements like HTTP/2 and TLS V1.3 and services like CDNs will help balance the trade-offs between accessibility, security and performance… But vulnerable data routes can still be left exposed.

You can engineer complex and creative solutions to secure these routes. Or, you can take off with an Application Delivery Network like Fly and get back to building nifty things for your users.