HTTP is the network protocol of the Web. It is both simple and powerful. HTTP/1.0 was officially introduced in 1996, when most web resources were textual: you requested a document from a web server, read it for five minutes, clicked another link and requested another document. The world was simple.
With the advent of multimedia came the need for a smarter protocol, and HTTP/1.1 arrived. HTTP/1.1 improved on the initial offering with:
– Extensibility: e.g. the “OPTIONS” method, a way for a client to learn about the capabilities of a server without actually requesting a resource.
– Caching: e.g. extending the existing caching mechanisms with a new “Cache-Control” header, which in turn opened up a new dimension of possibilities.
– Bandwidth optimization,
– Error notification,
and many more goodies.
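Two of those HTTP/1.1 features, the OPTIONS method and the Cache-Control header, can be seen in action with the Python standard library. The sketch below runs a tiny local server purely so the example is self-contained; the handler, its headers, and the port choice are illustrative assumptions, not part of any real deployment:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # keep the connection alive between requests

    def do_OPTIONS(self):
        # OPTIONS advertises capabilities without serving a resource.
        self.send_response(204)
        self.send_header("Allow", "GET, OPTIONS")
        self.end_headers()

    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        # HTTP/1.1 cache directives travel in the Cache-Control header.
        self.send_header("Cache-Control", "max-age=3600, public")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("OPTIONS", "*")
r1 = conn.getresponse()
allow = r1.getheader("Allow")
r1.read()

conn.request("GET", "/")
r2 = conn.getresponse()
cache = r2.getheader("Cache-Control")
r2.read()
server.shutdown()

print(allow)   # GET, OPTIONS
print(cache)   # max-age=3600, public
```

The same headers appear when you query any real HTTP/1.1 server; the local server just makes the exchange reproducible.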
The web has since undergone a disruptive transformation. The lines separating a full-fledged application and a web app are blurring faster than ever. Multimedia support has pushed the boundaries of what can be communicated through the medium of the web. Moreover, mobile has pushed network resource accounting to the brink and squeezed every last ounce out of the existing web protocols.
HTTP/2 Genesis – SPDY
In 2012, as part of Google’s “Let’s make the web faster” initiative, engineers created protocols to help reduce the latency of web pages. One of these experiments was SPDY (pronounced “SPeeDY”), an application-layer protocol for transporting content over the web, designed specifically for minimal latency. SPDY served as the starting point, so to speak, for the development of HTTP/2.
So what does HTTP/2 bring to the table?
It is backward compatible
HTTP/2 is backward compatible. That means the entire existing ecosystem still works as is. In fact, the library you use for HTTP/1 can be updated to support HTTP/2 without changing any application code. There are new additions to the APIs that allow you to fine-tune some of the protocol’s new capabilities, making it even more efficient, but these are optional to use.
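The mechanism that makes this transparency possible over TLS is ALPN negotiation: the client offers both “h2” (HTTP/2) and “http/1.1”, and the server picks whichever it supports, with no change to the application code above the library. A minimal sketch using Python’s `ssl` module:

```python
import ssl

# The client advertises both protocols via TLS ALPN, in preference order.
# A server that speaks HTTP/2 selects "h2"; any other server falls back
# to "http/1.1" -- the application never has to know which was chosen.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

# After context.wrap_socket(...) completes its handshake against a real
# server, ssl_sock.selected_alpn_protocol() returns "h2" or "http/1.1".
print(type(context).__name__)  # SSLContext
```

The handshake itself is omitted here since it needs a live server; the point is that the upgrade decision lives entirely inside the TLS layer.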
A binary protocol
As a binary protocol it has lower parsing overhead, as well as a slightly lighter network footprint. More importantly, the real reason for this big change is that binary protocols are simpler, and therefore less error-prone. The trade-off, unfortunately, is that you can no longer telnet to a server and read the response by hand.
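The simplicity comes from fixed-size framing: every HTTP/2 frame begins with the same 9-byte header (a 24-bit payload length, an 8-bit type, an 8-bit flags byte, and a 31-bit stream identifier), so a parser never has to scan text for delimiters. A sketch of decoding that header, with a hand-built example frame for illustration:

```python
import struct

def parse_frame_header(data: bytes):
    """Decode the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    # 24-bit length split as 1 + 2 bytes, then type, flags, and 4 bytes
    # holding a reserved bit plus the 31-bit stream identifier.
    len_hi, len_lo, ftype, flags, stream = struct.unpack("!BHBBI", data[:9])
    length = (len_hi << 16) | len_lo
    return length, ftype, flags, stream & 0x7FFFFFFF

# A HEADERS frame (type 0x1) with the END_HEADERS flag (0x4),
# a 16-byte payload, on stream 1 -- built by hand for this example:
header = b"\x00\x00\x10" + b"\x01" + b"\x04" + b"\x00\x00\x00\x01"
print(parse_frame_header(header))  # (16, 1, 4, 1)
```

Compare that one `struct.unpack` call with the stateful line-by-line parsing HTTP/1.x text framing requires; there is simply less room for parser disagreement.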
Any web developer will shout from the rooftops: “avoid HTTP requests”. Techniques like inlining, concatenation and spriting were used to bend the rules and achieve optimization, at the cost of increased maintenance.
With HTTP/2, these techniques shouldn’t be necessary, because one of the main goals of the protocol is to reduce the marginal overhead of new requests. It uses multiplexing to interleave many messages on a single connection at the same time, so that one large response (or one that takes the server a long time to think about) doesn’t block the others. It also compresses request and response headers to reduce bandwidth requirements. That’s a win-win on mobile platforms, where large request headers increase load times.
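The header compression scheme, HPACK, works by replacing headers the peer already knows with small table indices instead of re-sending them as text on every request. The toy sketch below uses a four-entry excerpt of the real static table from RFC 7541; real HPACK adds a dynamic table and Huffman coding, which are omitted here:

```python
# A small excerpt of the HPACK static table (RFC 7541, Appendix A):
# well-known header name/value pairs mapped to fixed indices.
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
    ("accept-encoding", "gzip, deflate"): 16,
}

def encode(headers):
    # Emit one small integer per known header instead of its full text.
    return [STATIC_TABLE[h] for h in headers if h in STATIC_TABLE]

request = [(":method", "GET"), (":path", "/"), (":scheme", "https"),
           ("accept-encoding", "gzip, deflate")]
compact = encode(request)
plain = sum(len(k) + len(v) for k, v in request)

print(compact)                      # [2, 4, 7, 16]
print(len(compact), "indices vs", plain, "characters of raw headers")
```

Four bytes of indices in place of dozens of characters of text, repeated on every request, is exactly the saving that matters on high-latency mobile links.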
The features above largely focus on interoperability, stability, and basic performance, and they will evolve considerably in the future. To the end user, these updates will simply show up as better performance.
The stage will really light up for HTTP/2 when advanced features like server push and fine-grained prioritization push the envelope toward a newly refined UX for the end user.