Showing posts from February, 2020

HTTP/2 vs HTTP/1.1

HTTP/1.1 assumes that a TCP connection should be kept open unless explicitly told to close. HTTP/2 reduces latency through multiplexing, header compression, and stream prioritization. An application-level API still creates messages in the conventional HTTP formats, but the underlying layer encodes the payload in binary (binary framing).

HTTP/2 establishes a single connection between the two machines. Within this connection there are multiple streams of data, each stream consists of multiple messages in the familiar request/response format, and each message is split into smaller units called frames.

Multiplexing: several requests and responses can run in parallel over a single TCP connection without blocking each other. This saves processor and memory resources and avoids repeated SSL/TLS handshakes.

Stream prioritization allows developers to prioritize requests by assigning a weight between 1 and 256 to each stream; a higher number indicates higher priority.
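To make the framing and priority-weight details above concrete, here is a minimal sketch in Python of the two wire formats involved, based on RFC 7540: the fixed 9-octet frame header that carries every HTTP/2 message, and the PRIORITY payload, where the 1-256 weight is encoded on the wire as a single byte holding weight minus one. The function names are illustrative, not from any real library.

```python
import struct

def encode_frame_header(length, ftype, flags, stream_id):
    """Build the 9-octet HTTP/2 frame header:
    24-bit payload length, 8-bit type, 8-bit flags, 31-bit stream id."""
    # Pack length as 4 big-endian bytes and drop the high byte to get 24 bits.
    return (struct.pack(">I", length)[1:] +
            struct.pack(">BBI", ftype, flags, stream_id & 0x7FFFFFFF))

def decode_frame_header(header):
    """Parse a 9-octet frame header back into its fields."""
    length = int.from_bytes(header[0:3], "big")
    ftype, flags = header[3], header[4]
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, ftype, flags, stream_id

def encode_priority_payload(dep_stream_id, weight, exclusive=False):
    """Build a 5-octet PRIORITY payload: exclusive bit + 31-bit stream
    dependency, then one byte storing (weight - 1), so 1..256 fits in 0..255."""
    assert 1 <= weight <= 256
    dep = (dep_stream_id & 0x7FFFFFFF) | (0x80000000 if exclusive else 0)
    return struct.pack(">IB", dep, weight - 1)
```

For example, a HEADERS frame (type 0x1) with the END_HEADERS flag (0x4) on stream 1 round-trips through encode and decode, and a maximum weight of 256 is stored on the wire as the byte 255.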

Some thoughts on Conway's law

Human beings are complex social animals, and Rome was not built in a day. Since a system's design tends to mirror the organization's communication structure, address the issues that can be addressed first, and create independent subsystems to reduce the communication cost.

Read more: https://medium.com/@Alibaba_Cloud/conways-law-a-theoretical-basis-for-the-microservice-architecture-c666f7fcc66a