TCP, the workhorse of the internet

signa11 | 234 points

If you start with the problem of how to create a reliable stream of data on top of an unreliable datagram layer, then the solution that comes out will look virtually identical to TCP. It just is the right solution for the job.

The three drawbacks of the original TCP algorithm were the window size (the 16-bit field's maximum is just too small for today's speeds, later addressed by the window-scaling extension), poor handling of missing packets (addressed by extensions such as selective acknowledgments, SACK), and the fact that it manages only one stream at a time, while some applications want multiple streams that don't block each other. You could use multiple TCP connections, but that adds its own overhead, so SCTP and QUIC were designed to address those issues.
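
To make the window-size point concrete: a sender can have at most one window of unacknowledged data in flight per round trip, so the window caps throughput no matter how fast the link is. A back-of-the-envelope sketch (Python; the 50 ms RTT is just an illustrative number):

    # Throughput ceiling imposed by the TCP window: at most one window
    # of unacknowledged data can be in flight per round trip.
    def max_throughput_mbps(window_bytes, rtt_seconds):
        return window_bytes * 8 / rtt_seconds / 1e6

    rtt = 0.05  # 50 ms round trip

    # Original 16-bit window field: at most 65535 bytes in flight.
    print(max_throughput_mbps(65535, rtt))        # ~10.5 Mbit/s, regardless of link speed

    # With window scaling (RFC 7323, shift factor up to 14):
    print(max_throughput_mbps(65535 << 14, rtt))  # ~171,800 Mbit/s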

The congestion-control algorithm is not part of the on-the-wire protocol; it's just some code on each side of the connection that decides when to (re)send packets to make the best use of the available bandwidth. Anything that implements a reliable stream on top of datagrams needs such an algorithm. The original ones (Reno, Vegas, etc.) were very simple but already did a good job, although back then network equipment didn't have large buffers. A lot of research is going into better algorithms that handle large buffers, long round-trip times, and varying bandwidth, while staying fair when multiple connections share the same link.
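
For a sense of how simple the originals were, the core window arithmetic of a Reno-style algorithm fits in a few lines. This is only a sketch in units of segments, not a real implementation (no timers, no fast retransmit, and so on):

    # Reno-style AIMD congestion window, counted in segments.
    class RenoWindow:
        def __init__(self):
            self.cwnd = 1.0       # congestion window
            self.ssthresh = 64.0  # slow-start threshold

        def on_ack(self):
            if self.cwnd < self.ssthresh:
                self.cwnd += 1.0              # slow start: doubles every round trip
            else:
                self.cwnd += 1.0 / self.cwnd  # congestion avoidance: +1 per round trip

        def on_loss(self):
            self.ssthresh = max(self.cwnd / 2.0, 2.0)  # multiplicative decrease
            self.cwnd = self.ssthresh                  # fast recovery (a timeout would reset to 1)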

gsliepen | 11 hours ago

Any love for SCTP?

> The Stream Control Transmission Protocol (SCTP) is a computer networking communications protocol in the transport layer of the Internet protocol suite. Originally intended for Signaling System 7 (SS7) message transport in telecommunication, the protocol provides the message-oriented feature of the User Datagram Protocol (UDP) while ensuring reliable, in-sequence transport of messages with congestion control like the Transmission Control Protocol (TCP). Unlike UDP and TCP, the protocol supports multihoming and redundant paths to increase resilience and reliability.

[…]

> SCTP may be characterized as message-oriented, meaning it transports a sequence of messages (each being a group of bytes), rather than transporting an unbroken stream of bytes as in TCP. As in UDP, in SCTP a sender sends a message in one operation, and that exact message is passed to the receiving application process in one operation. In contrast, TCP is a stream-oriented protocol, transporting streams of bytes reliably and in order. However TCP does not allow the receiver to know how many times the sender application called on the TCP transport passing it groups of bytes to be sent out. At the sender, TCP simply appends more bytes to a queue of bytes waiting to go out over the network, rather than having to keep a queue of individual separate outbound messages which must be preserved as such.

> The term multi-streaming refers to the capability of SCTP to transmit several independent streams of chunks in parallel, for example transmitting web page images simultaneously with the web page text. In essence, it involves bundling several connections into a single SCTP association, operating on messages (or chunks) rather than bytes.

* https://en.wikipedia.org/wiki/Stream_Control_Transmission_Pr...
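
The message-boundary difference quoted above is easy to see with plain sockets. A small sketch (Python over loopback TCP; payloads are made up): two separate sends usually come back as one undifferentiated byte sequence, which is exactly what SCTP's message orientation avoids:

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()

    cli.sendall(b"first message")   # two separate send operations...
    cli.sendall(b"second message")
    cli.close()

    print(conn.recv(4096))  # ...usually read back as b"first messagesecond message"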

throw0101a | 7 hours ago

Do extraterrestrial civilizations also use TCP?

brcmthrowaway | 4 minutes ago

How much energy does the internet use?

brcmthrowaway | 5 minutes ago

Wait, can you actually just use IP? Can I just make up a packet and send it to a host across the Internet? I'd think that all the intermediate routers would want to have an opinion about my packet, caring, at the very least, that it's either TCP or UDP.
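
A sketch of what such a made-up packet looks like in code (Linux raw socket, needs root; 253 is a protocol number reserved for experimentation by RFC 3692, and the address is from a documentation range). Whether NATs and middleboxes along the path forward it is the open question:

    import socket

    # The kernel builds the IP header around the payload, with 253 in
    # the 8-bit protocol field; the port in sendto() is ignored for raw sockets.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, 253)
    s.sendto(b"a made-up packet", ("192.0.2.1", 0))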

stavros | 12 hours ago

TCP being the “default” meant it was chosen even when ordering and uniform reliability weren’t needed. That was fine, but it left systems working less well than they could have with more carefully chosen underpinnings. With HTTP/3 gaining traction, and HTTP being the “next level up” default choice, things potentially get better. The issue I see is that QUIC is far more complex, and the new power is fantastic for a few but irrelevant to most.

UDP has its place as well, and if we had more simple, effective solutions like WireGuard’s handshake and encryption on top of it, we’d be better off as an industry.

mlhpdx | 4 hours ago

RUDP from Plan 9 was a nice step between TCP and UDP - https://en.wikipedia.org/wiki/Reliable_User_Datagram_Protoco...

rfmoz | 3 hours ago

    Otherwise please use the original title, unless it is misleading or linkbait; don't editorialize.
https://news.ycombinator.com/newsguidelines.html
bmacho | 3 hours ago

TCP is one of the great works of the human mind, but it did not envision the dominance of semiconnected networks.

FrankWilhoit | 8 hours ago

For the record, I thought the TLD for this page was ‘cerfbound’, which sounds like the name of the internet’s racehorse.

tolerance | 2 hours ago

It's trivial to develop your own protocols on top of IP. It was trivial even 15 years ago in Python (without any libraries), just handcrafted packets (ARP, IP, etc.).
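
A sketch of that handcrafted style, standard library only (Linux raw socket, needs root; addresses and field values are illustrative):

    import socket
    import struct

    def ipv4_header(src, dst, payload_len, proto=253):
        # version/IHL, DSCP/ECN, total length, id, flags/frag offset,
        # TTL, protocol, checksum (0: the kernel fills it in), addresses
        return struct.pack(
            "!BBHHHBBH4s4s",
            (4 << 4) | 5, 0, 20 + payload_len,
            0, 0,
            64, proto, 0,
            socket.inet_aton(src), socket.inet_aton(dst),
        )

    # IPPROTO_RAW implies IP_HDRINCL: we supply the IP header ourselves.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    payload = b"handcrafted"
    packet = ipv4_header("192.0.2.1", "192.0.2.2", len(payload)) + payload
    s.sendto(packet, ("192.0.2.2", 0))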

iberator | 11 hours ago

> The internet is incredible. It’s nearly impossible to keep people away from.

Well ... he seems very motivated. I am more skeptical.

For instance, Google, via Chrome, controls a lot of the internet, even more so via its search engine, AI, YouTube and so forth.

Even aside from this, people's habits have changed. In the 1990s everyone and their Grandma had a website. Nowadays ... it is a bit different. We suddenly have horrible blogging sites such as medium.com, pestering people with popups. Of course we also had popups in the 1990s, but the diversity was simply higher. Everything today seems much more streamlined, and top-down controlled. Look at Twitter, owned by a greedy and selfish billionaire. And the US president? Super-selfish too. We lost something here in the last 25 years or so.

shevy-java | 8 hours ago

I hate to think of the future of these nice blog posts, which will have to struggle to convince readers that their content is organic.

zkmon | 12 hours ago

It’s worth considering how the tiny computers of the era forced a simple, clean design. IPv6 was designed starting in the early 1990s, and its designers couldn’t resist loading it up with extensions, though the core protocol remains fine and is just IP with more bits. (Many of the extensions are rarely, if ever, used.)

If the net were designed today, it would be some complicated monstrosity where every packet was reminiscent of X.509 in its arcane complexity. It might even have JSON in it. It would have incredibly high overhead, and we’d see tons of articles about how someone made it fast by leveraging CPU vector instructions or a GPU to parse it.

This is called Eroom’s law (Moore’s law spelled backwards), and it is very real. Bigger machines let programmers and designers loose to indulge their desire to make things complicated.

api | 7 hours ago

I have an idea for a new JavaScript framework

acosmism | 10 hours ago

I can easily spot that it's an AI-written article, because it actually explains the technology in understandable human language. A human would have written it the way it was presented to them either in university or in bloated IT books: absolutely useless.

cynicalsecurity | 10 hours ago