Five Arguments in Favour of Throttling – And Why They’re Wrong
Yes, I know this is quickly turning into a net-neutrality blog – but since net-neutrality traffic is up at the moment, I figured I should strike while the iron is hot.
While I thought the CRTC presentation was quite strong, you're always left with regrets about the questions that didn't come up. There were a couple of points I was really hoping would be raised, since they are popular talking points of the major ISPs, and it would have been nice to offer a counterpoint. So while they're still fresh, here are five ISP arguments in favour of traffic throttling that I just don't think hold much water:
1. Increasing capacity is prohibitively expensive.
Setting aside my prior post on why building additional capacity is likely far more fiscally responsible than throttling BitTorrent – total smarty-pants Jason Roks made a compelling calculation on Tuesday at the CRTC hearing that a certain national network could likely more than double its capacity at the most likely congestion spots for less than $2 per user per month. Of course, it's hard to offer more concrete suggestions when we have no idea what the profit margins of the major ISP corporate units are (or what portion of their network is devoted to functions other than the Internet – like television, phone, and video-on-demand).
2. This congestion is only being caused by a small number of heavy users.
It's really annoying when people trot out the argument that x% of the users consume y% of the network's capacity. The usual Canadian figures these days are something like 5% of users consuming 1/3 to 1/2 of the network. ISPs trot this out with the subtext that a small number of "bad apples" are ruining things for everyone ("hey, it's not you we're going to inconvenience… it's someone else").
The big, obvious elephant in the room is a question no one ever thinks to ask: how oversubscribed is a network if 5% of its users can create systemic network congestion!? No one is suggesting that these users hacked the planet. They are not running special computers that defy the laws of physics. They can only use the throughput and bandwidth that they paid the ISPs for. So a better rephrasing is that the networks are not robust enough to cope with 1 out of 20 consumers actually using the services they have legally purchased.
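A quick back-of-envelope sketch of what those quoted figures actually imply (my arithmetic, not the ISPs'; I'm taking 40% as a rough midpoint of the 1/3–1/2 range):

```python
# If 5% of users account for ~40% of traffic, how much does one "heavy"
# user actually move relative to a typical user? (Illustrative numbers
# from the figures quoted above, not any ISP's real data.)

heavy_fraction = 0.05  # share of users labelled "heavy"
heavy_traffic = 0.40   # share of total traffic attributed to them

heavy_share = heavy_traffic / heavy_fraction              # per heavy user
light_share = (1 - heavy_traffic) / (1 - heavy_fraction)  # per everyone else

print(f"heavy vs. typical user: {heavy_share / light_share:.1f}x")
# Roughly 12-13x a typical user -- yet every byte is still within the
# throughput and caps that user legally purchased from the ISP.
```

A ratio like that says less about "bad apples" than about how thin the provisioning assumptions behind the network really are.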
The second, slightly less obvious, elephant is that these aren't abnormal users. These are what the "average user" is going to look like five years from now. As I've mentioned before, Cisco's five-year forecast is that network traffic will quintuple by 2013. We don't need a network that can strangle those 5% – we need a network that can support the other 95% when they catch up to them.
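For a sense of scale, here's what "quintuple in five years" works out to in annual terms (again, my arithmetic, not Cisco's):

```python
# Compound annual growth implied by traffic quintupling over five years.
growth_factor = 5 ** (1 / 5)  # per-year multiplier

print(f"annual growth: {(growth_factor - 1) * 100:.0f}%")
# About 38% growth per year, every year -- which is why today's "heavy
# user" traffic profile is simply tomorrow's average profile.
```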
3. There’s no incentive for “applications” (BitTorrent) to be efficient on the network
Computer application adoption is about the most perfect illustration of the free market that I can think of. Users have shown remarkable fluidity in adopting (or abandoning) whichever technologies give them the greatest benefits. Efficient applications run better, faster, and more reliably than those that aren't – and those are the ones the public eventually turns to. One only has to look at the evolution of specific BitTorrent clients to see that those that are smaller, faster, and more efficient quickly overtake those that aren't.
4. Congestion will never be solved as long as BitTorrent automatically takes up any additional capacity we add
Firstly, “congestion” isn’t something you are ever going to “solve”. No one (shy of free-energy conspiracy theorists) sets out to “solve” traffic, or unemployment, or the first law of thermodynamics. This is a progression that isn’t ever going to end.
Secondly, as I've stated previously, BitTorrent isn't a terribly unusual program outside of two points:
a) It has P2P properties which are inconvenient for traffic management
b) It is popular
Given that every technology created or adapted from this point forward will have the former (World of Warcraft, Flash 10, new set-top boxes), is the solution to crush any new technology that becomes "inconveniently popular"? I think "technological whack-a-mole" is an apt description… except that in whack-a-mole the consumer can actually occasionally win.
5. Throttling BitTorrent only (slightly) inconveniences BitTorrent users – whose data isn't time-sensitive
Even ignoring the presumption about *what* I might be using *my* application for, and how important it may or may not be, this is demonstrably false.
I've never really talked about the Couchathon snafu prior to the hearing yesterday. There's a little more detail in the Cartt.ca article, but the Coles Notes version is that the Couchathon had to "re-boot" (losing us a significant percentage of our viewership each time) because our connection to ustream.tv was throttled by Bell as BitTorrent traffic (1). I didn't want to overshadow that amazing event, or draw attention to the kind folks who had provided us our connectivity – but I'm also not going to bite my lip while I'm told that what happened to me – having an independent, innovative Canadian content production affected nearly to the point of being unsustainable – isn't likely. (2)
Furthermore, Japan offers a direct case example: when their ISPs collectively cracked down on the BitTorrent-like file-sharing program WinNY, everyone just started encrypting their data. So then the only "whack-a-mole" solution left for ISPs is to throttle ALL encrypted traffic, since even DPI (deep packet inspection) is never going to tell you what's in an encrypted data transfer… that's the point of encrypting it. Now you're not "just affecting BitTorrent users" – you're prejudicing an entire category of traffic, including (but not limited to):
- Business VPN traffic
- Secure website e-commerce
- any legitimate online audio or video store (iTunes, Amazon…)
- private VoIP, voice, or teleconferencing data
- any new application or protocol that the filter doesn't recognize (like, say, ustream uploads)
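To see why DPI hits this wall, here's a toy sketch – a one-time pad standing in for a real cipher, not anyone's actual encryption scheme or DPI system – comparing the byte-level entropy of a repetitive plaintext with its encrypted form:

```python
# Once a payload is encrypted, its bytes are statistically
# indistinguishable from random noise, so byte-level inspection can't
# tell a BitTorrent piece from a VPN packet or an iTunes download.
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte (8.0 is the ceiling: uniformly random)."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"BitTorrent protocol" * 500       # repetitive, low entropy
pad = os.urandom(len(plaintext))               # one-time pad "cipher"
ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))

print(f"plaintext entropy:  {shannon_entropy(plaintext):.2f} bits/byte")
print(f"ciphertext entropy: {shannon_entropy(ciphertext):.2f} bits/byte")
# The ciphertext sits near the 8.0 bits/byte ceiling -- just like every
# other encrypted stream on the wire, which is the whole point.
```

Any filter keyed on payload contents sees the same near-random bytes for all of these, so "throttle what we can't identify" inevitably means throttling the whole encrypted category.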
It's certainly not my job to suggest arguments for the ISPs – but I guarantee that at their CRTC appearance tomorrow we will hear these exact arguments (or close variants thereof) again and again. I'm not sure why no one has examined all the implications their own arguments carry against their sales practices.
(1) There is absolutely no doubt in my mind that the upload stream was throttled as BitTorrent traffic. a) The occurrences happened nowhere near peak hours (the first started late evening, and the cycle continued through the wee hours of the morning and into the next day). b) The pattern of traffic damping was exactly the same as a graph of BitTorrent throttling I researched the next day. c) A separate Bell connection we used for web browsing, chat, and e-mail was not affected, so it was not a broader network issue.
(2) A representative of a different ISP (neither Bell nor Rogers) came up to me afterward and asked if there was a bit of a case of "you get what you pay for" in this instance. Unfortunately we got cut off as the proceedings resumed – so I felt bad I never got a chance to chat with him more, or really get into this issue. While it's true that with a little more lead time I might have looked into setting up a more robust solution, there are two things I need to point out:
a) This was *not* a residential connection we were using. b) Even if it *was* a residential connection, I don't think it's unreasonable to expect a sustained transfer rate of at least a third or a quarter of what I get on my home residential connection. (3)
(3) In going to the Rogers site today, I discovered that the "upload rate" on the package I've subscribed to for more than five years has been halved at some point!? When did that happen?