Usage Based Billing – The Elephant in the Muddy Waters in the Middle Ground

Oh Wikimedia Commons, is there any topic you don't have the perfect image for?

I got an e-mail from a friend (on his way to a Usage Based Billing consultation) yesterday, curious as to what my thoughts were on the whole thing. I haven’t written anything about it at length (other than the odd tweet), mostly because my position generally falls outside both of the established “camps”, and when I have talked about it I’ve generally found the discussion quickly deteriorating into me being asked to defend “the other side” and tenets I didn’t actually agree with.

Actually my biggest problem with the “debate” so far is that the two sides usually distill down to the arguments that “UBB is necessary” vs. “UBB is bad” and given that those aren’t actually mutually exclusive positions it’s frustrating to try and even define what the core issues are.

But in yesterday’s exchange, I realized that I do have some thoughts that offer a different viewpoint from most of what I’ve been seeing written – so if nothing else it might provide a different angle for people to contextualize their own positions, whatever they may be.

When I’m asked for advice on getting up to speed on the whole debate, I’ve lately been recommending Peter Nowak’s “10 myths from usage-based billing supporters“, but with the caveat that I don’t entirely agree with his conclusion. I could go the other route with a “pro UBB” article and then work backwards from that, but most of the people I chat with about this stuff are (as mostly end-users) generally more inclined to lean towards the “UBB is bad” camp, and it’s easier to initially flesh out a position they’re more likely to agree with(1).

But there are two major elephants in the room in the Nowak article that seriously muddy the waters (that’s an outstanding mixed metaphor if there ever was one):

1. Regardless of how the structure *should* work in a perfect world, the fact is that data delivery currently is a finite resource, like a utility. Telephone calls don’t “consume” electrons any more than data does, but there is still a charge for both “delivery” and “usage” (to use Nowak’s terms). You pay both for a landline or cell phone (your delivery point) and then for long distance or airtime (usage), because at some point demand outpaced the finite capacity of long distance lines to deliver that connectivity. Same with cell phones: calls engage the limited capacity of physical cell towers, so we pay for *all* airtime (long distance OR local), which is either capped or expensive(2). Furthermore, this isn’t unique to end-user consumers – servers pay bandwidth costs on the “publishing” side because exchanges pay bandwidth costs on the “delivery” side; every hop in the network pays some type of both delivery and usage fee. Those oceanic fibre-optic runs aren’t infinite in their capacity (or cheap) and need to be financed somehow.(3)

Now there’s a valid argument to be made that it is ludicrous to still be paying what we’re paying for long distance today (especially from landlines), given how much capacity there now is for global telecom, and how much actual phone service delivery costs have fallen. But it’s not as though nothing has changed: the rollout of the internet completely radicalized telecom in the 90s(4), and those innovations conversely forced an adjustment in LD calling rates. So the free market does self-regulate. Eventually. It just takes a long time – longer than most would like(5).

2. The second, and perhaps even bigger, issue is that the value of “bandwidth” to the end user isn’t just one of quantity – it’s also one of quality. This gets glossed over far too often in debates on UBB, but unlimited bandwidth doesn’t mean squat if you can’t negotiate timely packet delivery. If I waved a magic wand and removed all bandwidth caps overnight, no one would be happy if their performance suddenly took such a hit that it was impossible to play video games, or Skype, or watch YouTube – or use any other service where timely delivery is key. The raw numbers of connection speeds and bandwidth caps are only one piece of the total picture of “quality of service”. A prime example: my parents have a FiOS connection in Arizona that’s nowhere near as good an internet “experience” as my Toronto-based Rogers one. That there is a theoretical 16 Mbit difference in potential connection speed, plus no cap on the American connection, is functionally irrelevant to my actual user experience(6).
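To make the quantity-vs-quality point concrete, here’s a rough sketch using the classic Mathis et al. approximation, which bounds a single TCP flow’s throughput by roughly MSS/RTT × 1.22/√p for packet loss rate p. All link numbers below are hypothetical; the point is just that a fat pipe with a long, lossy path can deliver less than a slower, cleaner one:

```python
from math import sqrt

def achievable_bps(line_rate_bps, mss_bytes, rtt_s, loss_rate):
    """TCP throughput achievable on a link: the raw line rate, capped by
    the Mathis et al. (1997) loss/latency bound  MSS/RTT * C/sqrt(p),
    with C ~= 1.22. A rough single-flow model, not a precise simulation."""
    mathis_bound = (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))
    return min(line_rate_bps, mathis_bound)

# Two hypothetical connections: a nominally fast pipe with a long, lossy
# path vs. a slower but clean one (1460-byte segments in both cases).
fast_lossy = achievable_bps(25e6, 1460, rtt_s=0.120, loss_rate=0.01)
slow_clean = achievable_bps(10e6, 1460, rtt_s=0.020, loss_rate=0.0001)

print(f"25 Mb/s line, 120 ms RTT, 1% loss    -> {fast_lossy / 1e6:.2f} Mb/s")
print(f"10 Mb/s line,  20 ms RTT, 0.01% loss -> {slow_clean / 1e6:.2f} Mb/s")
```

Under these (made-up) conditions the “faster” 25 Mb/s line manages barely over 1 Mb/s per flow, while the “slower” line runs at its full 10 Mb/s – which is exactly the FiOS-vs-Rogers experience described above.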

So – now we’ve swung back the other way. This is why UBB is such a compelling argument to some people (who aren’t necessarily only the big telcos – take Maclean’s, for example). If you’ve got a product with scarcity, where both volume and timeliness of delivery are critical to the end user, free market regulation is an absolutely tried-and-true method of controlling distribution of that product. You raise prices until you balance demand with capacity (and profit), and you’re done. “QED!” say the UBB supporters, dusting off their hands – high fives for Adam Smith all around!(7)(8)

Well… maybe, but now we get into not just shades, but whole fields of gray. At its core, the biggest problem with defining the terms of the UBB debate is the same major hurdle as Net Neutrality: to wit, no one really believes the information presented by the ISPs. Not only do we not have any real idea what the major ISPs are doing within their networks, the ISPs have been demonstrably dishonest in disclosing what they’re doing even when it directly affects end users. Even ignoring the fact that those networks are (depending on your position) to some degree the result of government-granted monopoly and investment to begin with, no one actually believes the ISP-provided stats on congestion “as is”. Which is only fair, given their track record of transparency with anyone (including the government).

Given that many reasonable people believe internet services are major profit generators for the telcos, and that those same players are getting beaten up in other areas of the broadcast/media/traditional telecommunications space – where is the incentive to re-invest significantly in network capacity? They *need* to be making large profits somewhere to even come close to breaking even overall. If a hypothetical company were (in “these economic times”) sinking huge amounts into future capacity, that’s not going to help keep the shareholders from storming the lobby with pitchforks and torches. So of course they’re going to cut as many corners as possible, and that means it’s perfectly logical to presume that capacity upgrades are underfunded. Heck, if congestion increases, a rational free-market response would be to decrease caps and increase prices – INCREASING profit margins further. This plan of action is entirely sound corporate strategy, but entirely backwards from what a progressive national internet policy should be trying to foster.

And just when you think things can’t get worse – don’t forget that Rogers, Bell, Shaw, Cogeco and the like are delivering other services (like television, phone, and movies on demand) – all over the same physical connection into your house. If there is any investment in capacity, are you, as a business-person with half a brain, going to focus on improving the quality of the “public internet” (full of competitors to your already declining traditional profit centres), or on adding more “video on demand” options or HD channels to your proprietary set-top box? I know which makes more “bottom line” sense. And since either can technically be categorized as “network capacity investment”, all the better!

Head hurt yet? Well, here’s one last curveball. Where this whole thing goes off the rails entirely is that the current UBB debate has nothing to do with end-user policies. The CRTC allowed ISPs to usage-bill and cap customers way back in the 90s. We’ve all been subject to caps as long as we’ve had broadband(9). The whole thing only flared up again in November over a CRTC decision involving wholesale access for competitor ISPs and what the wholesale rates should be for access to certain “higher-level” connectivity(10). Wholesale access is, in my mind at least, one of the weakest planks in the anti-UBB platform. At its core, arguing that competing businesses should all have *unlimited access* to finite resources is just bad for consumers. I don’t mean this in a competitive-market sense, but rather that it creates an ecosystem where it’s easier to improve a “competitive advantage” by taking actions not to the benefit of the network as a whole. Who cares if my actions as ISP-A negatively affect ISP-B or ISP-C, if they help improve my bottom line? There are just too many opportunities for that to go awry in a big, big way. But that also means that ISPs are never really going to be able to compete on quality, because they’re ultimately dependent on Bell for their wholesale access. No matter how lousy Bell lets the national network get, they’re never going to be shown up by their DSL competitors, because those competitors have to buy their connectivity from Bell.

And the snake eats its own tail.

So the whole thing is a big mess. This should probably not be a surprise.

If I were talking to a regulator, my points would be (and have been) pretty simple: the telcos are increasingly going to use “congestion” to argue for all kinds of regulatory changes to their benefit (I have no doubt this is just the beginning), and we need a much more transparent way to evaluate “congestion” – along with actual, tangible network capacity and infrastructure investment – before regulatory approval is granted in such cases.(11)

If there were any belief in actual transparency on the big ISPs’ side, it would be much easier to have a reasonable discussion about the place of things like packet filtering and bandwidth caps as part of an overall network management strategy. Without it, there’s nothing else to do but conclude that customers are likely getting shafted by corporate interests.

Personally? As a content producer who knows that the future of his entire industry is inextricably tied up in digital distribution, I absolutely want to see much higher or unlimited bandwidth caps on data plans – who doesn’t? I absolutely agree that today’s “high volume user” is tomorrow’s “average user”, and that piracy(12) has absolutely nothing to do with it. Average user demand is going to increase exponentially no matter what is banned or filtered. We have no idea what new service lurks just around the bend, just as no one foresaw YouTube becoming the global bandwidth consumer it has.(13) HOWEVER, the quality of that bandwidth is even more important to me. The current situation for Netflix in Canada certainly chills their ability to compete against the incumbent services, but swapping the status quo for a system where everyone has unlimited caps, to the detriment of timely packet delivery, would kill them in a matter of weeks.

So how’s that for talking out of both sides of my mouth? At its simplest, I guess I’m willing to consider UBB (and even clearly disclosed packet filtering) as a very short-term solution for network congestion – but given that network demand is always going to be increasing, ISPs that want to play that card have to be compelled to forfeit some of their rights to privacy about how they manage their networks, and to disclose actual capital capacity investment in access to the *public internet* (not just their overall, all-in network). If they’re not willing to do either of those things (which is fine, and I can see why they wouldn’t want to), they can’t go crying for regulatory variance because of “congestion”. The two have to have some type of correlation.

On the wholesale front, I don’t really have a problem with UBB at all… but I would agree with those who think the government-mandated discount should likely be more than 15%, because I honestly believe (like many folks) that the actual Bell profit margin on these services is significantly higher than 15% – they could charge less and still make a hefty profit. Here’s a radical thought: how about compelling the major wholesalers to invest all revenue from wholesale service (or a fixed percentage of retail service) into public internet service capacity? Not only is there precedent for this (broadcasters already have to reinvest set percentages in content production), it would benefit both Bell and the ISPs who are paying the wholesale rates. With either lower prices or greater network capacity, smaller boutique retail ISPs would have more flexibility to offer greater options and packages(14) (and it’s worth noting that for all its sabre-rattling, Teksavvy is still offering unlimited data plans).

At the end of the day, UBB boils down to trying to find a balance somewhere on a spectrum – which is why trying to view the debate solely through the lens of “network neutrality” or “free market economics” is so frustrating. UBB can be both bad, AND necessary – but until we can change the debate to what type of internet experience we want to have available in the future, it will be easy to get lost in semantics.

Footnotes:
  1. Plus I mostly agree with Nowak’s individual points – just not where he takes them
  2. Or, more likely, both.
  3. This has nothing to do with the bandwidth costs being fair or even appropriate. The rate you’d pay for bulk bandwidth (say, in a co-location facility, let alone peering at a major exchange) is a fraction of what you’d pay at retail. The point is only that there is a cost greater than zero.
  4. Pre-paid cards, calling plans, and VoIP are all relatively recent inventions in the big picture
  5. Canada’s size and relative monopolies in telecom certainly slow this process down more than elsewhere – for proof just look at our cell phone rates, which are some of the worst in the world.
  6. I’m not implying anything about the relative overall quality of the two services – I recognize that comparing downtown Toronto in the middle of the night on a weekday to the middle of the desert at peak Christmas time isn’t remotely fair.
  7. There is totally a joke involving Adam Smith, high fives, and “The Invisible Hand”, but I’m not sure what it is
  8. I think it’s a physical gag: walk up to some economist and demand an “Adam Smith High Five” but don’t raise either of your arms. Then stand there for far longer than is socially comfortable. I digress.
  9. My first broadband connection was in 1997 and it sure as heck had a cap… but then again, so did my 9600 baud dial-in service in 1993
  10. Yes, it’s a simplification of the relationship between ISPs and GAS, but it’s close enough for the point I’m making
  11. That’s where I felt we *really* lost out in the net neutrality hearings
  12. or whatever the “Service X” straw-man argument of the week (YouTube, Skype, BitTorrent) is
  13. Do you ever imagine TBL or the old ARPANET engineers slaving away night and day in the hope that someday, somehow, a child in the Amazon rain forest could access the sum total of human-created videos of cats playing piano?
  14. Yet more free market dynamics at work!
  • http://twitter.com/DougWalker Doug Walker

    Excellent points.

    I actually went to town a bit with one Shaw rep about the lack of transparency around network figures. I said it is hard to trust them on their network stats when they publish vague charts like this http://yfrog.com/h47flujj AND have a vested interest in protecting other business units. I actually went so far as to suggest a third party audit of their network figures. Hell, the CRTC should be demanding that as proof.

    My biggest concern is less with UBB and more with the caps. Shaw claims that their traffic has gone up by 60% in the past six months. I am curious whether their caps will increase along with the node splits they say they are doing at least weekly, because it is only the threat of peak capacity that causes them to invest in infrastructure.

    Anyway, we would all have better net access if my pipe wasn’t clogged with 24-7 streaming video (also known as television).

    Doug

  • Anonymous

    That’s pretty much my favourite chart of the year so far… I ran it through Photoshop’s “translate” filter though: http://yfrog.com/h8a2cyj

  • Darren Love

    Great article; however, your assumption that data is a finite resource is incorrect, and perhaps it’s because you’re confusing data and bandwidth.

    Bandwidth is the rate at which data is delivered to the customer. For me, my bandwidth at home is “up to” 5Mb/s. You’re right, bandwidth is a limited resource because there is only a finite amount of speed that a pipe can provide. From a network perspective, whole cities and neighborhoods need to share this bandwidth…so bandwidth plans need to be provisioned properly in order for people to get the speed they desire.

    Data is infinite. You never find yourself typing an email and running out of the letter “E”…just never happens. Data generation happens all the time. Your example of the telephone is a perfect example of how Bell is using 19th century business models for 21st century technology. The model from the POTS (Plain Old Telephone System) is a throwback to the telegraph. Please don’t tell me we should treat data like we did the telegraph. Since data is in infinite supply, it therefore has no value…by definition.

    Always always always…what costs money is the speed at which data is given to the customer, and as that speed goes up, costs to deliver that bit go down. What Bell is essentially trying to do here is create an artificial scarcity so that they can keep the price up.

    Great read.

  • Anonymous

    Hi Darren,

    Thanks for reading. You’re absolutely correct that “data” isn’t finite by any quantifiable measure – what is finite is pipeline capacity. (I was just trying to speak to Nowak’s argument about “delivery” and “usage”, which I think oversimplifies the issue.)

    The rub, though, is that volume is the metric you use to provision a network to account for peak usage, even if you’re only concerned with throughput.

    Let’s assume we have a hypothetical neighbourhood of 10 users, each with a 1Mb/s connection from a 5Mb/s “trunk”. If you wanted to stick to a hard “capacity limit” you’d have to halve users’ connection speeds, because it’s conceivable that under peak usage you’d be oversubscribed by 100%. Now, we all know the likelihood of all 10 users being on-line and wanting their full throughput at once is statistically next to nothing, so we do some modelling to project use patterns, and we find it’s more than likely that trunk could support significantly more than 10 people with no perceived deterioration of service. But exactly how many is the tricky part. So we do some basic actuarial work – modelling a spectrum of users to arrive at a conclusion like “our network needs to support an average of 1Mb/s for X seconds a month per user” – which, of course, results in a volume calculation (Mb / month / user). Now the easiest way to maintain this model, so that no one user can break it to the detriment of everyone else, is to set a hard cap at the upper end of the modelled spectrum so that the predictive model *can’t* be broken (or can only be broken in a very limited way). And then you’ve ended up right back at a volume-based cap.
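That back-of-envelope provisioning exercise can be sketched directly. This is a deliberately crude model, assuming each user is independently active at full rate with some probability during the peak instant (all probabilities hypothetical, matching the 10-user / 5Mb/s-trunk example above):

```python
from math import comb

def p_oversubscribed(users, active_prob, per_user_mbps, trunk_mbps):
    """Probability that simultaneous demand exceeds trunk capacity,
    treating each user as independently active at full rate with
    probability active_prob (a binomial toy model, not a real traffic
    study)."""
    max_active = int(trunk_mbps // per_user_mbps)  # users the trunk can carry at once
    return sum(
        comb(users, k) * active_prob**k * (1 - active_prob)**(users - k)
        for k in range(max_active + 1, users + 1)
    )

# The hypothetical neighbourhood: 10 users at 1 Mb/s sharing a 5 Mb/s trunk.
for p in (0.2, 0.4, 0.6):
    print(f"peak activity {p:.0%}: P(congestion) = {p_oversubscribed(10, p, 1, 5):.4f}")
```

At a 20% peak-activity assumption the trunk is congested well under 1% of the time, which is why the 2:1 oversubscription looks safe on paper – and why the modeller’s whole case rests on that activity estimate staying honest as usage grows.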

    This is exactly how water is billed. Let’s be frank: no one (in Canada, anyway) is worried about running out of water when they turn on the tap – but the *pressure* of the system is key. That being said, the easiest way to provision the network is to monitor *volume* at every location, even though what’s of most concern is making sure the system isn’t oversubscribed to the point that it fails when everyone flushes their toilets before overtime in a playoff hockey game.

    Now please don’t misunderstand me: this all presumes that the predictive model is fair and honest, and I’m not saying that’s the case at all. I honestly believe the information I’ve seen clearly suggests that the level of the caps, the cost of service, and the degree to which the ISPs have oversubscribed their networks are grossly negligent. I’ve stood before the CRTC and said as much directly to the major telcos’ lawyers. They are facing a problem which could have been significantly mitigated with more aggressive infrastructure spending, and are clearly reluctant to give back any of their presumably huge profit margins. Period. But that doesn’t mean the provisioning problem doesn’t exist.

    Switching to an uncapped but speed-limited service is, given all of the above, going to result in much poorer connections for the vast majority of users – just look at the wholesale bandwidth rates in most data centres. You can either provision bandwidth based on data *volume* (ensuring you have the highest possible connection standard until you hit your cap and go into overage costs) or buy “unmetered” plans based on heavily throttled connections (often with less-preferred peering networks) – and they’re more expensive to boot (plus your connection speed is going to degrade under increasing load).

    So either you’re looking at throughput expressed over time (which results in a “volume” figure – a hard cap), or you’re suggesting a radical reinterpretation of how network connections should maintain predictive models – and you KNOW the ISPs would just look to maintain their current (flawed) practices by busting everyone down to, say, 300,000 b/s (which would mathematically maintain a 100 GB / month cap – just in a vastly more restrictive way for almost all users).
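The rate-to-volume arithmetic behind that equivalence is just unit conversion (assuming GB = 10^9 bytes and a 30-day month):

```python
def cap_to_sustained_bps(cap_gb_per_month, days=30):
    """Sustained bit-rate that exactly exhausts a monthly volume cap
    (assuming GB = 10^9 bytes and a 30-day month)."""
    return cap_gb_per_month * 1e9 * 8 / (days * 86400)

def rate_to_monthly_gb(rate_bps, days=30):
    """Monthly volume delivered by a connection pinned at rate_bps
    around the clock."""
    return rate_bps * days * 86400 / 8 / 1e9

print(f"100 GB/month == {cap_to_sustained_bps(100):,.0f} b/s sustained")
print(f"5 Mb/s flat out == {rate_to_monthly_gb(5e6):,.0f} GB/month")
```

A 100 GB monthly cap works out to roughly 0.3 Mb/s sustained – which is exactly why a speed-limit-only regime ends up far more restrictive than the cap for anyone whose usage is bursty rather than constant.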
