Costing a wireless network

July 8th, 2005 | by aobaoill |

Sascha has produced a good analysis of the costs of a wireless network along the lines of that being developed by the CUWiN project. I’d like to comment on – and hopefully improve – the estimates for bandwidth costs and the number of nodes required.
Sascha estimates bandwidth costs at $600 per 10 nodes. He doesn’t specify, but this looks suspiciously like twelve times a typical monthly cost of a DSL connection ($50), so I think it safe to assume that’s the basis from which he is working. A number of points are worth making here. First, the cost of a DSL connection is a unit cost, so when the number of nodes is not a multiple of 10 we must decide to round up (get an extra DSL connection) or round down (opt not to get an extra connection). Therefore Sascha’s estimate for the cost of a network with 25 nodes per square mile ($22,625) should be either $22,925 or $22,325. Of course, if you’re covering multiple square miles the calculation is done over the entire network, not for each square mile, so your total cost will differ from Sascha’s estimate by $540 or less.
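As a sanity check, here is a minimal sketch of that rounding arithmetic. The $600-per-connection and 10-nodes-per-connection figures are Sascha's assumptions as read above; the function name is mine:

```python
import math

DSL_ANNUAL_COST = 600      # assumed: $50/month x 12 per DSL connection
NODES_PER_CONNECTION = 10  # Sascha's sharing assumption

def annual_bandwidth_cost(nodes, round_up=True):
    """Annual DSL cost for the whole network, rounding the number of
    connections up (buy the extra line) or down (do without it)."""
    rounder = math.ceil if round_up else math.floor
    return rounder(nodes / NODES_PER_CONNECTION) * DSL_ANNUAL_COST

# 25 nodes is 2.5 connections, so either 3 ($1,800) or 2 ($1,200),
# never the fractional $1,500 implied by a straight per-node rate.
print(annual_bandwidth_cost(25, round_up=True))   # 1800
print(annual_bandwidth_cost(25, round_up=False))  # 1200
```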
Second, and more significantly, there are some issues around using DSL costs as the basis for a large network:

  • There are major efficiencies involved in buying bandwidth in bulk. A large network that buys ‘larger pipes’ and has internet connectivity at a few locations could, potentially, save money. This, considered alone, would indicate that Sascha’s model may be unduly conservative and that costs would be lower than indicated.
  • One of the planks on which the ‘sharing DSL connectivity’ argument – and the justification for the economic efficiencies of these networks – rests is that people tend not to make full use of the bandwidth available to them. If you have a 512kBit/s DSL connection, the argument goes, and you maintain an average throughput of 50kBit/s (roughly 15GBytes of traffic per month), you are only getting about 10% of the value of the connection and could share it with up to 9 similar users.
    The problem is that the telecoms companies have already factored this low usage into their network design. DSL services have what is known as a contention ratio: while your 512kBit/s service has a maximum throughput of up to 512kBit/s, the backchannel to the internet is shared with other customers’ DSL connections. Contention ratios of between 1:20 and 1:50 are not uncommon. The company I used to work for, eircom, has a contention ratio of 1:48 in its entry-level package and 1:24 in other packages.
    So at worst 48 customers attempt to access the network at once and each achieves a maximum throughput of about 10.7kBit/s. The reason DSL works even so is that customers don’t tend to use the network at precisely the same time. The probable demand can be predicted using statistical approaches such as queuing theory. The more heterogeneous the user base, the more broadly distributed use will be – businesses will tend to use the network during the day, residential customers at night. My grasp and memory of statistics isn’t what I would like, but I think on any network like this you need to allow at least 50% headroom for bunching of usage.
    Now, if you put 10 users behind each DSL connection you’ve suddenly got a potential 480 people using each back-channel, and the telecom company’s network design assumptions are thrown out. If your 10 nodes are actually serving more than 10 users – if you’ve got one node for every 5 houses, for instance – the figures become even worse, with 2,400 users on a network entry-point designed to cater for 48.
    Figures will vary, of course, depending on the contention rate used by the carrier, but the reality remains. Dedicated bandwidth solutions, such as leased lines, appear to be a necessary route once you develop beyond small-scale or experimental networks.
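To make the contention arithmetic above concrete, here is a small sketch. The 1:48 ratio and 512kBit/s line speed are the figures quoted above; the users-per-line counts mirror the 10-nodes and 5-houses-per-node scenarios:

```python
def worst_case_kbit(line_kbit, contention_ratio, users_per_line=1):
    """Per-user throughput if every contending user transmits at once."""
    return line_kbit / (contention_ratio * users_per_line)

# A bare 1:48 DSL line: 512 / 48 is about 10.7 kBit/s at full contention
print(round(worst_case_kbit(512, 48), 1))

# 10 mesh nodes behind each line: 48 x 10 = 480 contenders per backchannel
print(round(worst_case_kbit(512, 48, users_per_line=10), 2))

# 5 houses per node: 48 x 50 = 2,400 contenders on a link designed for 48
print(round(worst_case_kbit(512, 48, users_per_line=50), 2))
```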

My sense is that the second point, which indicates higher costs than calculated by Sascha, is more significant than the first.
Moving to the issue of the number of nodes needed, Sascha provides calculations for a wide variety of node densities. In going as high as 1,000 nodes he is being very conservative relative to his own calculation of the number of nodes needed based on CUWiN experience. The problem is that this initial calculation is somewhat imprecise. Sascha calculated the area covered by nodes of various radii and divided the area of a square mile by the resulting figures:

Node Coverage Radius   Square Feet Covered   Nodes Needed
1,000                  3,140,000             9
500                    785,000               36
250                    196,250               142

Sascha’s own experience is that the CUWiN nodes cover radii of 100 to 400m.
First, Sascha doesn’t specify the unit in which radius is listed in the table but in context it must be feet, which indicates he’s calculating for node radii of 300m, 150m and 75m, so slightly conservative based on CUWiN experience. Good.
However, just dividing the total area by the area covered by each node won’t provide an accurate result. As Sascha notes, nodes generally cover a circular area. An efficient layout for nodes is a grid formation, at equal intervals. However, if you put 9 points in a grid and trace the disc covered by each, you will see that the discs overlap at some points and that other areas are left completely uncovered.
In fact, to cover the area completely you need to shrink the grid spacing until discs that are diagonal neighbours touch – equivalently, until each disc’s diameter spans the diagonal of its grid cell. Using this approach it is possible to recalculate the number of nodes that would be needed:

Node Coverage Radius (feet)   Nodes Needed
1,000                         16
500                           64
250                           225

My calculation obviously differs significantly from Sascha’s so I should explain how I got it:

  1. There are 5280 feet in a mile.
  2. The diagonal of a square mile is 7,467 feet (by Pythagoras’ Theorem: 5,280 × √2)
  3. Divide the diagonal by the diameter of each node, rounding up to the next whole number.
  4. Square this number
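The four steps above can be sketched directly (function name is mine; the figures reproduce the second table):

```python
import math

MILE_FEET = 5280

def nodes_needed(radius_feet):
    """Nodes per square mile when grid spacing is set so that
    diagonally adjacent discs just touch (steps 1-4 above)."""
    diagonal = MILE_FEET * math.sqrt(2)              # ~7,467 feet
    per_side = math.ceil(diagonal / (2 * radius_feet))
    return per_side ** 2

for r in (1000, 500, 250):
    print(r, nodes_needed(r))   # 16, 64 and 225 respectively
```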

Now, since the diameters don’t divide evenly into 5,280 it may be possible to save some nodes with a more efficient configuration – I can’t, off the top of my head, eliminate the possibility that staggering alternate rows (with one set of rows at X=1,3,5… and another at X=2,4,6…) would be more efficient, but I think not. Consider my estimate a second swipe, but not necessarily definitive (IANANE – I am not a network engineer).
In any event, calculating the most efficient layout is not necessarily productive because things don’t work like that in the real world – houses aren’t located exactly where you’d like to put your antenna, nodes don’t get exactly the coverage radius you would like. Without any data I’d guess that allowing for an additional 10% to 100% of nodes would be appropriate.
So, based on CUWiN’s experience, networks of 32 to 450 nodes per square mile would seem reasonable.
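For what it’s worth, that 32–450 range is simply the ideal counts from the recalculated table with the full 100% allowance applied – a guessed worst-case cushion, not measured data:

```python
# Ideal nodes per square mile from the recalculated table:
# radius (feet) -> nodes
ideal_counts = {1000: 16, 500: 64, 250: 225}
headroom = 1.0  # assume 100% extra nodes for real-world siting

low = min(ideal_counts.values()) * (1 + headroom)
high = max(ideal_counts.values()) * (1 + headroom)
print(int(low), int(high))  # 32 450
```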
