Last month, two giants held out the prospect of faster broadband Internet connections to the home. First, Google announced that it was going to build a trial fiber network designed to deliver 1 Gbps bandwidth to subscribers, about 100 times faster than most broadband services. Then the Financial Times reported that Cisco was also developing [...]
[url=http://www.hdtvmagazine.com/columns/2010/03/hdtv_almanac_broader_band_on_the_way.php]Read Column[/url]
HDTV Almanac - Broader Band on the Way
alfredpoor - Major Contributor
Posts: 1805 | Joined: Wed May 13, 2009 9:27 am
Roger Halstead - Major Contributor
Posts: 210 | Joined: Sun Feb 26, 2006 4:13 pm
It's certainly needed, but is it anywhere near enough?
1 Gbps. Is it enough? I think it goes without saying that it'd certainly help. Last week during the Olympics, streaming video in this area was intermittent at best, with pauses in both video and audio. I have a 10 Mbps connection, but that tells me very little, as I have no idea where the bottleneck is. I do know that if I try downloading anything on one of the other computers while streaming video, the stream becomes intermittent.
I'm out in the country, in a small rural subdivision. There is a tremendous amount of dark fiber near here. Could it be put to good use and be economically viable? How much breathing space would moving from the current backbones to 1 Gbps give us? How fast would the bandwidth fill up if it became available?
When it comes to streaming video -- which includes the latest TV programs and movies along with a tremendous library of older, classic TV and movies -- I seriously doubt 1 Gbps could come anywhere near keeping up with the demand once people learned all this material was available. I find that 1 Gbps is too slow for my home network when it comes to backing up my photography files, and I certainly do not see the cloud as either an acceptable or economically viable answer. Nor do I trust the so-called cloud; it certainly would not have met the requirements of the FDA-validated systems I installed as a project manager. The more people rely on the cloud, the more bandwidth will be required, and I don't think we have those kinds of resources to spend.
1 Gbps may give us some breathing room, but I doubt it will amount to much more than a year or two's worth.
alfredpoor - Major Contributor
Posts: 1805 | Joined: Wed May 13, 2009 9:27 am
Pieces of a complex problem
You raise a lot of good points, Roger. Let me respond to a few at random.
First, we're not talking about backbone speeds with the 1 Gbps; that's the speed to the end user. And I agree that this is not going to solve all the problems; if you blast 100 GB of data across your home gigabit network, you'll likely bring it to its knees for the duration of the transfer.
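To put a rough number on that, here is a back-of-the-envelope sketch, assuming the link actually sustains its full rated speed (Ethernet/IP/TCP framing overhead makes real-world numbers somewhat worse):
[code]
# How long does a 100 GB transfer monopolize a gigabit link?
# Assumes the link sustains its nominal rate; framing overhead
# means real links come in somewhat under this.

payload_gb = 100                    # gigabytes to move
link_gbps = 1.0                     # nominal link rate, gigabits per second

gigabits = payload_gb * 8           # 100 GB = 800 gigabits
seconds = gigabits / link_gbps      # = 800 seconds
print(f"{seconds:.0f} s (~{seconds / 60:.1f} minutes)")  # ~13.3 minutes
[/code]
So even at full speed, the link is tied up for well over ten minutes of the transfer.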
However, there are many pieces to the puzzle. For example, it is possible to give some packets priority over others. In the case of your photo backups, it is possible (in theory) to assign your incoming video stream top priority, so that the data backup task gets network bandwidth only when the video stream has satisfied its buffer. This is an area known as "quality of service" (QoS), and you can find a lot of technical discussions of it on the Web. This concept of varying priorities can also be applied to Internet transmissions (it's just a big network, after all), and could lead to more reliable streaming applications.
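As a minimal illustration of the concept -- a toy sketch, not any real router's implementation; the class numbers and packet labels are invented for the example -- a strict-priority scheduler always drains the high-priority queue before touching the bulk queue:
[code]
from collections import deque

# Toy strict-priority scheduler: class 0 (video) always transmits before
# class 1 (bulk backup). Real QoS schedulers usually add rate limits or
# weighted fairness so the low class can't be starved forever.

class PriorityScheduler:
    def __init__(self, num_classes):
        self.queues = [deque() for _ in range(num_classes)]

    def enqueue(self, priority, packet):
        self.queues[priority].append(packet)

    def dequeue(self):
        for queue in self.queues:      # scan from highest priority down
            if queue:
                return queue.popleft()
        return None                    # nothing waiting to send

sched = PriorityScheduler(2)
sched.enqueue(1, "backup-chunk-1")
sched.enqueue(0, "video-frame-1")
sched.enqueue(1, "backup-chunk-2")
sched.enqueue(0, "video-frame-2")

while (pkt := sched.dequeue()) is not None:
    print(pkt)  # video frames first, then the backup chunks
[/code]
The catch is visible right in the loop: as long as video frames keep arriving, the backup never gets a turn, which is why practical schedulers temper strict priority with weights or rate caps.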
On the other hand, QoS is another way of looking at the "net neutrality" debate. If a certain type of data stream is given priority, what's to keep a service provider (or other link in the chain) from giving preferential service to "friendly" packets, while shoving competitors' packets to the bottom of the queue?
There are all sorts of bottlenecks in the Internet. There are the main trunk lines; many areas have plenty of redundancy in this regard, but as we saw when the Mediterranean cables got cut a few years back, some countries depend on a very small number of lines for their Internet service. There are the routers that decide how packets should be sent to their destination. (Play with the TRACEROUTE command if you're curious about how this works.) And then there's the local service provider. Their capacity is limited by the size of the line they have coming in. And some systems -- such as cable -- put multiple customers on the same line for the data delivered to the house. I know a number of cable users who can tell when school gets out because their Internet throughput drops like a rock.
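For the curious, the trick behind traceroute fits in a few lines. This is a sketch under stated assumptions -- Unix-style sockets, root privileges for the raw ICMP socket, and a placeholder hostname. Each probe goes out with a slightly larger time-to-live, so each router along the path in turn gives itself away with an ICMP "time exceeded" reply:
[code]
import socket

# Minimal traceroute sketch (Unix; needs root for the raw ICMP socket).
# A UDP probe with TTL=n dies at the n-th router, which reports back
# via ICMP "time exceeded" -- revealing one hop per probe.

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                             socket.getprotobyname("icmp"))
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                             socket.getprotobyname("udp"))
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        recv.settimeout(timeout)
        send.sendto(b"", (dest_addr, port))
        try:
            _, (hop_addr, _) = recv.recvfrom(512)
        except socket.timeout:
            hop_addr = None            # router didn't answer in time
        finally:
            send.close()
            recv.close()
        print(ttl, hop_addr or "*")
        if hop_addr == dest_addr:      # reached the destination itself
            break

traceroute("www.example.com")          # placeholder destination
[/code]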
The point of this 1 Gbps service to the home is that the end user should see much better performance. It assumes that the service provider has sufficient capacity to handle a sufficient number of subscribers at the same time (not a safe assumption based on current practice) and that the rest of the infrastructure can handle it. However, I expect that it will handle our needs for quite a while into the future. Keep in mind that we're also making progress on factors such as compression technology, which also effectively increases the bandwidth of the system.
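A quick way to see the compression point in action -- a toy demonstration; the gain depends entirely on the data, and video that is already MPEG- or H.264-compressed gains almost nothing more:
[code]
import zlib

# If data shrinks 4:1 before it hits the wire, the link effectively
# carries 4x the payload in the same time. Repetitive text compresses
# well; already-compressed video barely compresses at all.

text = b"the quick brown fox jumps over the lazy dog. " * 200
packed = zlib.compress(text, 9)      # maximum compression level

ratio = len(text) / len(packed)
print(f"{len(text)} bytes -> {len(packed)} bytes "
      f"(~{ratio:.0f}x effective bandwidth on this data)")
[/code]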
Alfred
Roger Halstead - Major Contributor
Posts: 210 | Joined: Sun Feb 26, 2006 4:13 pm
Re: Pieces of a complex problem
I agree with almost all of what you say, but so much depends on a lot of "yeah, buts" in the future.
I should add that computer science is my degreed field, and I was working as a project manager on fairly large systems when I retired.
[quote="alfredpoor"]First, we're not talking about backbone speeds with the 1 Gbps;[/quote]
Understood.
[quote="alfredpoor"]that's the speed to the end user. And I agree that this is not going to solve all the problems; if you blast 100 GB of data across your home gigabit network, you'll likely bring it to its knees for the duration of the transfer.[/quote]
True, although the router and switch may or may not keep the data transfer across the network from bothering the download.
[quote="alfredpoor"]However, there are many pieces to the puzzle. For example, it is possible to give some packets priority over others. In the case of your photo backups, it is possible (in theory) to assign your incoming video stream top priority, so that the data backup task gets network bandwidth only when the video stream has satisfied its buffer. This is an area known as "quality of service" (QoS), and you can find a lot of technical discussions of it on the Web.[/quote]
And open a whole new can of worms.
[quote="alfredpoor"]This concept of varying priorities can also be applied to Internet transmissions (it's just a big network, after all), and could lead to more reliable streaming applications.[/quote]
Of course, the gamers might have a slightly different view and set of priorities.
[quote="alfredpoor"]On the other hand, QoS is another way of looking at the "net neutrality" debate. If a certain type of data stream is given priority, what's to keep a service provider (or other link in the chain) from giving preferential service to "friendly" packets, while shoving competitors' packets to the bottom of the queue?[/quote]
And what determines which of the original streams gets the higher priority? With cell towers and emergency services it's pretty much a foregone conclusion, but few users realize how few simultaneous calls can go through a tower, or how little backup power the towers have on hand. Unfortunately, be it Skype, data, streaming video, gaming, or P2P, it all depends on who you ask and when. These are mundane, day-to-day entertainment data streams, and they are really of about equal importance.
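Mechanically, the priority request is just six bits in each packet's IP header -- the DSCP field. Here is a hedged sketch of how an application can mark its own traffic; the address and port are placeholders, and every router along the path is free to ignore or rewrite the marking, so this is a request, not a guarantee:
[code]
import socket

# Mark a UDP socket's outgoing packets with DSCP "Expedited Forwarding"
# (EF, decimal 46), the class commonly used for voice and video traffic.
# DSCP occupies the upper six bits of the old TOS byte, hence the shift.

EF_DSCP = 46
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF_DSCP << 2)
sock.sendto(b"video payload", ("192.0.2.10", 5004))  # placeholder peer
[/code]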
[quote="alfredpoor"]There are all sorts of bottlenecks in the Internet. There are the main trunk lines; many areas have plenty of redundancy in this regard, but as we saw when the Mediterranean cables got cut a few years back, some countries depend on a very small number of lines for their Internet service. There are the routers that decide how packets should be sent to their destination. (Play with the TRACEROUTE command if you're curious about how this works.) And then there's the local service provider. Their capacity is limited by the size of the line they have coming in.[/quote]
Back in the "old days," when viruses and hacking were far less sophisticated, traceroute was quite useful. I had been receiving a lot of spam from a particular computer. I was able to reach it via traceroute (tracert), so a bit of playing with Telnet ... well ... As the computer was well identified and rather open, I figured it was infected (we hadn't heard of zombies yet). So I left the owner a polite message telling him who I was and that I figured his computer was infected, what with all the spam originating from it. About two days later I received a nice e-mail thanking me for the help. He had noticed the computer being slow but thought it was "just Windows."
[quote="alfredpoor"]And some systems -- such as cable -- put multiple customers on the same line for the data delivered to the house. I know a number of cable users who can tell when school gets out because their Internet throughput drops like a rock.[/quote]
I think the number of customers who share access may outnumber those who don't.
[quote="alfredpoor"]The point of this 1 Gbps service to the home is that the end user should see much better performance. It assumes that the service provider has sufficient capacity to handle a sufficient number of subscribers at the same time (not a safe assumption based on current practice)[/quote]
Much like the airlines overbooking. They sell us unlimited access and bandwidth -- until it interferes with other customers. I paid for *unlimited*, which was sold as *unlimited*, so using what I was sold -- bought and paid for -- to the limit, 24x7, is not being a bandwidth hog. Now, had they sold me unlimited connectivity with a bandwidth cap of so many gigs per day, week, or month, that would be different. But they advertised it one way and expect the customer to know they really didn't mean it that way. Selling bandwidth is perfectly legal, but showing preference to one form of data over another is not, at least for now.
[quote="alfredpoor"]and that the rest of the infrastructure can handle it. However, I expect that it will handle our needs for quite a while into the future. Keep in mind that we're also making progress on factors such as compression technology, which also effectively increases the bandwidth of the system.[/quote]
This is the part I'm not so sure about. Theoretically, the rest of the system should be able to take a lot of additional load, but a lot depends on the customers. When I had a 5 Mbps connection I used to "bottom out," but since they've increased it to between 10 and 15 Mbps I've never been able to get that kind of throughput. That leads me to believe that either their customer load prevents it, or their own bandwidth into the net limits it, or something downstream is causing it -- though I'm inclined to go with one of the first two.
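One way to narrow down which of those explanations holds is to measure actual throughput at different times of day and compare it against the advertised rate. A rough sketch, assuming a hypothetical test URL; a single small download mostly measures latency rather than sustained throughput, so the test file should be at least tens of megabytes:
[code]
import time
import urllib.request

# Time a sizable download and convert to Mbps. Run it at different hours
# (say, before and after school lets out) to see whether neighborhood
# load is what's eating the advertised rate.

TEST_URL = "http://speedtest.example.com/100MB.bin"   # hypothetical URL

start = time.time()
with urllib.request.urlopen(TEST_URL) as response:
    nbytes = len(response.read())
elapsed = time.time() - start

mbps = nbytes * 8 / elapsed / 1_000_000
print(f"{nbytes} bytes in {elapsed:.1f} s = {mbps:.1f} Mbps")
[/code]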
As a corollary to the old saw, "The job will expand to fill the allocated time," I see "The load will expand to fill the available bandwidth." The only question is how long it will take the load to catch up with the bandwidth, which has always been on the increase.
Let's hope the available bandwidth always stays ahead of the bandwidth the load needs.