
Bandwidth


Team Dougherty


The text at the bottom is from a discussion in the Geocaching Topics forum about other sites. I did not know how to quote from another thread, so I just cut and pasted.

 

 

I have been in the computer/network field for 10 years. I like to think I know what I am talking about. At least my boss hopes I know what I am doing. I work for an insurance company with 50 or so people.

 

Anyway, I see this topic come up from time to time: bandwidth costs money. I am not quite sure I understand this. We have a full T1, not the fastest, for our web and email servers. We have tons of email coming in and out, and we have clients, claimants, and others downloading all manner of forms in PDF format all day, every day. Our bill from our T1 provider is the same every month: the cost of the T1.

 

Now my questions:

 

From reading the post from the other forum, it sounds like Geocaching.com gets charged for every byte of data downloaded from the site. Is that true? That must be one heck of a bill.

If they do, is it because the site is hosted at a hosting center and not in house? Do all hosting centers charge for data downloaded? Would geocaching.com save money by just getting a T1 or T3 and hosting the website in-house?

 

I am just curious, because I have seen this brought up on other sites as well.

 

Paul

 

Below quoted from other forum.

 

QUOTE (Jeremy @ Mar 26 2005, 05:43 PM)

I don't discuss costs as a private company, but recent stats indicate that bandwidth has consistently doubled year over year. At one point last year, one month's traffic tripled compared to the same month the previous year. Bandwidth costs money for web sites, and it isn't peanuts.

Link to comment

There are probably others who know way more about this than I do. Also, I don't know what sort of pricing scheme is in place where gc.com is hosted.

 

But my understanding is that T1 or even T3 lines can have different upload/download speeds, and the more capacity you get, the more you pay. So no, you don't pay more for using the line more; you pay more for moving more data at the same time. At the least, gc.com has likely seen its price increase as it got a fatter pipe to its servers. That is what helps avoid delays when a bunch of people are hitting the servers at once. (There are server issues too, but that is a different topic; if people can't reach the servers, it doesn't matter how fast they are.)
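To make the "fatter pipe" point concrete, here is a rough back-of-the-envelope sketch. The line rates and the 50 KB average page size are assumptions for illustration only, not gc.com's real figures: capacity caps how many people can pull pages at the same moment, not how many bytes move over the month.

```python
# Rough illustration of why pipe size matters for concurrent users,
# not total bytes moved. Line rates and the 50 KB page size are
# assumptions for the sake of the example, not gc.com's real figures.

LINE_RATES_MBPS = {"T1": 1.544, "T3": 44.736, "fatter pipe": 100.0}
AVG_PAGE_KB = 50  # assumed average page weight

for name, mbps in LINE_RATES_MBPS.items():
    pages_per_second = (mbps * 1_000_000 / 8) / (AVG_PAGE_KB * 1000)
    print(f"{name:12s} ~{pages_per_second:6.1f} page loads per second")

# T1           ~   3.9 page loads per second
# T3           ~ 111.8 page loads per second
# fatter pipe  ~ 250.0 page loads per second
```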

 

Again, I don't know how gc.com's host charges. I have seen places do it both ways -- a flat fee based on the amount of bandwidth provisioned, and a charge for the bits actually moved across the wire.

 

Could gc.com do their own hosting, drop in their own T1/T3 lines, and avoid larger monthly charges if their pricing is based on the amount of bits moved? Sure, but that is not a small undertaking. They would then have to worry about a whole lot of other issues, such as power and redundancy in the internet connection -- the list goes on -- which is what they pay the hosting provider for. For the number of servers they are using, going to a hosting provider is probably the best answer. They would have to add a bunch more servers before they could justify the cost of bringing it in house and then paying someone to do the care and feeding.

Link to comment

I doubt if GC.com would pay per GB, but if they have to double their line speed every year, that's clearly going to cost more (unless prices per Mbit/s halve).

 

It's a bit like a bus company - if one extra person rides the bus, you'll manage, but if 60 people want to ride the bus and don't buy a ticket, you have to lay on an extra bus.

 

There is a general economic problem of charging back things which have an almost negligible individual cost but a high collective one. I read about it when I was in high school. That's why I'm not an economist. B)

Link to comment

The short of it is that bandwidth runs on a tiered pricing structure. As a company with rarely changing bandwidth requirements your costs stay static. In an increasing bandwidth situation you stay solid for a few months until you hit the next tier. Since we continue to double traffic year to year we continue to hit milestones in traffic and our overall bandwidth fees are raised.
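A minimal sketch of how that kind of tiered billing can work, using made-up tier boundaries and prices (the actual tiers and rates at gc.com's host are not public): the bill stays flat within a tier and steps up when sustained usage crosses into the next one.

```python
# Hypothetical tiered bandwidth pricing: the monthly fee is flat within
# a tier and jumps when sustained usage crosses into the next tier.
# Tier boundaries and prices below are invented for illustration only.

TIERS = [  # (max sustained Mbit/s covered by tier, monthly fee in dollars)
    (5, 1_000),
    (10, 1_800),
    (20, 3_200),
    (50, 7_000),
]

def monthly_fee(sustained_mbps: float) -> int:
    """Return the flat monthly fee for the tier covering this usage level."""
    for ceiling, fee in TIERS:
        if sustained_mbps <= ceiling:
            return fee
    raise ValueError("usage exceeds the largest priced tier")

# Traffic doubling year to year: the fee stays flat for a while,
# then steps up each time a tier boundary is crossed.
for mbps in (3, 6, 12, 24):
    print(f"{mbps:3d} Mbit/s sustained -> ${monthly_fee(mbps):,}/month")
```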

 

Redundancy is important which is why we use InterNap. Though even those guys run into issues when someone hits the emergency shutdown switch.

 

We have two cabinets at our hosting facility now. We do all our networking, security, etc, and only pay for rental fees, power, and bandwidth. We also own all our machines. This does significantly save on cost overall. Do some pricing on a monthly fee for backing up half a terabyte of data and you'll see what I'm writing about.

 

Also, we connect directly to fiber. If you don't know fiber it is like having a water pipe the size of Manhattan for moving traffic.

Link to comment

OK, I understand a little better now. I always wondered why some websites talked about bandwidth increases.

 

My company's cost for the T1 does not include a charge for data that is downloaded. There is not even a spot on the bill for it. The T1 is also 1.544 Mbit/s up and down. Maybe we got a deal because we have one T1 for 24 phone lines and one for data. My web, email, and webmail servers are right in our server room.

 

Paul

Link to comment

Last month (March) we averaged 12 Mbits/sec. That's like comparing apples to apple carts.

 

There is no such thing as a clog. Currently the only thing holding back bandwidth is the speed of the code and machines. We have access to an open pipe with unlimited bandwidth (at unlimited fees). They don't flip a switch and turn us off if we exceed bandwidth.
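As rough arithmetic on that figure (only the 12 Mbit/s average is given; the rest is back-of-the-envelope, assuming the average held all month), the "apples to apple carts" gap against a single T1 looks like this:

```python
# Convert a sustained average rate into monthly data volume, and compare
# it with the theoretical ceiling of a single T1. Only the 12 Mbit/s
# average comes from the thread; the rest is arithmetic.

SECONDS_PER_MONTH = 30 * 24 * 3600

def monthly_gigabytes(avg_mbps: float) -> float:
    """Data moved in a month at a constant average rate, in gigabytes."""
    return avg_mbps * 1_000_000 / 8 * SECONDS_PER_MONTH / 1e9

print(f"12 Mbit/s average  -> ~{monthly_gigabytes(12):,.0f} GB/month")
print(f"T1 ceiling (1.544) -> ~{monthly_gigabytes(1.544):,.0f} GB/month")
# 12 Mbit/s average  -> ~3,888 GB/month
# T1 ceiling (1.544) -> ~500 GB/month
```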

Link to comment

:o Wow, that is a lot. I knew geocaching.com would have a lot of data downloaded, but not that much. Our web site is puny by comparison. I feel better knowing I got a premium membership to help with that stuff.

 

Paul

Link to comment

Jeremy, do you think it would improve things if the default distance figure were reduced to less than 100 miles? It seems to me that you could be generating result sets that are considerably larger than they need to be. That value seems like one that would have been appropriate when the database was small and sparse; now it may be too big, even much too big. There might even need to be a limit on the number of rows returned.

 

This would obviously have no effect at all upon bandwidth use, but it might relieve bottlenecks on the server by reducing transaction time. I'm guessing that the biggest component of variability in server interactive response-time is the number of queries that are in-progress at that time, and/or queue buildup waiting for a shot. I'm also guessing that the pocket-query batch processing is handled in such a way that it doesn't impact online performance.
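As a hypothetical sketch of the idea (not gc.com's actual query code or schema; the default radius and row cap below are made up): shrinking the default radius and capping the number of rows both bound how much a single search has to compute, sort, and return.

```python
# Hypothetical illustration of bounding a nearest-caches search by both
# a search radius and a maximum row count. Not gc.com's actual code;
# the defaults are invented for the example.
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(a))  # mean Earth radius ~3959 miles

def nearby_caches(caches, lat, lon, radius_miles=25, max_rows=200):
    """Return at most max_rows caches within radius_miles, nearest first."""
    scored = [(haversine_miles(lat, lon, c["lat"], c["lon"]), c) for c in caches]
    within = [(d, c) for d, c in scored if d <= radius_miles]
    within.sort(key=lambda pair: pair[0])
    return [c for _, c in within[:max_rows]]

# With a 100-mile default, the result set (and the work to build and sort
# it) can be far larger than anyone actually scrolls through; a smaller
# default radius plus a hard row cap bounds both.
```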

Edited by HIPS-meister
Link to comment

 

Thanks for this and the previous post - very interesting, and very well put.

 

Puts things into perspective for me - appreciate it.

 

For those of you who might not have gotten all that - think of it this way -

 

We've got one heck of a big pipe and not enough horsepower on the pumps to push the water when we all have the tap on. That is not to say they can't handle us - just not ALL of us ALL at one time (like Saturday and Sunday evenings after we all come in from caching and wanna log all those caches). Ya - HA! Even before we dig out the Tecnu.

 

Now a question if you care to answer -

 

How many servers are there that collectively pump out this 12 Mbit/s?

 

I know you have a lot of equipment; I'm just wondering how much horsepower a single server can handle. This is new territory for me.

 

thanks -

 

cc\

Link to comment
This topic is now closed to further replies.