
Server Too Busy


Marky


I keep getting the Server Too Busy error with the following stack trace:

 

Stack Trace:

 

[HttpException (0x80004005): Server Too Busy]

System.Web.HttpRuntime.RejectRequestInternal(HttpWorkerRequest wr) +146

 

Not a good sign of health.

 

--Marky


S or M or T or W, lately it's been almost every day again. Either way, this is ridiculous. Once they took on paying members, they took on the obligation of giving us a site that works. It's taking me almost 45 seconds to get to any page I want to look at, and believe me, I have a FAST connection! The only thing that seems to be OK is the forums.

Edited by Firehouse16

Still very slow, often "too busy" or "timed out" and not just because it's a Sunday/weekend. There is another problem, probably the same one Jeremy mentioned this past week.

 

Very frustrating on a day when the weather wasn't good for caching either and I was trying to log some earlier finds, but I'm sure that he or another admin is "workin' on it." :(


It turned out that the firewall was the culprit. Imagine a drunk bouncer (it's the best analogy I have). The bouncer wasn't even recognizing its own members half the time and was booting them, which caused a cascading problem throughout the servers.

 

The machines have been rebooted (including the firewall) and it has seemed to clear up somewhat. We'll continue to monitor the site to make sure it is ok.

Yes, the server works again, the server works again!

 

(shouted to imitate Steve Martin's enthusiasm upon receiving the new phone books in the movie "The Jerk")

<sighting target through scope>There you are, you typical random cacher!</sighting>

Yes, you spotted me, but please don't shoot. :(

 

Even if I am "typical" and "random," I try to be proactive instead of negative and that's the only reason I even mentioned the slowness of the server, not to complain.

 

Returning to the topic, I was trying to show my gratitude for Jeremy fixing the problem with the gc site's firewall.

Yes, the server works again, the server works again!

 

(shouted to imitate Steve Martin's enthusiasm upon receiving the new phone books in the movie "The Jerk")

<sighting target through scope>There you are, you typical random cacher!</sighting>

Yes, you spotted me, but please don't shoot. :(

 

Even if I am "typical" and "random," I try to be proactive instead of negative and that's the only reason I even mentioned the slowness of the server, not to complain.

 

Returning to the topic, I was trying to show my gratitude for Jeremy fixing the problem with the gc site's firewall.

Yeah, I know you were... it's always a big relief from panic when that first page of gc.com finally pops back into view after some downtime!!! :D

It turned out that the firewall was the culprit. Imagine a drunk bouncer (it's the best analogy I have). The bouncer wasn't even recognizing its own members half the time and was booting them, which caused a cascading problem throughout the servers.

 

The machines have been rebooted (including the firewall) and it has seemed to clear up somewhat. We'll continue to monitor the site to make sure it is ok.

I'm curious as to what kind of firewall you are using (hey, I'm an IS Security Analyst) and what platform it runs on. Anyway, I can understand if you don't want to answer that question; some companies are touchy about giving that out. Hell, if I poked around a bit, I could probably figure it out myself, but I know better than to do that kind of thing.

 

--RuffRidr

It turned out that the firewall was the culprit. Imagine a drunk bouncer (it's the best analogy I have). The bouncer wasn't even recognizing its own members half the time and was booting them, which caused a cascading problem throughout the servers.

 

The machines have been rebooted (including the firewall) and it has seemed to clear up somewhat. We'll continue to monitor the site to make sure it is ok.

I knew as soon as you would try to log your finds you would discover the problem. :(


Oops. I was wrong about the firewall.

 

It's really a traffic issue. We're running into a situation where traffic gets to a point where the db is completely tapped, which means the web server is waiting too long for the database to return data. It then goes into a vicious circle of crashes as it tries to keep up.

 

There are two solutions we're working toward. One, we're adding a BigIP load balancer that will allow us to create a web farm. Much of the data on the site is stored in memory, but the server runs out of memory too quickly. By using BigIP we can split the number of users connecting to each machine, which will allow each one to get a breather.
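The idea of splitting users across a web farm can be sketched in a few lines. This is a hypothetical round-robin distribution, not how BigIP is actually configured; the server names are made up for illustration:

```python
from itertools import cycle

# Hypothetical pool of web servers sitting behind a load balancer.
servers = ["web1", "web2", "web3"]
next_server = cycle(servers)

def route(request_id):
    """Round-robin routing: each incoming request lands on the next
    server in the pool, so no single box carries every user's load."""
    return next(next_server)

# Six requests spread evenly across the three machines.
assignments = [route(i) for i in range(6)]
```

Real load balancers add health checks and session persistence on top of this, but the core benefit is the same: each machine sees only a fraction of the connections.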

 

Additionally, we will be adding another database and making it read-only. This will allow the more detailed queries to go to one box, and updates go to another machine.
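Read/write splitting like this is often done in front of the database driver. A minimal sketch, assuming the simplest possible rule (SELECTs go to the replica, everything else to the primary); the connection names are invented for illustration:

```python
# Hypothetical read/write splitter: detailed read-only queries go to
# the replica box, updates go to the primary database.
READ_REPLICA = "db-readonly"
PRIMARY = "db-primary"

def pick_database(sql):
    """Route a statement by its leading SQL verb."""
    verb = sql.strip().split()[0].upper()
    return READ_REPLICA if verb == "SELECT" else PRIMARY

reads = pick_database("SELECT * FROM caches WHERE state = 'WA'")
writes = pick_database("UPDATE logs SET note = 'TFTC' WHERE id = 1")
```

One caveat with this scheme is replication lag: a user who just posted a log may not see it on the read-only box for a few seconds.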

 

These solutions will occur over the next month. Afterwards, we can just add more web servers or databases as traffic increases.


I hate to state the obvious here, especially where caching seems to be everybody's expertise,... but maybe the solution is to buy a couple of gigs of RAM and cache the whole db? I guess one obvious drawback is that it would be expensive. Is that the only limitation?


Considering most consumer boards can only handle 1GB of RAM, and even then can only address a max of 2GB, just throwing RAM at the situation isn't a solution. You need a server that can actually handle the needed amount of RAM properly.

 

Thorin


My guess is that if they are talking about a farm, they have servers that are likely on the high end: real servers that can handle gigs of RAM, not desktop boxes running server software. I also suspect they have added RAM already; that just tends to be an easier solution to try before going to a farm.

My guess is that if they are talking about a farm, they have servers that are likely on the high end: real servers that can handle gigs of RAM, not desktop boxes running server software. I also suspect they have added RAM already; that just tends to be an easier solution to try before going to a farm.

Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own.

 

I have no doubt that gc.com is actually running on some fairly decent hardware, but even then throwing RAM at it doesn't solve the problem if the server is maxed already. I wish you could just keep throwing RAM at servers, but unfortunately they max out eventually too. We have a few Sun boxes here that are maxed at 4GB and 2GB respectively, and we'd love to just add more RAM but can't :rolleyes:

 

Edit: Just checked out the F5 Networks BigIP stuff, that's pretty sweet HW they have in the offering.

 

Thorin

Edited by thorin
Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own.

Yeah, but remember the good old days when throwing RAM at the problem always worked? And covered up a bunch of sloppy coding that you never felt like going back and cleaning up. Man, life was so simple then. Funny to talk about 4 gigs of RAM when I remember a 30 meg hard drive being a big deal.

Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own.

Yeah, but remember the good old days when throwing RAM at the problem always worked? And covered up a bunch of sloppy coding that you never felt like going back and cleaning up.

Hey, who told you about that? No one was ever supposed to dig that piece of code up :rolleyes:

 

Thorin

Agreed. I was just trying to point out to Hynr that throwing RAM at the problem isn't really a solution on its own.

Yeah, but remember the good old days when throwing RAM at the problem always worked? And covered up a bunch of sloppy coding that you never felt like going back and cleaning up. Man, life was so simple then. Funny to talk about 4 gigs of RAM when I remember a 30 meg hard drive being a big deal.

Hey, I remember when 32K was a big chunk of RAM. I think it was about that time that Bill Gates said 64K was as much as anyone would ever need. (At least I think that is what he said. I know if I am wrong someone will come up with the correct quote and markwell it.)

 

Does anyone else remember RAM in kilobytes?

Edit: Just checked out the F5 Networks BigIP stuff, that's pretty sweet HW they have in the offering.

We use Radware here where I work. Very similar to the stuff from F5. It is pretty sweet how that stuff works. Here, if our firewalls, web servers, etc. get overloaded, we can throw another one together, put it behind the Radware, and it instantly grabs off some of the load. Cool technology. Not cheap.

 

--RuffRidr


The cost difference between Radware and BigIP was nominal, and we liked more of the features of BigIP. We haven't purchased the solution yet, so it may be good to pick your brain about it. Eliasbone is spec'ing out the requirements, so I'll ask him.


If there is 4 gig of RAM in the system, how much of it is being used to cache the database and its index files? Is there a way to get a dedicated cache for just those files?

 

Another way to address this issue is to see if there is a way to have less data transfer per user action. If we assume that the most common user action is pulling up a single web page, then perhaps there is stuff on that page that could be omitted. I notice that there are two maps on each cache page, and I never ever look at one of them (the little one), typically just under 2000 bytes. I wouldn’t even notice if you dropped it. Replace the yellow and green graphic stars (168 and 164 bytes respectively) with simple asterisk characters (1 byte each, plus maybe 15 more bytes of HTML code to make them colorful). I see other icons on that screen that could be eliminated or replaced as well. I know that you want to have a snazzy-looking page, but I think it could be leaner.
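Using the byte counts quoted above, the per-page savings from just these two changes add up quickly. A quick back-of-the-envelope calculation:

```python
# Rough per-page savings, using the figures quoted in the post.
small_map = 2000            # the unused second map, ~2000 bytes
stars_before = 168 + 164    # yellow and green graphic stars
stars_after = 2 * (1 + 15)  # asterisk char plus ~15 bytes of HTML each

saved_per_page = small_map + stars_before - stars_after  # bytes saved
```

That is roughly 2.3 KB saved on every cache page view, which multiplies across every page the server sends.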

 

Another common request is the “My Cache Page” listing. Give us more options to turn stuff off (e.g. I can live without the calendar display). Most of the time I don’t even see the bottom 80% of that page (cache finds from a month ago), so give us a setting to show only the last Y finds, or just the last 10 days rather than 30. Suddenly there is 66% less stuff to look up in the database, and 66% less stuff to transmit to us.

 

The listing of caches (“Filter Finds,” for instance) seems quite lean to me. I think it is very well done as it is, without any undue graphic glitz. Here I only wonder what database access occurred to make it happen, and whether all the implied database look-ups happened for the pages that are not displayed. I hope not. I’d hate to think that every time I run that page, the search engine goes and reads all 4216 records which it claims to have “found” as a result of my query. I’m going to assume it ain’t so; if it is in fact doing all that work just to show me 25 records, then that is another place where some efficiencies could be generated.
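The efficiency being asked for here is ordinary pagination: the database should fetch only the 25 rows being shown, not all 4216 matches. A hypothetical sketch of building such a query (table and column names are made up):

```python
# Hypothetical paginated query builder: fetch only the page being
# displayed instead of every matching record.
def page_query(table, page, page_size=25):
    """Return SQL for one page of results, 1-indexed."""
    offset = (page - 1) * page_size
    return (f"SELECT * FROM {table} "
            f"ORDER BY found_date DESC "
            f"LIMIT {page_size} OFFSET {offset}")

sql = page_query("finds", 3)  # rows 51-75 only
```

A separate cheap `SELECT COUNT(*)` (or a cached count) can still report the "4216 found" total without pulling the rows themselves.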

 

I would also like to suggest some sort of offline mechanism for logging caches. If we could do all the work offline and transmit the logs in one batch, that would save tons of server activity.
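The offline-logging idea amounts to queueing work locally and submitting it in one round trip. A minimal sketch, assuming a hypothetical batch endpoint; the field names are invented for illustration:

```python
import json

# Hypothetical offline log queue: compose finds locally, then
# submit them all in a single request instead of one per log.
queue = []

def log_offline(cache_id, note):
    """Stage a find locally; no server contact yet."""
    queue.append({"cache": cache_id, "note": note})

def submit_batch():
    """Serialize every staged log into one payload and clear the queue.
    In practice this would be POSTed to the site in a single request."""
    payload = json.dumps(queue)
    queue.clear()
    return payload

log_offline("GC1234", "Found it while the server was down!")
log_offline("GC5678", "TFTC")
batch = submit_batch()
```

One batch of N logs replaces N separate page loads, so the server does one insert transaction instead of rendering N logging pages.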

This topic is now closed to further replies.