Monday, August 30, 2010

Load Balancer Algorithm Explained

I've spent about two days googling for the best way heavy-load Rails sites are handled. I'm having trouble with mongrel clusters: one of the mongrels usually dies under a heavy request, yet nginx keeps forwarding requests to that frozen mongrel (using nginx's simple load balancer). I was curious what the options are to solve this. I have looked at several solutions: the simple nginx load balancer, the fair nginx load balancer, HAProxy, and finally Phusion Passenger. I didn't look into deploying the Rails app on JRuby, or into Unicorn.

Simple Load Balancer
By default, nginx is only a simple round-robin load balancer. It just sends requests to the backends (mongrels in this case) in order (a, b, a, b, ...). The round-robin algorithm is often acceptable: if every request finishes within a few milliseconds, there's no problem. In this setup, each backend has its own queue for pending requests. If a heavy request comes in, the requests queued behind it on that backend have to wait until the long-running request finishes before they can be processed. So if that heavy request takes longer than 60 seconds, those queued requests will never be processed because they will already have timed out.
Another problem is that nginx floods the mongrels with too many requests; mongrel's request queue is not solid, and it quickly gets stuck with too many queued requests.
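The round-robin behavior and its head-of-line blocking problem can be sketched in a few lines of Python. This is a toy model, not nginx internals; the class and backend names are illustrative:

```python
from collections import deque

# Toy model of round-robin dispatch: the balancer cycles through
# backends in order and never looks at how deep each queue is.
class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = backends
        self.queues = {b: deque() for b in backends}
        self.next = 0

    def dispatch(self, request):
        backend = self.backends[self.next % len(self.backends)]
        self.next += 1
        self.queues[backend].append(request)
        return backend

lb = RoundRobinBalancer(["mongrel_a", "mongrel_b"])
for req in ["r1", "r2", "r3", "r4"]:
    lb.dispatch(req)

# r1 and r3 both land on mongrel_a: if r1 is a 60-second request,
# r3 sits behind it even while mongrel_b may be idle.
print(lb.queues["mongrel_a"])  # deque(['r1', 'r3'])
```

The point is that assignment happens at arrival time, with no feedback about how busy a backend already is.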

Here is the configuration of nginx simple load balancer.
http {
  upstream myproject {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
  }

  server {
    listen 80;
    server_name www.domain.com;
    location / {
      proxy_pass http://myproject;
    }
  }
}

Fair Load Balancer
Seeing this problem, a guy from Engine Yard wrote a patch adding a module to nginx called the fair load balancer. What it basically does is track how many requests each backend is processing, and avoid sending further requests to backends that are already busy. By default, the fair load balancer assigns a request to an idle backend first; when all are busy, it uses a weighted least-connection round-robin (WLC-RR) algorithm, meaning it assigns requests using a score that depends primarily on the number of requests already assigned to each peer and, when those are equal, favors the peer that was assigned a request earliest.
Another mode: whenever the first backend is idle, it gets the next request. If it's busy, the request goes to the second backend, unless that's busy too, and so on.
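My reading of that selection rule can be sketched in Python. This is a simplification of the module's actual scoring arithmetic, just enough to show the tie-breaking order:

```python
# Simplified fair-balancer selection: prefer an idle backend;
# otherwise pick the fewest in-flight requests, breaking ties by
# the backend that was assigned a request longest ago.
def pick_backend(backends):
    """backends: list of (name, in_flight, last_assigned) tuples."""
    idle = [b for b in backends if b[1] == 0]
    if idle:
        return idle[0][0]
    # least-connection first; on a tie, earliest last_assigned wins
    return min(backends, key=lambda b: (b[1], b[2]))[0]

backends = [("a", 1, 5), ("b", 3, 2), ("c", 3, 1)]
print(pick_backend(backends))  # a: fewest in-flight requests
```

Note that "fewest in-flight" is still a count of requests, not a measure of how long those requests will take; that is exactly the gap the scenario below exposes.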

As you can see, each backend still has its own request queue, and it can run into this scenario:
Suppose we have 4 backends, each with 3 requests in its queue, except for the first backend, which has only 1 request in its queue.
Backend process A: [* ] (1 request in queue)
Backend process B: [*** ] (3 requests in queue)
Backend process C: [*** ] (3 requests in queue)
Backend process D: [*** ] (3 requests in queue)

Next, a new request comes in, so backend A will have 2 requests, where X is the new request.
Backend process A: [*X ] (2 requests in queue)
Backend process B: [*** ] (3 requests in queue)
Backend process C: [*** ] (3 requests in queue)
Backend process D: [*** ] (3 requests in queue)

Assuming that B, C, and D are still processing their queues, when the next request (call it Y) comes in, it will be forwarded to backend A because it has the shortest queue.
Backend process A: [*XY ] (3 requests in queue)
Backend process B: [*** ] (3 requests in queue)
Backend process C: [*** ] (3 requests in queue)
Backend process D: [*** ] (3 requests in queue)

The problem arises if backend A needs 60 seconds to process X: Y will never be processed because it will have timed out. It would be much better if Y were forwarded to one of the other backends.

Here is the configuration:
upstream mongrel {
  fair;
  server 127.0.0.1:5000;
  server 127.0.0.1:5001;
  server 127.0.0.1:5002;
}


HAProxy
Rails was traditionally and mostly deployed using Apache or nginx with either a built-in or standalone proxy (like HAProxy) in front of a cluster of Mongrels or Unicorns.
HAProxy is a free, very fast and reliable solution offering high availability, load balancing, and proxying for TCP and HTTP-based applications.


If using HAProxy, the setup is similar to this:

Web => Nginx => HAProxy => Mongrel

In this setup, nginx just proxies to a single HAProxy instance, and HAProxy has all the mongrels registered. HAProxy's queue is much more stable, and it balances requests across backends better than nginx does. HAProxy has better support for upstream ok/fail detection and can also limit each app server to 1 connection (the maxconn directive), which is key for mongrel. HAProxy limits the number of requests sent to each mongrel instead of blindly forwarding them. Since each mongrel only ever has 1 request to process, the application scales better because the earlier problem never happens: one heavy request won't affect other requests, since HAProxy only forwards a request to an idle mongrel.
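The idea behind maxconn can be sketched like this. This is a toy dispatcher, not HAProxy's actual implementation; the key difference from before is that the queue lives at the proxy, not at a backend:

```python
from collections import deque

# Toy maxconn-style dispatcher: requests are only handed to an idle
# backend; everything else waits in one shared queue at the proxy.
class MaxConnProxy:
    def __init__(self, backends, maxconn=1):
        self.busy = {b: 0 for b in backends}
        self.maxconn = maxconn
        self.queue = deque()  # shared queue lives at the proxy

    def dispatch(self, request):
        for backend, n in self.busy.items():
            if n < self.maxconn:
                self.busy[backend] += 1
                return backend  # forwarded immediately
        self.queue.append(request)
        return None  # held at the proxy, not behind a slow backend

    def finish(self, backend):
        self.busy[backend] -= 1
        if self.queue:
            self.dispatch(self.queue.popleft())

proxy = MaxConnProxy(["m1", "m2"])
print(proxy.dispatch("slow"))  # m1
print(proxy.dispatch("fast"))  # m2
print(proxy.dispatch("next"))  # None: queued at the proxy
proxy.finish("m2")             # "next" now goes to the idle m2
```

Because "next" waits at the proxy, it goes to whichever backend frees up first; it can never be stuck behind the 60-second request on m1.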

One downside of this setup is that it adds a little extra latency.
Another is that it's more complicated to set up and administer than Phusion Passenger, a.k.a. mod_rails or mod_rack.

If you just want the advantage of the maxconn directive, try nginx-ey-balancer. One thing to keep in mind is that max_connections applies per nginx worker.
upstream mongrels {
  server 127.0.0.1:8001;
  server 127.0.0.1:8002;
  max_connections 1;
}
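Since the limit is per worker, the effective concurrency cap on each backend is the product of the two settings. A quick worked example (the worker count of 4 is just an illustration):

```python
# max_connections in nginx-ey-balancer is enforced per nginx worker,
# so the real concurrency cap on each backend is:
def effective_cap(worker_processes, max_connections):
    return worker_processes * max_connections

# with worker_processes 4 and max_connections 1, each mongrel
# can still see up to 4 concurrent requests:
print(effective_cap(4, 1))  # 4
```

So max_connections 1 only gives you a true one-request-at-a-time mongrel if nginx runs a single worker.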

Phusion Passenger
Phusion Passenger is an Nginx module, which makes deploying Ruby and Ruby on Rails applications on Nginx a breeze.

It's a great tool for deploying Rails and Rack-based Ruby applications. Passenger has both nginx and Apache modules. Passenger comes with a feature called global queuing, which works similarly to what I described above.

The good thing about Passenger is that it's easier to configure than HAProxy. It has a couple of strategies for loading a Rails application (smart, smart-lv2, conservative). It can even reduce memory usage by about 33% when used with Ruby Enterprise Edition. Installation is just a matter of installing the passenger gem, which is kept up to date and supports Ruby 1.9.

I won't dig into detail about Passenger; you can go through the documentation yourself.
