We have a web application that lets users trigger a request to an external resource. The external resource takes an unspecified amount of time to gather results, so we have to poll it for updates and collect the final results when they are done.
We would like requests to be added to a queue when the user triggers them; a number of worker threads then pick up each request and do the polling, while the user does other things.
Since the number of requests varies a lot during the day, we figure it would be a waste of resources to have lots of workers doing nothing when it's slow, but at the same time we need enough workers to handle the peak load on the system.
We would like something that could add more workers when there are a lot of requests waiting, but kill off workers when there's little to do.
It is/was possible to do this with EJB, but we don't want to use that. We also don't want to use JMS or other large scale frameworks to handle this, unless it's one we're already using (Spring, Quartz, lots of Apache stuff).
As EJB has support for this, and it's one of the more useful features found there, we imagine that someone has already solved this problem for us. Suggestions?
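For context, `java.util.concurrent.ThreadPoolExecutor` from the JDK seems close to what we're after: a fixed set of core threads, extra threads created under load up to a cap, and a keep-alive timeout that reaps the extras when they go idle. A minimal sketch of the configuration we have in mind (the class name and the 2/10/30s numbers are made up for illustration):

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PollingWorkerPool {

    // Builds a pool that keeps 2 core threads alive, grows to 10 threads
    // under load, and kills the extra threads after 30 seconds of idleness.
    public static ThreadPoolExecutor buildPool() {
        return new ThreadPoolExecutor(
                2,                          // core threads, always kept alive
                10,                         // hard cap for peak load
                30, TimeUnit.SECONDS,       // idle timeout for extra threads
                // A SynchronousQueue hands each task directly to a thread,
                // which forces the pool to grow toward the max under load.
                // With an unbounded LinkedBlockingQueue the pool would never
                // grow past the core size, since tasks would queue instead.
                new SynchronousQueue<Runnable>());
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = buildPool();
        for (int i = 0; i < 8; i++) {
            final int id = i;
            pool.execute(() -> {
                // Placeholder for the real work: poll the external
                // resource until its results are ready.
                try { Thread.sleep(200); } catch (InterruptedException ignored) { }
                System.out.println("request " + id + " done");
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println("largest pool size: " + pool.getLargestPoolSize());
    }
}
```

One caveat with the `SynchronousQueue` hand-off: once all 10 threads are busy, further `execute` calls are rejected, so a `RejectedExecutionHandler` (e.g. `CallerRunsPolicy`) would presumably be needed for overflow. We're not sure this covers everything EJB gives us, hence the question.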