How to use the WorkQueue        

If you have some bigger tasks that could be done in parallel, or whose results are only needed later, why not make use of your idle CPU cores and handle them in parallel? This is where the WorkQueue comes in. A WorkQueue takes requests, executes them and notifies you via a callback.

Defining the request

Your class handling the requests needs to implement some interfaces like this:

class Foo : public WorkQueue::RequestHandler, public WorkQueue::ResponseHandler
{
    /// Implementation for WorkQueue::RequestHandler
    WorkQueue::Response* handleRequest(const WorkQueue::Request* req, const WorkQueue* srcQ);
    /// Implementation for WorkQueue::ResponseHandler
    void handleResponse(const WorkQueue::Response* res, const WorkQueue* srcQ);
};


handleRequest() is the code that gets executed in parallel in a worker thread, and handleResponse() is the callback invoked once the request is done.
You can also decide whether you can currently handle requests and responses by overriding these methods:

bool canHandleResponse(const WorkQueue::Response* res, const WorkQueue* srcQ)
bool canHandleRequest(const WorkQueue::Request* req, const WorkQueue* srcQ)
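
As a minimal sketch of what such overrides might look like (mShuttingDown is a hypothetical member flag of Foo, not part of the WorkQueue API), you could refuse new work while the object is being torn down:

bool Foo::canHandleRequest(const WorkQueue::Request* req, const WorkQueue* srcQ)
{
    // Refuse aborted requests and anything that arrives while we are shutting down.
    return !req->getAborted() && !mShuttingDown;
}

bool Foo::canHandleResponse(const WorkQueue::Response* res, const WorkQueue* srcQ)
{
    return !mShuttingDown;
}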



Note that you basically do not have access to the render system within handleRequest(), as it runs in another thread! You could create your WorkQueue with "workersCanAccessRenderSystem" set to true, but then you get performance drawbacks due to heavy locking. So it is usually better to handle only CPU-heavy work in the request.
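
If you really do need render-system access from workers, a rough sketch of enabling it could look like the following. This assumes the queue returned by Root is a DefaultWorkQueueBase (the stock implementation) and that you change the setting before the queue is started:

// Assumption: Root's work queue is a DefaultWorkQueueBase (true for the stock build).
// Do this before the queue starts up, i.e. before Root::initialise().
Ogre::DefaultWorkQueueBase* dwq =
    static_cast<Ogre::DefaultWorkQueueBase*>(Ogre::Root::getSingleton().getWorkQueue());
dwq->setWorkersCanAccessRenderSystem(true); // expect heavy locking in return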

You will likely need some data within handleRequest and handleResponse. A simple struct is usually enough:

struct FooRequest
{
    int a;
    int b;

    friend std::ostream& operator<<(std::ostream& o, const FooRequest& r)
    {
        return o;
    }
};


Note the implementation of operator<<; the WorkQueue requires it for any type you pass as request data.

A possible implementation of the two functions could now look like this:

WorkQueue::Response* Foo::handleRequest(const WorkQueue::Request* req, const WorkQueue* srcQ)
{
    FooRequest cReq = any_cast<FooRequest>(req->getData());
    // Do some heavy work.

    // Indicate that everything went well. In case of an error, return a response
    // with false as the second parameter.
    return OGRE_NEW WorkQueue::Response(req, true, Any());
}

void Foo::handleResponse(const WorkQueue::Response* res, const WorkQueue* srcQ)
{
    // Only do something if the request was a success.
    if (res->succeeded())
    {
        FooRequest cReq = any_cast<FooRequest>(res->getRequest()->getData());
        // React however you need.
    }
}

Let the WorkQueue handle the requests


Now that you have set up your request type, it's time to actually execute requests. First, you need to set up the WorkQueue so it takes your class as handler. Assume the following is within the Foo class above.

WorkQueue* wq = Root::getSingleton().getWorkQueue();
uint16 workQueueChannel = wq->getChannel("Ogre/FooRequest");
wq->addRequestHandler(workQueueChannel, this);
wq->addResponseHandler(workQueueChannel, this);


This grabs the WorkQueue instance from the root and creates a named channel. This channel is then used to register the Foo instance (this) as request and response handler.

Add a request


Everything is now prepared to fill the WorkQueue with requests. Do this as often as you need to start the work:

const Ogre::uint16 WORKQUEUE_LOAD_REQUEST = 1; // You can use this ID to differentiate between request types if needed

FooRequest req;
req.a = 1;
req.b = 2;
wq->addRequest(workQueueChannel, WORKQUEUE_LOAD_REQUEST, Any(req));


To execute your request synchronously instead of in parallel, you can hand it to the WorkQueue like this (the fourth parameter is the retry count, the fifth forces synchronous processing):

wq->addRequest(workQueueChannel, WORKQUEUE_LOAD_REQUEST, Any(req), 0, true);

Wait for all requests


There could be a situation where you want to wait until all your requests are done. A quick and easy way is to add a counter (mRequestsBeingProcessed in the snippets below) to the class Foo: increment it just before you add a request and decrement it in handleResponse. Then add a loop right after all requests have been added, which waits until the counter reaches zero (indicating that all requests are done):

while (mRequestsBeingProcessed)
{
    OGRE_THREAD_SLEEP(0);
    wq->processResponses();
}


The sleep lets the main thread rest a tiny amount of time on every iteration. This way, one core isn't stalled at 100% in a busy-wait loop, and the thread can also do some management work (such as processing the responses).
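
As a sketch of the bookkeeping described above (mRequestsBeingProcessed is an assumed member of Foo, not part of the WorkQueue API), the counter would be updated like this:

// In Foo, an assumed counter member: size_t mRequestsBeingProcessed = 0;

// When issuing work (main thread):
++mRequestsBeingProcessed;
wq->addRequest(workQueueChannel, WORKQUEUE_LOAD_REQUEST, Any(req));

// In Foo::handleResponse (also runs on the main thread, driven by processResponses):
--mRequestsBeingProcessed;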

Clean up


When you are done with everything, it's a good idea to clean up a bit by unregistering your Foo instance from the WorkQueue:

wq->removeRequestHandler(workQueueChannel, this);
wq->removeResponseHandler(workQueueChannel, this);

Summary

You now know enough about the WorkQueue to do something useful with it. It comes with a lot of other utilities, like aborting requests. Have a look at the API documentation and explore. 😊
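
For example, a rough sketch of aborting work could look like this. It assumes you keep the RequestID that addRequest returns; abortRequest and abortRequestsByChannel are part of the WorkQueue interface:

// Keep the ID returned by addRequest if you may want to cancel that request later.
WorkQueue::RequestID id = wq->addRequest(workQueueChannel, WORKQUEUE_LOAD_REQUEST, Any(req));

// Later, if the result is no longer needed:
wq->abortRequest(id);                          // abort a single request
wq->abortRequestsByChannel(workQueueChannel);  // or everything pending on the channel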