Render Farm Management for Digital Media Pipelines


NYC’s Fashion Institute of Technology Manages Animation Renders on PipelineFX Qube!

Render Farm
With a remote server and a linked network of ten HP workstations, the department's render farm needs to render student projects predictably, from the typical 1- to 3-minute 3D and 2D animations through to interactive content. To achieve this, the department routes all assets, created in Softimage and rendered mainly with mental ray, through Qube!. Before looking for a dedicated render farm management application, the students and administrators were getting by with RenderQ or Softimage Batchserve, both free management options, but found the speed and quality of output inconsistent and the lack of technical support a problem.


Eric Kaplan, technical associate at FIT, said, “We experimented with various iterations of hardware setups. Softimage's Batchserve was in its last year of support but was a natural fit for us because we had an established relationship with Softimage for software and support. We used Softimage with Batchserve successfully for a few years until we perfected our hardware, eventually buying a complete rack of HP servers.

“Then, of course, we recognized the need for contemporary software with active support to increase the reliability of our render farm. Qube was suggested by an instructor who has his own production company. Qube! is much easier for the staff to use, and our students can now count on the output during finals week. It was an upgrade that paid off right away.”

Remote Tracking
The efficiency of the managed system means the department does not have to spend working hours sorting out crashed systems, and it allows the IT staff to keep a more conventional schedule. “We can remotely manage and track jobs and no longer need students or an administrator to stay overnight to watch the output,” Eric said. “Qube! does it for us, and if one job fails the application automatically switches it to another server.”
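The automatic switching Eric describes is handled internally by the Qube! supervisor. Purely as an illustration of the idea, the sketch below models the behavior with hypothetical names (`Job`, `reassign_failed`); it is not the Qube! API.

```python
# Illustrative sketch only: Qube!'s supervisor performs failover
# internally. The Job class and reassign_failed() are hypothetical.
from dataclasses import dataclass


@dataclass
class Job:
    name: str
    host: str
    status: str = "running"  # running | failed | done


def reassign_failed(jobs, available_hosts):
    """Move each failed job to the next available render host and
    mark it running again, mimicking automatic failover."""
    reassigned = []
    for job in jobs:
        if job.status == "failed" and available_hosts:
            job.host = available_hosts.pop(0)
            job.status = "running"
            reassigned.append(job.name)
    return reassigned


jobs = [Job("shot010", "render01"), Job("shot020", "render02", status="failed")]
moved = reassign_failed(jobs, ["render03"])
print(moved)  # ['shot020']
```

The point of the model is that no human intervention is needed overnight: a monitoring loop (or, in practice, the supervisor itself) detects the failure and requeues the work.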


Their ten-machine setup consists of eight dual quad-core Xeon HP DL160 G6 servers (eight cores per machine) that function as render slaves, one HP DL360 G7 dual-core Xeon server dedicated to the Qube! supervisor, and one HP All-In-One NAS RAID file server with 12 TB of storage. These components are connected over gigabit Ethernet on a dedicated gigabit switch.

Out of the Box Workflow
Eric explained that the basic out-of-the-box features of Qube! have proven adequate to simplify the workflow without any customization. “The administrator spends a few weeks at the start of a semester showing the students and faculty how to submit jobs and carry out basic job management. Some custom documentation is provided by the administrator, covering the specifics of Qube job submission. Then, by the last month of the semester, as everyone gets to know the system, students work very well with Qube on their own.”
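Qube! exposes a scripted submission path alongside its GUI, but version specifics vary, so the sketch below uses a hypothetical `build_job` helper to show the kind of description a student would assemble from a classroom workstation: a scene path on the shared server, a frame range, and a renderer. The field names are illustrative, not the real Qube! job schema.

```python
# Hypothetical job-description builder; the real Qube! submission
# API differs in field names and structure.
def build_job(scene_path, frame_range, renderer="mentalray", priority=100):
    """Assemble an illustrative render-job description."""
    start, end = frame_range
    return {
        "name": "render_" + scene_path.rsplit("/", 1)[-1],
        "renderer": renderer,
        "scene": scene_path,
        "frames": list(range(start, end + 1)),
        "priority": priority,
    }


# Scene lives on the shared NAS, so any render slave can open it.
job = build_job("//nas/projects/final_anim/scene.scn", (1, 5))
print(job["name"], len(job["frames"]))  # render_scene.scn 5
```

Submitting per-frame work items like this is what lets the manager spread one student project across all eight render slaves at once.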


All render jobs are run from a Softimage project folder, with the complete set of default subfolders included. The project folder is placed on a remote shared server, the HP 12TB NAS, so that both class workstations and render farm servers can reach it. The job is then managed from classroom workstations via Qube!.

Because no files are actually located on the classroom workstations, only on the remote NAS server, Qube! can, on the rare occasion a render farm slave crashes, fetch the project files from that server and continue the job on another machine. In practice, it waits for the frame in progress to finish and then moves the failed job onto the next available server.
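The wait-then-requeue behavior can be modeled in a few lines. This is an illustrative sketch under assumed names (`requeue_remaining`), not Qube! internals: completed frames are kept, and the unrendered remainder is handed to the next server that is not the crashed one.

```python
# Illustrative model of frame-level failover; Qube!'s supervisor
# implements this logic internally. All names here are hypothetical.
def requeue_remaining(frames_done, frames_total, servers, crashed):
    """Return (next_server, remaining_frames) after a slave crash.

    frames_done: set of frame numbers already rendered to the NAS.
    frames_total: total frame count of the job.
    servers: all render slaves; crashed: the one that went down.
    """
    remaining = [f for f in range(1, frames_total + 1) if f not in frames_done]
    candidates = [s for s in servers if s != crashed]
    if not candidates or not remaining:
        return None, remaining
    return candidates[0], remaining


# Frames 1-2 finished before render01 crashed; render02 picks up 3-5.
server, todo = requeue_remaining({1, 2}, 5, ["render01", "render02"], "render01")
print(server, todo)  # render02 [3, 4, 5]
```

Because the finished frames already sit on the shared NAS, nothing is re-rendered; only the outstanding frames move.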