Cutting Edge Monitors Dual-Studio Render Farm with PipelineFX
Cutting Edge in Australia runs two large VFX studios, one in Sydney and the other in Brisbane. To take on bigger visual effects projects, the company wanted to combine the talents of the two teams, but because the facilities’ render farms were not communicating, it was missing out on the efficiency a joint operation should deliver.
To create a bridge between the farms, Cutting Edge decided to invest in render farm management software and chose PipelineFX’s Qube!, which they have tailored to match the requirements of their bigger VFX projects. Cutting Edge’s teams produce VFX shots for movies such as ‘Predestination’ and ‘The Age of Adaline’, for commercials, and for TV series including ‘Powers’, PlayStation Network’s first scripted show, based on the Marvel graphic novels.
Plans to link the farms in Brisbane and Sydney started with a test. The idea was to run render farm management applications against each other in a multi-site battle, utilizing both live and offsite workers. This became the proving ground for key tasks including global log file tracking, scripting, platform compatibility and project mirroring.
After tests totalling 10,000 jobs, Qube! was declared the winner, largely because of how well it could handle machine log files.
Visibility and Control
“With some render management tools, reading log files for machines that aren't local was problematic, but reading log files is an incredibly important part of seeing what's going on and troubleshooting,” said Rangi Sutton, VFX Supervisor at Cutting Edge. “Our TDs and support crew can now dial up logs for a given job through GUIs or the command line tools supplied on any workstation. It’s the only way to see how everything's going across the sites.”
Two PipelineFX specialists, John Burk and Scott Morrissey, explained some of the challenges Cutting Edge was experiencing, and how they used Qube! to overcome them. For example, reading log files from multiple farms – a ‘local’ one in the same city and one in a remote city – could be problematic when trying to monitor jobs on the remote farm, because the client is expected to have direct access to the log files.
John Burk said, “With Qube, if the client can't access the log files directly, the farm supervisor will automatically send the log file data to that client. In the latest versions of Qube, the amount of log data sent can optionally be capped as well, so that sending gigabytes of log data across an internet connection isn’t necessary. If a log exceeds a certain size, the middle portion of the log can be snipped out, still leaving the head and tail available for troubleshooting.”
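The head-and-tail snipping John describes can be illustrated with a short sketch. This is not PipelineFX’s actual implementation – the function name and size cap are assumptions for illustration – but it shows the idea of capping transferred log data while preserving both ends for troubleshooting:

```python
def snip_log(log_text: str, max_bytes: int = 4096) -> str:
    """Trim an oversized log, keeping its head and tail for troubleshooting.

    Illustrative only: a real supervisor would do this on the raw byte
    stream before sending it over the wire to the remote client.
    """
    data = log_text.encode("utf-8")
    if len(data) <= max_bytes:
        return log_text  # small enough to send whole
    half = max_bytes // 2
    head = data[:half].decode("utf-8", errors="ignore")
    tail = data[-half:].decode("utf-8", errors="ignore")
    # Snip out the middle, leaving a marker so readers know data was cut
    return head + "\n... [log snipped] ...\n" + tail
```

A gigabyte-scale render log passed through a function like this arrives at the remote workstation as a few kilobytes, yet still shows the startup environment at the head and the error or completion state at the tail.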
One Boss, Many Resources
A primary goal for the farms was being able to run all render requests from a single dispatcher, which in Qube! is the equivalent of a Supervisor. “Having a single dispatcher or Supervisor to control your farm is much more convenient and can save lots of time,” said Scott Morrissey. “To manage all of your available resources, you only need to look in one place. Managing multiple dispatchers, on the other hand, can require multiple logins to different machines, or looking at data in different formats. The opportunity for mistakes and delays increases.”
By running off a single dispatcher, artists can submit jobs onsite or remotely through a VPN and gain access to the power of both farms. With this single point of control, Cutting Edge can now count the software licenses for applications like NUKE across all users in the group. Previously, license allocations had to be predefined between Sydney and Brisbane. During the day the farms consist of dedicated slave units, but after 7pm all idle workstations join the pool, generating a 20 to 30 per cent boost in rendering power every night.
The ability to count all software licenses brings a couple of benefits. “One is maximizing license usage while avoiding license starvation,” John said. “Since the Qube! supervisor can count how many of a particular license are in use across the entire farm for all users, it can avoid dispatching too many jobs that require that particular license. The remaining jobs will just remain pending in queue until either those earlier jobs finish or more licenses become available.
“The next benefit is to scheduling. You can easily see if job throughput, or lack of it, is a result of a shortage of licenses. This may trigger a purchase to acquire more licenses because your facility has grown enough to do so. Or, if you are not quite at that stage, then you can look at the peak job scheduling times and either adjust when people submit jobs to reduce the peak load or reset expectations for users so they know jobs may take longer.
“Sometimes license usage can lead to offline scheduling changes where users of certain software are strategically placed within the workflow to create a predictable cadence to their job submission and to also place a maximum on the number of jobs submitted at once. From a finance standpoint, it’s also the only way to see if you have paid for the correct number of licenses for your actual usage.”
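The license-aware dispatching John describes can be sketched in a few lines. This is a simplified model, not Qube!’s scheduler – the job dictionaries and pool format are assumptions – but it captures the behaviour: jobs are dispatched in submission order, and any job whose required license type has no free seats stays pending until a seat opens up:

```python
def dispatch(pending_jobs, license_pools):
    """One scheduling pass: start jobs while licenses remain, hold the rest.

    pending_jobs  -- list of dicts like {"name": "shot010", "license": "nuke"}
    license_pools -- total seats per license type, e.g. {"nuke": 3}
    Returns (running, still_pending).
    """
    in_use = {lic: 0 for lic in license_pools}
    running, still_pending = [], []
    for job in pending_jobs:
        lic = job["license"]
        if lic in in_use and in_use[lic] < license_pools[lic]:
            in_use[lic] += 1          # claim a farm-wide seat
            running.append(job)
        else:
            still_pending.append(job)  # license-starved: wait in queue
    return running, still_pending
```

Run repeatedly as jobs finish and seats free up, a loop like this maximises license usage without over-committing, which is the starvation-avoidance behaviour described above.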
Job Type Customisation
Other functionality that drew the team to Qube! was its native compatibility with OSX and Linux, and the Unix-like design. Rangi said, “Although Qube! has a lot of built-in job types tuned for different software that would help smaller studios get started, we already have in-house tools to manage our environment for our major applications - Maya, Cinema 4D, NUKE, Houdini - so that we can control the specifics of the software used per project and even per shot.
“In this case, for example, we initialise a session at the workstation terminal in exactly the same way that we initialise a job on the render farm. This way, we don’t run into new sets of problems for farm-rendered jobs, such as user permissions or application versions. Qube!’s job types don’t get in our way when we choose to ignore them.”
The built-in job types, or application pipelines, include Maya, Softimage, 3ds Max and Nuke. Each pipeline contains submission UIs and tools that connect Qube! to the application, plus a backend execution module used to enhance processing efficiency. These pipelines reduce the time needed to integrate internal production pipelines with Qube!, and since the job types are mostly constructed as open-architecture scripts, they can be customised, allowing a studio to tailor Qube! to its individual pipelines or workflows. In short, the tool can be adapted to the studio; the studio doesn’t have to adapt to the tool.
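The shape of such a backend execution module can be sketched as follows. This is a hypothetical illustration, not a real Qube! job type – the function name and command template are assumptions – but it shows the open-architecture idea: a script receives a list of frames and runs the application command for each one, so a studio can swap in its own environment setup, logging, or retries:

```python
import subprocess

def run_agenda(frames, command_template):
    """Minimal backend execution loop for a per-frame job agenda.

    frames           -- frame numbers assigned to this worker
    command_template -- shell command with a {frame} placeholder,
                        e.g. "render -frame {frame} scene.ma" (hypothetical)
    Returns a dict mapping frame number to the command's exit code.
    """
    results = {}
    for frame in frames:
        cmd = command_template.format(frame=frame)
        # A production job type would also stream stdout/stderr to the
        # supervisor so the logs are readable across sites.
        proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        results[frame] = proc.returncode
    return results
```

Because the loop is plain script, a studio like Cutting Edge can ignore it entirely and initialise the render environment its own way, which matches Rangi’s point above.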
Running simulations in Houdini presented another opportunity to save time through per-frame dependencies. In the past, the team’s system required each stage to complete in full before the render could move on. With Qube! this situation changed.
“Now we can send stages to the Supervisor and tell Qube! how to perform the operations - pre-processing, simulations, post-processing, followed by rendering,” said Rangi. “We can configure the granularity of these dependencies. Simulations will require all the pre-processed geometry to be available before commencing, whereas a Mantra render need only wait for all the geometry for its given frame.”
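The two levels of granularity Rangi describes can be made concrete with a small dependency-graph sketch. The task names are invented for illustration; the point is that the simulation blocks on every pre-processed frame, while each render frame blocks only on its own frame’s geometry:

```python
def build_dependencies(frames):
    """Sketch of coarse vs fine dependency granularity.

    Returns a dict mapping each task to the tasks it must wait for.
    """
    deps = {}
    # Coarse (all-to-one): the simulation needs every pre-processed frame
    deps["sim"] = [f"preprocess.{f}" for f in frames]
    # Fine (frame-to-frame): each Mantra render waits only on its own geometry
    for f in frames:
        deps[f"render.{f}"] = [f"geometry.{f}"]
    return deps
```

With per-frame links, frame 1 can start rendering as soon as its own geometry lands, instead of idling until the whole geometry stage finishes, which is where the time saving comes from.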
In the near future, Cutting Edge will also be able to track farm data for every project, which wasn’t possible for them before. “We had access to resource tracking – how long a crew has been working on individual shots, different storage stats – but never a stat comparing CPU cycles against project size. With 50,000 render jobs over the last six months, it’s the aggregate data that Qube! is recording in its data warehouse that will help us make a lot of important strategic decisions. Now we can apply cost-accounting to the render usage, as well as make software license decisions based on the actual usage over time.”
For more stories like this, go to: http://www.digitalmedia-world.com