Hey gang, Jeff here. Before we get started on this week’s post, I want to hop up on my soapbox and say a few words. The industry needs to be vocal. TDs and engineers need to learn from each other’s mistakes, and we need to share ideas. Our goal here is to continue talking to as many people as we can, and we think everyone else should be doing the same. We got some great comments and questions, public and private, when we began this blog series, and we would like to hear from more people going forward. Please throw a comment in at the bottom of the post, or email us if you would prefer to ask questions privately.
Welcome back for part three of our series dedicated to building a simple pipeline using Toolkit.
Last week, Josh gave an overview of how he set up some aspects of our project’s configuration. He also shared some ideas about how configurations might be organized on disk in the future, and how we might ease the process of altering and maintaining them throughout the lifecycle of a project. To my eyes, learning how configs work is the bulk of the learning curve associated with Toolkit. As such, it’s one of the most important aspects of the entire platform, and a place where small gains in clarity and ease of use could have a significant impact on the day-to-day lives of all TDs tasked with supporting a Toolkit-driven pipeline. We would very much like to hear what your take is, so please get in touch if you have complaints, ideas, or think that we should pursue something Josh presented.
Publishing and Sharing your Work
Publishing Alembic From Maya
As Josh mentioned last week, here is our config repository. It’s still very much a work in progress, but as we go we will continue to refine it.
The foundation for a publishing system is available right out of the box. This is a good thing. Passing a Maya scene file from one step of the pipeline to the next is as easy as opening up the Publish app and hitting a button. That’s really all I have to say about the basics of publishing, because going any further moves us into territory that requires additional development and a lot of thought.
One of the first challenges that a Pipeline TD is faced with is the prospect of sending data from one DCC application to another. This is the case in our pipeline, as well, because we’re passing data from Maya to Nuke. In most studio environments, that list of DCCs is going to be much longer and the connections between them very complex. This raises an immediate problem concerning the out-of-the-box publishing system, which is that you’re publishing native scene formats from each DCC. Writing out a Maya ascii file is perfectly fine when passing data from one Maya scene to another, but what happens when a camera and/or geometry is needed in Nuke?
In comes Alembic, which most of you will be familiar with already. For those that are not, it is a DCC-independent geometry interchange framework. That’s a fancy way of saying that we can export objects from a scene file in a format that can be read by other applications that also support Alembic. The good news is that the Toolkit team has some documentation on how to get Alembic secondary publishes set up for Maya, and the information there can be easily applied to other DCC applications. My preference would be to have everything needed to support Alembic in a Toolkit-driven pipeline already in place from a code standpoint, such that only configuration changes would be required to make use of it. Right now there is a bit of tinkering involved in getting it ready for use, and I think we can do better than that.
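To make the Alembic side of this concrete, here is a rough sketch of how an export job might be assembled for Maya’s AbcExport command. The helper, its arguments, and the paths are all placeholders of my own; in a real hook the frame range and output path would come from the scene settings and the Toolkit template system.

```python
# Sketch: building a job string for Maya's AbcExport command.
# The node name, frame range, and path below are placeholders; a real
# secondary publish hook would resolve these from the scene and templates.

def build_abc_job(root_node, start_frame, end_frame, output_path):
    """Return a job argument string for Maya's AbcExport command."""
    return (
        "-frameRange {start} {end} "
        "-uvWrite -worldSpace "
        "-root {root} "
        "-file {path}"
    ).format(start=start_frame, end=end_frame, root=root_node, path=output_path)

job = build_abc_job("|Table_1|Table", 1001, 1100, "/shows/demo/publish/Table.abc")
# Inside Maya this would be run as: cmds.AbcExport(j=job)
print(job)
```

The point of keeping the job construction in a small function like this is that the same logic can serve every item the scan hook finds, with only the root node and output path varying per item.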
I’ve always been of the opinion that granularity in outputs is important. I want a camera to be published separate from a piece of geometry, and I want individual pieces of geometry instead of a single output containing an entire scene’s worth. Josh and I set out immediately to get our pipeline working this way, and on the geometry front we ran into some issues with the process described in the documentation.
There is logic in the scan_scene hook to separate out each mesh group that it finds in the scene, but all of that work is squandered by the secondary_publish hook, as it’s set up to write out the entire scene’s worth of geometry and disregards the work that scan_scene has done. In addition, the way that the template is set up for Alembic output, as described in the docs, results in all of the geometry in the scene being written to the same output file path for each item that scan_scene specifies. Not only did we not get the granularity we wanted, but we ended up with an inefficient publish process at the same time.
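The path collision is easiest to see in miniature. Below is a toy version of the per-item path resolution, with a plain format string standing in for a Toolkit template; the template fields and item names are hypothetical. The fix is simply to make sure a per-item field appears in the template, so each scanned mesh group resolves to its own file.

```python
# Sketch: one Alembic output path per scanned mesh group. The format
# string stands in for a Toolkit template; its fields are hypothetical.
# Without the {name} field, every item would collapse onto one path,
# which is exactly the problem described above.

TEMPLATE = "/shows/{show}/{shot}/publish/alembic/{name}.v{version:03d}.abc"

def paths_for_items(items, show, shot, version):
    """Map each scanned mesh group to its own publish path."""
    paths = {}
    for item in items:
        paths[item] = TEMPLATE.format(show=show, shot=shot, name=item, version=version)
    return paths

print(paths_for_items(["Table_geo", "Chair_geo"], "demo", "sh010", 3))
```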
The work required to sort this issue out and publish per-mesh abc files is relatively small, but not necessarily obvious. We are going to release a fix for this soon so everyone has access to it, but in the meantime you can find what I’ve done to get this up and running by looking at our versions of the scan_scene and secondary_publish hooks in our config repository.
For cameras, Josh was quickly able to get individual cameras published as Maya ascii files. This was a good first implementation, as we wanted to be able to publish a camera from our Layout pipeline step for use throughout our pipeline. The implementation for that secondary publish can be found in the same two hooks listed above concerning geometry.
The next step for us will be to leave Maya ascii publishing for cameras behind and write that data out using Alembic. This will allow import of our cameras into Nuke as well as Maya, which is a necessity in nearly all production pipelines around the world.
Josh put the necessary logic in place to publish out Maya shader networks as secondary publishes from surfacing. He also put together a quick and dirty system that allows the shaders to be reconnected to the Alembic geometry caches when referenced into lighting. By his own admission, the setup is fragile and probably error prone, but it’s good enough as a starting point for our simple pipeline. If you’d like to see the code for this portion of our pipeline, you can check out this commit. Strictly for curiosity’s sake, we’d love to hear from someone that has implemented a surfacing pipeline in Maya to learn how we might handle shaders more robustly.
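To give a flavor of what a “quick and dirty” reconnection can look like, here is a simplified, name-matching illustration. This is not Josh’s actual implementation (that lives in the commit linked above); every function and name here is a stand-in, and the real version would use Maya calls like cmds.sets and cmds.listConnections instead of plain dictionaries.

```python
# Simplified illustration of name-based shader reassignment. At surfacing
# publish time we snapshot which shader each mesh used; at lighting load
# time we look the mesh short names up again on the Alembic cache nodes.
# Matching purely by name is fragile, which matches the caveat above.

def record_assignments(mesh_to_shader):
    """Snapshot of mesh-name -> shading-group-name taken at publish time."""
    return dict(mesh_to_shader)

def reassign(cache_meshes, recorded):
    """Pair each cached mesh with its recorded shader by short name."""
    result = {}
    for mesh in cache_meshes:
        short_name = mesh.rsplit("|", 1)[-1]  # strip any DAG path prefix
        if short_name in recorded:
            # In Maya: cmds.sets(mesh, edit=True, forceElement=recorded[short_name])
            result[mesh] = recorded[short_name]
    return result

recorded = record_assignments({"Table": "wood_SG", "Chair": "metal_SG"})
print(reassign(["|cache|Table", "|cache|Chair"], recorded))
```

The obvious failure mode is a rename between surfacing and lighting, which is one reason a more robust scheme would be worth hearing about.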
References are a very powerful tool for any Pipeline TD and can be leveraged for the good of everyone involved. They can also become quite a mess. I am not a fan of deeply-nested references, where the file you are referencing itself references another file, and so on down the line to the very beginning steps of the pipeline. What I want is a completely self-contained set of publish files, or as much so as is possible, coming out the back side of each step of the pipeline. There are many reasons why this is a good idea, some of which are outlined below.
1. In a multi-location workflow, data will be transferred from one physical studio to another. When those files are opened in the remote location, external file references that have not been transferred will not be available.
2. In a situation where cleanup is required to free up disk space, if a file is thought to be out of use and is removed, any other files referencing that file will no longer work properly. This is particularly the case in pipelines that track subscriptions to published files to know what is in use and what is not, and where that data is used to drive automated cleanup mechanisms.
3. I’ve personally experienced problems with Maya reference edits when dealing with deeply-nested references. I’m told that this is much better now than it used to be, but I hold an unhealthy number of grudges.
Using preserveReferences=False and exportAll, we can write to a temp file with flattened references.
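A minimal sketch of that flattened export follows. The actual cmds.file call only works inside Maya, so the arguments are factored into a plain dictionary here; the temp path is a placeholder.

```python
# Sketch: export arguments that bake referenced contents into the output
# file rather than preserving live reference nodes. Only meaningful when
# passed to cmds.file inside Maya; the path below is a placeholder.

def flatten_export_kwargs():
    """Arguments for cmds.file() that write a self-contained scene."""
    return {
        "exportAll": True,            # export the whole scene...
        "preserveReferences": False,  # ...with reference contents baked in
        "type": "mayaAscii",
        "force": True,
    }

kwargs = flatten_export_kwargs()
# Inside Maya: cmds.file("/tmp/flattened_publish.ma", **kwargs)
print(kwargs)
```

The result is a publish file with no external file dependencies, which is exactly what the multi-location and cleanup arguments above call for.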
Like references, Maya’s namespaces are a very powerful tool that a Pipeline TD can make good use of. Also like references, they can become a giant mess very quickly. I’m of the opinion that Toolkit’s out-of-the-box behavior relating to Maya namespaces is incorrect. The way it is set up often results in an ever-deepening hierarchy of namespaces as data flows down the pipe. This means that where an object is, or should be, is a moving target. I dislike that. I want structure, and I want to know what things are called and where they’re going to live within a scene file.
As a starting point, I’ve stripped all namespacing out of the pipeline. I want a clean slate from which to build up the use of namespaces in the situations where I find them appropriate. One case where we do plan to make use of them is for asset “iterations” or “instances” or “whatever-you-call-multiple-copies-of-something-in-a-single-shot.” Seriously, though, what word does everyone use for this? At R&H they were called “instances,” which is a problem because that term has other meanings in computer graphics. At Blur they’re called “iterations,” which sits better with me, but I’m sure that isn’t universally accepted around the industry.
If I have two tables in a shot, I need to be able to import two copies of our Table asset into the scene, and they need to be able to coexist peacefully. Maya’s namespaces are ideally suited to making this possible, as they let us maintain object names without collisions. In this situation the namespaces add meaningful structure and allow us to bypass the need to rename objects. In the end we would have a structure like the example below: two namespaces, Table_1 and Table_2, each of which contains a mesh named Table.
Two tables, coexisting peacefully in their own namespaces.
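Generating those namespaces is trivial, but it is worth showing because the convention is the whole point: predictable names mean you always know where things live. The helper below is a hypothetical sketch; in Maya, each copy would be brought in with cmds.file(asset_path, i=True, namespace=ns).

```python
# Sketch: importing N copies of an asset, each under its own namespace,
# so the mesh name "Table" never collides. In Maya, each iteration of the
# loop would call cmds.file(asset_path, i=True, namespace=ns).

def namespaces_for_copies(asset_name, count):
    """Generate Table_1, Table_2, ... style namespaces for multiple copies."""
    return ["{0}_{1}".format(asset_name, i) for i in range(1, count + 1)]

for ns in namespaces_for_copies("Table", 2):
    # Maya separates namespace and node name with a colon.
    print("{0}:Table".format(ns))  # e.g. "Table_1:Table"
```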
What I’ve given you here are just the basics of getting data flowing down the pipeline. This is a good starting point, but there are some more complex concepts that we are going to begin exploring in the next few weeks. For some of these we will have rough implementations in place; others will be outlines of ideas that I hope we can use as discussion points. Josh and I have many ideas that we want to touch on, and we are now getting to the point in this series where we feel we have enough of a foundation laid that we can start to get our hands dirty.
Topics that we will be discussing in the coming weeks include the following:
1. Grouping published files. We want to be able to group together multiple published files into a single “group” publish. This will mean that PublishedFile entities in Shotgun will have the ability to carry links to a list of children. I’ve partially implemented this, but still have work to do in the coming week to make it presentable. This also leads to some challenges that I’ve had with query fields in Shotgun that we can discuss at the same time, as well as difficulties related to the publish app’s hook implementation.
2. Deprecation of published files. This is something that the Toolkit team’s intern, Jesse, already has up and running. The idea is that a user can mark a published file as no longer worthy of use. This would mean that users already using that file can continue to do so, but new users should steer clear. Jesse will be giving us an overview of how he’s implemented this feature next week.
3. Tracking of disk usage. We want to track more data in Shotgun about the published files themselves, and plan to cache the file size (and other bits of data) at publish time. This will allow us to build some views into the pipeline that are useful for support teams that are responsible for keeping the disks clean.
4. Subscription-based workflows. This is not something that we will have implemented before we talk about it. It is a major component of systems that Josh and I have developed and supported in years past, and it opens up a whole new world of possibilities if it’s implemented properly. We’ll discuss those implementation details and outline some of the benefits.
That’s it for this week. I’m hoping that the topics in the list above sound interesting. These first few weeks have laid the foundation for a lot of really juicy topics going forward. What we really want to do is start a discussion. It’s great that we can convey our experiences building this pipeline, but the goal beyond that is to try to come to some common ground with everyone out there. Maybe what we do will act as inspiration for some of Shotgun’s customers, but it’s equally important for us to be inspired by your experiences, so that we can all learn from each other and build some really powerful tools for artists.
About Jeff & Josh
Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.
Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.
Jeff & Josh joined the Toolkit team in August of 2015.