Irish VFX + Animation Summit 2015
We're excited to be collaborating with the Irish VFX + Animation Summit this weekend, November 21-22nd. The Summit encourages engagement between studios and future artists, and focuses on helping students and prospective artists learn how best to present their work and find the right VFX and animation opportunities.

We recently had the opportunity to chat with co-founder of the Irish VFX + Animation Summit, Eoghan Cunneen, to learn more about the Summit:

Tell us about the Irish VFX+ Animation Summit.

The Irish VFX + Animation Summit is my attempt, along with Northern Irish co-founder Laura Livingstone, to promote the VFX and animation industry back home in Ireland. Laura and I met in San Francisco, where she worked a number of years ago, and we'd both had similar experiences in school and university: there was so little information available to us about how to get into the industry. We assumed that you needed to be in LA, which isn't the case at all. A year after we first met, we figured we had enough contacts we could beg to come and speak at our first event, which we held in Dublin in 2013. We're about to host our third event.

What is the Shotgun Showreel Clinic all about?

A big part of what we're trying to do is expose students, and anyone who aspires to work in the industry, to its realities. There are articles online on how to build a good showreel, but you can't beat one-on-one advice from someone who is responsible for identifying great talent in the industry, or who has led teams on a production. We set up the Shotgun Showreel Clinic to do just that. By matching someone who wants their reel reviewed with someone who can offer really good, constructive feedback, we want to do our part in ensuring that those applying for roles know what's expected of them in a VFX or animation facility. Shotgun gives us that ability. We're asking students to submit their reels so we can match them with a professional who has expertise in that area. They'll get direct feedback, in a way that's similar to how we view dailies at work, so they can create the best and most relevant showreel possible.

What is your favorite thing about the Irish VFX + Animation Summit?
The chance meetings you have with people. Last year an Oscar-winning VFX Supervisor casually mentioned to one of our scheduled speakers on Facebook that he was interested in attending the event, as he hadn't been to Dublin before. We were really excited and asked if he'd like to speak at it. He ended up giving a really touching introduction at the event, which everyone loved. He went on to win his second Oscar that February for his contributions to Christopher Nolan's Interstellar. Another chance meeting from last year: a young animator from Japan happened to walk past the venue while the Summit was on. She showed a number of animators the really amazing animation showreel she had on her phone (it was so good), and has since worked as an animator at a number of Irish studios.

What led you to VFX/animation?

My sisters gave me a pack of crayons as a present when I was seven or eight. I didn't like them at first, but over time I began to love drawing and illustration. That led to an interest in animation and film. The Lord of the Rings trilogy finally cemented what I wanted to do. The final film came out around the time I was doing my Leaving Cert (the school examinations taken before university in Ireland), which helped my decision. I watched the making-of material for those films more times than the films themselves.

The Irish VFX + Animation Summit is sponsored by Screen Training Ireland and The Animation Skillnet.
Two Guys and a Toolkit - Week 9: Status and Cleanup

New to Shotgun or Toolkit? Check out Our Story, learn about Our Product, or get a Quick Toolkit Overview. If you have specific questions, please visit our Support Page!

Status and Cleanup

Hi everyone! Welcome back for part nine of our series dedicated to building a simple pipeline using Toolkit. Here are links to the previous posts, just in case you missed anything:

        Introduction, Planning, & Toolkit Setup
        Publishing from Maya to Nuke
        Dataflow and Workflow
        Multi-Location Workflows

This week’s post expands on a few topics that we’ve already mentioned, and introduces some new ones. Our primary goal is to outline how the idea of tracking the status of published files can be combined with other concepts we’ve been exploring to aid in data flow and cleanup of old or unused files.

Published File Status

Like any entity in Shotgun, PublishedFile entities have a status field. While Toolkit hasn't made heavy use of this field in the past, it carries useful information that can be used to build additional functionality into the pipeline.

Deprecated Publishes:

One of the ways that we’ve used the status field in our simple pipeline is to define a “deprecated” status for published files that should no longer be used. Jesse Emond, the intern our team has been lucky to host, implemented this feature. He did this by adding a new status to the PublishedFile entity in Shotgun. Once the new status is available, a user with sufficient permissions can set the status of any published file to deprecated.
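
As a concrete illustration, here is a minimal sketch of what flipping that status might look like with the Shotgun Python API. It assumes the status field on PublishedFile is the standard sg_status_list and that a "dprct" code has been added to it in the site preferences; both details depend on how your site is configured.

```python
import shotgun_api3

# Connect as a script user (URL and credentials are placeholders).
sg = shotgun_api3.Shotgun(
    "https://yourstudio.shotgunstudio.com",
    script_name="pipeline_tools",
    api_key="<script key>",
)

def deprecate_publish(published_file_id):
    """Mark a single PublishedFile as deprecated.

    Assumes a 'dprct' status code has been added to the PublishedFile
    status list field (sg_status_list) in the site preferences.
    """
    return sg.update(
        "PublishedFile",
        published_file_id,
        {"sg_status_list": "dprct"},
    )
```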

The way that Jesse made use of that in our pipeline was to modify the tk-multi-loader2 app to filter deprecated published files out of its list of available files to import/reference. This way, when a published file is deprecated in Shotgun, users already making use of the file can continue to do so, but no new references to it will be created. Jesse's forked tk-multi-loader2 repository can be found here.
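
Inside a forked loader, the filtering itself can be as simple as dropping anything that carries the deprecated code before the publishes reach the UI. The exact hook interface varies between app versions, so this helper just assumes the publish records are plain Shotgun dicts queried with sg_status_list:

```python
def strip_deprecated(publishes):
    """Return only the publish records that have not been deprecated.

    `publishes` is assumed to be a list of PublishedFile dicts that
    include the sg_status_list field; 'dprct' matches the code used above.
    """
    return [p for p in publishes if p.get("sg_status_list") != "dprct"]
```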

Official Publishes:

Another concept that is handled well by a status is to mark a specific version of a published file as the “official” version. This acts as an indication to users that, unless they have a specific reason to use a different version of the published file, they should be referencing that version. In addition to being a visual indicator for users, it could also be used in apps like tk-multi-loader2 to pre-select the official version of each published file that is presented to the user. This would ensure that most users are making use of what is generally considered to be the “correct” version of a published file. Similarly, the tk-multi-breakdown app could be made to present users with the “official” version of each published file instead of assuming that the latest is always what should be used.

There is a bit of a problem here with how Toolkit associates different versions of the same published file. Out of the box, PublishedFile entities each stand alone, and their association with other versions of the same file is handled at the code level. The nature of the "official" status is that it's exclusive to a single version at any given time, so it needs to be tracked at a level above the individual PublishedFile entities. One way to handle that would be to associate each PublishedFile with a parent "BasePublishedFile" entity that represents ALL versions of the file. This would provide a place to store version-independent information as well as data that's exclusive to a single version of a published file.
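
Until something like a BasePublishedFile exists, that exclusivity can be enforced in code using the same name/entity/type association the loader relies on. A rough sketch, assuming a custom sg_official checkbox field on PublishedFile:

```python
def set_official(sg, published_file_id):
    """Make one PublishedFile the 'official' version of its family.

    Assumes a custom checkbox field sg_official on PublishedFile, and that
    a 'family' is identified by the same linked entity, name, and published
    file type -- the same code-level association described above.
    """
    pf = sg.find_one(
        "PublishedFile",
        [["id", "is", published_file_id]],
        ["entity", "name", "published_file_type"],
    )

    # Clear the flag on any sibling version that currently holds it.
    siblings = sg.find(
        "PublishedFile",
        [
            ["entity", "is", pf["entity"]],
            ["name", "is", pf["name"]],
            ["published_file_type", "is", pf["published_file_type"]],
            ["sg_official", "is", True],
        ],
        ["id"],
    )
    batch = [
        {"request_type": "update", "entity_type": "PublishedFile",
         "entity_id": s["id"], "data": {"sg_official": False}}
        for s in siblings
    ]
    batch.append(
        {"request_type": "update", "entity_type": "PublishedFile",
         "entity_id": published_file_id, "data": {"sg_official": True}}
    )
    sg.batch(batch)
```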

Location-Specific Status

We alluded to this briefly in a previous post about multi-location workflows, but it is important to discuss it again, as it plays a big role in a multi-location pipeline and how data on disk is cleaned up when it’s no longer needed.

As discussed in the previous post, the idea is to maintain a PublishedFile status per location.
These statuses are fundamentally different than those tracked on the PublishedFile entities themselves, which are used to track the global status of the published file, such as whether it has been deprecated. For location-specific statuses, we have a different set of requirements to sort out, as well as different statuses to track. Below are a few examples of location-specific statuses.

Online Publishes:

The “online” status indicates that the published file exists on disk at that location and is available to be read or referenced. In a subscription-based workflow, the online status would indicate that the published file is ready to be subscribed to and imported/referenced without the need to transfer the data from another location.

Deleted Publishes:

The "deleted" status indicates that the published file was online at that location, but has since been deleted there. The fact that a published file has been deleted in one location does not mean it has been deleted in any other location, so it might still be possible to find another location that does have the file online and transfer it back to the location where it was previously deleted, if need be.

Marked for Deletion:

The “marked for deletion” status, or “MFD” for short, is an indication that the file should be deleted in that location as soon as it is safe to do so. What’s considered “safe” to delete would be dictated by that file’s active subscriptions, which we discussed at length in last week’s post. Other benefits of marking a file for deletion rather than immediately deleting the data are speed and better balancing of file-server load. Because marking a file for deletion involves very little immediate processing, a large number of published files can be marked very quickly, which frees up the artist or TD performing the cleanup to move on to other tasks. It also allows the system to delete the data at some ideal time, and at a rate that’s healthy for the file servers.


Transferring Publishes:

The "transferring" status indicates that the published file is not yet online at that location, but is in the process of being transferred there. This helps resolve race conditions when multiple users or processes attempt, in quick succession, to use a published file that isn't yet online. Rather than queuing up the transfer of the file multiple times, apps can understand that they only need to wait for the existing transfer to complete before continuing their work.

Multiple Concurrent Statuses:

The types of statuses associated with locations will often need to coexist with one another. Using the statuses listed above as examples, it's entirely reasonable for a published file to be considered both "online" and "marked for deletion." Given how easy it is to add custom fields to entities in Shotgun, a set of checkbox fields, each of which can be checked on or off independently of the others, is a great way to go. If there is a subset of statuses that are mutually exclusive, then a list field offering a choice between them could be used instead.
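
As a sketch of the checkbox approach, marking a publish for deletion in one location touches only one field, so "online" and "marked for deletion" can coexist happily. The entity and field names here (a PublishedFileStatus-style custom entity linked to a Location custom entity) are assumptions carried over from the previous post:

```python
def mark_for_deletion(sg, published_file_id, location_id):
    """Flag a publish for deferred deletion in one location only.

    Assumes per-location statuses live on a custom entity (CustomEntity02,
    renamed 'PublishedFileStatus') with checkbox fields like sg_online and
    sg_marked_for_deletion, linked to a PublishedFile and a Location
    (CustomEntity01). All names are illustrative.
    """
    status = sg.find_one(
        "CustomEntity02",
        [
            ["sg_published_file", "is",
             {"type": "PublishedFile", "id": published_file_id}],
            ["sg_location", "is",
             {"type": "CustomEntity01", "id": location_id}],
        ],
        ["sg_online", "sg_marked_for_deletion"],
    )
    if status:
        # Only the MFD checkbox changes; "online" stays checked until the
        # deferred deletion actually removes the data from disk.
        sg.update("CustomEntity02", status["id"],
                  {"sg_marked_for_deletion": True})
```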

Cleaning Up Published Files

Every studio has dealt with a lack of available disk space at one time or another, and many run on the ragged edge of running out of space on a daily basis. This means it is very important to have a robust system in place for safely removing files when they are no longer in use. To do this, a lot of data from the pipeline about file usage is required: we need to know both what is on disk and who is using it. Toolkit provides the former, but the latter requires tracking more than what's provided out of the box, as Josh wrote about last week when he outlined the basics of a subscription-based pipeline.

First off, cleanup should be handled per location. Removing a file from disk in Los Angeles does not mean that the same file should also be removed in Vancouver. It is possible and entirely reasonable to know that a file isn’t being used in one location, remove the file in that location, and leave it online elsewhere because it’s still being actively subscribed to in those other locations.

As for how published files are deleted, we would provide two different approaches: on-demand deletion, and deferred deletion.

On-Demand Deletion:

This is the most straightforward of the two approaches, and involves a user telling the system that they want to delete a specific published file. There are a few questions that need to be asked of the system before it’s known whether the file CAN be deleted, and once the data has been removed from disk there is additional processing that needs to occur.
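
A sketch of that on-demand routine, with the safety questions delegated to the "Rules" layer described later in this post, and the same assumed per-location status entity as above:

```python
import os

def delete_publish_here(sg, rules, published_file_id, location_id):
    """Delete one publish's data at one location, if the rules allow it.

    `rules` is the hypothetical Rules API sketched later in the post;
    entity and field names follow the earlier (assumed) examples.
    """
    if not rules.may_delete(published_file_id, location_id):
        return False

    pf = sg.find_one(
        "PublishedFile", [["id", "is", published_file_id]], ["path"]
    )
    local_path = (pf.get("path") or {}).get("local_path")
    if local_path and os.path.exists(local_path):
        os.remove(local_path)

    # Post-delete bookkeeping: flip the per-location status checkboxes.
    status = sg.find_one(
        "CustomEntity02",
        [
            ["sg_published_file", "is",
             {"type": "PublishedFile", "id": published_file_id}],
            ["sg_location", "is",
             {"type": "CustomEntity01", "id": location_id}],
        ],
    )
    if status:
        sg.update(
            "CustomEntity02",
            status["id"],
            {"sg_online": False,
             "sg_marked_for_deletion": False,
             "sg_deleted": True},
        )
    return True
```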

Deferred Deletion:

This is where the “marked for deletion” status discussed earlier comes into play. The process of marking a file for deletion will also need to be checked to make sure it’s allowed, but those rules are much less stringent than those exercised for on-demand deletion.

Once a published file is marked for deletion, some process needs to periodically check whether the file can be deleted and, if so, perform an on-demand deletion as described above. This process can be a script run as a cron job as often as is appropriate for your workflow. The script would ask the database for a list of PublishedFile records that are marked for deletion in that location and pass that list to the on-demand deletion routine, which checks whether it is safe to delete each published file; if it is, the file is deleted, and if not, nothing happens.

This cron would run locally in each location and would only ever operate on published files that are marked for deletion in its location.
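
Tied together, the cron entry point can stay tiny: it queries for marked files in its own location and lets the on-demand routine above make the final call. Entity names are the same assumptions as before:

```python
def run_deferred_deletion(sg, rules, location_id):
    """Cron entry point: delete everything marked for deletion here.

    Queries the per-location status records for this location, then hands
    each candidate to the on-demand routine above, which re-checks the
    rules and does nothing if deletion isn't safe yet.
    """
    marked = sg.find(
        "CustomEntity02",
        [["sg_location", "is", {"type": "CustomEntity01", "id": location_id}],
         ["sg_marked_for_deletion", "is", True]],
        ["sg_published_file"],
    )
    for status in marked:
        delete_publish_here(
            sg, rules, status["sg_published_file"]["id"], location_id
        )
```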

Pipeline Rules

The two methods of deletion discussed above mention checking a set of rules to see if the requested action is allowed. These types of rule checks will come up in many places in a pipeline and are not reserved to cleanup systems.

- May I make this published file “official”? The answer should be “no” if that published file has been deleted in all locations, as the file itself is no longer available for use. The same is true if the file has been taken off of frontline storage and put into archive, as users won’t be able to access the data.
- May I transfer this published file to my location? The answer should be “no” if it is already in the process of being transferred to that location.
- May I mark this published file for deferred deletion? The answer should be “no” if it is the “official” version. The same would be true for on-demand deletion.

There are many other questions that need to be asked for the various actions that can be taken within a production pipeline, so it makes sense to provide a simple, centralized place to store this logic. Implementing a simple "Rules" API is a good way to abstract away the logic used to answer your tools' questions about the pipeline. It also means that those rules can change over time without needing to update the tools/apps themselves, since the rules of the pipeline are centralized.
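
A minimal sketch of what such a Rules API could look like, answering the three questions above using the assumed fields and custom entities from the earlier examples:

```python
class Rules(object):
    """A tiny, centralized 'may I?' layer for the pipeline.

    These checks mirror the questions above; the queries lean on the
    assumed sg_official checkbox and per-location status entities from
    the earlier sketches.
    """

    def __init__(self, sg):
        self._sg = sg

    def may_make_official(self, published_file_id):
        # "No" if the file is no longer online anywhere.
        online_somewhere = self._sg.find_one(
            "CustomEntity02",
            [["sg_published_file", "is",
              {"type": "PublishedFile", "id": published_file_id}],
             ["sg_online", "is", True]],
        )
        return online_somewhere is not None

    def may_transfer(self, published_file_id, location_id):
        # "No" if a transfer to this location is already in flight.
        transferring = self._sg.find_one(
            "CustomEntity02",
            [["sg_published_file", "is",
              {"type": "PublishedFile", "id": published_file_id}],
             ["sg_location", "is",
              {"type": "CustomEntity01", "id": location_id}],
             ["sg_transferring", "is", True]],
        )
        return transferring is None

    def may_delete(self, published_file_id, location_id):
        # "No" if this is the official version; a real implementation
        # would also check active subscriptions (see last week's post).
        pf = self._sg.find_one(
            "PublishedFile",
            [["id", "is", published_file_id]],
            ["sg_official"],
        )
        return not pf.get("sg_official")
```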


That’s it for week 9! We hope you’ve enjoyed reading about statuses and how they can be used to aid in managing published files. As always, if you have any questions or suggestions, please add a comment below.

We will be back with week 10, but it will come two weeks later than normal, as the entire Shotgun team will be attending our annual summit next week, and the week after is Thanksgiving in the USA. When we get back, we will be hard at work putting together what will be the final post in the series! Even though we will be concluding with our next post, we'd like to invite everyone to suggest topics for future pipeline-related blog entries. We are very open to the idea of writing more in the future, so please let us know if there is something you would like to see discussed!

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.


Two Guys and a Toolkit - Week 8: Subscriptions

New to Shotgun or Toolkit? Check out Our Story, learn about Our Product, or get a Quick Toolkit Overview. If you have specific questions, please visit our Support Page!


Hi everyone! Welcome back for part eight of our series dedicated to building a simple pipeline using Toolkit. Here are links to the previous posts, just in case you missed anything:

I’d like to start out this week’s post by looking at a few common production questions.

- What version of the rig is animation using?
- Who hasn’t pulled the latest model?
- What version of the plate should I be using?
- How can I revert to what I was doing last Thursday?
- Why don’t the Lighting and FX elements line up?
- What version of the animation was I looking at in dailies yesterday?
- Disk space is getting low! What can I clean up?
- This shot is going to be rendered in Vancouver, what files need to be transferred?

Do any of those sound familiar? I’m willing to bet that if you’ve been on production for a while you’ve heard at least a few questions like these on a fairly regular basis. 

The purpose of this post is to talk about the concept of Subscriptions and how they can be used to answer, or help answer, these kinds of common production questions, across multiple locations, and without hammering your production disks.

Jeff and I spent the majority of our careers working with some of the best pipeline engineers around, building and supporting subscription-based pipelines. From experience, we know that there can be a lot of moving parts when it comes to subscription-based workflows, which is why we didn't try to tackle them while implementing our simple pipeline. It's safe to say that we each feel strongly that the benefits of tracking subscriptions far outweigh the overhead on a medium to large production. Hopefully this post will outline some of the benefits we see and make it clear why we are subscription fans. It will also be interesting to consider, as we go along, how subscription-based workflows could be implemented using Shotgun and Toolkit.

As always, these posts are aimed at starting a discussion with you all. After reading the post, if you have a strong opinion either way regarding subscriptions, or if you have any questions or comments, we’d love to hear them - especially if you’ve implemented a subscription-based workflow already! 

We linked to it earlier in the series, but if you want some more information about subscriptions and how they can be used to build sophisticated workflows, see this talk from SIGGRAPH 2014.

Work Area Versioning

Before we get into the guts of how subscriptions work and their benefits, I’d like to talk a little bit about versioning. Toolkit handles versions at the file level. You can version up your work file, and you can publish new versions of your work for folks downstream. This can work well when you have a single file that contains all of your work and when you generate all your publishes from that single file, but I think it’s also fairly common for an artist to be doing their work in multiple files and multiple DCC applications simultaneously. 

An FX artist may go back and forth between Maya and Houdini, and a matte painter may bounce between Photoshop and Nuke. An artist may have auxiliary files with shot-specific Python or MEL that they’re reading in and executing as a part of their own day-to-day workflow. An artist may also have multiple work files for a single DCC referencing each other within the work area that they're using to generate various outputs. 

In these scenarios, having to manage versions on individual files is not something that an artist should have to deal with. At the same time, keeping track of the state of those files on production, and having accurate snapshots of what an artist was doing at a given time, is extremely valuable. 

What I’d like to be able to do is to version the artist’s work area and all of its associated files as a whole. It should be possible to snapshot a Maya file, a Houdini file, and anything else in the work area directory together. This way, if a user models something in Maya, deforms it procedurally in Houdini, then versions and publishes it, they have a saved state for all the files they used to generate that output.

In the diagram you can see that when you track versions on individual files, you don’t know how they’re related over time. If you version the work area as a whole, you can see which files are associated with each other and when files were added or removed from the work area.

Tracking the changes across a collection of files should sound familiar if you’ve used Git for revision control. In fact, it would be a really interesting exercise to think about a Git-backed system for managing the state of a user’s work area over time. Anybody doing that on production?

But I digress a little bit. To implement this in Shotgun, we would add a custom WorkAreaVersion entity. It would include metadata such as:

- A description of what changed for that version
- Who created the version
- When the version was created
- The Location the version was created in (Using the custom entity Jeff proposed last week)

We would link our publishes to the new entity as well so that we could properly traverse our production data once we started populating subscription data.
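
As a sketch, creating one of these records through the API might look like the following. "WorkAreaVersion" is assumed to be a renamed custom entity (CustomEntity03 here), the Location link reuses last week's custom entity, and Shotgun fills in created_by/created_at automatically:

```python
def snapshot_work_area(sg, task, version_number, description, location):
    """Record the current state of a work area as a WorkAreaVersion.

    `task` is assumed to be a Shotgun Task dict that includes 'content';
    all sg_* field names and the custom entity type are illustrative.
    """
    return sg.create(
        "CustomEntity03",
        {
            "code": "%s_v%03d" % (task["content"], version_number),
            "sg_version_number": version_number,
            "description": description,
            "sg_task": {"type": "Task", "id": task["id"]},
            "sg_location": location,
        },
    )

# Publishes generated from the snapshot would then link back to it via an
# assumed entity field on PublishedFile, so the data can be traversed later:
#
#   sg.update("PublishedFile", publish_id,
#             {"sg_work_area_version": {"type": "CustomEntity03",
#                                       "id": work_area_version["id"]}})
```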

As you see in the diagram, the version number of the publish matches the version number of the work area. Whether this is implicit or explicit in your data model, it is an important association that helps artists quickly map between the publish file and the state of the work area it was created from. 

The rest of the post assumes we’ve put the WorkAreaVersion entity in place and we have a strong association between the work area and publish versions.

What are Subscriptions?

We define a subscription as an association between a version of a work area and a version of an upstream publish. 

Subscriptions track the usage of upstream publishes over time:

In this example, we see the history of the hero comp's subscription to the camera from layout. You can see the compositor was using version 7 of the camera in versions 23 and 24 of her comp, but updated her subscription to use version 10 of the camera in version 25 of her comp.

In Shotgun, this could be modeled as a custom Subscription entity with fields for the PublishedFile being subscribed to and the WorkAreaVersion that is using it.
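
A sketch of creating such a record, assuming "Subscription" is another renamed custom entity (CustomEntity04) with two entity-link fields, and reusing the WorkAreaVersion entity sketched above:

```python
def subscribe(sg, work_area_version_id, published_file_id):
    """Create (or reuse) a Subscription linking a work area version to a publish."""
    filters = [
        ["sg_work_area_version", "is",
         {"type": "CustomEntity03", "id": work_area_version_id}],
        ["sg_published_file", "is",
         {"type": "PublishedFile", "id": published_file_id}],
    ]
    existing = sg.find_one("CustomEntity04", filters)
    if existing:
        return existing
    return sg.create(
        "CustomEntity04",
        {
            "sg_work_area_version": {"type": "CustomEntity03",
                                     "id": work_area_version_id},
            "sg_published_file": {"type": "PublishedFile",
                                  "id": published_file_id},
        },
    )
```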

Once we begin building subscription records for each version of our work areas, we can start building very powerful views into our data. 

Subscription Dataflow

High-level Views

Without subscriptions, we might be forced to parse work files to see what the inputs are for a given work area or collection of work areas. That type of pipeline introspection does not scale well and becomes problematic when you have a multi-location setup. With our populated subscription data, we can look at snapshots of our pipeline’s dataflow at a high level. 

We could do a simple query to ask what version of the hero rig shot ab013_13’s hero animation is using. We could also run that query against all the animation tasks within the sequence to quickly see who isn’t using the approved version of the hero rig for the main character. Going even further, since we know what the latest versions of published files are (or perhaps we track other publish file approval statuses), we could simply ask the system to provide a list of all the work areas on the project that aren’t using the latest versions of things they’re subscribed to. 
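
For example, the first of those questions could be answered with a couple of chained queries against the assumed custom entities, using Shotgun's deep-linked field syntax (the exact field chains depend on how the entities were configured):

```python
def subscribed_rig_version(sg, shot_code, rig_publish_name):
    """Which version of a named rig is a shot's animation work area using?

    Walks from the latest animation WorkAreaVersion on the shot to its
    Subscription for that publish; entity and field names follow the
    earlier sketches and are assumptions.
    """
    latest_wav = sg.find_one(
        "CustomEntity03",
        [["sg_task.Task.entity.Shot.code", "is", shot_code],
         ["sg_task.Task.step.Step.code", "is", "Animation"]],
        ["sg_version_number"],
        order=[{"field_name": "sg_version_number", "direction": "desc"}],
    )
    if not latest_wav:
        return None
    sub = sg.find_one(
        "CustomEntity04",
        [["sg_work_area_version", "is",
          {"type": "CustomEntity03", "id": latest_wav["id"]}],
         ["sg_published_file.PublishedFile.name", "is", rig_publish_name]],
        ["sg_published_file.PublishedFile.version_number"],
    )
    return sub and sub["sg_published_file.PublishedFile.version_number"]
```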

Upstream Conflict Identification

Review sessions can take advantage of subscriptions as well. It is possible to write tools that build the complete upstream dependency graph for a given work area. 

If, in Comp dailies, the supervisor wants to know why some elements don’t line up, you might query the dependency graph, tracing the subscriptions upstream to see that lighting was using version 11 of the layout camera but FX was using version 13. After hitting this situation a few times, you might consider writing a tool to warn about potential conflicts before the compositor renders. You might even go as far as writing a tool that does a conflict resolution pass to warn artists as the subscriptions are created. 

Identifying Potential Issues Downstream

We can also look up data in the other direction. Starting with a version of a published file, we can ask what work areas are using it.

A layout artist may see that the latest version of the camera for shot ab013_14 is being subscribed to by the animator on shot ab013_15. There may be a perfectly good reason for this, but it would certainly raise a red flag. Being able to see exactly where publish data is going can be extremely useful in identifying potential problems. 
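
Looking downstream like this is a single query against the subscription data, and the same call doubles as the basis for the cleanup-candidate checks below. Entity names are the assumed ones from earlier:

```python
def downstream_consumers(sg, published_file_id):
    """List the work area versions subscribed to one published file."""
    subs = sg.find(
        "CustomEntity04",
        [["sg_published_file", "is",
          {"type": "PublishedFile", "id": published_file_id}]],
        ["sg_work_area_version"],
    )
    return [s["sg_work_area_version"] for s in subs]
```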

Cleanup Candidates

Another benefit of being able to see which versions of a published file work areas are subscribed to is identifying candidates for cleanup.

If I see that version 16 of a very large Alembic cache from animation isn’t being used by any recent versions of lighting, then it may be a good candidate to be cleaned up. Jeff is going to go deeper into cleanup next week, but it’s worth pointing out the power of being able to ask your pipeline what is and what isn’t being used currently across all of the studio’s physical locations.

Subscription Dataflow with Shotgun

From a data standpoint, I don't think there is much preventing someone from modeling subscriptions in Shotgun. The concept is fairly simple, assuming you have the work area versioning in place. I'm curious what you all think about tracking subscription data in Shotgun. Do you think it could help your studio? Does it seem overwhelming based on what you've read so far?

Next, let’s look at subscriptions from a workflow perspective.

Subscription Workflows

Managing Subscriptions

So how do artists actually populate and manage subscriptions as they work? We looked at Toolkit’s Loader in week 6, and I think it is reasonable to imagine adding a hook to create a subscription as an artist references a publish into their work file. It’s also easy to envision the Breakdown app updating subscriptions for the current version of the work area as it updates references. 

But there are problems with using the current set of tools if we’re working in a versioned work area scheme. Since subscriptions are at the work area level, updating a subscription to version 10 of the camera would imply that all my work files should now be using version 10. Ideally, a subscription update would immediately update the references across all my DCCs.

There are a couple of ways to approach this issue, depending on the scenario. You could manage subscriptions outside of the DCCs with an interface that shows the current version of your work area, what its subscriptions are, and which ones are out of date. Since managing subscriptions at this level only affects the database and not the working files, there would need to be a way to auto-update the references as the DCCs are loaded. This could be implemented with custom startup code that checks references and compares them against the subscribed versions.

A more ambitious solution might be to implement a custom URL scheme for referencing subscribed files. A custom URL evaluation phase would attempt a best match against the work area's subscriptions and forward the resolved path on to the DCC for loading. This would allow you to reference your subscriptions without an explicit version number, meaning that as long as you are subscribed to a matching publish, the DCC loads the correct version. In other words, you wouldn't need to change the reference when the subscription is updated. Just be wary of doing this kind of thing on the render farm, as you probably don't want subscriptions changing mid-render. You also don't want to hammer the database with queries while trying to evaluate subscriptions for each frame being rendered, so you'd want to consider a cached subscription layer for evaluating references in those scenarios.
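
To make the idea concrete, a resolver for such a scheme might look like this. The "sub://" prefix, the name-based match, and the path field layout are all assumptions; the point is simply that the reference itself never carries a version number:

```python
def resolve_subscription_url(sg, work_area_version_id, url):
    """Resolve a version-less reference such as 'sub://maincam' to a path."""
    publish_name = url.split("://", 1)[1]
    sub = sg.find_one(
        "CustomEntity04",
        [["sg_work_area_version", "is",
          {"type": "CustomEntity03", "id": work_area_version_id}],
         ["sg_published_file.PublishedFile.name", "is", publish_name]],
        ["sg_published_file.PublishedFile.path"],
    )
    if not sub:
        raise ValueError("No subscription in this work area matches %s" % url)
    path_field = sub["sg_published_file.PublishedFile.path"] or {}
    # A cached copy of this lookup is what you'd hand to the render farm.
    return path_field.get("local_path")
```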

Update Notifications

In situations where a subscription has been updated externally, a new version of a published file has become available, or an existing subscription points to a publish file that has been deprecated, a messaging system could be used to notify users already working in a DCC that they need to update the references in their file.

It might also be nice to provide users with the ability to lock subscriptions. Locking a subscription would imply that the user knows they may be out of date but they do not want to be prompted again to update the subscription. 

Usage Suggestions

When tracking subscriptions, eventually there will be enough data in the system to start making predictions about dataflow. You may see that lighting almost always subscribes to all of the Alembic publishes from animation. 

You could build a rule into your code somewhere that would automatically suggest newly published Alembic caches from animation when the lighter opened their work file. 

Subscription Overrides

Subscriptions can also act as overrides in alternative workflows when using published file groupings, as discussed in week 4. A lighter on a shot might subscribe to a lookdev publish group that includes the base model, textures, and shaders.

A subscription to the Alembic cache from animation could automatically override the base model packaged with the lookdev when everything is loaded in the DCC or at render time. 

Push on Publish 

As mentioned before, subscriptions can play a big role in helping you manage data across multiple locations. When a user in location B subscribes to a publish created in location A, we know that we need to initiate a transfer from A to B. Further, you might decide that your pipeline should automatically push subsequent versions of the publish file to location B even before the artist in location B has updated their subscription.

In this example, versions 18 and 19 of the animation cache have preemptively been transferred to location B to prevent the lighter from having to wait when they update their subscription. Subscriptions allow you to make more educated decisions about what files need to be where, which can drastically improve iteration efficiency and turnaround time in multi-location setups.

Remote Rendering

In scenarios where you’re rendering in the cloud or sending work to remote locations, the combination of the work area version and the subscriptions should give you a complete manifest of the files that need to be transferred to the new location. If you combine this idea with the per-location publish file status that Jeff mentioned last week, you can see how it makes it much easier to be smart about what files need to be moved where in order to execute a render or get a remote user up and running.

Reverting State

Since subscriptions are tied to a version of the work area, it should be possible to restore a work area and its subscriptions to a previous state. If the director says they preferred the camera move from last Tuesday, it should be possible to identify the version of lighting that was shown that day and query the system to determine which version of the camera was being subscribed to at the time. The lighter can then update their subscription to the older version and re-render if necessary.

Subscription Workflow with Toolkit

When it comes to building subscription-based workflows with Toolkit, I believe it is very much possible. I think I've mentioned it before, but one of the strengths of Toolkit, in my opinion, is that it's a platform for building consistent workflows across every stage of production. That consistency is a critical component when it comes to subscriptions.

Like I said above, Toolkit’s current mode of operating at the file level for versioning would be the biggest hurdle for implementing the types of workflows described. But, with the openness and flexibility of the platform, outside of time and resources, I don’t see any real roadblocks to adding subscription-tracking interfaces and workflows using Toolkit.

Subscription-based Workflow Suggestions

Finally, I wanted to make a few suggestions about building subscription-based workflows that will help keep the history of the dataflow intact. Some of these might cause consternation among artists on production, depending on how rigidly you adhere to them. But the more you can adhere to them, the more accurate your representation of the production dataflow will be.

Once a work area has been published and versioned up, the subscriptions of the previous version should not change. 

If subscriptions change after the fact, the inputs that created the published files are no longer an accurate representation of what the artist was using at that time. 

In this example, changing the subscription of version 23 of the Lighting work area to a different version of the Alembic cache from animation would create an inaccurate history of the geometry used to generate version 23 of the lighter’s renders. Since the renders are published and being consumed by the compositor, the subscription should not change.

A work area version should only be subscribed to one version of a published file at a time.

There are some cases on production where this is a really tough sell, but it does create a clearer view into how data flows through the pipeline.

If you allow scenarios like this, where a work area can subscribe to multiple versions of a publish, you create ambiguity. In this example, without diving into the work files, how do you know which version of the animation was used to render the output frames?

Every external input to your work area should be represented by a subscription. 

If you’re referencing something externally that is not a subscription, then you don’t have a complete view of the dataflow.

In this example, the lighter is using an image file from his home directory and referencing it in his renders. This creates a situation where data is being used and not tracked. If this shot needs to be transferred and rendered remotely, there’s a good chance the render will break. 

Publish file versions should match the version of the work area from which they were generated. 

This makes it easy to tell at a glance what version of the work area created which published files. I mentioned this earlier in the post, but it is worth reiterating. Quickly being able to identify at a glance exactly where something was generated is extremely useful, and quick little wins like this add up on production. 

Whether or not you enforce these rules in a subscription-based workflow, and to what extent, is totally dependent on the goals of your pipeline. There are situations where these rules just aren't practical given time constraints, disk resources, etc. In my opinion, though, the more you stick to these rules, the more reliable a view you'll have into your production data.


That wraps up week 8! I hope you've enjoyed my very quick overview of subscriptions and how they can be beneficial on production. If you have any questions about any of the subtopics, or want to give us your thoughts on subscriptions or other ways of tracking these kinds of relationships, we would absolutely love to hear from you. Maybe you have ideas about how to tap into subscription data to get at some of the information you've always wanted. Please kick off a conversation in the comments section!

Next week Jeff and Jesse are going to dive deeper into multiple-location scenarios, specifically with respect to PublishedFile statuses, location awareness, and cleanup strategies. 

Have a great week!

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person. 

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework. 

Jeff & Josh joined the Toolkit team in August of 2015.


Shotgun Performance Issue Post-Mortem
On Monday, October 26th, we started seeing unusual contention on one of our database clusters, resulting in slow queries and even errors for our heaviest users. We immediately looked into the issue, and when it recurred on Tuesday we created a crisis cell to investigate and fix the problem as quickly as possible.

We have since introduced a number of quick changes that appear to have mitigated the issue, and we will continue to apply improvements over the coming days. Even though we still have work to do, we feel we owe you an explanation now.

Some Background

Shotgun is a powerful tool, and we have traditionally been very open about how users can use it, letting them craft queries and UIs to match their needs. This is very convenient for users, but it creates some complex issues for the engineering team when it comes to predicting the impact of a change or a new feature.

Because of this, we are constantly monitoring client usage patterns, optimizing queries, and working with clients to improve their workflows.

Fire Fighting

We had our first incident on Monday. When this kind of thing happens, we always have a couple of people on the team jump in and try to identify what is adding unreasonable load to the system. We looked at the issue from a very broad angle, trying to identify an unusual number of requests or new patterns that are not optimized in our system. We identified a couple of culprits, but nothing obvious.

Then it happened again on Tuesday. Over the last few months we have put a lot of effort into reducing the contention points in our system, with positive results, so having the issue happen twice in a row raised some flags. We started to suspect that the performance degradation could be related to the 6.3 release, and split the investigation team in two: one group looked into optimizing queries broadly, while the other looked into possible regressions or changes that could explain the sudden degradation. Meanwhile, the core team started getting another database cluster ready; if we couldn't find a solution rapidly, we would at least be able to lower the load by re-balancing our clusters.

We attacked the slowest and most time-consuming queries in our system by adding indexes, and released a patch on Wednesday night that was aimed in part at removing stress from the system.

We were also able to correlate some of the performance issues with the Shotgun 6.3 release. While the release was not the direct cause, it allowed a badly optimized workflow to be executed often enough to do some damage. In the first three days of the week, because of the new Media App, we served 10 times more versions and playlists than usual. We believe that this additional stress, along with the performance issue it exposed, was at the root of our problems.

What's Next

We still have a couple of improvements coming out over the next few days. We are also putting our new database cluster into production, and rebalancing will start today. While this does not directly solve the issue, it will give the clients on the affected cluster more breathing room.

Further out, we have a number of actions planned to reduce the likelihood of such events. We have put a lot of effort into segregating clients from each other, and we have more work to do at the database level. More specifically, we are looking into introducing different levels of quality of service for requests, in part to make sure the Web App is always responsive, even under heavy load. Some of these features are being developed as we speak.

We will also be making sure our monitoring can help pinpoint issues before they hit production. Our QA team already invests a lot of effort in replicating client patterns to identify regressions, but we want to invest more on the performance-regression side. We are also in the process of integrating a new reporting tool that will help us optimize our queries more effectively.


Finally, our sincere apologies for this week's issues; please be assured that we are not taking them lightly. We realize that Shotgun is an important part of your pipelines and workflows, and we will keep working hard to improve it in every way.
"La Noria"- Darkness Isn't Always What it Seems

We recently had the chance to speak with Carlos Baena, director of La Noria, a new animated short horror film which was produced as an online collaboration with artists from around the world.

La Noria brings a new vision to animated films by exploring darker themes and elegant visuals, and by producing the short using online production technology. Check out what Carlos has to say about the new indie short, becoming an artist, and staying inspired here.

Two Guys and a Toolkit - Week 7: Multi-Location Workflows

New to Shotgun or Toolkit? Check out Our Story, learn about Our Product, or get a Quick Toolkit Overview. If you have specific questions, please visit our Support Page!

Multi-Location Workflows

Hi everyone! Welcome back for part seven of our series dedicated to building a simple pipeline using Toolkit.

Up to this point in the series, we’ve been looking at setting up and building the fundamental pieces of our simple pipeline. Here are links to the previous posts, just in case you missed anything:

        Introduction, Planning, & Toolkit Setup
        Publishing from Maya to Nuke
        Dataflow and Workflow

As always, please let us know if you have any questions or want to tell us how you’re using Toolkit in exciting and unique ways. We’ve had a really great response from you in previous posts and we look forward to keeping that discussion going. So keep the feedback flowing!

This week we’re going to take a look at multi-location workflows. This could refer to something as structured as a studio with two physical locations, or as amorphous as a completely decentralized workforce where each artist is working from their own remote location. While the implementations of these two cases might differ, some of the core philosophies and designs will remain the same. We will be discussing some of these philosophies and the features they require. Toolkit provides the customizability necessary to act as the platform upon which a multi-location pipeline can be built, but a specific implementation is not provided out of the box. This week’s discussion will hopefully fill in some of those gaps from a design standpoint.


Let’s discuss two scenarios that are becoming more common in today’s visual effects and animation studios: studios with multiple physical locations, and decentralized workforces without a physical studio location. The former scenario has been common in the visual effects industry for a number of years, and the latter is becoming more popular as cloud technologies evolve. In fact, a large portion of the Shotgun team works remotely, with developers and other staff spread all around the world.

Multiple Physical Locations:

A typical multi-location workflow that involves multiple physical locations usually utilizes local storage at each location, and either a push- or pull-based synchronization of data between each.

Decentralized Workforce:

Decentralized workforces are relatively new in the visual effects industry, but are becoming more common as technology has evolved to allow for the possibility of having a distributed workflow with no centralized, physical studio location. Most of the discussions around this type of workflow revolve around the use of cloud storage combined with web services to build a location-agnostic pipeline. Shotgun is well positioned to act as the backbone for this type of workflow, as the use of a hosted, web-accessible database is a core feature of Shotgun Toolkit.


This setup would likely involve a series of pushes and pulls to and from some cloud-storage service. Paired with that storage would be an internet-accessible database that is used to track who has produced what data, where it lives within the cloud, and where it would be if it were to be cached on a local disk.

Problems and Solutions

As things relate to Shotgun and Toolkit, we have a few problems to solve. There are likely many different approaches to each, so our ideas should not be taken as gospel. If you have implemented any of this in the past, or if you have a good idea of how you would implement it, please let us know. Your ideas surrounding possible solutions to these problems could very well help set the direction that Toolkit takes in the future, as these are workflows that we would love to better support going forward.

What data should be shared?

There are three types of data associated with a project: configuration data, user-generated "work files", and tool-produced "published" data. Each of these types of data should be sharable between users in different locations, but when and how that data is shared differs for each.

Configuration Data:

Configuration data, as it relates to Toolkit, includes all of the data within the project's config root directory. This is mostly made up of YAML files, plus all of the apps, engines, and frameworks that act as the backbone of the pipeline. Active project configurations should always be kept in sync for all users in all locations.

As things currently stand, this is a real challenge for Toolkit-driven pipelines. The project’s config typically lives in some centralized location that all users reference, and attempting to distribute that out to multiple locations raises synchronization issues and introduces possible race conditions when multiple people are modifying the project’s config at the same time in different physical locations.

The solution to this problem is a development project that we internally call "zipped config." The goal is to store a project's configuration in Shotgun, and to put a push/pull system in place for modifying and distributing changes to that configuration.

In the above diagram, if user A were to make a change to the project's configuration, Shotgun Desktop would be used to upload those changes to Shotgun. Users B and C would be notified of the config changes and given the opportunity to bring their local configs up to date. Should a conflict arise where user A uploads config changes while user B has also made changes, user B would be notified when they attempt to upload their own changes.

We don’t yet have a release date scheduled for the “zipped config” project, but it is a high priority development task. It is core to Toolkit’s ability to support multi-location workflows, and we will work hard to make it as robust a solution as possible.

User Work Files:

An individual user's work files will likely need to be shared with other users, but only under certain circumstances. There are definitely situations where artists will share their work files, including the need to reassign work from one user to another, as well as simple debugging of problems by support teams. Because the need is situational, transferring this data from one physical location to another could be handled as a manual push or pull process. Taking this approach reduces disk usage in remote locations (or in the cloud) and also reserves available bandwidth for transferring data that is known to be needed in remote locations.

Toolkit's Perforce integration already implements a check-in/check-out system for user work files, illustrating the flexibility of Toolkit and how it can be used to build similar workflows for managing access to shared files. The challenge becomes knowing which work files live where, and how to get them. Since they're not explicitly tracked in Shotgun the way published files are, some way of tracking work files that doesn't rely on asking the filesystem will likely be required. We'd love to hear your thoughts on this, as we think there are multiple viable approaches if some thought is put into it.

The Perforce integration is implemented as custom hooks for tk-multi-publish, tk-multi-loader2, and tk-multi-workfiles, all of which can be found here.

Published Data:

This is where things get interesting. All published data needs to be accessible to all users, regardless of location. However, when that data is pulled down to local storage can be controlled, since all of the available files are tracked in Shotgun as PublishedFile entities. Where these files should reside on local storage is known, so whether to download from cloud storage (or a remote physical location) can be determined at the moment the file is requested.

The database records tracking these published files will need some additional bits of information populated at publish time. I'm listing a few here, along with a sketch after the list of how they might be gathered, but there are certainly more possibilities out there, so feel free to suggest others and what they could be used for.

1) A checksum computed from the locally-exported file prior to upload to the cloud. This can be used for verification of pulled/downloaded data when a user in a different physical location requests access to the file.
2) The cloud-storage location of the file. The PublishedFile entity will already have a record of the local-disk path of the file, but we will also need to know where to get that file from should we not already have it on local storage.
3) The size on disk of the published file. It’s always good to know how much data is about to be pulled down to local storage before actually doing so.
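
A publish hook could gather these three fields just before the upload step. A small sketch, with the sg_* field names and the cloud-key convention as assumptions:

```python
import hashlib
import os

def publish_metadata(local_path):
    """Gather the extra per-publish fields discussed above.

    Returns a dict ready to be merged into the PublishedFile data at
    publish time; the sg_* field names and the cloud key convention are
    assumptions, and the actual upload is handled elsewhere.
    """
    md5 = hashlib.md5()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            md5.update(chunk)

    return {
        "sg_checksum": md5.hexdigest(),
        "sg_size_bytes": os.path.getsize(local_path),
        # Cloud location: here just a deterministic key derived from the
        # local path; a real pipeline would use its own convention.
        "sg_cloud_key": local_path.replace("\\", "/").lstrip("/"),
    }
```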

The order of operations for a publish could look something like this:
The order of operations when importing a published file could look something like this:
The upside of a pull-based workflow for files published in remote locations is that only requested data is housed on local storage, which is an efficient use of disk space. The downside is that users can be left waiting for large data sets to download from another location's storage before they can begin their work. It's a trade-off, and there are likely mechanisms that can be put in place to ease the pain. If you have ideas, please be vocal!

What about the project root path?

One of Toolkit’s limitations is that it relies on all data being stored underneath a project root directory. When dealing with a single studio location this works quite well, but it can become a problem when distributing users across multiple locations. The requirement is that ALL locations will need to have the same root directory. If the config says that the project’s data is stored in /shotgun/projects/tk_pipeline, then every location needs to mirror that same structure. This will likely not be a problem for studios with multiple physical locations, as they can structure their filesystems the same way across all locations. However, distributed workflows where artists work remotely will also be held to this requirement. Without control over the hardware and OS setup by a central Systems/IT group, the onus is on the individual users to properly configure their workstation to work within the confines of the pipeline.

This restriction does have its benefits, however, because it also ensures that reference paths in user work files and publishes will be consistent across locations. This will act to head off potential pathing problems should work need to be shared.

An Interesting Idea: Per-Location Status

Josh and I have discussed the notion of tracking a published file's status on a per-location basis. The idea of published file status hasn't been well explored in Toolkit, but it has the potential to provide some interesting functionality. The way we envisioned it working would be to have two new types of entities in Shotgun: PublishedFileStatus and Location. Each PublishedFile would have a PublishedFileStatus entity for each Location that exists. The status entity itself would have a field for tracking the following statuses:

1) Online: The published file is online on frontline storage in this location.
2) Nearline: The published file is online on nearline storage in this location.
3) Archived: The published file was online in this location, but has been archived.
4) Deleted: The published file was online in this location, but has been deleted.
5) Transferring: The published file is in the process of being transferred to this location.

Any number of other statuses could be added to that list to serve the purposes of your pipeline. Having the ability to track these kinds of statuses on a per-location basis can act as the foundation for additional, location-aware functionality within the pipeline. It also provides a great deal of information about the data your pipeline is tracking on a global scale, all accessible from the database without the need to stat files on disk in remote locations.
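
As a sketch of the bookkeeping, creating or updating one of these per-location records could look like the following, with "Location" and "PublishedFileStatus" assumed to be renamed custom entities and sg_status an assumed list field whose codes mirror the five states above:

```python
def set_location_status(sg, published_file_id, location_id, status):
    """Create or update the PublishedFileStatus for one publish at one location.

    'Location' and 'PublishedFileStatus' are assumed to be renamed custom
    entities (CustomEntity01 / CustomEntity02); sg_status is an assumed
    list field with codes like 'online', 'nearline', 'archived',
    'deleted', and 'transferring'.
    """
    filters = [
        ["sg_published_file", "is",
         {"type": "PublishedFile", "id": published_file_id}],
        ["sg_location", "is", {"type": "CustomEntity01", "id": location_id}],
    ]
    existing = sg.find_one("CustomEntity02", filters)
    if existing:
        return sg.update("CustomEntity02", existing["id"],
                         {"sg_status": status})
    return sg.create(
        "CustomEntity02",
        {
            "sg_published_file": {"type": "PublishedFile",
                                  "id": published_file_id},
            "sg_location": {"type": "CustomEntity01", "id": location_id},
            "sg_status": status,
        },
    )

# e.g. right after a successful publish in Los Angeles:
# set_location_status(sg, publish["id"], la_location_id, "online")
```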


That's where we will end this week's post. Multi-location pipelines are a massive topic, and we've only hit the very basics here. This post is the start of a set of deeper, more philosophical topics that we'll be writing about going forward. Because we're moving into this territory, we hope to continue to elicit discussion from all of you out there. The remaining topics cover workflows that are not provided out of the box by Toolkit, but they are things we would love to better support in the future. As such, your input in these discussions will help drive the direction of future development.

See you next week!

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.


Available Now: Shotgun 6.3!
We're excited to officially release Shotgun 6.3, now available to all Shotgun subscribers. This release is jam-packed with new features and updates to the Media App, Client Review Site, and Screening Room for RV Submit Tool, making it even easier for entire teams to review, share, and provide feedback on creative projects. Here's what's included in the 6.3 release:

Media App Enhancements

Shotgun's Media App helps simplify the way artists and supervisors create, search, browse and review media on a project. But, we didn't want to stop there, so we’ve added new features and updates making it better than ever. Here’s what’s new:

The Media App has gone global, allowing users to easily access and manage media across all of their studio's projects from one easy-to-access location in Shotgun. We're also obsessed with performance: the new Global Media App runs 2x faster than the previous Media App.

Browse your media by your project's hierarchy, so artists, supervisors, and managers can quickly find the media they need within their projects. For example, quickly filter by sequence>shots on film projects or episode>shot on episodic TV projects.

An All Playlists view helps users easily find and see all the playlists on their Shotgun site or drill down to playlists on a specific project.

Media Launching Preferences let users customize exactly how their media is launched from the Media App, turning it into a super-powered springboard to your media, whether it's in the cloud or stored locally.

We've also made it easier to share a specific version or playlist with your colleagues with the addition of Version/Playlist Sharing. Quickly copy a link to what you want to share and then email or IM it for immediate review.

Client Review Site

The Client Review Site allows Shotgun users to present work to their clients on a brandable, simple, secure website. And with Shotgun 6.3, we’re introducing a round of highly anticipated updates based on feedback from the community, including:

- The ability to add attachments to notes and replies

- Shotgun users can now reply to Client Notes from anywhere within Shotgun and send notifications back to the client with ease

- Configurable sharing security settings let you control whether a password is required to access your work. This one's for our commercials clients out there who sometimes favor speed above all else

- Time-saving features remember who you share work with most commonly and make it simple to re-share with those people without the overhead of managing groups

- Improved email notifications let you and your clients know the second feedback is left or new work is added to a playlist

- A revamped Manage Share menu puts you in control of who has access to your shared work

SR for RV Submit Tool: My Tasks View and Notify

The release of SG+RV 6.0 has opened up a new world of possibilities to explore. To get things started, we're making it easier for artists to submit their work with the addition of a My Tasks view in the Screening Room for RV Submit Tool. They can even Notify others about their new work via the Shotgun Inbox in just one click.

Read the full Shotgun 6.3 release notes here.


Two Guys and a Toolkit - Week 6: Dataflow and Workflow

New to Shotgun or Toolkit? Check out Our Story, learn about Our Product, or get a Quick Toolkit Overview. If you have specific questions, please visit our Support Page!

Dataflow and Workflow

Hi everyone! Welcome back for part six of our series dedicated to building a simple pipeline using Toolkit.

Up to this point in the series, we’ve been looking at setting up and building the fundamental pieces of our simple pipeline. Here are links to the previous posts, just in case you missed anything:

  1. Introduction, Planning, & Toolkit Setup
  2. Configuration
  3. Publishing
  4. Grouping
  5. Publishing from Maya to Nuke

As always, please let us know if you have any questions or want to tell us how you’re using Toolkit in exciting and unique ways. We’ve had a really great response from you all in previous posts and we look forward to keeping that discussion going. So keep the feedback flowing!

This week we thought we’d talk about how all the pieces we’ve been building fit together and discuss the dataflow and artist experience within the context of our pipeline. As usual, we'll take a look at what bits of Toolkit worked well for us and which ones we think could be better. This will give us a solid foundation for the rest of the series as we transition into a discussion with you all about our pipeline philosophies and building more sophisticated workflows.


Hey everyone, Josh here! One of the strengths of Toolkit, in my opinion, is that it exposes a common set of tools for every step of the pipeline. This means there is a common pipeline "language" that everyone on production speaks. If someone says, "you need to load version 5 of the clean foreground plate from prep", that means something significant whether you're in animation, lighting, or compositing, because you're all using the same toolset. The more you can avoid building step-specific workflows and handoff tools, the more flexible your pipeline will be. Now, obviously you have to be able to customize how the data flows between steps, but I'd avoid hardwiring that into your pipeline.
Since we’ve made a conscious effort to keep our pipeline simple, and because we like the fact that we have a consistent set of tools across all of our pipeline steps, we haven’t deviated much from the standard, out-of-the-box Toolkit apps. So rather than analyzing the workflow at each step of the pipeline individually, I think it’s more efficient to look at how the average artist working in the pipeline uses these tools. I’ll also point out the customizations we've made (most of which we've mentioned before). Hopefully, combined with the Dataflow section of this post, that will give you a complete view of how the pipeline is meant to work and how the packaged Toolkit tools are used.

Loading Publishes

The Loader app is used by almost every step in the pipeline as a way of browsing and consuming upstream PublishedFiles. The Loader has quite a few options for configuring the browsing and filtering experience for the user, which is really cool. And of course there are hooks to customize what happens when you select a publish to load.

PublishedFile loading
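
For anyone who hasn’t poked at those hooks yet, here is a trimmed-down sketch of what a Maya action hook can look like. The generate_actions/execute_action split mirrors the shipped Loader hook, but treat the body as illustrative rather than a drop-in implementation:

    import sgtk

    HookBaseClass = sgtk.get_hook_baseclass()

    class MayaLoaderActions(HookBaseClass):

        def generate_actions(self, sg_publish_data, actions, ui_area):
            """Declare which actions show up for this publish in the Loader."""
            action_instances = []
            if "reference" in actions:
                action_instances.append({
                    "name": "reference",
                    "params": None,
                    "caption": "Create Reference",
                    "description": "Reference this publish into the current scene.",
                })
            return action_instances

        def execute_action(self, name, params, sg_publish_data):
            """Run whatever action the artist picked."""
            import maya.cmds as cmds
            path = self.get_publish_path(sg_publish_data)
            if name == "reference":
                cmds.file(path, reference=True, loadReferenceDepth="all")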

From a user standpoint, there seems to be a lot of clicking to get at what you’re actually interested in. Between drilling down into a context, filtering publishes, selecting a publish, and finally performing an action, you can rack up quite a few clicks. If you need publishes from more than one input context, you potentially have to start all over again. I think that users often know exactly what they want, and having the ability to type a name into a search widget might be more convenient. There is a search/filter widget in the UI, but it’s for limiting what shows up in the already-filtered view. It would be great to have a smart search that prioritized the returned publishes that were in the same Shot or Asset as the user’s current context.
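
As a thought experiment, a "search first" flow might look something like the sketch below: take whatever the artist typed, find matching publishes in the project, and float the ones belonging to their current Shot or Asset to the top. The ranking logic here is ours, not something the Loader does today:

    import sgtk

    def search_publishes(query):
        engine = sgtk.platform.current_engine()
        ctx = engine.context

        publishes = engine.shotgun.find(
            "PublishedFile",
            [["code", "contains", query], ["project", "is", ctx.project]],
            ["code", "version_number", "entity", "published_file_type"],
        )

        def rank(publish):
            # Publishes from the artist's current Shot/Asset sort first.
            entity = publish.get("entity") or {}
            if ctx.entity and entity.get("type") == ctx.entity["type"] \
                    and entity.get("id") == ctx.entity["id"]:
                return 0
            return 1

        return sorted(publishes, key=rank)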

I also found the filtered list of publish files difficult to parse visually. You can see in the screenshot above that the Loader is displaying PublishedFiles in a single list and they are sorted by name. As a user, I would love to be able to sort by task, version number, username, date, etc.

To me, the Loader is similar enough to a file browser that it is easy to notice where some of the common file browser features are missing. In addition to the sorting and filtering options, I noticed immediately that there were no buttons at the bottom of the UI. I was expecting at least a Cancel/Close button. What’s the general feedback you all get from artists using the Loader UI?

I also wonder how people know what publishes they need to load on production. Is this just a discussion people have with folks upstream (which is perfectly reasonable)? Or does your facility do anything special to track the “approved” publishes in Shotgun and relay that information to the artists somehow? Have you used the configuration capabilities of the Loader to filter and show only publishes with a certain status, for example? It would also be interesting to spec out how we might use Shotgun to predict what a user might want or need for their context.
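
As one possible answer to our own question, here is roughly what a status-driven filter could look like if you queried it yourself with the Shotgun API. The "apr" status code is an assumption about your site's schema, and in practice you could push an equivalent filter into the Loader's configuration instead:

    import sgtk

    def approved_publishes(ctx):
        """Only the publishes that have been marked approved for this Shot/Asset."""
        engine = sgtk.platform.current_engine()
        return engine.shotgun.find(
            "PublishedFile",
            [
                ["entity", "is", ctx.entity],
                ["sg_status_list", "is", "apr"],
            ],
            ["code", "version_number", "published_file_type"],
            order=[{"field_name": "version_number", "direction": "desc"}],
        )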

You may have noticed a “Show deprecated files” checkbox in the bottom-right corner of the Loader screenshot. That’s a really cool feature that was added by Jesse Emond, the Toolkit intern, who has been kicking some serious butt around here. We’ll give Jesse a formal introduction in a future post where he’ll be able to talk about deprecating PublishedFiles in our simple pipeline. So definitely be on the lookout for that!

We mentioned in a previous post that we customized the loader hooks to connect shaders and alembic caches as they’re imported. You can see that hacky little bit of code here. And here’s what it looks like in action:

Auto shader hookup on Alembic import
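
If you’d rather not click through to the hook, the general shape of the hookup is below. It leans on a simple naming convention (mesh transform "foo" maps to shading group "foo_SG"), which is a convention of our pipeline rather than anything Toolkit enforces:

    import maya.cmds as cmds

    def hookup_shaders(imported_nodes):
        """Assign published shading groups to freshly imported Alembic meshes."""
        for mesh in cmds.ls(imported_nodes, type="mesh", long=True) or []:
            transform = cmds.listRelatives(mesh, parent=True)[0]
            shading_group = "%s_SG" % transform.split("|")[-1].split(":")[-1]
            if cmds.objExists(shading_group):
                cmds.sets(mesh, edit=True, forceElement=shading_group)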

File Management

Next up is the Workfiles2 app, which is in pre-release. This app is tasked with managing the files that artists are working in. Every step of our simple pipeline uses this app to open and save working files.

Saving the work file

The default interface, in Save mode, has fields for specifying a name, the version, and the extension of the working file. These fields are used to populate the template specified in the app’s configuration for the engine being used. In this example, the template looks like this:

The template being referenced in the workfiles config

The template itself
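
For a rough idea of how those Save fields turn into a path, here is the Toolkit API at work. The template name and its definition are placeholders for whatever lives in your templates.yml; apply_fields() and as_template_fields() are the real calls:

    import sgtk

    # e.g. in templates.yml (illustrative only):
    #   maya_shot_work: '@shot_root/work/maya/{name}.v{version}.ma'

    engine = sgtk.platform.current_engine()
    template = engine.sgtk.templates["maya_shot_work"]

    fields = engine.context.as_template_fields(template)
    fields["name"] = "layout"
    fields["version"] = 5

    print(template.apply_fields(fields))
    # -> .../ABC/ABC_010/layout/work/maya/layout.v005.ma (for example)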

Not having to manually keep track of which version number to tack onto the file is a nice convenience, but I do wish Toolkit had a more robust concept of versioning. Right now, the user can manually create as many versions of a Maya file as they want, which is great, but the version is isolated to that single file. My preference would be to have the version be a part of the work area context itself, and to have the state of the work area, including my main work file, any associated files I’m using, and all my upstream references, versioned together. In simple terms, I’d like to be able to get back to the state of my entire work area at any time on production. I’m getting a little ahead of myself, though. I want to discuss this more in a future post, but in the meantime, definitely let us know how you handle versioning at your facility.

The Save dialog can be expanded as well:

Expanded File Save dialog

You’ll notice the similarities with, and reuse of, some of the UI elements from the Loader. The expanded view allows you to browse and save within a different context.

As mentioned, the workfiles app is also used for opening working files:

File Open dialog

As with the other views, you can browse to the context you’re interested in and find work files to open. I think the interface looks clean, but I still find myself wanting the ability to do more sophisticated searching and filtering across contexts. What do you all think?


The Snapshot app is a quick way to make personal backups of your working file.

Snapshot dialog

Artists can type a quick note, take a screenshot, and save the file into a snapshots area. The app also provides a way to browse old snapshots and restore them. It’s a simple tool to use and a nice feature to provide artists. I’d actually like to have comments and thumbnails attached to artists’ working files as well. I wonder if this functionality shouldn’t just be part of the Workfiles app’s Save/Open. Thoughts?
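
For the sake of discussion, the core of the idea is tiny. This is not the actual Snapshot app, just a back-of-the-napkin version of it: copy the current work file to a timestamped path and keep the comment in a sidecar file, with the paths and naming being our own assumptions:

    import os
    import shutil
    import datetime
    import maya.cmds as cmds

    def snapshot(comment=""):
        scene = cmds.file(query=True, sceneName=True)
        stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
        name, ext = os.path.splitext(os.path.basename(scene))

        snap_dir = os.path.join(os.path.dirname(scene), "snapshots")
        if not os.path.exists(snap_dir):
            os.makedirs(snap_dir)

        snap_path = os.path.join(snap_dir, "%s.%s%s" % (name, stamp, ext))
        shutil.copy2(scene, snap_path)

        # Keep the artist's note alongside the snapshot.
        with open(snap_path + ".comment", "w") as note:
            note.write(comment)
        return snap_path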

It would also be nice to have a setting and hook that allowed for auto-saving, or auto-snapshotting, the files. There could be a setting that limits the number of saves to keep, too. I realize this type of functionality exists in many DCCs already, but having a consistent interface for configuring and managing auto-saves across all pipeline steps and DCCs would be great.


Publishing and the Publisher app have come up quite a bit in previous posts, so we don’t need to go into too much detail here. We’ve shown some of the customizations we’ve made to the configs and hooks in Maya. Here are the secondary exports we’ve mentioned in action:

Custom secondary publishes

Updating the Workfile

The Scene Breakdown app displays and updates out-of-date references and is used by all steps with upstream inputs. One hook customizes how referenced publishes are discovered in the file and which of them are flagged as out of date; another customizes how the references are actually updated. This makes the app, like all the Toolkit apps, very flexible and able to adapt to changes in pipeline requirements.

Breakdown of references in the file
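
To make those hook points a little more concrete, here is a pared-down sketch of what the scene-scanning side can look like in Maya. The scan_scene/update split mirrors the shipped hook, but the bodies here are illustrative only:

    import sgtk
    import maya.cmds as cmds

    HookBaseClass = sgtk.get_hook_baseclass()

    class BreakdownSceneOperations(HookBaseClass):

        def scan_scene(self):
            """Return the referenced items the Breakdown UI should track."""
            items = []
            for ref_node in cmds.ls(type="reference"):
                path = cmds.referenceQuery(
                    ref_node, filename=True, withoutCopyNumber=True
                )
                items.append({"node": ref_node, "type": "reference", "path": path})
            return items

        def update(self, items):
            """Repoint each selected reference at the newer path chosen in the UI."""
            for item in items:
                cmds.file(item["path"], loadReference=item["node"])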

The interface itself is fairly simple and easy to understand. I like being able to filter for what is out of date and bulk-update them. I do think there’s room for a discussion about combining the Breakdown and Loader apps into a single interface where you can see what you have in your file, what’s out-of-date, and what new things are available to reference. I’d also like to have the ability to lock out-of-date references if I know I don’t ever want to update them. This might be useful when a shot is close to final and you don’t want to risk the update.

One of the things we’ve teased is our upcoming discussion about a Subscription system to complement Publishing. We’ll be talking about what it would mean to keep track of subscriptions at the context level and to have a Breakdown-like interface that allows you to manage inputs across DCCs. I won’t go into more detail right now, but definitely check back in for that post.


Hey everyone, Jeff here! I’m going to give you guys a high-level view of how data flows through our pipeline, plus some ideas about what it would look like if we added a couple more stages of production that are likely to exist in your studio. There isn’t anything here that we haven’t talked about in detail in previous posts, but it does show everything together from 10,000 feet, so to speak.

What We Have

Let’s take a look at what Josh and I have up and running in our pipeline.

Pretty simple, right? As we discussed in the first post of this series, we’ve limited ourselves to the bare minimum number of pipeline steps for a fully-functioning pipeline. Something else that’s important to discuss is that we’ve focused on limiting the types of data we’re making use of. From the above diagram you can see that our outputs are limited to Alembic caches, Maya scene files, and image sequences. By utilizing Alembic in all places where it’s suitable, we cover much of the pipeline’s data flow requirements with a single data type. This is great from a developer’s point of view, because it means we’re able to reuse quite a bit of code when it comes to importing that data into other DCC applications. In the end, the only places in the pipeline where we’re required to publish DCC-specific data are for our shader networks and rigs. These components are tightly bound to Maya, and as such need to remain in that format as they flow down the pipeline.
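
One way to picture that code reuse: with so few published data types, the "how do I bring this into Maya" logic collapses into a small dispatch table. The type names below match our pipeline's conventions and are assumptions on your end:

    import maya.cmds as cmds

    def import_alembic(path):
        # Requires the AbcImport plugin to be loaded.
        cmds.AbcImport(path, mode="import")

    def reference_maya_file(path):
        cmds.file(path, reference=True)

    IMPORTERS = {
        "Alembic Cache": import_alembic,
        "Maya Scene": reference_maya_file,
    }

    def load_publish(sg_publish_data, path):
        """Route a publish to the right importer based on its published file type."""
        file_type = sg_publish_data["published_file_type"]["name"]
        IMPORTERS[file_type](path)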

What Could Be

If we expand our pipeline out to cover live-action plates, which are obviously a requirement in any live-action visual effects pipeline, we add a bit of complexity.

You’ll notice, though, that we have not added any additional TYPES of published data. We have more Alembic, which will contain the tracked camera plus any tracking geometry that needs to flow into the Layout step of the pipeline, plus the image sequences themselves that comprise the plates for the shot. In the end, we’ve added very little complexity to the platform underneath the pipeline in order to support the additional workflow elements.

We can expand this further by adding in an FX step.

I will fully admit that this is an optimistic amount of simplicity when it comes to FX output data. It’s entirely possible that the list of output file types could expand beyond what is shown here, as simulation data, complex holdouts, and any number of other FX-centric data could come into play. However, the basics are still covered by everything we’ve already supported: final elements rendered and passed to a compositor, or cached geometry sent to lighting for rendering.


That’s it for this week! At the end of the series we’ll be putting together a tutorial that shows you how to get our simple pipeline up and running. So if you still have questions about how things work, that should help fill in the blanks.

Like we always say, please let us know what you think! We love the feedback and are ready to learn from you all, the veterans, what it’s like to build Toolkit workflows on real productions. The more we hear from you, the more we learn and the more prepared we’ll be for supporting you down the road.

Next week we’re planning on diving into the realm of Toolkit metaphysics - or at least more abstract ideas about pipeline. If you have strong opinions or philosophies about how production pipelines should work, we’ll look for you in the comments! Have a great week everyone!

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.

