Two Guys and a Toolkit - Week 4: Grouping


Welcome back for part four of our series dedicated to building a simple pipeline using Toolkit.

Last week, we talked a lot about publishing and how various types of data flow down the pipe in our simple pipeline. This laid the foundation for future discussions about new features that do not come out of the box with Toolkit. We will revisit the topic of publishing in future posts when we talk about larger potential features. We also plan to devote some time to discussing the more theoretical and philosophical aspects of publishing and digital asset management. As we move into that realm it will be important for us to have an open dialog with all of you. These topics will be less about how Josh and I did something and more about what could be done, and what approaches to asset management and publishing work best in what situations. It would be great if everyone could start thinking about this now, and if you have some thoughts on how we should approach this or what points we should be sure to hit, please let us know!

This week, I’ve been working on a grouping mechanism for published files and have a working proof of concept implementation that I will share. We’ll discuss the bits and pieces, but also the potential uses for such a feature.

Below are a few of the pages that we found useful for this week’s post:

All About Fields
Query Fields
Publishing and Sharing your Work
Load Published Files
App and Engine Config Reference: Hooks

Grouping Published Files

A simple explanation of a group of published files would be this: a collection of multiple PublishedFile entities represented by a single item in Shotgun. From a user's perspective, using the loader app to reference or import the group results in ALL of the group's contents being referenced or imported.

The grouping of published files is not a concept that is native to Shotgun, but it is something that can be added. It requires a number of small changes, and a truly flexible implementation would take a good bit of thought and additional development beyond what I’ve put into the proof of concept that we will be discussing.

Why Group Publishes?

We should talk about why we would want to do this. What are the advantages of being able to group published files together?

What problems we can solve with this depends on what area of the pipeline we are looking at using it in. Below are a few example use cases, but I’m sure all of you can come up with many more. If you’re using something similar in your studio, or even if you have an idea of how it could be used that I’ve not covered, let us know!

Rendered Elements:

Publishing rendered elements from lighting to be used by a compositor can often produce a large number of published files. Grouping these published files before they flow down the pipe can allow the tools to logically structure these elements in a way that informs other code, like the routine that imports a published image sequence into Nuke, on how these elements fit together. Add to that the ability to store some sort of metadata file (or even a pre-built Nuke script?) as part of the group and it’s easy to see how quite a bit of information about the collection of elements can be gathered and sent along the way.

Another advantage to this sort of organization of published files is that we have a point in time when we know that a set of files are intended to be used together. We can group those compatible published elements so that when the compositor loads those into their session they know that they are getting everything, and that each element should be compatible with all of the others.

Another possibility would be to group in the camera(s) used to render the elements, along with any Alembic caches. This would help the compositor get everything they need into their Nuke script required for 3D compositing, and guarantee that they are using the same camera and geometry caches that the lighter or FX artist used to produce the rendered elements.

Look Development:

An Asset is made up of a number of components by the time it is ready for use in a shot. This typically includes a model, texture maps, shaders, and a rig. Each of these components comes together to make the logical whole of the Asset, and a change to one component often requires updates to one or more of the others. For example, when a model changes, if that requires the UV layout to also change, then the texture maps produced for the previous version of the model might need to be tweaked before they can be used on the new model. Similarly, topology changes might require the rigging team to adapt their work to the new model.

Given that at some point along the way we know what texture maps and rig pair properly with a specific version of the model, we could group those published files together. We would have a nice package of files that we know are meant to be used together.

Geometry Caches:

We discussed last week publishing multiple Alembic caches out of a scene file rather than a single cache containing the entire scene. One situation where that can be used to our advantage is when multiple resolutions of a mesh are exported and published from an Asset’s rig. It’s fairly common to build more than one resolution of geometry into a rig, as this allows background characters (or, even more importantly, crowds) to be made up of lower-density geometry than foreground, hero characters. It is also typical for these different geometry resolutions to be incorporated into a single rig, as it allows a rigging team to develop and maintain one rig rather than duplicating effort across multiple rigs for the same character.

This setup lends itself well to exporting an Alembic cache per resolution of the character. In this way, in Maya we would end up with a cache reference per resolution of Asset, and we could provide a tool that unloads/reloads those references when the user requests a specific resolution of the Asset to be used.

As for how grouping comes into play, the idea would be to bundle up all of the cache resolutions that were published for an Asset and provide a single group that has linked to it each of those caches as children. When a user loads the group into their scene, that group flattens out to a list of its component published files, and each one is referenced into the scene accordingly.
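The loader-side expansion described above can be sketched as a small helper. This is a rough illustration, not the loader app's actual API; `sg_children` is the custom field name I'm assuming holds the group's linked publishes:

```python
def expand_publishes(publishes):
    """Flatten a list of publishes, replacing any group with its children.

    Each publish is a Shotgun-style entity dict. "sg_children" is the
    assumed custom field holding a group's child PublishedFile entries;
    a publish with no children is treated as a normal, loadable file.
    """
    flat = []
    for publish in publishes:
        children = publish.get("sg_children") or []
        if children:
            # A group: load its children in place of the group itself.
            flat.extend(children)
        else:
            flat.append(publish)
    return flat
```

Each entry in the flattened list would then go through the normal per-type load action, such as creating a cache reference in Maya.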


The basic implementation of groups as I have built them is simple. I added a children field to the PublishedFile entity in Shotgun. This field is configured to take a list of other PublishedFile entities, which are then considered to be its children. It’s simple and flexible, and creating new fields is a piece of cake.

This led to some difficulty, however. I wanted to add a couple more fields related to child entities and figured I would be able to make that happen with query fields. I was mostly right about that, and was able to get one of the two working after some frustration. I wanted to create is_group and child_count fields. The former comes in handy if you want a quick yes/no answer on whether something is a group, and the latter is more useful in the Shotgun web interface, as it’s a visual indicator of how many children a group has. I wanted is_group to end up as a boolean field, but I was not able to figure out how to make that work as a query field. I’m not saying it isn’t possible, only that I got frustrated and moved on before figuring it out. As for child_count, I did get it to work, as you can see:

You can also see that it took me 36 publishes to get one that was completely correct.

I got it to work, but I honestly don’t know why or how. Below is what I did to make it work, but I couldn’t for the life of me describe why that gives me the correct behavior. I just tried stuff until it did what I wanted it to do, but the words in that query field configuration dialog make little or no sense to me. I’m sure there’s a logical structure there, but to me it is nearly inscrutable.

I don't know what this means.

Initially, I purposely did not speak with an expert about the hows and whys related to query fields. I figured I would take a crack at it the way that I normally would have when working for a studio and see how it went. Randomly flailing about until it works is a time-honored tradition of mine, but it’s obviously not the ideal way to have to learn something.

Since then, I’ve had the opportunity to take a second look at this, and also got some feedback from some of the team. The general consensus seems to be that it would be best to not use query fields for this sort of thing at all. Instead, it would be better to make them normal fields and have the publish routine populate them at the time the group is created. This is simple and also bypasses the limitations of query fields; you can’t filter or sort on them, and they’re not accessible via the Python API.
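Populating those fields at group-creation time is straightforward. Here is a minimal sketch of building the field payload that the publish routine would hand to the Shotgun API when registering the group; the field names (`sg_children`, `sg_is_group`, `sg_child_count`) are the custom fields assumed in this post, so adjust them to your own schema:

```python
def group_fields(child_publishes):
    """Build the Shotgun field payload for a group PublishedFile.

    child_publishes is a list of entity dicts (each with at least
    "type" and "id") for the publishes that should become children.
    The returned dict would be merged into the data passed to
    sg.create("PublishedFile", ...) or a follow-up sg.update() call.
    """
    return {
        "sg_children": child_publishes,
        "sg_is_group": bool(child_publishes),
        "sg_child_count": len(child_publishes),
    }
```

Because these are plain fields rather than query fields, they can be filtered and sorted on, and they come back through the Python API like any other field.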

Code and Configuration:

From a code and configuration standpoint, there were a few hoops to jump through. My first thought was to make use of the publish app’s post_publish hook to build the groups. It looked promising, because that hook is already given the list of secondary publish tasks, which is exactly the list of things that I want to group. There were problems, though, as the secondary tasks did not come with the accompanying published file records from Shotgun, and I didn’t want to have to go to the database and look that stuff up again when I knew that the data was already available in the secondary_publish hook. What I ended up doing was taking the data returned from the publish routine and shoving it into the secondary publish item. Since that item is a reference to the same dictionary that is passed to the post_publish hook, I was able to save myself a trip to the database. You can see that here and how I extracted and used it here.
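The trick relies on the item being the same dictionary object in both hooks, so a value stashed in one is visible in the other. A simplified sketch of the pattern (the `sg_publish_data` key is my assumed name, and the real hooks receive more arguments than shown):

```python
# In the secondary_publish hook: after registering the publish with
# Shotgun, attach the returned entity to the task's item dict. That
# same dict object is later handed to post_publish, so no second
# database query is needed.
def record_publish(item, sg_publish_data):
    item["sg_publish_data"] = sg_publish_data

# In the post_publish hook: collect whatever the earlier hook stored
# on each task's item, ready to be linked as children of the group.
def gather_publishes(tasks):
    return [
        task["item"]["sg_publish_data"]
        for task in tasks
        if "sg_publish_data" in task["item"]
    ]
```

The key point is that no copy is made between hooks; both are reading and writing the one dictionary per secondary item.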

There’s another problem, as well, which is that in the official release of the publish app, the post_publish hook does not receive the same arguments as the other publish hooks.

This is most unfortunate.

Since I was planning to publish something from this hook I needed more than I was given, which exposes a bit of an unfortunate circumstance. Hooks are intended to allow for customization without the need to take ownership of an entire app. This is fairly successful, but with this I ran into something that required that I take control of the app itself, because I needed to change what a hook receives.

What I’ve done is fork tk-multi-publish, which can be found here. The changes are minor, and all of it is simply to provide more bits of data to the post-publish hook. The specific commit for these changes can be found here.

A more flexible solution might be to use the parent application object to store a dictionary of data that can be shared between hooks. That would allow one hook to store something away that another executed later could make use of. We will be discussing this as a team very soon, as it’s a situation that comes up often and it would be good to have a general-purpose solution for it.
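That shared-dictionary idea can be sketched in a few lines. This is speculative, not an existing Toolkit facility; the attribute name is arbitrary, and `app` stands in for the `self.parent` application object available inside a hook:

```python
def shared_cache(app):
    """Return a dict stored on the app object, creating it on first use.

    Any hook executed by the same app instance sees the same dict, so
    one hook can leave data behind for a hook that runs later. The
    attribute name here is an arbitrary, assumed convention.
    """
    if not hasattr(app, "_hook_shared_data"):
        app._hook_shared_data = {}
    return app._hook_shared_data
```

Inside a hook, an earlier stage might do `shared_cache(self.parent)["publishes"] = results`, and a later hook would read the same key back, with no change to the hooks' argument lists.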

As for configuration changes, I had to update several small things. You’ll notice in this commit in my forked publish app that there are two new keys added to the app’s info.yml file; one specifies whether to group a secondary output type’s publishes, and the other specifies the name of that group should one be created. The rest of the configuration changes can be found here and are very straightforward.
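To make the shape of the change concrete, a secondary output definition with the two new keys might look something like the fragment below. The key names and values here are illustrative only; check the forked app's info.yml for the real schema:

```yaml
# Hypothetical secondary output entry in the publish app's settings,
# extended with the two grouping keys described above:
secondary_outputs:
  - name: alembic_cache
    tank_type: Alembic Cache
    group: true          # whether to group this output type's publishes
    group_name: caches   # name given to the group, should one be created
```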

I know that I could have avoided altering the app itself by performing the grouping operation in the secondary publish hook, because it would have had access to all of the data that I needed already. Had I been doing this as a TD at a studio that’s exactly what I would have done, but in this case it seemed like a good example of the limitations of how publish hooks work.

Manifests and Metadata:

You might have noticed that the screenshot earlier in this post showing my group in the Shotgun web interface lists the path as a JSON file.

If you did then you also probably noticed the template that was added to templates.yml. In my hacked-together implementation, all I’ve done is shove the children of the group into a JSON file and used that as the path for my group.
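Writing that manifest is about as simple as it sounds. A minimal sketch of what the proof of concept does, with an assumed layout (the real implementation resolves the path through a Toolkit template rather than taking it as an argument):

```python
import json


def write_manifest(path, group_name, child_publishes):
    """Write a minimal JSON manifest for a publish group.

    The layout is only what the proof of concept needed: the group's
    name plus enough of each child entity to find it again in Shotgun.
    """
    manifest = {
        "name": group_name,
        "children": [
            {"type": p["type"], "id": p["id"], "path": p.get("path")}
            for p in child_publishes
        ],
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=4)
    return manifest
```

The file's path then becomes the path of the group's PublishedFile entity, so anything that loads the group can open the manifest and discover its contents.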

This could just as easily contain nothing, or anything deemed useful. Josh also had the idea that, for groups of rendered elements, the path for the group could be a Nuke script. Imagine, for example, an FX artist pumping out complex elements from Houdini: they could do a slap comp of everything the way it should be put together, then publish a group of elements with the group itself containing the node network that properly pieces the elements together. This would allow for custom comp setups curated by the artist handing off the elements. The possibilities are vast, so use your imagination and then tell us about it!


That’s it for this week. I hope that we’ve given everyone some things to consider and talk about. As always, we would love to hear from anyone that has comments, whether they be public or private. In fact, a portion of next week’s post will be directly related to comments we’ve received, as Josh will be going into publishing rendered elements from Maya for use in Nuke. In addition, he will be outlining how we’ve been managing our custom code via the tk-framework-simple framework. We will also update everyone about some small tweaks to Toolkit that we will have made (or will be making) that have come out of this project. Our hope is to continue to find things that we can immediately put into use that will make life easier for everyone using Toolkit.

Next week marks the halfway point for this blog series. Our intention is to dive into larger, more discussion-heavy topics as we progress. Some of those potential topics I mentioned at the end of Week 3, but we are always open to ideas, so feel free to let us know things you would like to see in the future!

About Jeff & Josh

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.


Two Guys and a Toolkit - Week 3: Publishing

Hey gang, Jeff here. Before we get started on this week’s post, I want to hop up on my soap box and say a few words. The industry needs to be vocal. TDs and engineers need to learn from each other’s mistakes, and we need to share ideas. Our goal here is to continue talking to as many people as we can, and we think that everyone else should be trying to do the same. We got some great comments and questions, public and private, when we began this blog series, and we would like to hear from more people going forward. Please throw a comment in at the bottom of the post, or email us if you would prefer to ask questions privately.


Welcome back for part three of our series dedicated to building a simple pipeline using Toolkit.

Last week, Josh gave an overview of how he set up some aspects of our project’s configuration. He also went into some ideas that he has for the future of how configurations might be organized on disk and how we might be able to ease the process of altering and maintaining them throughout the lifecycle of a project. To my eyes, learning how configs work is the bulk of the learning curve associated with Toolkit. As such, it’s one of the most important aspects of the entire platform, and a place where small gains in clarity and ease of use could have a significant impact on the day-to-day lives of all TDs tasked with supporting a Toolkit-driven pipeline. We would very much like to hear what your take is, so please get in touch if you have complaints, ideas, or think that we should pursue something Josh presented.

This week, let’s talk about publishing, shall we? Below are links to some of the pages that I used to get my head around how all of this stuff works.

Publishing Alembic From Maya

Publishing and Sharing your Work

As Josh mentioned last week, here is our config repository. It’s still very much a work in progress, but as we go we will continue to refine it.

The Basics

The foundation for a publishing system is available right out of the box. This is a good thing. Passing a Maya scene file from one step of the pipeline to the others is as easy as opening up the Publish app and hitting a button. That’s really all I have to say about the basics of publishing, because going any farther moves us into territory that requires additional development and a lot of thought.

The Outputs

One of the first challenges that a Pipeline TD is faced with is the prospect of sending data from one DCC application to another. This is the case in our pipeline, as well, because we’re passing data from Maya to Nuke. In most studio environments, that list of DCCs is going to be much longer and the connections between them very complex. This raises an immediate problem concerning the out-of-the-box publishing system, which is that you’re publishing native scene formats from each DCC. Writing out a Maya ascii file is perfectly fine when passing data from one Maya scene to another, but what happens when a camera and/or geometry is needed in Nuke?

In comes Alembic, which most of you will be familiar with already. For those that are not, it is a DCC-independent geometry interchange framework. That’s a fancy way of saying that we can export objects from a scene file in a format that can be read in other applications that also support Alembic. The good news is that the Toolkit team has some documentation on how to get Alembic secondary publishes set up for Maya, and the information there can be easily applied to other DCC applications. My preference would be to have everything in place needed to support Alembic in a Toolkit-driven pipeline from a code standpoint, such that only configuration changes would be required to make use of it. Right now there is a bit of tinkering involved with getting it ready for use, and I think we can do better than that.

I’ve always been of the opinion that granularity in outputs is important. I want a camera to be published separate from a piece of geometry, and I want individual pieces of geometry instead of a single output containing an entire scene’s worth. Josh and I set out immediately to get our pipeline working this way, and on the geometry front we ran into some issues with the process described in the documentation.


There is logic in the scan_scene hook to separate out each mesh group that it finds in the scene, but all of that work is squandered by the secondary_publish hook, as it’s set up to write out the entire scene’s worth of geometry and disregards the work that scan_scene has done. In addition, the way that the template is set up for Alembic output, as described in the docs, results in all of the geometry in the scene being written to the same output file path for each item that is specified by scan_scene. Not only did we not get what we wanted as far as granularity was concerned, but we ended up making an inefficient publish process at the same time.
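The core of the fix is to give each scanned mesh group its own output path and its own export job, rather than one shared path for all items. A rough sketch of the idea, with a plain format string standing in for a Toolkit template that includes the item name as a field (the real hooks resolve paths through the template API and run the job through Maya's AbcExport):

```python
def alembic_jobs(mesh_groups, frame_start, frame_end, path_pattern):
    """Build one Alembic export job string per scanned mesh group.

    mesh_groups is a list of dicts with "name" and "dag_path" keys, as
    a scan_scene-style hook might produce. path_pattern is a format
    string with a {name} slot, standing in for a template whose fields
    include the item name, so each group lands in its own file.
    """
    jobs = []
    for grp in mesh_groups:
        out_path = path_pattern.format(name=grp["name"])
        jobs.append(
            "-frameRange %d %d -root %s -file %s"
            % (frame_start, frame_end, grp["dag_path"], out_path)
        )
    return jobs
```

With per-item paths in place, each secondary publish writes only its own mesh group, which is both the granularity we wanted and a far less wasteful export.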

The work required to sort this issue out and publish per-mesh Alembic files is relatively small, but not necessarily obvious. We are going to release a fix for this soon so everyone has access to it, but in the meantime you can find what I’ve done to get this up and running by looking at the following two hooks:




For cameras, Josh was quickly able to get individual cameras published as Maya ascii files. This was a good first implementation, as we wanted to be able to publish a camera from our Layout pipeline step for use throughout our pipeline. The implementation for that secondary publish can be found in the same two hooks listed above concerning geometry.

The next step for us will be to leave Maya ascii publishing for cameras behind and write that data out using Alembic. This will allow import of our cameras into Nuke as well as Maya, which is a necessity in almost all production pipelines around the world.


Josh put the necessary logic in place to publish out Maya shader networks as secondary publishes from surfacing. He also put together a quick and dirty system that allows the shaders to be reconnected to the Alembic geometry caches when referenced into lighting. By his own admission, the setup is fragile and probably error prone, but it’s good enough as a starting point for our simple pipeline. If you’d like to see the code for this portion of our pipeline, you can check out this commit. Strictly for curiosity’s sake, we’d love to hear from someone that has implemented a surfacing pipeline in Maya to learn how we might handle shaders more robustly.

Maya References

References are a very powerful tool for any Pipeline TD and can be leveraged for the good of everyone involved. They can also become quite a mess. I am not a fan of deeply-nested references, where the file you are referencing itself references another file, and so on down the line to the very beginning steps of the pipeline. What I want is a completely self-contained set of publish files, or as much so as is possible, coming out the back side of each step of the pipeline. There are many reasons why this is a good idea, some of which are outlined below.

1. In a multi-location workflow, data will be transferred from one physical studio to another. When those files are opened in the remote location, external file references that have not been transferred will not be available.
2. In a situation where cleanup is required to free up disk space, if a file is thought to be out of use and is removed, any other files referencing that file will no longer work properly. This is particularly the case in pipelines that track subscriptions to published files to know what is in use and what is not, and where that data is used to drive automated cleanup mechanisms.
3. I’ve personally experienced problems with Maya reference edits when dealing with deeply-nested references. I’m told that this is much better now than it used to be, but I hold an unhealthy number of grudges.
How I implemented this as part of the primary_publish hook’s process is very simple.

Using preserveReferences=False and exportAll, we can write to a temp file with flattened references.

Maya Namespaces

Like references, Maya’s namespaces are a very powerful tool that a Pipeline TD can make good use of. Also like references, they can become a giant mess very quickly. I’m of the opinion that Toolkit’s out-of-the-box behavior relating to Maya namespaces is incorrect. The way it is set up often results in an ever-deepening hierarchy of namespaces as data flows down the pipe. This means that where an object is, or should be, is a moving target. I dislike that. I want structure, and I want to know what things are called and where they’re going to live within a scene file.

What I’ve done is strip out all namespacing from the pipeline as a starting point. I want a clean slate to start from so that I can now build up the use of namespaces in the situations where I find them appropriate. One case where we do plan to make use of them is for asset “iterations” or “instances” or “whatever-you-call-multiple-copies-of-something-in-a-single-shot.” Seriously, though, what word does everyone use for this? At R&H they were called “instances” which is a problem because that has other meanings within computer graphics technology. At Blur they’re called “iterations” which sits better with me, but is something that I’m sure is not universally accepted around the industry.

If I have two tables in a shot, I need to be able to import two copies of our Table asset into the scene and they need to be able to coexist peacefully. Maya’s namespaces are ideally suited to making this possible, as it means we can maintain object names without having an object namespace collision. In this situation the namespaces add meaningful structure and allow us to bypass the need to rename objects. In the end we would end up with a structure something like the below example. Here we have two namespaces, Table_1 and Table_2, each of which contain a mesh named Table.

Two tables, coexisting peacefully in their own namespaces.
The changes necessary to clear out Maya namespaces to get yourself to a clean starting point can be found in the following hook.


What’s Next?

What I’ve given you here are just the basics of getting data flowing down the pipeline. This is a good starting point, but there are now some more-complex concepts that we are going to begin to explore in the next few weeks. Some of these we will have implementations roughed in for and others will be outlines of ideas that I hope we can use as discussion points. Josh and I have many ideas that we want to touch on, and we are now getting to the point in this series where we feel we have enough of a foundation laid out that we can start to get our hands dirty.

Topics that we will be discussing in the coming weeks will include the following.

1. Grouping published files. We want to be able to group together multiple published files into a single “group” publish. This will mean that PublishedFile entities in Shotgun will have the ability to carry links to a list of children. I’ve partially implemented this, but still have work to do in the coming week to make it presentable. This also leads to some challenges that I’ve had with query fields in Shotgun that we can discuss at the same time, as well as difficulties related to the publish app’s hook implementation.
2. Deprecation of published files. This is something that the Toolkit team’s intern, Jesse, already has up and running. The idea is that a user can mark a published file as no longer worthy of use. This would mean that users already using that file can continue to do so, but new users should steer clear. Jesse will be giving us an overview of how he’s implemented this feature next week.
3. Tracking of disk usage. We want to track more data in Shotgun about the published files themselves, and plan to cache the file size (and other bits of data) at publish time. This will allow us to build some views into the pipeline that are useful for support teams that are responsible for keeping the disks clean.
4. Subscription-based workflows. This is not something that we will have implemented before we talk about it. It is a major component of systems that Josh and I have developed and supported in years past, and it opens up a whole new world of possibilities if it’s implemented properly. We’ll discuss those implementation details and outline some of the benefits.


That’s it for this week. I’m hoping that the topics to come in the list above sound interesting. These first few weeks have been laying the foundation for a lot of really juicy topics going forward. What we really want to do is start a discussion. It’s great that we can convey our experiences building this pipeline, but the goal beyond that is to try and come to some common ground with everyone out there. Maybe what we do will act as inspiration for some of Shotgun’s customers, but equally important is for us to be inspired by your experiences, so that we can all learn from each other and build some really powerful tools for artists.


We're Hiring!
We're excited to be hiring and are on the lookout for some super awesome people to join our team. We search the globe for the smartest and friendliest people we can find (and we live by a strict no-asshole policy).

This is happening now, so if you’re interested in joining the team, check out what we’re looking for:

Senior Product Manager

We are on the search for an experienced and awesome Product Manager to lead a dynamic product and design team on our mission to build our next generation platform and updated suite of products and apps.

As a Product Manager on our team, you will work alongside Shotgun’s co-founders and product managers and develop the long term product strategy and short to long term roadmaps, communicating and validating both internally and externally. You will be a unifying force for the entire product team, mentoring and supporting each product manager while also implementing just the right amount of process and tools to keep the team united while moving quickly. Most of our team members work from home offices and are located in different parts of the globe. This role can be home or office-based.
And you’ll work closely with the other department leads, such as engineering, marketing and support, to continually improve the way we work together.
Ideally, you would have worked on a large cloud platform product, with hands-on experience both launching something new and scaling something really big. Desktop/native software experience is also a plus, since we do that too.

More info here

Projects & Programs Manager

We are looking for an experienced producer / project manager to join our senior team, driving a wide variety of cross team projects in an effort to help us function like a well-oiled, butt-kicking machine. We’ve gotten pretty big and have a lot of firepower, which we are determined to use in the best possible way, but we need your help to keep things moving forward.

You’ll work directly with all the team leads on projects or issues that cut across departments. Creating and managing schedules and budgets should be second nature to you, and you should be able to gently wrangle a (virtual) room full of smart and energetic people and help them clarify problems, solutions, or goals. Then push forward to help get things done.

You’ll know when to be gentle, and when to be tough. When to keep a meeting focused and when to let a team ramble a bit. When everyone is clear and when you need to clarify. You’ll be comfortable working with product managers, designers, engineers, marketing, support, etc., communicating at their level but without any issues asking for clarification when you don’t understand something being discussed. You’ll be focused on keeping a wide variety of plans on track but will understand when new information may very well warrant a course correction. You’ll have super-power intuition (even with distributed teams) to spot and address problems, and an instinct towards peace making and finding win/win solutions.

We're excited to bring these folks in to join the Shotgun team, so if you think you're a good fit for either of these positions, we'd love to talk!

the Shotgun Crew
Two Guys and a Toolkit - Week 2: Configuration


Hi everyone! Josh here. Welcome back for part two of our series dedicated to building a simple pipeline using Toolkit.

Last week we gave a quick introduction of the project and talked about the early phases of design and set up. This week I want to go a little deeper into the setup and management of a Toolkit project and talk about the configuration system. Since I’ve been responsible for the pipeline configuration for our little pipeline, this will be a brain dump of my thoughts and experiences regarding the configs. I’d love to find out what you all have done as far as config customizations go and how you manage them on a daily basis across multiple projects and/or locations.  We got some great feedback in the comments last week, so let’s keep the conversation going!

Here are the links I used to get up to speed with the configuration system and to better understand how it works within Toolkit:

Basic Configuration

Managing your Project Configuration

Also, if you are interested in following along on the development side, you can find our simple pipeline config repository here. We don’t recommend anyone actually use our config yet, but we’ll be referencing it throughout the series.


As we mentioned last week, getting the initial configuration installed and working for the pipeline was relatively painless. Using Shotgun Desktop was very convenient and we were able to launch DCCs in no time.

The next step was actually configuring the pipeline to behave the way we wanted. After reading the docs, I decided to just dive into the config directory structure, take a look at all the files, and try to get the “lay of the land,” so to speak.

In the core directory I found the templates.yml file and the schema directory tree, which are used to map Shotgun entities into the desired file-system structure. There was a lot to take in as far as the documentation for this portion of the config. To keep things simple for us, I cleaned out some of the references to DCCs we wouldn’t be using and left the rest of the structure largely unchanged.
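For anyone who hasn’t dug into it yet, templates.yml pairs a set of typed keys with path templates built from them. Here’s a minimal, hypothetical excerpt to show the shape of the file (these are illustrative entries, not the default config’s actual ones):

```yaml
keys:
    Shot:
        type: str
    Step:
        type: str
    name:
        type: str
    version:
        type: int
        format_spec: '03'

paths:
    # resolves to e.g. shots/sh010/anim/work/maya/scene.v003.ma
    maya_shot_work: 'shots/{Shot}/{Step}/work/maya/{name}.v{version}.ma'
```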

The one thing I did add in the schema hierarchy was deferred folder creation files for the work area DCC directories. The default config does not include these, so when you create the directories for an asset or shot on disk, you get many more directories than you likely need. Adding these little configuration files defers creation of the DCC directories until the engine is launched within that context. My preference would be to have this as the default behavior, but I’m curious what you guys think.
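For anyone curious, the deferred creation file itself is tiny. If the schema contains a work-area folder named maya, a sibling maya.yml along these lines tells Toolkit to hold off on creating it until the Maya engine starts in that context (double-check the exact keys against the schema docs for your core version):

```yaml
# schema/.../work/maya.yml -- defer creating this folder until tk-maya runs
type: 'static'
defer_creation: 'tk-maya'
```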

Deferred folder creation file

The other big chunk of the configuration structure is the env directory, which houses the various environment configs that drive what engines and apps are available for a given context. A core hook called pick_environment is used by Toolkit to map a context to one of these files (with the exception of the shotgun_*.yml files which are mapped by convention from within Shotgun).
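The hook itself just returns the name of the environment to use. As a self-contained illustration of the kind of logic it performs, here is a simplified, standalone sketch; the FakeContext class is a stand-in for Toolkit’s real Context object, for example purposes only:

```python
# Standalone sketch of the logic a pick_environment-style hook performs:
# map a context to the name of an environment file in the env directory.

class FakeContext:
    """Hypothetical stand-in for Toolkit's Context, for illustration only."""
    def __init__(self, entity=None, step=None):
        self.entity = entity  # e.g. {"type": "Shot", "id": 12}
        self.step = step      # e.g. {"type": "Step", "id": 3}

def pick_environment(context):
    """Return the environment name (minus '.yml') for a given context."""
    if context.entity is None:
        # no entity at all -> project-level environment
        return "project"
    if context.step is not None:
        # e.g. a Shot with a Step -> env/shot_step.yml
        return "%s_step" % context.entity["type"].lower()
    # an entity but no step -> e.g. env/shot.yml
    return context.entity["type"].lower()

print(pick_environment(FakeContext(entity={"type": "Shot"}, step={"type": "Step"})))
# -> shot_step
```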

One of the things I’ve found really convenient about working with the environment configurations is the location setting available on each of the engine, app, and framework bundles. The location setting makes it really easy to pull the bundle from a local directory, a github repo, or the Toolkit app store. This makes development and testing changes to an environment configuration very convenient.
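To make that concrete, here is a hedged sketch of the three styles of location descriptor (the bundle names, versions, and paths below are examples, not the ones from our config):

```yaml
tk-multi-loader2:
  # pulled from the Toolkit app store at a pinned version:
  location: {type: app_store, name: tk-multi-loader2, version: v1.8.2}

  # ...or pulled from a git repository at a specific tag:
  # location: {type: git, path: 'git@github.com:example/tk-multi-loader2.git', version: v1.8.2}

  # ...or pulled from a local directory while developing:
  # location: {type: dev, path: '/home/josh/dev/tk-multi-loader2'}
```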

After browsing through these environment files some more, the thing that stuck out the most was the amount of repetition in the settings. Take a look at the asset_step.yml and shot_step.yml files from the default config, for example.

Repetition in the config files

You’ll notice quite a bit of repetition in the engine configurations of each file. You’ll also see the tk-multi-* apps being used by several engines in each file, again with a very repetitive list of settings. After looking through these, I had some thoughts about how to make managing these files a better experience.

Inheritance Experiment

My first thought was that I’d much prefer an inheritance scheme where I could define common settings in one file that I could include into an asset or shot environment. From there I could override the default settings with only the settings that differed for a given context. The config system already has an include mechanism, but it updates at the key level of the config dictionary rather than doing a merge with value-level overrides. So I decided to hack into the Toolkit core (which sounds really dangerous and awesome) and see if I could get the includes to do what I wanted. After fiddling around with it for a bit, I was able to get it working. You can see the changes in this branch of the tk-core github repo.

I didn't take the time to map the entire default config into a new organizational structure, but I did run some tests on a simple inheritance-based config, and it seems to work as expected. The changes allow you to define an engine and its apps with default values in one file, and then include that into an environment config that only overrides the settings that are unique to that context.

Includes overrides

You can browse these modified environment configs in this branch of the tk-config-simple repository.
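To give a feel for the experiment, here is a hedged sketch of what an override-style environment could look like under the modified include behavior (the file names and settings are illustrative, not the actual contents of the repo):

```yaml
# includes/engine_defaults.yml -- common settings defined once, e.g.:
#
# engines:
#   tk-maya:
#     location: {type: app_store, name: tk-maya, version: v0.4.1}
#     apps:
#       tk-multi-workfiles:
#         template_work: maya_asset_work

# shot_step.yml -- pulls in the defaults, then overrides only what differs:
includes: ['./includes/engine_defaults.yml']

engines:
  tk-maya:
    apps:
      tk-multi-workfiles:
        # with the experimental value-level merge, only this key is
        # replaced; everything else comes from engine_defaults.yml
        template_work: maya_shot_work
```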

I think this type of organization could make environment configs easier to manage. For example, when you want a specialized environment for one sequence on your project or for an animation step on one shot, you wouldn’t need to duplicate an entire config.

I should also note that I haven’t fully tested these ideas, and I’m not exactly sure how badly this would break the tank command. It was an interesting experiment, and it helped me learn more about how includes work and what goes on inside of core. I’m curious if any of you have done similar experiments or if you have an environment config setup that works well for you.

Management UI and API

Having looked through the configs, and played around with their organization, it seems to me that the best long-term solution would be to not have to manage them by hand or via the tank command at all. It seems what Toolkit really needs is a slick configuration management interface. It would be neat to have features like drag and drop installation of engines/apps/frameworks from the app store, push-button configuration deployment across multiple projects or locations, and visual indications when apps/engines/frameworks are out of date, just to name a few.

I’d also like to think about an environment configuration API that wasn’t file-centric. From a conceptual standpoint, how the data is stored under-the-hood shouldn’t really matter and we should be interacting with an API rather than files on disk. I’m guessing that if there was such an API, and perhaps some modifications to the pick_environment hook to accept an environment object, some of you might build your own slick UIs or come up with clever ways to organize and store your environment configurations. Then again, maybe you’ve already done that. If so, I’d love to hear about it!
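To be clear about what I’m imagining, here is a purely hypothetical sketch; nothing like this exists in Toolkit today, and every name below is invented:

```python
# Hypothetical sketch of a non-file-centric environment API: environments
# are built and queried as objects, regardless of how they are stored.

class Environment:
    """An environment as an object rather than a path to a .yml file."""

    def __init__(self, name):
        self.name = name
        self._engines = {}

    def add_engine(self, engine_name, settings=None):
        """Register an engine with its settings; returns self for chaining."""
        self._engines[engine_name] = dict(settings or {})
        return self

    def engine_settings(self, engine_name):
        """Return the settings dict for a registered engine."""
        return self._engines[engine_name]

# a pick_environment-style hook could then build and return an object
# instead of a file name:
env = Environment("shot_step").add_engine(
    "tk-maya", {"apps": ["tk-multi-workfiles", "tk-multi-publish"]}
)
print(env.engine_settings("tk-maya")["apps"][0])  # -> tk-multi-workfiles
```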

Taking things a bit further, in the long term I’d really like it if we could all share the bundled configurations (and other pipeline-related tools, scripts, widgets, gizmos, etc) that we’re building in a pipeline marketplace. Shotgun has the Toolkit App Store, and Autodesk owns Creative Market; what if we could build a community around pipeline-related content? Imagine working in Maya and being able to install a toolkit app, try it out, comment on it, rate it, or even suggest improvements for it, all within a single session!


This seems like a good place to wrap up this week’s post. It was fun getting the config into a semi-working state with the stripped down engines and apps, and the deferred directory creation. Tinkering with the Toolkit core was also an interesting experiment that we may end up fully fleshing out in the coming weeks.

Right now, I’m still down in the guts of the config system, making changes and trying to learn how some of the more subtle bits and pieces work. Feel free to watch the tk-config-simple repo if you want to follow along with the development over the coming weeks.

I hope this week’s post gave you some insight or perhaps sparked an idea in your mind about how we might move forward. I’m excited to hear your thoughts on and experiences with Toolkit configurations and to learn how you manage them at your facility.

I know last week we teased our studio code integration, but that’ll have to wait for a future post. Next week, Jeff will be giving you the low down on his experiences with file publishing and how data flows down the pipe from one step to another.

Updates from Last Week

Hey gang, this is Jeff. Last week’s post elicited a handful of comments in the blog post itself, but also a bit of feedback in support ticket conversations that revolved around some of the same things that we are trying to achieve with this project.

In last week’s post, I think we could have provided a little more detail in the section concerning cloning a pipeline configuration to create a development environment. It is a very useful feature and one that we are using daily, but there is some information that we failed to mention. After cloning the primary pipeline configuration, things should be up and running. However, when I set this up for myself, I immediately removed the cloned config directory and created a config symlink to a clone of our tk-config-simple repo in my development area. This mostly worked, but there were autogenerated files, created at project setup, that were part of the project’s primary config. These files had to be copied across from the primary config into the repo that I had linked to, and we added them to our .gitignore file since they shouldn’t be kept under revision control. This step is not made obvious by the documentation, so it could be a bit of a gotcha for new users.

Below is the list of ignored files that I copied across for reference. Hopefully it will help some of you get past that gotcha with a minimum amount of frustration.

Our .gitignore which includes the auto-generated files from the primary config

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.


Thank you to our clients, your work inspires ours!
Shotgun is excited to support so many amazing studios and we will continue to build tools to help you bring the world inspiring work.

Here's some of the awesome work our clients have done this year using Shotgun in film, TV, and games.

Two Guys and a Toolkit - Week 1: Introduction, Planning, & Toolkit Setup


Hi everyone! We are Jeff and Josh, the new guys on the Toolkit team, and we’d like to welcome you to part one of a ten part, weekly series where we discuss our experiences building a simple pipeline using Toolkit. Each week we’re going to walk you through our thought process and how we approach some of the common aspects of building a pipeline using the framework. We will tell you what bits of Toolkit worked well for us, what roadblocks we hit, and how we think things might be improved.

One thing we recognize is that many of you have been down this road before. You’ve built a pipeline with Toolkit, and you’ve come up with some really cool solutions at your studio. That’s why we don’t want these posts to just be a lot of us telling you what we’re doing. We want to know how and why you built your pipeline the way you did. Any input you (the Toolkit veterans) can provide will be tremendous insight for us. We'll do our absolute best to learn from what you all have to say in the comments each week and apply what we hear to this little pipeline we're putting together. We’re brand new to Toolkit, so tell us what we’re doing wrong. The more we know about how you all work, the kinds of solutions you’ve built, and some of the Toolkit best practices, the better we’ll be able to support you in the long run as members of the Toolkit team.

On the flip side, if you’re someone who is considering using Toolkit in your facility and you have questions about how certain pieces fit together, let us know that as well. If we can help answer those questions or address any concerns you may have, we’d love to do so. There’s a good chance that other readers have been, or are, in a similar situation, so please be vocal in the comments and we can get a conversation going.

We do expect that if you’re reading this you have some experience with Shotgun, or at least a general understanding of the role it serves in production. Throughout these posts we’ll do our best to link to pre-existing Shotgun or Toolkit documentation that we’ve used or that we think might be relevant to the discussion. Along those lines, here are some links that we found useful as we were getting up to speed.

Shotgun Support

Getting Started with Shotgun

Shotgun User Guide

Shotgun Community Forums

Toolkit Support

Toolkit Overview

Toolkit Example Videos

Other technologies we’ll be talking about and using during this series include Python, Github, Maya, Nuke, and Alembic.

Still with us? Good! Let’s get to work!


The first thing on our to-do list was to map out how we wanted our pipeline to work. We both came from the visual effects side of production and have had success in the past building subscription-based, pull pipelines. That kind of workflow requires features that are outside of the scope of Toolkit as it exists out of the box, but we would like to explore some of those ideas in the coming weeks. With that in mind, our initial goal has been to implement the broad strokes of our pipeline in such a way that we have a solid foundation upon which to build more complex features in the coming weeks.

We also wanted to minimize the number of content-creation packages we would be dealing with. So far we’re using Maya for everything except for compositing where we’re using Nuke. The reason for minimizing the number of DCCs is that we wanted to put the focus on learning and exploring Toolkit, not the content-creation software. We realize this isn’t a luxury many of you have in the real world, but the focus here is on Pipeline. And while the concepts and features we implement will often be DCC-specific, they should translate well, from a design standpoint, to other software packages.

A very simple representation of the pipeline 

We decided to stick pretty closely to the default Shotgun configuration for assets and shots. On the asset side we wanted to pull the model into rigging (if needed) and surfacing. On the shot side, we wanted to pull either the model (prop) or rig into layout, which would then be pulled into animation. Lighting would pull an Alembic cache from animation and it would then render out frames to be pulled in by comp. Our shot camera would be generated in layout and pulled by each of animation, lighting, and compositing. Surfacing was a bit of a question mark as neither of us are Maya experts. We decided to see if we could get shader networks pulled into lighting (not pictured) and attached to the Alembic caches as a first pass. Another requirement was that we needed to be able to generate reviewable material at each step in the pipeline.

Toolkit Setup

The first step in making this all work was to get Shotgun and Toolkit up and running. We got our Shotgun site set up and started customizing it based on our pipeline design. Jeff set up a couple of assets and a shot and once we had the tasks created for each, we were ready to get rolling with Toolkit.

It should be mentioned that we are in different physical locations, on opposite sides of the U.S. In the coming weeks we might formally explore how our pipeline would work with users in different locations, but for now we’re simulating a shared file system using an auto-synced folder via a free cloud-storage service. It’s not ideal, but suitable for our needs.

Getting Toolkit set up for our project was fairly straightforward. There is some excellent documentation for installing and running Shotgun Desktop which walks you through setting up your project’s Toolkit configuration.

One small issue we hit during the process was getting locked out of being able to define the primary storage location for our project.

Locked paths in the Setup Wizard 

What we didn’t realize was that these paths are shared across all projects and had already been set in the Shotgun site preferences. The docs state this clearly enough (we got a little ahead of ourselves), but this particular page of the setup wizard was confusing. We’ve created an internal ticket for this one to add a little more polish to the UI to clarify why all the paths are locked and to not suggest you can enter new paths.

We ended up creating a github repo (which we’ll be opening up to you guys in the coming weeks) based off of the default configuration and pointing Desktop to that during the install. We knew we were going to be making many changes, so being able to track those in a git repo was a must. We noticed that after Desktop finished with the setup process, there was a working clone of our repo in the newly created configuration directory. This seemed really nice! The next thing we did was go to the Pipeline Configurations page in Shotgun and clone the Primary configuration so that we could each have our own sandbox to play in. As a bonus, the cloned configuration directories also retained the necessary git repo structure, so we were ready to start independent development.

We noticed that the cloned configurations you have access to in Shotgun show up as additional items in the action menus.

Repeated configurations in the Action Menus 

It would be nice if you could customize the Shotgun action menus to some extent or just have cloned configurations show up as sub menus by default. We can see cases where some users, especially support folks, may clone configurations often which could make the menu really long. We created an internal ticket for this as well.

Next up was customizing the launchers for the content-creation software we’d make available to the users of our pipeline. This is typically managed within the pipeline configuration in these two files:

env/includes/paths.yml

env/includes/app_launchers.yml

In the paths.yml file we entered the paths to the software we’re making available in the pipeline and cleaned out anything we weren’t using. In the app_launchers.yml file we configured instances of the tk-multi-launchapp with additional launch settings and again cleaned out the configurations for software we weren’t planning on using.
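As a rough sketch (the paths and version numbers here are placeholders, not our actual values), the two files fit together something like this:

```yaml
# env/includes/paths.yml -- one place to define executable paths:
maya_linux: maya
maya_mac: '/Applications/Autodesk/maya2016/Maya.app'
nuke_mac: '/Applications/Nuke9.0v8/Nuke9.0v8.app'

# env/includes/app_launchers.yml -- launchapp instances that reference
# those paths via the '@' include syntax:
launch_maya:
  engine: tk-maya
  location: {type: app_store, name: tk-multi-launchapp, version: v0.6.7}
  linux_path: '@maya_linux'
  mac_path: '@maya_mac'
```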

One thing that might be a bit confusing or misleading to folks is the use of the word app. In Toolkit, there’s the concept of an app which is a functional bit of a workflow that runs within one or more engines. The word app is also being used to refer to the content-creation packages. The app_launchers.yml file drives the launching of the content-creation software, but houses various configuration blocks for the tk-multi-launchapp Toolkit app. There’s another file in the includes directory called common_apps.yml which defines common configurations of Toolkit apps. Long term it might be nice to be more consistent with the terminology. Perhaps it would make sense for the file to be named dcc_launchers.yml and for the Toolkit app to be named tk-multi-launchdcc. What do you guys think?

Next we found the toggle in Desktop that allows you to set the pipeline configuration you want to use when launching content creation software.

Set the configuration to use within Desktop 

This was really convenient, but it did make us realize that Desktop only allows launching the content creation software within a project context. It would be nice to be able to set an asset or shot context via Desktop and launch Maya or Nuke within that context; maybe even on a workfile displayed in Desktop. From our internal discussions, it sounds like the Toolkit team anticipated this as well; see all the extra space next to Apps at the top of the Desktop UI? There didn't seem to be a ticket for this internally, so we created one.


That’s it for this first week. Hopefully you’re able to get an idea of where we’re going with this series. This entry was more about the setup and preparation for building a pipeline, but we’ll certainly be getting more into the guts of Toolkit in the coming weeks. Once again, if there’s anything you want us to cover or you have questions or thoughts on what we’re doing, please let us know in the comments.

Next week we’re going to talk quite a bit about the configuration system and how we think it might be improved. We’ll look at how we integrated our “studio” code with Toolkit and how that could be bundled and referenced with pipeline configurations.

We hope you all have a fantastic week!

- Jeff & Josh


Now Available- Shotgun Panel & Shotgun 6.2
We’re excited to announce that the Shotgun Panel is now available to all users of the Shotgun Toolkit! Now, artists can access all key Shotgun information and collaborate with other artists right from inside Maya, Nuke and other creative applications. We’ve also rolled out Shotgun 6.2 to all Shotgun users, making Shotgun more secure than ever before.

The new Shotgun Panel gives artists a simple mini-Shotgun right inside their most commonly used applications. The Shotgun panel is embedded directly in Maya and Nuke, and acts as a floating window with any other supported applications like Flame and Houdini. Artists can quickly view information from Shotgun relevant to the task they’re working on, have instant access to the Shotgun activity stream, notes, tasks, versions, publishes, and can play back versions sent to review. The UI is native with built-in caching, making it super fast, and just like all other toolkit apps, it is customizable to support each studio’s specific workflows. Read the full release notes here.

Shotgun 6.2
Also now available is Shotgun 6.2, which focuses on backend security improvements. We are modernizing our infrastructure to provide increased security for our customers. Read the full release notes here.


The start of a series of posts on pipeline
For a while now we've been helping studios put together pipelines with our Pipeline Toolkit, and it's been great seeing how these tools can really help organize and automate a lot of the work that you do. Unfortunately, it has been hard to bottle up that experience and help everybody new to Toolkit get past its currently steep learning curve. We are working on making Toolkit easier to use, and we have just hired two new pipeline developers to help us with that.

Jeff Beeland
Jeff and Josh have both just joined the Toolkit team and while they really know how to put together a production pipeline, neither one really knows how to use Toolkit. So we thought what better way to get them up to speed than to ask them to do the same thing we ask all of you to do: put together a pipeline with Toolkit. We figured that would let them learn the system, what it can do, and where it falls short for people trying to use it.

Josh Tomlinson
For the next 10 weeks, Jeff and Josh will be building their pipeline and will write weekly posts on our blog sharing the progress they are making, their trials and tribulations, and their impressions of (and frustrations with) Toolkit. We old hands on the Toolkit team are here to help if asked, but this will be the Josh and Jeff show. We've asked them to be upfront and honest with their experiences, so that we learn from them what ends up being truly difficult. Anybody reading this series of posts will see what they go through, warts and all.

Additionally, to try to make the most of what they learn, we plan on turning what they create (hopefully a simple, yet functional, end-to-end pipeline) into a tutorial that new users of Toolkit can follow along with to learn the system and put their basic workflows together.

I hope you enjoy their adventures!

We're Coming to IBC!
We’re heading to IBC this year and will be showing our newest set of tools and features including the new Shotgun Panel and the upcoming Shotgun 6.3. We’ll also be highlighting the Shotgun/Flame workflow which allows artists to spend more time creating and less time on logistics.

The Shotgun Panel is a simple mini-Shotgun UI directly inside of creative tools like Maya, Flame and Nuke that allows artists to communicate directly with other artists, and see only the information relevant to their tasks from inside of their creative tools. The Panel works as a floating window inside of Flame, allowing artists to have access to all of their Shotgun project information without having to go out to a web browser.

Shotgun Panel inside Flame

The Shotgun Panel will be available to all users of Shotgun Toolkit next week! So stay tuned for more details.

Here's where you can find us:

AJA booth (7.F11)

Our own Ken LaRue will be guest demoing at the AJA booth throughout the show. Come by to see the Shotgun Panel in action and how it works inside creative tools like Maya and Flame.

Friday, September 11, 10:30am-6pm
Saturday, September 12, 9:30am-6pm
Sunday, September 13, 9:30am-6pm
Monday, September 14, 9:30am-6pm
Tuesday, September 15, 9:30am-6pm

Want to Meet up?

Want to meet up with us at the show? Email us at to schedule a demo.

Looking forward to seeing you there!

Get to Know... OMSTUDIOS
We recently chatted with Timor Kardum, a partner, VFX supervisor, and director at OMSTUDIOS in Berlin, Germany who was also recently recognized by Shotgun with a Pipeline Shotty Award for developing one of the top tools of 2015. Timor shared some insight on the innovative ways that OMSTUDIOS uses Shotgun, including custom integration with After Effects for their artists, and even using Shotgun as an easily accessible freelance artist database.

Tell us about your company and the type of projects you work on.
OMSTUDIOS is a media production company based in Berlin with four departments, each led by a partner: film, design, VFX, and post production. All partners are part of the daily operation as director of photography, creative director, visual effects supervisor, and post production supervisor. As part of the film department we also have a drone unit that uses our OMCOPTER – the first drone in the world to fly with a RED Epic. This was used to shoot plates for Fast/Furious 6, Monuments Men, Hercules and many more. We work on many different types of projects: VFX for commercials, corporate films, and feature films, and a lot of massive motion design projects. The bulk of work is in the motion design projects, which often have crazy resolutions, frame rates, and other special requirements (live projection mapping, etc.).

Is your team working in multiple locations? Who is using Shotgun?
We all operate out of our office in Berlin, Germany. 30-40 people there currently use Shotgun. Overseeing the artists, we currently have six producers, three art directors, one creative director, and one VFX supervisor (me). If there is a car show in some place like Singapore, we even go there with a little team of artists with a mobile render farm and give support on-site.

What content creation tools do you use in-house?
We have two departments that use different tools: motion design is entirely After Effects and Cinema4D; VFX is primarily Nuke and Maya. We edit on Avid and our grading suite runs DaVinci Resolve.

Do you develop proprietary tools? If so which one are you most proud of?
I am definitely most proud of the After Effects integration, as this is something that I have not seen anywhere else – and I have seen what happens if you do not have a pipeline in place but a lot of creative forces creating a lot of creative files at a lot of creative places on the server. Receiving the Shotgun Pipeline Award for this was hugely satisfying, believe me!

Why is it important to pay such close attention to your pipeline?
I spend a great deal of time thinking about how NOT to annoy the artists with the pipeline. I want to give them a system that enables them to get going with the fewest clicks possible. I always picture the artists like painters that should not have to search for their canvas and tools, but their tools should rather be there and ready when they need them. This helps them to focus on the creative stuff and reduces frustration. I try to spend at least a couple of minutes every day thinking about how to improve the pipeline. I would love to spend more time and resources on it; unfortunately the budgets and timing often do not allow this.

Why has your company been so successful?
I think the rare combination of a film production house with a full blown post, design and VFX branch is a powerful combination, especially in the days of complex technical projects like 360° VR, high frame rates, drone shoots, etc. We have the expertise to plan a shoot based on the requirements for post, something that most film productions have to look for externally.

What’s a day in the life like for you?
Depending on the project I mostly start with the dailies, as I want to give the artists feedback as quickly as possible. After this I usually start answering all relevant emails and then do the tour through the office, checking to see if all is going well. If we have a shoot going on, it’s all quite different of course, that’s the beauty of the job!

What are the three most important things in your office?
1. A great atmosphere
2. Good communication (Shotgun helps a lot here)
3. The Italian coffee machine

What do you do to stay connected to the artist community?
We have the luxury of having a lot of different freelance artists coming and going on every project, so staying in touch is easy. We also use Shotgun as a freelancer database with more than 650 artists, all tagged with their skills, tools, reel, etc., so we can find the right people for the job fast. And then there are great meet-ups in Berlin both for VFX and motion design people.

What inspires you?
A million things. I love the techy-geek stuff as much as beautiful design and great international crews that bring their own cultural background into the office every day. 

What is your favorite thing about working in your city?
Berlin is currently (apparently) THE place to be, especially in the digital design business, which is a great thing for me as company owner as I can tap into the best international talent, something that was not so easy a couple of years ago. Also Berlin still has a great deal of uncharted territories and wild, inspiring hidden corners.

When you aren’t working, what’s the ideal way to spend a day?
I’d rather not stay in the city over the weekend if I can avoid it; instead, I go out into nature and do things like rock climbing or kite surfing with my family (including my four-year-old daughter).

What led you to visual effects?
I was already freelancing in 3D modeling/animation while I was still in school, so there was never really any question about where I would end up. I started the company with my partner right after school and we went straight for it.

Can you describe a recent project where Shotgun was essential?
Shotgun is at the core of every project we run. It not only creates the project folder at the beginning, it also pushes the project to archival storage after it is finished (we can even put a project "on hold" and it will be "parked" on a different storage location just by changing its status inside Shotgun). We do a lot of international car shows for brands like Volkswagen, Porsche, Lamborghini, etc., and for this we developed a custom After Effects integration, which you can see here: (we just won the Shotgun Pipeline Award for this and are stoked!!). We can currently run three car shows simultaneously, one in 8K 50fps and the other two in 6K 50fps. It would be impossible without Shotgun.

Also, in April we finished a 4K corporate film for Hanergy, the biggest solar company in the world. I directed that project, and it is a great example of how smoothly a project can go with a solid pipeline. We shot in Germany and China, and you cannot imagine how awesome it was to see the latest renderings automatically popping up inside the Shotgun app in the back of a truck in the Tibetan Autonomous Region and to be able to give feedback instantly. The clients could not believe their eyes!

What’s your favorite feature of Shotgun?
There are countless features I could not live without anymore: the consistency it brings to all projects, the reduction of human error to a minimum, the ability to see the status of the entire show at a glance, the deep integration with other tools like Thinkbox Deadline and RV, etc.

What is the biggest challenge in running a studio today?
Probably finding the right balance between the projects that are necessary to survive and staying true to your love of crafting impossible worlds and projects that are close to your heart.
