Two Guys and a Toolkit - Week 6: Dataflow and Workflow


Dataflow and Workflow

Hi everyone! Welcome back for part six of our series dedicated to building a simple pipeline using Toolkit.

Up to this point in the series, we’ve been looking at setting up and building the fundamental pieces of our simple pipeline. Here are links to the previous posts, just in case you missed anything:

  1. Introduction, Planning, & Toolkit Setup
  2. Configuration
  3. Publishing
  4. Grouping
  5. Publishing from Maya to Nuke

As always, please let us know if you have any questions or want to tell us how you’re using Toolkit in exciting and unique ways. We’ve had a really great response from you all in previous posts and we look forward to keeping that discussion going. So keep the feedback flowing!

This week we thought we’d talk about how all the pieces we’ve been building fit together and discuss the dataflow and artist experience within the context of our pipeline. As usual, we'll take a look at what bits of Toolkit worked well for us and which ones we think could be better. This will give us a solid foundation for the rest of the series as we transition into a discussion with you all about our pipeline philosophies and building more sophisticated workflows.

Workflow

Hey everyone, Josh here! One of the strengths of Toolkit, in my opinion, is that it exposes a common set of tools for every step of the pipeline. This means there is a common pipeline "language" that everyone on production speaks. If someone says, "you need to load version 5 of the clean foreground plate from prep", that means something significant whether you're in animation, lighting, or compositing, because you're all using the same toolset. The more your pipeline can avoid building step-specific workflows and handoff tools, the more flexible it will be. Obviously you still have to be able to customize how data flows between steps, but you should avoid hardwiring that into your pipeline.
 
Since we’ve made a conscious effort to keep our pipeline simple, and because we like having a consistent set of tools across all of our pipeline steps, we haven’t deviated much from the standard, out-of-the-box Toolkit apps. So rather than analyzing the workflow at each step of the pipeline individually, I think it’s more efficient to look at how the average artist working in the pipeline uses these tools. I’ll also point out the customizations we’ve made (most of which we’ve mentioned before). Hopefully, combined with the Dataflow section of this post, this will give you a complete view of how the pipeline is meant to work and how the packaged Toolkit tools are used.

Loading Publishes

The Loader app is used by almost every step in the pipeline as a way of browsing and consuming upstream PublishedFiles. The loader has quite a few options for configuring the browsing and filtering experience for the user, which is really cool. And of course there are hooks to customize what happens when you select a publish to load.
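For reference, here is a trimmed-down sketch of what one of those action hooks can look like for Maya. The structure follows the Loader's action hooks (a generate_actions method that advertises actions for a publish, and an execute_action method that runs them); the "import_alembic" action name and the import logic are just our illustration.

    import sgtk
    import maya.cmds as cmds

    HookBaseClass = sgtk.get_hook_baseclass()


    class MayaLoaderActions(HookBaseClass):

        def generate_actions(self, sg_publish_data, actions, ui_area):
            # Advertise the actions this hook supports for the selected publish.
            # "actions" comes from the action_mappings in the app's configuration.
            action_instances = []
            if "import_alembic" in actions:
                action_instances.append({
                    "name": "import_alembic",
                    "params": None,
                    "caption": "Import Alembic",
                    "description": "Imports the Alembic cache into the current scene.",
                })
            return action_instances

        def execute_action(self, name, params, sg_publish_data):
            # Resolve the publish to a local path and run the requested action.
            path = self.get_publish_path(sg_publish_data)
            if name == "import_alembic":
                cmds.loadPlugin("AbcImport", quiet=True)
                cmds.AbcImport(path, mode="import")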

PublishedFile loading

From a user standpoint, there seems to be a lot of clicking to get at what you’re actually interested in. Between drilling down into a context, filtering publishes, selecting a publish, and finally performing an action, you can rack up quite a few clicks. If you need publishes from more than one input context, you potentially have to start all over again. I think that users often know exactly what they want, and having the ability to type a name into a search widget might be more convenient. There is a search/filter widget in the UI, but it’s for limiting what shows up in the already-filtered view. It would be great to have a smart search that prioritized the returned publishes that were in the same Shot or Asset as the user’s current context.

I also found the filtered list of publish files difficult to parse visually. You can see in the screenshot above that the Loader is displaying PublishedFiles in a single list and they are sorted by name. As a user, I would love to be able to sort by task, version number, username, date, etc.

To me, the Loader is similar enough to a file browser that it is easy to notice where some of the common file browser features are missing. In addition to the sorting and filtering options, I noticed immediately that there were no buttons at the bottom of the UI. I was expecting at least a Cancel/Close button. What’s the general feedback you all get from artists using the Loader UI?

I also wonder how people know which publishes they need to load on production. Is this just a discussion people have with folks upstream (which is perfectly reasonable)? Or does your facility do anything special to track the “approved” publishes in Shotgun and relay that information to the artists somehow? Have you used the configuration capabilities of the Loader to only show publishes with a certain status, for example? It would also be interesting to spec out how we might use Shotgun to predict what a user might want or need for their context.
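As a sketch of what status-based filtering could look like if you rolled it yourself, you can query only "approved" publishes for the current entity through the Shotgun API. The "apr" status code below is an assumption; your site's status list values may differ.

    import sgtk

    engine = sgtk.platform.current_engine()
    context = engine.context

    # Assumption: "apr" is the short code for the "Approved" status on your site.
    filters = [
        ["entity", "is", context.entity],
        ["sg_status_list", "is", "apr"],
    ]
    fields = ["code", "version_number", "published_file_type"]

    approved_publishes = engine.shotgun.find("PublishedFile", filters, fields)
    for publish in approved_publishes:
        print("%s (v%03d)" % (publish["code"], publish["version_number"]))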

You may have noticed a “Show deprecated files” checkbox in the bottom-right corner of the Loader screenshot. That’s a really cool feature that was added by Jesse Emond, the Toolkit intern, who has been kicking some serious butt around here. We’ll give Jesse a formal introduction in a future post where he’ll be able to talk about deprecating PublishedFiles in our simple pipeline. So definitely be on the lookout for that!

We mentioned in a previous post that we customized the Loader hooks to connect shaders and Alembic caches as they’re imported. You can see that hacky little bit of code here. And here’s what it looks like in action:

Auto shader hookup on Alembic import
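If you don't feel like digging through the linked hook, the general idea can be sketched in a few lines. The naming convention that matches meshes to shading groups here is purely illustrative:

    import maya.cmds as cmds


    def hookup_shaders(imported_nodes):
        # Assign shading groups to freshly imported Alembic meshes by name.
        for mesh in cmds.ls(imported_nodes, type="mesh", long=True):
            transform = cmds.listRelatives(mesh, parent=True)[0]
            # Assumption: geometry transforms are named "<asset>_geo" and their
            # shading groups "<asset>_SG".
            shading_group = transform.replace("_geo", "_SG")
            if cmds.objExists(shading_group):
                cmds.sets(mesh, edit=True, forceElement=shading_group)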

File Management

Next up is the Workfiles2 app, which is in pre-release. This app is tasked with managing the files that artists work in. Every step of our simple pipeline uses this app to open and save working files.

Saving the work file

The default interface, in Save mode, has fields for specifying a name, the version, and the extension of the working file. These fields are used to populate the template specified in the app’s configuration for the engine being used. In this example, the template looks like this:

The template being referenced in the workfiles config


The template itself
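To make the screenshots a bit more concrete, here is roughly how those Save dialog fields resolve against a template through the Toolkit API. The template and field names below are placeholders standing in for the ones in our config.

    import sgtk

    engine = sgtk.platform.current_engine()
    tk = engine.sgtk

    # Assumption: "maya_shot_work" is the work-file template configured for tk-maya.
    template = tk.templates["maya_shot_work"]

    fields = {
        "Sequence": "seq_010",
        "Shot": "shot_010",
        "Step": "light",
        "name": "main",   # the "name" field from the Save dialog
        "version": 5,     # the version field from the Save dialog
    }

    work_path = template.apply_fields(fields)
    # e.g. .../seq_010/shot_010/light/work/maya/main.v005.ma, depending on the template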

Not having to manually keep track of which version number to tack onto the file is a nice convenience, but I do wish Toolkit had a more robust concept of versioning. Right now, the user can manually create as many versions of a Maya file as they want, which is great, but the version is isolated to that single file. My preference would be for the version to be part of the work area context itself, so that the state of the work area, including my main work file, any associated files I’m using, and all my upstream references, is versioned together. In simple terms, I’d like to be able to get back to the state of my entire work area at any time on production. I’m getting a little ahead of myself, though. I want to discuss this more in a future post but, in the meantime, definitely let us know how you handle versioning at your facility.

The Save dialog can be expanded as well:

Expanded File Save dialog

You’ll notice the similarities with, and reuse of, some of the UI elements from the Loader. The expanded view allows you to browse and save within a different context.

As mentioned, the workfiles app is also used for opening working files:

File Open dialog

As with the other views, you can browse to the context you’re interested in and find work files to open. I think the interface looks clean, but I still find myself wanting the ability to do more sophisticated searching and filtering across contexts. What do you all think?

Snapshotting 

The Snapshot app is a quick way to make personal backups of your working file.

Snapshot dialog

Artists can type a quick note, take a screenshot, and save the file into a snapshots area. The app also provides a way to browse old snapshots and restore them. It’s a simple tool to use and a nice feature to provide artists. I’d actually like to have comments and thumbnails attached to artists’ working files as well. I wonder if this functionality shouldn’t just be part of the Workfiles app’s Save/Open. Thoughts?

It would also be nice to have a setting and hook that allowed for auto-saving, or auto-snapshotting, the files. There could be a setting that limits the number of saves to keep, too. I realize this type of functionality exists in many DCCs already, but having a consistent interface for configuring and managing auto-saves across all pipeline steps and DCCs would be great.
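Here is a rough sketch of what that could look like inside Maya. The "snapshots" sibling-directory convention and the save-triggered timing are our own assumptions for the example, not something the Snapshot app does out of the box.

    import os
    import shutil
    import time

    import maya.cmds as cmds


    def auto_snapshot():
        # Copy the current scene into a sibling "snapshots" directory.
        scene_path = cmds.file(query=True, sceneName=True)
        if not scene_path:
            return  # scene has never been saved, so nothing to snapshot

        snapshot_dir = os.path.join(os.path.dirname(scene_path), "snapshots")
        if not os.path.isdir(snapshot_dir):
            os.makedirs(snapshot_dir)

        base, ext = os.path.splitext(os.path.basename(scene_path))
        stamp = time.strftime("%Y%m%d_%H%M%S")
        shutil.copy2(scene_path, os.path.join(snapshot_dir, "%s_%s%s" % (base, stamp, ext)))


    # Take a snapshot automatically every time the artist saves the scene.
    cmds.scriptJob(event=["SceneSaved", auto_snapshot])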

Publishing

Publishing and the Publisher app are things we’ve covered quite a bit in previous posts, so we don’t need to go into too much detail here. We’ve shown some of the customizations we’ve made to the configs and hooks in Maya. Here are the secondary exports we’ve mentioned in action:

Custom secondary publishes

Updating the Workfile

The Scene Breakdown app displays and updates out-of-date references and is used by all steps with upstream inputs. There is a hook for customizing how referenced publishes are discovered in the file and how it is determined which of them are out of date, and another hook for customizing how the references are actually updated. This makes the app, like all the Toolkit apps, very flexible and able to adapt to changes in pipeline requirements.
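To give a feel for the shape of those hooks, here is a simplified sketch of the scan and update logic for Maya references. The method names mirror the Breakdown app's scene operations hook; the bodies are pared down to the essentials.

    import sgtk
    import maya.cmds as cmds

    HookBaseClass = sgtk.get_hook_baseclass()


    class BreakdownSceneOperations(HookBaseClass):

        def scan_scene(self):
            # Report everything in the scene that the Breakdown app should track.
            items = []
            for ref_path in cmds.file(query=True, reference=True) or []:
                ref_node = cmds.referenceQuery(ref_path, referenceNode=True)
                items.append({"node": ref_node, "type": "reference", "path": ref_path})
            return items

        def update(self, items):
            # Swap each selected reference over to the new path chosen in the UI.
            for item in items:
                if item["type"] == "reference":
                    cmds.file(item["path"], loadReference=item["node"])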


Breakdown of references in the file

The interface itself is fairly simple and easy to understand. I like being able to filter for out-of-date items and bulk-update them. I do think there’s room for a discussion about combining the Breakdown and Loader apps into a single interface where you can see what you have in your file, what’s out of date, and what new things are available to reference. I’d also like the ability to lock out-of-date references if I know I don’t ever want to update them. This might be useful when a shot is close to final and you don’t want to risk the update.

One of the things we’ve teased is our upcoming discussion about a Subscription system to complement Publishing. We’ll be talking about what it would mean to keep track of subscriptions at the context level and to have a Breakdown-like interface that allows you to manage inputs across DCCs. I won’t go into more detail right now, but definitely check back in for that post.

Dataflow

Hey everyone, Jeff here! I’m going to give you guys a high-level view of how data flows through our pipeline, plus some ideas about what it would look like if we added a couple more stages of production that are likely to exist in your studio. There isn’t anything here that we haven’t talked about in detail in previous posts, but it does show everything together from 10,000 feet, so to speak.

What We Have

Let’s take a look at what Josh and I have up and running in our pipeline.


Pretty simple, right? As we discussed in the first post of this series, we’ve limited ourselves to the bare minimum number of pipeline steps for a fully-functioning pipeline. It’s also worth noting that we’ve deliberately limited the types of data we make use of. From the above diagram you can see that our outputs are limited to Alembic caches, Maya scene files, and image sequences. By utilizing Alembic in all places where it’s suitable, we cover much of the pipeline’s data flow requirements with a single data type. This is great from a developer’s point of view, because it means we’re able to reuse quite a bit of code when it comes to importing that data into other DCC applications. In the end, the only places in the pipeline where we’re required to publish DCC-specific data are our shader networks and rigs. These components are tightly bound to Maya, and as such need to remain in that format as they flow down the pipeline.
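To illustrate the code-reuse point, the DCC-specific part of bringing an Alembic cache into a scene can be kept to a thin dispatch layer while everything around it (path resolution, bookkeeping) stays shared. A minimal sketch, covering just our Maya and Nuke engines:

    import sgtk


    def import_alembic(path):
        # Bring an Alembic cache into whichever DCC the current engine is running in.
        engine = sgtk.platform.current_engine()

        if engine.name == "tk-maya":
            import maya.cmds as cmds
            cmds.loadPlugin("AbcImport", quiet=True)
            cmds.AbcImport(path, mode="import")
        elif engine.name == "tk-nuke":
            import nuke
            read_geo = nuke.createNode("ReadGeo2")
            read_geo["file"].setValue(path)
        else:
            raise NotImplementedError("No Alembic import support for %s" % engine.name)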

What Could Be

If we expand our pipeline out to cover live-action plates, which are obviously a requirement in any live-action visual effects pipeline, we add a bit of complexity.


You’ll notice, though, that we have not added any additional TYPES of published data. We have more Alembic, which will contain the tracked camera plus any tracking geometry that needs to flow into the Layout step of the pipeline, as well as the image sequences that comprise the plates for the shot. In the end, we’ve added very little complexity to the platform underneath the pipeline in order to support the additional workflow elements.

We can expand this further by adding in an FX step.

I will fully admit that this is an optimistic amount of simplicity when it comes to FX output data. It’s entirely possible that the list of output file types could expand beyond what is shown here, as simulation data, complex holdouts, and any number of other FX-centric outputs could come into play. However, the basics are still covered by everything we’ve already supported: final elements rendered and passed to a compositor, or cached geometry sent to lighting for rendering.

Conclusion

That’s it for this week! At the end of the series we’ll be putting together a tutorial that shows you how to get our simple pipeline up and running. So if you still have questions about how things work, that should help fill in the blanks.

Like we always say, please let us know what you think! We love the feedback and are ready to learn from you all, the veterans, what it’s like to build Toolkit workflows on real productions. The more we hear from you, the more we learn and the more prepared we’ll be for supporting you down the road.

Next week we’re planning on diving into the realm of Toolkit metaphysics - or at least more abstract ideas about pipeline. If you have strong opinions or philosophies about how production pipelines should work, we’ll look for you in the comments! Have a great week everyone!

About Jeff & Josh 

Jeff was a Pipeline TD, Lead Pipeline TD, and Pipeline Supervisor at Rhythm and Hues Studios over a span of 9 years. After that, he spent 2+ years at Blur Studio as Pipeline Supervisor. Between R&H and Blur, he has experienced studios large and small, and a wide variety of both proprietary and third-party software. He also really enjoys writing about himself in the third person.

Josh followed the same career path as Jeff in the Pipeline department at R&H, going back to 2003. In 2010 he migrated to the Software group at R&H and helped develop the studio’s proprietary toolset. In 2014 he took a job as Senior Pipeline Engineer in the Digital Production Arts MFA program at Clemson University where he worked with students to develop an open source production pipeline framework.

Jeff & Josh joined the Toolkit team in August of 2015.


4 Comments


At October 23, 2015 at 10:03 AM , Blogger Unknown said...

Hi,

Nice overview/recap!

We use our loader app for both published files and the element library. When used to browse the element library, one request we're getting from the users is the ability to filter by tag, and a way to add an element to a Favorite list (come to think of it, a "recently used" list could be useful as well).
I'm not sure how useful that would be when browsing publishes though!

One thing that's a bit surprising is that both the breakdown app and the publish app need to find what's referenced in the current file (the former to display the list, the latter to add upstream dependencies). We ended up having both hooks reference an in-house library as there's no way to share code between hooks (they're not in a proper Python module), and I think it's a shame we have to get away from the proper Toolkit way of working for this....

And I wish it was this easy to add tracking to the graph! It usually adds a great deal of complexity with matchmoved geometry, undistorted plates, overscan values, camera rigs, distortion/stmaps, etc. But I get the idea that once you've added all the types of publishes that you need, adding new departments is not that hard really!

Benoit

 

At October 29, 2015 at 11:22 AM , Blogger Jeff Beeland said...

Hey Benoit,

It's interesting that you bring up your element library. The last big development project that I handled at Blur was to build a library framework, and then to implement a manager application for an FX library (mostly Fume setups and caches). We built it on top of Blur's database rather than Shotgun but, as you've proven with your setup, using Shotgun/Toolkit as the backbone is also possible. The ability to tag library items, filter on those tags at the view level, and user-level favorites were all requirements that I was given, as well. The challenge with libraries is that they're typically global to the studio, and so can contain a LOT of items. That means giving users a way to organize and filter that data is key from a usability standpoint.

One of the issues I have with the Loader app is exactly that; I feel like things become cluttered and it's difficult to visually sift through what's there to figure out what I should be using. I think this is going to be even more the case with an element library due to the sheer volume of items to be presented. Now that you've brought this up, I actually think it could be useful to tag published files in general. We discussed doing this with the R&H asset system a number of times, as well. Who would create these tags (or would there be tags attached at publish time by the code in certain circumstances?) I think depends on the individual studio's workflow. One thing I would stress would be to NOT modify the behavior of an app based on tags attached to the entity that it's processing; instead, they should be used for filtering and ONLY filtering. I bring this up because it could easily become an issue that "when I loaded this file it didn't come in the way it should have because it didn't have the right tag on it" which is a pain to debug.

For sharing code between apps and hooks while still staying "within Toolkit", I would suggest building your own framework. Josh and I have done this to house our custom code here:

https://github.com/shotgunsoftware/tk-framework-simple

There are plenty of other examples in Toolkit, as well. The advantage is that you stay within the Toolkit platform, but elevate your custom libraries to a more global vantage point.
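From inside a hook, pulling the framework in looks roughly like this (the module and function names below are placeholders for whatever shared code you put in your framework, and the version token will match your environment config):

    # Inside any hook: load the shared framework and import a module from it.
    simple_fw = self.load_framework("tk-framework-simple_v0.x.x")
    scene_utils = simple_fw.import_module("scene_utils")  # placeholder module name

    # Both the breakdown and publish hooks can now call the same implementation.
    references = scene_utils.find_scene_references()  # placeholder function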

As for Tracking, yes. I definitely simplified the problem for the sake of making a point. We left out the live-action component in our pipeline for exactly the reasons you mention. Dealing with plate prep work like flattening lens distortion, rig removal, set extensions, camera tracking and LIDAR set data, retimes, CG takeover of a tracked camera, film/digital color spaces, CDLs...the list goes on and becomes VERY complicated. It's giving me some anxiety just thinking about it. ;)

 

At November 26, 2015 at 2:52 AM , Blogger Marijn Eken said...

I fully agree with everything you said about the Breakdown App. It makes so much sense to integrate this with the Loader, as that is where you'll be looking for (new) stuff anyway. The locking idea is a very nice one, since I've come across it numerous times that I've wanted to keep a certain version and would have to take care not to 'overwrite' it with a higher version.

And speaking of overwriting. I have a 'problem' with how most companies have implemented a Breakdown type of app into their pipeline. I generally don't ever blindly press the update button and hope for the best. Because you might be updating things that you're not currently looking at and if you find out 10 versions later, you could have a hard time figuring out when/where that went wrong exactly. So (in Nuke at least) I would always make a copy of all the Read nodes that I would be checking for updates. And after updating, I would toggle between the old and new Read nodes to check for anything 'weird'. Way too often there's something wrong with a render that you will spot this way (missing layer, wrong holdout, no overscan, etc. etc.) that is quite hard to spot if you simply update the Read nodes to the latest version without looking.

So, this is not necessarily a comment on the implementation by Shotgun, but more a question of how this could be done in a better way? Sticking to what I know best, Nuke, here's a start of an idea: I guess you could make a gizmo that encapsulates the Read node (as many companies do), which could show their status through color (red=out of date, green=latest version, yellow=locked, etc.). It could have two controls: one to select the version to use in your script, and one to select a different version you want to compare to. Then a toggle switch to toggle between the two, coloring the node black when the toggle is on, to remind you to turn off the toggle.

I still like the idea of 'discovering' new versions through the Loader app though, because you can easily preview from there without needing to go through all the Read nodes. Maybe it could work in conjunction with the Read node gizmos and update the 'toggle version' to the latest version upon request (and color the ones that were changed).

I'm not sure if this idea is easily translatable to other pieces of software though, like a 3D package. Because loading/toggling between two versions of a huge asset might not be a good idea. So I'm curious what anyone's thoughts are on this subject. Or, should I just be hitting the update button, render, and see if anything is broken? I would still feel like losing control a bit.

Marijn

 

At September 13, 2016 at 1:48 AM , Blogger Pmac said...

Nice suggestions Marijn. I was thinking about other apps, and the issues you mention could be overcome if the comparison is done of the version's media; eg you could pull up RV with the current and new version quicktimes. You could then potentially see what has changed between versions (unless it's some kind of meta-data change that wouldn't be visible in a playblast).

 
