Pipeline

While not purely a rigging topic, a good pipeline system is essential to any mid-to-large-sized project, and beneficial even on smaller ones. The complexity of a pipeline system will vary with the needs of the studio and/or project, but it can help with file organization, version control, backup, and many other functions.

Why use a pipeline system?

Saves time and sanity!! So, you only need to use it if you value time and sanity. :)

Helps to prevent confusion and ensures that the correct version of each file is used

Keeps files secure and safe from artist tampering--harder to accidentally delete, harder to illicitly download

Zeth Willie's Pipeline Basics (thank you!)

Why not to use a pipeline system?

Costly and time-intensive to set up

Needs to be constantly maintained due to changes in technology, software version updates, etc.

Can be rigid and idiosyncratic

Unnecessary level of complexity, if a project is small enough

Commercial Pipeline Software

Deadline

Temerity Pipeline

Free Pipeline Software

OpenPipeline

Basic Pipeline Setup

(from Zeth Willie)

The Poor Man/Woman's Pipeline, for those without programming knowledge..

Suppose your project is organized into a set of folders for different types of assets, e.g.:

ConceptArt

Boards

3D (e.g., Maya, C4D, ZBrush..)

Edit

Compositing (e.g., Nuke, AFX)

To_Client

From_Client

Live Action Plates

Mocap Data

Etc..

The 3D section will be divided FIRST into:

workingAssets

publishedAssets

and then into:

model

rig

animation

lighting

render

and then into:

characters

props

sets

Of course there are various ways to do this, and the folder order can be changed, but the main idea is that you have separate trees for WORKING and for PUBLISHED, so that it is always clear to all team members which is the latest, most current version of each storyboard, model, rig, animation, lit scene, finished edit, etc.
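To make the split concrete, here is a minimal sketch that bootstraps a tree like the one above (the folder names follow this section's suggestions and are not a standard; adapt them to your own conventions):

```python
import os

# Directory layout suggested above: WORKING vs PUBLISHED first,
# then department, then asset type.
STATES = ["workingAssets", "publishedAssets"]
DEPARTMENTS = ["model", "rig", "animation", "lighting", "render"]
ASSET_TYPES = ["characters", "props", "sets"]

def build_project_tree(root):
    """Create the 3D section of the project under 'root'."""
    for state in STATES:
        for dept in DEPARTMENTS:
            for asset_type in ASSET_TYPES:
                os.makedirs(os.path.join(root, "3D", state, dept, asset_type),
                            exist_ok=True)

build_project_tree("myProject")
```

Running it once at project start gives every department the same predictable paths to save into and read from.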

While working on your character/prop/set in some capacity or other, it lives in the 'workingAssets' directory, and you can save as many versions as you like. Once it is ready to PUBLISH, you save one version (one only) called 'myModel_master' or 'myModel_current'; 'myRig_master'; 'myAnimatedScene_master'; following your favorite, or your studio's favorite, naming convention.
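In this scheme a publish step is little more than copying the chosen working file over the single master file; a sketch (the `publish` function and the `_master` suffix are just the hypothetical convention from above):

```python
import os
import shutil

def publish(working_file, published_dir, asset_name, suffix="_master"):
    """Copy a working file into the published tree under its one
    canonical name, overwriting any previous publish."""
    ext = os.path.splitext(working_file)[1]
    os.makedirs(published_dir, exist_ok=True)
    target = os.path.join(published_dir, asset_name + suffix + ext)
    shutil.copy2(working_file, target)
    return target
```

For example, `publish('workingAssets/model/characters/myModel_v014.ma', 'publishedAssets/model/characters', 'myModel')` would write `publishedAssets/model/characters/myModel_master.ma`, so downstream departments always reference one stable path.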

If you don't have access to a better pipeline system that gives you a way to store external information about a file (e.g., what version is it? who worked on it last? what did they do?), one workaround is to store version info within Maya, in the 'Notes' section of an empty node that exists for this purpose.
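A sketch of that trick with maya.cmds (the node name 'pipelineInfo' and the note fields are made up for illustration; the import guard keeps the formatting helper usable outside Maya):

```python
try:
    import maya.cmds as cmds  # only available inside Maya
except ImportError:
    cmds = None

def format_version_notes(version, author, comment):
    """Build the text that goes into the node's Notes field."""
    return "version: {}\nauthor: {}\ncomment: {}".format(version, author, comment)

def write_version_notes(version, author, comment, node="pipelineInfo"):
    """Store version info on an empty node's 'notes' attribute (Maya only)."""
    if cmds is None:
        raise RuntimeError("must be run inside Maya")
    if not cmds.objExists(node):
        node = cmds.createNode("transform", name=node)
    # Maya only adds the 'notes' attribute once something is typed into
    # the Notes box, so create it if it is missing.
    if not cmds.attributeQuery("notes", node=node, exists=True):
        cmds.addAttr(node, longName="notes", dataType="string")
    cmds.setAttr(node + ".notes",
                 format_version_notes(version, author, comment),
                 type="string")
```

Anyone opening the scene can then read the version info straight from the Attribute Editor's Notes section on that node.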

@river - Start simple (if you do not have any rudimentary pipeline setup). In designing a pipeline (and eventually tooling for it) you need to start light but solid, with some fundamental principles incorporated.

*Core Tenets*

Three things are of fundamental importance - Scalability, Reliability, and Security. Each of these aspects should be addressed.

*Scalability* - The pipeline should be able to handle 10 shots as easily as it handles 2000. It is not easy to future-proof for data quantity from the get-go, but you should still pay some heed to it.

*Reliability* - The data should be reliably stored and backed up. In the case of database outages, the pipeline should be able to recover itself as quickly as possible. If possible, there should be no central point of failure - go as distributed as possible (git is a wonderful example of such a distributed system). With cloud computing becoming more affordable and secure every day, it is worth a look as well.

*Security* - Protecting the IP is important. User/tool permissions for accessing the database should be implemented in the most secure way. This could start by harnessing the power of OS user permissions (in the case of folders). Users should not be able to access the data directly; only tools should. If you are using a database, most have this built in; look at their relevant APIs.
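On a POSIX file server, the OS-permission approach can start as simply as stripping write access from published folders, so only the owning (tool) account can modify them; a minimal sketch, assuming published data lives in plain folders:

```python
import os
import stat

def lock_published(path):
    """Make a published folder read/execute-only for the group and
    inaccessible to others; the owning (tool) account keeps full control."""
    os.chmod(path, stat.S_IRWXU | stat.S_IRGRP | stat.S_IXGRP)  # mode 0o750
```

A real setup would also put artists and tool accounts in separate groups, but this shows the idea.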

*Tools*

Some basic tools need to be designed -

*Publishers* - For exporting data from DCCs/departments. Naming conventions and validity checks are the first part; then comes exporting the data to a database.
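The naming-convention and validity checks of a publisher can start as a simple regex gate before any export happens; a sketch (the pattern here, asset_dept_v###.ext, is just an example convention, not a standard):

```python
import re

# Example convention: <asset>_<dept>_v<3-digit version>.<ext>
NAME_RE = re.compile(r"^(?P<asset>[A-Za-z][A-Za-z0-9]*)_"
                     r"(?P<dept>model|rig|anim)_"
                     r"v(?P<version>\d{3})\.(ma|mb|abc)$")

def validate_publish_name(filename):
    """Return the parsed fields, or raise if the name breaks convention."""
    match = NAME_RE.match(filename)
    if not match:
        raise ValueError("'%s' does not match naming convention" % filename)
    return match.groupdict()
```

For example, `validate_publish_name("hero_rig_v012.ma")` returns the parsed asset, department, and version, while a stray `hero_final_FINAL2.ma` is rejected before it can pollute the publish tree.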

*Builders/Importers* - For importing data into DCCs: looking up the data in a database (which can be as simple as folders or as complex as a large database spread across huge servers), retrieving it, and importing it into the scene.

*Versioning*

It is also worth mentioning that the versioning system is of utmost importance here, so do keep in mind what kind of versioning framework you are going to use. It should allow easy operations like versioning up or down, and it should be intuitive to the artists: they should be able to easily access and use any old version, and to iterate over versions of their work/tasks in the simplest possible way. Again, look at the design principles used in git for versioning. If the whole Linux kernel can depend on it, so can a pipeline.
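The version-up/version-down operations artists need can be backed by something as small as this (assuming files are named with a _v### token, which is an assumption of this sketch, not a standard):

```python
import re

# Matches a trailing version token like "_v007" before the extension.
VERSION_RE = re.compile(r"_v(\d+)(\.\w+)$")

def bump_version(filename, step=1):
    """Return the filename with its _v### counter moved up or down."""
    match = VERSION_RE.search(filename)
    if not match:
        raise ValueError("no _v### version token in '%s'" % filename)
    number = max(1, int(match.group(1)) + step)  # never go below v001
    token = "_v%0*d" % (len(match.group(1)), number)  # keep zero-padding
    return filename[:match.start()] + token + match.group(2)
```

So `bump_version("hero_rig_v007.ma")` gives the next save name, and `step=-1` walks back to the previous one.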

*Inter-Studio Data Flows*

While talking of inputs and outputs, do mind that most modern pipelines have both 'inter-' and 'intra-' data flows. Meaning, you have intra-department data flows within your studio, but there is also the case of inter-studio transfers. Collaboration is a growing trend in our industry; even the biggest studios these days outsource work to smaller shops. This means you should be able to efficiently handle data coming from an outside studio and transfer it back. Most such transfers employ industry-accepted formats like Alembic. Thankfully, our industry has encouraged and embraced open source projects more than any other.

*Free Open Source Projects*

There is a host of open source projects, big and small, helping you facilitate such pipelines; Alembic is just one of them, and USD from Pixar is gaining popularity since going open source. In short, don't reinvent the wheel: use already-available libraries to your advantage. This can save huge amounts of time and development cost.

*Pipelines are living entities*

Pipelines are complex systems with a life of their own. They evolve over time in response to various production pressures. You cannot think of every possible requirement before it comes to haunt you, but you can design loosely coupled components (as generic as possible) so that the pipeline can evolve easily. You are never allowed to say no.

*The Human Element*

Pipelines have two brains (just as we humans have a left and a right brain). One is the free-flowing, creative, innovative side; the other is logical, calculating, and rational. The two fight with each other all the time, and the resulting conflict causes the pipeline to evolve. The human side is the artists: they need fewer buttons on the UI and simplified workflows, and they need to focus on their art rather than the technology; tool design should take care of this. On the other side are the database, the data, the metadata, and all the machinery like versioning systems, naming conventions, data access, and IO operations, which need to be designed in the most rational, pragmatic, and logical way possible. A good pipeline constantly tries to broker peace between the two, satisfying the common denominator. And of course, there to wreck all of this are 'the clients'. They are paying and they are the boss; they can and do demand the most outrageous things, and ask for them to be delivered at exactly 2.37am the next morning. Do account for that as well.

Finally,

*import this*

It would be a disservice to write on the *python*-inside-maya group without mentioning the awesomeness of Python. Go to your REPL, type `import this`, and read carefully what it spits out. Called the 'Zen of Python', these are like the 'ten commandments' (only better, there are 20) of the religion called Python, and they apply to any design process. Each one is a gem, and most of them pertain to pipeline design. Timeless in nature, they are relevant to any sort of engineering. Think about them and how they apply to your pipeline design. Use them in your pipeline, and thank Tim Peters for writing down 19 of them.

These are my thoughts, quickly written and somewhat incohesive, but I hope they help.

Good luck!

- Alok.

Real Pipeline Features

Check-in/Check-out file system
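At its simplest, a check-in/check-out system is a lock file sitting next to the asset that records who has it; a sketch (file layout and function names are hypothetical):

```python
import getpass
import os

def check_out(asset_path):
    """Claim an asset by dropping a .lock file beside it; refuse if
    someone else already holds it."""
    lock = asset_path + ".lock"
    try:
        # O_EXCL makes the claim atomic: only one artist can win the race.
        fd = os.open(lock, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        holder = open(lock).read().strip()
        raise RuntimeError("already checked out by %s" % holder)
    with os.fdopen(fd, "w") as f:
        f.write(getpass.getuser())

def check_in(asset_path):
    """Release the asset by removing its lock file."""
    lock = asset_path + ".lock"
    if os.path.exists(lock):
        os.remove(lock)
```

A real system would keep the locks on shared storage and surface them in the artists' UI, but the lock file alone already prevents two people from silently overwriting each other's work.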

Skinning and Pipeline

Is it better to skin a model referenced into a rig file, or to bring the model into the rig file and do updates by copying skin weights, etc.?

One suggestion of Zeth's is to reference the model into your rig file while you are working on it locally, and when you go to publish, import all the references and run a script to clean up namespaces. However, skinning a referenced model creates a lot of 'ShapeDeformed' nodes.

Resources

Marc Beaudoin and Martin Poirier - Cross-platform Animation Pipeline at Behaviour Interactive Studios

http://area.autodesk.com/gdc2011/class2

(you have to register, but it's free!)