
Refactoring Part 5: Component Settings Widget by Jeremy Ernst

In the beta version of the tools, changing settings on a component required building a settings widget, and that widget had to be written from scratch for each component. Even worse, it was impossible to do anything without a graphical user interface. You could not change a property on a component via the command line; it had to be done through a UI. It. was. bad. You can see in the instantiation of the leg class that it even took in an instance of the user interface! These two things should be totally separated, and a component should never need to know about the user interface!

What the hell was I thinking?


To address this, things were rethought from the ground up. This is covered in previous posts, but to summarize: a component creates a network node, and it uses properties to get and set data on that network node. The UI simply displays that data, or calls on the setter for a property if a widget value is changed. You can see the general flow of this below.

Components are instantiated either by passing in no network node, in which case one is created, or by passing in an existing network node, in which case the instance is built from that data. You can see an example of that here:
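Roughly, and with the attribute setup assumed rather than taken from the actual ARTv2 code, that boils down to something like this:

import maya.cmds as cmds


class ART_Component(object):
    """Sketch of a component backed by a network node."""

    def __init__(self, name=None, network_node=None):
        if network_node is None:
            # No node passed in: create a fresh network node to hold this
            # component's settings.
            self.network_node = cmds.createNode("network", name="{0}_meta".format(name))
            cmds.addAttr(self.network_node, longName="parent", dataType="string")
        else:
            # A node was passed in: build the instance from that existing data.
            self.network_node = network_node

So ART_Component(name="leg_l") creates a new component along with its network node, while ART_Component(network_node="leg_l_meta") wraps data that already exists in the scene (both names are just examples).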

Properties are an important element in this refactor. In order to have a component’s settings widget auto-generate, it simply gathers the properties of that class (including the inherited properties) and builds a widget off of those. By looking at the corresponding attribute types on the network node, it knows what type of widget to build. And because it’s a property, the changing of a widget value just calls on setattr! Here is what the code looks like for generating the property widgets and setting a property:
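Sketched out with PySide2 and a few assumed helpers (the widget-type mapping and the names below are illustrative, not the real implementation), the generator looks something like this:

from PySide2 import QtWidgets

import maya.cmds as cmds


def build_settings_widget(component):
    """Sketch: build one widget per property found on the component's class."""
    container = QtWidgets.QWidget()
    layout = QtWidgets.QFormLayout(container)

    # Walk the class (and its bases) looking for properties.
    for name in dir(type(component)):
        class_attr = getattr(type(component), name, None)
        if not isinstance(class_attr, property):
            continue
        if not cmds.attributeQuery(name, node=component.network_node, exists=True):
            continue

        # The attribute type on the network node decides the widget type.
        attr_type = cmds.getAttr("{0}.{1}".format(component.network_node, name), type=True)
        if attr_type == "bool":
            widget = QtWidgets.QCheckBox()
            widget.setChecked(getattr(component, name))
            # Changing the widget value just calls the property setter via setattr.
            widget.toggled.connect(lambda value, n=name: setattr(component, n, value))
        elif attr_type == "long":
            widget = QtWidgets.QSpinBox()
            widget.setValue(getattr(component, name))
            widget.valueChanged.connect(lambda value, n=name: setattr(component, n, value))
        else:
            widget = QtWidgets.QLineEdit(str(getattr(component, name)))
            widget.editingFinished.connect(
                lambda w=widget, n=name: setattr(component, n, w.text()))

        layout.addRow(name, widget)

    return container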

Okay, so let’s look at this in action. The UI has been redesigned to be faster and easier to use. In the clip below, when the Rig Builder is launched, it creates an asset and a root component. Components can then be added to the scene, which adds them to a list widget. Each item in the list widget has icons next to it for hiding that component in the scene, toggling aim mode, and toggling pin-in-place. Clicking on an item builds the settings widget for that component, which is generated from the component’s properties. Any changes to those settings then call on the component property’s setter, which handles what happens when a value is changed.

Since these changes, writing tools for the components has been a breeze. It’s amazing what a difference a good design can make to development efficiency. I’ll go over the Rig Builder interface and its various tools next time.

Refactoring Part 4: Component Creation by Jeremy Ernst

It’s been a while since I’ve posted an update on the progress of the tools. I’m pretty happy with where things are right now and where they’re headed. I figured I’d make a post comparing what it takes to create a new component in the beta version of the tools versus the new version.

In the beta version, a fair amount of code needed to be written and a fairly complex Maya file had to be created before you had a component that could generate some joints. This was, frankly, due to bad design and a lack of forethought and planning.

The highlighted methods were the ones that needed to be implemented in this case for the leg to work and generate joints.

The joint mover file was also complex and had some assumptions about hierarchy and naming. All bad.

There’s a lot to address here. For instance, the class for a component should be much simpler and should not need to build UI widgets and such. Much of the bespoke functionality existed because there was no unified system, so each component might have its own way of pinning a component, or setting up aim mode, or whatever.

Here’s a class diagram of the refactored code.

There’s a lot to look at, but the important bit is BipedLeg and how little is needed to get that component creating some joints. To create a component, you simply need to define the unique properties of that component (ex: number of thigh twists) by adding them as attributes to the metanode and then implementing their property getters and setters. You also need to define/create a joint mover file, which is now incredibly easy.
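As a rough illustration of that shape (the class below is a sketch with made-up attribute and file names, not the real BipedLeg), a new component boils down to something like this:

import maya.cmds as cmds


class BipedLeg(object):
    """Sketch of a minimal component: unique properties plus a joint mover file.

    In the real tools this would subclass the component base class.
    """

    # Path to the Maya file containing the marked-up joints (made-up path).
    joint_mover_file = "components/biped_leg_joint_mover.ma"

    def __init__(self, metanode):
        self.metanode = metanode
        if not cmds.attributeQuery("thigh_twists", node=self.metanode, exists=True):
            cmds.addAttr(self.metanode, longName="thigh_twists", attributeType="long")

    @property
    def thigh_twists(self):
        """Number of thigh twist joints, read straight off the metanode."""
        return cmds.getAttr(self.metanode + ".thigh_twists")

    @thigh_twists.setter
    def thigh_twists(self, value):
        cmds.setAttr(self.metanode + ".thigh_twists", value)
        # The real setter would also rebuild the twist joint movers here.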

For the new joint mover file, you start by creating the joints you want your component to have in its max configuration (there are exceptions to this, like the spine and chain, where you actually create the min configuration).

Create the joints you want your component to create, and give them a name (which the user can then overwrite if they wish).

Once you’ve created your joints and ensured your joint orients are nice and tidy, there is a tool to mark the joints up with attributes. These attributes will build the joint mover controls, determine how aim mode is set up, etc. Once you’ve set the attributes, save the file, set the class attribute for the path, and you’re good to go!

Mark up joints with attributes to determine the control shape that will be applied, whether the joint aims at another joint, the aim details, and so on.
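The markup itself is just extra attributes on the joints. A sketch of what that could look like (the attribute names here are invented for illustration, not the actual ARTv2 markup):

import maya.cmds as cmds


def mark_up_joint(joint, control_shape="circle", aim_target=None, aim_axis="x"):
    """Sketch: tag a joint with the data the joint mover builder would read."""
    # Which control shape gets built on top of this joint.
    cmds.addAttr(joint, longName="control_shape", dataType="string")
    cmds.setAttr(joint + ".control_shape", control_shape, type="string")

    # Optional aim setup: which joint this one aims at, and along which axis.
    if aim_target is not None:
        cmds.addAttr(joint, longName="aim_target", dataType="string")
        cmds.setAttr(joint + ".aim_target", aim_target, type="string")
        cmds.addAttr(joint, longName="aim_axis", dataType="string")
        cmds.setAttr(joint + ".aim_axis", aim_axis, type="string")


# Example usage on a hypothetical thigh/calf pair:
# mark_up_joint("thigh", control_shape="cube", aim_target="calf", aim_axis="x")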

With these changes, creating new components in the refactored code is incredibly easy and quick. I’m sure there are things that could still be better, but it’s definitely a marked improvement from where things were. So far, there are 11 components in the ARTv2 refactor. Some of the previous components, like the arm, have been broken down into separate arm and finger components.

The components in the ARTv2 refactor build.


Creating a component instance brings in the joint mover file, then builds a joint mover on top of the joints according to the markup data.

In the next post, I’ll go into the new user interfaces and how the refactor helps automate widget creation for components.

Refactoring Part 3: Proxy Model Maker by Jeremy Ernst

In the last post, you saw a glimpse of what one of the refactored components looks like, but you may have noticed that the proxy geo that comes with the components in the ARTv2 beta version was missing. One of the things I wanted to do when doing this refactor was simplify and separate responsibilities. The joint mover was doing too much. It was responsible not only for placing joints, but for defining the proxy geometry. This made the class huge and also made the joint mover carry around lots of baggage that was really only needed at the very beginning of the process.

The refactored ARTv2 leg component.

Also, some people may not even want or need the proxy geometry. They may just want to simply place a component’s joints and not have to worry about or fuss with that step. So, I decided to remove it from the component and separate it into its own tool. This way, people that want to use proxy geometry still can, but it is not included with the components.

At work, we use proxy geo extensively. It lets us get characters in game with only a rough concept sketch, and lets us validate and iterate on proportions and height quickly. It also provides a template for the modelers to build the final asset from. We wanted to add more features to the proxy geo to be able to validate form, which the current ARTv2 beta setup was too clunky to do. That’s when I decided to separate the proxy geo from the components and expand its feature set, in order to get better results when validating and iterating on proportions, form, and scale.

Basic shaping in the Proxy Model Maker tool

The stand-alone tool (meaning it can be used outside of ARTv2 altogether) is set up in a similar fashion to the ARTv2 refactor, in that it is component-based. For every ARTv2 component, there will likely be a matching proxy model maker component. As you can see in the above video, proxy geo components are no longer segmented. There is a simple “rig” that allows for some basic shaping.

In the component settings, you will see that there are sliders for the physique. These allow some basic detailing to rough in the form of the body.

Furthermore, there are shaper controls that can be used to further shape a component. These shaper controls support local mirroring (mirroring within the component).

Some components, like arms and legs, can be mirrored. Settings from any component can be copy/pasted to similar components, and transforms can be mirrored across components like arms and legs.

Settings, Transforms, and Shaper values can be copy/pasted and mirrored.


Some components can be mirrored.


So, in short, that’s what I’ve been working on (albeit not a ton, as other work-related tasks have popped up!). To be honest, while I know it’s a marked improvement over what was there initially, I still think it might be a bit limited compared to something like CG Monastery’s MRS, in that this caters more to a semi-realistic style. I also really like their lofted setup. For my shapers, I’m using wire deformers, which I think works well enough.

As you can see in the UI, the output of this tool will be a single mesh, without all of these deformers, that can then be rigged and skinned. If you use ARTv2, the plan is that this will be automated (it will know where joints should be placed based on the mesh, and it should know how to skin the mesh based on your ARTv2 component settings). This work hasn’t been completed yet, and I still need to do the head component, prop components (single joints), chain components (tails, tentacles, etc.), and the export mesh feature. If you don’t use ARTv2, the plan is to have the hooks there so you can automate that with your own stuff. Oh, also, all the meshes are already unwrapped, so you can paint a quick texture on there for color-blocking your proxy. Part of the plan for the export mesh function is to take the UVs and combine them onto a single set.


Lastly, here’s a demo of what I have so far:

If anyone is interested, I can go over the code stuff in a follow-up post. Let me know what you think, as I think this is a good direction, but honestly, I’m just winging it.

Refactoring Part 2: Basics by Jeremy Ernst

I didn’t mean for two months to pass between these posts, but c’est la vie. The last post went over some high level concepts of refactoring. In this post, I’ll start to show how the concepts are being applied to ARTv2. Let’s start with the base component class. This is an abstract base class that all components inherit from.

Abstract classes may not be instantiated, and require subclasses to provide implementations for the abstract methods
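As a quick illustration of that rule (a minimal Python sketch with example method names, not the actual ARTv2 base class):

import abc


class ART_Component(abc.ABC):
    """Sketch of an abstract component base class."""

    @abc.abstractmethod
    def add_joint_mover(self):
        """Each component must define how its joint mover gets built."""

    @abc.abstractmethod
    def build_rig(self):
        """Each component must define how its rig gets built."""


# ART_Component() raises a TypeError, while a subclass that implements both
# abstract methods can be instantiated normally.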

In the original (currently available) version of ARTv2, the base class was huge. It did way too much and was too cumbersome to sort through when debugging issues. One of the goals for the refactor was to do a better job of simplifying classes and their responsibilities. Below is the current state of the base class.

The current state of the refactored base class.

The base class contains the bare minimum amount of common functions and a few necessary properties. Properties are being used to handle lots of functionality when modifying aspects of a component. In the previous post, I mentioned how many ways I had implemented setting a parent of a module. This is now done via a property on the abstract base class.

For those that don’t know about properties, they’re essentially class attributes that contain functionality. There are plenty of good articles out there explaining them, like this one. Take, for example, the parent property. If I want to know a component’s parent bone, I can call inst.parent, which will use the getter function of the property decorator to return the parent bone. How that info is returned is defined in the property, like this:
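Roughly, and with the metanode attribute name assumed:

import maya.cmds as cmds


class ART_Component(object):
    """Sketch only; the real base class carries much more than this."""

    def __init__(self, metanode):
        self.metanode = metanode

    @property
    def parent(self):
        """Return the component's parent bone, as stored on the metanode."""
        return cmds.getAttr(self.metanode + ".parent")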

This is just returning the attribute value on the metanode (more on that later). If I want to set or change the parent of this component, I can do inst.parent = “new_bone”. This will call on the setter of the property, which contains a little more functionality.
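A sketch of the setter, where the validation and the joint mover call below stand in for the real extra functionality:

import maya.cmds as cmds


class ART_Component(object):
    """Same sketch as above, now with the setter filled in."""

    def __init__(self, metanode, joint_mover):
        self.metanode = metanode
        self.joint_mover = joint_mover  # composed helper, covered further down

    @property
    def parent(self):
        return cmds.getAttr(self.metanode + ".parent")

    @parent.setter
    def parent(self, new_parent):
        # Validate the requested parent before touching any data.
        if not cmds.objExists(new_parent):
            raise ValueError("{0} does not exist in the scene.".format(new_parent))

        # Store the new parent on the metanode first...
        cmds.setAttr(self.metanode + ".parent", new_parent, type="string")

        # ...then delegate the scene-side work to the joint mover.
        self.joint_mover.update_parent(new_parent)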

Compared to how I was doing this before, this is a significantly cleaner way to handle getting and setting the parent bone of a component. You may notice the setter calls on some extra functionality; the last line, which delegates to the joint mover, is the one of interest.

In order to separate out responsibilities, I’ve been using composition.

Composition means that an object knows another object, and explicitly delegates some tasks to it.

At the beginning of the base component class, the following code is executed:
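Reconstructed as a sketch (the method names are assumptions, not the literal code), it is something along these lines:

import maya.cmds as cmds


class JointMover(object):
    """Handles only joint mover work; it knows nothing about ART_Component."""

    def __init__(self, joint_mover_file, metanode):
        self.joint_mover_file = joint_mover_file
        self.metanode = metanode

    def add_joint_mover_to_scene(self):
        # Bring this component's joint mover Maya file into the scene.
        cmds.file(self.joint_mover_file, i=True)


class ART_Component(object):
    """Sketch of the start of the base component class."""

    joint_mover_file = None  # subclasses point this at their own Maya file

    def __init__(self, name=None, metanode=None):
        # Create or wrap the metanode that stores this component's data.
        if metanode is None:
            metanode = cmds.createNode("network", name="{0}_meta".format(name))
        self.metanode = metanode

        # Composition: hand the joint mover file and the metanode to the
        # JointMover class and delegate all joint mover work to it.
        self.joint_mover = JointMover(self.joint_mover_file, self.metanode)
        self.joint_mover.add_joint_mover_to_scene()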

The last two lines are an example of composition. An instance of one class is assigned to an attribute of another, which then delegates functionality to that class. So rather than include all the joint mover functionality in the base class, it gets separated out into its own class that only handles joint mover functions. The base class can then call upon that JointMover class to execute functions related to joint movers (in this case, adding the joint mover for this component to the scene). An important thing to note here is that the ART_Component knows about JointMover, but JointMover does not need to know anything about ART_Component. It is given all the information it needs on instantiation (the joint mover Maya file and the metanode that contains all the metadata it needs).

To finish this post, I’ll talk about the metadata/metanodes. While the current version of the tools uses these, it does not use them nearly enough, probably because I didn’t fully grasp how to use them properly. In my refactored implementation, they are a huge part of the component’s class. Any information the class returns when asked is pulled off the metanode. Any time data is changed, it is changed on the metanode. The properties mentioned earlier are essentially getting and setting metanode data, as well as handling any extra functionality that is needed.

For example, when setting a parent for a component, one of the first things it does, if the parent is valid, is set that data on the metanode.

When returning the parent, it returns the data from the metanode. Why does this matter? Well, the biggest reason is that it makes it incredibly easy to create an instance of a component and get access to its functionality, as long as you have a way of supplying all of the information the instance needs.

In the ARTv2 beta, I actually do not have a great way of getting instances of classes to access functionality. If I want to call on a component’s buildRig method, I do all this extra work to build up an instance of that class in order to do so. Now, a component can be instantiated with a metanode, which it will then use to populate its properties.

Furthermore, everything can be done via the command line now. Embarrassingly, this was not the case in the ARTv2 beta, where so much of the functionality was only accessible through the user interface. Here is an example of some of these concepts in action:

Creating a root and leg, and setting some properties on the leg.


Accessing an instance of a component by passing in its metanode.

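Roughly, the kind of command-line workflow shown in those clips, assuming hypothetical component subclasses and node names (Root, BipedLeg, and the metanode name below are just examples):

# Assuming Root and BipedLeg are component subclasses along the lines of the
# sketches above.

# Create components from scratch; each builds its own metanode and joint mover.
root = Root()
leg = BipedLeg()

# Properties read from and write to the metanode, so no UI is involved.
leg.parent = "root"
leg.thigh_twists = 2

# Later, or in a fresh Maya session, wrap the existing metanode to get the
# same functionality back.
leg = BipedLeg(metanode="BipedLeg_meta")
print(leg.parent)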

One thing you might notice is that proxy geometry is gone. More on that next time!

Refactoring Part 1: Concepts by Jeremy Ernst

I wanted to write some posts about refactoring ARTv2 as I go through it. Personally, I’ve learned a lot developing these tools over the last few years. When I started writing these tools, I had a very different outlook on writing code. This had a lot to do with the incredibly fast-paced production environment I was in. I definitely looked at code as a means to an end, and if it “worked”, it was done.

Depending on the tool or the scope of the tool, this might be fine. When I start thinking about our industry, though, where most of us are working on games that are considered services, a successful game (League of Legends, Fortnite, World of Warcraft, etc.) could span 10+ years. And when you start thinking about the tools and pipeline you are using now, and being stuck with them in 10+ years because your project is still successful, you’ll probably wish you had put more effort and thought into your code.

The neat thing about where ARTv2 is now is that it is much easier to look at the big picture and see where things can be fixed and cleaned up. When I first started writing it, I didn’t really have a big picture in mind. I’d develop a feature, then think of the next feature, and develop it. This led to lots of giant files with lots of duplication. So, now I’ll talk about what refactoring is, for anyone that doesn’t know, and why it’s important.

Code refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior.

When you tell your producers or lead that, it can be hard to sell them on the idea that this is a valuable endeavor. So, I actually made a slide deck going over the benefits of refactoring and giving some examples. I’ll start with a completely true example that came from ARTv2.

I was working on a character and a bug presented itself where joint and chain modules weren’t being parented correctly. I tracked it down and implemented a fix. A couple days later, I change the parent on one of those modules to a different joint, and the bug pops up again. I track it down and find that I had duplicated that parenting code into the change parent method. So I fix it again. Some time later, I go to create a mirror of a module, and sure enough, the bug pops up again. It also popped up when loading a template. There were four separate places where the parenting code was implemented. And this comes from the way I thought about code before.

Because things were implemented feature by feature, each feature was built as a complete tool, with code duplicated throughout and little regard for re-use or sharing common functions. Did the code work? Sure. But as the above example points out, it makes tracking down and fixing bugs a massive pain (and it’s just sloppy). When I ran into that same bug over and over, I realized that maybe I should do a pass and clean things up.

However, as I looked into it more, I realized I should just take this opportunity to really think things out and to also write unit tests as I went. If you don’t know what a unit test is, it’s basically code you write that tests code you’ve written :) A quick example would be if you had a function that took in an integer and added two to it. Your test would then call on that function with different inputs and maybe different types of inputs, and assert that your output assumptions are correct.

import unittest


def example_func(value):
    """Add two to the given value."""
    return value + 2


class MyTest(unittest.TestCase):

    def test_simple(self):
        self.assertEqual(example_func(2), 4)
        self.assertEqual(example_func(0), 2)
        self.assertEqual(example_func(-2), 0)

        # Here, we know this should fail, since we haven't added anything
        # to deal with strings.
        with self.assertRaises(TypeError):
            example_func("one")

    def runTest(self):
        self.test_simple()


test = MyTest()
test.runTest()

This is a super simple example, but hopefully it illustrates what a unit test does. If you know that each of your methods has a test, it becomes very easy to isolate problems and ensure problems don’t arise in the future.

Moving on, these are the main reasons for refactoring ARTv2:

  • Remove Duplication

  • Simplify Design

  • Add Automated Testing

  • Improve Extensibility

  • Separate Form and Function (UI from functionality)

I’ve talked about the first and third, so let me quickly explain the second using the duplication example. That implementation was something akin to this:
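A sketch of that shape, with made-up tool names:

# Each tool carried its own copy of the parenting logic -- fix a bug in one
# place and the same bug is still waiting in the others.


class CreateModuleTool(object):
    def create(self, module, parent):
        pass  # ...duplicate parenting code...


class ChangeParentTool(object):
    def change_parent(self, module, parent):
        pass  # ...duplicate parenting code, fixed once already...


class MirrorModuleTool(object):
    def mirror(self, module):
        pass  # ...duplicate parenting code, fixed twice already...


class LoadTemplateTool(object):
    def load(self, template):
        pass  # ...duplicate parenting code, still broken...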


A better implementation would be something like below, where each of those tools simply calls upon the module’s set_parent() method. This approach not only removes duplication, but simplifies the design. Any user who wants to set the parent on a module can probably guess correctly that such a method exists on the module class.
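Again as a sketch with made-up names, each tool just delegates to the one implementation:

class Module(object):
    def set_parent(self, parent):
        # The one and only implementation of the parenting logic.
        print("parenting module to {0}".format(parent))


class ChangeParentTool(object):
    def change_parent(self, module, parent):
        module.set_parent(parent)


class MirrorModuleTool(object):
    def mirror(self, module, parent):
        module.set_parent(parent)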


It all seems so very obvious now, but when I first started out writing this, my mind just didn’t think about the design of code at all. Being self-taught likely means I skipped over a ton of the basics that most programmers just know.

Lastly, extensibility. (Is that a word? Spellchecker seems to think not) Basically, this is designing your code in such a way that if the parameters or requirements change, code modifications are minimal. Here’s an example of that:


Here, we have an exporter that has a monolithic method for exporting bone animation, morph targets, and custom curves. Later, we now need to add the ability to export alembic caches. This export method is already a beast to dig through. It’s not at all easy to modify.


Here, we’ve refactored it so the main exporter just finds export object subclasses and runs their export function. Now, anyone can add a new subclass of the export object and implement its do_export method without having to worry about the rest. (This was just a mock-up example to illustrate a point!)
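A sketch of that refactored shape, mirroring the mock-up rather than any real ARTv2 exporter (names are illustrative):

class ExportObject(object):
    """Base class: each export type subclasses this and implements do_export."""

    def do_export(self, file_path):
        raise NotImplementedError


class BoneAnimationExport(ExportObject):
    def do_export(self, file_path):
        print("exporting bone animation to {0}".format(file_path))


class MorphTargetExport(ExportObject):
    def do_export(self, file_path):
        print("exporting morph targets to {0}".format(file_path))


class AlembicExport(ExportObject):
    """Adding alembic support is just a new subclass; the exporter is untouched."""

    def do_export(self, file_path):
        print("exporting alembic cache to {0}".format(file_path))


def export_all(file_path):
    # The main exporter just finds ExportObject subclasses and runs them.
    for export_class in ExportObject.__subclasses__():
        export_class().do_export(file_path)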

In the next post, I’ll go over some of the fundamental changes that have been made so far to ARTv2 with these things in mind. (also, apologies if any of this was dead obvious to any of you. Perhaps I am the last to catch on to all this good code design stuff)