ARTv2

Refactoring Part 5: Component Settings Widget by Jeremy Ernst

In the beta version of the tools, changing settings on a component required a settings widget, and that widget was hand-written for each component. Even worse, it was impossible to do anything without a graphical user interface. You could not change a property on a component via the command line; it had to be done through a UI. It. was. bad. You can see in the instantiation of the leg class that it even took in the instance of the user interface! These two things should be totally separated, and a component should never need to know about the user interface!

What the hell was I thinking?

In order to address this, obviously things were rethought from the ground up. This is covered in previous posts, but to summarize, a component creates a network node. It uses properties to get and set data on the network node. The UI simply displays that data or calls on the setter for a property if a widget value is changed. You can see the general flow of this below.

Components are instantiated either by passing in no network node, in which case one is created, or by passing in an existing network node, in which case the instance is built from that data. You can see an example of that here:
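
A minimal sketch of that pattern (the class, node name, and attribute here are illustrative, not the actual ARTv2 code):

import maya.cmds as cmds

class BipedLeg(object):
    """Illustrative component; the real class inherits from the component base class."""

    def __init__(self, network_node=None):
        if network_node is None:
            # no node passed in: create one and stamp it with default attributes
            network_node = cmds.createNode("network", name="leg_metanode")
            cmds.addAttr(network_node, longName="parent", dataType="string")

        # either way, the instance is driven entirely by its network node
        self.network_node = network_node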

Properties are an important element in this refactor. In order to have a component’s settings widget auto-generate, it simply gathers the properties of that class (including the inherited properties) and builds a widget off of those. By looking at the corresponding attribute types on the network node, it knows what type of widget to build. And because it’s a property, the changing of a widget value just calls on setattr! Here is what the code looks like for generating the property widgets and setting a property:
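
Here's a rough sketch of the idea. Note that this version keys the widget type off the Python value, rather than the network node's attribute types that the real code inspects, and the names are mine:

from PySide2 import QtWidgets

def build_settings_widget(component, layout):
    # walk the class (including inherited classes) looking for properties
    for name in dir(type(component)):
        if not isinstance(getattr(type(component), name, None), property):
            continue

        value = getattr(component, name)

        # pick a widget type for the value, then route any widget change
        # straight back through setattr so the property's setter runs
        if isinstance(value, bool):
            widget = QtWidgets.QCheckBox(name)
            widget.setChecked(value)
            widget.toggled.connect(lambda state, n=name: setattr(component, n, state))
        elif isinstance(value, int):
            widget = QtWidgets.QSpinBox()
            widget.setValue(value)
            widget.valueChanged.connect(lambda v, n=name: setattr(component, n, v))
        else:
            widget = QtWidgets.QLineEdit(str(value))
            widget.editingFinished.connect(
                lambda w=widget, n=name: setattr(component, n, w.text()))

        layout.addWidget(widget)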

Okay, so let's look at this in action. The UI has been redesigned to be faster and easier to use. In the clip below, when the Rig Builder is launched, it will create an asset and a root component. Then components can be added to the scene, which will add them to a list widget. Each item in the list widget has icons next to it for hiding that component in the scene, toggling aim mode, and toggling pin-in-place. Clicking on an item builds the settings widget for that component, which is generated from the component's properties. Any changes to those settings then call on the component property's setter, which handles what happens when a value is changed.

Since these changes, writing tools for the components has been a breeze. It's amazing what a difference good design makes to development efficiency. I'll go over the Rig Builder interface and its various tools next time.

Refactoring Part 4: Component Creation by Jeremy Ernst

It's been a while since I've posted an update on the progress of the tools. I'm pretty happy with where things are right now and where they're headed. I figured I'd make a post comparing what it took to create a new component in the beta version of the tools versus the new version.

In the beta version, a fair amount of code needed to be written and a fairly complex Maya file had to be created before you could have a component that could generate some joints. This was, frankly, due to bad design and a lack of forethought and planning.

The highlighted methods were ones that needed to be implemented in this case for the leg to work and generate joints.

The joint mover file was also complex and had some assumptions about hierarchy and naming. All bad.

There's a lot to address here. For instance, the class for a component should be much simpler and should not need to be building UI widgets and such. Lots of the bespoke functionality existed because there was no unified system, so each component might have its own way of pinning a component, or setting up aim mode, or whatever.

Here’s a class diagram of the refactored code.

There’s a lot to look at, but the important bit is BipedLeg and how little is needed to get that component creating some joints. To create a component, you simply need to define the unique properties of that component (ex: number of thigh twists) by adding them as attributes to the metanode and then implementing their property getters and setters. You also need to define/create a joint mover file, which is now incredibly easy.
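
For example, a unique property like the number of thigh twists boils down to something like this (the attribute and method names are assumptions, not the tool's actual code):

import maya.cmds as cmds

class BipedLeg(object):

    @property
    def thigh_twists(self):
        # the metanode is the single source of truth for this setting
        return cmds.getAttr(self.network_node + ".thigh_twists")

    @thigh_twists.setter
    def thigh_twists(self, value):
        # store the new value, then rebuild the twist joint movers to match
        cmds.setAttr(self.network_node + ".thigh_twists", value)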

For the new joint mover file, you start by creating the joints you want your component to have in its max configuration (there are exceptions to this, like the spine and chain, where you actually create the min configuration).

Create the joints you want your component to create, and give them a name (which the user can then overwrite if they wish).

Once you've created your joints and ensured your joint orients are nice and tidy, there is a tool to mark the joints up with attributes. These attributes will build the joint mover controls, determine how aim mode is set up, etc. Once you've set the attributes, save the file, set the class attribute for the path, and you're good to go!

Mark up joints with attributes to determine the control shape that will be applied, if the joint aims at another joint, the aim details and so on.
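
In script form, the markup might look something like this (the attribute names here are illustrative, not the tool's actual names):

import maya.cmds as cmds

def markup_joint(joint, control_shape="circle", aim_target=None):
    # stamp the joint with the attributes the joint mover build will read
    cmds.addAttr(joint, longName="control_shape", dataType="string")
    cmds.setAttr(joint + ".control_shape", control_shape, type="string")

    if aim_target:
        # record which joint this one should aim at when aim mode is on
        cmds.addAttr(joint, longName="aim_target", dataType="string")
        cmds.setAttr(joint + ".aim_target", aim_target, type="string")

# e.g. mark the thigh to aim at the calf
markup_joint("thigh_jnt", control_shape="circle", aim_target="calf_jnt")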

With these changes, creating new components in the refactored code is incredibly easy and quick. I'm sure there are things that could still be better, but it's definitely a marked improvement from where things were. So far, there are 11 components in the ARTv2 refactor. Some of the previous components, like the arm, have been broken down into arm and finger components.

The components in the ARTv2 refactor build.

Creating a component instance brings in the joint mover file, then builds a joint mover on top of the joints according to the markup data.

In the next post, I’ll go into the new user interfaces and how the refactor helps automate widget creation for components.

Refactoring Part 3: Proxy Model Maker by Jeremy Ernst

In the last post, you saw a glimpse of what one of the refactored components looks like, but you may have noticed that the proxy geo that comes with the components in the ARTv2 beta version was missing. One of the things I wanted to do when doing this refactor was simplify and separate responsibilities. The joint mover was doing too much. It was responsible not only for placing joints, but for defining the proxy geometry. This made the class huge and also made the joint mover carry around lots of baggage that was really only needed at the very beginning of the process.

Also, some people may not even want or need the proxy geometry. They may just want to place a component's joints and not have to fuss with that step. So, I decided to remove it from the component and separate it into its own tool. This way, people that want to use proxy geometry still can, but it is not included with the components.

At work, we use proxy geo extensively. It lets us get characters in game with only a rough concept sketch, and lets us validate and iterate on proportions and height quickly. It also provides a template for the modelers to build the final asset from. We wanted to add more features to the proxy geo to be able to validate form, which the current ARTv2 beta setup was too clunky to do. That's when I decided to separate proxy geo out from the components and add the features that would allow validation and iteration on proportions, form, and scale.

Basic shaping in the Proxy Model Maker tool

The stand-alone tool (meaning it can be used outside of ARTv2 altogether) is set up in a similar fashion to the ARTv2 refactor, meaning it is component-based. For every ARTv2 component, there will likely be a matching proxy model maker component. As you can see in the above video, proxy geo components are no longer segmented. There is a simple "rig" that allows for some basic shaping.

In the component settings, you will see that there are sliders for the physique. These allow some basic detailing to rough in the form of the body.

Furthermore, there are shaper controls that can be used to further shape a component. These shaper controls support local mirroring (mirroring within the component).

Some components, like arms and legs, can be mirrored. Settings from any component can be copy/pasted to similar components, and transforms can be mirrored across paired components.

Settings, Transforms, and Shaper values can be copy/pasted and mirrored.

Some components can be mirrored.

So, in short, that's what I've been working on (albeit not a ton, as other work-related tasks have popped up!). To be honest, while I know it's a marked improvement over what was there initially, I still think it might be a bit limited compared to something like CG Monastery's MRS, in that it caters more to a semi-realistic style. I also really like their lofted setup. For my shapers, I'm using wire deformers, which I think work well enough.

As you can see in the UI, the output of this tool will be a single mesh, free of all these deformers, that can then be rigged and skinned. Now, if you use ARTv2, the plan is that this will be automated (it will know where joint placement should go based on the mesh, and should know how to skin it based on your ARTv2 component settings). This work hasn't been completed yet, and I still need to do the head component, prop components (single joints), chain components (tails, tentacles, etc.), and the export mesh feature. If you don't use ARTv2, then the plan is to have the hooks there so you can automate that with your own stuff. Oh, also, all the meshes are already unwrapped, so you can paint a quick texture on there for color-blocking your proxy. Part of the plan for the export mesh function is to take the UVs and combine them onto a single set.

Lastly, here’s a demo of what I have so far:

If anyone is interested, I can go over the code stuff in a follow-up post. Let me know what you think, as I think this is a good direction, but honestly, I’m just winging it.

Refactoring Part 2: Basics by Jeremy Ernst

I didn't mean for two months to pass between these posts, but c'est la vie. The last post went over some high-level concepts of refactoring. In this post, I'll start to show how the concepts are being applied to ARTv2. Let's start with the base component class. This is an abstract base class that all components inherit from.

Abstract classes may not be instantiated, and require subclasses to provide implementations for the abstract methods
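
In code, that contract looks something like this (a Python 3 sketch; the tools themselves ran under Maya's Python 2, which uses the __metaclass__ form instead):

from abc import ABC, abstractmethod

class ART_Component(ABC):

    @abstractmethod
    def build_joint_mover(self):
        """Every component must implement how its joint mover gets built."""

class BipedLeg(ART_Component):

    def build_joint_mover(self):
        print("building the leg joint mover")

# ART_Component() raises TypeError; BipedLeg() works fine
BipedLeg().build_joint_mover()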

In the original (currently available) version of ARTv2, the base class was huge. It did way too much and was too cumbersome to sort through when debugging issues. One of the goals for the refactor was to do a better job of simplifying classes and their responsibilities. Below is the current state of the base class.

The base class contains the bare minimum amount of common functions and a few necessary properties. Properties are being used to handle lots of functionality when modifying aspects of a component. In the previous post, I mentioned how many ways I had implemented setting a parent of a module. This is now done via a property on the abstract base class.

For those that don't know about properties, they're essentially class attributes that contain functionality. There are plenty of good articles out there explaining them, like this one. Take, for example, the parent property. If I want to know a component's parent bone, I can call inst.parent, which will use the getter function of the property decorator to return the parent bone. How that info is returned is defined in the property, like this:
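
A sketch of the getter (the metanode attribute name follows the prose; the real code lives on the base class):

import maya.cmds as cmds

class ART_Component(object):

    @property
    def parent(self):
        # simply return the attribute value stored on the metanode
        return cmds.getAttr(self.network_node + ".parent")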

This is just returning the attribute value on the metanode (more on that later). If I want to set or change the parent of this component, I can do inst.parent = "new_bone". This will call on the setter of the property, which contains a little more functionality.
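
Continuing the sketch, the setter might look like this (the validation and the joint mover hand-off are assumptions based on what follows):

    @parent.setter
    def parent(self, parent_bone):
        if not cmds.objExists(parent_bone):
            cmds.warning(parent_bone + " does not exist.")
            return

        # store the new parent on the metanode first
        cmds.setAttr(self.network_node + ".parent", parent_bone, type="string")

        # then delegate the scene-side work to the joint mover
        self.joint_mover.reparent(parent_bone)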

Compared to how I was doing this before, this is a significantly cleaner way to handle getting and setting the parent bone of a component. You may notice the setter calls on some extra functionality; the hand-off to self.joint_mover in the last line is the interesting part.

In order to separate out responsibilities, I’ve been using composition.

Composition means that an object knows another object, and explicitly delegates some tasks to it.

At the beginning of the base component class, the following code is executed:
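
A sketch of that initialization (JointMover's internals and the file path are placeholders):

import maya.cmds as cmds

class JointMover(object):
    """Knows nothing about components; it is handed everything it needs."""

    def __init__(self, mover_file, network_node):
        self.mover_file = mover_file
        self.network_node = network_node

    def add_to_scene(self):
        # import the joint mover Maya file for this component
        cmds.file(self.mover_file, i=True)

class ART_Component(object):

    joint_mover_file = "path/to/component_mover.ma"  # set by each subclass

    def __init__(self, network_node=None):
        self.network_node = network_node or cmds.createNode("network")

        # composition: delegate all joint mover work to a dedicated class
        self.joint_mover = JointMover(self.joint_mover_file, self.network_node)
        self.joint_mover.add_to_scene()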

The last two lines are an example of composition. An instance of a class is assigned to an attribute, which then delegates functionality to that class. So rather than include all the joint mover functionality in the base class, it gets separated out into its own class that only handles joint mover functions. Then the base class can call upon that JointMover class to execute functions related to joint movers (in this case, adding the joint mover for this component to the scene). An important thing to note here is that ART_Component knows about JointMover, but JointMover does not need to know anything about ART_Component. It is given all the information it needs on instantiation (which is the joint mover Maya file and the metanode that contains all the metadata it needs).

To finish this post, I'll talk about the metadata/metanodes. While the current version of the tools uses these, it doesn't use them nearly enough, probably because I didn't fully grasp how to use them properly. In my refactored implementation, they are a huge part of the component's class. Any information the class returns when asked is pulled off the metanode. Anytime data is changed, it is set on the metanode. The properties mentioned earlier are essentially getting and setting metanode data, as well as doing any extra needed work.

For example, when setting a parent for a component, one of the first things the setter does, if the parent is valid, is set that data on the metanode.

When returning the parent, it returns the data from the metanode. Why does this matter? Well, the biggest reason is that it becomes incredibly easy to make an instance of a component, and get access to its functionality, when the metanode can supply all the information an instance of the class would need.

In the ARTv2 beta, I actually do not have a great way of getting instances of classes to access functionality. If I want to call on a component’s buildRig method, I do all this extra work to build up an instance of that class in order to do so. Now, a component can be instantiated with a metanode, which it will then use to populate its properties.

Furthermore, everything can be done via the command line now. Embarrassingly, this was not the case in ARTv2 beta. So much of the functionality was only accessible through the user interface. Here is an example of some of these concepts in action:
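
In script form, the clips below boil down to something like this (the class names and properties are stand-ins):

# create a root and a leg, then set some properties on the leg
root = ART_Root()
leg = ART_BipedLeg()

leg.parent = "root"
leg.thigh_twists = 2

# later, get a working instance back by passing in the existing metanode
leg = ART_BipedLeg(network_node="ART_BipedLeg_metanode")
print(leg.parent)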

Creating a root and leg, and setting some properties on the leg.

Accessing an instance of a component by passing in its metanode.

One thing you might notice is that proxy geometry is gone. More on that next time!

Refactoring Part 1: Concepts by Jeremy Ernst

I wanted to write some posts about refactoring ARTv2 as I go through it. Personally, I’ve learned a lot developing these tools over the last few years. When I started writing these tools, I had a very different outlook on writing code. This had a lot to do with the incredibly fast-paced production environment I was in. I definitely looked at code as a means to an end, and if it “worked”, it was done.

Depending on the tool or the scope of the tool, this might be fine. When I start thinking about our industry, though, where most of us are working on games that are considered services, a successful game (League of Legends, Fortnite, World of Warcraft, etc.) could span 10+ years. And when you start thinking about the tools and pipeline you are using now, and being stuck with them in 10+ years because your project is still successful, you'll probably wish you had put more effort and thought into your code.

The neat thing about where ARTv2 is now, is that it is much easier to look at the big picture and see where things can be fixed and cleaned up. When I first started writing it, I didn’t really have a big picture in mind. I’d develop a feature, then think of the next feature, and develop it. This led to lots of giant files with lots of duplication. So, now I’ll talk about what refactoring is for anyone that doesn’t know, and why it’s important.

Code refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior.

When you tell your producers or lead that, it can be hard to sell them on the idea that this is a valuable endeavor. So, I actually made a slide deck going over the benefits of refactoring and giving some examples. I’ll start with a completely true example that came from ARTv2.

I was working on a character and a bug presented itself where joint and chain modules weren't being parented correctly. I tracked it down and implemented a fix. A couple of days later, I changed the parent on one of those modules to a different joint, and the bug popped up again. I tracked it down and found that I had duplicated that parenting code into the change parent method. So I fixed it again. Some time later, I went to create a mirror of a module, and sure enough, the bug popped up again. It also popped up when loading a template. There were four separate places where the parenting code was implemented. And this comes from the way I thought about code before.

By implementing things on a feature-to-feature approach, each feature was built as a complete tool. Each feature would have code duplicated throughout with little regard to re-use or sharing common functions. Did the code work? Sure. But as the above example points out, it makes tracking down and fixing bugs a massive pain (and it’s just sloppy). When I ran into that same bug over and over, I realized that maybe I should do a pass and clean things up.

However, as I looked into it more, I realized I should just take this opportunity to really think things out and to also write unit tests as I went. If you don’t know what a unit test is, it’s basically code you write that tests code you’ve written :) A quick example would be if you had a function that took in an integer and added two to it. Your test would then call on that function with different inputs and maybe different types of inputs, and assert that your output assumptions are correct.

import unittest

def example_func(value):
    return value + 2

class MyTest(unittest.TestCase):

    def test_simple(self):
        self.assertEqual(example_func(2), 4)
        self.assertEqual(example_func(0), 2)
        self.assertEqual(example_func(-2), 0)

        # here, we know this should fail, since we haven't added anything to deal with strings.
        with self.assertRaises(TypeError):
            example_func("one")

    def runTest(self):
        self.test_simple()

test = MyTest()
test.runTest()

This is a super simple example, but hopefully it illustrates what a unit test does. If you know that each of your methods has a test, it becomes very easy to isolate problems and ensure problems don’t arise in the future.

Moving on, these are the main reasons for refactoring ARTv2:

  • Remove Duplication

  • Simplify Design

  • Add Automated Testing

  • Improve Extensibility

  • Separate Form and Function (UI from functionality)

I’ve talked about the first and third, so let me quickly explain the second using the duplication example. That implementation was something akin to this:
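
Sketched in code, it looked something like this (the function and attribute names are mine, for illustration):

import maya.cmds as cmds

# the same parenting logic, copy/pasted into four different tools

def change_parent(module, new_bone):
    cmds.setAttr(module.metanode + ".parent", new_bone, type="string")
    cmds.parent(module.mover_group, new_bone)

def mirror_module(module, new_bone):
    # ...mirroring code...
    cmds.setAttr(module.metanode + ".parent", new_bone, type="string")
    cmds.parent(module.mover_group, new_bone)

def load_template(module, new_bone):
    # ...template loading code...
    cmds.setAttr(module.metanode + ".parent", new_bone, type="string")
    cmds.parent(module.mover_group, new_bone)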

A better implementation would be something like below, where each of those tools simply calls upon the module’s set_parent() method. This approach not only removes duplication, but simplifies the design. Any user who wants to set the parent on a module can probably guess correctly that such a method exists on the module class.
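
Sketched the same way (again, illustrative names):

import maya.cmds as cmds

class Module(object):

    def set_parent(self, new_bone):
        # the one and only place the parenting logic lives
        cmds.setAttr(self.metanode + ".parent", new_bone, type="string")
        cmds.parent(self.mover_group, new_bone)

# every tool now just calls through to the module
def mirror_module(module, new_bone):
    # ...mirroring code...
    module.set_parent(new_bone)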

It all seems so very obvious now, but when I first started out writing this, my mind just didn’t think about the design of code at all. Being self-taught likely means I skipped over a ton of the basics that most programmers just know.

Lastly, extensibility. (Is that a word? Spellchecker seems to think not) Basically, this is designing your code in such a way that if the parameters or requirements change, code modifications are minimal. Here’s an example of that:

Here, we have an exporter that has a monolithic method for exporting bone animation, morph targets, and custom curves. Later, we now need to add the ability to export alembic caches. This export method is already a beast to dig through. It’s not at all easy to modify.

Here, we've refactored it so the main exporter just finds export object subclasses and runs their export function. Now, anyone can add a new subclass of the export object and implement its do_export method and not have to worry about the rest. (This was just a mock-up example to illustrate a point!)
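
A toy version of that pattern (pure illustration, to match the mock-up rather than any real ARTv2 code):

class ExportObject(object):
    """Base class: every exportable data type implements do_export."""

    def do_export(self, file_path):
        raise NotImplementedError

class BoneAnimationExport(ExportObject):

    def do_export(self, file_path):
        print("exporting bone animation to " + file_path)

class AlembicExport(ExportObject):
    """Adding alembic support is one new subclass; the exporter is untouched."""

    def do_export(self, file_path):
        print("exporting alembic cache to " + file_path)

def export_all(file_path):
    # the main exporter just finds the subclasses and runs their export function
    for export_class in ExportObject.__subclasses__():
        export_class().do_export(file_path)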

In the next post, I’ll go over some of the fundamental changes that have been made so far to ARTv2 with these things in mind. (also, apologies if any of this was dead obvious to any of you. Perhaps I am the last to catch on to all this good code design stuff)

ARTv2 Beta Available Now by Jeremy Ernst

I really didn’t want to release ARTv2 until I was entirely happy with it, but I’ve had a ton of people requesting it, so I finally caved. This is not the final version! I am in the midst of doing a huge refactor to clean things up a ton. Check out the roadmap post here.

Head over to the ARTv2 page to read the rest of the details.

ARTv2 Space Switcher Updates by Jeremy Ernst

Over the holiday break, I worked on some updates to the space switcher, which was originally written back in February of 2018. This was to address some feedback from animation at work, and to fix issues with cycles happening even when spaces were inactive (for instance, if you had a space on the hand for a weapon, and a space on the weapon for the hand, this would cycle even if only one of those spaces was active). I ended up re-designing the system from scratch, rewriting most of the code, and redesigning the interfaces to be much simpler.

I forgot to point it out in the video, but when creating global spaces, you can save and load those out as templates. So if you just want to create a template for your project's space switch setup, you can do that. It's also scriptable, so when building the rig, you can add a call to that class, passing in the template file, and it will build the spaces as part of the rig build.

Check it out and let me know what you think :) (Hopefully, the animators at work like the updates!)

(Oh, and since it keeps coming up: there are two major things left to do before releasing. The first is to document the hell out of everything. That's in progress. The second is to make sure the updater tool is still working, since it's been about two years since I wrote it :/ Once both of those are done, it's going live!)

New Feature: Pose Library by Jeremy Ernst

This feature took some time. Between various tasks popping up while trying to work on it, and having to re-learn a bunch of math stuff, it took way longer than I would have liked, but it's essentially complete. With this feature done, I've got some bug fixes I want to hit, some documentation I want to write (well, not want to, but need to), and then I want to get all of this stuff out there.

Take a look at the pose library tools and let me know what you think!


More fun in PySide! by Jeremy Ernst

This week's adventure involves doing something that you would think would be super simple, but instead requires image manipulation! I wanted the icons of the tabs of my animation control picker to darken if they were not selected. With the image below, it isn't as clear as it could be as to which character tab is currently active. I added some height margins, but it would sure be a whole lot clearer if the images weren't all the same value!

It became evident that I was going to need to take some of the knowledge from last week and apply it to this problem. So let's dive into that.

First, I hooked up the tabWidget's currentChanged signal to a new function that would do the image manipulation and set the icon. In this new function, the first thing I do is get the total number of character tabs, as well as the currently selected tab.

As I loop through the tabs, if the tab I am on in the loop is the currently selected tab, I access a property on the tabWidget I created that will give me the QIcon in memory, so that I can set the tab icon back to the original image on disk.

If the tab is not the currently selected tab, I get the QIcon of the tab, then get the pixmap of the QIcon, and then convert that to a QImage.

This is the fun part! Now, I loop through the x and y positions of the image, sampling the RGB value of the pixel at each position, darken that value using QColor's darker function, and then set the pixel on our temp QImage at the same x, y location to that new, darker color. This continues until all pixels have been read, darkened, and set on the new QImage.

Now all that is left to do is convert this QImage to a QPixmap and set the tab icon to that new, darkened image (which only exists in memory, not on disk).

The end result now gives me exactly what I was looking for!

Much more clear!

Here's the full function as well:
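
Here's a sketch of it (the original_icons property and the widget names are assumptions based on the description above):

from PySide2 import QtGui

def update_tab_icons(self, index):
    """Hooked to tabWidget.currentChanged; darkens every inactive tab icon."""

    for i in range(self.tabWidget.count()):
        if i == index:
            # restore the original full-brightness QIcon held in memory
            self.tabWidget.setTabIcon(i, self.tabWidget.property("original_icons")[i])
            continue

        # get the tab's QIcon, its pixmap, and convert that to a QImage
        icon = self.tabWidget.tabIcon(i)
        image = icon.pixmap(self.tabWidget.iconSize()).toImage()

        # sample each pixel, darken it with QColor's darker(), and write it back
        for x in range(image.width()):
            for y in range(image.height()):
                color = QtGui.QColor(image.pixel(x, y))
                image.setPixel(x, y, color.darker(150).rgb())

        # convert the QImage back to a QPixmap and set the darkened icon
        self.tabWidget.setTabIcon(i, QtGui.QIcon(QtGui.QPixmap.fromImage(image)))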

Hope this helps anyone else looking to do something similar! 


Customizing QToolTip by Jeremy Ernst

This is not a post about style-sheets. I wish it were that easy to add a background image to a QToolTip, but it's not.

I wanted to look into adding background images to tool-tips. The first thing I found was that you can use HTML as your tool-tip text to display an image in the tool-tip. But I didn't want to just display an image; I wanted to display an image with text on top of it.

Here's how you can simply display an image as your tool-tip using HTML:
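
(Qt tool-tips accept rich text, so an img tag is all it takes; the path is a placeholder.)

button.setToolTip("<img src='C:/tooltips/my_image.png'>")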

With this method, I would need to author tons of images just for tool-tips, which is crazy. I started digging into generating my own image using QPainter. While looking at the documentation, I found that QPainter had all sorts of handy functions to draw things, and this could all then be saved out via a QPixmap. This worked really well! I supply an image to paint as the background, then draw text on top, then save that out as an image. I was pretty stoked when I got to this point. Here's the code for that:
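
A reconstruction of that function (fonts, colors, and layout values are placeholders):

from PySide2 import QtCore, QtGui

def generate_tooltip_image(text, background_path, output_path):
    # paint the background image into a pixmap
    pixmap = QtGui.QPixmap(background_path)

    # draw the tool-tip text on top of it
    painter = QtGui.QPainter(pixmap)
    painter.setPen(QtGui.QColor(230, 230, 230))
    painter.setFont(QtGui.QFont("Arial", 10))
    painter.drawText(pixmap.rect().adjusted(10, 10, -10, -10),
                     QtCore.Qt.TextWordWrap, text)
    painter.end()

    # save the composed image out to disk
    pixmap.save(output_path)
    return output_path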

My intention was to have one tool-tip image that gets overwritten with a new image anytime a tool-tip is requested. However, when I had widgets call on this method to generate their tool-tip image, it would only happen when that interface was instantiated, meaning the singular tool-tip would get stomped, and all widgets would end up with the same tool-tip.

The next idea was to give this method a unique filename to save out. But then I could end up with hundreds of tool-tip images, which isn't really much better than authoring my own. I really wanted the tool-tip image to be generated when a tool-tip was requested by a widget. To do this, I needed to intercept the ToolTip QEvent. Okay, fine. How can I do this?

I created another function that is my own tool-tip event handler.
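
Something along these lines (a sketch; it leans on the generate_tooltip_image function from above, and the constants and property names are assumptions):

from PySide2 import QtCore

# assumed constants: a background image per size, and the single image file
# that gets overwritten on every request
BACKGROUNDS = {"small": "C:/tooltips/bg_small.png", "large": "C:/tooltips/bg_large.png"}
TOOLTIP_IMAGE_PATH = "C:/tooltips/tooltip.png"

def tooltip_event(widget, event):
    if event.type() == QtCore.QEvent.ToolTip:
        # regenerate the single tool-tip image for this widget, on demand
        text = widget.property("tooltip_text")
        size = widget.property("tooltip_size") or "small"
        image = generate_tooltip_image(text, BACKGROUNDS[size], TOOLTIP_IMAGE_PATH)
        widget.setToolTip("<img src='{0}'>".format(image))

    # everything falls through to the class's default event handling
    return type(widget).event(widget, event)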

Now for the last steps. When creating a widget, I reassign the widget's event method to this method instead.
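
The hookup, sketched (the button and property values are just examples):

from functools import partial
from PySide2 import QtWidgets

button = QtWidgets.QPushButton("Build Rig")

# reassign the event method, passing the button itself in as the first argument
button.event = partial(tooltip_event, button)
button.setProperty("tooltip_text", "Builds the rig for all components.")
button.setProperty("tooltip_size", "large")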

The tooltip_text property holds the text I want displayed on top of the image. The tooltip_size property, which is optional, determines which background image gets used. The line where the button's event method is reassigned passes the widget itself in as an argument, so that I can query the above properties and set the tool-tip on that widget. This means there is only ever one tool-tip image, and it gets generated whenever a ToolTip event is intercepted (if that widget has reassigned its event method).

Below is what the end result looks like. Keep in mind it's the same image file being displayed on all of the buttons.

This was one of those things where I had the idea, and went down the rabbit hole until I figured it out. Is it super useful? Not really. But it adds an extra 5% of polish to my tool-tips I suppose!

ARTv2: New nested directory and existing directory support! by Jeremy Ernst

I've been wanting to do this for a while now and finally got around to it. In ARTv1, the tools could publish to a project directory, and that was it. In the initial implementation in ARTv2, you could publish to a project directory and one sub-directory of your creation. Now, you can create limitless sub-directories under your project!

Furthermore, you can use an existing directory structure, like your source control directory, as the tool's project path. Then you can publish into that existing directory structure so you can keep your existing source assets and rigs all in the same place!

Also shown in the videos is the new UI styling. It's still a work in progress, but most of the rigging tools are re-styled.

Left: Old Style

Hover states

As always, thanks to Epic and Riot for allowing me to share these tools with you all. Go support their games!

ARTv2: New Controls by Jeremy Ernst

I recently got some feedback from an animator that they found the new animation controls to be too busy. I can totally see that. I wanted to have controls that had some depth to them, but it really does add a bunch of clutter. The controls are taken from the joint mover curves below:

So, if you had a character fully in FK, that's basically what you'd see (though the controls would be colored differently).

After working with him to find a scheme he liked, I've added a new feature that supports adding custom control shapes to the joint movers. These curve shapes hook up to the existing joint movers, and when the rig gets built, if connections exist, it will use those connections as the template for building rig controls. If not, it defaults to the joint mover curves.

So now, I can add a control shape to the joint mover file and get it where I want it. Then I parent it under the corresponding joint mover and select the joint mover and the new control and run the following to hook up the connection.

import maya.cmds as cmds

# selection order: the joint mover first, then the new control curve
joint_mover = cmds.ls(sl=True)[0]
control = cmds.ls(sl=True)[1]

# message connections tell the rig build which curve to use for each control
cmds.addAttr(joint_mover, ln="fk_rig_control", at="message")
cmds.connectAttr(control + ".message", joint_mover + ".fk_rig_control")

cmds.addAttr(joint_mover, ln="ik_rig_control", at="message")
cmds.connectAttr(control + ".message", joint_mover + ".ik_rig_control")

This, in turn, gives me these attributes on the joint mover:

The end result looks like this now once the rig is built:

IK controls

FK controls

Definitely a cleaner look. This allows the controls to always be present as soon as you start adding modules, which means you can edit those control shapes and those edits will persist. No more making post-scripts to scale controls or manually doing it in the rig after build!

There is also a new tool in the rig creator interface for accessing these control shapes for editing:


Massive update on ARTv2 progress. by Jeremy Ernst

It's been a long time since an update, and a lot of changes have gone into ARTv2. These changes aren't up for grabs yet, but I wanted to show what progress has been made. Probably the biggest change, one that has been requested for a long time, is support for Y-up. ARTv2 now works in Y or Z up!

The first changes are on the rigging side, with a completed chain module, improvements to the arm module, and some other new features.

The next large batch of changes have been for animation. Lots of new tools! Take a look!

I'm still not entirely sure what the final platform will be for releasing these tools, whether it will be GitHub, or the UE4 Marketplace, or something else entirely. I want to thank, again, Epic Games for allowing me to take these tools with me when I left, and Riot Games for allowing me to continue to share the work I do on them with the community.

The next feature I am working on right now is the pose library. I'll do some updates on that when I have more to show. I feel like once that feature is in, and it's been battle-tested in a few different versions of Maya and on different operating systems, I could do an initial release. Hopefully that means that within two months' time, these tools will be out and available for free.

Also, thanks to Ky Bui for providing the new proxy geometry and associated physique shapes! 


I'm still alive. by Jeremy Ernst

The transition to Riot came with moving across the country, selling a house, buying a house, and a ton of other shit that life throws at you, so I've been busy to say the least. I forgot how much moving sucks!

However, ARTv2 development is picking back up and lots of progress has been made in the last month or so. The chain module is currently in progress and some other new features have been added.

Hotkey Editor

The hotkey editor allows you to assign hotkeys to ARTv2 commands and functions.


Custom Pickwalking

Each module has pickwalking setup between controls within that module. However, pickwalking between different modules can be setup by the user using these tools.

This stuff isn't on github yet, but I'll post an update once it is. Once I wrap the chain module and tidy up some documentation, I will do a big git update (before Christmas).


ARTv2 Now on Github! (and other news) by Jeremy Ernst

An alpha build of ARTv2 is now up on GitHub! This build is not fully feature complete, but if you're interested in testing the tools out and seeing what's there, or using it as a starting point to build from for your own pipeline, then go grab it! You'll need to have your GitHub ID linked with Epic.

https://www.unrealengine.com/ue4-on-github

Once the tools are feature complete (for a minimum viable product), they will be released on the Unreal Engine Marketplace for free. That should happen later this year. In terms of reaching MVP, there isn't too much left. Below is what is needed before it will go onto the marketplace:

  • Chain Module

  • Pose Library Tool

  • Space Switcher Tool

  • Full Documentation


Now, for the other news. I am leaving Epic Games. At the beginning of the year, I definitely didn't think I'd be saying that, but I was offered a really great opportunity. In a couple months, I will be heading to Riot Games as a Principal Technical Artist. If you're concerned about ARTv2, don't be! Epic has been amazing with all of this and is letting me continue development of the tools. I was blown away by this gesture. So I will be continuing to work on them and then release them on the UE4 Marketplace for free when they are farther along. It's a win-win for everyone! I get to take the tools with me on my new adventure, Epic gets to still get updates on the tools, and the UE4 community will also be getting the tools!


ARTv2 Export Skeletal Meshes Tool by Jeremy Ernst

One of the things that ARTv1 does not have at all is any type of tool to export skeletal meshes. On Paragon, our export process is fairly complex, as we have to manage multiple levels of detail (LODs), with bone removals, weight transfers, and LOD poses. So, for ARTv2, I wrote a tool that handles all of this. Originally, this was part of the publish process, but I broke it out into its own unique tool.

With ARTv2, there is no longer an export file and an anim rig file, just the one rig file. Because of that, the export tool is now made to work with the rig itself. Once a rig is built, if you open or edit the rig file, and launch the rig creator tools, there is now the option to export skeletal meshes:

When you go to hit the button, it will prompt you to make sure the file is saved before continuing. Next, a temporary file is created that strips out the rigging and sets the skeleton back to model pose. This temporary file is where you will be working when setting up your export data.

Once the temporary file is created, you are then presented with this UI:

The first thing you want to do is choose which meshes are associated with this particular LOD. There is always a LOD 0, but additional LODs can be added or removed using the top right buttons.

Then you can choose the file path for the exported FBX.

If you do not need to remove any bones from LOD 0 (likely the case), then that is all you need to do here, and you could export at this time. However, to show the other features, I will add another LOD.

Now I can choose to remove bones, which presents me with another interface. In this interface, we can add entries for bone removal, which will also allow us to choose which bone to transfer the weighting to for all of the removed bones. There is logic here that prevents any mishaps or impossibilities, like assigning weight to a bone that is being removed, etc.

You can also handle LOD poses in this interface. Since we are removing all of the finger bones in this LOD, we may want to pose the fingers before doing so. (This prevents that paddle hands look when the model switches to the LOD in game).

This tool allows you to save that pose and will apply it when doing the export before transferring the weighting and removing the bones.

This file also has morph targets on the arms currently. The upper arm morph mesh exists in the scene while the lower arm morph mesh has been deleted. More on that later.

At this point, we are ready to export.

After the process is done, it reopens the rig file. All of those settings you set up for your export? Those get immediately transferred and set in your rig file as well, so the next time you export, all of the settings are already there.

Ok, so those morph targets. Because LOD 1 is removing bones and transferring weighting, it gets a bit difficult to deal with morphs, especially if the morph meshes don't exist. When the process gets to LOD 1, it has to export the skin weights, pose the mesh with the LOD pose, delete mesh history, import the skin weights, transfer weighting, and remove bones. In that process, if a blendshape node exists on the mesh, it determines whether or not the morph mesh still exists. If not, it creates it by turning on the attr in the blendshape and duplicating the render mesh. Once this is done for all meshes with morphs, it will reapply the blendshapes before importing the skin weights (after deleting the mesh history).
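
The morph-mesh recreation step might look something like this (a sketch; the real tool does more bookkeeping around it):

import maya.cmds as cmds

def rebuild_missing_morph_meshes(mesh):
    """Recreate any blendshape target meshes that have been deleted from the scene."""
    rebuilt = []
    for blendshape in cmds.ls(cmds.listHistory(mesh), type="blendShape"):
        # the weight aliases are the target names
        for target in cmds.listAttr(blendshape + ".weight", multi=True) or []:
            if cmds.objExists(target):
                continue
            # turn the target on, duplicate the render mesh, then turn it back off
            cmds.setAttr(blendshape + "." + target, 1)
            rebuilt.append(cmds.duplicate(mesh, name=target)[0])
            cmds.setAttr(blendshape + "." + target, 0)
    return rebuilt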

So, opening the LOD 1 FBX, we see that bones have been removed, the LOD pose applied, the weighting transferred, and both morph targets intact:

That about covers it!

June 2016 Update by Jeremy Ernst

A lot has happened here at Epic since the last post! We've shipped Paragon on early access, we had an amazing GDC showing, and we continue to ship a hero every three weeks. In between all of that, I've been working on the tools when possible.

GDC

McLaren Enterprise Demo

I had the privilege of working with our enterprise division on a demo showing a McLaren 570S in our engine. I rigged the car, which, before I started the task, I thought would be simple. It turned out the model was from the CAD files, where every nut, bolt, and screw is modeled out. Needless to say, it took far more time than I had anticipated, but it was a lot of fun.

Here are a couple more tidbits from rigging the 570S that the trailer doesn't really show.

The interesting thing to note about this is that there is no skinning information here. It's all static pieces attached to joints, inheriting the joint transformations. Check out the live stream to learn more about that.

Hellblade Realtime Performance Demo

We partnered up with Ninja Theory, Cubic Motion, and 3lateral to do something unprecedented: driving a real-time character through live body motion capture and a live facial solver in UE4.

ARTv2

Since the last update, a lot of progress has been made. Both the arm and torso modules are now done, leaving only the head and chain (which are probably the easiest of the bunch).

Quick demo of the arm rig. 

The auto clavicle has been re-written to use pose space, which achieves much more reliable results.

Quick demo of new finger rig features

Quick demonstration of torso rig features.

One of the things I spent some time on that I'm really happy with was how users install the tools and how they will get updates. The old installation method is messy at best, and prone to errors. Updates are a nightmare. Users need to wait on either new engine releases or know about the dropbox link that holds the latest scripts. It's a mess for everyone, including me.

So I spent a couple days and now have a super simple way of installing the tools. 

Now for the issue of getting updates. I wanted to investigate adding a feature directly to the tools that would search for updates and automatically apply them.

Lastly (and this last tool is more for me), I needed a reliable way to generate release notes and a matching zip archive of the tools.

The next update, I should have the head and chain modules done. I'm also looking into writing a 'report a bug' feature that will utilize github's issue tracking system.