Here’s a quick tutorial that has been requested on how to build a skeleton/rig for your own custom mesh:
I still plan on doing one covering skinning and possibly one on getting it into UE4 as soon as I can find some more time!
I was working on making some ARTv2 rigging tutorials using this model (how to rig a custom asset), but got distracted and decided to get it in UE4 and play around with getting the layered clothing looking okay. Trying to get clothing not to clip into everything can be challenging, and this mesh had lots of layered bits that all needed to collide properly. It’s not 100% perfect, but good enough for the time put into it. I wanted to do more, like add more dynamics to the hair and such, but I figured I should get back to actually making those tutorials I set out to make in the first place!
In the last post, you saw a glimpse of what one of the refactored components looks like, but you may have noticed that the proxy geo that comes with the components in the ARTv2 beta version was missing. One of the things I wanted to do when doing this refactor was simplify and separate responsibilities. The joint mover was doing too much. It was responsible not only for placing joints, but for defining the proxy geometry. This made the class huge and also made the joint mover carry around lots of baggage that was really only needed at the very beginning of the process.
Also, some people may not even want or need the proxy geometry. They may just want to simply place a component’s joints and not have to worry about or fuss with that step. So, I decided to remove it from the component and separate it into its own tool. This way, people that want to use proxy geometry still can, but it is not included with the components.
At work, we use proxy geo extensively. It lets us get characters in game with only a rough concept sketch, then validate and iterate on proportions and height quickly. It also provides a template for the modelers to build the final asset from. We wanted to extend the proxy geo to validate form as well, which the current ARTv2 beta setup was too clunky to do. That's when I decided to separate proxy geo out of the components and add the features needed to validate and iterate on proportions, form, and scale.
The stand-alone tool (meaning it can be used outside of ARTv2 altogether) is set up in a similar fashion to the ARTv2 refactor, meaning it is component-based. For every ARTv2 component, there will likely be a matching proxy model maker component. As you can see in the above video, proxy geo components are no longer segmented. There is a simple "rig" that allows for some basic shaping.
In the component settings, you will see that there are sliders for the physique. These allow some basic detailing to rough in the form of the body.
Furthermore, there are shaper controls that can be used to further shape a component. These shaper controls support local mirroring (mirroring within the component).
Some components, like arms and legs, can be mirrored. Settings from any component can be copy/pasted to similar components, and transforms can be mirrored across components like arms and legs.
So, in short, that's what I've been working on (albeit not a ton, as other work-related tasks have popped up!). To be honest, while I know it's a marked improvement over what was there initially, I still think it might be a bit limited compared to something like CG Monastery's MRS, in that mine caters more to a semi-realistic style. I also really like their lofted setup. For my shapers, I'm using wire deformers, which I think work well enough.
As you can see in the UI, the output of this tool will be a single mesh without all these deformers that can then be rigged and skinned. Now, if you use ARTv2, the plan is that this will be automated (it will know where joint placement should go based on the mesh and should know how to skin it based on your ARTv2 component settings). This work hasn’t been completed yet, and I still need to do the head component, prop components (single joints), chain components (tails, tentacles, etc), and the export mesh feature. If you don’t use ARTv2, then the plan is to have the hooks there so you can automate that with your own stuff. Oh, also, all the meshes are already unwrapped, so you can paint a quick texture on there for color-blocking your proxy. Part of the plan for the export mesh function is to take the UVs and combine them onto a single set.
Lastly, here’s a demo of what I have so far:
If anyone is interested, I can go over the code stuff in a follow-up post. Let me know what you think, as I think this is a good direction, but honestly, I’m just winging it.
I didn't mean for two months to pass between these posts, but c'est la vie. The last post went over some high-level concepts of refactoring. In this post, I'll start to show how those concepts are being applied to ARTv2. Let's start with the base component class. This is an abstract base class that all components inherit from.
Abstract classes may not be instantiated, and require subclasses to provide implementations for the abstract methods
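In code form, an abstract base class looks something like this (a minimal sketch with made-up names; Maya was on Python 2 at the time, hence the __metaclass__ syntax):

from abc import ABCMeta, abstractmethod

class ART_Component(object):
    __metaclass__ = ABCMeta

    @abstractmethod
    def buildRig(self):
        # subclasses must provide their own implementation of this
        pass

class ART_Leg(ART_Component):
    def buildRig(self):
        print("building the leg rig...")

# ART_Component() raises a TypeError; ART_Leg() works, since it implements buildRig.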
In the original (currently available) version of ARTv2, the base class was huge. It did way too much and was too cumbersome to sort through and debug issues. One of the goals for the refactor was to do a better job simplifying classes and their responsibilities. Below is the current state of the base class.
The base class contains the bare minimum of common functions and a few necessary properties. Properties are being used to handle lots of functionality when modifying aspects of a component. In the previous post, I mentioned how many ways I had implemented setting the parent of a module. This is now done via a property on the abstract base class.
For those that don't know about properties, they're essentially class attributes that contain functionality. There are plenty of good articles out there explaining them, like this one. Take, for example, the parent property. If I want to know a component's parent bone, I can call inst.parent, which will use the getter function of the property decorator to return the parent bone. How that info gets returned is defined in the property, roughly like this (simplified, with illustrative names):
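import maya.cmds as cmds

# inside the ART_Component class (simplified):
@property
def parent(self):
    # just return the value stored on the component's metanode
    return cmds.getAttr(self.metanode + ".parent")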
This is just returning the attribute value on the metanode (more on that later). If I want to set or change the parent of this component, I can do inst.parent = "new_bone". This will call on the setter of the property, which contains a little more functionality, along these lines:
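# inside the ART_Component class (simplified; the method names are assumptions):
@parent.setter
def parent(self, value):
    if not cmds.objExists(value):
        raise RuntimeError("{0} does not exist in the scene.".format(value))

    # store the new parent on the metanode first
    cmds.setAttr(self.metanode + ".parent", value, type="string")

    # then delegate the actual scene work to the joint mover
    self.joint_mover.reparent(value)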
Compared to how I was doing this before, this is a significantly cleaner way to handle getting and setting the parent bone of a component. You may notice the setter calls on some extra functionality. This line in particular is of interest:
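self.joint_mover.reparent(value)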
In order to separate out responsibilities, I’ve been using composition.
Composition means that an object knows another object, and explicitly delegates some tasks to it.
At the beginning of the base component class, code along these lines is executed (again, simplified with illustrative names):
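# a simplified sketch of the start of ART_Component's __init__
self.metanode = self._create_metanode(name)
joint_mover_file = self._get_joint_mover_file()

self.joint_mover = JointMover(joint_mover_file, self.metanode)
self.joint_mover.add_to_scene()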
The last two lines are an example of composition. An instance of another class is assigned to an attribute on the component, which then delegates functionality to that class. So rather than include all the joint mover functionality in the base class, it gets separated out into its own class that only handles joint mover functions. The base class can then call upon that JointMover class to execute functions related to joint movers (in this case, adding the joint mover for this component to the scene). An important thing to note here is that ART_Component knows about JointMover, but JointMover does not need to know anything about ART_Component. It is given all the information it needs on instantiation (the joint mover Maya file and the metanode that contains all the metadata it needs).
To finish this post, I'll talk about the metadata/metanodes. While the current version of the tools uses these, it doesn't use them nearly enough, probably because I didn't fully grasp how to properly utilize them. First, in my refactored implementation, they are a huge part of the component's class. Any information the class returns when asked is pulled off the metanode. Anytime data is changed, it is changed on the metanode. The properties mentioned earlier are essentially getting and setting metanode data, as well as handling the extra needed functionality.
For example, when setting a parent for a component, one of the first things the setter does (if the parent is valid) is set that data on the metanode.
When returning the parent, it returns the data from the metanode. Why does this matter? Well, the biggest reason is that it becomes incredibly easy to create an instance of a component and access its functionality, since the metanode supplies all of the information an instance of the class needs.
In the ARTv2 beta, I actually do not have a great way of getting instances of classes to access functionality. If I want to call on a component’s buildRig method, I do all this extra work to build up an instance of that class in order to do so. Now, a component can be instantiated with a metanode, which it will then use to populate its properties.
Furthermore, everything can be done via the command line now. Embarrassingly, this was not the case in the ARTv2 beta, where so much of the functionality was only accessible through the user interface. Here is an example of some of these concepts in action (sketched with illustrative names):
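# a rough sketch; the module path and names are illustrative, not the actual ARTv2 API
from art_v2.components.leg import ART_Leg

# instantiate the component from its metanode; the properties populate from the metadata
leg = ART_Leg(metanode="ART_Leg_l_metanode")

print(leg.parent)       # reads the value off the metanode
leg.parent = "pelvis"   # writes the metanode and re-parents the joint mover
leg.buildRig()          # full functionality, no UI required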
One thing you might notice is that proxy geometry is gone. More on that next time!
I wanted to write some posts about refactoring ARTv2 as I go through it. Personally, I’ve learned a lot developing these tools over the last few years. When I started writing these tools, I had a very different outlook on writing code. This had a lot to do with the incredibly fast-paced production environment I was in. I definitely looked at code as a means to an end, and if it “worked”, it was done.
Depending on the tool or the scope of the tool, this might be fine. When I start thinking about our industry though, where most of us are working on games that are considered services, a successful game (League of Legends, Fortnite, World of Warcraft, etc) could span 10+ years. And when you start thinking about the tools and pipeline you are using now, and being stuck with it in 10+ years because your project is still successful, you'll probably wish you had put more effort and thought into your code.
The neat thing about where ARTv2 is now, is that it is much easier to look at the big picture and see where things can be fixed and cleaned up. When I first started writing it, I didn’t really have a big picture in mind. I’d develop a feature, then think of the next feature, and develop it. This led to lots of giant files with lots of duplication. So, now I’ll talk about what refactoring is for anyone that doesn’t know, and why it’s important.
Code refactoring is the process of restructuring existing computer code—changing the factoring—without changing its external behavior.
When you tell your producers or lead that, it can be hard to sell them on the idea that this is a valuable endeavor. So, I actually made a slide deck going over the benefits of refactoring and giving some examples. I’ll start with a completely true example that came from ARTv2.
I was working on a character and a bug presented itself where joint and chain modules weren't being parented correctly. I tracked it down and implemented a fix. A couple of days later, I changed the parent on one of those modules to a different joint, and the bug popped up again. I tracked it down and found that I had duplicated that parenting code into the change-parent method. So I fixed it again. Some time later, I went to create a mirror of a module, and sure enough, the bug popped up again. It also popped up when loading a template. There were four separate places where the parenting code was implemented. And this comes from the way I thought about code before.
By implementing things on a feature-to-feature approach, each feature was built as a complete tool. Each feature would have code duplicated throughout with little regard to re-use or sharing common functions. Did the code work? Sure. But as the above example points out, it makes tracking down and fixing bugs a massive pain (and it’s just sloppy). When I ran into that same bug over and over, I realized that maybe I should do a pass and clean things up.
However, as I looked into it more, I realized I should just take this opportunity to really think things out and to also write unit tests as I went. If you don’t know what a unit test is, it’s basically code you write that tests code you’ve written :) A quick example would be if you had a function that took in an integer and added two to it. Your test would then call on that function with different inputs and maybe different types of inputs, and assert that your output assumptions are correct.
import unittest

def example_func(value):
    return value + 2

class MyTest(unittest.TestCase):
    def test_simple(self):
        self.assertEqual(example_func(2), 4)
        self.assertEqual(example_func(0), 2)
        self.assertEqual(example_func(-2), 0)

        # here, we know this should fail, since we haven't added anything to deal with strings.
        with self.assertRaises(TypeError):
            example_func("one")

    def runTest(self):
        self.test_simple()

test = MyTest()
test.runTest()
This is a super simple example, but hopefully it illustrates what a unit test does. If you know that each of your methods has a test, it becomes very easy to isolate problems and ensure problems don’t arise in the future.
Moving on, these are the main reasons for refactoring ARTv2:

Add Automated Testing
Separate Form and Function (UI from functionality)
Remove Code Duplication
Extensibility
I’ve talked about the first and third, so let me quickly explain the second using the duplication example. That implementation was something akin to this:
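# a contrived reconstruction of the problem, not the actual code: every tool
# implements the parenting logic itself
import maya.cmds as cmds

def create_module(module_group, metanode, parent_bone):
    cmds.parent(module_group, parent_bone)
    cmds.setAttr(metanode + ".parent", parent_bone, type="string")

def change_parent(module_group, metanode, parent_bone):
    # same logic, copied here...
    cmds.parent(module_group, parent_bone)
    cmds.setAttr(metanode + ".parent", parent_bone, type="string")

def mirror_module(module_group, metanode, parent_bone):
    # ...and copied again (and once more in the template loading code)
    cmds.parent(module_group, parent_bone)
    cmds.setAttr(metanode + ".parent", parent_bone, type="string")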
A better implementation would be something like below, where each of those tools simply calls upon the module’s set_parent() method. This approach not only removes duplication, but simplifies the design. Any user who wants to set the parent on a module can probably guess correctly that such a method exists on the module class.
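# the de-duplicated version: the module class owns the behavior, and every tool calls it
import maya.cmds as cmds

class Module(object):
    def __init__(self, module_group, metanode):
        self.module_group = module_group
        self.metanode = metanode

    def set_parent(self, parent_bone):
        if not cmds.objExists(parent_bone):
            raise RuntimeError("{0} does not exist in the scene.".format(parent_bone))
        cmds.parent(self.module_group, parent_bone)
        cmds.setAttr(self.metanode + ".parent", parent_bone, type="string")

# in the create/change-parent/mirror/template tools, it's now just:
# module.set_parent(parent_bone)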
It all seems so very obvious now, but when I first started out writing this, my mind just didn’t think about the design of code at all. Being self-taught likely means I skipped over a ton of the basics that most programmers just know.
Lastly, extensibility. (Is that a word? Spellchecker seems to think not.) Basically, this is designing your code in such a way that if the parameters or requirements change, code modifications are minimal. Here's a contrived example of that:
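# inflexible: adding a new module type means editing this function every time
def create_module(module_type):
    if module_type == "arm":
        return ART_Arm()
    elif module_type == "leg":
        return ART_Leg()
    raise ValueError("unknown module type: {0}".format(module_type))

# extensible: new module types register themselves, and create_module never changes
MODULE_TYPES = {}

def register_module(name, cls):
    MODULE_TYPES[name] = cls

def create_module(module_type):
    return MODULE_TYPES[module_type]()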
In the next post, I’ll go over some of the fundamental changes that have been made so far to ARTv2 with these things in mind. (also, apologies if any of this was dead obvious to any of you. Perhaps I am the last to catch on to all this good code design stuff)
I really didn’t want to release ARTv2 until I was entirely happy with it, but I’ve had a ton of people requesting it, so I finally caved. This is not the final version! I am in the midst of doing a huge refactor to clean things up a ton. Check out the roadmap post here.
Head over to the ARTv2 page to read the rest of the details.
Over the holiday break, I worked on some updates to the space switcher, which was originally written back in February of 2018. These updates address feedback from animation at work, as well as issues where cycles would occur even if spaces were inactive (for instance, if you had a space on the hand for a weapon, and a space on the weapon for the hand, this would cycle even if only one of those spaces was active). I ended up re-designing the system from scratch, rewriting most of the code, and redesigning the interfaces to be much simpler.
I forgot to point it out in the video, but when creating global spaces, you can save and load those out as templates. So if you just want to create a template for your project for your space switch setup, you can do that. It’s also scriptable, so when building the rig, you can also just add a call to that class, passing in the template file, and it will build the spaces as part of the rig build.
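In script form, that call is conceptually just this (the class and method names here are approximations, not the exact API):

switcher = SpaceSwitcher()
switcher.build_spaces(template_file="D:/projects/my_game/space_template.json")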
Check it out and let me know what you think :) (Hopefully, the animators at work like the updates!)
(Oh, and since it keeps coming up, there are two major things left to do before releasing. The first, is to document the hell out of everything. That’s in-progress. The second, is to make sure the updater tool is still working, since it’s been about two years since I wrote it :/ Once both those are done, it’s going live!)
This feature took some time. Between various tasks popping up while trying to work on it, and having to re-learn a bunch of math stuff, it took way longer than I would have liked, but it's essentially complete. With this feature done, I've got some bug fixes I want to hit, some documentation I want to write (well, not want to, but need to), and then I want to get all of this stuff out there.
Take a look at the pose library tools and let me know what you think!
Worked on a new feature to add custom selection buttons, script buttons, or labels. Selection buttons can either be a solid fill color or have an icon. Labels, colors, icons, selection contents, and scripts can all be edited from the context menu.
Gun, knife, and room meshes are from https://free3d.com/
This week's adventure is something you would think would be super simple, but it instead involves image manipulation! I wanted the icons on the tabs of my animation control picker to darken when not selected. In the image below, it isn't as clear as it could be which character tab is currently active. I added some height margins, but it would sure be a whole lot clearer if the images weren't all the same value!
It became evident that I was going to need to take some of the knowledge from last week and apply it to this problem. So let's dive into that.
First, I hooked up the tabWidget's currentChanged signal to a new function that does the image manipulation and sets the icon. In this new function, the first thing I do is get the total number of character tabs, as well as the currently selected tab.
As I loop through the tabs, if the tab I am on in the loop is the currently selected tab, I access a property on the tabWidget I created that will give me the QIcon in memory, so that I can set the tab icon back to the original image on disk.
If the tab is not the currently selected tab, I get the QIcon of the tab, then get the pixmap of the QIcon, and then convert that to a QImage.
This is the fun part! Now, I loop through the x and y positions of the image, sampling the RGB value of the pixel at each position, darkening that value using QColor's darker function, and then setting the pixel on our temp QImage at the same x, y location to that new, darker color. This continues until all pixels have been read, darkened, and set on the new QImage.
Now all that is left to do is convert this QImage to a QPixmap and set the tab icon to that new, darkened image (which only exists in memory, not on disk).
The end result now gives me exactly what I was looking for!
Here's the full function as well (simplified a bit, with illustrative widget and property names):
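# hooked up earlier with: self.tabWidget.currentChanged.connect(self.update_tab_icons)
from PySide2 import QtGui

def update_tab_icons(self, index):
    for i in range(self.tabWidget.count()):
        if i == self.tabWidget.currentIndex():
            # restore the original, full-value icon stored on the tab widget
            original_icon = self.tabWidget.property("tab_icon_{0}".format(i))
            self.tabWidget.setTabIcon(i, original_icon)
        else:
            # darken a copy of the icon's image, pixel by pixel
            icon = self.tabWidget.tabIcon(i)
            pixmap = icon.pixmap(icon.availableSizes()[0])
            image = pixmap.toImage().convertToFormat(QtGui.QImage.Format_ARGB32)

            for x in range(image.width()):
                for y in range(image.height()):
                    color = QtGui.QColor.fromRgba(image.pixel(x, y))
                    image.setPixel(x, y, color.darker(150).rgba())

            self.tabWidget.setTabIcon(i, QtGui.QIcon(QtGui.QPixmap.fromImage(image)))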
Hope this helps anyone else looking to do something similar!
This is not a post about style-sheets. I wish it were that easy to add a background image to a QToolTip, but it's not.
I wanted to look into adding background images to tool-tips. The first thing I found was that you can use HTML as your tool-tip text to display an image in the tool-tip. But I didn't want to just display an image; I wanted to display an image with text on top of it.
Here's how you can simply display an image as your tool-tip using HTML:
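widget.setToolTip("<img src='C:/tooltips/my_image.png'>")  # the path is just an example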
With this method, I would need to author tons of images just for tool-tips, which is crazy. So I started digging into generating my own image using QPainter. While looking at the documentation, I found that QPainter has all sorts of handy functions to draw things, which can then all be saved to a QPixmap. This worked really well! I supply an image to paint as the background, then draw text on top, then save that out as an image. I was pretty stoked when I got to this point. Here's the code for that (simplified, with illustrative names and paths):
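# a simplified sketch of the QPainter approach
from PySide2 import QtCore, QtGui

def create_tooltip_image(tooltip_text, background_image, output_path):
    # paint onto a pixmap that starts as the supplied background image
    pixmap = QtGui.QPixmap(background_image)

    painter = QtGui.QPainter(pixmap)
    painter.setPen(QtGui.QColor(230, 230, 230))
    painter.setFont(QtGui.QFont("Arial", 12))

    # draw the text on top of the background, word-wrapped with a small margin
    painter.drawText(pixmap.rect().adjusted(10, 10, -10, -10),
                     QtCore.Qt.TextWordWrap, tooltip_text)
    painter.end()

    # save the composited result to disk and return the path for the <img> tag
    pixmap.save(output_path)
    return output_path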
My intention was to have one tool-tip image that gets overwritten with a new image anytime a tool-tip is requested. However, when I had widgets call on this method to generate their tool-tip image, it would only happen when the interface was instantiated, meaning the singular tool-tip image would get stomped, and all widgets would end up with the same tool-tip.
The next idea was to give this method a unique filename to save out. But then I could end up with hundreds of tool-tip images, which isn't really much better than authoring my own. I really wanted the tool-tip image to be generated when a tool-tip was requested by a widget. To do this, I need to intercept the ToolTip QEvent. Okay, fine. How can I do this?
I created another function that acts as my own tool-tip event handler. It looks roughly like this (the helper names are illustrative):
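# a simplified sketch; create_tooltip_image is from earlier, and
# get_background_image / TOOLTIP_IMAGE_PATH are illustrative helpers
from PySide2 import QtCore, QtWidgets

def tooltip_event(event, widget):
    if event.type() == QtCore.QEvent.ToolTip:
        # generate the single tool-tip image on demand for this widget
        text = widget.property("tooltip_text")
        size = widget.property("tooltip_size") or "small"
        create_tooltip_image(text, get_background_image(size), TOOLTIP_IMAGE_PATH)
        widget.setToolTip("<img src='{0}'>".format(TOOLTIP_IMAGE_PATH))

    # hand the event back to the normal QPushButton handling
    return QtWidgets.QPushButton.event(widget, event)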
Now for the last steps. When creating a widget, I reassign the widget's event method to this method instead, along these lines:
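from functools import partial

# reassign this button's event method, passing the button itself in as an argument
button.event = partial(tooltip_event, widget=button)
button.setProperty("tooltip_text", "Mirrors the selected pose.")  # example text
button.setProperty("tooltip_size", "small")  # optional; picks the background image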
The tooltip_text property holds the text I want displayed on top of the image. The tooltip_size property, which is optional, determines which background image gets used. The line where this button's event method is reassigned passes the button itself in as an argument so that I can query the above properties and set the tool-tip on that widget. This means there is only ever one tool-tip image, and it gets generated whenever a ToolTip event is intercepted (if that widget has reassigned its event method).
Below is what the end result looks like. Keep in mind it's the same image file being displayed on all of the buttons.
This was one of those things where I had the idea, and went down the rabbit hole until I figured it out. Is it super useful? Not really. But it adds an extra 5% of polish to my tool-tips I suppose!
I've been wanting to do this for a while now and finally got around to it. In ARTv1, the tools could publish to a project directory, and that was it. In the initial implementation in ARTv2, you could publish to a project directory and one sub-directory of your creation. Now, you can create limitless sub-directories under your project!
Furthermore, you can use an existing directory structure, like your source control directory, as the tool's project path. Then you can publish into that existing directory structure so you can keep your existing source assets and rigs all in the same place!
Also shown in the videos is the new UI styling. It's still a work in progress, but most of the rigging tools are re-styled.
As always, thanks to Epic and Riot for allowing me to share these tools with you all. Go support their games!
I recently got some feedback from an animator that they found the new animation controls to be too busy. I can totally see that. I wanted to have controls that had some depth to them, but it really does add a bunch of clutter. The controls are taken from the joint mover curves below:
So, if you had a character fully in FK, that's basically what you'd see (though the controls would be colored differently).
After working with them to find a scheme they liked, I've added a new feature that adds support for adding custom control shapes to the joint movers. These curve shapes hook up to the existing joint movers, and when the rig gets built, if connections exist, it will use those connections as the template for building rig controls. If not, it defaults to the joint mover curves.
So now, I can add a control shape to the joint mover file and get it where I want it. Then I parent it under the corresponding joint mover, select the joint mover and the new control, and run the following to hook up the connection.
import maya.cmds as cmds

# selection order matters: select the joint mover first, then the new control
joint_mover, control = cmds.ls(sl=True)

cmds.addAttr(joint_mover, ln="fk_rig_control", at="message")
cmds.connectAttr(control + ".message", joint_mover + ".fk_rig_control")
cmds.addAttr(joint_mover, ln="ik_rig_control", at="message")
cmds.connectAttr(control + ".message", joint_mover + ".ik_rig_control")
Which, in turn, gives me these attributes on the joint mover:
The end result looks like this now once the rig is built:
Definitely a cleaner look. This allows the controls to always be present as soon as you start adding modules, which means you can edit those control shapes and those edits will persist. No more making post-scripts to scale controls or manually doing it in the rig after build!
There is also a new tool in the rig creator interface for accessing these control shapes for editing:
It's been a long time since an update, and a lot of changes have gone into ARTv2. These changes aren't up for grabs yet, but I wanted to show what progress has been made. Probably the biggest change, and one that has been requested for a long time, is support for Y-up. ARTv2 now works in Y or Z up!
The first changes are on the rigging side, with a completed chain module, improvements to the arm module, and some other new features.
The next large batch of changes have been for animation. Lots of new tools! Take a look!
I'm still not entirely sure what the final platform will be for releasing these tools, whether it will be GitHub, or the UE4 Marketplace, or something else entirely. I want to thank Epic Games again for allowing me to take these tools with me when I left, and Riot Games for allowing me to continue to share the work I do on them with the community.
The next feature I am working on right now is the pose library. I'll do some updates on that when I have more to show. I feel like once that feature is in, and it's been battle-tested in a few different versions of Maya and on different operating systems, I could do an initial release. Hopefully, that means within two months' time, these tools will be out and available for free.
Also, thanks to Ky Bui for providing the new proxy geometry and associated physique shapes!
I'm probably going to sound like an idiot, but I was working on something today, and found a solution that I was really excited about and thought I'd share. For experienced programmers, this is probably a big duh, but I was pretty stoked.
Okay, so the task I was working on was adding a control's spaces to the context menu in the control picker.
The task was pretty straightforward. When creating the menu initially, I check to see if a control has spaces, and if it does, add an action to the menu for each space. This worked well!
However, if I created a new space, it wouldn't show up in the menu unless I re-launched the UI. This is fine, but I wanted to see if I could generate the menu on the fly when the context event was called.
The picker button is its own class that creates its context menu. This class has the event for actually displaying the menu when you right-click. I did a test and added a function to the button class that the contextMenuEvent would run first. That worked as expected. Simplified, the button class looks something like this:
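# a simplified sketch of the picker button class
from PySide2 import QtWidgets

class PickerButton(QtWidgets.QPushButton):
    def __init__(self, parent=None):
        super(PickerButton, self).__init__(parent)
        self.menu = QtWidgets.QMenu(self)

    def addSpaces(self):
        # a no-op by default; this gets reassigned from the outside (see below)
        pass

    def contextMenuEvent(self, event):
        # rebuild any dynamic menu items first, then show the menu
        self.addSpaces()
        self.menu.exec_(event.globalPos())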
Now, here is where I add items to the button class' menu in the torso class. The 'button.menu' refers to the button class instance and the menu of the button class. So it's just going through and adding the menu items. This is where I initially had it add the spaces, but because this function is only run when the animation picker class gets instantiated, it doesn't update.
I decided to try something, and to my amazement, it worked. Now, I don't have a ton of formal training in programming, so again, this might be stupid, but in the function that builds the picker for the torso where I was initially adding spaces, I take that button instance and reassign its addSpaces function to my torso's new addSpacesToMenu function.
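In code, the reassignment is just this (sketched; the real signatures may differ slightly):

from functools import partial

# replace the button instance's addSpaces with the torso class's function, passing
# the button in so the function can add the space actions to button.menu
button.addSpaces = partial(self.addSpacesToMenu, button)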
Now, every time the context menu event is called, it runs the torso's addSpacesToMenu function before showing the menu, always ensuring any new information is added. I thought this was pretty neat!
Hopefully this is helpful to someone!
No development updates, as I've been on vacation, but I wanted to write a post regardless. I'll warn you though, it's a bit long, and a bit of a ramble.
A couple of years ago, I got to this point in my career where I realized I knew very little when it comes to this field of rigging/tech art/tools development. I would see videos online of crazy rigs and crazy tools, and it was easy to just feel like I wasn't very good. And when I'd go to learn new things, I would realize just how much more there was to learn. I've seen the phrase: "the more I know, the less I understand", and I feel like that rings very true.
I mean, sure, I know enough to be competent at my job, but when you look at the depth of knowledge in this career path, and all the things you could potentially learn, it's overwhelming. Rigging, deformation, anatomy, python, C++, API, math. It's like trying to climb a mountain that keeps growing as you climb it.
I don't know how I come off online, but I'm actually pretty insecure about my work. Me releasing tools to the public was not an act of confidence. I imagine there are incredibly talented people out there that have probably looked at the tools and thought that the code was sloppy, or it was amateur, or any number of things. And they're probably right. Each time I write something, it's a learning experience. The next thing I write is better, and then I want to go back and rewrite all the previous things, but that is a slippery slope that leads to nothing new getting done.
Whenever I post something online, it isn't because I think it's the best thing ever, it's because I'm proud of it (at the time) and it's the best thing I've ever done. There was once a time I was proud of ARTv1! Ha! At the time though, it was an achievement for me. Now, it's embarrassing. All I can see is the lack of any coding standard, the sloppiness of the code, how disorganized it is, etc. But I wouldn't have learned anything if I hadn't tried to do it in the first place, and I think that's the important thing.
As I get older, the question of how to use my time becomes more important. I want to be the best at what I do, but that is an unreasonable goal. It's also hard to quantify and measure. Do I spend my free time constantly learning more and more, building and maintaining relationships, or working towards other goals? (or getting through Stormblood content in FFXIV)
I think it's good to know there is always more to learn, and that you will never be the best at all the things, and that's okay. There's a reason why MMORPG parties usually consist of a tank, healer, and some DPS. It creates a well-rounded, balanced team, as no one class is the best at all of those things. (can you tell I want to get back to playing some FFXIV?)
Unfortunately, real life isn't as clear and the lines in tech art aren't so nicely drawn. Most companies throw all sorts of types under the tech art umbrella, which can make it confusing on where to focus.
I'm not very good at writing, and I don't have some tidy ending to this. So I'll end this by saying, don't bother comparing yourself to others. Congratulate their successes and use their work as inspiration or motivation. It's easier said than done, for certain. (This is more a note to myself than anything.)
Oh, and Happy New Year :)
Just a quick update showing some chain module stuff from a couple weeks ago. It's nearly complete now, but not quite at a stage where I can show it.
The transition to Riot came with moving across the country, selling a house, buying a house, and just a ton of other shit that life throws at you, so I've been busy to say the least. I forgot how much moving sucks!
However, ARTv2 development is picking back up and lots of progress has been made in the last month or so. The chain module is currently in progress and some other new features have been added.
This stuff isn't on github yet, but I'll post an update once it is. Once I wrap the chain module and tidy up some documentation, I will do a big git update (before Christmas).
An alpha build of ARTv2 is now up on Github! This build is not fully feature complete, but if you're interested in testing the tools out and seeing what's there, or using it as a starting point to build from for your own pipeline, then go grab it! You'll have to have your github ID linked with Epic.
Once the tools are feature complete (for a minimal viable product), they will be released on the Unreal Engine Marketplace for free. That should happen later this year. In terms of reaching MVP, there isn't too much left. Below is what is needed before it will go onto the marketplace:
Now, for the other news. I am leaving Epic Games. At the beginning of the year, I definitely didn't think I'd be saying that, but I was offered a really great opportunity. In a couple months, I will be heading to Riot Games as a Principal Technical Artist. If you're concerned about ARTv2, don't be! Epic has been amazing with all of this and is letting me continue development of the tools. I was blown away by this gesture. So I will be continuing to work on them and then release them on the UE4 Marketplace for free when they are farther along. It's a win-win for everyone! I get to take the tools with me on my new adventure, Epic gets to still get updates on the tools, and the UE4 community will also be getting the tools!
One of the things that ARTv1 does not have at all is any type of tool to export skeletal meshes. On Paragon, our export process is fairly complex, as we have to manage multiple levels of detail (LODs), with bone removals, weight transfers, and LOD poses. So, for ARTv2, I wrote a tool that handles all of this. Originally, this was part of the publish process, but I broke it out into its own unique tool.
With ARTv2, there is no longer an export file and an anim rig file, just the one rig file. Because of that, the export tool is now made to work with the rig itself. Once a rig is built, if you open or edit the rig file, and launch the rig creator tools, there is now the option to export skeletal meshes:
When you hit the button, it will prompt you to make sure the file is saved before continuing. Next, a temporary file is created that strips out the rigging and sets the skeleton back to the model pose. This temporary file is where you will be working when setting up your export data.
Once the temporary file is created, you are then presented with this UI:
The first thing you want to do is choose which meshes are associated with this particular LOD. There is always a LOD 0, but additional LODs can be added or removed using the top-right buttons.
Then you can choose the file path for the exported FBX.
If you do not need to remove any bones from LOD 0 (likely the case), then that is all you need to do here, and you could export at this time. However, to show the other features, I will add another LOD.
Now I can choose to remove bones, which presents me with another interface. In this interface, we can add entries for bone removal, which will also allow us to choose which bone to transfer the weighting to for all of the removed bones. There is logic here that prevents any mishaps or impossibilities, like assigning weight to a bone that is being removed, etc.
You can also handle LOD poses in this interface. Since we are removing all of the finger bones in this LOD, we may want to pose the fingers before doing so. (This prevents that paddle-hands look when the model switches to the LOD in game.)
This tool allows you to save that pose and will apply it when doing the export before transferring the weighting and removing the bones.
This file also has morph targets on the arms currently. The upper arm morph mesh exists in the scene while the lower arm morph mesh has been deleted. More on that later.
At this point, we are ready to export.
After the process is done, it reopens the rig file. All of those settings you set up for your export? Those get immediately transferred and set in your rig file as well, so the next time you export, all of the settings are already there.
OK, so those morph targets. Because LOD 1 is removing bones and transferring weighting, it gets a bit difficult to deal with morphs, especially if the morph meshes don't exist. When the process gets to LOD 1, it has to export the skin weights, pose the mesh with the LOD pose, delete the mesh history, import the skin weights, transfer weighting, and remove bones. In that process, if a blendshape node exists on the mesh, the tool determines whether or not the morph mesh still exists. If not, it creates it by turning on the attr in the blendshape and duplicating the render mesh. Once this is done for all meshes with morphs, it reapplies the blendshapes before importing the skin weights (after deleting the mesh history).
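Conceptually, that morph-recreation step looks like this (a simplified sketch, not the actual tool code):

import maya.cmds as cmds

def rebuild_missing_morph_meshes(mesh):
    for blendshape in cmds.ls(cmds.listHistory(mesh), type="blendShape"):
        # aliasAttr returns a flat list: [targetName, "weight[0]", targetName, "weight[1]", ...]
        aliases = cmds.aliasAttr(blendshape, query=True) or []
        for target in aliases[::2]:
            if not cmds.objExists(target):
                # turn the target fully on and duplicate the render mesh to recreate it
                cmds.setAttr("{0}.{1}".format(blendshape, target), 1.0)
                cmds.duplicate(mesh, name=target)
                cmds.setAttr("{0}.{1}".format(blendshape, target), 0.0)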
So, opening the LOD 1 FBX, we see that the bones have been removed, the LOD pose applied, the weighting transferred, and both morph targets intact:
That about covers it!