
/3/ - 3DCG



File: 67 KB, 640x480, shoulder.jpg
No.939151

How do you actually get good shoulder deformation? I need a no fucking around, honest to goodness shoulder solution. Preferably one that's compatible with Blender. But if you have it in another program, I may be able to re-engineer it for Blender if I only knew the steps.

I swear to god if you link me to some lame-o tutorial that has a mediocre solution, I'm going to be so mad. I want something REAL.

>> No.939156

just be yourself, bro.

>> No.939161

rigging a good body is hard, bro.
if you don't like the geometry after deformation, fix it the same way everyone does; add a shape key, edit the mesh, and define a driver for the shape key interpolation based on the bone poses

>> No.939164

>>939151
A dab, anon? Really? What is this, the 2010's?

>> No.939165

>>939164
I pulled the image off google. I didn't even realize he was dabbing. I thought he had some weird shrek like ears protruding from his head

>> No.939167

>>939161
At the Blender Foundation, they rarely use Blender to make assets; rather, they import them from other proprietary software. The shape keys you often see are the baked result of whatever method the proprietary software used to get a proper asset. They were almost certainly not drawn by hand.
You have to be a Blender User to believe professional grade models use hand drawn shape keys.

>> No.939179

>>939167
finally, a rare post on /3/ that actually knows what they are talking about

>> No.939183

>>939151
>I need a no fucking around, honest to goodness shoulder solution
twist bones and corrective blend shapes.

>> No.939184

>>939167
>You have to be a Blender User to believe professional grade models use hand drawn shape keys.
Sure they use dynamics and muscle rigs to generate shapekeys now, but they did do it by hand fairly recently and with good results. The automation is about making hundreds of shapes for every movement of every joint, not because a believable shape is hard to make by hand.

>> No.939218
File: 201 KB, 650x536, Capture.png

>>939151
This is the base for me.

>> No.939226

I will get a lot of shit for this but I don't care - use Ziva. Alternatively use a Pose Space Deformer.

>> No.939281

>>939183
This, although to be fair, if you have a good neutral pose, good topology and decent skinning, you can sometimes get great results even without corrective shapes.

As a Maya rigger, 90% of the time on the shows I worked on we didn't use any corrective shapes

>> No.939285
File: 2.59 MB, 800x450, shouldervol.gif

>>939281
yes, in most cases just using a few clever helper joints can get you 90% of the way. The last 10% you just sweep under the rug and call it done :D

>> No.939286

>>939285
yuck. Why even bother

>> No.939291

>>939286
blendlets get irrationally upset when they see quality work done in maya

>> No.939294

>>939291
>quality

>> No.939296

>>939294
it wins by default since you have not posted your rigged shoulder

>> No.939298

This guy >>939161 >>939184 >>939286 >>939294 has been going around demanding people use manually drawn shape keys in Blender for a while. I have no idea why he does it but he has yet to post a single example.

>> No.939299

>>939298
no, >>939286 is my first post in this thread. You dont know who is reading and posting on this board.

>> No.939300
File: 11 KB, 674x113, Screenshot 2023-03-07 173441.png

>>939298
If you don't have access to shape key generating tools, you have no choice but to do them by hand or just not do them.

>> No.939301

>>939299
I guessed 3 out of 4 right then. Sorry about that.

>> No.939304

>>939298
Why does every board always have unselfaware schizos accusing anyone who says anything they don't like of being 1 schizo

>> No.939308

>>939298
>claims to know what he's talking about
>calls creating shape keys "drawing"

>> No.939335

>>939285
OP here. This level appears acceptable. Though, I have to wonder how her chest/shoulder area deforms when she stretches her arm above her head. The gif only shows her raising her arms to a medium degree. But what if she was really reaching up high. Or even stretching her arm behind her head. Then how would it look?

>>939308
Not him, but I use a drawing tablet as my primary navigation device. Not a mouse. And I have a 2D art background. So I call all kinds of actions "drawing". I'm also skeptical of the idea that I'm supposed to "draw" every shapekey. That kind of work sounds insane. I would like a less laborious solution. Surely, someone has discovered a bone structure that does the basic task of deforming a shoulder, right?

>> No.939337

>>939335
you dont draw a blendshape, idiot, you sculpt it

>> No.939338

>>939337
Yeah no shit. But through word association, I would call sculpting "drawing". You don't actually sculpt in 3d, btw. Because the objects don't have mass, you're technically not adding/removing anything. You're not really pushing the mass around. You're just recalculating imaginary points in imaginary space.

>> No.939339

>>939338
You're insane. I guess zbrush is a drawing application too, retard

>> No.939342

>>939337
>>939339
You need to draw us a simple example.

>> No.939348

>>939339
According to your logic, it's a mathematical computational program. You're not a sculptor. You're a data organizer.

>> No.939352
File: 88 KB, 714x586, shoulderdef.jpg

>>939335
arms up like this?
Looks ok I think. It's just bones at this point. Could be better if I made some corrective shapes, but eh. Can't be bothered.

>> No.939357

>>939352
Try to rotate the arm bones on their axis because, as it is, I think it's an impossible pose.

>> No.939359

>>939352
mmm.... It's ok... A tad malformed. Not the worst I've seen. I've seen professional video games with worse deformation. But like the other anon says, I would like to see the bones rotate on the Y axis. In fact, I would like to see your bones. I might be able to recreate what you have. When I try to twist my character's arm, it gets all gimbal locked and starts acting funny. Using my own arm as a reference, starting from a T pose, I can twist my shoulders forward 90 degrees and back 90 degrees. I'm not flexible at all, so I suppose a 180 degree window is normal for the shoulders. That's how I set up my IK. 90 degrees forward and back at the shoulder.

>> No.939361
File: 298 KB, 640x480, 0001-0250.webm

>>939352
I'm not an anatomy expert by any means. I'm also learning myself, but I want you to notice that when the forearm goes past horizontal, it also has to rotate 90 degrees on itself to reach vertical. I think it's a limitation of the human shoulder joint.

>> No.939364

I also wanted to add that so far I've found it more fruitful to focus on "geometrically consistent" rather than "anatomically correct". One reason is that most people don't know or care that much about anatomy (including myself), but they perceive strange losses or gains in volume, for example, very well, and that's distracting.

>> No.939371

>>939361
see a doctor

>> No.939372

>>939371
Post an example of your hand drawn shape keys. This is the 5th time I've asked.

>> No.939374

>>939339
>I guess zbrush is a drawing application too,
Well... that's not entirely wrong.
I'm pretty sure ZBrush started off as a 3d drawing software and then they cobbled together sculpting on top of that code. It's both why it's so good and fucking weird at the same time.
The drawing part of it is still there though.

>> No.939389

>>939361
are you serious my guy? you aren't even fit to be replying to him let alone give him advice

>> No.939406

>>939389
Yes. I noticed a possible issue with bone roll, so I pointed it out. >>939359 seems to have noticed it too.
That aside, if you like >>939352's art style better than mine, I completely understand you.

>> No.939412

>Surely, someone has discovered a bone structure that does the basic task of deforming a shoulder, right?
In my opinion, especially going forward, there really is no middle ground. You can choose to use a simple no-effort method like I'm doing, take what you can get and try to get away with it, or get yourself some proprietary software with machine learning and a ready-made training set.
But under no circumstances would I waste my time in an attempt to manually draw into the model anatomy concepts I don't even know.
Auxiliary bones may be useful if you control them by hand for some purpose such as facial expressions or fake collisions.
Other than that, ignore the youtube tutorial people and take your pick from above, or you'll be stuck in a limbo where you'll become that guy who goes around telling other people their work "looks like shit" and not producing anything or providing any example.

>> No.939413

Also, I think I've figured out what Blender Guru and clones problem really is.
Their faces look like their parents drew them in Blender, and I can only imagine how hard they were bullied in school because of that.
So, as adults, they enjoy deceiving, openly mocking and wasting other people's time by telling them to draw donuts.
I can't think of any other explanation.

>> No.939415
File: 48 KB, 1200x603, scissor.jpg

>>939412
>you'll become that guy who goes around telling other people their work "looks like shit" and not producing anything or providing any example.
Welcome to /3/. It's pretty much everyone still lurking in this shit-pile; people unironically post photos on this board and get called 'unrealistic, looks like shit, get gud' by faggots who spent too much time jerking off to terrible daz porn to remember what real things look like

>> No.939419

>>939151
shoulder is the easiest shit bro. It's the butt that's fucking hard

>> No.939428
File: 2.93 MB, 900x720, shouldervol2.webm

>>939357
>>939359
>>939361
>>939406
sigh, I should've included her forearms in the pic. Oh well. They are indeed already rotated; not sure how you concluded that wasn't the case. But here is a better video to show the full thing in action.

>> No.939429
File: 2.63 MB, 800x450, assvolume.gif

>>939419
imho butt/thigh is easier than shoulder but not by much, they are both fairly hard to make a robust and good looking rig for

>> No.939430
File: 2.90 MB, 900x720, shouldervoljnt.webm

>>939359
>In fact, I would like to see your bones
Here you go buddy, hope it helps.

>> No.939433

>>939430
looks pretty shitty. Like something you'd see on xvideos

>> No.939444

>>939428
I think I concluded there was no rotation maybe because you're using some sort of bendy bone. I've never used the software you're using, but I think the roll should happen all at the shoulder joint, not be distributed along the forearm.

>> No.939445
File: 33 KB, 463x344, wwaafrvfb.jpg

Again, not an anatomy expert, but the part that is the top of the shoulder when the arm is down somehow ends up on the back when the arm is up. Here >>939352 it looks like it ends up on the side instead.

>> No.939446
File: 3 KB, 698x1284, rfuckoffretard.png

>>939430
It looks fine, actually great job on the rig, don't let /3/ggots get to you
>>939433
Either show your better work or shut your faggot ass up
Literally nobody is gonna scour every frame of your animations to find a few weird deformations in the armpits. This anon's rig is better than 99% of actual professionally made games where, despite $100k of mocap, joints are still nearly always made of playdough

>> No.939447

>>939419
>>939433
"Anybody who comes into the Youtube Blender Tutorial forest will be lost. Everybody will become a Stalfos. Everybody, Stalfos." — Fado Fado

>> No.939449
File: 265 KB, 1654x1080, maxfaggivesblendletsnmayansthecoldshoulder.jpg

>Stand aside blendlets and mayans, maxfag who like to make rigs coming thru.

The nice way to solve it is to use nothing but bones so you can pretty much autorig everything you make after skinning the nude version of your character.
You will need to use multiple bones to get good deformation.

If you solve this using blendshape/morphs you run into the hell that you'll need to create new morphs and blends for every outfit you want your character to wear.

My current realtime solution uses 8 extra bones per shoulder joint to emulate how muscle would move around the scapula and shoulder on my hero characters, and a lower fidelity variant with way fewer on my NPCs (two or three, haven't touched them in a while).

>>939445
On its own, you can only raise your arm to slightly above horizontal; the rest of the range is provided by your trapezius elevating and rotating your shoulderblade/scapula.

>> No.939456

>>939447
fucking kek anon, I almost choked

>> No.939458

>>939449
too sick

>> No.939459

>>939449
I like Maya anon's version a lot better. You are making the same mistakes the blendlet is making. The breasts not being affected by the arm motion is virgin tier understanding of anatomy.

>> No.939461

>>939449
don't pretend to know how to rig when you just autorig everything (and get bad results, too)

>> No.939463

>>939449
Not sure what software has to do with this, but okay. You're right on the bones vs blendshapes thing tho, I do bones-only for the same reasons. Our characters have too many outfits to manage making these shapes for all of them.
Your images look good enough in my eyes; I would like to see a moving version like the one I posted above.

>> No.939464

>>939459
Rigging the chest on this base model to deform the breasts would cause too many issues with clothes.
If I ever wanted to make a '4fap variant' I'd have to rig the soft tissue of the titties separately, only for the naked version.
The game she's for is good clean fun action pew-pew type stuff tho, not any naughty coomercial purposes.

>Not til I turn ~60 and fall down a flight of stairs taking damage to my endocrine system and start outputting nothing but hentai.
>like my role model Shirow Masamune before me. #stillwaitingforAppleseed.

>>939463
>Not sure what software has to do with this but okay.

Absolutely nothing, you can ofc do the same thing in any package. I just find the board's silly software-war culture amusing so I pretend to partake.
Making some yoga poses to showcase the articulation has been on my 2doList for a while now. I'll post some if I can find the time.
>What's a good gif/webm solution these days? All the capture/converter stuff I used to have on hand is so old it's been discontinued.

>> No.939465

Before you invest more and more time playing a game of whack-a-mole, adding more and more corrective shape keys you sculpt by hand, you can try the following things, in order:

- Use "preserve volume" on the armature modifier.
- Check that the vertex weights for bones are normalized and tweak them if necessary. Don't forget to normalize after each edit.
- Convert affected bones to bendy bones and add roll/tilt drivers depending on the state of the child bone axis. This works pretty much the same way corrective shape key drivers do, just that you don't have to sculpt anything and it's non-destructive. You have to use the "transform channel" input for the driver to work in animation.
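The normalization step in that list can be sketched in plain Python (not bpy; bone names are hypothetical): each vertex's bone weights get rescaled so they sum to 1.0.

```python
# Sketch of per-vertex weight normalization (plain Python, not bpy;
# bone names are hypothetical).

def normalize_weights(weights, eps=1e-9):
    """Rescale one vertex's bone weights so they sum to 1.0, the way
    a per-vertex 'normalize' pass does."""
    total = sum(weights.values())
    if total < eps:
        return dict(weights)  # unweighted vertex: leave it for manual fixing
    return {bone: w / total for bone, w in weights.items()}

vert = {"upper_arm": 0.6, "shoulder": 0.3, "clavicle": 0.3}  # sums to 1.2
fixed = normalize_weights(vert)  # each weight divided by 1.2
```

Un-normalized weights like the example above deform more than 100% in total, which is one common source of "mystery" bulges around the shoulder.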

>> No.939475

>>939459
Clearly not many people in this thread have had the opportunity to examine an RTX HD 8K 90FPS 3D woman, so let me tell you something.
Have you noticed that women usually wear a bra? Have you noticed that those bras often have a string that runs on top of the shoulder?
For reasons only known to them and perhaps the Lord Of Darkness, they love to puppeteer their tits using those strings but without a bra, those large displacements don't happen that much on natural women. I know. It's shocking.

>> No.939477

>>939475
It varies a lot, and boobs of the size and shape shown in the video will tend to follow the pectoral area more than "hangier" tits.
So it's quite accurately animated anon. I know. shockin.

>> No.939478

>>939464
If you can't do it, you can't do it. No need to make excuses.

>> No.939495

>>939477
That's why I said "natural women". Because unfortunately a lot of them undergo surgeries that actually ruin their looks in an effort to appeal to people like you.

>> No.939496

The reason I've said this >>939475 >>939495 is because at some point you're going to have to decide what you want. Do you want realism without compromise or are you willing to accept that what you're looking at is cartoonish in some way?

>> No.939497

I'm telling you all this because I'm worried that with your mentality, when someday you'll have an opportunity to face a real woman, you're going to be somewhat disappointed, you're going to tell her "you look like shit" and you'll decide to go with your own male sex instead, because male skin is stiffer and more conducive to the kind of large displacements you like.
But I don't blame you for this. It's the media industry that is at fault.

>> No.939500

>>939497
> with your mentality, when someday you'll have an opportunity to face a real woman, you're going to be somewhat disappointed

I wouldn't worry about it, people (including artists) have very different sets of criteria for their fantasies and their partner before them.
Most dudes don't get all insecure because how ladies fantasize about horsecock hung men with chiseled abs.
Likewise most women don't experience crippling levels of boob-envy either.

Artists striving for whatever sense of beauty is ideal to them in art is a hell of a lot more interesting than when people make bland, boring, everyday-looking people in feeble attempts to offend no-one amongst those who made being offended their main mode of existing.

Glassy people who keep going all 'mirror mirror on the wall' and throw a hissy when faced with beauty unattainable to them are the broken ones.
There's a big difference between 'body positivity', meaning 'accepting the way you look', and demanding that nothing you see in fiction makes you look inferior in comparison.
That's really the very opposite of accepting who you are.

In art we're not limited to what's normal or even realistic; one should not degrade one's fantasy by making it too mundane.
If the anon likes his titties real perky, I advise we let him celebrate that anti-gravity titty that sets his heart ablaze.

>> No.939503

>>939500
>ladies fantasize about horsecock hung men with chiseled abs
I think I've watched a lecture from Jordan Peterson outlining the detrimental effects those fantasies have had on the western civilization. (just kidding)
But I agree with you. I just felt the need to state the obvious just in case it's of any use to anybody.

>> No.939516
File: 206 KB, 446x336, FquKEUiaAAAFz29.gif

>>939151
The Art of Moving Points
<< from https://hippydrome.com/

another good resource
http://wiki.polycount.com/wiki/Topology

>> No.939522

>>939430
Very interesting. And thank you for sharing. I swear people treat bones like they're million dollar industry secrets. So sharing your bones is a commendable thing.

>> No.939524

>>939516
>loses volume

/trash/

>> No.939573

>>939524
I don't think >>939516 has issues with volume. It's very nice looking if you assume it represents the stretching of some (rubberized) synthetic fabric. It just may not be what you like.

In more general terms I was trying to understand what could be the underlying cause of artists almost always overdoing the stretching action.

Aside from personal taste, audience's taste and other hard to quantify reasons, I think there may be a misconception about how distortions propagate on surfaces.

If you look at several examples of very well curated rigs and try to reverse-engineer what kind of formula the artist had in mind when estimating the influence of a moved point on its surroundings, it often resembles 1 - distance or 1 / distance. However, in many other places in nature it has been observed that the correct formula is almost always a variation of 1 / distance ^ 2.

I may be wrong. This is just my engineer's take on the issue.
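The three candidate falloffs in that post can be compared numerically in plain Python (the sample distances are illustrative only):

```python
# The three influence falloffs discussed above, evaluated at a few
# distances (plain Python; distances are illustrative only).

def linear(d):         return max(0.0, 1.0 - d)  # "1 - distance"
def inverse(d):        return 1.0 / d            # "1 / distance"
def inverse_square(d): return 1.0 / d ** 2       # "1 / distance ^ 2"

samples = {d: (linear(d), inverse(d), inverse_square(d))
           for d in (0.5, 1.0, 2.0)}
# At twice the distance, inverse-square influence drops to a quarter
# rather than a half, so it dies off noticeably faster than 1/d.
```

The practical upshot of the inverse-square guess would be tighter, more localized weight falloff around the moved point than the near-linear curves the post describes.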

>> No.939575

>>939573
> what kind of formula the artist had in mind when estimating the influence of a moved point on its surroundings, it often resembles 1 - distance or 1 / distance.

It arise from how vertex translation is calculated between the transforms it been skinned to, historically it's all linearly interpolating between the skinned coordinates and esp on real time side we're often very limited to how many weights we can have at each point.

Ideally each joint would not be interpolating in such a crude way but rather work more as a 3D spacewarp.
Rotating a skinned vertex should have it arc thru space instead of moving in a straight line.
Ofc that'd be much harder to implement as a coherent skin tool usable by artists as well as being computationally expensive.

I'm a realtime guy, so my workaround is to use multiple helper bones and very hard skin weights to force each point to trace an arched path, preserving the roundness of volumes.
That's what all the numerous bands in >>939449 are about: they force verts to keep a fixed offset from the volume and curve thru space by effectively 'parenting' verts to rotation centers other than the main shoulder joint itself.

>>939516
Hippy is a legend. Taught myself so much from his stuff back in the day.

>> No.939583

>>939575
>Rotating a skinned vertex should have it arc thru space instead of moving in a straight line
That's what "Preserve Volume" (a.k.a. dual quaternion skinning) does in Blender. In practice however I don't see it being used very often because it tends to overshoot. It's not too hard to compute, but some amount of loss of volume seems to be more visually tolerable than a gain in volume.
It also needs to be supported in the game engine, and I'm not sure all of them do.
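The straight-line vs arc distinction behind all this can be shown with a toy 2D calculation in plain Python (this is the geometric idea behind dual-quaternion-style blending, not any engine's actual implementation):

```python
import math

# A vertex at radius 1.0, weighted half-and-half between a rest bone
# and a bone rotated 90 degrees. Blending the two transformed
# positions on a straight line (linear blend skinning) shrinks the
# radius; blending the rotation itself (the arc) keeps it.

def rotate(p, deg):
    a = math.radians(deg)
    x, y = p
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

p = (1.0, 0.0)
q = rotate(p, 90.0)

# Straight-line blend of the two transformed positions.
linear_blend = ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
# Arc blend: apply the halfway rotation instead.
arc_blend = rotate(p, 45.0)

def radius(v):
    return math.hypot(v[0], v[1])

# radius(linear_blend) is ~0.707: the joint pinches in.
# radius(arc_blend) stays at 1.0: the volume is preserved.
```

This is the "candy wrapper" collapse in miniature: the worse the angle between the blended transforms, the more the linear blend cuts the corner.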

>> No.939586
File: 57 KB, 582x514, mvp_falloff.jpg

That's a panel in Weight Paint Mode in Blender. The default is on the left. If you ignore the start and the end of the curve, which have been rounded off, it's indeed a linear falloff. On the right is another preset which I think is more correct.

>> No.939590

>>939583
Yeah, that's what mainly ruins DQ for my purposes. While it's a lot faster and more predictable than log-matrix blending, the performance impact is still pretty bad, like lose ~30% of your performance kinda bad, which is why most game engines prob still shy away from it and don't even provide the option.

The artist in me would love to use it, but between a perfectly smooth butt and a perfectly smooth frame rate the gamer in me knows what's the right call.

>> No.939595

>>939449
That's really nice. Wasn't the guy from God of War Ragnarok going to give a talk about exactly this at GDC 2023? I wish someone would film it.

"Joint-Based Skin Deformation in 'God of War Ragnarök'", on March 24th

>> No.939615
File: 283 KB, 1910x1080, 1716CDD1-B2B9-46F3-A017-A6AB215DE8DD.jpg

>>939415
To be fair you don't have to be good, you just need to be able to show a jury that Alec Baldwin pulled the trigger.

>> No.939619

>>939590
>While it's a lot faster and more predictable than log-matrix blending, the performance impact is still pretty bad, like lose ~30% of your performance kinda bad, which is why most game engines prob still shy away from it and don't even provide the option.
You're saying nothing. It DEPENDS ON WHAT PLATFORM YOU ARE ON.

Playstation 5 is literally using Ziva RT muscle rigs in AAA games like spiderman and using raytracing on top of that

>> No.939637

>>939619
Do you have a Sony devkit, and are you targeting the PS5, anon? I don't.

>> No.939651
File: 844 KB, 419x346, shoulders.gif

>>939430
boy I wish I werent working alone and could have someone else do all that work. sheesh

>> No.939653

>>939496
What you're doing is cartoonish "in some way" no matter how realistic it looks. IMO the correct philosophy is to embrace stylization and find a look that suits your project.
There is, though, this big push for indistinguishably real CG to replace live action performers in stunts and when they've ODed. Can't get away from that, and they're pouring billions into the goal.
For me, I like cartoons and I want to make cartoons, and I'm just OK with the soulless masses who hate cartoons hating me and what I do. Even so, there's plenty of styles which are "real enough" to not draw the ire of your average "general audience" member.

>> No.939662

>>939464
Licecap for quick gifs

>> No.939667
File: 45 KB, 640x360, minimal_rig.jpg

>>939651
That pose you've depicted is impossible for a human. We were discussing it earlier in the thread.
And you have way too few bones. I've been told by the youtube tutorial people that pic related is the minimal amount.

>> No.939670

>>939653
I get that people spend their time imitating studio styles because they would like to be hired by them. But the job market for those positions is such a lottery that unless you're super talented and you know it, I wouldn't bother.
Also, why would a studio hire the kind of talent they already have?

>> No.939675

>>939670
idk? idc though. I'm not seeking industry employment. I'm trying to make my own project. I think the industry is gated against certain types, and in the end your reward is going to be working in a vfx sweatshop.

>> No.939696
File: 403 KB, 640x480, 0001-0250.webm

I've added some realism.

>> No.939698

Unfortunately in Blender, the Soft Body modifier which is responsible for realism also causes a noticeable loss of volume on extreme bends. I've tried it several times in the past but I was never satisfied with it.

>> No.939748
File: 453 KB, 294x233, buttrig.gif

>>939696
you need an extra joint to conserve butt volume

>> No.939750

>>939151
Have you tried lifting?

>> No.939761

>>939748
good idea but lazily executed. Smooth out the vertex weights ffs.

>> No.939778
File: 140 KB, 640x480, 0001-0250.webm

>>939761
>Smooth out the vertex weights
That seems to be a very popular misconception around here. No doubt it comes from some youtube tutorial. Smooth vertex weights translate to bad pinching, the need for more corrective joints, the need for more smoothing, and so on.

>> No.939779

>>939778
you need to weight paint it better because it's still deforming badly.

>> No.939782

>>939779
You show me how you do it.

>> No.939785

>>939782
not that anon, but this is mine
>>938927
>>938926

>> No.939787

>>939785
It's interesting. At least it's not the same thing the Youtube Tutorial Cargo Cult tries to replicate over and over.
I made >>939054 in that thread.

>> No.939997

Why is it that when I use a "stretch to" constraint, the length of the bone has to be manually recalibrated by pressing the little X? Why can't the stretch to constraint just know how long the bone needs to be automatically? And then if I wanted a custom length for whatever the fuck reason, I could play with the slider.
Because every time I move a bone that relates to the stretch to constraint, I have to go through the ritual of returning the pose to rest and pressing the X to get it back to normal. It's a huge pain in the ass. I swear 25% of my time is spent resetting the length of stretched bones.

>> No.940066

>>939997
idk blender specifically, but the answer to your question is that that constraint was made with some specific use case in mind, and yours isn't it.
Schools get criticized for teaching the back end of tools or obsolete techniques for effects which are solved by off-the-shelf tools, but here's an instance where knowing how a constraint works helps a lot.
More often than not I find I can write an expression that does some complicated thing faster than I can find the right constraint in the menu. What does "stretch to" need to do?
-aim at the target
-scale the joint on Z till the child joint touches the target
So you first need a look-at constraint, EZ. Then you need to capture the length of the bone, which is the tip's local Z position. Then you need the distance from the base bone to the target. There are nodes for that in maya, idk about blender; otherwise there are trigonometric functions that do that. Then you divide the distance by the length of the bone and make that the Z scale of the joint. Voila, you have a stretch-to constraint that works how it's supposed to.

>> No.940087

>>939151
>How do I actually get good shoulder deformation?
>if I only knew the steps.
>I need a no fucking around, honest to goodness shoulder solution.
Hard work, studying, practice, everything you're avoiding by making this thread. You don't need the fancy setups if you can't do it without them.

>> No.940088

>>940087
Back again? Post shoulders.

>> No.940091

>>940088
I don't know what you're talking about, but you won't get anywhere just by blindly begging.

>> No.940094

>>940091
You don't have to know what I'm talking about. Prove you have the solution by posting your shoulders. And then from there, you can set me on the start of the path toward this knowledge you speak of. Until that happens, you're just speaking in empty platitudes.

>> No.940216

>>940094
the answer is twist joints and corrective blend shapes. It is the best way. Industry has already abandoned dynamic skin on a muscle rig. It just doesn't look better and isn't less work.

>> No.940227

>>940216
Even if it does look better done right, the amount of work needed to have it look correct means it's unusable by all except absolute top tier artists.

A muscle rig implemented naively will just add another level of uncanny.

>> No.940245
File: 208 KB, 640x480, 0001-0250.webm

It's not perfect because I did it very quickly, but I wanted to try a new model.

>>940216
I think the answer is that you're a dipshit who has yet to post one example.

>> No.940249

>>940216
>Industry has already abandoned dynamic skin on a muscle rig
what did you mean by this? A muscle rig by its very nature replaces skincluster

>> No.940250

>>939151
>>940094
>Prove you have the solution by posting your shoulders.
>you're just speaking in empty platitudes.
I couldn't care less what you think I can or can't do. You won't get the skills to understand the half dozen examples that have already been posted for you out of charity, if you're not putting in the work.

You sound exactly like the Frenchman who went schizo a few months ago.
>You don't know what you're talking about!
>You WILL give me the easy solution, or I'll kill myself!
Go for it, anon.

>> No.940254

>>940250
reply to this >>940249

>> No.940275

>>940250
What part of: "post an example of your work or stop giving bad advice to people" don't you understand?

>> No.940292
File: 976 KB, 467x408, shoulder2.gif

>>940249
by dynamic skin I mean using a cloth simulation on a skin mesh which stretches over a muscle rig. It's supposed to cure all of the loss of volume, pinches, pulls, etc of rigging a very realistic character. They jerked off about it really hard for the incredible hulk movie in the 2000s. It does work, and it does look good, but in the end it's a rue goldberg device to make you base mesh deform how you want. You can just make it do that with less laborious methods. Twist joints and corrective blenshapes.
>>940245
My very first post in this thread is of a shoulder I made. It has 1 twist joint in the upper arm, which isn't really relevant, and no corrective blendshapes. Despite that it looks pretty much fine.

>> No.940294
File: 3.18 MB, 640x360, 1676661616078673.gif [View same] [iqdb] [saucenao] [google]
940294

>>940292
>cloth simulation on a skin mesh which stretches over a muscle rig
what in the fuck? Why would you want to put a skinCluster on top of a muscle simulation? Are you ok? You haven't done this before with industry tools, have you? Seems like you're stuck in the 2000s.

>> No.940303
File: 244 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
940303

This is the rig that I've used for >>940245, just out of curiosity. There are no manual corrections, but I'm experimenting with bone weights that are able to trick Corrective Smooth into doing the right thing.

>>940292
Looks good.

>> No.940311

>>940294
you don't put a skin cluster on it. By "skin" I mean the biological organ. The hulk's skin was a cloth sim which collided with the muscle rig underneath so that it slides over bones and expanding muscles.
I am also CRITICIZING this technique. It's just the ultimate expression of any kind of dynamic solution. You want the body to move exactly like it does in real life? A muscle rig will work, but it's a lot of work, and twist joints + corrective blend shapes look just as good. Additionally, you can get away with hardly doing anything at all. Just a joint for the scapula to bridge the twist from the ribcage to the shoulder.

>> No.940338

>>939361
>>939449
Character names where are they from?

>> No.940351

>>940338
I don't remember the character name but:
https://www.deviantart.com/xnalara
https://github.com/johnzero7/XNALaraMesh
You should be able to get many like it.

>> No.940544

>Experimenting with different rigging methods.
>Getting some really good results.
>Most of the poses look really clean and flexible.
>One pose has an awkward pinch.
>Spend hours trying to diagnose the problem.
>Finally figure it out.
>Try to fix the problem.
>A little change here.
>A little change there.
>A bigger change here.
>A bigger change there.
>My rig is now completely unraveled.
>There are 5 more problems than I started with.
>Have to completely rework the rig from the ground up.
>Try a new method.
>The cycle repeats.
Rigging is a goddamn scam.

>> No.940570

>>940544
imagine if you'd just fixed that 1 issue with a corrective blend shape.

>> No.940571

>>940570
Imagine if you posted even one example of you doing that.

>> No.940574

>>940570
Ok, now it's crossing from obnoxious to kind of funny.

>> No.940586
File: 224 KB, 740x478, correctiveblend.gif [View same] [iqdb] [saucenao] [google]
940586

>>940571

>> No.940587
File: 383 KB, 1118x716, blendFast.webm [View same] [iqdb] [saucenao] [google]
940587

>>940571

>> No.940592

>>939748
are those a pair of balls im seeing?

>> No.940594

>>940586
I would have posted something similar myself eventually but I'm glad you did it mostly because I wanted to know if you had ever tried your advice yourself.
>>940587
I've already explained what the general issue with hand crafted Shape Keys is here >>939167
To be more specific, it's the fact that mesh editing tools (including sculpting) don't work correctly when there's an active pose on a model. You have to modify the mesh while it's in the T-pose, then switch back and forth for every pose you want to correct, for every joint.

It's not impossible but it's so impractical to the point that machine learning is being applied to this problem.

>> No.940599
File: 134 KB, 480x520, shape_key_pinch.jpg [View same] [iqdb] [saucenao] [google]
940599

And after all, look. To correct that specific condition by hand is particularly nasty. You'll know when you'll try.

>> No.940632

>>940594
there are tools for this. A handy script comes with "The Art of Rigging" which will duplicate the mesh in the pose to be corrected, let you edit it, and then subtract the pose deformation. It's perfect and unassailable. I also misplaced it on my computer and may have deleted it. Hand-making blends is not as imprecise as you think, but if you REALLY need to, you can always make a full muscle rig with a dynamic skin and use it to generate the corrective blendshapes.
>>940599
I spent more time trying to remember where the relevant menus were than actually creating that shape. If it's not right, fix it. If it's still not right, fix it more. There is no counter to the corrective blend solution. You are forcing it to be right, up to your own standards/laziness.
anything
ANYTHING else will require way more work to set up just to get the same results. You're trying to make da Vinci's screw copter work when helicopters are already flying in the sky.
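The "subtract the pose deformation" step is the interesting part of that kind of script. A minimal sketch of the idea (not the actual "Art of Rigging" script): invert each vertex's skinning transform and apply it to the sculpted posed position, and the difference from the rest position is the shape key delta. Here the skinning transform is a single 2D rotation per vertex for brevity; real tools invert the full blended 4x4 matrix per vertex.

```python
# Sketch: convert a corrective shape sculpted in a posed position back
# into rest space, so it can be stored as a normal shape key delta.
import math

def rotate(p, angle):
    """Rotate a 2D point about the origin (stands in for a bone transform)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

def corrective_delta(rest_pos, sculpted_posed_pos, pose_angle):
    """Rest-space shape key delta for one vertex."""
    # Undo the pose deformation on the sculpted result...
    sculpted_rest = rotate(sculpted_posed_pos, -pose_angle)
    # ...then measure the delta against the original rest position.
    return (sculpted_rest[0] - rest_pos[0], sculpted_rest[1] - rest_pos[1])

# A vertex at (1, 0) posed 90 degrees lands at (0, 1); the artist sculpts
# it out to (0, 1.2). The extracted rest-space delta is (0.2, 0): "push
# 0.2 along the bone axis", which the armature will re-rotate at pose time.
d = corrective_delta((1.0, 0.0), (0.0, 1.2), math.pi / 2)
print(round(d[0], 6), round(d[1], 6))  # 0.2 0.0
```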

>> No.940698
File: 1.84 MB, 640x360, Shoulders Progress.webm [View same] [iqdb] [saucenao] [google]
940698

OP here. Working hard on it. Pushing as far as I can without shape keys or manual weight painting. But it's still very far from ideal. Regardless, I wanted to share progress.
I've been kind of obsessing on it for a while, so I think I'm going to take a break on it and come back later. It would be nice if you guys had any tips or ideas for improvement.

>> No.940700
File: 1.39 MB, 640x360, Shoulder Progress Back.webm [View same] [iqdb] [saucenao] [google]
940700

>>940698
Another view. It looks better in the back than the front. I like how the shoulder and neck area deforms. It took a long time to get that to work.
All the deforming bones are colored red or purple. The red ones flex with Stretch To constraints and bendy bones. The purple ones are solid and approximate real bone locations.
Default-colored bones don't deform, but are helping in some way.
Teal bones don't deform, and approximate real bone locations.
Again, the results are far from ideal, but I'm still kind of liking it so far. I wish I could make it better.

>> No.940701

>>940700
Sorry for triple posting. But I just wanted to add that the idea here isn't to make every muscle of the body work flawlessly through bones (though, wouldn't that be sweet?). Rather, it's to get the overall large movements to deform in a clean way. To get the *macro* deformations clean.

>> No.940702

>>940698
Looks cool and the rig looks cool even by itself. I would like to see it as a robotic arm made of rigid objects parented to it.
I question the practicality of something so complex, but if you can make it work, it certainly looks interesting.

>> No.940703

>>940700
very good progress, this is looking quite good already mate.

>> No.940751
File: 3.23 MB, 1280x720, Shoulders No Flex.webm [View same] [iqdb] [saucenao] [google]
940751

>>940702
>I would like to see it as a robotic arm made of rigid objects parented to it.
How do you mean? I'm not sure how that would work. Ignoring all the red flexible bones, you're left with the rigid skeleton. So here, I disabled all flexible bones, and enabled some of the rigid ones to compensate. Surprisingly, the deformation is mostly the same. Perhaps even better? Huh...

>I question the practicality of something so complex
The ultimate goal is to make the mesh deform in a way that looks more natural. Less uncanny. Less robotic. And all in real time. I don't know why, but the idea that this system of rigging/skinning has been around for decades, mostly unchanged, and no one has found a solution for shoulder deformation without some auxiliary tool like shape keys, fills me with rage. And I'm honing that rage into the pursuit of what I desire.

When I figure out the magic arrangement of bones that gets really really good shoulder deformation, then the next time I set it up, it won't be so complicated, because I will know what I'm doing. And even if it's a little more complicated than your basic rigify skeleton, it will be worth the effort, because it will get better results.

>>940703
Thanks, mate.

>> No.940754

>>940751
this is kinda hot

>> No.940757

>>940751
>no one has found a solution
I've found that there is a problem with the math of bone weights in general. I'll make a thread about it and explain it in detail if and when I have time.
In the meantime, the workaround I've found is to use as little "green" as possible. Like here >>939778 because, long story short, half weights are the most affected by the math problem.
Give it a try. It may save you some extra bones.

>> No.940764

>>940751
>>I would like to see it as a robotic arm
>I'm not sure how that would work
Like the Terminator.

>> No.940766

>>940757
I would be interested in hearing about the math. I'm not really good at all the number stuff. I never learned stuff like vectors and tangents and all that kind of math. But I sort of get how it "moves".
When a single bone is weighted 100%, then when it rotates, the mesh rotates 100%. And because it's radial rotation, it kind of moves within the boundaries of a sphere.
But when two bones share the mesh, well now they have to divide the influence between each other. If both are weighted 100%, then effectively speaking both only have half influence, because they're sharing. If you weight one bone down to 50%, and the other bone remains 100%, then effectively one bone has a 33% influence, and the other bone has 66% influence.
If the total weight is below 100, then the mesh will get left behind, as neither bone is strong enough to move it fully from its original position.

That's my understanding of the math, which makes sense to me. But the problem I have with it is that there is no way to weight two bones that perfectly preserves the volume of the mesh. There are weight distributions that make creasing and flattening less obvious. But "less obvious" is not a solution to me. I want to figure out how to simulate mass, so that the creases occur where they're supposed to occur.

Another thing: Months back, some anons yelled at me about weight and topology. And while I think they were wrong to put so much faith in weight painting, I do think they were right about good topology. The topology of my model isn't perfect. But I'm constantly altering it to become flexible in the ways I need it to flex. It will probably change more as the experimentation continues.

>>940754
I'm going for the muscle mommy build.

>> No.940770

>>940764
You can find those kinds of rigs on youtube. Mechanical rigging is much simpler, because you don't have to make anything bend. Rigidity in mechanics is expected. But muscles squish, and skin warps. I built my rig with squish in mind.

>> No.940771

>>940766
I don't know if you can read anything on my mouse-drawn diagram, but that's the mechanism by which bone deformations lose volume and pinches appear.
Technically it's unavoidable, but you can improve things a lot, to the point that it becomes almost unnoticeable, as I've shown here >>939852 >>939858 if you know what you're doing.
The problem is that the "Assign Automatic from Bones" function in Blender gives you weights that are for the most part around the max error area.
That's the outline of the issue.

>> No.940772
File: 40 KB, 477x401, diagram.jpg [View same] [iqdb] [saucenao] [google]
940772

Forgot diagram.

>> No.940773

>>940766
>Months back, some anons yelled at me about weight and topology
That's another rabbit hole. People around here treat topology like a religion; however, for the most part it only influences the per-vertex normal averaging. We were loosely discussing it at the end of >>939893
Long story short, Blender defaults to an algorithm that is especially sensitive to edge angles, so naturally Blender people think that God intended that only models with an aesthetically pleasing topology will render good.

>> No.940777

>>940771
>>940772
I don't get your diagram. Where are the bones? What's moving?

>>939852
That's nice for the rotations you gave it. However, the arm and shoulder have a much higher range of mobility. The arm has to be capable of reaching down behind the back, and of rotating all the way up and over behind the head. Which is a rotation of more than 180 degrees. Possibly around 210. Granted, at the extreme extension, even the human body would appear stretched. But still, it has to rotate pretty far before losing volume or extreme clipping.
Then add to that, the shoulder joint has to be capable of twisting roughly 90 degrees.

>>939858 (Cross-thread)
Bendy Bones have an easier time creating smooth shoulder rotations. I've done a number of tests myself in that regard. The version I posted is like version 10 of bendy experiments. However, one bendy bone going through the shoulder and arm creates something of a noodle-arm effect. The armpit curves in an unconvincing way when the arm is down, and the bulk of the shoulder bends inward when the arm is up.

I don't know. I think we both agree that weights are fundamentally flawed. But where we differ, is that I think whether or not you use automatic weights or fix the weights manually, it doesn't solve the fundamental problem. Like, there's some key element missing in the whole system, that I can't quite conceive. It's like on the edge of my mind. But I feel it. There's got to be a better way.

>> No.940783

>>940777
>I don't get your diagram
The circle represents the motion of a generic hinge. The Linear Interpolator is the method that's actually used by the Armature system. There is a discrepancy, and it gets worse when a vertex is "equally contended" between two bones.

>weights are fundamentally flawed
They're implemented the same way everywhere at the moment. It's not just a Blender problem.

>it doesn't solve the fundamental problem
Mathematically it doesn't.

>There's got to be a better way.
Almost certainly. The biggest problem I've encountered so far are Blender people telling me to shut up, watch more Youtube tutorials and make some donuts instead.

>> No.941590
File: 677 KB, 1280x720, Shoulder Progress 04.webm [View same] [iqdb] [saucenao] [google]
941590

Shoulder update.
Going back to an older idea. Instead of using a bunch of stretch-to joints, I just place a bunch of small bones in the center of a muscle's mass. And then I let automatic weights blend between them. It gets pretty good results, because automatic weights always creates perfectly even gradients. So it's like creating an even blend between muscles. I can move the bone around to dictate where the center is, and grow or shrink the bone to effectively increase or decrease the bone's weight strength.

Deformation is nice, but the hard part of this method is making the bones move where they're supposed to move. I have to make this bone relate to that bone, then set up a Copy Transforms constraint to another bone and set it to 50%. And then I have to set a Child Of constraint to another bone, and set that to 50%.

By the way, it's really annoying that I have to set the inverse every time I set up a Child Of constraint. I don't understand that. I wish the Child Of constraint automatically figured out the inverse for me. It's just so inconvenient to have to press the button every time I edit the bone.

In fact, this system would really work well if there was a way to quickly set up multiple child constraints to a single bone. Like, why can't one bone be a child to multiple bones? And then it automatically determines the difference between the bones. And there are sliders to set the weighted influence of each parent. This parent has 20% influence. That parent has 50% influence. And this parent has 30 percent influence. Imagine the possibilities. Essentially, you could create a lattice of floating points. And with enough tweaking, it would deform in this really cool dynamic way.
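The weighted multi-parent blend described there can be sketched numerically (a hypothetical illustration with made-up numbers; translation only for brevity, where a real constraint would blend full matrices):

```python
# Sketch: a child bone "hovers" at the weighted average of where each
# parent would individually place it. Influences are normalized first.

def blend_parents(rest_offset, parents):
    """parents: list of (parent_position, influence) pairs in 2D."""
    total = sum(w for _, w in parents)
    x = sum(w / total * (p[0] + rest_offset[0]) for p, w in parents)
    y = sum(w / total * (p[1] + rest_offset[1]) for p, w in parents)
    return (x, y)

# Child with 20% / 50% / 30% parents, as in the post above.
pos = blend_parents((0.0, 0.0),
                    [((0, 0), 0.2), ((10, 0), 0.5), ((0, 10), 0.3)])
print(round(pos[0], 6), round(pos[1], 6))  # 5.0 3.0
```

Moving any one parent drags the child proportionally to that parent's influence, which is exactly the "lattice of floating points" behavior being asked for.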

>> No.941634

>>941590
It looks good but you're getting close to a one bone per vertex situation.

>> No.941648

>>941634
Not him but does that really matter?

>> No.941649

>>941648
There is nothing wrong in principle. In Blender you can also stack multiple armature modifiers on the same object. I have no idea how that works exactly but you can.

It's a matter of how much complexity and required effort are you willing to tolerate.

>> No.941651

>>941590
>automatic weights always creates perfectly even gradients. So it's like creating an even blend
You can see that it doesn't work because it leads you to add more and more corrective bones. If it did work, you wouldn't need to do that.

>> No.941654 [DELETED] 

>>939151
https://youtu.be/1sFyrfqTdcg

>> No.941656

>>941634
>>941648
nah, looks quite alright, but I wouldn't really do more if you want to use that system in a game engine. Managing that many bones in realtime brings its own challenges.
The rigs I use in UE5 now have about 400 bones and it's already a bit of a pain. Characters have about 50k polys so it's still far, far from a bone per vertex, but I wouldn't go over.

>> No.941695

>>941651
But it does. Weights always have to equal 1, or else when you move the entire rig, the vertices weighted less than 1 will be left behind. Automatic weights always achieve 1. They have to. If they didn't, the option would be useless. Automatic weights achieve 1 in the most linear way possible. If you have two bones, then the weights will be evenly distributed between them, equaling 1. If you have three bones, again, even distribution equaling 1. However, when you add more and more bones, you will notice that each bone has a smaller and smaller amount of color in weight paint view. Visually speaking, it looks like each bone is weak individually. But if you were to add all of those individual bone weights together, it would equal 1.

I do understand that automatic weights don't blend "overlap" weights, so to speak. At least, not in a way that optimally preserves volume. But personally, I don't think that matters to what I'm trying to achieve. I want to take advantage of the crease and collapse. I think I can utilize that. For example, notice the bones I used for the actual shoulder muscles. 3 bones to approximate the 3 muscle groups that make up the bulk of the shoulder. When the arm lifts up, the shoulder bones fold and collapse against the trapezius muscles. Now, if I were to manually weight paint that, it would actually be kind of a hassle to get it to fold so smoothly. But with automatic weights, it's folding exactly in the center between the shoulders and traps, with complete linearity, so it's nice and clean.

Most of the "corrective" bones you see, actually mime realistic muscle positions. They're corrective because they are correct.

>>941634
Nah, it's cool. I doubt it will come to that. The way I envision it, you will only need to define center points. The center of a muscle. And then vertex weights will handle the rest. The idea is really starting to blossom in my head, I'm just having a difficult time making it work with the tools I'm given.

>> No.941706

>>941695
>Weights always have to equal 1
That is enforced by the algorithm independently of the weight paint. There is a threshold. If a vertex receives nearly 0 influence, it's left behind, but in all other cases the sum of the influences is always normalized to 1. I take advantage of that behavior specifically to do my creases, so to speak.

>> No.941708
File: 205 KB, 764x562, weight stuff.png [View same] [iqdb] [saucenao] [google]
941708

>>941706
Sheeeeit. I think you're right. Just did a little test. Moved the rig at a jaunty angle, then deleted all the vertex groups for a clean slate. Assigned .3 to some vertices on the forearm, and they snapped into place with the bone like they're all 1s. Proceeded with the upper arm, assigning .3 to some vertices there, and they snapped into place just the same as if they were 1s. I extended the upper arm weight to overlap the forearm weights, and the overlapping portion went askew. So I guess I don't really understand how weights work after all. This confuses me.

It seems the threshold for whether or not they snap into place is 0.0001. Anything above is normalized; equal or below is treated as though it's zero. I don't understand it.
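The observed behavior can be written down in a few lines (a sketch based on the behavior described in this thread, not Blender's actual source; the threshold value is illustrative):

```python
# Per-vertex weights are renormalized to sum to 1 at deform time,
# unless the total falls under a tiny threshold, in which case the
# vertex ignores the armature entirely ("left behind").
THRESHOLD = 1e-4  # illustrative value

def effective_weights(weights):
    total = sum(weights)
    if total <= THRESHOLD:
        return [0.0] * len(weights)  # below threshold: vertex is left behind
    return [w / total for w in weights]

print(effective_weights([0.3]))        # [1.0]  acts like a full weight
print(effective_weights([0.3, 0.3]))   # [0.5, 0.5]
print(effective_weights([0.00005]))    # [0.0]  left behind
```

This matches the test above: a lone 0.3 weight behaves like 1.0, and only the *ratio* between overlapping weights matters, which is why the overlap went askew.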

>> No.941722

>>941708
>This confuses me.
It makes sense but it's counterintuitive. The only way I know is because I've read the source code.

>> No.941726
File: 1.43 MB, 1280x720, Shoulder Progress 05.webm [View same] [iqdb] [saucenao] [google]
941726

>>941722
However Blender handles weights, it seems to synergize with what I'm doing. I'm getting some nice looking creasing. This "center of muscle" idea might work out if I keep at it. If I can make the actual bones move in the way I need them to move, then it will work. That's the hard part.
How do I tell a bone to hover equally between multiple moving bones?

>> No.941766
File: 1.50 MB, 1280x720, Shoulder Progress 06.webm [View same] [iqdb] [saucenao] [google]
941766

>>941726
>>941722
Important update: I found the magical constraint that allows me to equalize bones. It's called the "armature" constraint. I never noticed it before, but it's there, and it's what I was looking for.

I used it for a few of my bones here. Don't know if the difference is noticeable, but the shoulder bones are moving how I want them to. Yet there is still a lot of experimentation to be done in order to use this magical constraint to the fullest of its ability. It's going to take me some time to make this work. But hopefully my next update will be way better than what I have now.

In the meantime, if you guys have any suggestions for improvements, or insights on weights, or breakthroughs for shoulder deformation, then please share.

>> No.943250

maybe one day:
https://www.youtube.com/watch?v=ViIvQWLm9rQ&

>> No.943253

>>943250
This video is my main inspiration, and the reason why I'm not settling for subpar solutions. It's also the reason why I'm posting my bones. Because fuck that guy for never revealing how he did that. Fuck his fucking selfish face.

>> No.943274

>>943253
I have a suspicion on how he did it. I'll give you two hints for now. Notice his framerate and also notice that he hasn't shown the wireframe.
If I'm right (and we may never know) you're not going to like it.

>> No.943276

The reason you're not going to like it is because it may have more to do with:
https://en.wikipedia.org/wiki/Finite_element_method
than with helper bones and constraints.

I can't tell you to the letter what he did. But if I'm right, that mesh is subdivided to hell into tiny squares. That's the short version of it.

>> No.943277

By this I don't mean it's impossible to approximate or replicate his results with bones and constraints.
I just wanted to tell you that in my opinion, that's not how he did it.

>> No.943280
File: 46 KB, 403x601, dynamic_mesh.jpg [View same] [iqdb] [saucenao] [google]
943280

If for any reason you want to experiment with the Cloth simulator, remember to enable the Dynamic Mesh option, like I did in >>942465
It seems to remove the contribution from the armature deformation that precedes it, which is what you probably want.

>> No.943281

For the Pin Group above, go to the vertex group panel. Enter Edit Mode, select the whole mesh, create a new group and assign to it with a weight other than exactly 1.0 or 0.0.
And I know, it's annoying, it's slow, it's bugged, but from what I've seen it's the closest thing to what you want to achieve.

>> No.943289

>>940772
I haven't really followed this discussion very closely, but isn't all of this just caused by the default linear interpolation of vertex positions during rotations?

You can always change this right in the 'Armature' modifier, just enable 'Preserve Volume'.

Usually doesn't work that great for me, and doesn't translate well to most game engines, but there you go.

>> No.943291

>>939161
can confirm, precisely why character finaling is a thing in the VFX pipeline, and everyone hates it.

>> No.943295

>>943289
No. It's different. Look at >>942795
Unfortunately the complete explanation is scattered between 3 threads. I'm going to have to piece it back together at some point.

>> No.943345

>>943280
Look at this video. https://www.youtube.com/watch?v=WtNuewlgDq0

This guy is capable of doing it between two objects. Though his solution isn't exactly what I'm looking for, it is a way to understand what I'm trying to accomplish. He starts the video by stating self-collision isn't possible without simulation. He's a knowledgeable guy, so I'm tempted to believe him. However, I still have to believe it's possible, because nodes are so powerful. Perhaps he simply hasn't figured out the solution yet.

You don't have to watch the entire video if you don't want to. But at least skim it to see what I mean. To see how smoothly they squish. With perfect precision, and thus perfect creasing.

One of the guys in his comments says self-collision is possible with ray tracing. Furthermore, he has a video that talks about it. So I went and hunted down his video. https://www.youtube.com/watch?v=_kjp6AZpY24
But I was very tired, and literally fell asleep while watching. His video is part of a series where he attempts to model an entire human through geometry nodes. So I'm not sure if it will actually help. I'll have to watch it again more attentively and fresh to make sense of it.

>> No.943358

>>943345
I forgot to post the formula to find a point C in between A and B. It's just C = (A + B) / 2. Interestingly, he uses a Mix RGB node to do it. But that's not really applicable to self-intersections.

>> No.943360
File: 11 KB, 381x351, average.jpg [View same] [iqdb] [saucenao] [google]
943360

The issue is when a third point comes into the picture. You would expect A, B, C and D to be equally spaced; instead they're biased towards one side depending on which order you take the averages in.

>> No.943362

I'm sorry I can't give you a full explanation of this issue in a post, but if you look around there is tons of literature.
The way nature seems to work is that it tries all the possible combinations and finally converges to the only solution that's order-independent. Why that is, why nature does that, I have no idea, but you can observe it everywhere.
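The order dependence is easy to show numerically (an illustration of the averaging problem being described, with made-up point positions): relaxing interior points to the midpoint of their neighbors gives a different answer after one pass depending on update order, but iterating either scheme converges to the same order-independent solution, even spacing.

```python
# One-dimensional points with fixed endpoints; interior points are
# repeatedly replaced by the midpoint of their neighbors.

def relax_sequential(pts):
    """In-place, left to right: later averages see earlier results."""
    pts = pts[:]
    for i in range(1, len(pts) - 1):
        pts[i] = (pts[i - 1] + pts[i + 1]) / 2
    return pts

def relax_simultaneous(pts):
    """All averages computed from the same snapshot."""
    return [pts[0]] + [(pts[i - 1] + pts[i + 1]) / 2
                       for i in range(1, len(pts) - 1)] + [pts[-1]]

pts = [0.0, 5.0, 2.0, 9.0]
print(relax_sequential(pts))    # [0.0, 1.0, 5.0, 9.0]  biased one way
print(relax_simultaneous(pts))  # [0.0, 1.0, 7.0, 9.0]  different result

# Iterating washes the order bias out: converges to even spacing.
a = pts
for _ in range(100):
    a = relax_sequential(a)
print([round(x, 3) for x in a])  # [0.0, 3.0, 6.0, 9.0]
```

A single pass is order-biased; only the converged fixed point is order-independent, which is the "nature tries all combinations" observation in numeric form.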

>> No.943382
File: 3.22 MB, 1280x720, Mouthy Bones.webm [View same] [iqdb] [saucenao] [google]
943382

>>943360
I don't know what you're referring to exactly, because you seem to be allergic to replying to relevant posts or providing context. However, I did find out about this issue with the Armature constraint. When attempting to weave multiple bones with Armature constraints, they don't average out evenly. I can't create the master lattice of my dreams. However, I'm still finding the Armature constraint useful. It's still able to average 1 bone between multiple bones evenly. By deciding on one bone as a "base", I can chain a number of bones to average out between the base and the next link in the chain.

I've been using this chain-link arrangement as a test for mouth and lips. I haven't gone into face rigging for real yet. Pic related is just me tinkering with it. I still have to move individual lip points to get the exact shape I want. And pulling at the bone on the corner of the mouth tugs at all the other bones. So tugging the corner up gets started on a smile, and tugging it down gets started on a frown. Then I adjust the middle bones as needed to show more or less teeth.

The bones along her cheek are affected by the lip bones. So they move up and down accordingly, without any need to adjust.

The fat bone in the center of the head rotates the jaw. And the corner mouth bone is averaged between the chin and nose, so the corner of the mouth always knows the middle point. Even after I adjust it manually, it knows where it should be on average + adjusted, making it easy to get open-mouth smiles and open-mouth frowns.

So yeah. I'm not going to avoid the bias, but there's use for the constraint still.

>> No.943385

>>943382
>I'm not going to avoid the bias
Shit, I meant I AM going to avoid the bias.

>> No.943392

>>943385
It's all one reasoning. You showed me a video where some guy explains his exact solution to two spheres coming into contact and deforming.
I was trying to explain to you that when a third sphere comes along, an error appears. That's all.

>I AM going to avoid the bias
I believe you can do anything

>> No.943398

>>943392
I think I see. It's going to take a while for that to sink in.

>> No.943444
File: 183 KB, 734x663, file.png [View same] [iqdb] [saucenao] [google]
943444

>>943382
>Advanced Face boning
Nice to see someone else doing that shit. I keep trying to find some cool constraints or drivers, but every time I search for face rigging it's either "use Rigify", "put a jaw bone into the jaw", or shape key shit. Almost makes me believe >>939522 is right.
Any tips you might share on face joints regarding constraints and such?

>> No.943494

>>943444
>>Advanced Face boning
I don't know if I would call it "advanced". I literally haven't watched any face rigging tutorials. I watched a few "for beginners" rigging tutorials, and they all start with "use Rigify, and then delete the face, because we don't need it". So I've only gotten a glance at the Rigify face skeleton before deleting all those bones. In the webm, I just placed bones where I thought would make good transformation points. I'm literally just learning as I go. Tinkering. So I can't provide you with tips I don't know myself.

Did you understand all I said about the Armature constraint? I think the Armature constraint is an unsung hero, as it allows a single bone to have multiple parents, making the bone find the average between all of them. This is perfect for when you want a bone to float between two or more points.

It's not a good idea to create circular parenting, where two bones are the parents of each other. Because like the other anon illustrated with average bias, you get a situation where the bones skew more and more as they move. The more movement, the more they escape their average range. Because each is going a fraction one way, then a fraction back, and when you multiply fractions, you get smaller and smaller numbers, resulting in the bones not returning to their positions.

For example, I want to make it so the cheek bone is affected by the lip bone, and the lip bone is affected by the cheek bone. Because hypothetically, when you flex your cheeks, your lips move, and when you flex your lips, your cheeks move. It makes sense to me to assign them both ways, creating that circular parenting. But when I do that, the bones start moving wacky. So I have to decide what is going to move what. I decided that the lips will move the cheek. I see the mouth movement as dominant. More proactive. While cheek muscles are more reactive. So the cheek is the child of the lip.
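The circular-parenting drift has a tiny numeric version (an illustration of the effect described above, not any particular constraint setup): if two bones each chase the other by a fraction every update, their offset multiplies down toward zero instead of holding.

```python
# Two 1D "bones" each follow the other halfway, applied one after the
# other each update. Their gap shrinks by a factor of 4 every round,
# so they collapse together instead of keeping their spacing.

def update_cycle(a, b, follow=0.5):
    a = a + follow * (b - a)  # a moves toward b...
    b = b + follow * (a - b)  # ...then b moves toward the already-moved a
    return a, b

a, b = 0.0, 10.0
for _ in range(6):
    a, b = update_cycle(a, b)
    print(round(b - a, 4))  # gap shrinks every update
```

That's the "multiplying fractions" intuition made concrete, and why breaking the cycle (lip drives cheek, not both ways) keeps the bones stable.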

>> No.943510

>>943494
Also, I noticed this: https://www.youtube.com/watch?v=mB468Jh9aAY
Basically, the jaw doesn't rotate perfectly on a point. It sort of slides forward and down. It's a small detail, but when I applied it, the jaw suddenly looked better. I did it by moving the jaw controller into a particular position. Just trial and error, move the controller, test, move it again, test. Until the jawbone gets that little slide. You can probably find a dozen solutions to make it move that way.

>> No.943669
File: 430 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
943669

I didn't put much effort into this, but it's a combination of my patch for Automatic Weights and the Cloth modifier.
I hand-painted the amount of bulging by varying the goal strengths and by giving it internal pressure.

>> No.943673
File: 353 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
943673

Some UVs and slightly different parameters. The last part of the animation was to see how far I could push a sharp twist without pinching. It's a good 30 degrees.

>> No.943715
File: 447 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
943715

Closeup. I'm not good at this Cloth sim business yet but it is of interest to me. Basically, with the Dynamic Mesh option enabled it acts like a Corrective Smooth on steroids.

>> No.943776

>>939449
so you took the AMD approach, just with bones

>> No.943777

>>939778
>>939782
>>939785
see the yellow/orange?

>> No.943779

>>941590
traps

>> No.943830

>>943715
Twist testing is important. But don't lose sight of the fact that forearms don't truly twist like that. There are two bones connected at the elbow. So when the wrist twists, the elbow doesn't. That's why the elbow zone doesn't lose mass in real life.
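For what it's worth, the standard fix is a row of twist bones along the forearm with a linear falloff, so the wrist end carries the full roll and the elbow end none. A sketch of that weighting (function names are mine, not a Blender API):

```python
def twist_weights(n_segments):
    # linear falloff: the segment at the elbow gets 0,
    # the segment at the wrist gets the full twist
    return [i / (n_segments - 1) for i in range(n_segments)]

def segment_twists(wrist_twist_deg, n_segments=4):
    # hypothetical helper: how much each forearm twist bone rotates
    return [wrist_twist_deg * w for w in twist_weights(n_segments)]
```

With 4 segments and a 90 degree wrist roll, the bones rotate 0, 30, 60 and 90 degrees, so no single span of skin has to absorb the whole twist.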

>> No.943834

>>943830
Sure but I'm not going for realism. I've made >>943715 to show that it's possible to make large bends without self-intersecting geometry. The twist in the end was an accident.

>> No.943836

The problem is that Cloth simulation with the Dynamic Mesh option enabled acts like some sort of Corrective Smooth + jiggle physics and that's OK but then you get no skin stretching.
Without the Dynamic Mesh option instead you get skin stretching but it ruins the details of the mesh and loses volume a lot.

>> No.943838

So, like, there's an hard limit as to how realistic things you can make with the Cloth simulation alone. Needs more experimentation.

>> No.943896

>>943715
hahahaha.
What about your heinous pinching, I don't even have to look at the wires to see it through shading. Are you delusional enough to truly believe this is a good result?

>> No.943904

>>943896
Yes? And I've just posted the wires here >>943894 if you're curious about my excellent topology.

>> No.943905

>>943904
Why do your wireframes look like that? Is it a side effect of the method you're using?

>> No.943916

>>943905
I can post an example using a Makehuman model if you want. In that example I've used a metaball as the base mesh, but it works the same.

>> No.944013

Every blender tutorial is like: "I'm going to teach you how to use this node... By distributing instances on it!"

No! That's not showing me a practical use of the node. That's just another instancing tutorial. By the end, I still don't know how to use the fucking node. God, learning nodes is frustrating. And Blender's manual has the vaguest fucking descriptions ever. How do they expect people to actually learn???? Where's the REAL information???

>> No.944054

>>944013
You need to look at the spreadsheet and you need to consider that although it may seem that for every node, inputs are on the left and outputs on the right, that would be too simple and straightforward for Blender.
What happens instead is that node inputs also send data backwards and the red nodes really are spreadsheet column selectors.
It's amazing.

>> No.944078

>>944054
I vaguely get the idea that data is being sent back. I'm just not clever enough to utilize that.

And yesterday, I was learning about the spreadsheet, which is in part what prompted my angry post. Because even looking at the spreadsheet, I still can't manipulate faces how I want.

So basically, I saw this tutorial about how to add wrinkles onto a surface. This one: https://www.youtube.com/watch?v=bNGGwaHOzHY
It was really cool how he isolated specific zones to be the subject of deformation, based on how small or large the faces became. Stretched faces were like no-go zones for wrinkles. Then it blended into black, which experienced no changes either. Black space was like the in-between. And then, blending from black into blue space, the wrinkle deformations appeared. It was essentially like he created his own weight paint in geometry nodes, one that changed dynamically with the deformation of the mesh.

But the thing I didn't like about his demonstration is that it required creating two objects. One object was the standard mesh with standard "weights" (figuratively speaking). And then the second object deformed the weights. He grabs the data from the deformed object and projects it onto the standard object. Mixing the two sets of data together creates that dynamic weight effect.

But I figure, there HAS to be a way to pull this off with a single object. Blending two objects together seems unnecessary. I figure I can create the same effect if I can take the face area data, and map each face to be a value of 1 when the object is in rest position. And then as the faces shrink, they become a fraction, until the area reaches zero, and then their value is zero. And when the faces stretch, their value grows in proportion to their area. Twice the area = a value of 2. Thrice the area = a value of 3. Etc. And then I could force the wrinkle deformation(or whatever else), to only appear under values of 1.

>> No.944080

>>944078
>>944054
Using the Face Area node, I can get the data for all the face areas. Seeing the data on the spreadsheet is the easy part. The hard part is getting Blender to understand that I need all faces to change value in proportion to themselves, not in proportion to the mesh as a whole. Not only that, I need their areas mapped within that specific range I mentioned before. I try to use the Map Range node for that, but I don't know how to tell the Map Range node to take all the face data individually. How can I say "each face's resting area is 1"? The Map Range node does... something. I'm not even sure what it's doing. But I know it's not giving me the results I expect from it.
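What I'm describing, as a plain Python sketch (assuming I could capture each face's rest area somewhere, which is exactly the part I can't get nodes to do; function names are mine):

```python
def compression_values(rest_areas, current_areas):
    # each face is measured against its own rest area:
    # exactly 1.0 at rest, < 1 when compressed, > 1 when stretched
    return [cur / rest for rest, cur in zip(rest_areas, current_areas)]

def wrinkle_mask(values, threshold=1.0):
    # 0 where a face is at rest or stretched,
    # ramping up the more it shrinks below the threshold
    return [max(0.0, threshold - v) for v in values]
```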

>> No.944086

>>944080
I can't tell you anything very specific, but in general you can measure the angle between two vectors with the dot product, which you should find under the vector math operations.
Obviously, if you measure the angle between two neighboring normals you can tell if you're on the inside, the outside or on the straight part of the bend.
How to do that in Geometry Nodes? I have no idea. I only know how to do that in C.
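The math itself is tiny. In Python instead of C (same idea; assumes the normals are unit length):

```python
import math

def angle_between(n1, n2):
    # dot product of two unit vectors gives cos(angle)
    dot = sum(a * b for a, b in zip(n1, n2))
    dot = max(-1.0, min(1.0, dot))  # clamp against float error
    return math.degrees(math.acos(dot))
```

Neighboring normals at 0 degrees mean a flat region; a large angle means you're on the inside or outside of a bend, depending on which way the faces fold.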

>> No.944089

Basically you're looking for the second derivative of the surface and you want to use it to modulate the wrinkles. If I understand it correctly.

>> No.944090

And the fact that he uses face areas is probably because it's difficult (or impossible, or he doesn't know how) in Geometry nodes to do adjacency operations.

>> No.944094
File: 864 KB, 1280x720, Face area compression.webm [View same] [iqdb] [saucenao] [google]
944094

>>944086
>>944089
The angles of the faces aren't exactly what I had in mind. Only the areas. The sum of X and Y, disregarding Z.

>>944090
Do you think adjacency operations would get better results? Pic related is me fooling around with the face area node. Using shader nodes, you can visualize the effect of the face area node. It's just a material that reads the color from the geometry data, or something like that. Anyway, by attaching a color ramp node, I can isolate the color so that only the compressed area shows as white, while all the other areas show black. This isn't accurate to what I want. It's only a visual aid to show approximately what I'm trying for. Only the faces that shrink turn white. That would be the "wrinkle" zone.

I'm actually not trying to recreate wrinkles exactly.(that would be a cool byproduct) My aim is still in the pursuit of the perfect squish. I figure that by isolating compressed faces in this way, I can make them deform in such a way, as to squish against each other. The smaller they get, the greater effect the deformation will have. Or perhaps, the smaller faces require less deformation? I haven't thought that far, honestly. I just want to get this compression mask working first. That's what this is essentially. A mask.

>> No.944097

>>944094
>Do you think adjacent operations would get better results?
If you're working with a uniformly tessellated mesh, then conceptually it's the same thing.
>by isolating compressed faces
It may be a good idea in principle. Usually, when programming low level 3D stuff I try to stay away from anything that involves second derivatives because it gets complicated, but maybe there is a way.
Although I'm not sure how much better it can get compared to the Cloth sim in >>943715. I totally understand that you don't want to use it because it's slow and annoying, but I'm not aware that there's anything better in existence either.

>> No.944099

>>944097
What does uniformly tessellated mean? I can guess by googling the term. I plan on having clean topology, if that's what you mean. I've been working on that. Working on keeping everything in quads.

>Usually, when programming low level 3D stuff
I don't know what the levels of 3D are. What's low level stuff?

>Although I'm not sure how much better it can get compared to the Cloth sim
That cloth sim is admittedly very clean. However, I can sense in my bones, that there is a solution that is comparable in quality, while being done in real time. I see the fragments of the solution in this tutorial, that tutorial, and the other tutorial. Many tutorials come close to the solution via different means. Their ideas involve second objects. They *always* create a second object. But there has to be a single object solution to squish. There HAS to be.

>> No.944101

>>944099
If you're going to ask me what "uniformly tessellated" means, then I'm going to have to ask you what a "clean topology" is first.

"low level" is stuff like this >>943062 >>943091 where I wrote the whole rendering engine.

>> No.944111
File: 1008 KB, 1451x750, Mesh Body.png [View same] [iqdb] [saucenao] [google]
944111

>>944101
So low level means like programming? And I guess higher levels means stuff like UIs and stuff?

>If you're going to ask me what "uniformly tesselated" means, then I going to have to ask you what a "clean topology" is first.
That's a tough question to answer. But from what I've been told, clean topology is topology that remains generally uniform, with the edges aligned in such a way that when the object deforms, they shrink or grow in a way that makes stretching, pinching or twisting less obvious, retaining the structure as much as possible. Generally, using quads is a good way to ensure decent edge flow.

Pic related is my current mesh. I wouldn't call it perfect, but I've been adapting it along with the rig, so it does a pretty good job at deforming. I will likely make more adjustments later. There are more faces than strictly necessary. I tend to think in 4x4 squares, or 4 integer triangles. Because of that, the mesh is twice as dense as it needs to be. You could remove half of the edges, if you're worried about optimization. Say for a video game model. But personally, I'm only thinking of posing for now, and perhaps animation down the line.

>> No.944139
File: 119 KB, 586x568, makehuman_topology.jpg [View same] [iqdb] [saucenao] [google]
944139

>>944111
This reminds me that I hadn't even looked at the topology of the Makehuman model. Normally I don't care because good/bad topology is a dogmatic thing.

>> No.944164

>>944139
Uh huh... well I dunno what to say about it. Looks serviceable, I guess. Kind of basic. You can make almost any mesh look good with enough subdivision. But there are some basic rules that do matter. Just, ya know... Keeping it mostly uniform, so it's easy to map onto and shading doesn't have weird artifacts here and there. Including more faces around bendable bits, so they don't lose volume. That kind of stuff.

Clean topology probably matters more for lower poly models. As there's little room for waste. You have to make the most out of every face.

>> No.944418

>>944164
I went through this argument in another thread and I don't have the energy to repeat it but:

https://graphics.pixar.com/opensubdiv/docs/mod_notes.html
"The strategic placement of a pentagon in one of these critical spots ensures that the surface remains smooth, while allowing for complex topology to flow around."

There's that and the specific way Blender derives the vertex (smooth) normals from face normals. It's complicated but basically, the tutorial guys are always wrong.

>> No.944449

>>939184
>but they did do it by hand fairly recently
Even IF that is true, why on earth would you keep doing it that way? Work smart, not hard...

I see this all the time and I don't understand why some people are flat out refusing to keep up

>> No.944451

>>939151
>How do you get actually good shoulder deformation
Congratulations, you picked the one body part whose relevant bones aren't a hinge or ball socket. Have fun with the shoulder blades; most figure riggers pretend they don't exist.

>> No.944453

>>944418
>"The strategic placement of a pentagon in one of these critical spots ensures that the surface remains smooth
I have been thinking about that recently, as a passing idea. I haven't put it into practice yet, but the idea crossed my mind. I guess it has to do with how Blender decides which way the triangles of a face are oriented? With a pentagon, the triangles can change depending on the deformation, rearranging themselves for the optimal flow.

I'm not quite as dogmatic as other people when it comes to topology. I like doing quads because it's easy to follow, and it's stimulating in the way a puzzle is. One *solves* topology. It's a lot of fun. It feels good to figure out how to make everything quads.

But yeah, I don't take tutorials as gospel. If I did, I would be making shape keys and using cloth physics already. I would be so far advanced. But instead, I'm stuck trying to figure out how to do the impossible in geometry nodes, and make objects squish into themselves.

>> No.944457

>>944453
>which way the triangles of a face are oriented
Triangles have no orientation but quads have two diagonals, so the orientation matters.
For subdivision modeling it doesn't matter too much because quads tend to be flattened by the subdivision process.
For low poly modeling it's important and it's another rabbit hole.

>figure out how to make everything quads
It's an illusion. A pentagon manually subdivided into 5 quads, topologically speaking, is still a pentagon. Same for triangles.

>> No.945593

>>944094
After a lot of trial and error, and thinking on it long and hard, it's finally dawned on me that this face area method is impossible. Because Geometry Nodes doesn't actually understand what the armature is doing. In my head, I think of this object as having 2 poses. Pose 1 is resting, pose 2 is where I rotated the armature. And the object is going from pose 1 to pose 2, back to pose 1.

But nodes don't see it that way. Nodes only see one pose. No matter how I manipulate the object, nodes see it as a single pose. And this is the idea that's been so slippery for me. I just couldn't grasp it. But I see it now.

I have to tell nodes to take a snapshot of the object at rest, and then compare the resting data to the deformed data, and that's how I get the heat map. The only problem is: I don't know how to tell nodes to do that. I mean, in the tutorials, they all duplicate the object at rest, and then transfer data off the resting object. But I don't want to do that. There has to be a cleaner way. There has to be a way to get an object to see its past self through geometry nodes.(There isn't, but I'm going to obsess over it for another week before giving up)

>> No.945610

>>945593
If there isn't a way to access the rest pose in Geometry Nodes, you should request it as a feature. The modifier stack supports it.

>an object to see its past self
That's a slightly different idea and I think it's being implemented in the new physics nodes. I haven't tried them yet, so I don't know for sure.

>> No.945688
File: 917 KB, 1280x720, Tension Map.webm [View same] [iqdb] [saucenao] [google]
945688

I caved and just duplicated the object. And did the data transfer technique. I didn't follow the tutorial exactly. But I'm fairly certain this is accurate.
Using edge distance rather than face area this time.
Red zone indicates areas shrinking.
Blue zone indicates areas growing.
Nothing is happening yet. Next step, is figuring out how to displace the red zone, so that the faces don't intersect. I could use this to make wrinkles or whatever. But I don't want to.(yet) I'm pursuing a clean squish. A faux-collision type effect.
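The per-edge math behind the map, sketched in plain Python (my node setup does the equivalent; names are mine):

```python
def edge_tension(rest_len, cur_len):
    # negative = shrinking, positive = growing, 0 = at rest
    return (cur_len - rest_len) / rest_len

def tension_color(t):
    # red channel for compression, blue channel for stretching
    return (max(0.0, -t), 0.0, max(0.0, t))
```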

>> No.945690
File: 144 KB, 1658x686, Tension Nodes.png [View same] [iqdb] [saucenao] [google]
945690

>>945688
The relevant nodes, if anyone cares.

>> No.945694
File: 385 KB, 1840x651, rest pose shape key toggle.png [View same] [iqdb] [saucenao] [google]
945694

>>945610
>If there isn't a way to access the rest pose in Geometry Nodes, you should request it as a feature. The modifier stack supports it.
For some reason, reading "the modifier stack supports it", made me go back to google and try searching again. I tried a different search, and found an interesting result.
Apparently, there is an option in shape keys called "add rest position". It's right there at the bottom of the box. Clicking it appears to do nothing, until you check the spreadsheet. Under vertices, a new column called "rest pose" appears. Pic related.

You can access this data in geometry nodes, by making a new input, and then selecting "rest_position" from the modifier stack.
This random guy on twitter noticed it, and demonstrated how to access it for nodes. https://twitter.com/Nahuel_Belich/status/1549522380292296705 Googling it doesn't yield many results, except people asking for more information.
This *might* be what I've been looking for. I already played with it a little, and I can't get it to function with my current node set up. I can set the input to integer, vector, color, and float, and it gets different results, but nothing I'm looking for. The closest it came to getting good results is when I set it to boolean. I don't know what it was doing, but it almost resembled the tension map, only glitchier. I'll have to fiddle with it some more. Maybe I can change my nodes to use it in a different way. If "add rest position" fails, then I'll probably go asking for a proper rest position node.

>> No.945741
File: 247 KB, 1440x815, inside_of_bend.jpg [View same] [iqdb] [saucenao] [google]
945741

Notice that I've put two Geometry nodes modifiers in the stack. One before the Armature modifier and one after.

>> No.945755
File: 220 KB, 1852x573, Tension Nodes 2.png [View same] [iqdb] [saucenao] [google]
945755

>>945741
Thank you so much. You just saved me hours of headache. It works now. Duplicating the object is no longer necessary.
So basically, I tried this yesterday, and failed, obviously, because my previous attempt didn't yield any results. In retrospect, I think I had the modifiers out of order. It needs to be rest-nodes > armature > tension-nodes, in that order. Which seems obvious to me now. But yesterday, I was so mired in the nodes, I must have overlooked it.

Such a relief to actually have something *done* for once. And this is only the first step. I don't even know if what I'm attempting next will work.

>> No.945963
File: 1.60 MB, 720x720, Pinch Average.webm [View same] [iqdb] [saucenao] [google]
945963

Current idea: take the average of the vectors in the pinching region, and set the position to the average. All these vectors converge to the average point. However, I don't want them all to be placed on the average 100% like that. I want them to gradually find their way to the average using the tension map I created. Currently, I don't know how to do that. So I just got these wonky results.
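What I'm attempting, as a plain Python sketch (hypothetical names; the per-vertex weights would come from the tension map instead of being hardcoded):

```python
def centroid(points):
    # average of the vectors in the pinching region
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def squish(points, tensions):
    # each vertex moves toward the region's average by its own
    # tension weight: 0 = stay put, 1 = land exactly on the average
    c = centroid(points)
    return [tuple(p[i] + t * (c[i] - p[i]) for i in range(3))
            for p, t in zip(points, tensions)]
```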

>> No.946210
File: 2.90 MB, 720x720, Pinch Average 2.webm [View same] [iqdb] [saucenao] [google]
946210

>>945963
Not getting much of anywhere. But check it out. It's starting to bend in on itself. OK, this isn't a good result, but I'm starting to conceive of what the mesh needs to do. There are 2 things that have to happen.

1, The position of each vertex needs to space itself evenly in accordance with the tension map. With the highest tension pointing toward the average, and then gradually, as the tension lowers, each vertex points less and less to the average. The average indicates the most pinched point. So when all the surrounding vertices gradually move toward that point, they should avoid intersecting.

2, However, they won't avoid intersecting unless I can update the positions in real time. Right now, the tension map is made by reading the difference between the rest position and the armature deformation. The problem with this is that the armature deformation intersects. So the tension for the pinching zone goes from 1, to 0, back to 1 again as the bone goes from rest, to 90 degrees, to 180 degrees. I need to somehow tell the mesh to change the position of vertices, check for the change, then go back and make the necessary adjustments to the tension map.

However, adjusting the tension map dynamically like that would then change the vertices again, which would adjust the tension map again, which would change the vertices again. Around and around. I think it's called a feedback loop. And it would essentially fuck up the logic. So that's probably why I can't figure out how to make nodes do it. (Or maybe feedback isn't a problem, and I'm just too stupid to figure it out.)

I'm starting to understand why things like this are handled through simulation. Because one could make the necessary checks every frame. But I don't want to believe that simulation is required yet.

>> No.946212
File: 1021 KB, 640x480, 0001-0120.webm [View same] [iqdb] [saucenao] [google]
946212

>>946210
>I think it's called a feedback loop.
I think it's called a Cloth simulation. But, you know, sometimes it's not the Geometry Nodes you get at the end, it's the Geometry Friends you make along the way.

>> No.946474

I swear I saw the same thread years ago. Even remember >>939285 - but only the .gif.
/3/, you...you're not that slow of a board are you?

>> No.946482
File: 109 KB, 640x480, 0001-0150.webm [View same] [iqdb] [saucenao] [google]
946482

There are still a number of issues that I need to solve, but I was able to set up a full body cloth simulation.

>> No.946485

>>946482
>>946212
This only makes me want to learn how to squish with geometry nodes even more.

>> No.946520

>>946474
could be, I made the gif oct. 2021
but has it ever occurred to you that there are multiple people on this board trying to learn the same thing?

>> No.946522

>>946485
The new 3.6 beta finally has simulation nodes which make timeline dependent manipulation possible.
For procedural rigging we'd need access to armature bone properties though which isn't there yet.

>> No.946526
File: 95 KB, 640x480, 0001-0120.webm [View same] [iqdb] [saucenao] [google]
946526

>>946522
I'm totally burned out with simulations for now. I've also made some attempts at placing actual bones inside of the cloth simulation.
I can see that it could be made to work, and that there's a lot of potential but it's too slow to be practical on my computer and it's going into that anatomically correct territory where I don't want to go.

>> No.946535
File: 294 KB, 1442x836, dynamic_mesh.jpg [View same] [iqdb] [saucenao] [google]
946535

If you want to see some wireframe, just for curiosity: I've had to decimate the mesh in order to reduce the simulation times, so there really is no topology. Which, in a way, is a good thing, because the process isn't sensitive to it and also works with triangles.

>> No.946538
File: 158 KB, 640x480, 0001-0150.webm [View same] [iqdb] [saucenao] [google]
946538

And I've also discovered that the female Makehuman model I'm using has the geometry for the accessory. Check it out. I've also found more bugs in Blender, problems with the Mixamo rig. And it's almost summer again. It's all so tiring...

>> No.946541
File: 239 KB, 1024x2048, 1678965370769311.jpg [View same] [iqdb] [saucenao] [google]
946541

>>939338
>>939339
>>939374
It's not about voxels, or even pixols. Pixols are just pixels with an extra two channels of information (so Zdepth and Material in addition to RGB color data). These extra channels have nothing to do with speed, and if you ever use other programs like Photoshop or Substance Painter then you'll probably understand the concept of being able to paint information into more than just three color channels.

Zbrush is faster because it is not a 3d environment like other 3d software. You don't have a camera with coordinates to move through a 3d environment in order to provide a view of a model (or several models) that has its own set of coordinates. You don't have lights physically placed around that model with their own set of coordinates, with the computer then having to do complex math to determine the different vectors between the angle of the camera, the direction of a surface based on the blending of several vertices, the location of the lights, etc.

Zbrush forgoes a lot of 3d scene data to instead focus on the model, and that model is pretty much limited to just raw vertex data floating above an image document. There's no 3d scene, there's no real camera. The model doesn't even use custom vertex normals / smoothing groups to blend for its shading. Lights aren't in the scene but are instead faked (you light up a sphere instead, and zbrush then matches the direction of a polygon to the corresponding direction on that sphere and gives it that value, which is why lighting is always based on screenspace).
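That sphere-matching trick (commonly called matcap or lit-sphere shading elsewhere; my terminology, not ZBrush's internals) is trivial to sketch: take the view-space normal and use its X/Y directly as texture coordinates into the shaded-sphere image.

```python
def matcap_uv(normal):
    # view-space unit normal (z toward the viewer) mapped to
    # [0,1]^2 coordinates on the lit-sphere image; no scene
    # lights, no camera, just a per-polygon lookup
    nx, ny, nz = normal
    return (nx * 0.5 + 0.5, ny * 0.5 + 0.5)
```

A normal facing the viewer samples the sphere's center; a normal pointing off to the side samples the sphere's rim, which is why the lighting is locked to screenspace.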

>> No.946542

>>946541
You don't get to have multiple UV sets per vertex, or different cluster groups (instead it's only one partition per polygon). You don't get to have multiple objects placed all around a virtual environment (just a single active tool, with a single active subtool). You don't have multiple camera views to provide several viewports with different views simultaneously. Zbrush jettisons just about every thing it can so that it can focus as much of its attention into making pure geometry data (points) its king. The result of this approach was that you could push and pull millions of vertices around with no GPU, even in 2008.

From the Official Documentation:

Unlike other 3D software, ZBrush doesn’t have a 3D space scene in which the camera (the user point of view) can be moved; instead 3D objects are manipulated in front of the camera, within the canvas. Think of the ZBrush canvas as being like a window within your house, looking out onto the 3D world beyond. Objects can be moved around outside the window, but you can’t move the window itself. This is a key element to understanding and being comfortable within ZBrush, and is actually part of the reason why ZBrush is able to work with millions of polygons in real-time. An animation package must track every element of your scene at all times, from all angles, regardless of whether something is visible to the camera at the moment or not. That’s a lot of system resources being reserved for scene management. ZBrush takes all those resources and focuses them on a single object, letting you do things that wouldn’t be possible in any other program.

>> No.946583
File: 241 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
946583

>>946541
>>946542
Wow. I didn't know that. Do you have a newsletter I can subscribe to?

I've finally figured out the issue I had with the automatic weights on the butt, where it's sticking out too much. It's due to the spine segments in the Mixamo rig interfering. I'll have to abandon it, unfortunately.

>> No.946599
File: 485 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
946599

List of things I've used to make this:
https://github.com/makehumancommunity/mpfb2
https://www.mixamo.com/
https://substance3d.adobe.com/plugins/mixamo-in-blender/ (optional)
https://sourceforge.net/projects/snes9l/files/AutomaticWeightRemap.py/download (optional)

Blender 4.0 and in the modifier stack: Decimate, Armature, Cloth simulation, Corrective smooth. In that order.

And that's it. With this I'm going to free myself of this curse and enjoy the summer.

>> No.947773
File: 136 KB, 359x451, Chain Deformation.png [View same] [iqdb] [saucenao] [google]
947773

Saving thread from Page 9.
I haven't made much headway. But I made a jank geometry node set up that kinda-sorta does what I want in a really ugly and impractical way.

Basically, I managed to select a single vertex and offset it by an arbitrary amount. Then the neighboring vertices are offset, spread out evenly between their normal position and the leading vertex's position.

My hurdle atm is figuring out how to select and move the leading vertex where it needs to be. The current arrangement is a shitty mess. Then, hypothetically, the rest should automatically stretch toward the lead in a neat, proportional fashion.
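The falloff part, sketched in plain Python (hypothetical helper; my node setup does something like this along the chain):

```python
def chain_offsets(n, lead_offset):
    # vertex 0 is the lead and takes the full offset; each
    # following vertex takes a linearly decreasing share of it
    return [lead_offset * (n - i) / n for i in range(n)]
```

So with a chain of 4 and a lead offset of 1.0 you get 1.0, 0.75, 0.5, 0.25 — the trailing vertices stretch toward the lead proportionally instead of bunching up.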

>> No.948216
File: 210 KB, 537x348, 1678572594324451.png [View same] [iqdb] [saucenao] [google]
948216

>>944111
Finally for the first time I see someone with some fucking talent on this board, nicely done anon.
I would have given her twice the size on tits just cuz im a sucker for huge milkers, but the model could definitely be suited for Motoko from GITS.

>> No.948748

>>943274
I went on a deepdive on this. It's from the Daz forums. There are videos of him manipulating the rig in real time with no issues. He hinted at using lots and lots of bones.

>> No.948872
File: 115 KB, 640x480, 0001-0160.webm [View same] [iqdb] [saucenao] [google]
948872

>>948748
I can't tell you what I don't know but let me show you this.

>> No.948887
File: 104 KB, 516x618, braces.jpg [View same] [iqdb] [saucenao] [google]
948887

The four bones I've marked in red are parented to the spine and don't move on their own. They're there to stop the automatic weights from propagating.

>> No.948958
File: 221 KB, 365x431, 1687069590122089.png [View same] [iqdb] [saucenao] [google]
948958

>>940594
>To be more specific it's the fact that mesh editing tools (including sculpting) don't work correctly when there's an active pose on a model. You have to modify the mesh while it's in the T-pose, then switch back and forth to any pose you want to correct for every joint.
you would be wrong. your misconception highlights you don't know what's going on under the hood.
it's really simple, let me break it down for you: you set the new shape key weight to 1 (and all others to 0) using a driver based on the armature pose under consideration. that's it, you fucking goon.
shape keys are computed before armature deformation, so what you edit IS the base geometry.
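The driver itself is just a clamped ramp from the bone's pose to the shape key value. A sketch of that mapping (in Blender this would live in a driver expression on the shape key; the 90 degree target pose here is an assumed example, not a rule):

```python
def corrective_weight(angle_deg, target_deg=90.0):
    # 0 at rest pose, 1 at the pose where the corrective shape
    # was sculpted, clamped linear ramp in between
    w = angle_deg / target_deg
    return max(0.0, min(1.0, w))
```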

>> No.949006
File: 21 KB, 765x684, vertex neighbors.png [View same] [iqdb] [saucenao] [google]
949006

>>948216
Thanks, I guess. But I'm just a lowly Blendlet who hasn't even learned how to texture yet, and have been stuck on the same problem for over a month now. And I don't like getting compliments at the expense of others. You don't have to put others down to lift me up.

Anyway, can someone help me with this problem? How do I select neighboring vertices with geometry nodes? The idea seems simple in my mind, but I don't know how to tell Blender what I want.
Hypothetically, you should be able to know which vertices neighbor any given vertex. Like, "oh hey, that's vertex number 1456, he's neighbors with 1674, 1394, and 1207. Let's just select those guys real quick, and check what their float values are." Hypothetically, you should be able to check *all* neighbors for *all* vertices, simultaneously. Like some huge census. You have a list of all the residents, and then underneath, a list of everyone they're neighbors with.

>> No.949018

>>949006
ctrl numpad+ grows selections
ctrl numpad- dissolves selections
check the manual

>> No.949020

>>949018
>with geometry nodes
read, dumbass

>> No.949031
File: 13 KB, 620x514, vertex neighbors 2.png [View same] [iqdb] [saucenao] [google]
949031

>>949006
Another visual aid to hopefully make the idea more clear.
Using a simple 4 face mesh. I numbered all the vertices. And then on the right, it displays every vertex, and every vertex they neighbor.

You can kind of get these results using the Edge Vertices node and Edges of Vertex node. However, I'm not very certain how to use either. It's all very complicated. I wish there was just one simple node that did this automatically. Just grab all of the neighbors in a big list, and then allow the user to access the attributes. Maybe there's a simple node arrangement that does this already, and I just can't think of it.
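The census itself is a trivial walk over the edge list. In plain Python (this is what I wish a single node did):

```python
def vertex_neighbors(edges, n_verts):
    # edges: list of (v1, v2) index pairs, like the Edge Vertices
    # node exposes; returns, for each vertex, the set of vertices
    # it shares an edge with
    neighbors = [set() for _ in range(n_verts)]
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    return neighbors
```

For the 4-vertex quad in the pic, vertex 0 comes back as neighbors with 1 and 3, and so on around the loop.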

>> No.949032
File: 41 KB, 640x480, 0001-0120.webm [View same] [iqdb] [saucenao] [google]
949032

>>949031
Before I forget, I wanted to show you a Cloth sim with a collider under the skin.

>> No.949128
File: 113 KB, 1920x1080, Elbow.jpg [View same] [iqdb] [saucenao] [google]
949128

>>949032
That's pretty neat. Nice proof of concept. Cloth physics are a little floaty, but otherwise, it's pretty good.

I suspect that if you arranged your bones like pic related, then it would probably look more like an actual elbow. Though, it would require a lot more musculature to make it really look good.

>> No.949134
File: 41 KB, 640x480, 0001-0120.webm [View same] [iqdb] [saucenao] [google]
949134

Probably, but you can't expect too much. It's always going to look somewhat cartoonish. Not necessarily in a bad way, but at the moment there's a limited number of parameters you can set for the simulation, and those apply globally.

The most annoying things are that there is no middle ground between Dynamic mesh on and off. It needs to be on because otherwise too much detail gets lost, but at the same time it also stops large displacements.
And once you set a guard distance for self-collisions, any detail in the mesh that's smaller than that becomes unstable, which is why it works better if the mesh has been decimated in advance.

There's a lot of work to do and it's all hardcore C development. I can't tell if/when/who is going to do that. Sometimes you have to take what you can get...

>> No.949465
File: 94 KB, 640x480, 0001-0120.webm [View same] [iqdb] [saucenao] [google]
949465

I've been reading the source code for the Cloth simulator and I've been working on sewing needle mechanics.

>> No.949467

>>949465
ew. That's far off from what this thread is about. You take your weird stitching experiments to another thread.

>> No.949468

>>949467
Yeah sorry. I need to make a new thread but that's what I had on hand after a night of coding. I'm trying to make a simplified version of the Cloth simulator that's more approximate but hopefully faster and easier to use so people can fulfill all their sewing needs.

>> No.949470

>>949468
Hey, you know what? When it's fully developed, and actually resembles skin, come back to me with your skin sim. I'd be interested in seeing it. Minus the tearing part.

>> No.949478
File: 97 KB, 640x480, cloth_sim.jpg [View same] [iqdb] [saucenao] [google]
949478

I wanted to share a little bit of what I've learned in the last few days of reading the Cloth sim source code, just in case it may be helpful to you in your Geometry Nodes project.

I have previous experience with rigid body physics/collision systems and they usually revolve around the idea of points vs edges and faces interactions. But the Cloth sim doesn't work that way.

The collision balls work as you would expect: they have a position and a velocity and they repel each other. The springs transfer forces between the balls.

One important detail is that structure springs resist elongation but not compression. Torsion springs resist both ways. This system is iterated many times per frame with various friction factors applied.

So, if you ever wanted to know, this is a very coarse overview of what the Cloth sim does behind the scenes.
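A toy version of that ball-and-spring scheme, just to make the asymmetry concrete: two collision balls joined by a structural spring that resists elongation but not compression. All numbers here are illustrative, not Blender's actual constants.

```python
# Structural spring between two "collision balls": pulls them back
# together when stretched past rest length, does nothing when compressed.
def spring_force(p_a, p_b, rest_len, stiffness):
    d = [b - a for a, b in zip(p_a, p_b)]
    length = sum(c * c for c in d) ** 0.5
    if length <= rest_len:          # structural springs ignore compression
        return [0.0, 0.0, 0.0]
    scale = stiffness * (length - rest_len) / length
    return [c * scale for c in d]   # force on ball a, pointing toward b

# stretched pair: restoring force appears
f = spring_force([0, 0, 0], [2, 0, 0], rest_len=1.0, stiffness=10.0)
# compressed pair: no force at all
g = spring_force([0, 0, 0], [0.5, 0, 0], rest_len=1.0, stiffness=10.0)
```

Torsion springs would add the symmetric case (force in both directions); iterating this many times per frame with friction applied is the coarse loop described above.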

>> No.949479

The problem with this approach is that the radius of the collision balls is an arbitrary parameter and they usually don't cover the whole surface, so if you choose a radius too small, there's a probability that a pair of balls may slip past each other. If you choose a radius too big, then neighboring balls will start to repel each other and the whole structure will crumple and become unstable.

Since you're trying to cover a planar surface using spheres, there actually is no optimal radius that you can pick and that's annoying because it requires trial and error.

As I've mentioned before, since in the current implementation the balls all have the same radius and are placed at the vertices of the mesh, a mesh with varying detail level may not work correctly.

>> No.949595

>>949478
>>949479
I vaguely recall reading about that sometime last year. It's explained a bit in the cloth section of the manual. I can't say for now if that will be helpful. But I appreciate the insight regardless.

I don't think I mentioned this earlier. But I decided that I need simulation to make it all work. So I'm trying to get simulation nodes going. Except it's just as confusing as everything else. I can't fucking understand how to make the simulation nodes update along with the armature. The video I'm following is this one https://www.youtube.com/watch?v=rg1pZ3e0LwQ
And it starts out pretty ok. Like, the initial node set up is updating frames correctly.(so it seems) But then he goes into point distribution nonsense, which doesn't serve my purposes.(I could be wrong. Maybe there's some magic in there I'm not seeing)

I set up the initial frame advancing loop, then try to veer off and do my own thing, setting up a set position node. But the set position node doesn't loop like it should. It's static.
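For what it's worth, what a simulation zone does conceptually is just this: the state from the previous frame is fed back in as the input for the next frame, instead of everything being recomputed from the rest pose. A toy Python sketch (the drift step is made up, not an actual node):

```python
# Conceptual simulation loop: positions are carried frame to frame,
# which is what makes a Set Position inside the zone accumulate.
def simulate(initial_points, step, frames):
    points = [list(p) for p in initial_points]
    for _ in range(frames):
        points = step(points)   # state from frame N feeds frame N+1
    return points

# toy step: drift every point 0.1 units down in z each frame
drift = lambda pts: [[x, y, z - 0.1] for x, y, z in pts]
after_10 = simulate([[0.0, 0.0, 1.0]], drift, 10)  # z ends near 0
```

If a Set Position node looks "static", the usual symptom is that it sits outside the feedback path, so every frame it sees the original geometry rather than last frame's result.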

>> No.949622
File: 145 KB, 640x480, 0001-0250.webm [View same] [iqdb] [saucenao] [google]
949622

>>949595
>get simulation nodes going
I'm glad that they've added those because the ability to iterate is crucial to actually get physics systems to do anything interesting.

>the set position node doesn't loop like it should
I have no idea how the physics nodes work yet. I'm working on the C side of things right now because that's what I enjoy doing the most, but I'll get back to Geometry Nodes one day.

>> No.949683
File: 281 KB, 1442x836, musculature.jpg [View same] [iqdb] [saucenao] [google]
949683

I don't know if this may be of interest to you.
>>949670
I've noticed that there was an option for Musculature. I don't know if that option is still available in later versions. I'm pretty sure it's not in MakeHuman.

>> No.949685

>>949683
I've just checked and the musculature option seems to be still there in:
https://mb-lab-community.github.io/MB-Lab.github.io/
It does not handle self-intersections and squishing but you should be able to install that version.

>> No.950980

understand how you want the landmarks to move and apply helper joints with constraints to help leverage a sliding action along the surface. for the shoulders this could be the delt flexing and also translating ever so slightly, or the edges of the scapula moving around.

>> No.951073

>>939151
Learn anatomy, look at muscle fibers, use that as topological reference on how a fucking living thing deforms. That is half the battle, now do proper fucking rigging, try to learn muscle anatomy again for a musculature rig, learn about blend shapes too and a way to activate them automatically based on bone to bone relative position. And finally stop being a cock-sucking faggot OP.

>> No.951090

>>950980
>>951073
Can you post examples of your advice in action?

>> No.951208

>>951073
>>950980
Thank you guys for the advice. However, I'm not convinced that traditional rigging can get the results I want. I am now seeking new methods.
I haven't been totally open about my thoughts. But I'm tackling the issue from 3 different ways.

1. I'm trying to use the tension node I created earlier, in order to sort of "overwrite" the weights of bones. The node can detect squish and stretch areas. From there, I can set it up so that those particular zones "subtract" from a bone's weight. That's how I think about it in my head. But in geometry node terms, it's just mixing between the regular animated position and the resting position, using the difference in edge size as a factor.

This idea hasn't gone very far, because I keep running into the same roadblock: all of my math is predicated on the way the bone weights move the bones first. And the bone weights themselves are bad, so building off that, my math will always be bad. Unless I do one of two things...

1a. I can somehow find a factor that ignores previous bone weight, and use that as a factor. Say if I used the rotational degrees of points. The rotation of points isn't as strictly tied to the bone weights as edge size is. Like, if you move a bone, an edge might go from 1, to .5, then back to 1 again. But that same edge counted by degrees, moved 180 degrees in space. idk, if that's better. I haven't set up an experiment yet.

1b. Using simulation nodes, I can store the points of the previous frame to use on the next frame. This should solve the bone weight restriction problem entirely, but I just can't wrap my head around it.
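The tension idea in point 1 boils down to comparing each edge's deformed length against its rest length. A minimal Python sketch (the coordinates are made-up examples, not real mesh data):

```python
# Tension factor per edge: >1 means stretch, <1 means squash.
# The factor can then drive how much a vertex blends back toward rest.
def edge_tension(rest_a, rest_b, posed_a, posed_b):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    return dist(posed_a, posed_b) / dist(rest_a, rest_b)

t_squash  = edge_tension([0, 0, 0], [1, 0, 0], [0, 0, 0], [0.5, 0, 0])
t_stretch = edge_tension([0, 0, 0], [1, 0, 0], [0, 0, 0], [2.0, 0, 0])
# mix factor that pulls squashed zones back toward the rest position:
mix = max(0.0, 1.0 - t_squash)
```

This also makes the roadblock visible: the posed positions already went through the bone weights, so the factor inherits whatever the weights did, exactly as point 1 complains.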

>> No.951211
File: 261 KB, 2560x1351, Edge to face detection.jpg [View same] [iqdb] [saucenao] [google]
951211

>>950980
>>951073
2. I'm trying to think of a way to create self collision in geometry nodes. It's probably way too advanced for me. But I asked Blender Stack Exchange how to make edges detect faces. And someone came up with a pretty good idea. Pic related.
So we have a way for edges to know when they're intersecting faces. We only have to figure out how to tell the edges to shrink back to the intersection point. Or rather, have the point of the edge that is on the "wrong" side snap back to the intersection point. Telling Blender what is the wrong or right side is another trick. I would think that simulation nodes could figure that out. Because we could look at the position of the previous frame (pre intersection), the position of the current frame (post intersection), and determine the trajectory of the points. Then sort of subtract the difference between the current frame's points and the intersection points. Effectively pushing the points back where they came from, but only just enough to be touching the object. Effectively making them collide.

Do that per frame, and it should create a squishing effect. I know my math is vague and incorrect. But I think the bigger idea is near the truth. I just have to figure out the particulars.
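The push-back step can be made less vague. Treating the face locally as a plane (point plus normal), a vertex that crossed sides between frames gets snapped back to where its path crossed. A hypothetical sketch, not the actual node graph:

```python
# Snap a vertex back to where its prev->curr path crosses a plane.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def snap_to_crossing(prev, curr, plane_pt, normal):
    d_prev = dot([p - o for p, o in zip(prev, plane_pt)], normal)
    d_curr = dot([c - o for c, o in zip(curr, plane_pt)], normal)
    if d_prev * d_curr >= 0:        # no side change: no intersection
        return curr
    t = d_prev / (d_prev - d_curr)  # fraction of the path at the crossing
    return [p + t * (c - p) for p, c in zip(prev, curr)]

# vertex dives from z=1 to z=-1 through the plane z=0: snaps to z=0
hit = snap_to_crossing([0, 0, 1], [0, 0, -1], [0, 0, 0], [0, 0, 1])
```

The sign flip of the distances is exactly the "which side am I on" trick; the previous frame supplies the reference side, which is why simulation state is needed.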

>> No.951212
File: 41 KB, 545x495, cloth_sim_force_field.jpg [View same] [iqdb] [saucenao] [google]
951212

If you remember >>949032 I've made another experiment using Force Fields instead of mesh colliders, to compensate for loss of volume at extreme bends.

>> No.951214
File: 183 KB, 999x421, voronoi 3d cube.png [View same] [iqdb] [saucenao] [google]
951214

>>950980
>>951073
3. I want to create a volume system, where you can place points, and it auto generates cells underneath the mesh. And then you can affect the way points on the surface of the mesh deform, based on the pliability of the cells. So if a cell is set to stiff, then points on the surface can't bend inward very well. But if it's set to soft, then points can bend in with ease. And of course it would be a slider between 100% stiff with no give, or 100% soft with no resistance whatsoever. Do not confuse this with bones and weight painting. Rather, this is something that works together with bones.

The most logical way to create these cells would be with some voronoi math. You see those points inside of the cube? I believe the user should be able to place those points manually, to whatever their preference is. And those cell points should be parented to bones. The bones are weighted to the mesh like normal. But when you move the bone, and the mesh attempts to deform, it first checks its cell. And if the cell point says it can move into the cell, then it will do that. But if the cell point says "hold on, you can't come inside", then the point won't move inside. Rather, it will be pushed back to the surface nearest the location it wishes to deform to. But if the cell says "you can come in 30%", then the mesh will only be pushed back 70% of the way to the surface. If the cell says "you can come in 90%", then the mesh will only be pushed back 10% of the way to the surface.

I hope that makes sense. It's essentially a dumbed down muscle system. But also a pretty advanced one. Because you're able to control the softness level, you can create clusters of soft fatty tissue in soft areas, and clusters of hard tissue in hard areas. So you might use 50% soft points going down an arm. But then put a 95% stiff point at the elbow to ensure that area remains stiff.
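The cell lookup part of this is simple enough to sketch: each surface vertex finds its nearest cell point, and that cell's softness (0 = rigid, 1 = fully soft) scales how much of the requested displacement is allowed. The points and softness values below are invented for illustration:

```python
# Nearest-cell softness limiting a vertex's requested displacement.
def dist(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def limited_position(rest, wanted, cells):
    # cells: list of (point, softness); nearest cell point wins
    point, softness = min(cells, key=lambda c: dist(rest, c[0]))
    # allow only `softness` of the requested move from rest
    return [r + softness * (w - r) for r, w in zip(rest, wanted)]

cells = [([0, 0, 0], 0.9),    # soft fatty cluster
         ([5, 0, 0], 0.05)]   # stiff elbow cluster
soft_v = limited_position([1, 0, 0], [1, 0, -1], cells)  # moves 90%
hard_v = limited_position([4, 0, 0], [4, 0, -1], cells)  # barely moves
```

The hard part left out here is the actual voronoi cell boundary; this only captures the "check with your cell before moving" rule.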

>> No.951216

>>951214
This idea is the least developed of all. I don't know how I'm going to pull it off. Blender has a voronoi texture node, which can be used to create 3D voronoi patterns. Essentially "cells". However, it only allows you to make random patterns. Again, nodes are really good at generating lots of little random points and particles. But you have to do a shit ton of math to organize those points and particles in any functional way.

My current idea, is to place empties, parent the empties to bones, and then somehow trick the voronoi node to snap to the empties.

>> No.951218

>>951216
Take a look at:
https://docs.blender.org/manual/en/latest/modeling/modifiers/deform/mesh_deform.html
There's a pixar paper linked at the bottom with some math. It can work as an alternative to Armature deformations.

>> No.951254
File: 306 KB, 640x480, 0001-0121.webm [View same] [iqdb] [saucenao] [google]
951254

I remember trying the Soft Body sim years ago but I wasn't able to make it work, but now somehow it does and it runs almost realtime.
I've stuffed the inside of the model with a few force fields but I haven't done any other manual correction.

>> No.951265

>>951254
>>951212
My brother in deformation. Can you please post in another thread? The WIP thread perhaps. Instead of burning posts in my thread.
I believe I've made it clear that I'm not interested in cloth sim. And by extension, soft body. Not unless you show me something truly outstanding. The thing that puts me off these methods is that I can see the affected areas wiggling. So while collision is possible, it comes at the cost of this weird floaty jiggle effect. And on top of that, I still see clipping between the thigh and hips. I feel bad for even asking, because I think we're both trying to get good deformations. I just think we're taking different paths at this point.

>> No.952170
File: 418 KB, 1920x1080, 2023-07-19 13-07-08 (1).webm [View same] [iqdb] [saucenao] [google]
952170

how's this? automatic weights, no shape keys

>> No.952192
File: 307 KB, 957x801, shoulders.png [View same] [iqdb] [saucenao] [google]
952192

>>952170
Room for improvement.

>> No.952205

>>952170
Also, check out this nice reference. https://youtu.be/KQwVUFdVek4?t=403

It shows that when you raise your arms, even the shoulder joint moves. Not merely rotates, but moves position. It sort of rotates up and in. Thus, the whole shoulder comes in a little closer to the head, and pushes upward more.

It's hard to say without your bones being visible. But it appears you don't have that movement.

>> No.952221

>>952205
You're right, it's only rotation, thanks!

>> No.952670

>>939161
Shape keys are a shit ton of extra work. Sculpting a new face means you have to redo all the corrective shape keys by hand. Much better to have a sophisticated rig that's entirely bone-driven.

>> No.952682

>>952670
What? Just cut off the head and stitch the new head on top - new face, old shape keys. Even without that, I molded a 5-foot girl into a 7-foot bodybuilder and still didn't lose any of my shape keys.

NEVER insult shape keys around here again.

>> No.952686

>>952682
You fool. Hand drawn shape keys for poses are so 2002 or something. Animating vertex by vertex, frame by frame by hand is where it's at.

>> No.952925
File: 3.16 MB, 720x720, Pinch sample nearest surface.webm [View same] [iqdb] [saucenao] [google]
952925

Update. I managed to mock up a pinch. It's not totally accurate yet. But it's a step closer toward a solution. Look closely at pic related. The mesh flatly meets in the middle, as if the two surfaces are pressing against each other. It's an illusion though. I actually created a flat plane, and then made it rotate in the middle, where the "elbow" joint is. The plane always corrects its angle to be the midway point between the top and bottom. The mesh of the pill is then projected onto the surface of the plane, by using the "Sample Nearest Surface" node. This causes the whole pill to squish into the plane. However, using the tension node from earlier, I'm able to select only the inner crook for squishing, leaving the rest of the pill to remain normal. And then I figure the distance between the pill's vector and the plane to control how much each point should be affected.

>> No.952927
File: 16 KB, 585x608, Pill projection.png [View same] [iqdb] [saucenao] [google]
952927

>>952925
Hopefully a helpful illustration.
I created 3 empties. These are kind of my "voronoi" points. That's how I think of them, but in truth, I'm not really making a voronoi pattern. However, despite that, I can still figure out the halfway point between two points, and create a plane there. In theory, if you added a dozen more points, and created a plane between each nearest pair, it would create a voronoi kind of pattern.

But anyway, the idea is that this halfway point represents the boundary between two masses. Everything "below" the plane shouldn't be allowed to go above it. And anything "above" the plane shouldn't be allowed to go underneath it. The plane represents the limit of a vertex's potential.

So when the pill is bent, the two empties move. And when the two empties move, the "mass" moves. When the mass moves, the maximum potential shifts between the two masses. As the pill is bent, vertices come to reach their max potential, and then they can no longer pass. This creates the illusion that the mesh is squishing. Because it kind of is. Just not into itself. But rather, into an imaginary plane.
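The halfway boundary can be written down directly: the plane sits at the midpoint of the two empties, its normal runs along the line between them, and a vertex belonging to the "lower" mass is clamped so it never crosses. A sketch with invented coordinates:

```python
# Clamp a vertex on empty_a's side so it can't cross the mid-plane.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def clamp_below_midplane(vertex, empty_a, empty_b):
    mid = [(a + b) / 2 for a, b in zip(empty_a, empty_b)]
    n = [b - a for a, b in zip(empty_a, empty_b)]   # normal, a -> b
    d = dot([v - m for v, m in zip(vertex, mid)], n)
    if d <= 0:                    # still on empty_a's side: keep it
        return vertex
    nn = dot(n, n)
    return [v - (d / nn) * c for v, c in zip(vertex, n)]  # project back

# empties at z=0 and z=2, so the boundary is the plane z=1;
# a vertex pushed to z=1.5 gets snapped back down to z=1
v = clamp_below_midplane([0, 0, 1.5], [0, 0, 0], [0, 0, 2])
```

As the empties move with the bones, `mid` and `n` move with them, which is exactly why the "maximum potential" shifts as the pill bends.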

>> No.952930

>>952925
>>952927
Have you ever felt that you have a lot to say and nothing to say at the same time?
I don't feel like getting into math but I'm going to tell you a couple of things I've realized while doing my own experiments.

One is that if the squishing doesn't also cause some volumetric effect, it doesn't look good. In fact, it looks worse than self-intersection.
Another thing is: suppose it's a leg. When the calf presses against the thigh, the calf is stiffer than the thigh. This also happens between the forearm and the arm.
Also limbs tend to be somewhat tapered and that's essentially why you can't just use a plane at half-angle.

So, my advice is to maybe focus on the general volume deformation and skin stretching instead of trying to avoid self-intersection at all costs.

>> No.952936

>>952925
>>952927
Two thoughts came to mind:
Sampling a generic surface sounds really expensive. A plane is essentially just a point and a normal vector, so you can save a lot of computation by just doing the projection math directly.

Bulging can probably be faked by nudging the vertices along tangent vectors in the plane that are orthogonal to the bone the vertex is bound to.
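The direct projection the first point describes is one subtraction: with the plane stored as a point `o` and unit normal `n`, you remove the point's signed distance along the normal. A minimal sketch:

```python
# Project p onto the plane through o with unit normal n:
# subtract the signed normal distance. No surface sampling needed.
def project_to_plane(p, o, n):
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, o, n))
    return [pi - d * ni for pi, ni in zip(p, n)]

p_flat = project_to_plane([3, 4, 5], [0, 0, 0], [0, 0, 1])
# -> [3, 4, 0]: the point dropped straight onto the z=0 plane
```

Per vertex that's one dot product and one subtraction, versus a nearest-surface search over real geometry.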

>> No.952938
File: 562 KB, 950x708, squish node on and off cycles.png [View same] [iqdb] [saucenao] [google]
952938

>>952930
>Have you ever felt that you have a lot to say and nothing to say at the same time?
Ayup. But I say it anyway, because an experienced person can glean the meaning from too many words, but an inexperienced person cannot glean the meaning from too few words. I personally find it very frustrating the way smart people talk, because they assume you know everything they know, and thus use only short, curt explanations that don't convey their ideas very well.

>One is that if the squishing doesn't also cause some volumetric effect, it doesn't look good. In fact, it looks worse than self-intersection.
I thank you for the warning, but this is something I've already visualized in my mind. I'm aware of this potential problem. Personally, I don't think it will look worse than self intersection. I think that even flat squishing will look better, because when it's rendered, the crease will be more convincingly shaded than an intersection. Pic related. On the left, my squish node is on. On the right, the squish node is off. Rendered in Cycles. Notice how when they don't intersect, you get that nice deep black crevice. But when they do intersect, Cycles is unable to make such a shadow.

But no, I don't plan on settling for flat squishing. My ability to handle geometry nodes is so far behind what I'm attempting, that it took all of my ability to come up with that jank mock up. That is merely a stepping stone toward improving my node ability and math comprehension. I have ideas on how to create a volumetric effect. It's just going to come together slowly.

>> No.952939

>>952930
>suppose it's a leg. When the calf presses against the thigh, the calf is stiffer than the thigh. This also happens between the forearm and the arm.
That's the kind of thing my voronoi point distribution idea will handle. And that's why I based this around a few empties. Think of the empties as sort of a center of mass. I chose the plane to be the halfway point just to make it simple to mock up. But the idea is that if one empty is the center of softer tissue, and the other empty is the center of firmer tissue, then the plane would be adjusted so that it shifts closer to the softer empty when the armature moves, making the points controlled by the soft empty squish more quickly, and the points controlled by the harder empty squish less quickly.

In theory, I would make as many center mass points as I need. And be able to adjust their hardness and softness. The plane is only the best way I could think to mock this up. But I'm certainly hoping for better ways.
In fact, the ultimate goal would be to soften the transition from plane to plane, so it all squishes more organically. And also link center points together, so the distance between them is "shared". That way, they form sort of muscle groups and tendons, and fatty pockets. That's getting way ahead of myself. But just letting you know, I'm thinking big.

>Sampling a generic surface sounds really expensive, a plane is essentially just a point and a normal vector so you can save a lot of computation by just doing the projection math directly.
Yep, that was actually my first idea. I was trying all kinds of things with the "project" vector math node. But I just couldn't figure it out, so I settled on sampling the surface, just to mock up what it might look like if I could get projection to work.

>> No.952940

>>952936
>Bulging can probably be faked by nudging the vertices along tangent vectors in the plane that are orthogonal to the bone the vertex is bound to.
Yeah, something like that. I was thinking that there might be a way to calculate how much volume was lost from the squish at the plane, and divide that amount and distribute the fractions out to the points that haven't been squished by the plane yet. And also make it so that points *near* the plane get most of the volume distribution, while points far from the plane get less of it. That would isolate the bulge effect mainly around the squish, where you want it. While also moving everything slightly, so it looks natural.
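The redistribution part can be sketched as inverse-distance weights normalized to sum to 1, so no volume is invented or lost. The lost-volume number and distances below are made up for illustration:

```python
# Hand the squished-away volume back to the un-squished vertices,
# weighted so vertices nearest the plane receive the most.
def redistribute(lost_volume, distances_to_plane):
    weights = [1.0 / max(d, 1e-6) for d in distances_to_plane]
    total = sum(weights)
    return [lost_volume * w / total for w in weights]

shares = redistribute(1.0, [0.5, 1.0, 2.0])
# nearest vertex gets the biggest share; shares sum to the lost volume
```

The shares would then drive how far each vertex is pushed out along its normal, which is the bulge.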

>> No.952969
File: 44 KB, 581x503, ao.jpg [View same] [iqdb] [saucenao] [google]
952969

>>952938
If all you want is to shade the self-intersection profile a certain way, then there are all sorts of shader tricks you can do. Ambient Occlusion doesn't care if there's self-intersecting geometry or not.

>> No.953039
File: 806 KB, 720x720, Pinch taper.webm [View same] [iqdb] [saucenao] [google]
953039

>>952969
No thanks. I want the geometry to stop intersecting.
For all the aforementioned advantages. I'm so sick of reading half-baked solutions.

Check this out, I altered one thing in my node group, and the squish improved. I'm not really sure what I did. The top curve got squished inward, so that's no good. But otherwise, it's squishing. The squish does eventually intersect when the bone bends over far enough. And the bulge is unintentional. But again, this is not accurate yet. It's just a mock up.

I made the shape tapered to test that it works regardless of the shape. So this is how it might look at the crook of one's arm.

>> No.953107

>>939151
Hard way = bones
Easy way = blendshape / shapekey drivers

>> No.953133

>>939430
I don't understand why so many people are mocking your work, this rig is really, really well made. I honestly think most people piling on your work haven't touched rigging; anyone who has a modicum of experience with rigging can immediately recognize this is stellar work

>> No.953436

You guys need to learn that clavicles are supposed to move when you raise your arms.

>> No.953603

>>953133
Thanks, I don't think it's THAT great myself. But I'm happy with it. It's exactly what I expected to get based on the time I invested. I could go further but in our production I have multiple other tasks in character animation and modelling so I have to settle for the best "time vs quality" result, and so far I think this is it. Rigs are locked and I'm no longer developing them; now we just go with that and move on with the production.

>> No.953632

>>939430
I know where this is from lol I can't believe you still frequent this shithole, you're too good for this place

>> No.953666

>>953436
Some games forget this and I suffer every time

>> No.954117

>>939151
Take a look at RBF drivers

>> No.954390
File: 1.87 MB, 1920x1080, Slim Shoulders Back.webm [View same] [iqdb] [saucenao] [google]
954390

Set geometry nodes aside today, and went back to bones. Got some interesting results.

>> No.954391

>>954390
Really nice.

Just for curiosity I did some research about 2nd order drivers, that is, drivers based on velocity or acceleration rather than position or angle.
I may be wrong but there doesn't seem to be a simple way to implement them in Blender.
Or you can think of them as "delayed action drivers". Because a muscle bulges not due to the angle of the joint but due to the angular acceleration. That's the idea anyway.

Just an idea because since I have no knowledge of anatomy myself, I'd rather let the simulator do the work for me.

>> No.954393
File: 1.69 MB, 720x720, Slim Shoulder Front Side.webm [View same] [iqdb] [saucenao] [google]
954393

>>954390
Posting bones. It's a little crazy. The control bones are simple however. 1 bone for the upper arm/bicep region, 1 bone for the forearm region. A bone at the wrist to control with IK, and a bone at the elbow for the pole target. That's all there is in controlling the rig. The rest is a bunch of gobbledygook to make it deform right. The controlling bones have their deform disabled.

I then created the motion for the scapula. I just put a bone sort of in the middle of the chest area, disabled deform, and then applied a bunch of deform constraints. When the control bones rotate up, the scapula control rotates up too. When the control bones rotate down, the scapula control rotates down too. Etc, etc. Changing location up and down, forward and back. All that. It's important that the movements the scapula control makes are fairly realistic to an actual scapula. Because you're going to parent the bicep control to the scapula control. (I prefer using the armature constraint for all parenting.) And this will start to make the whole arm and shoulder move together in a realistic way.

Now that all the controls are moving nicely, then you create all the deform bones. And that shit is way too complicated to explain. But I'll say this: You got scapula movement. So start there. Parent the applicable muscles to the scapula. And then try to sort of "nest" muscles underneath each other. Remembering that the bone closest to the mesh will dominate the weight when you parent with automatic weights. So you can create sort of a hierarchy by making bones overlap in a precise manner. Some bones are just there as helpers. Used as measurements for other bones. Or targets for constraints.

This is still far from perfect. But it's got me kind of excited. I wonder if I can make it even smoother and more functional.

>> No.954394
File: 34 KB, 487x397, Weight Toggle.png [View same] [iqdb] [saucenao] [google]
954394

>>954391
That's an interesting idea. I haven't learned anything about drivers yet. They're intimidating. I copied some shoulder drivers from a tutorial before. But I didn't really understand how it all functioned. Which made modifying the drivers nearly impossible. I'll probably have to read up on it eventually. I've been thinking that there may be a way to create a parenting switch using drivers on the armature constraint. Like say you moved the arm up, the drivers would turn the weight down on one parent, and turn the weight up on another parent. Effectively shifting the bone's position.

Don't ask me what it would be useful for, because I forget. But in the moment, it seemed like a solution to something I was doing.

>> No.954395
File: 3.47 MB, 720x720, Dongle Points.webm [View same] [iqdb] [saucenao] [google]
954395

One other thing. This is how far I got with geometry nodes.
I placed 5 empties. Put them into a collection. Dragged the collection to geometry nodes. Converted the collection of empties into points. Then made the mesh sample the nearest point in the collection. I don't know what to do with it yet, but I used a set position node to make the mesh go to its sampled point. So you can visualize how each point affects the nearby mesh. Acting as sort of a center of mass. I want to blend the effect. But I don't know how yet.
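One way to blend the effect: instead of snapping each vertex to its single nearest empty, weight ALL the empties by inverse distance so influence falls off smoothly across the mesh. A hypothetical sketch (the `falloff` exponent and positions are invented knobs, not real node inputs):

```python
# Smooth inverse-distance blend of several "center of mass" empties.
def blended_target(vertex, empties, falloff=2.0):
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    weights = [1.0 / max(dist(vertex, e), 1e-6) ** falloff for e in empties]
    total = sum(weights)
    return [sum(w * e[i] for w, e in zip(weights, empties)) / total
            for i in range(3)]

empties = [[0, 0, 0], [4, 0, 0]]
mid = blended_target([2, 0, 0], empties)   # equidistant: lands midway
near = blended_target([1, 0, 0], empties)  # pulled toward the near empty
```

Raising `falloff` sharpens the blend back toward the hard nearest-point look; lowering it smears the influence wider.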

>> No.954396

>>954394
>Effectively shifting the bone's position
For me it's really hard to visualize the effect of shifting bone weights.

I'm going to try this:
https://github.com/JacquesLucke/animation_nodes
https://www.youtube.com/@ZachHixsonTutorials
I'm looking for a "slow parent" effect of some sort. I'll let you know if it's any good.

>> No.954432
File: 147 KB, 1642x583, armature constraint weight example.png [View same] [iqdb] [saucenao] [google]
954432

>>954396
>For me it's really hard to visualize the effect of shifting bone weights.
Just open up Blender and create 3 bones. Select one of the bones (doesn't matter which), and add the armature constraint. Then using the constraint, parent the bone to the other two bones.

Now, play with the weights of the parents, and you'll notice nothing moves, because it's still in the resting position. So take the two parents and move them around. (Doesn't matter how you move them.) Then, go back to the weights of the parents and play with them some more. Now you'll see that the bone moves back and forth between the two parents. It doesn't matter how you rotate or move them, the child bone always knows its path between the two parents. As long as the total weights equal 1.

Easy test. Takes a minute.

I don't know if that helps with your muscle twitching idea. I'm just stating what I think drivers might be useful for. By changing the parent weights dynamically, you could probably do some funky shit.

If I'm being totally honest, I think looking for angular acceleration drivers is not all that important yet. It sounds like a cool addition *after* basic collision and volume are figured out.
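The behavior that three-bone test demonstrates is just a linear blend: with the parent weights summing to 1, the child sits at the weighted mix of where each parent would place it. A toy sketch with made-up positions:

```python
# Child position under an armature-style constraint with two parents:
# a linear blend of the two parent positions, weights summing to 1.
def blend_parents(p1, p2, w1, w2):
    assert abs(w1 + w2 - 1.0) < 1e-9   # total weight must equal 1
    return [w1 * a + w2 * b for a, b in zip(p1, p2)]

# parents moved apart; sliding the weights walks the child between them
child_mid  = blend_parents([0, 0, 0], [2, 0, 0], 0.5, 0.5)
child_near = blend_parents([0, 0, 0], [2, 0, 0], 0.9, 0.1)
```

A driver animating `w1`/`w2` against the arm angle is the parenting-switch idea: the child slides along the path between the parents as the weights shift.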

>> No.954444

>>954432
Sure, whatever, but can you help me with the following dilemma?
Imagine you're holding some weight in your hand in mid air.
Now suppose you take a picture at the exact moment you start lifting the weight.
Now suppose you take another picture at the exact moment you start lowering the weight.
Now you have two pictures of the arm in the same pose but with different shapes.
If you're trying to find a static solution to the shape of the arm, how do you reconcile that?
Do you take the average of the two cases? I'm asking because an average is a statistical entity that's true in a sense but not necessarily real. Like when you say that on average a family has two and a half children.

>> No.954542
File: 8 KB, 499x545, average between poses.png [View same] [iqdb] [saucenao] [google]
954542

>>954444
I think I get the picture taking part. But I don't believe I understand why you have to choose between two different poses. Wouldn't the pose the arm is in be determined by which way you, the artist, wish to see the arm move? If you want the arm to move down, then use the arm down shape. If you want the arm to move up, then use the arm up shape. In real life, pictures are taken in the moment of motion. Cameras don't capture an average of various possible motions the person *might* have made.

So to answer your question about finding a static solution: I say use the shape that makes sense in context.
However, if for some reason, a mad mathematician held a gun to your head, and demanded that you somehow average two shapes into one, then... I don't know. It's a good question.
Pic related is how I'm imagining the problem. The orange dot could be in the red position or the yellow position, depending on whether the arm begins going up or down. However, averaging the red and yellow brings us to the gray dot, which isn't where you want it to be; the gray dot is in an awkward location.

Maybe you average starting from the resting position? Vector math is pretty slippery for me, but you can probably do some trickery like first getting the average between up and rest, then the average between down and rest, and then averaging the averages. That would result in points never straying too far from the rest position, but they would still end up in strange places. So I don't know. I don't know if I can answer your question without knowing what counts as reconciliation. What kind of shape do you want as a result?
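For what it's worth, the two averaging schemes give different answers, which is easy to check numerically. A 2D sketch with made-up coordinates, just to show the bias toward rest:

```python
# The two averaging schemes discussed above, worked numerically in 2D.
# Plain midpoint of "up" and "down" ignores the rest pose entirely;
# "average the averages" pulls the result halfway toward rest.

def midpoint(p, q):
    return tuple((a + b) / 2 for a, b in zip(p, q))

rest = (1.0, 0.0)
up   = (0.8, 0.6)
down = (0.8, -0.6)

plain    = midpoint(up, down)                                  # (0.8, 0.0)
via_rest = midpoint(midpoint(up, rest), midpoint(down, rest))  # (0.9, 0.0)
# via_rest works out to (up + down + 2*rest) / 4: biased toward rest,
# but still not guaranteed to land anywhere meaningful on the surface.
print(plain, via_rest)
```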

>> No.954545

>>954542
>In real life, pictures are taken in the moment of motion.
But the Armature system doesn't take that into account. Every pose is derived in one step from the T-pose, while in reality a pose is derived from the previous pose.
If you watch some behind the scenes of some modeling session you'll see that the model always does some little motion and then the picture is taken at the end. I believe that that mechanic is very significant in how appealing or "realistic" an image can be. That's all I'm saying.

>> No.954548

>>954444
>Every pose is derived in one step from the T-pose, while in reality a pose is derived from the previous pose.
Uh-huh. I can imagine that. It's one of the things that makes what I'm attempting difficult. I'm trying to think in terms of "is", as opposed to "will be". Because there is no true passage of time, unless you simulate it.(And simulation nodes are a bitch)

That being said, I'm still not sure that you're thinking about it the correct way. Why are you trying to "reconcile" two possible outcomes? If you use acceleration to determine how a body is shaped, then the acceleration will decide which outcome you get: either the arm-up shape, or the arm-down shape. What's all this about averaging?

In fact, the more I think about it, the more I'm convinced that acceleration is not possible without simulation. I won't say it's impossible to find a "static" solution, as you say. I don't know enough about anything to make that call. But currently, at this point in time, it sounds irrational.

>> No.954561

>>939298
Beginner here. Currently doing youtube tutorials about making digital pastries in Blender.

If character animation is my goal (hobby, not professional), should I expect to be adding additional software to my toolkit at some point?

>> No.954565
File: 3.53 MB, 1216x884, shoulderfuck1.webm [View same] [iqdb] [saucenao] [google]
954565

>>941590
Nice job anon. Mind sharing the full setup? I'm very curious. I was sort of obsessed with shoulder deformations a while ago. I managed to get somewhat decent results with basically using a fuckton of stretch bones.
Your setup looks simpler and also deforms better.
Do you use subdiv on top of the armature? Those creases look really nice and smooth.

>> No.954566
File: 2.21 MB, 1216x884, shoulderfuck2.webm [View same] [iqdb] [saucenao] [google]
954566

>>954565

>> No.954601

>>954566
>>954565
Looks pretty good. Not sure what you can gain off me. But I can't post that previous bone set up, since I've already heavily edited it. Effectively dismantling it. It looks completely different now. In its current state, it's all messy and doesn't deform right.
But the basic idea of the previous rig was to place floating bones where muscle groups should be. A floating bone for the trapezius near the neck, another for the trap near the upper back, and another for the middle back. A floating bone for the scapula.
3 floating bones for the shoulder: a back part of the shoulder, a middle part, and a front part. A floating bone for the tricep and bicep. A floating bone for the part that connects the tricep to the elbow. Etc, etc.

This is because, when you apply automatic weights, the space between bones will be more flexible, so to speak, as the space is being controlled by multiple bones. The problem I continued running into is that when the arm twisted, all those floating bones would get bunched up and create very bad creases. That's why I haven't come back with an even better version of the floating bone mesh. I have to figure out how to prevent those creases.

I didn't subdivide that mesh. But I probably work with more polygons than strictly necessary. You can see my mesh here. >>944111 I've altered it since then, but not by much. I'm pretty autistic about keeping the face counts in groups of 4s and 2s. So actually, my mesh can be cleanly un-subdivided 1 level.

>> No.954605
File: 2.97 MB, 720x720, Shoulder Bones.webm [View same] [iqdb] [saucenao] [google]
954605

>>954565
>>954566
With my slimmer body here >>954390 >>954393 , I'm getting interesting results with fewer bones.(Though, I'm still considering adding more bones to it later) The IK is a lot better than my previous rig, because I'm using transformation constraints to drive the shoulder movement around. It reduces the feedback loop.

But check out this webm, because the shoulder bones are doing something that even I can't understand. I set it up in a fugue state. Now I look back on it like "what the fuck was I thinking?" Look at the ones highlighted in red. They perform a sort of "knitting" motion, where one goes up and over, then the other one goes up and over. Yet, they manage to keep the mesh intact. So weird. In a T-pose, they rest together in an arrow, like this < . They form the general shape of the shoulder when looked at from above. However, when they move, they work independently.

Anyway, the real thing to focus on here, are the bones I highlighted in purple. The fat purple bone has all of the scapula transform constraints. You can use drivers to achieve the same thing, but I'm too stupid for drivers. Thus, I use the transform constraint, which is nearly the same concept as drivers, except simplified to be more user friendly. The scapula controller reacts to the location of wrist IK(and elbow), and responds accordingly. Then, I parented the relevant bones to the scapula controller. The takeaway here, is that I stopped thinking about it as the shoulder leading the motion, and switched to thinking about the scapula leading the movement. Shoulder is secondary to scapula. I'm not sure how realistic that is irl, but it seems to work conceptually when dealing with bones. The scapula is deciding where to place the ball joint of the humerus.

In my previous rigs, I sketched the location of the scapula and collar with a few inert bones. But with this rig, I didn't bother. Still, the motion they make is implied through transform constraints.
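Since the Transformation constraint is doing the driving here, it may help to note that it is essentially a clamped linear remap: read a channel on the target bone, map it into a range on the owner. A plain-Python sketch of that mapping (the ranges below are invented example values, not taken from the rig above):

```python
# The Transformation constraint, reduced to its core: a clamped linear
# remap from a source channel's range to a destination channel's range.

def map_range(value, src_min, src_max, dst_min, dst_max):
    """Clamped linear remap, like Blender's Transformation constraint."""
    t = (value - src_min) / (src_max - src_min)
    t = max(0.0, min(1.0, t))            # the constraint clamps outside the range
    return dst_min + t * (dst_max - dst_min)

# e.g. wrist IK height of 0..0.5 m drives scapula rotation of 0..25 degrees
print(map_range(0.25, 0.0, 0.5, 0.0, 25.0))  # -> 12.5
print(map_range(0.90, 0.0, 0.5, 0.0, 25.0))  # clamped -> 25.0
```

Chaining a few of these (wrist location in, scapula rotation out, shoulder parented under the scapula controller) is the same "scapula leads, shoulder follows" logic, just written out as math.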

>> No.954857
File: 2.55 MB, 720x720, ID control.webm [View same] [iqdb] [saucenao] [google]
954857

Figured out IDs. Now I can manipulate the empties individually.

>> No.954944
File: 819 KB, 750x700, 1688914245748642.webm [View same] [iqdb] [saucenao] [google]
954944

>How do you get actually good shoulder deformation. I need a no fucking around, honest to goodness shoulder solution.
Not a problem! Use a muscle rig with FEM. You can do this in real time, too. Here's a view of a basic rig - muscle and bone only here.

>> No.956550
File: 2.18 MB, 720x720, Parented Position, Failed Roation.webm [View same] [iqdb] [saucenao] [google]
956550

Frustrating.
I can get the mesh to parent to the empties. The empties are parented to the bones. Thus, the mesh moves with the bones without weights. Using the sample nearest node, and setting the sampled index into IDs for the mesh, it works in a weird way. (I'm honestly very confused by the sample nearest node. It's like magic, and I can't control it well.) The problem is that I can't get the mesh to rotate. I figure it should be possible using the vector rotate node. The vector rotate node can make all the empties center points, so all points rotate independently. But I don't get what math is necessary to make it rotate properly. So it's all wonky. No matter what I change, it comes out wonky. I need to somehow get the proper angle.

I also made the resting empties extrude a line to the posing empties, thinking that perhaps having an edge that extends from rest to pose could help with alignment somehow. But that idea isn't working out either. You can see the line. There's also a line you can't see, extruding from the resting empties to the origin. My thinking was that I could use the two lines to get a dot product: because the lines begin perpendicular, they essentially start at a dot product of zero, and then approach 1 when the two lines face the same way, giving me a clean way of measuring 0 to 1. Then I could multiply that into degrees, and that would give me the rotation. But I can't put that together. I'm too fucking stupid.
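The dot-product idea does give an angle; it's just missing the last step: normalize, then take the arccosine. A minimal sketch (note the caveat at the end):

```python
import math

# Angle between two direction vectors via the dot product: normalize the
# dot by both lengths, then acos. Perpendicular -> dot 0 -> 90 degrees;
# aligned -> dot 1 -> 0 degrees.

def angle_between(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    la = math.sqrt(sum(x * x for x in a))
    lb = math.sqrt(sum(x * x for x in b))
    c = max(-1.0, min(1.0, dot / (la * lb)))   # clamp against float noise
    return math.degrees(math.acos(c))

print(angle_between((1, 0, 0), (0, 1, 0)))   # perpendicular -> ~90.0
print(angle_between((1, 0, 0), (1, 1, 0)))   # -> ~45.0
```

The catch: acos only gives an unsigned angle with no rotation axis, so on its own it can't tell a vector rotate node which way to turn. You'd also need an axis (e.g. from a cross product), which is probably why the wonkiness won't go away with the angle alone.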

>> No.956960
File: 3.65 MB, 720x720, Parented Position with Jank Rotation Setup.webm [View same] [iqdb] [saucenao] [google]
956960

Phew. Look at that. The mesh is moving along with the bones, except the mesh is not parented to the bones. There is no bone weight. Zero weights whatsoever.

I found a node called "Instance Rotation". https://docs.blender.org/manual/en/latest/modeling/geometry_nodes/instances/instance_rotation.html
It grabs all the rotational data from the empties automatically. I don't have to worry about coming up with some wackadoo math solution to do that now. It just werks.

There's still a problem, however: I have two empties occupying the same coordinates at the joint, and so the "Sample Nearest" node has to decide which one to sample from. Coincidentally, it sampled in a nice clean way (after I changed some names to fix the order). But I can't rely on that coincidence. I need to figure out how to make it sample right no matter the circumstance.
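The weightless binding described above reduces, per vertex, to "store an offset from the nearest rest empty, then re-apply it under that empty's posed transform." A 2D plain-Python sketch of that idea (not the node setup itself; 2D just keeps the rotation math short):

```python
import math

# Weightless binding sketch: each vertex finds its nearest rest empty,
# stores the offset, then gets re-placed by that empty's posed location
# and rotation. The node setup does the 3D equivalent.

def rotate2d(v, deg):
    r = math.radians(deg)
    c, s = math.cos(r), math.sin(r)
    return (v[0] * c - v[1] * s, v[0] * s + v[1] * c)

def bind_and_pose(verts, rest, posed, rot_deg):
    out = []
    for v in verts:
        # the nearest rest empty decides which transform this vertex follows
        i = min(range(len(rest)), key=lambda k: math.dist(v, rest[k]))
        off = (v[0] - rest[i][0], v[1] - rest[i][1])
        off = rotate2d(off, rot_deg[i])
        out.append((posed[i][0] + off[0], posed[i][1] + off[1]))
    return out

rest  = [(0.0, 0.0), (2.0, 0.0)]
posed = [(0.0, 0.0), (2.0, 1.0)]   # second empty translated up...
rots  = [0.0, 90.0]                # ...and rotated 90 degrees
print(bind_and_pose([(0.5, 0.0), (2.5, 0.0)], rest, posed, rots))
```

Note that the tie problem shows up here too: a vertex exactly between two empties depends on how min breaks the tie, same as Sample Nearest with coincident empties.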

>> No.958674
File: 800 KB, 720x720, Wonky Bend.webm [View same] [iqdb] [saucenao] [google]
958674

Almost no progress. But saving thread from page 9.
I'm just trying to figure out how to make the mesh blend cleanly.

>> No.960863
File: 3.60 MB, 720x720, Flat Bend.webm [View same] [iqdb] [saucenao] [google]
960863

Another page 9 bump. Except this time I made a bit of progress. Still not the results I was going for, but I'm learning. My brain is slowly understanding the basic concepts of nodes. And with that new understanding, I'm capable of doing more tricks. This may look like a simple bend, but there's some shenanigans going on under the hood.

Instances are turning into points, then turning into new points with inverted IDs, and getting placed into the nearest index.
The IDs are getting sampled twice: once normal, once inverted.
Other instances are getting turned into points, then back to instances again.
There are two mesh positions mixed, then two rotations mixed.
The mixing math takes the regular points and the inverted points, averages them out, then does a bunch of stuff with attribute statistics and map range nodes. (The math is still a little off, but close enough for now.)

There's been a lot of strain involved, as I'm not accustomed to pushing my brain this hard. But unfortunately, there's still a long road ahead. I may have to rethink my whole approach to this. The bending is clearly still bad. The whole thing flattens out and loses a lot of volume. That's no good. But I'm just glad it's moving in what looks like one cohesive unit. It's actually so fragile: if just one node is connected improperly, it breaks apart, whipping around like mad.

>> No.961934
File: 2.22 MB, 720x720, Point Cloud.webm [View same] [iqdb] [saucenao] [google]
961934

I found another piece of the puzzle. Thank god. This might be just what I needed to take the next step. This is the tutorial video: https://www.youtube.com/watch?v=tj6ZZYO5qPY

In the first half of the video, he explains how to create a point cloud where every point references every other point, and then how to remove bloat, and factor by distance. All without simulation or loops.
I didn't do the second half of the video, because it's not relevant to my purposes.
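The point-cloud idea from the video boils down to an all-pairs weight table with a distance falloff. A loop-based Python sketch of the same result (the linear falloff here is a made-up choice, not necessarily what the video uses; the node version builds this table without loops):

```python
import math

# All-pairs weights: every point gets a weight against every other point
# that falls off with distance, and is zero past a cutoff radius.

def pair_weights(points, radius):
    n = len(points)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = math.dist(points[i], points[j])
            if d < radius:
                w[i][j] = 1.0 - d / radius   # linear falloff: 1 near, 0 at cutoff
    return w

pts = [(0.0, 0.0), (1.0, 0.0), (5.0, 0.0)]
w = pair_weights(pts, 2.0)
print(w[0][1])   # distance 1 inside radius 2 -> 0.5
print(w[0][2])   # distance 5, outside the radius -> 0.0
```

The "remove bloat" step in the video is the same thing as dropping the zero entries, which is also where the performance comes back.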

>> No.964629
File: 612 KB, 720x720, Mesh web.webm [View same] [iqdb] [saucenao] [google]
964629

>>961934
Alright. I have to save this thread from page 10. I was hoping to make more progress before posting again. But nodes are difficult. I've been hacking at them this whole time, and everything I've tried has been a bust.

Currently, I'm trying to take what I learned from the video in the previous post, and apply it to the entire mesh. Which I can. That's what pic related looks like. There's a line that goes from every point to every other point. Although, for the sake of visualization, I booleaned some of the lines away. When all 1.5M lines are active, it just looks like a solid black mass. So I hid some of the lines so you can see the connections.

This might sound expensive. Slow performing. And it is, until you delete most of the lines; then it works just fine. I don't need all of the lines, I only need some of them, some of the time. So the trick here is creating the right conditions to delete the lines I don't need. Achieve that, and performance will not be an issue.

That's what the lines are for. My goal is to only keep the lines that connect to points which reside inside of the geometry. At first I was only using points, but now I figure I need lines in order to check whether or not a connection intersects the mesh. This is something I've been wanting to do with the raycast node for a long time, but the problem is that the raycast node self-touches, which completely fucks everything up. I'm praying that setting up lines this way will allow me to avoid the self-touching issue. Then, from there, I will actually be able to set up the faux-collision.
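For the "is this point inside the geometry" half of the problem, one standard answer is the even-odd ray test: cast a ray in any fixed direction and count boundary crossings (odd means inside). A 2D polygon sketch of the idea; the 3D mesh version counts triangle hits the same way, and nudging the ray origin slightly off the surface is the usual fix for self-hits:

```python
# Even-odd (ray parity) inside test: cast a ray along +X from the point
# and count how many polygon edges it crosses. Odd count -> inside.

def point_in_polygon(pt, poly):
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                           # edge spans the ray's Y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                                # crossing to the right
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))   # True
print(point_in_polygon((5, 2), square))   # False
```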

>> No.964660
File: 1.01 MB, 720x720, Inside rays.webm [View same] [iqdb] [saucenao] [google]
964660

>>964629
Small victory today. I changed my approach, and made some lines that cast on the inside, following the direction of the normals. I can set the length of these lines. Make them longer so they intersect stuff. Perhaps I can figure out how to make hit detection. lol, fat chance of that. It's way too hard. But I'll get to googling and see what I can find. SOMEONE must know the math for hit detection. The question is: where do I go to learn it?
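For the record, the ray/triangle math being hunted for here is usually the Möller-Trumbore test. A minimal, dependency-free sketch (returns the hit distance along the ray, or None on a miss):

```python
# Moller-Trumbore ray/triangle intersection: solves for the hit point in
# the triangle's barycentric coordinates and rejects anything outside.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    det = dot(edge1, h)
    if abs(det) < eps:
        return None                      # ray parallel to the triangle's plane
    inv = 1.0 / det
    s = sub(origin, v0)
    u = inv * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None                      # outside the first barycentric bound
    q = cross(s, edge1)
    v = inv * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None                      # outside the triangle
    t = inv * dot(edge2, q)
    return t if t > eps else None        # hit distance along the ray

tri = ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
print(ray_triangle((0.25, 0.25, 1.0), (0.0, 0.0, -1.0), *tri))  # 1.0 (hit)
print(ray_triangle((2.0, 2.0, 1.0), (0.0, 0.0, -1.0), *tri))    # None (miss)
```

Run one of these per line segment against nearby triangles (the segment length caps t), and that's the hit detection. The t > eps guard is also the standard trick for ignoring self-touches at the ray's starting surface.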

I think the thread will hit bump limit with this post. If so, the ship will likely sink before my next update. It's a little sad.

>> No.965227

>>943669
I am sorry anon but your penis is broken

>> No.965228
File: 3.50 MB, 898x525, 1689373595744754.webm [View same] [iqdb] [saucenao] [google]
965228

>>943830
>Twist testing is important.

>> No.965388

>>965228
Funny.