
/vt/ - Virtual Youtubers



>> No.25636387

The rigging isn't great in itself (although it mogs the cheap-ass rigs from niji and holo), but the ability to just import a picture of a character and have it automatically rigged is pretty amazing

>> No.25636932
File: 152 KB, 660x742, ringo-riggr.jpg

riggers btfo

>> No.25637168

>>25635510
God I miss Gibara...

>> No.25637285

>>25635510
Damn that's crazy ngl, riggers ETERNALLY btfo.

>> No.25637902

I miss Gibara...

>> No.25638252

My bet on why this isn't trending right now: it's resource intensive, which means latency, plus poor optimization.

Maybe in 5-10 years once GPU prices have come down!

>> No.25638280

>>25635510
>*snap*
Thanks for the model. With this technology I can become the biggest vtuber on Earth.

>> No.25638370

>>25635510
How much does access to this technology cost and what can it run on?

>> No.25638466

>>25635510
FINALLY an update.

>> No.25638558

>>25638370
It's on GitHub (open source), so free.

Hardware-wise you might be in a pinch lol.

>> No.25638629

AHHH I MISS GIBARA

>> No.25638834

>>25638252
You're making the silly assumption that a would-be streamer has to run this program actively in the background, consuming resources in real time on top of their video capture plus whatever game they're playing. There's no reason it has to work that way; it's like saying video games need to run Blender in the background to display models, i.e. completely ridiculous.

If this tech is developed enough to do the rigging in real time like this guy shows, it's developed enough to save the rigging and tracking information so it can be used as a regular Live2D model. Your only hardware bottleneck would be the time it takes to get to the end result.
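
As a purely illustrative sketch of the "bake it once, use it like a normal model" idea — pose_image() and the two-parameter pose below are stand-ins for illustration, not the project's actual API:

```python
import json
import numpy as np
from PIL import Image

def pose_image(source, pose):
    """Stand-in for the neural poser. The real inference call would warp the
    character according to `pose`; here we just return the source unchanged."""
    return source

def bake_expression_sheet(source_path, out_dir, n_steps=5):
    """Render a grid of mouth/eye openness values once, offline.
    A streaming app can then flip through these pre-rendered frames like an
    ordinary PNG-tuber sprite sheet instead of running the network every frame."""
    source = Image.open(source_path).convert("RGBA")
    manifest = []
    for i, mouth in enumerate(np.linspace(0.0, 1.0, n_steps)):
        for j, eyes in enumerate(np.linspace(0.0, 1.0, n_steps)):
            pose = np.array([mouth, eyes], dtype=np.float32)  # toy 2-parameter pose
            frame = pose_image(source, pose)
            name = f"{out_dir}/frame_m{i}_e{j}.png"
            frame.save(name)
            manifest.append({"mouth": float(mouth), "eyes": float(eyes), "file": name})
    with open(f"{out_dir}/manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
```

Point being: the expensive part runs once, ahead of time; streaming only ever touches the saved output.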

>> No.25638931

>>25635510
Great for alt and secondary outfits and trying out new ideas before fully committing to things. Using this, one can test-run a sketch of an alt outfit before deciding to get it fully rigged.

>> No.25639160

>>25638931
Now that you mention demoing, that seems like a great use case.

It could also be used by scammer riggers to pass this off as their 'Live2D' work...

>> No.25639186

>raymoo vtuber
CUTE

>> No.25639325
File: 178 KB, 480x480, 1639814973834.png

>>25639186
we already have Reimu vtuber

>> No.25639371

>>25635510
>Rion's rigging is still noticeably shittier than the other ones even in this software
Cursed

>> No.25639489

>>25635510
Riggers are going to use this as their 'cheap' budget option for clients who aren't willing to spend thousands for a rig

>> No.25639916

>>25638834
Yeah, but unlike Live2D, this uses machine learning.

My experience with a lot of machine learning projects is that most end results run slow as fuck. Not to mention it uses Python, and Python projects are notoriously slow and not meant for real-time stuff like this.

This thing has also been public on GitHub for a very long time, yet so far no pngtubers have been using it in place of a png. Why?

Maybe the first time I input a few images and keyframes the initial generation process is slow as fuck (as expected), but what if the software meant to be used with this model is resource heavy (nowhere does it state cross-compatibility with Live2D files)?

Will the rig generated by this project be compatible with existing software like VTube Studio? Sure, it will get there eventually, but I don't think fast enough for the results above.

How does the tech work? Live2D uses tons of separate layers + objects to generate different permutations (so low footprint). But this uses ML (a GAN), so likely each permutation gets rendered as a different file/instance, which I think would take much more memory. My perception of visual GANs is that they can do a lot of things, but never as efficiently as manual work (there would be way too much trash/artifacting to clean up, like those waifu generators).
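
For what it's worth, the latency worry is easy to sanity-check: time one forward pass per frame and see if it clears ~30 fps. The DummyPoser below is a made-up placeholder network, not the project's actual architecture, so treat the numbers as illustrative only:

```python
import time
import torch
import torch.nn as nn

class DummyPoser(nn.Module):
    """Placeholder image-to-image network, roughly the shape of a neural poser."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 4, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def frames_per_second(model, size=512, n_frames=30):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(1, 4, size, size, device=device)  # one RGBA frame
    with torch.no_grad():
        model(x)  # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_frames):
            model(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return n_frames / (time.perf_counter() - start)

print(f"~{frames_per_second(DummyPoser()):.1f} fps")  # streaming wants ~30+ fps
```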

>> No.25640101

How long until someone crops the video, adds Vox's voice, and posts it on YouTube titled "the real Vox Akuma" to bait people?

>> No.25642435

>>25639916
OK so I took a look at the v2 demo (v3 isn't out yet despite the article) and I understand your reasoning now. Basically what this thing does is hook up to Maya or Blender to do the actual work of adding the ML's suggested tracking output to the model. The 'live' tracking, such as it is, happens only while the app is open and only within the kinda makeshift wx GUI, so it can't be used for streaming even if you had this guy's crazy hardware (he's using an RTX Titan). Moreover unless I'm reading it wrong it looks like it can save the rigging information only as a pre-recorded animation or as part of a Blender/Maya model, so it's not compatible with a Live2D app without a bunch of extra work.

Also, considering that most chuubas can barely work out OBS, it would be some real pie-in-the-sky shit to expect one of these girls to even find this demo, never mind install Python and get it running.
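
If anyone wants a picture of what "save the rigging information as a pre-recorded animation" could look like in the simplest case, something along these lines would do it — the parameter names and file format here are invented, not what the demo actually writes:

```python
import json
import time

class PoseRecorder:
    """Capture one tracked pose vector per frame with a timestamp, so another
    app can replay the take later without ever loading the ML model."""
    def __init__(self):
        self.keyframes = []
        self._start = time.perf_counter()

    def capture(self, pose):
        # `pose` is whatever dict of parameters the tracker produced this frame
        self.keyframes.append({"t": time.perf_counter() - self._start, "pose": pose})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.keyframes, f)

# usage: rec = PoseRecorder(); per frame -> rec.capture(tracker_output); rec.save("take1.json")
```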

>> No.25644447

How long would it take for the current leading corpos to take this shit (since it's open source) and throw money at it until it reaches its final form? Cost would be no object to them for this task.

>> No.25644674

>>25644447
I have no idea about Nijisanji, but NEVER EVER for Hololive.
They don't want to hurt their useless "senior" engineers' feelings by throwing their work away and using better solutions.
This is why their 3D never improves despite all the money they've generated.

>> No.25644904

>>25644674
Their 3D has seriously improved though. Astel's 3D is way better than the Sankisei concert, for example.
>https://streamable.com/wvtqs6
For some reason it's just shit in Hololive while Holostars has better production values, but it's not like they can't do good work; it's just reserved for the branch no one watches.

>> No.25645253

>>25644447
Not until the guy finds a way for it to work on a potato laptop.

>> No.25645290

>>25635510
Is that the Chocolate Rain guy?

>> No.25646160

>>25644904
I've had my own rrat for years that holostars aren't simply recruited; they're actually employees or contractors for Cover using the company tech for their own channels. First for fun, later to test tech live and screen games for the girls.

>> No.25647750

>>25637168
Babutaro
Go support her

>> No.25648173

>>25635510
This is actually genuinely impressive even if the rigging itself is quite basic. Unless you want something particularly advanced, this pretty much BTFOs riggers.

>> No.25648420

>>25635510
kek, nijis were simping for a dude, I'm not even surprised

>> No.25648486

>>25648173
Not really; simple rigging like that isn't very expensive, and if you're too poor to spend money on even basic rigging, you probably don't have a good enough PC to run the program.

>> No.25648622

>>25638252
Pretty much, although if you read the developer's article about it he is fully aware of this and is trying to make it less demanding, so who knows where this project will go.

>> No.25651404

>>25635510
Adobe will buy out the tech

>> No.25651615

Nijisanji's 3.0 actually referenced his papers in part of the implementation. Don't ask me how I know this.

>> No.25653117

>>25644904
Holostars feels like beta testers for Hololive lmao

>> No.25653538
File: 139 KB, 1280x680, Chloe.jpg

>>25648420
We got too cocky Holobro...

>> No.25656754
File: 40 KB, 400x400, 1638759024452.jpg

>>25635510
i want him to try gura, just to see if the rigging can be any worse than what it is

>> No.25656975

>>25656754
If you took Gura's model into vtube studio and just set it to auto-rig it'd be better than what that shitter did to her. Her rigging is worse than most 2views.

>> No.25657337

>>25635510
Interesting, but this needs to be combined with facial expression buttons that the girls already have in Live2D or you'll have to be someone like Jim Carrey to achieve those facial expressions.
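
A rough idea of how that could be wired up: preset buttons overriding or blending with whatever the tracker spits out each frame. The parameter names and blend scheme below are made up for illustration, not Live2D's or this project's actual format:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExpressionPreset:
    values: dict
    blend: float = 1.0  # 1.0 = fully replace tracking, 0.5 = half-and-half

# Hypothetical hotkey presets a streamer might define
PRESETS = {
    "smug": ExpressionPreset({"mouth_smile": 1.0, "eye_half_close": 0.7}),
    "shock": ExpressionPreset({"mouth_open": 1.0, "eye_wide": 1.0}, blend=0.8),
}

def apply_preset(tracked: dict, preset_name: Optional[str]) -> dict:
    """Mix a hotkey preset into whatever the face tracker produced this frame."""
    if preset_name is None:
        return tracked
    preset = PRESETS[preset_name]
    out = dict(tracked)
    for key, target in preset.values.items():
        current = out.get(key, 0.0)
        out[key] = current + (target - current) * preset.blend
    return out

# e.g. apply_preset({"mouth_open": 0.2, "eye_wide": 0.1}, "smug")
```

That way the tracker handles the boring idle motion and the buttons handle the Jim Carrey faces.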

>> No.25657353

>>25656975
Is that why her eyes are always struggling to stay fully open?

>> No.25657456

AI can basically do anything at this point, huh?

>> No.25657792

>>25647750
it's not the same. gibara was a great model. one of a kind

>> No.25658585

>>25653538
I always knew Gibara would make it into Hololive, despite being male. Such talent.

>> No.25661976

>>25635510
This is pretty big for artists who don't have the time to learn how to fully rig

>> No.25662047

>>25635510
It's cool but where's the download link?

>> No.25662154
File: 511 KB, 462x1115, 16745456676755.png

We can finally /become/

>> No.25664032
File: 475 KB, 1606x1426, Hololive.jpg

>>25635510
If there are any /asp/ies or anons /here/, this is great tech for them to use. Imagine using pic related for auditions, or this https://files.catbox.moe/09ulov.jpg

>> No.25669148

bring back Aloe

>> No.25671056

Riggers will soon be a thing of the past.
Soon being a few years maybe, unless this person throws it behind a paywall with obscene pricing.

>> No.25675660

>>25635510
Gura 2.0 never ever

>> No.25676195

>>25644447
VShojo could

>> No.25676368

>>25637902
Wish granted
https://www.youtube.com/channel/UCdUpvKm_lTc9w9fy7HdrR9g

>> No.25676485

>>25635510
RIGGERS BTFO

>> No.25677488

>>25676368
Well, shit. Subscribed.

>> No.25679748

>>25676368
thanks

>> No.25681689

can someone share a github link

>> No.25682730

It's on GitHub, they say, but they don't share a single link. More like bullshit.

>> No.25683082

https://github.com/pkhungurn/talking-head-anime-3

>> No.25685809

>>25683082
Good shit

>> No.25688768

>>25683082
thanks

>> No.25692674

>>25635510
wtf don't dox gibara
