
/vt/ - Virtual Youtubers


>> No.38607458
File: 1.56 MB, 4000x2260, xy_grid-0163-4176038301-high quality, best quality, high detail, very detailed, official art, SilverMommy-NAI-t8-5000, 1girl, solo, standing, wolf tail,.jpg

>>38606234
I still haven't changed much of my workflow from the embed guide in the archive.
I've only enabled tag shuffling and the deterministic latent sampling method, while keeping the default 0.005 LR the same.
There are options like batch size and gradient accumulation steps now too, which I have absolutely no idea what they do.
I went down to 0.001 LR with the bao embed as a test, and it didn't show signs of overtraining while still looking good.
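For anyone else wondering about those two knobs: as a rough sketch (function names here are mine, not from the webui), gradient accumulation sums the gradients from several mini-batches before doing a single optimizer update, so the update behaves as if it came from one larger batch:

```python
# Rough sketch of what batch size and gradient accumulation steps do in
# most trainers. Function names are illustrative, not from any webui code.

def effective_batch(batch_size: int, grad_accum_steps: int) -> int:
    # Gradients from grad_accum_steps mini-batches are accumulated before
    # one optimizer update, so each update effectively "sees" this many images.
    return batch_size * grad_accum_steps

def updates_per_epoch(total_images: int, batch_size: int, grad_accum_steps: int) -> int:
    # Fewer, larger effective batches means fewer weight updates per pass
    # over the dataset, which is why raising either knob usually means you
    # need more steps (or a higher LR) to train the same amount.
    return total_images // effective_batch(batch_size, grad_accum_steps)

# e.g. batch size 2 with 4 accumulation steps acts like batch size 8
print(effective_batch(2, 4))
print(updates_per_epoch(800, 2, 4))
```

So the two settings trade VRAM for update count: batch size costs memory per step, accumulation doesn't, but both shrink how many optimizer updates you get from the same step count.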

I wish I could explain overtraining better. For now, compare this bao embed step comparison with the vei step comparison and tell me the exact moment you believe it shows signs of overtraining. Hint: there are a few tells: weird borders/letterboxing, blur, weird sharpness; it's mostly visible in the eyes.

Bao https://litter.catbox.moe/ojonm8.png
Vei https://litter.catbox.moe/xbbfsn.png

>>38606783
I've used my embeds on NAI, AnythingV3, and Nutmegmix. I just don't think training on any mixes is a good idea; the weights there are usually biased, and my previews have been looking fucky. I recommend base NAI [925997e9] for training.
Comparing models and embeddings feels like a gateway drug to insanity in the search for perfection.
