
/vt/ - Virtual Youtubers


>> No.43620985
File: 533 KB, 512x768, 00083-2891069402-masterpiece, (highres_1.2), (ultra-detailed_1.3), (extremely detailed CG unity 8k wallpaper_1.1), (best shadow), nekomata okayu,.png

Well, what can I say? Seems like holoomelette is here to save the day; it gave me the best results of the models I tried, with creamsicle second best. I also ran the prompt on grapefruit, but the results were meh.

This is the best and closest I can get to Okayu's outfit without a lora. I'm sure I can tweak a few more prompts to strengthen/lessen but I'm honestly tired of adjusting it so I'll leave it like this.

>>43613608
I see. When we talk "older cards", how old are we talking? Irrelevant info, I know, I'm just curious. Thanks for the explanation.

>>43615742
Thanks for the model links and detail comparison showcase, anon!

>>43618589
>As in, did good_shit and HLL3 eat up space on your own Drive if you had a shortcut to my folder created? Nope. Beauty of Google Drive shortcuts.
Yeah, that other anon was complaining about goodshit and hll3 eating up space, but I haven't noticed such a thing, hence why I asked. Thanks for clearing that up.

>I only have the one throwaway account so Google hasn't bonked me on my paying one for it, bless.
I used to be very cautious about this, so I only used one account for SD. Then I got fed up with the 24-hour cooldown and made another account, then another, and now I have about 12 accounts, lmao.
Only got fucked once, because I ran two colabs at the same time: one running a text-based AI, the other SD. I got lucky because I used my main account along with the throwaway one, and it was the throwaway that got locked permanently until I let Google run its "investigation". I simply cut my losses; I had made that account that very same day and lost a few gens, but nothing major. Currently sitting fine at 12 accounts, but I'm thirsty for more because I want to upload as many different models as possible. With your colab, however, that's not necessary, thanks to the big variety of models, VAEs, and extensions such as Latent Couple and ControlNet, as well as the larger space (since the models don't eat it up). All thanks to you for that.

>One thing though - the GPUs we use in Colab? Automatically use "half" - that means half precision, or fp16 - to save RAM, and run faster. They do that on the fly. So loading an fp32 model (the 4 GB ones) into Colab is just wasting time while that extra 2 GB downloads, because webui uses "autocast" and tosses that shit out anyways
Ohh! I see. Very interesting, I was wondering whether it affected colab. I looked up the Tesla T4 specs and wow, it's pretty powerful, so they let us run it at half precision?
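To put rough numbers on the fp16 point from that quote: raw weight bytes scale linearly with precision, so a "4 GB" fp32 checkpoint carries twice the download of its fp16 twin for the same weights. A back-of-the-envelope sketch (the ~1.07B parameter figure for SD 1.x checkpoints is my approximation, not from the post):

```python
# Approximate raw weight size of a checkpoint at a given precision.
# fp32 stores 4 bytes per parameter, fp16 stores 2 - hence the
# "extra 2 GB" the quoted anon says autocast throws away anyway.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2}

def checkpoint_gb(n_params: int, dtype: str) -> float:
    """Raw weight bytes in GiB for n_params parameters at a precision."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

n = 1_070_000_000  # rough SD 1.x parameter count (assumption)
print(round(checkpoint_gb(n, "fp32"), 2))  # ~3.99
print(round(checkpoint_gb(n, "fp16"), 2))  # ~1.99
```

Actual .ckpt/.safetensors files are a bit larger (optimizer states, EMA weights, metadata), but the 2:1 ratio between fp32 and fp16 variants holds.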

>Running on CPU, like my dumb ass was for quite some time? You have to use "--precision full --no-half". Pretty sure you also have to use "--precision full --no-half" on AMD GPUs (ROCm) on Linux, and on those older nVIDIA GPUs where, like that anon mentioned, FP16 performance is actually 64 times slower than FP32.
You lost me. Well, I assume this only matters if you run locally on older GPUs or on a CPU (I wouldn't ever want to do that, because it seems like a surefire way to cook your CPU).
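For reference, the flags that anon quotes just go into webui's launch arguments. A config sketch of where they'd live for a local CPU or older-GPU run (the webui-user.sh filename is the stock webui convention; the flags are exactly the ones quoted above):

```shell
# webui-user.sh fragment (sketch) - forces full fp32 precision,
# as suggested for CPU, ROCm-on-Linux, and older NVIDIA cards
# where fp16 is drastically slower than fp32.
export COMMANDLINE_ARGS="--precision full --no-half"
```

On Colab you never need this, per the earlier quote: the T4 handles fp16 natively and webui autocasts on the fly.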
