
/3/ - 3DCG



File: 1.47 MB, 820x823, 1613062415748.png
No.937895

What are you doing to incorporate the future (AI) into your workflow so you don't get left behind?

>> No.937896

Nothing, because I know what I'm doing.

>> No.937897

>>937895
I've used SD for texture generation, mostly for wallpaper textures on walls and for pictures to put in picture frames in my scenes. It wasn't too bad.
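
For illustration, a minimal sketch of what a texture pass like this could look like with the diffusers library; not this anon's actual setup, and the model ID, prompt, and the circular-padding trick for tileable output are all assumptions.

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint (placeholder model ID) onto the GPU.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Community trick for roughly tileable textures: switch every conv layer in the
# UNet and VAE to circular padding so the generated image wraps at its edges.
for module in list(pipe.unet.modules()) + list(pipe.vae.modules()):
    if isinstance(module, torch.nn.Conv2d):
        module.padding_mode = "circular"

# Generate a wallpaper-style texture and save it for use in a scene.
image = pipe(
    prompt="seamless victorian damask wallpaper pattern, muted green, flat even lighting",
    negative_prompt="photo, people, text, watermark",
    width=512,
    height=512,
    num_inference_steps=30,
).images[0]
image.save("wallpaper_texture.png")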

>> No.937898

>>937895
Iterating on images and gradually overcoming the intimidation of what prompts to use, letting it settle into my bones and become a normal part of the process. I did a shit tonne of legwork in the run-up to the whole AI thing, so I'm having an absolute blast with it.

Using the AI to its fullest effect is a skill unto itself, and I see nothing but good coming from it.

Niggas who say it's just a tool are wrong btw. It's also a really fucking quick way to learn what makes a render visually appealing; it helps ingrain the fundamentals and theory you can use to iterate and iterate until you really do have a fairly good piece of artwork churned out.

>> No.937899

I have AI draft all my postings to socials for me

>> No.937900

I do want to get around to messing with that one that generates animations for a bipedal skeleton based on prompts.

>> No.937905

>>937899
Based. I also let a chatbot run all my DMs and leave the schizos on "seen".

>> No.937919

>>937895
Fuck off, Cris.

>> No.937924

>>937919
Very perceptive. Not OP, but damn, teach me.

>> No.937990
File: 81 KB, 350x450, Alan-Watts-3.jpg

>>937895
>>937898
https://vocaroo.com/13MFac8nQTL4

>> No.937991
File: 1.06 MB, 2048x1024, stormclouds2.jpg

>>937895
I trained a model on my own work so it can shit out work just as bad as my own.
In all seriousness though, it's been pretty handy. I mainly use it for concept generation: I'll make a blockout of a scene I'm going for and img2img it to find possible directions to take it, do the same for a scene I've run up against a wall with, or just explore novel concepts when I'm creatively drained and can't come up with anything.

I haven't given texture generation a go, but I have experimented with HDRI generation. Though less HDR and more "I". I've used them once or twice, mainly for clouds and reflections, but I haven't really nailed the process. You can't train on non-square images (last time I checked), so for the time being I'm shit outta luck for training a model or embedding on proper equirectangular images. Guess it'd be fine for backplates though.
I just really want funky looking clouds and sunsets that HDRI sites don't have.
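
A rough sketch of the blockout-to-img2img step described above, assuming the diffusers library; the checkpoint, prompt, and strength value are placeholders rather than this anon's actual settings.

import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load an img2img pipeline (placeholder model ID).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Start from a render of the scene blockout.
blockout = Image.open("scene_blockout_render.png").convert("RGB").resize((768, 512))

# Lower strength stays close to the blockout's composition; higher strength
# lets the model wander further. A small batch gives several directions to pick from.
images = pipe(
    prompt="moody abandoned train station interior, volumetric light, concept art",
    image=blockout,
    strength=0.55,
    guidance_scale=7.5,
    num_images_per_prompt=4,
).images
for i, img in enumerate(images):
    img.save(f"direction_{i}.png")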

>> No.939004

>>937898

I learned that a few weeks ago. I've learned more with it than without.
I only stayed away because of the memes and the faggots who said it was shit. It's actually a great learning tool.

>> No.939031
File: 17 KB, 350x350, 8s570jtzm5h41.jpg

>>937895
For me, AI now exists to make background textures that aren't meant to be looked at closely, in a fraction of the time it used to take.

Getting reference images has also become insanely valuable: I use img2img prompts where I draw my own rough draft, see the iterations the AI comes up with, and photobash it all together in Photoshop until I have a solid piece to work off for my next model.

I'm still doing all the work, but I'm getting significantly more done now that I don't have to deal with the bullshit I suck at; the stuff that should take me half the time it actually did now takes me like 5% of the time it used to. I just want to autistically move vertices around, and this gets me to that goal faster.

>> No.939032

>>939031
>I'm still doing all the work

Ssssiiiiipppppp

>> No.939033

>>939032
There's no GOOD and performant 3D AI yet, so yes I am doing all the work. I could just be a faggot and grab some free textures off the internet instead; what's the difference?

>> No.939034
File: 43 KB, 411x418, 1656090611031408.jpg

>>939033
>so yes I am doing all the work
By your own admission, you now have a workflow that has you using a neural net trained on billions of other people's works to generate new images from those billions.

>> No.939040

>>939034
Who's coping now?

>> No.939053
File: 1.32 MB, 3840x1634, HomeGrownHuman.jpg

>>937895
put a label on my work

>>100% MADE BY HUMAN

>> No.939084

>>937895
Nothing. I installed the Stable Diffusion thing that runs in your browser with the Danbooru database, and tried to create one of those hypernetworks so it could generate character sheets or front/side/back views to model off of. But I must've done something wrong: I had traceback errors in the log window during training, the network seemed borked, and nothing really happened when I tried to use it.
I don't even know how the whole prompting thing works, to be honest. I thought it used Danbooru tags, but in retrospect I had no real idea how to do it; maybe it just works off keywords or whatever and I did everything wrong to begin with. Guess I'll wait for a few more updates until the software becomes more approachable for a smoothbrain like me.

>> No.939098

>>939034
This is also how your brain works, anon.

>> No.939134

>>939084
There's apparently a full-on installer now for Automatic1111's build.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/releases/tag/v1.0.0-pre
I can't comment on whether it works, or how to fix any issues you might run into, since I went the "difficult" route, but it might be an option for you.

As for prompting, it kind of depends on the model itself and how the images were described to it. Nine times out of ten you can just say what you want and get a reasonable result, but to go the extra mile and speak to the model the way it was trained, you may need to know its trigger words. The NovelAI model, for example (and most anime models), was trained on Danbooru images and works off its tagging system, so while you can get decent results with plain English, using tags tends to work better. Again, it largely depends on how the model was trained. The general rule of thumb I go by is: anime = tags, realistic = text.
For turnarounds, there's actually a nice embed for that.
https://civitai.com/models/3036/charturner-character-turnaround-helper-for-15-and-21
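
To make the tags-vs-plain-text point concrete, a toy sketch using the diffusers library; the checkpoint, the embedding filename, and the "charturner" trigger token are assumptions, so check the CharTurner page for the real ones.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Danbooru-style tag prompt, the kind anime-trained models usually expect...
tag_prompt = "1girl, solo, silver hair, armor, full body, standing, simple background"
# ...versus plain English, shown here only as a style contrast for realistic models.
text_prompt = "a full body photo of a woman in silver sci-fi armor on a plain grey background"

# A textual-inversion embedding such as CharTurner is loaded onto the pipeline
# and triggered by including its token in the prompt.
pipe.load_textual_inversion("charturner.pt", token="charturner")
turnaround = pipe(f"charturner, {tag_prompt}, character turnaround, multiple views").images[0]
turnaround.save("turnaround.png")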

>> No.939144

>>939134
>For turnarounds, there's actually a nice embed for that.
It's not even close to orthographic, and each image in the different "views" is off-model relative to the other views. Are you even trying? Why push this shit onto us?

>> No.939148

>>939144
Not that guy, but you don't need strict, pixel-perfect orthographic views of a 2D character to model them accurately in 3D. From eyeballing, you eventually develop a sense of how correct your proportions are, or come up with ways to check, even when the illustration has the character in a pose or at an angle.
There's nothing wrong with being an amateur, of course; we all start knowing next to nothing. But only an amateur needs front/back/side views to get a model off the ground, since they don't have any idea what their workflow is yet beyond
>I'm gonna set those reference views up in image planes and drag the vertices so they match

>> No.939149

>>939148
There's a reason we pay human concept artists - making orthographic views is one of them. Don't even think of projecting your amateurism onto me, kid. Your pathetic stolen SD Automatic1111 has a looonnnnggg way to go, if it doesn't become illegal first thanks to all the lawsuits.

>> No.939150

>>939149
>There's a reason we pay human concept artists - making orthographic views is one of them
Yes. There's no reason the two can't exist side by side. Also, I wasn't calling you an amateur. I was calling everyone who can't even get started unless they have all three orthographic views amateurs. And it's true, they are amateurs. A couple dozen models in and they'll develop the confidence and a workflow that doesn't rely on orthographic views as much.

>> No.939168

>>939149
>>939144
Please just kill yourself to save everyone the trouble of dealing with your retarded tourist ass.

>> No.942004
File: 3.19 MB, 2880x2880, 34487-1335019146-(masterpiece), (best quality), messy hair, full body, arms crossed, sunny day, (young child_1.3), (loli_1.1), (succubus demon g.jpg

>>937895
Prompting waifus. One day we'll be able to convert images into 3D models, then drag and drop them into video games. I'll kill them then.

>> No.942012

>>942004
>Even if you're a walking holocauster like some Warframe sweat who racks up a thousand kills every 10 minutes, you're not gonna put a dent in the number of generated waifus walking the virtual halls.

>> No.942013

>>937895
There will come a point where AI-generated shit will be generic shit, because everyone will overuse it. I predict games and stuff coming out with the same fucking assets, and studios suing each other over it. AI is only useful as long as humans provide ideas, and when that runs out, AI won't create anything original. In fact, AI creates nothing original.

>> No.942015

>>942013
>In fact, AI creates nothing original.

Neither do most humans; like 99.9999% of us will never make something novel. Nor is that what you want most of the time.
You want something familiar, refined to perfection. We combine old ideas into something new. The AI is already better than us at this.

I already generate concept art with AI that is more thought-provoking to me than what I see browsing somewhere like ArtStation, at a rate no human could match.
If the machine were soulless we would have nothing to fear, but it's far from that. These things will be better than all of us; to some extent they already are.

It still needs us to write the prompts, but soon enough these things will be hooked up to Instagram feeds, given engagement metrics, and probably eye tracking to figure out what draws our attention. They will be creating art that exceeds human capabilities because of how fast they move.
Our tastes will not evolve past it; rather, it will be the thing that evolves our taste. Before long the day will come when it no longer needs us for the prompt.

It'll look at you and go "I know you, this is what you like. But here is something even better, enjoy!"

>> No.942016

>>942013
>What are you doing to incorporate the future (AI) into your workflow so you don't get left behind?

I'm playing with it to get familiar with what it does, what it can do, and what the interesting use cases might be.
To me, the promise it holds is that it enables fewer people to create more.

Fewer cooks in the kitchen means more focus on telling more diverse stories, representative of the creative minds behind the production.
Diversity as in actual diversity: not pleasing a focus group, but exploring niche interests in genres that went extinct due to production cost.
The downside of human-made art is how costly it is in both time and labor, so all effort got gobbled up by these gargantuan projects that had to sell and had to please everyone. With AI doing the heavy lifting for us, a lot of that goes away.

It used to be that a single artist could make a painting or a model in a certain time frame. Assisted by AI, that same artist can now output, say, an entire comic book, or enough art to fill out an entire level in a video game, in the time span it would've otherwise taken to make just a few assets.

There is a lot of despair and negativity towards AI amongst us, but if you rise to the occasion there's new ground here for us to traverse: interesting ground that was previously only available to the Hollywood directors or Kojimas of this world, who were given the freedom to operate big teams.
My tip to artists is to widen the scope of their ambition to match the capabilities we've now been armed with.

As artists, our craft is shifting from mechanical implementation to the art equivalent of a coder writing high-level code instead of assembly.
The problem for us is that the process itself was very enjoyable, so we feel robbed, but I think it's possible to strike a balance there.
Work only on the pieces you truly love, do them for the enjoyment of the process alone, and outsource generating all the parts you care less about to the AI.

>> No.943785

>>937895
I'm specifically leaving it out and not using any of it.

>> No.943822
File: 94 KB, 697x639, 1673270369083727.jpg

>>937895
I was going to, because ElevenLabs was perfect for generating dialogue audio to animate, but the Poles did their PR stunt with the paywall and now I have to go back to scavenging for hours for the audio I'd like.
I'd even be willing to pay for it if it weren't for the atrocious "letter limit per month" model these BlackRock fucks are using.
As for the image/animation/etc. AI? Eh, I like playing with local SD, but I'll probably only seriously use it a decade from now, once it becomes way more advanced and stable in its output. Right now I'll stick to handcrafting shit.

>> No.943824

>>942015
>The AI is already better than us at this
The AI is better than YOU at this.

>> No.943825

>>942004
What was the prompt?

>> No.945236

>>943825
SEX SEX SEX

>> No.945244
File: 303 KB, 1024x671, 256.jpg

>>937899

>> No.945405

>>943822
>scavenging for hours to bypass a $5 paywall

>> No.945412

>>943822
ElevenLabs isn't that good. I paid for a month and was let down by the clone quality as well as the ToS. Cancelled and never looked back at TTS.

>> No.945436

>>945412
Man, it makes me wonder if there's anything like img2img for TTS.
Like an AI where you can do voice acting with your own voice and then modulate it into someone else's would be top tier. That way you could 1-man army a whole animation or something. Voice2Voice I guess.
No more worrying about hiring people on Fiverr, or scouting Twitter for VAs who think they're way better than they are, only for each of them to have wildly different microphone quality so your audio ends up a mix of shit. No more dealing with TTS bullshit where you can't really direct the inflection perfectly (or people's either). It'd be awesome.
I know what I want when I'm making voice parts, and it's hard to direct people when they're half a world away and all they do is send files. Even worse when you have to pay for that shit and you get what you get. I used to act in plays and shit, so it's way easier for me to just do what I want and get exactly what I expect, and being able to feed that into an AI or something to do the same thing but with a different voice would be the bee's knees' knees.

>> No.945583

>>945405
Can't pay for it from here, and as I said, the pricing model itself being "X characters per month" is scam-tier. So even if I could, I'd only do so if they suddenly became queue-based with no character limits, or at least relaxed the limits to maybe X characters a week or something.
>>945412
It requires tinkering, but my results were really, REALLY great back when it was "free". I also found out it can read accented text properly, which was a GREAT boon for me, since there aren't many VA recordings doing a stereotypical Scottish/Irish accent, and even fewer female ones, which is something I've often needed recently.
It's still considered the best option for a reason. It really was a powerful tool if you got the right samples and settings. I would post some of my results, but I can't find my vocaroos on the archive, so you'll just have to take my word for it.
>>945436
I think there are already a few voice2voice solutions.
Here's what I can find in the remnants of /vsg/:
>Speech-to-Speech:
>https://github.com/voicepaw/so-vits-svc-fork
>https://github.com/svc-develop-team/so-vits-svc
>https://github.com/prophesier/diff-svc
>https://github.com/fishaudio/fish-diffusion
They all had massive noise issues the last time I listened, though.