
/3/ - 3DCG



File: 1.05 MB, 1222x976, IMG_6815.png
No.962799

I've been running integrated graphics on my ryzen 5 5600g and I keep getting crashes during renders, I assume the 500mb of vram keeps running out or something
anyway I figure I need a GPU. Im looking at either an rx7600, an rx6650 xt, or an a770 if I find a good deal, something like that

curious to hear what you guys are running

>> No.962800

You either buy a used 3090 or you buy a new 4090. Those are your two options. Self /thread.

>> No.962801

>>962800
im too poor for nvidia
my budget is ~200$

>> No.962806

>>962801
Another anon here: run away from AMD, and I am an AMD fag. Buy a 3060 with 12GB, nothing less.

>> No.962807

>>962806
yeah looking into it further, from a quick google/reddit search it seems like people unanimously recommend nvidia for the cuda cores
I didnt realise it made that big of a difference
Im looking at used 3060/3060 ti/3070 now, theyre actually more reasonably priced than I thought theyd be

>> No.962808

>>962806
follow up question: whats better 3060/3060ti with 12gb vram or 3070 with 8gb vram?

>> No.962813
File: 17 KB, 239x400, 443.jpg

>>962808
Get the 12GB 3060 and you can run your uncensored & un-glowniggered AI girlfriend locally on your machine. The popular 13 billion parameter LLMs are surprisingly good at roleplay and will fit entirely in 12GB of VRAM.
And uhh, yeah, it's a good card for 3D too I guess.
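The "13B fits in 12GB" claim checks out on paper, assuming the model is quantized. A back-of-envelope sketch (assumed numbers; real usage adds KV cache, activations, and framework overhead on top):

```python
# Rough VRAM estimate for weights of a local LLM (illustrative only;
# actual usage is higher due to KV cache and runtime overhead).
def model_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1024**3  # GiB

# A 13B model at 4-bit quantization fits in 12GB with headroom;
# the same model at fp16 would not.
print(round(model_vram_gb(13, 4), 1))   # ~6.1 GB
print(round(model_vram_gb(13, 16), 1))  # ~24.2 GB
print(round(model_vram_gb(20, 4), 1))   # ~9.3 GB -- the 20B only barely squeezes in
```

Which is also why the 20B-vs-13B dick-measuring below is mostly about quantization level, not the card.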

>> No.962815

>>962808
You need all the VRAM you can get otherwise your shit just breaks. 3060ti will be faster but it doesn't matter if you can't load the scene. Only buy NVIDIA.

>> No.962816

>>962813
>he can't fit the 20B MLewd model
ngmi

>> No.962817

>>962816
>MLewd 20B
vanillashit, I sleep
t. MLewdBoros 13B enjoyer

>> No.962825

>>962808
VRAM matters.

>> No.962827

>7900 XTX
>MBA reference card
First one was the defective vapor chamber, second one has been fine. It's nice and I don't have to worry about a damn thing

>> No.962828

If you only have 200 you arent going to make it in this

>> No.962829

>>962813
>AI
ngmi

>> No.962863

>>962828
die

>> No.962903

>>962827
>Navi 31 GPU
Wait until you get pump-out. This die hates living. How loud is it, by the way? Just wondering if the MBA design is decent.

>> No.962907

>>962827
whyd you choose AMD over Nvidia?

>> No.962917

>>962799
Pretty much you do what >>962800 said.
>>962806
You COULD get away with AMD if you are mostly using Blender, but for anything else you are fucked and have to get jewvidia, cuz all that shit just runs on CUDA or CPU.

RTX 3060 is the bare minimum you should get, 12GB is a decent amount of VRAM, 8GB just wont do if you have a "medium" sized scene; when I had a 2070 4 years ago I could barely push renders for my finals. Now I have no issues with a 3090, try to buy one if possible. Otherwise, get the 3060 and use it until you can save some more, then get a 3090, used again.

With not enough RAM, be it video or system, you are just fucked on doing 3d stuff, period.

>> No.962920

>>962806
>>962813
>>962917
got a used EVGA 3060 12gb because Im poor, thanks bros

>> No.962922

>>962920
>Evga
Evga is the most prone to breaking. ONLY get msi

>> No.962927

>>962922
well its too late now I already bought it
also I always heard EVGA were one of the best gpu manufacturers and that MSI was one of the cheapest
I had an MSI AMD card in the past that died on me, and I watched some northwest repair vids where he shits on MSI, so I just assumed theyre not great, but maybe thats just for AMD

theres still 500 days left on the warranty
itll prob be fine

youre literally the first person Ive ever seen say EVGA is bad

>> No.962928

>>962922
>>962927
>northwest repair has 2 vids repairing the exact card I bought
what the FUCK
should I resell it and buy something else?

>> No.962981

>>962922
>Evga bad, buy shit brand that made 4090 bricks
Lmao
>>962927
Dont worry anon, when you make heavy 3D shit, anything will break down eventually. EVGA has the best RMA at least, so if it dies and the dude who sold it to you has the receipt, they will do the RMA service.

I've owned pretty much every brand, never had issues except one time with a cheap h110 msi board that died off, but it was a pos anyways. Nowadays I've kinda stuck with Gigabyte because of mobo features/price.

>> No.962982

>>962981
>EVGA has the best RMA at least
Dude, EVGA is so shit they completely exited the nvidia graphics card business. They are DONE

>> No.962989

Any reason to upgrade from my 1070ti 12gb or is it just a meme

>> No.962990

>>962989
The 1070ti is an 8gb-only card

>> No.963000

1660S
just weuourks

>> No.963007

>>962982
>EVGA bad because they left nvidia graphics card business
They left because Nvidia were fucking assholes to do business with, thats why they stopped working with them and didn't make newer GPUs, but they STILL do RMA service on their products.
Even other board partners were threatening to leave jewvidia because they fucked them badly, mostly during the whole crypto/coof years.

>> No.963009

>>962799
I have a 3080 10gb, I have the money to upgrade but it does everything I want so I see no reason to.
Unfortunately, as much as I hate ngreedia I have to agree with the other anons recommending the 3060 12gb, having had AMD cards in the past the software support is just not there.
Get a used one off ebay or hardwareswap, that should keep you in budget

>> No.963023

>>962903
It's fine I promise. I'd rather not do any water cooling, my workload doesn't justify it. Also loops are a bitch to maintain so I'd simply rather not. I can afford to run noisy fans.
The MBA is loud under maximum load near its temp limits, but the real problem is coil whine, holy fuck I can hear the coil whine over the fans going full speed on this. I recommend against the MBA model for 2 reasons
1) chances of a defective vapor chamber are 1 in 10, I can attest this is true
2) coil whine and fan noise are better on other models
I went for it because Yeston was my backup if this second MBA was defective. It's almost the smallest design, and 2 8-pins was a must. Yeston was the best model for my size and pin needs. Gigabyte has one too but fuck gigabyte, it looks cheaper than my 750ti sc.

>>962907
I've been Nvidia free since 2012. Please understand this is both a cost and autistic reason. I like reference cards more and blower fans more. I just want a fucking rectangle with no gamer bullshit at a more affordable price point. I know my needs. Yes Nvidia can do my workloads in slightly less time but the ROI is better for AMD for my purposes. I don't use CUDA or RT in my workloads nor do they benefit from it to any significant degree.
Personally I want to get the W7900 because like I said I'm autistic and I like blower fans and rectangles and the pro line is basically what I want. Yes Nvidia offers it too but again price matters to me, and I don't benefit from the extras they offer so AMD is my best fit

For work I'm actually building a number of remote workstations and we're debating between the W7500 and the W7600, because single slot cards like that are nice. Yeah 8x is gay but for our use case it's more than enough. Need 4 of the fuckers though and that adds up. Thankfully gpu passthrough on proxmox and me forcing their hand makes this a bit easier than normal. Also a threadripper board with 6 pcie slots all at 16x gen4 makes this a breeze. Hard part will be the storage.

>> No.963024

NVidia is obviously treating consumer GPUs like a legacy business and AMD is too retarded to be competition. I would switch to CPU and invest in a decent one, otherwise it will be buying tokens and paying 0.99$ for every render in a few years.

>> No.963030

>>963023
>I don't use CUDA or RT in my workloads nor do they benefit from it to any significant degree.
oh so you're a beg then and using a liquid cooled AMD gpu and the icing on the cake is you're writing a wall of text as well. Great. Just great.

>> No.963031

>>963024
>NVidia is obviously treating consumer GPUs like a legacy business and AMD is too retarded to be competition. I would switch to CPU and invest in a decent one, otherwise it will be buying tokens and paying 0.99$ for every render in a few years.
AI has not been proven to be actually profitable in the arts.

>> No.963032

>>963031
lol
lmao

>> No.963036

>>963032
Its true.

It hasnt turned a profit. The only use is in medicine (identifying afflictions) and the military (tax funded). You may say - but anon, all those movies and tv shows coming out, surely they must rely on ai and be profitable. This isnt true. Not only is streaming media not profitable for anyone, but fewer and fewer movies are being made each year now.

>> No.963040

>>963036
>movies and tv
what are you 70 years old?
nobody here is arguing its used in "tv", obviously its not

>The only use is in medicine (identifying afflictions) and the military (tax funded)
AI in medicine was something hyped like a decade ago and turned out to be a commercial failure, what are you even talking about old man
people use it all the time in their day to day lives
my friend uses gpt4 to draft scripts for coding
he and a lot of other people Ive seen also just use it like a search engine for general queries
I know two people who use chatgpt for law, specifically tax codes and criminal law
students notoriously are using it to write their essays, I was shocked to catch my sister using it for a college essay

just less than a month ago "AI" (really it should be called ML but whatever) was used to digitally "unwrap" and reveal the partial text of a Herculaneum scroll, a breakthrough in the classics
of course if were going to talk ML, which all "AI" is, OCR has been used for decades now for a million different things, facial recognition in phones/surveillance, image classification in general is huge

In 3dfx it would be used as a minor tool in a workflow, either for texturing, making hdris, photoshop assets for design work, etc

in terms of profitability in art its mostly in independent work since obviously the tech is new
Ive seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, ie theyre profitable
the ai voice synthesis tech is popping off recently
theres a handful of indie artists on twitter making money off ai work
ai is great for creating in between frames for animation

>> No.963056

AI is mostly great at taking dozens of gigabytes of disk space

>> No.963065

>>963030
Not everyone needs that shit, especially at increased cost. If I can save money by getting an equivalent, I will.
I work in game dev and knowing the programmers I'm dealing with I need options. I keep some arc cards around just to make sure we're thorough.

>> No.963066

>>963030
>>963065
Forgot to add that I clearly stated I don't use liquid cooled cards and prefer blower cards. Fuck liquid cooling it's more effort and maintenance than it's worth

>> No.963072

>>963040
so it hasnt been profitable in the arts.

Your friend wrote some bad, derivative, STOLEN code that breaks its original license

>Ive seen AI clip art in many youtube videos, videos with hundreds of thousands or millions of views, ie theyre profitable

you are a joke

>> No.963086

>>963072
if you dont see the potential you are retarded

>> No.963088

>>963086
sorry bud, but now you are pivoting to POTENTIAL.

You want to do something, do it right - create a generative script that respects copyright and doesn't just rip from the entire internet (including all of github, including specifically licensed code, for example GPL). Make something that isnt susceptible to bias. Make something that can be done via an understandable, debuggable script, and not a 50,000 unit cluster outputting biased works or, in the case of chatgpt, extremely neutered non-answers that just rip information from the web and dont give credit, even for code examples that require credit and attribution.

>> No.963091

>>963088
I gave you about 10 different real world use cases, half of them anecdotes from people I know irl, and you ignore them all and think Im pivoting
I gave you examples of where its currently profitable and you also ignored those
NFT grifts would be another one
not saying these are particularly admirable use cases but theyre certainly profitable

and like I said its just another tool to integrate into a preexisting workflow, not an end all be all
large language models dont just rip shit from the internet, even though they are often trained on internet data it gives an original presentation every time

youre a little too old and cranky to understand, thats okay

>> No.963092

>>963091
>NFT
>youtube
>stolen gpt prompts

get out of here young man

>> No.963093

>>963088
>>963091
look, Im trolling a bit but Ill be fair and grant you that in the arts its not a big player yet, but your following posts were utterly retarded and betrayed that you dont know jack shit about how ML is used right now IRL, and thats what my posts were mostly arguing against
Where we disagree on the first point is that you tacitly believe that ML is not going anywhere when thats clearly not the case
Paradoxically however you ALSO tacitly believe that AI, if it is to succeed, MUST be this magic bullet that completely replaces every cg software

Im just saying that a tool that allows you to create images from a prompt in any style you specify will be extremely useful
They obviously still have a certain look to them, but theyve gotten WAY better at realism in recent years, to the point that even Ive been fooled at first glance by some AI gen images

Also the in between frames thing is probably the best use case in the arts for the near future
for animation those frames are usually outsourced and take thousands of man hours, being able to do it with AI lowers the barrier of entry to animation substantially

>> No.963094

>>963093
>Utterly retarded
>Clinging to NFT, youtube, and stolen code from gpl repos from prompting

I dont even know what to say man

>> No.963095

>>963094
you have poor reading comprehension

>> No.963246

>>962808
>3060 12gb in case you want the extra vram to render stuff in 3d programs or ai
>3070 for the bus speed to play vidya
I would rather go for a 40xx 8gb card instead if your answer is vidya. 30xx cards only have dlss 2.0. 40xx cards have dlss 3.0 with ai frame generation that boosts your framerate in new games. Pick your poison

>> No.963247

>>963246
40xx cards have power connectors that are so busted they had to recall them and are CURRENTLY actively remaking them

>> No.963258

>>963247
Ive only heard of that issue with 4090s and some ti versions of the 4080/4070. In my opinion, a standard 4070 is the best gpu on the market right now. Decent power consumption, plays everything with memetracing, and has 4k capabilities. You are also saved from the coil whine headache.

>> No.963259

>>963246
>>963258
sir this is the 3dcg board

>> No.963260

>>963259
Yeah, I know. Im just making things clear. Btw I too have a 3060 12gb, and it runs blender nicely. Since I also play video games, I had the same dilemma as OP.

>> No.963276

>>963258
>You are also saved from the coil whine headache.
Ha, joke's on you Nvidia, I have tinnitus.

>> No.963298

>>963023
>Personally I want to get the W7900
>He's falling for the "workstation" GPU scam
>And wants to use an AMD "Prosumer" card
You just proved here that you are retarded. All /3/ fags know that for personal use you just buy the usual gaming card, because it works the same as the other ones without being scammed out of 2K for some "tech support" that you will never get/use. Leave that shit to multi-million enterprises that buy heaps of these for servers, that's the reason they make them, nothing else.

>> No.963300

>>963246
>Pick DLSS, a script meant for vidya engines, as the stuff for a /3/ software workflow that doesn't even use DLSS at all.
You don't know shit about development faggot, go where you belong.

>>>/v/

>> No.963310

>>963298
I want one because I like blower fans. Nothing more. You're looking too deep into this anon. I'm just very irresponsible with money

>> No.963312
File: 11 KB, 399x125, 1690333435079346.png

>> No.963314

>>962799
the consumer-grade nvidia card with the largest amount of vram you can get is the only valid answer.
With enterprise grade cards you pay out the ass for 24/7 specialist support which you will never make use of as a solo.

Anything else is gaymer poorfag cope
You also get to dunk on /v/irgin gpulets in your free time. Win/win.

>> No.963355
File: 192 KB, 1920x1080, nvidia-geforce-rtx-4080-review-01.jpg

I'm thinking of getting an RTX 4080 to succeed my aging GTX 1060.

>> No.963498

>>963355
You don't need a 4080. Get a 4060.

>> No.963588

>>963498
if he can afford it why stop him
poorfag mindset

>> No.963619

>>963588
he doesnt need it and the 4080 is an old card now. Wait for the 50 series and get a 4060 in the meanwhile

>> No.964901

>>962799
>ryzen integrated graphics
if vram alone is the problem you can adjust the system's max vram in the bios, either set it to 8gb or leave it dynamic so the system can define it on the fly; try this before selling your house to buy a scammy nvidia card

>> No.964903

I would recommend that you get the 6650, same performance as the 7600 and cheaper by a lot (at least where i live). Since it is older, it has good support on linux if you wish to use it

>> No.965626
File: 20 KB, 638x547, pepe-desk.png

>>962799
When is the 4090 coming back in stock?

>> No.965627

>>965626
Does your country have a computer chain or do you only have bestbuy to choose from? There's tons of stock at Canada Computers, though for some reason Bestbuy is completely sold out, despite the prices being higher.

>> No.965629

>>965627
I'm in the US. I'm aiming to get the Founders Edition and there are two places I know of that officially sell them which is Best Buy and Nvidia's own store.

>> No.965633

>>963246
Frame generation is a fucking joke

>> No.965634

>>965629
>Founders Edition
Why? I mean I guess it's a bit cheaper, but it also runs a bit hotter under load. And if your card is under load for long periods of time, you want it to be as chill as possible.

>> No.966170

For all who are considering getting a 40 series RTX that isn't the 4090, just keep waiting: the Super series was leaked and will be released soon enough. The 4070Ti Super will seemingly come with 16gb of Vram, and considering that one does come with the double encoding chip, it's the best one to get when its released.

>> No.966171

>>966170
>encoding chip
so you're a streamer and a gamer. Get out.

>> No.966271

>>962813
>13b model
>good
lol, lmao even

>> No.966281

>>962922
fuckin faggot you are

>> No.966283

>>963056
It's true, most of the time you end up doing more work getting it to not fuck up than anything. It's a glorified filter for kids to use in school projects.

So far most ai use in practical products has just been chinese devs making phone games to steal money.

>> No.966284

>>963619
This is actually a good take. 4080 was never worth it and just got hobbyists and scalpers to snatch them on a high

>> No.966490

>>966171
The double encoder also works for rendering, fucking retard

>> No.966622

>>966170
What about cuda and is it worth moving from a 3060 to it?

>> No.966647

>>962989
If you CPU render you don't need to upgrade at all.
Even after the card breaks you could literally rebuy the same card used if you wanted, until the model becomes completely incompatible with things.

>> No.966648

>>963031
Coomers heavily disagree

>> No.966652

>>966648
cooming is not mainstream, idiot. It's FRINGE and CRINGE

>> No.966869
File: 512 KB, 1920x1080, powder_coat_render_backplate_ardubox.jpg

>>962799
i've been designing and rendering on my ryzen 7 5800h with onboard graphics and have zero issues, onboard gpus get 512mb dedicated ram and 8gb shared, ram is not your issue here, test render of a part i'm working on right now

>> No.966873

I don't know, I'm building a rig right now and I've got everything but the GPU and the RAM. For the RAM, I know what I'll take, but for the GPU I'm hesitating between the 4080, or just accepting a poor lifestyle for a few months and taking the 4090. Or even waiting for the 4080 Super that's coming out soon. I have no idea.

>> No.966874

>>966873
4080/90 and a poor lifestyle? lol! i wont even go into their ridiculous cost, but wait till you see your power bill, then you will get a real life poverty reminder.

>> No.966949
File: 72 KB, 802x840, 1699019920362781.jpg

>>966874

>> No.966959

>>966652
Goalpost: moved

>> No.966960

>>962799
I use a GTX 1080

>> No.966994

Legit question, has anyone ever seen a USB gpu accelerator? i mean a proper gpu accelerator, not the crappy dvi/vga/hdmi video output adapters; i dont even care about I/O, just the raw processing power. Do these things exist? i remember at some point i've seen some HD video decoder cards for laptops and a couple mini pcie gpus, but very limited in capabilities and power in general. What i'm asking about is something like a google coral AI module but focused on complementing gpu power over usb.

>> No.966996

>>966994
They're called eGPUs and they suck. Expensive, massive performance tax and your BIOS and OS wont like them.

>> No.967000

>>966996
i dont mean an eGPU, those are proper GPUs with full connectivity and everything; they require thunderbolt in order to access the pci bus, you need to connect a monitor to the card in order to use it, and you're still limited by thunderbolt bandwidth. usb doesn't really support this, and that is why i am asking about an accelerator, something that maybe just takes some strain off the gpu. it may be a silly request but i was curious

>> No.969966

4080 Super soon hopefully.

>> No.969968

>>962800
Unironically this, anything else is barely better than CPU rendering and not worth the price unless you mainly use it for gaming

>> No.970047

I guess I'll go for a gigabyte 3060 12gb for my next gpu to succeed my 1060 6gb since I'm a poorfag

>> No.970319

>>962799
if you are a blender user and like to use older versions, take note that cycles in versions up to 2.8 does not work with the newer nvidia cards - i found out after getting the 3060 12gb, £100 cheaper than the 6700xt that has 12gb. i wanted the higher vram at the best price and nvidia's rep for multi-media had me. if i knew in advance i would have got the amd card or even settled for the 8gb 6600xt.

>> No.970333

>>962807
Not so much the cuda cores as it is the tensor cores. Nvidia's Optix speeds shit up so much it ain't funny. Amd has nothing similar at all.

>> No.970334

>>963007
It's okay if you want to be a brand cuck, but the writing is on the wall. Especially with the power supplies they released last year only having a 3 year warranty instead of the 10 year warranty that's been standard for as long as they were an Nvidia partner. If they're somehow still in business by the end of the decade I'll be pleasantly surprised but still disappointed.

>> No.970344

>>970333
The thing about gpu rendering is that you very quickly run out of memory once you start rendering actual production hero stuff. 12gb is only enough for background and 24gb still isnt enough
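To put rough numbers on why production scenes blow past VRAM limits so fast (illustrative figures, not from any specific scene), the texture memory alone adds up quickly:

```python
# One uncompressed 4k RGBA texture at 8 bits per channel (assumed case;
# float or 16-bit textures cost 2-4x more, plus mips on top).
width = height = 4096
channels = 4   # RGBA
bytes_px = 1   # 8-bit per channel
mb = width * height * channels * bytes_px / 1024**2
print(mb)  # 64.0 MB per 4k texture
```

A hero asset with a few dozen 4k maps, plus geometry and the BVH the renderer builds, and 12gb is gone before the background even loads.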

>> No.970568
File: 124 KB, 826x871, 1664112074404546.jpg

>>970319
>take note that cycles in the older versions up to 2.8 does not work with the newer nvidia cards

...why the fuck one should use such outdated software?

>> No.970573

>>962799
>What GPU do you guys use?

one asus proart 4060ti 16gb currently because its the perfect choice for a hobbyist like me! should my demands become higher, i just buy a second one used for a few bucks!

>> No.970652
File: 798 KB, 1921x1033, Screenshot 2024-01-15 010310.png

I currently use a GTX 1060, but it's past time for me to upgrade. 6GB VRAM may have been enough when I was just starting out with basic stuff but now that I've progressed onto more advanced projects, I'm almost always running out of VRAM now and it's insufferable. Once I upgrade to a beefier GPU with hardware ray tracing acceleration to take full advantage of OptiX, I'll probably take the 1060 and use it to build a cheaper HTPC or something.

>> No.970712

>>970344
>very quickly run out of memory
This shit is brutal in Blender, either in Cycles or Eevee.

>> No.970713

>>962799
rtx 3060 12gb, it fucking sucks, too slow and barely can support 5M polys

>> No.970838

>>970713
you dont need more than 5m poly.

>> No.970846
File: 819 KB, 720x720, 1704932283196384.png

I use a laptop with 3050ti
For now, it's enough, but I'm very much a beginner still
Maybe if I actually get somewhere in this hobby I will upgrade once the hardware starts severely limiting me, but for now it's my skill that is limiting me, not the hardware
Although I do remember that it took me basically a whole night to render the donut animation using cycles at 2k 60fps with my gpu, which is why i'm sticking with eevee for now

>> No.970847

>>970846
although in hindsight it was probably actually a whole night for a 10 second 60fps regular ole full hd animation, 1 minute for a 2k frame in cycles sounds too good to be true with my craptop
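The hindsight correction is plausible: a sketch of the per-frame arithmetic (the 8-hour figure is an assumption standing in for "a whole night"):

```python
# Sanity check on the "whole night" render (assumed numbers, not measured).
seconds = 10
fps = 60
frames = seconds * fps                      # 600 frames total
hours_total = 8                             # "basically a whole night"
sec_per_frame = hours_total * 3600 / frames
print(frames, round(sec_per_frame))         # 600 frames at ~48 s/frame
```

~48 seconds per Cycles frame on a laptop 3050ti is believable; 1 minute per 2k frame would indeed be too good to be true.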

>> No.970882

Radeon HD 6450 paired with AMD FX 6300

>> No.971991

should i wait for the 50 series or get a 4070 ti super now? I kind of feel like they are going to announce new exclusive features for the 50 series and then ill have a bum card and will have wasted a ton of money on a 4070 ti super

>> No.971997

>>970713
>>970838
> 5 million polys
My characters ass has 5 million polys alone

>> No.971999

>>971997
prove it, coward!!

>> No.972016
File: 3.95 MB, 540x304, tumblr_49437ee8fa3fdc837f936c892469bf80_f7fba0b4_540.gif

Breh, I'm making a new rig and I'm hesitating between getting a 3090 with 24gb of VRAM or buying the 4080 Super with 16gb of VRAM that's releasing in a few days.
The thing is that I'm planning on doing some heavy procedural environment modeling with Houdini so I might need the VRAM, but I don't know since I'm not really a tech-fag nor an experienced Houdini user (as in I never joined all the HDAs into a big project, I only did small projects separately).
What to do? The other software I use is Blender, Zbrush and Unreal Engine.

>> No.972021

>>972016
wait for 50 series to build anything

>> No.972022

>>972021
I don't have a computer right now cause I broke my laptop so it's pretty much an emergency.

>> No.972042

>>972022
if you are capable of breaking a laptop you're not capable of building a pc

>> No.972473

>>970838
>>971997


I found the issue, blender fucking sucks dude

>> No.972474

>>972016
vram is futureproof since AAA games made by pajeets aren't optimized

>> No.972475

Is the 7900 XTX good for Blender?

>> No.974536

>>972475
Seconding this
Is it worth it for le
>24gb vram
even if its rendering performance is piss poor compared to nshittia, comparable to a 4060 Ti at worst and a 3080 at the very best?

>> No.974539

kill me guys I have a 3080 but it crashes under load so I have to render on CPU

>> No.978501

>>962799
I use an old GTX 860m. It works great but the laptop shits itself if I try a softbody sim

>> No.978576

what is better, gigabyte 3060 12gb or a 4060 8gb?

>> No.978581

>>972016
How big is the price difference where you are? I think the 3090 would be better unless you really need the fancy pants new features on the 4080 super. The 4080 super is technically the more powerful card but VRAM is VRAM

>> No.979818

>>978576
better for what? what are you going to use that card for?

>> No.981355

It's going to be a sad day when the 4090 is no longer king and I lose my big dick energy. Then all I'll have going for me is my personality and it'll truly be over