
/3/ - 3DCG



File: 22 KB, 600x314, 17xp-pepethefrog_web1-facebookJumbo-v2.jpg
No.596584

About to pull the trigger on a 16-core Threadripper - talk me out of it. My current GPU is a GTX 950.

CPU vs GPU rendering thread. I've seen videos of Redshift, but Arnold seems higher quality.

>> No.596586

I'm not gonna tell you how to spend your money, you fuck. I'm not your dad. Get some self-control and learn how to be financially responsible by yourself.

>> No.596587

>>596586
I have more than enough to cover it without going to a credit card. I just don't want to make a poor choice and miss out on the GPU rendering craze. I will be rendering at 4K.

>> No.596590

I'm not gonna tell you not to get a TR (I have one, it's a great processor), but GPU-rendering is the way to go. You're able to iterate faster during look-dev and you can actually fit enough rendering horsepower in a single machine to do decent animations.

>> No.596591

>>596590
I have read that a Threadripper is almost equivalent to a 1080 Ti, and with the 2080 coming soon...

>> No.596593

>>596584
GPU is the way to go; it scales linearly. Arnold will be launching a GPU version soon too.

Nowt wrong with getting a Threadripper as well though

>> No.596595

>>596593
cpu ofc scales linearly as well and is easier to program.

>> No.596596

I would take the CPU with the best single-core performance (i7-7700K or 8700K) and use multiple GPUs for rendering.

>> No.596597

>>596596
>single core
but why?

>> No.596600

>>596597
Most programs still mainly use a single core when working in realtime. For that you want the highest single-core performance possible, so you get fewer hiccups and less waiting on the machine. CPUs with lots of cores have a lower clock rate per core and are not great realtime performers, but they excel at rendering.
So if you go CPU all the way, you have to decide on a compromise: best realtime performance or best render times.
If you go the CPU+GPU route, you can take the i7 and a good GPU, and you can always put in more GPUs.
Check the benchmarks on cpubenchmark dot net for a quick overview.
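
To put rough numbers on that trade-off, here's a minimal sketch comparing a hypothetical high-clock chip against a hypothetical many-core chip; the scores are made-up stand-ins for the cpubenchmark.net figures, not real measurements:

```python
# Rough sketch: compare a high-clock 6-core chip against a 16-core chip
# using hypothetical single-thread and multi-thread scores (NOT real
# benchmark numbers - look up actual values on cpubenchmark.net).

cpus = {
    # name: (single_thread_score, multi_thread_score)
    "i7-8700K (hypothetical scores)": (2700, 16000),
    "TR 1950X (hypothetical scores)": (2200, 22000),
}

for name, (st, mt) in cpus.items():
    # Viewport/realtime work tracks the single-thread score;
    # final-frame rendering tracks the multi-thread score.
    print(f"{name}: realtime proxy = {st}, render proxy = {mt}")
```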

>> No.596601

>>596595
>cpu ofc scales linearly as well
Sure, but multi-socket machines make no sense from a price-performance standpoint. I'd imagine a dual-socket machine with a couple of non-shit Xeons is pretty expensive. I bet it would be cheaper to stuff a machine with four 1080 Ti cards, which would stomp all over a dual-processor box for rendering.

>and is easier to program.
Not really our concern, is it? I'm not trying to write a GPU renderer.

>> No.596602

>>596600
Compare the "High End CPU Chart" vs the "Single Thread CPU Chart" for your Threadripper CPU.

>> No.596603

>>596600
This is for 4K non-realtime, lad.

>> No.596612

>>596591
It's not even close; the 1080 Ti is like 4x faster for rendering.

>> No.596623

>>596584
Per-core performance will be similar to the regular Zen arch, which is like 80% the speed of anything Intel; overall performance is of course awesome if CPU rendering is all you want to do for some reason.

>>596612
Depends on the renderer, I suppose; hybrid V-Ray is fairly similar between the top-end Intel/AMD CPUs and a single 1080 Ti. There aren't that many hybrid renderers right now, and comparing a GPU-only and a CPU-only renderer is pointless when they'll produce different results.

Overall I think that a really fast CPU like an 8700K paired with a fast GPU or two is best.

>> No.596627

>>596591
Depends on the renderer.

>> No.596637

CPUs are dying

Just get Octane and a graphics card.

>> No.596639

>>596596
>single core
>rendering

you idiots from Intel must stop holy fuck

>> No.596642

>>596612
Not true

>> No.596643

>>596639
...says the guy with no reading comprehension.
The point is to use a fast chip for the 90% of the work you'll actually be doing, and use GPUs for rendering. Even if you account for hybrid rendering, the CPU rarely amounts to much.

Consider what's better:
1. 1950X with some entry-level GPU - $1100
2. 8700K with 1080 Ti - also $1100
Now let's compare:
1. Worse performance in poorly-threaded applications with so-so graphics capabilities, but at least Arnold will run fast (and it may eventually become GPU-accelerated anyway). Programs with real-time components like Substance and Marmoset will run like crap.
2. Best performance in a wide variety of applications and best graphics capabilities, and you can still use the GPU to render fast. Plus, if you have the need for producing real-time content, you have the best hardware for it.

If money is no object and you can afford both a 1950X and enough Ti's to fill out the motherboard, then this thread isn't even for you.
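
A back-of-the-envelope way to compare the two $1100 builds above; the prices come from the post, but the throughput numbers are placeholder assumptions, not benchmarks:

```python
# Back-of-the-envelope price/performance for the two $1100 builds above.
# The "render unit" throughputs are placeholder numbers chosen only to
# illustrate the comparison - substitute real benchmark results.

builds = {
    "1950X + entry-level GPU": {"price": 1100, "cpu_render": 10.0, "gpu_render": 1.0},
    "8700K + 1080 Ti":         {"price": 1100, "cpu_render": 4.0,  "gpu_render": 12.0},
}

for name, b in builds.items():
    total = b["cpu_render"] + b["gpu_render"]   # crude: assumes a hybrid renderer
    print(f"{name}: {total:.1f} render units for ${b['price']}"
          f" -> {total / b['price'] * 1000:.2f} units per $1000")
```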

>> No.596690

>>596643
But what's better though - Arnold, written on and for the CPU, or Redshift etc., some horrible mishmashed frankencode contorted into a format the GPU can even accept?

My money's on Arnold on CPU. Why? It's industry standard.

That's right, bois. It's industry standard.

>> No.596701

>>596690
Arnold GPU goes into alpha/beta soon. It's coming!
CUDA only, same stuff as the CPU renderer, only running on the GPU.
And it will probably have Nvidia's AI denoise.
Good times ahead. This will be fun.

>> No.596704

>>596701
I'll give it a test run when it's released in 2019.

>> No.596731

>>596701
The denoise only works for stills, lad.

>> No.596741

>>596731
A movie is a series of stills. What's your point?
Does it produce visible artefacts?
I don't think so.

>> No.596742

>>596741
Ever seen a movie in a theater that just swims in noise?

Yeah. That's what it's like. At 4K this is unacceptable, ESPECIALLY since it's CGI.

>> No.596757

>>596742
You are making no sense; when you denoise there isn't any noise left, that's the point of it.
Also, you only use it to get the last 10% of noise out of your image - the 10% which costs you 50% of the render time.
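
That "last 10% costs 50% of the render time" intuition comes from how Monte Carlo noise converges: noise falls roughly as 1/sqrt(samples), so each further reduction gets disproportionately expensive. A quick sketch of that relationship (the 1/sqrt model is standard; the sample counts are illustrative):

```python
# Monte Carlo noise falls off roughly as 1/sqrt(samples), so the samples
# needed to hit a target noise level grow with 1/noise^2.

base_samples = 64      # illustrative starting point
base_noise = 1.0       # relative noise level at base_samples

for target in (0.5, 0.25, 0.1):
    needed = base_samples * (base_noise / target) ** 2
    print(f"noise {target:.2f}x -> ~{needed:.0f} samples "
          f"({needed / base_samples:.0f}x the work)")

# Which is why it can be cheaper to stop early and let a denoiser clean up
# the residual noise instead of brute-forcing more samples.
```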

>> No.596758

>>596757
That's not the way it works. Once you compress it to shit and back like YouTube does, sure, you can't see any noise.

HOWEVER

The result is now blurry as shit, like someone poured a jar of Vaseline over the frame. This is unacceptable once you get past the YouTube babby tier.

>> No.596766

>>596584
I don't like how that frog is looking at me.

>> No.596776

>>596758
Who the fuck was talking about YouTube?
Can you make any sense?
I was talking about this:
https://www.youtube.com/watch?time_continue=23&v=2vJ_5nPVU0s

>> No.596777

>>596776
Again, you're posting a link TO an extremely compressed source. That noise positively SWIMS in uncompressed video, bruh.

>> No.596780

>>596777
I have trouble imagining what "swimming in noise" looks like and I've never seen it, but you're telling me it's unusable because it's somehow visible. OK, I get it now.

>> No.596781

>>596780
grow up.

>> No.596783

>>596781
suffocate slowly in a ditch

>> No.596800

>>596595
>cpu ofc scales linearly as well and is easier to program.
How do you add a second, a third or a fourth CPU to your build?

That's my point... You can build a machine with one GPU today then double or triple your rendering power down the line without replacing the entire machine.
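
As a rough illustration of that scaling argument, assuming throughput scales close to linearly across identical cards (which GPU path tracers generally claim) and ignoring power/PCIe limits:

```python
# Toy model: frames per hour if render throughput scales roughly linearly
# with the number of identical GPUs in one box (hypothetical 10 min/frame
# on a single card).

seconds_per_frame_one_gpu = 600

for gpus in (1, 2, 3, 4):
    frames_per_hour = 3600 / (seconds_per_frame_one_gpu / gpus)
    print(f"{gpus} GPU(s): ~{frames_per_hour:.0f} frames/hour")
```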

>> No.596803

Vray is still king.

>> No.596805

>>596800
Just get the appropriate Mobo that handles multiple CPUs or build another comp altogether.

It's current year anon. You do have a job, right? Right?

>> No.596810
File: 73 KB, 1024x576, Chaos-Group-Releases-V-Ray-3.6-for-3ds-Max-with-V-Ray-Hybrid-Rendering.Still001-1024x576.jpg

>>596584
Hybrid rendering (using both CPU and GPU at the same time) is the future.
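
Conceptually, a hybrid renderer just hands out tiles of the frame to whichever device is free, so CPU threads and GPUs contribute side by side. A minimal sketch of that scheduling idea - not V-Ray's actual API, just the pattern:

```python
import queue
import threading
import time

# Toy hybrid-rendering scheduler: CPU and GPU "workers" pull tiles from a
# shared queue, so a faster device simply ends up taking more of the work.

tiles = queue.Queue()
for i in range(32):
    tiles.put(i)

def worker(name, seconds_per_tile):
    done = 0
    while True:
        try:
            tiles.get_nowait()
        except queue.Empty:
            break
        time.sleep(seconds_per_tile)    # stand-in for actually rendering a tile
        done += 1
    print(f"{name} rendered {done} tiles")

threads = [
    threading.Thread(target=worker, args=("GPU", 0.01)),
    threading.Thread(target=worker, args=("CPU", 0.05)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```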

>> No.596823

>>596805
>buy dual-socket mobo
>you are now stuck with under-clocked/overpriced server chips

>buy quad-socket mobo
>implying you can afford enterprise gear
>price/performance goes even further down the shitter.

I know you’re just meming, but there’s no scenario where CPU rendering gives you better bang for the buck.

>> No.596824

>>596823
No memes here lad. Think about this :
>it takes Arnold devs literally years longer to create the equivalent cpu results on the gpu.
>when they develop new things, again it takes exponentially longer for that work to reach a working state than if they had just stayed on the CPU.

Let that sink in.

>> No.596826

>>596824
That has nothing to do with my post but alright.

>it takes Arnold devs literally years longer to create the equivalent cpu results on the gpu.
You know... maybe that's an Arnold thing? I mean, I remember seeing the Pepe short back in 1999, and it took many, many years after that for Arnold to actually come to market as a standalone renderer.
You look at the Redshift guys and they are making progress just fine on the GPU front.

>> No.596842

>>596826
Redshift looks video gamey. You can't deny that fact.

>> No.596848

>>596826
Wait, Pepe the frog meme is older than me? Wtf?

>> No.596883

>>596584
Looks like Octane is the winner here; every professional is using it.
How many cards do I need to run Octane like the professionals?
4 cards is the minimum, I think.

>> No.596884

>>596883
Octane is limited in its ceiling. It's only good for as much RAM as your GPU has, which in 2017 isn't enough.
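
The VRAM ceiling is easy to sanity-check up front: add up the geometry and texture footprint and compare it against the card's memory. A rough sketch with made-up scene numbers (out-of-core features in some GPU renderers relax this, but the basic check still applies):

```python
# Crude VRAM budget check for a GPU-only renderer.
# All scene numbers below are placeholders - plug in your own.

vram_gb = 11.0                      # e.g. a 1080 Ti

triangles = 40_000_000
bytes_per_triangle = 64             # rough: vertex data plus BVH overhead
textures_gb = 6.0                   # sum of loaded texture data

geometry_gb = triangles * bytes_per_triangle / 1024**3
total_gb = geometry_gb + textures_gb

print(f"geometry ~{geometry_gb:.1f} GB, textures {textures_gb:.1f} GB, "
      f"total ~{total_gb:.1f} GB vs {vram_gb:.1f} GB VRAM")
print("fits" if total_gb < vram_gb else "does not fit - scene must be cut down")
```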

>> No.596917

>>596842
True. But it's an incredible engine and renders 10x faster than Arnold.

You should be doing stylized anyway. See >>596890

>> No.596930

>>596848
Pepe's first appearance was in 2005, in a comic called Boy's Club created by Matt Furie. P.S. delete your post, you might get banned.