
/3/ - 3DCG



File: 30 KB, 500x359, 1426946523006.jpg
No.475901

I'm considering getting a Quadro. Is it worth it for 3D?

>> No.475907

>>475901
Not if you're just doing modeling, and it depends on what else you want to do. Quadros are only really good for CAD these days, or if you need the extra memory. If you can, hold off until Q1 next year, when Nvidia brings out its next generation. We're moving from 28nm down to 14nm, so it's going to be a massive leap in performance. And GeForce will be using stacked RAM, so we'll have GPUs with up to 32GB of memory. It would be a really bad choice to upgrade this year.

>> No.475915

>>475907
Thanks.

>> No.475917

Is it true that you'd be fine just buying a good GeForce instead of wasting hundreds, if not thousands, of dollars on a Quadro? I know this is an artificially segmented market; the GeForce is just a handicapped Quadro.

>> No.475922

>>475917
I can confirm. I used a bunch of Quadros at various workstations at school with Maya, and they were no faster than my 580 at home.

>> No.475937

>>475917
Actually, the Quadros are clocked lower than their GeForce counterparts! haha. But yeah, the drivers on the GeForce cards have some stuff left out, like CAD acceleration, OpenGL acceleration for the older viewports in Maya and some other software, and double precision. But ever since Viewport 2.0 in Maya, Nvidia and AMD haven't been able to restrict performance on their gaming cards, because it's a game-engine-style viewport.

So unless you're working in a scientific field or doing CAD work, or need huge amounts of memory, the GeForce line is the better choice tbh. And like I said, you're much better off waiting for next year. It's not just a case of "oh well, obviously the next series is better", it's that it's going to be an unprecedented leap in performance next year.
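If you want to see the double-precision gimping for yourself, here's a rough CUDA sketch (my own, nothing official; it assumes you have the CUDA toolkit installed, that device 0 is the card you care about, and the kernel names are just made up for this). It prints the card's name and VRAM, then times a chain of FP32 fused multiply-adds against the same chain in FP64. On a GeForce the FP64 run typically comes out somewhere around 24-32x slower; on a Quadro/Tesla part with full double precision it's much closer.

// fp64check.cu -- build with: nvcc fp64check.cu -o fp64check
#include <cstdio>
#include <cuda_runtime.h>

// Serial FMA chain: the data dependency stops the compiler from
// optimizing the loop away.
__global__ void fma32(float *out, int iters) {
    float a = 1.000001f, b = 0.5f;
    for (int i = 0; i < iters; ++i) a = a * b + b;
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

__global__ void fma64(double *out, int iters) {
    double a = 1.000001, b = 0.5;
    for (int i = 0; i < iters; ++i) a = a * b + b;
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

int main() {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    printf("%s, %zu MB VRAM, compute capability %d.%d\n",
           p.name, p.totalGlobalMem >> 20, p.major, p.minor);

    const int blocks = 256, threads = 256, iters = 1 << 16;
    float  *f; cudaMalloc(&f, blocks * threads * sizeof(float));
    double *d; cudaMalloc(&d, blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Warm-up launch so context creation isn't counted in the timing.
    fma32<<<blocks, threads>>>(f, 1);
    cudaDeviceSynchronize();

    float ms32, ms64;
    cudaEventRecord(start);
    fma32<<<blocks, threads>>>(f, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms32, start, stop);

    cudaEventRecord(start);
    fma64<<<blocks, threads>>>(d, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);
    cudaEventElapsedTime(&ms64, start, stop);

    printf("FP32 %.1f ms vs FP64 %.1f ms -> FP64 is ~%.0fx slower\n",
           ms32, ms64, ms64 / ms32);

    cudaFree(f);
    cudaFree(d);
    return 0;
}

It's not a proper benchmark (single kernel, no averaging, the ratio depends on occupancy), but it's enough to show the segmentation the drivers and hardware impose.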

>> No.475940

>>475937
>it's that it's going to be an unprecedented leap in performance next year.

Actually, the leap will happen this year with the 390X. Stacked HBM.

>> No.475942

>>475940
Actually, no it won't. Stacked memory is only a tiny part of where the performance will come from. The 390X is still going to be on the increasingly aged 28nm fabrication node we've been on for a couple of years now, so it will be only a small increase in performance with some more bandwidth and memory space. Also, it's only HBM1, while Nvidia and AMD will use HBM2 next year. It's just yet another lame "refresh" of the current architecture on the same manufacturing size, which means almost no increase in transistor count, if any.

>> No.475943

>>475942
Shrinking die size has more to do with power consumption and yield than it does with speed. The CPU industry is a testament to this.

>> No.475953

>>475943
>Shrinking die size has more to do with power consumption and yield than it does with speed
Lmfao, you clearly know nothing about processors or fabrication. For GPUs, die shrinks are the most important factor for increasing speed, because they allow an increase in core count (rough numbers at the end of this post). If you stay on the same node size, then all you can do is try to optimize the limited space you have, which means only minor performance gains, especially after already optimizing twice on the same node.

>The CPU industry is a testament to this.
Nope. The CPU industry is a testament to choosing to mainly increase the complexity of their cores instead of simply shrinking them and adding more of them. These days, every time there's a die shrink with CPUs, the focus goes to making the core more efficient so it can clock higher, and to adding new instruction sets. For CPUs, parallelism isn't the most important aspect, it's about being able to crunch through single threads as fast as possible, and thus you must focus on clock speeds and efficiency over core count. The exact opposite is true for GPUs: they need to crunch as much data in parallel as possible, and to do that they have to increase their core counts to run more threads.

>Shrinking die size has more to do with yield than it does with speed
You don't follow the GPU scene at all, do you? Shrinking the die size is WORSE for yields: the smaller you go and the newer the process, the higher the margin for error and the worse the yields. It's the very reason Nvidia and AMD are skipping TSMC's 20nm node; they were supposed to use it, but yields haven't been good enough. The 14/16nm FinFET process promises better yields than 20nm is getting, but it's still not going to be anywhere near as good as 28nm, because 28nm has had years of refinement. Your logic is backwards.
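Back-of-envelope on why core count is the lever (numbers from memory, so treat them as ballpark): peak FP32 throughput is roughly cores x clock x 2, since one fused multiply-add counts as two ops. A GTX 980 is about 2048 cores x ~1.2 GHz x 2 ≈ 4.9 TFLOPS. Staying on 28nm, clock bumps and architecture tweaks buy you maybe 10-20% on top of that. An ideal full-node shrink to 14/16nm roughly doubles the transistor budget in the same die area, so you can fit on the order of twice the cores, and the same formula puts you near double the throughput before counting any clock or efficiency gains.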

>> No.475954

>>475953
...continued


The 14nm FinFET process also brings more efficient transistors, tri-gate FinFETs instead of the old planar ones, which alone is a big boost. Nvidia will also be implementing new hardware technology to give direct communication between the GPU, CPU and system RAM, greatly reducing latency and system IO, rather than just relying on low-level software like Mantle and DX12.

Moral of the story: stop being an AMDfag. Their GPUs coming out later this year are not going to be a significant performance increase over their previous ones. That will come next year, and Nvidia will get there first because of how AMD has positioned its release schedule. All that HBM1 on AMD cards is going to give you is a bit more memory to play with and the extra bandwidth to utilize it (quick math at the end of this post).

(Also, AMD cards fucking suck for 3DCG and are notoriously buggy with professional software. Nearly every program used in the industry is developed on systems with Nvidia GPUs and is often accelerated by CUDA. You'd be a fool to use AMD in this industry :) )
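Quick math on what the stacked memory itself buys you (rough figures, and the HBM numbers are the rumored specs, so take them with salt): bandwidth ≈ bus width x per-pin data rate / 8. A 512-bit GDDR5 card at 5 Gbps per pin gives 512 x 5 / 8 = 320 GB/s; a 4096-bit HBM1 interface at ~1 Gbps per pin gives 4096 x 1 / 8 = 512 GB/s. A nice bump, but it only helps workloads that are actually bandwidth-bound; it does nothing for shader throughput, which is where the node shrink matters.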

>> No.475955
File: 9 KB, 401x367, 1431359870112.png

>>475953
>For CPUs, parallelism isn't the most important aspect, it's about being able to crunch through single threads as fast as possible, and thus you must focus on clock speeds and efficiency over core count. The exact opposite is true for GPUs


stopped reading here. 3/10

>> No.475958
File: 114 KB, 955x957, b8.png

Yeah, not falling for your trolling.