
/g/ - Technology



>> No.79293971

>>79293943
>Why is AMD's memory latency going up
amd doesnt make memory tard

>> No.79293974

>>79293943
we still use extended data out ram and we just dress up as women but dont bother shaving our legs.

>> No.79293982
File: 96 KB, 1329x616, Capture1.png

>> No.79294144
File: 328 KB, 4488x1486, dualchannel.png

Why are there 8+ core mainstream CPUs offered with only dual-channel memory support, while quad-channel is exclusive to high-latency HEDT/server architectures with massive markups? Where does the jewing and stagnation end?
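
For scale, theoretical peak bandwidth just multiplies with channel count (a back-of-the-envelope sketch, assuming standard 64-bit DDR4 channels; the numbers are mine, not from any post here):

[code]
def peak_bandwidth_gbs(data_rate_mts: int, channels: int) -> float:
    # Each DDR4 channel is 64 bits (8 bytes) wide, so peak GB/s = MT/s * 8 * channels / 1000.
    return data_rate_mts * 8 * channels / 1000

for channels in (2, 4, 8):
    print(f"{channels}-channel DDR4-3200: {peak_bandwidth_gbs(3200, channels)} GB/s")
# 2-channel: 51.2 GB/s, 4-channel: 102.4 GB/s, 8-channel: 204.8 GB/s
[/code]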

>> No.79294164
File: 122 KB, 1312x895, Capture3.png

>> No.79294176

>>79294046
>3000s
okay grandpa

>> No.79294432

>>79294332
ok but who asked?

>> No.79294498

>>79294064
good decision, you shouldn't. either you won't understand any of it and feel big sad, or in the off chance u do understand it you'll feel big sad for buying reddit meme ryzen.

>> No.79294525

Have fun crashing in games for 20 extra fps

>> No.79294536

>OP defending TelAvintel
>for free

>> No.79294588

>>79294525
>benchmark clearly showing 60+ fps gain in avg and MASSIVE gain in smoothness due to higher 0.1% lows as well
>20 fps
cope harder. that benchmark doesn't even measure input lag, which is the biggest reason to OC ram.

>> No.79295017
File: 64 KB, 643x580, HWInfo64.png

>>79294767
liek dis?

>> No.79295198
File: 29 KB, 571x618, 1527962660813.png

if u cant perceive 1000Hz vs 500Hz polling difference of ur mice u are an atemporal worm and I thank u for rotting away while not reproducing

>inb4 u
oh honey no, ur life isnt worth 3 letters to be "you"

>> No.79295217
File: 33 KB, 1110x1110, Screenshot_2020-12-20 PCBuilding - RAM OC - SoTR.png

>>79293920
I wonder why you didn't link the page you stole that chart from, OP? Stay BTFO, poorfag loser. ;-)

>> No.79295338
File: 56 KB, 716x371, Memory-Wall.jpg

>> No.79296214

>>79293920
>Fortnite
Are you retarded, lad?

>> No.79296232

>>79296214
a mass appeal game like fortnite is exactly the kind of thing that will scale primarily off of CPU instead of GPU

>> No.79296272

>>79296232
Mass appeal for who? 10 year olds?
Even such a faggot as myself would not consider this a good benchmark.
You can't even make consistent tests on real gameplay, because it's a multiplayer gayme.

>> No.79296275

>>79296111
that 1usmus software is a reddit meme. it literally spits out generic presets. that github guide covers timing tuning.

if you are satisfied with your freq and just wanna tune timings, i'd suggest doing sub timings (aside from tRFC & tREFI) first, then primaries, then tertiaries and finally tRFC & tREFI.

just stress test with TM5 + anta config for 10-15 mins after changing a timing AS A GENERAL QUICK TEST. After you're done you wanna do a proper stress test, AT LEAST 3 cycles if you're doing the anta config. but overnight testing will be better.

also the reason i said to tune tRFC and tREFI at the end is cus they are temp sensitive, so they might take longer to error out as heat builds up.
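
The order above, restated as a little checklist (just a sketch in Python of what this post describes; nothing here touches real hardware):

[code]
# Hypothetical summary of the tuning order described above; step names and
# durations simply restate the post, this is not an actual tuning tool.
TUNING_STEPS = [
    ("sub timings (except tRFC / tREFI)",        "TM5 + anta config, 10-15 min quick test"),
    ("primary timings",                          "TM5 + anta config, 10-15 min quick test"),
    ("tertiary timings",                         "TM5 + anta config, 10-15 min quick test"),
    ("tRFC and tREFI (temp sensitive, do last)", "TM5 + anta config, 10-15 min quick test"),
    ("final validation",                         "at least 3 anta-config cycles, ideally overnight"),
]

for step, test in TUNING_STEPS:
    print(f"{step:45} -> {test}")
[/code]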

>> No.79296279
File: 32 KB, 469x495, file.png

>>79294767
>>79295085
help me with this
this is 4800MHz ram, running it at 4800 obviously fails to POST
so im running at 4000 but have trouble adjusting clocks
what do?

>> No.79296309

>>79295085
I followed that link and became a supreme autist when it comes to this shit. It was fun but at the end of the day, even in WoW I'm not seeing much of a difference from just setting XMP to 3600 on my RAM.

>> No.79296340

>>79296275
absolutely fucking based, thank you
>also the reason i said to tune tRFC and tREFI at the end is cus they are temp sensitive, so they might take longer to error out as heat builds up.
even if i have really good airflow pulling across it before it hits the Noctua?

>> No.79296387

>>79296272
Fortnite has a replay mode, dumb fuck. It's the perfect RAM and CPU benchmark.

>> No.79296414

>>79296380
https://www.kingston.com/dataSheets/HX448C19PB3K2_16.pdf
i got these, 2x8GB. expensive yes i know. i just suck at OC.
mobo is Asus rog strix z490-g gaming (wi-fi)
thanks anyway.

>> No.79296514

>>79296340
if u have spare fans try pointing one at the ram with zipties. also i doubt u have the ""heatsinks"" removed, cus the ""heatsinks"" that ram comes with are usually glued on with foam, so they actually trap the heat instead.

if ur brave enough try removing the heattraps by warming them up with a hairdryer or w/e and then cutting the glue with a knife. but be sure that ur fine with killing ur chips in case you fuck up.

>>79296414
oh, i thought u bought the gskill 4800 b-die ones, which are like $500+. anyways those look like rev-e kits. you should be able to push smth like 1.55v dram into them. try 1.35v on sa and io as well. should be safe. and once again, everything is in that guide so just read that too.

>> No.79297043

>>79296514
i have the fins removed, but not the heatsinks, they're team dark pro and seem notably solid but it's probably not much different thermally
actually now that i think about it my ram probably isn't getting any real airflow since the whole reason i removed the fins was to fit it underneath the lead cpu fan
there's no way i'm risking a component like this in the hardware pandemic - do you think maybe adding a tiny meme fan to push air up between them from under the gpu would do the trick instead?

>> No.79299183

>>79294332

The 1680 v2 is an 8-core (10-core die) CPU with 25 MB of L3 cache.

>> No.79299830

>>79299183
wrong.

>> No.79300101

>>79299875
It's a cherry-picked edge case scenario.

>> No.79300150

>>79293920
>RAM Doesn't Matter.png
>clearly shows ram mattering
how are you gonna have it?

dualrank as high as possible is kang

>> No.79300233

I play civ VI and the difference between 3200 c18 and 3600 c14 is like 12-15 seconds PER TURN.
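
For what it's worth, the raw CAS latency gap between those two kits is easy to put in nanoseconds (a quick sketch using the standard first-word latency formula; whether that alone explains 12-15 seconds per turn is a separate question):

[code]
def cas_latency_ns(data_rate_mts: int, cl: int) -> float:
    # CL cycles at a clock of data_rate/2 MHz, i.e. CL * 2000 / MT/s.
    return cl * 2000 / data_rate_mts

print(cas_latency_ns(3200, 18))  # ~11.25 ns
print(cas_latency_ns(3600, 14))  # ~7.78 ns, roughly 30% lower
[/code]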

>> No.79300239

>>79294075
>multi-channel dram.. better execution.. all applications
>parallelism.. is the critical enabling factor
>majority of power.. spent in the i/o interface
With Apple sticking dram chips in their M1 SoC,
do you think Intel or AMD are going to follow suit in the near future?
Or is it going to be years till x86 takes advantage of this?

>> No.79300263

>>79300169
It matters if you play Zoomnite made in Unreal Engine to milk young innocent children. In a real life scenario the difference is close to none.


>>79300205
Show me a real life scenario. Im all ears.

>> No.79300379

>>79300306
OP pic is a cherry picked scenario of biggest bloatware game ever made by Tencent slavemonkey on his break. It has no optimization *at all* meaning they simply glued assets together to ship it. When that happens then only way to fix muh FPS is by optimizing hardware which is not something you should be doing as a consumer anyway (there is a warranty label for a reason, silly). You are literally playing with fire, and your components for muh games fps. Overclocking is nothing but a waste of time for edge case hobbyst that cant do anything productive with their PC setups and go break it, just like monkeys.

>> No.79300675

>>79300552
>tldr everything you just wrote is completely pointless ?
Why did you reply if it's pointless? Seems like the only thing you can do is void your warranty to play with fire for something your components were NOT designed for, and then once it breaks all you are left with is "MOM OMG LOOK MY FPS I WENT SO FA-oh shi".


If you have nothing productive to do but chase your childhood pyromanias on adult toys maybe you should leave your mothers basement, incel.

>> No.79300683

>>79300101
Faster RAM always matters.

>> No.79300744

>>79300683
Until you crash and lose all your data because you pushed components beyond the spec they were factory tested to by actual engineers with Electrical Engineering degrees, not some pyromaniac at home.


Notice how no overclockers actually play their games?

1. Because they would crash 15 minutes in.
2. Because milliseconds literally don't matter and there's absolutely no gain from lowering them. It's only when you suck at games and can't accept that you aren't that special of a snowflake, even after countless hours, that you start throwing a tantrum about "lag", "input latency", etc.

>> No.79300871

>>79300744
>Until you crash and lose all your data
Never happened.

>Notice how no overclockers actually play their games?
>1.Because they would crash 15minutes in.
Sounds like you just watch youtube videos of people doing overclocking and base this off them not spending hours playing games in the videos.
Overclockers aren't happy with overclocks that aren't stable.
If it isn't rock solid, then it isn't worth it. So you will not find an overclocker who has their system configured in a way where it will crash after 15 minutes of gaming.
Take your cope elsewhere.

>> No.79301233

>>79301027
>Linpack
I'll try that.

>>79301191
>running isolated cpu test and then ram test separately is not indicative of stability
Makes sense.
I'll switch up my stability testing.

>> No.79301343

>>79301233
>defends overclocking
>from unstable overclock
>so clueless takes OC advice from anti-OC guy

absolute state of overclocking pyromaniac incels

>> No.79301365

>>79301213
t-t-trust me overclocking good!! i did it in the past!!

>> No.79301485

>>79301389
If its so good then why arent you able to back it up with your current system being hand overclocked? Its clocking to its factory limits which are not constant nor with great margins from idle clock.


Unlike you i dont kill my components and know that manufacturer did its best to optimize them for stability and performance out of the box.

You admit yourself that you are currently using your cpus which are within their designed limits at the factory. Not going over them. Clock boost as a feature is made and implemented by cpu engineers that know what they are doing, unlike basement overclockers that have absolutely no idea as is evident from this very thread. Incel manchildren that want to be men but cant even shower their penis so they cope with bragging rights of "overclocking" that falls apart one step into actual proper stability test, unlike factory tested limits which do not.

>> No.79301639

>>79301485
>If its so good then why arent you able to back it up with your current system being hand overclocked?
Because I currently have a system that can't be overclocked.
I made purchasing decisions based on my finances at the time, but that's changing with my system next year.
But why does my current system need to be overclocked anyway? Why doesn't it matter that I ran a stable overclock on another system for years? Why doesn't it matter that I have had overclocked systems before that?

>You admit yourself that you are currently using your cpus which are within their designed limits at the factory. Not going over them. Clock boost as a feature is made and implemented by cpu engineers that know what they are doing,
Those limits are designed to get as many products as possible performing to spec, to account for binning.
They're set to what they are so that 100% of those model CPUs will perform to the spec despite the variance in production.
This is well understood.
This means that the majority of CPUs will have some room to lower voltages, or clock higher, and still be completely stable.
This is well understood.

>> No.79301699

>>79301639
poorfag pyromaniac cope: the post

>> No.79302106

>>79300927
>Retard doesn't understand what diminishing returns means.

>> No.79302784
File: 155 KB, 1731x945, result.png

>>79300239
I hope they don't. I'd take the convenience of being able to expand/replace ram over the latency improvements unless they're pretty substantial.
Found some memory latency charts from AnandTech and they don't look that much different. Note that it's comparing LPDDR4X 4266MHz against DDR4 3200MHz, but it's the best I can find.

>> No.79302979

>>79293920
>Did anyone else fall for the RAM overclocking meme?
Maybe
Last time I did it, the word 'meme' wasn't invented

>> No.79303437

>>79302784
Yeah, I'm not seeing any obvious latency improvement between those graphs either.
Light can go about 1 foot in a nanosecond, so at best the latency could only improve by a fraction of a nanosecond by moving the dram chips closer (rough numbers sketched at the end of this post).

>expandable ram
The advantage in avoiding a standardized interface like a stick of ram
is that you can use incompatible lower power components (LPDDR4X)
and you can access the dram chips individually (which increases parallelism, but I can't guess how shortening the length of a cache line affects things).

Then again, maybe all this is moot with DDR5 on the horizon.
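
Rough numbers for that trace-length argument (my own sketch, assuming the usual rule of thumb of roughly half the speed of light for signals in PCB traces):

[code]
SPEED_CM_PER_NS = 15.0  # ~0.5c, a common rule of thumb for copper traces in FR-4

def round_trip_ns(trace_cm: float) -> float:
    # Two-way propagation delay over a trace of the given length.
    return 2 * trace_cm / SPEED_CM_PER_NS

print(round_trip_ns(7.5))  # DIMM slot a few inches away: ~1.0 ns round trip
print(round_trip_ns(1.0))  # on-package DRAM:             ~0.13 ns round trip
[/code]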

>> No.79303830

>>79293920
+80 fps for some overclocking. Pretty good.
Fortnite though.. Was it low settings? The gains are even higher at low, which is what most people play at.
Are secondaries tuned as well? They have an impact too.
RAM overclocking isn't a meme, but it's not necessary either. CL16 3600 kits will do fine for 95% of people. Those that want maximum frames and have high-end systems (10900K, 3080 etc.) would want 4200-4500MHz at CL16-CL17. If you play 1080p low, that is (competitive games).

>> No.79303938

>>79293943
>AMD's memory latency going up
AMD's latency is going down not up. Meanwhile Intel has not advanced their tech in 5 years.

>> No.79304109

>>79303901
Why do AMD fanboys always act like the previous generation never happened? When the old generations were current, they would vehemently defend it as if their lives depended on it. Once the architecture becomes (painfully) outdated, they simply pretend nothing happened, or even openly claim the architecture was shit. Now that AMD finally released an architecture that isn't utter shit, the AMD fanboys gloat and pretend Zen 1 and 2 never existed, and if pushed, will claim the price to performance justified their existence.

>> No.79304257
File: 56 KB, 554x893, 4L_8jyxLlBC.jpg

>>79304240

>> No.79304275
File: 549 KB, 1920x1080, 4L_KpaJnB84.jpg

>>79304245
>my company
I simply side with whoever has lower latency: currently Intel.

>> No.79304279
File: 109 KB, 529x513, 4400.png

UNDER 40NS GANG WHERE WE AT?

>> No.79304289

>>79304261
>numbers are made up if I don't agree with them
Is this the best you've got?

>> No.79304301

>>79304196
>>79304275

The latency to memory has gone down quite significantly, in fact.
This was somewhat offset by the cache connected to each CCX doubling from 16 to 32MB; a larger cache by itself adds to memory latency, but AMD compensated by lowering the latency in other places.
This increase in cache available to each core, by itself, already nullifies the advantage of Intel's memory latency.
Cache has much lower latency and is much faster than any memory.
And AMD's CPUs are actually better in every real-world application.

You do work for Intel, whether you are a paid shill or not, you are a shill. If you're not paid, it's pretty sad. Lying on an imageboard, for free, for a company that makes bad products based on stolen technology.

>> No.79304326

Are Samsung b-dies still the best latency modules?

>> No.79304327

>>79304301
Ok, then please post an MLC screenshot to prove your claims.

>> No.79304332

>>79304275
The best "real world" performance for Intel you could find is comparing to previous gen AMD products that came out even before the intel ones you're comparing them against, and only in the very specific conditions where Intel has an advantage.
Even with that gen Intel's overall performance is trash tier compared to AMD despite costing more, and even that benchmark doesn't show any real world difference since there are no 500Hz screens around.
5000 series wipes the floor with Intel even in games and single core performance, no matter what kind of nonsensical numbers a loser shill like you can conjure up.

>> No.79304349

No it's a meme

>> No.79304367

>>79304279
>CR2
ty for not reproducing
>>79304301
L3 is shared among cores honey. So in background your Trannycord app is raping and evicting that L3 from your main process. goes for all cores at once. and then they all rape two gates of gimped dual channel mem.

>In terms of parallelism, there can be up to 64 outstanding misses from the L2 to the L3, per core. Memory requests from the L3 to DRAM hit a 192 outstanding miss limit – which actually might be a bit low in scenarios where there’s a lot of cores accessing memory at the same time

>> No.79304376

>>79293947
agreed, if you wanted faster, buy faster in the first place

>> No.79304379

>>79293943
ITT: Intel shill literally has no more arguments to back up his shitty 400W toasters and starts jerking off about numbers nobody cares about

>> No.79304412

>>79304379
imagine the tumor to deny the most fundamental and most important bottleneck in entire computer engineering because you the small humble bugman bought what reddit hivemind told him to fit in.

AMDrones B T F O 25 years ago by a single page of computer architecture paper.

>> No.79304441

>>79304383
show us your MLC benchmark

>> No.79304530

>>79304474
>can't even compare to AMD's cooler, quieter, and cheaper operation.
Of course they run cooler, the CPU is being starved by the memory latency.

>> No.79304583
File: 1.32 MB, 2431x1781, chrome_AD2C9htXRe.png

>>79304477
What's wrong? Why can't you back up your shilling with real numbers? Could it be you are actually wrong?
I am still waiting

>> No.79304605
File: 1005 KB, 1999x1793, chrome_hnUUWzGy7Z.png

>>79304583
Damn these superior Intel CPUs are actually pretty slow I gotta say.
I'd love to pay $600 for the privilege of waiting twice as long for a render or a compile

>> No.79304613
File: 639 KB, 1953x1699, chrome_EXImrLXF4M.png

>>79304605

>> No.79304615

With Intel you know you're paying for quality.
And that alone is worth the premium.

>> No.79304647

>>79304615
The quality of toaster oven hot, power hungry CPUs, full of security vulnerabilities.
No thanks, "Intel Quality" has meant defective broken products for years.
Security mitigations have reduced server performance by 40% or more, since in some cases you had to turn off HT for maximum security.
That's Intel Quality.
AMD has not even come close to having as many issues.

>> No.79304669

>>79304632
>just wait rocket lake

>just wait for 10nm
>next year for sure
We've been waiting for that for 6 years now
When is that gonna be delivered?
Rocket Lake had to be backported to 14nm again, and it's an *8* core CPU, in 2021.
Don't make me laugh. Even if Intel takes back the single-core performance crown, they will lose in overall performance by a mile.
And it will just take a Zen3 refresh for AMD to win again.

>> No.79304692
File: 830 KB, 2208x1242, 4L_k6p4YzzG.png

>>79304556
Shouldn't a 2020 7nm architecture do a lot better against a 2015 14nm architecture?

>> No.79304751

>>79304412
That paper only applies to single-die, single-core CPUs. It has no bearing on modern multi-core CPUs that are moving towards chiplets.

>> No.79304782
File: 1.25 MB, 2918x1743, chrome_MP0bhMEdMv.png

>>79304692
And besides, the production process makes the biggest difference in the amount of cores, and also cache that you can put on the die.
On any multi core workload, there is no contest, Intel doesn't just lose, they don't even come close, not even with their HEDT that have more cores than AMD, ironically.
But they lose either way.

>> No.79304786

>>79304721
How is gaming not a real-world benchmark, but Cinebench is? In just about any gaming video, the stock 10900K is trading blows with a 5900X or 5950X.

>> No.79304788

>>79304751
oh so why is AMD's memory latency so bad then? if it has no bearing, how come Intel multi-core cpus benefit and scale so much with lower mem latency?

weak attempt, try again drone beep boop

>> No.79304856

>>79304836
Why are you so fixated on Intel? Wouldn't AMD be a lot better if they didn't rebrand a server architecture as consumer and fixed the memory latency?

>> No.79304880

>>79304856
No one is fixated on Intel except you. You literally hijacked this thread complaining about an imaginary issue with AMD CPUs to shill Intel products.

>Wouldn't AMD be a lot better if they didn't rebrand a server architecture as consumer and fixed the memory latency?
AMD has nothing to fix. They have made the best consumer and server architecture and that's why they beat your CPUs every time.
Intel has a lot to fix on the other hand, starting with their security issues.

>> No.79304885

>>79304836
>Your latency cannot save your shit architecture and it shows in real world numbers. Sorry.
why don't you have a benchmark with good RAM showing us evidence of this? We see it clear as day in OP that Intel scales with latency. Where is a comparison of AMD vs Intel on low latency memory? Can you pull it out of your shill package? All your posts don't have memory listed, and if they do it's a $30 one at CL16 3600MHz???

>> No.79304886
File: 307 KB, 640x434, crunch.png

>i9 10900K: $529.00
>R9 5950X: $1459.99
more than twice the price for a 5% increase? Is that worth it? If we're looking at game benchmarks, surely putting the money into the GPU is the better choice

>> No.79304887

>>79304849
>Memory latency is overrated as hell unless you are running specialized code/workloads.
Exactly. It is irrelevant to most workstation workloads, where AMD is absolutely dominant; it doesn't matter in videogames, which is what Intel is best at; and it matters even less in most server applications, where Intel doesn't even come close to the concurrency and the amount of I/O AMD has.

>> No.79304931

>>79304846
My point is merely that you're being facetious by saying that intel has been "terrible for years" in terms of any meaningful applications. AMD has always been the way to go for enthusiasts, which I agree with and endorse discussing on a technology forum, but we don't need to pretend that intel is terrible or failing.
>>79304914
i checked and those are the listed amazon prices. For an enthusiast that might be worth it

>> No.79304942

>>79304885
>We see it clear as day in OP that Intel scales with latency.
Actually we see clear as day in OP that latency matters fuck all. Even in those extreme cherry picked conditions it barely makes a difference.
I have already shown conclusive proof of AMD's superiority in actual numbers. You have shown absolutely nothing. Your shitty CPU has been destroyed by AMDs more and less expensive products in every way imaginable. I am sorry for your loss.

>> No.79304976

>>79304916
>Intel-shill /v/tard grasping at straws this hard trying to justify how they overpaid for factory-overclocked memory that doesn't help them get into Platinum-Diamond on Zoomnite.

>> No.79304998

>>79304916
>>it doesn't matter in videogames,
Yeah, it doesn't. Even in those extreme cherry picked conditions it makes no real world difference, as is evident from the numbers displayed in OP.
You are delusional.

>> No.79305002

>>79304942
>fuck all
60 fps average increase
10-20% lows increase
>barely makes a difference
>shown conclusive proof of AMD superiority
i dont see a single proof. show me 4400 memory AMD vs Intel?

im sorry you have to shill AMD for $0.01 per post

>>79304976
x_D sit.

>>79304979
ok great show us how superior your cpu is. run AIDA mem bench and Memory Latency Checker.

>> No.79305022
File: 892 KB, 800x566, secrets of the universe.png

>>79293920
I nearly fell for the RAM OC meme. fortunately I am a sperg and realized that things like mouse latency affect just as large a margin.
PC latency improvement is so funny, not that I don't do it but it's in the ballpark of single digit milliseconds.
Your response time is up to 40ms better when you're exhaling vs when you're inhaling, fluctuates up to 200ms based on blood glucose levels. Literally not being female gives you a 40ms improvement.
Pic related, me after cutting all of my peripheral cables down to the min length for the 0.000008 ms gains.

>> No.79305052

>>79300675
>Why did you reply if it's pointless?
Pointing out that you're pointless and inadequate isn't itself pointless.

Overclocking RAM doesn't break it.

>> No.79305139
File: 168 KB, 900x480, file.png

>>79305098

>> No.79305166

>>79305157
https://software.intel.com/content/dam/develop/external/us/en/protected/mlc_v3.8.tgz

>> No.79305188

>>79304979
it's been 20 minutes, even AMD should be done with latency benches by now. what's the hold up?

>> No.79305206

>>79305161
>onto a mentally insane mind.
We all know you are mentally insane. That's expected considering you prefer intel. But thanks for confirming you are a schizo.
>>You haven't even shown it makes an actual difference for Intel
OP shows it makes absolutely no real world difference, even in an extreme case. Even in the most cherry picked scenario as shown by OP, Intel doesn't improve meaningfully by having faster memory. So there's no need to show the comparison with AMD since AMD crushes it in real conditions. If you want to find it yourself, you are welcome to do so. I am sure you have those at Intel.
>it's been 20 minutes, even AMD should be done with latency benches by now. what's the hold up?
We have already shown why memory latency doesn't matter in the slightest. If that's all you have, then you're giving up.

>> No.79305207
File: 69 KB, 1294x478, terry1.jpg

>>79305192
>XMP
>Minecraft
>BTW you're a dumb faggot retard anyways.

>> No.79305214

>>79305046
>>79305107
>>79305085
An MIT study said we can perceive a 2ms difference in time, which would translate to 500Hz. Obviously this is some shitty logic, but presumably that's what it'd take to "scientifically" reach the limit of what we couldn't tell was flashing in front of us. Maybe it wouldn't look right cus our vision is continuous, but it's our best guess for now, so there's at least something to market with up to that point.
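
The arithmetic behind that figure is just the reciprocal (trivial sketch; which rate people can actually perceive is the part being argued about):

[code]
def poll_period_ms(rate_hz: float) -> float:
    return 1000.0 / rate_hz

print(poll_period_ms(500))   # 2.0 ms between polls
print(poll_period_ms(1000))  # 1.0 ms between polls
# Worst-case extra delay from dropping 1000 Hz -> 500 Hz polling is therefore ~1 ms.
[/code]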

>> No.79305217

>>79293947
It wasn't before. It's just that Intel and AMD were seething hard that people were buying cheap CPUs and getting better performance than their top products with just a cooling upgrade, so they started to market (((unlocked))) CPUs for their top-range enthusiast segment.

>> No.79305247

>>79305206
>100 fps gain
>no difference
I guess memory latency doesn't matter when AMD has higher memory latency even though there's no mention of AMD anywhere in OP's picture? Why are you trying so hard to bury the idea of memory latency?

>> No.79305248

>>79305214
not everyone has same CFF.
>>79305206
So finally you confirm Intel is superior to AMD because no evidence can be found in this thread that shows otherwise when you spend $25 more for low latency memory and get 20%-30% performance gain over best AMD can offer.
>>79305233
intel.com
cope harder

>> No.79305286

>>79305247
100FPS gain at framerates literally nobody plays at, and that's, again, the best possible gain for your case.
Yes it doesn't matter.
And no, it's not even 100FPS.
Because the gamersnexus benchmarks I posted before are at 3200Mhz C14, The OP picture shows 384 with 3200 C16, which is worse than 3200 C14.
The Intel in OP picture gains barely 45 FPS from 3200C16 to 3600C14. At 400FPS.
It's nothing. You are grasping at straws trying to find some extremely specific situation where your shitty CPU can match or beat AMD, and for all the talk about AMD losing at 4400 Mhz you haven't even shown it yourself.
Intel barely gains framerates in these specific conditions. It doesn't scale, your meme about latency doesn't even matter in the situations you are pointing out.
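
Put in frame-time terms (a back-of-the-envelope sketch using the figures quoted above, 384 FPS vs roughly 429 FPS):

[code]
def frametime_ms(fps: float) -> float:
    return 1000.0 / fps

delta = frametime_ms(384) - frametime_ms(384 + 45)
print(round(delta, 3), "ms per frame")  # ~0.27 ms difference per frame
[/code]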

>> No.79305298

>>79305248
>and get 20%-30% performance gain over best AMD can offer.
You've actually shown 0 performance gain, even in the artificial situations you cited, which nobody can even benefit from.

>> No.79305302

Got my 5900X at 4.8GHz with 64GB of 3800MHz CL16 RAM. The timings are 16-19-16-36-60 (tCL-tRCD-tRP-tRAS-tRC).

>> No.79305306

>>79305248
>intel.com
>cope harder
Intel.com? Sounds like malware.

>> No.79305481

Seems like the Intel employee has clocked out.
The blue evil has been vanquished once again, thanks for your help /g/
Bye

>> No.79305631

>>79305481
congrats on doing it for free

>> No.79305847

I'm still on an 8700K bought at launch (coming from a 2600K which still worked fine) and everyone in this thread is retarded.

Just buy what you want and let others get what they want.

Why does it matter to you if some guy you never met buys stuff only marginally worse than what they could get?
