MSI Z690 Tomahawk WiFi DDR4 Review: The Definitive Mid-Range Z690 Motherboard?


The MSI Tomahawk brand gained a huge reputation with its B450 AMD offering, at the time a good-value motherboard that supported high-end Ryzen CPUs at a lower cost. Each iteration since has grown in both price and features, and today we have the Z690 Tomahawk WiFi DDR4. This board makes the most of the Z690 platform's innate strengths with a wide range of connectivity, storage options and expansion slots, as well as a good foundation specification. But it's $300, and that's a lot of anyone's money.

We’ve used this board throughout our evaluation of the Intel i7-12700K, giving us insight into the performance and features of this motherboard, so let’s discover whether it’s the right board for your Intel 12th generation PC build.

Features and specification

The MSI Tomahawk sits right in the middle of the Z690 range. Overall this is a 'high-end' chipset, and most boards have four M.2 slots at PCIe 4.0, six SATA ports, plenty of USB connectivity, and strong voltage delivery circuitry.

The MSI Tomahawk is unremarkable in this regard, including all the features you’d expect to find.

Voltage Regulation: 16+2 phase VRM, 70A power stages
RAM: Four DDR4 slots, 'up to' 5200MHz compatibility claimed, 128GB max
Networking: 2.5Gbps Intel I225-V LAN controller; Intel Wi-Fi 6 + Bluetooth 5.2
USB: Seven USB-A ports on rear (three 10Gbps USB 3.2 Gen 2, two 5Gbps USB 3.2 Gen 1, two USB 2.0), plus one 20Gbps USB-C
M.2: Four M.2 slots (three PCIe 4.0 x4, one PCIe 3.0 x4)
PCIe Slots: Three full-length slots (CPU: PCIe 5.0 x16; chipset: PCIe 3.0 x4 and PCIe 3.0 x1), plus one short PCIe 3.0 x1 slot
Audio: ALC4080 codec, five 3.5mm audio jacks + S/PDIF out on rear panel
SATA: Six SATA 6Gb/s connectors
Fans and Cooling: CPU fan header (2A), pump fan header (3A), six system fan headers (1A)
MSI Z690 Tomahawk Review

There is a pleasing lack of conflicts: all M.2 and SATA ports can be used at the same time. Whilst the additional PCIe slots do not boast the fastest specification, they also do not conflict with any other ports or slots. This motherboard is not suited to multi-GPU setups (except in rendering or other non-bandwidth-intensive applications, where the PCIe 3.0 x4 slot is no hindrance to performance), but for all common usage, including gaming, production and general workloads, it's well configured and well appointed.

The features notably missing really relate to more focussed overclocking usage: there's a Flash BIOS button, but no clear-CMOS, power or reset buttons. A simple array of LEDs indicates boot problems (which we did not encounter at any stage) rather than a more informative segmented display. As for looks, there's no RGB at all on this motherboard, although there is a handy switch to enable or disable attached RGB devices. Really, these are features you wouldn't expect to find on a mainstream board, but at $300 we are looking for places MSI could have offered more value.

Our other slight criticism is the limited number of USB ports at the rear, and the fact that only a handful of them are high speed. Ideally, on a board at this price point, we'd like to have seen a couple more USB-A ports there.

Layout

The Tomahawk is pretty well laid out, maximising your ability to fit a wide range of additional cards and storage options into it.

The second main PCIe slot sits three slots below the primary GPU slot, meaning it's still accessible even with one of the larger latest GPUs fitted. The PCIe x1 slot does get obscured by any GPU thicker than two slots, though. The lowest PCIe slot is handily placed for an audio or network interface card, but be aware it runs at just PCIe 3.0 x1 bandwidth, so it's not suitable for high-bitrate capture cards, for example.
Our one misgiving is the slightly awkward SATA port placement: four of the ports sit at 90 degrees to the board, down in the lower corner. Whilst it's not a deal-breaker, it's not the tidiest of solutions; we prefer to see them flat along the edge of the board.

Everything else is conventionally and conveniently laid out with no major oversights or issues.

MSI Z690 Tomahawk Layout

BIOS

MSI uses 'Click BIOS 5', a relatively intuitive segmented BIOS with both 'easy' and 'advanced' modes. There's not much to say here apart from that the BIOS is functional and practical, and depending on your affinity and familiarity, you'll be able to do what needs to be done without too much searching around. That said, the Alder Lake CPU is complex, and there are a lot of options listed, not all of them intuitively titled. MSI claims a useful 'legacy gaming mode' which allows E-cores to be disabled using Scroll Lock. This can help in running some titles where anti-cheat or DRM software sees the unequal cores of Alder Lake as two separate systems and refuses to run. However, we were unable to get this feature working within Windows 10.

We also noted that after setting XMP the board likes to boot-cycle once before coming back up for a second boot with the settings and, presumably, memory training applied. This could be disconcerting for new builders but did not interfere with applying an XMP profile. Fan control is well laid out and easy to tweak. Memory overclocking is straightforward and assisted by a number of saveable presets. Whilst these can be backed up to USB, sadly they get wiped by a CMOS reset, which is a common occurrence during memory overclocking. At least once you've settled on a set-up, it can be saved and applied as you please.

Overall, amongst BIOSes, MSI is still the one we get on best with, but we acknowledge that this is likely down to familiarity with the layout and settings on offer.

No BIOS update or tweaking of TPM, GPT, boot modes or anything else was required to build this PC and install and boot Windows. Note that future BIOS updates are likely to remove the ability to enable AVX-512, following Intel's mandate that this 'quirk' be erased.

Performance

Performance was tested at stock, without enhancement or 'game boost' mode applied. This board adheres to Intel's power limit specification of 190W PL1 for long-term power limits, and the CPU performs accordingly. That is to say, it's bang on the performance of this CPU in any other board, including the more expensive MSI Z690 Carbon. Power limits can be removed entirely, although the CPU itself will still limit power draw.
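To make that power-limit behaviour concrete, here's a minimal sketch of how a PL1/PL2 budget works in general. This is an illustrative simplification, not MSI's firmware logic; the 190W figure comes from the paragraph above, and because Intel specifies PL1 equal to PL2 on the 12700K, the budget here never steps down.

```python
# Illustrative sketch of Intel-style power limiting (simplified): the CPU may
# draw up to PL2 in bursts, while a moving average of package power is steered
# back towards PL1 over a time constant tau. On the 12700K, PL1 = PL2 = 190W,
# so the budget is effectively flat and long-term performance matches any board.
PL1 = 190.0   # long-term power limit, watts
PL2 = 190.0   # short-term power limit, watts
TAU = 56.0    # averaging time constant, seconds (illustrative, board-dependent)

def allowed_power(avg_power: float) -> float:
    """Burst to PL2 while the running average is below PL1, else clamp to PL1."""
    return PL2 if avg_power < PL1 else PL1

def step_average(avg_power: float, draw: float, dt: float) -> float:
    """Exponentially weighted moving average of package power."""
    alpha = dt / TAU
    return avg_power * (1 - alpha) + draw * alpha

avg = 0.0
for second in range(10):
    budget = allowed_power(avg)
    avg = step_average(avg, budget, dt=1.0)
    print(f"t={second}s budget={budget:.0f}W avg={avg:.1f}W")
```

On boards that enforce a lower PL1 than PL2, the clamp to the long-term limit kicks in once the moving average catches up, which is where extended all-core workloads shed clocks.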

MSI Z690 Tomahawk vs Carbon Wifi

CPU overclocking was intuitive and relatively successful, with a 5.0GHz all-P-core clock easily applied along with a 50mV undervolt. This raised Cinebench performance by about 1,000 points to 23,000, with no detriment to stability.

MSI Z690 Tomahawk Performance

We spent considerably more time exploring memory overclocking and found this board a flexible and stable platform to tune our Samsung B-die test kit. 4000MHz CL16-16-16-36 was easily obtainable in Gear 1, without adjusting memory or IMC voltages. The features and flexibility of this motherboard, and of Alder Lake, impressed us, and with fast RAM being a big performance modifier it's nice to be able to optimise with ease.

Overall, we had absolutely no concerns over the performance of this motherboard. It’s an excellent pairing with an i7-12700K, it has the power headroom to run an i9-12900K, and there’s plenty of latitude for some overclocking too. 

Value

This motherboard is available in both DDR4 and DDR5 formats. It retails at $300 in the US, around €300 in Europe, and a more attractive £249 in the UK at present. The DDR5 version ruins any sense of value by requiring RAM kits that currently retail for at least the same amount again, and confers no real performance advantage over the DDR4 version with well-specified and readily available RAM.

We feel that this board marks a watershed price point for Z690. With the high performing and versatile i7-12700K costing $400, moving towards that price point for a motherboard does not feel like a sensible allocation of budget. From $200 upwards boards like the MSI Pro Z690-A, and Gigabyte UD or Gaming X provide a rock-solid platform for an i5 or i7 K series CPU.

As we approach $400, boards add few features beyond Thunderbolt connectivity that seem worthy of the price, and they also require DDR5 RAM. By that point, we're looking at a $400 or more premium for very little return in performance or functionality.

Therefore, given the feature set of the MSI Tomahawk, we feel that $300 marks the absolute threshold price for this motherboard. If it’s more expensive, look at better value alternatives. If it’s closer to $250 in bundles, deals or sales, then it represents good value.

Competition

MSI Pro Z690-A

In terms of MSI’s own products, the notable competitors are the MSI Z690-A Pro which sacrifices the Audio codec to an ALC 897, has a slightly weaker but still capable 14 phase VRM design, and makes do with fewer high-speed USB ports, but otherwise offers the same performance and feature set at around $80 less. The MSI Gaming Edge WiFi is $20 or so more and has a slightly more jazzy aesthetic, along with some onboard RGB and two of the rear panel USB ports are upgraded to the faster 10gbs spec. Otherwise, again, it’s identical.

Asus TUF Gaming Z690-Plus WiFi

The ASUS TUF Gaming Z690-Plus offers a slightly reduced specification, with key features on par with the MSI Pro Z690-A. There's a 14-phase VRM, four M.2 slots, just four SATA ports, and an array of PCIe slots consisting of two full-length slots, two x1 slots, and one x4-length slot. It's got just six rear USB-A ports and one USB-C, plus decently specified Realtek audio. It's a little light on fan headers for a board in this price range, with just three chassis fan headers in addition to the CPU and pump fan headers. It's a decent board and offers everything you need, but we feel the right price point is more like $250 than the current $300 asking price. The Gigabyte Aorus Elite is better specified and lower-priced, and the Tomahawk is better specified at the same money.

Gigabyte Z690 AORUS ELITE AX DDR4

Gigabyte Z690 Aorus Elite AX DDR4 – this board has an MSRP of $269 and offers a very well-rounded specification. The VRM is on a par with the MSI Tomahawk's, and it sports two more USB 2.0 ports at the rear. It has an ALC1220 codec with a cut-down three-port rear audio output configuration. There's some subtle RGB. Whilst early boards had some issues caused by the pre-release BIOS, these have now been rectified with the 'F6' BIOS update, and there's a BIOS Q-Flash function to allow easy updates. At a $30 saving over the Tomahawk, it's a compelling motherboard for a PC built around the i7-12700K CPU.

Conclusion

MSI Z690 Tomahawk Box Contents

Overall then, there’s a lot to like about the MSI Z690 Tomahawk. It gets most things right, is a great pairing with the i7-12700K, and makes full use of the most attractive features of the Z690 chipset. The question really comes about value, and whether you’re better off with a cheaper board with near-identical features and specifications like the Gigabyte Z690 Aorus Elite DDR4, Or perhaps even the Gaming X or MSI’s own Z690-A. Those are the boards we’d choose to pair with the i5-12600K in order to get maximum value. The Tomahawk meanwhile makes the most sense with an i7-12700 or as a cost-effective platform for an i9-12900K.

Pros:

  • All the features and connectivity offered by the Z690 platform
  • Discreet looks
  • Excellent power delivery and performance
  • Plenty of headroom for overclocking and tweaking memory settings
  • Well-rounded overall specification and connectivity

Cons:

  • Feels slightly expensive at $300
  • No RGB
  • Slightly compromised USB options on the rear panel

Intel Core i7-12700K Review: Alder Lake to the Rescue? Tested vs 5800X, i9-10850K and i9-11900K


Intel's new generation of CPUs was released last month, including the Core i7-12700K. We've been given one to test and review. In this article, we'll put it through its paces against the flagships from the last year to see how it measures up.

Intel has been lagging behind in the CPU wars for a couple of generations now. The 11th generation failed to challenge AMD's Zen 3 line-up, and the 10-core i9-10900K, now 18 months old, is the last true powerhouse Intel released.

To remedy this, Intel has rethought its CPU architecture, releasing the 12th generation, known as 'Alder Lake', with a hybrid design comprising powerful P-cores for performance and more efficient E-cores. This apes the 'big.LITTLE' CPU designs found on mobile devices, where efficiency is king but some high-performance cores are still wanted for demanding tasks.

This CPU is fabricated at 10nm, which should improve efficiency and lower power use. There are 8 P-cores on the 12700K, which have Hyper-Threading and can hit 4.9GHz all-core speeds, plus 4 E-cores, which clock at 3.9GHz maximum and lack Hyper-Threading. That makes this a 12 physical, 20 logical core CPU. Backing it up, it's got 25MB of L3 cache and Intel's UHD 770 integrated graphics, and the K suffix means this CPU is unlocked, so it can be tweaked for performance on Z690-chipset motherboards.
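As a quick sanity check on those core counts, the thread arithmetic works out as follows (a trivial sketch based on the figures above):

```python
# Core/thread arithmetic for the i7-12700K as described above: P-cores have
# Hyper-Threading (2 threads each); E-cores do not (1 thread each).
p_cores, e_cores = 8, 4

physical_cores = p_cores + e_cores          # 12
logical_cores = p_cores * 2 + e_cores * 1   # 20

print(f"{physical_cores} physical cores, {logical_cores} logical cores")
# -> 12 physical cores, 20 logical cores
```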

This CPU is vital for Intel to stamp their authority on the enthusiast CPU market, so we’re eager to find out what it can do.


Test methodology and System

We've taken great care to ensure this test is fair. To do that, we've controlled every variable that we can. All the synthetic and gaming results you'll see are obtained with the same RAM settings across the CPUs under test. We've tested using an MSI Z690 Tomahawk motherboard for the i7-12700K, a Z590 ROG Maximus XIII Hero for the 10th and 11th gen Intel CPUs, and an MSI B550 Mortar for the Ryzen 5800X.

For all the gaming and synthetic tests, we kept to Intel's specifications for multi-core enhancement, power limits, and Thermal Velocity Boost. We did this because, to our mind, it's comparable to how we've tested the 5800X using PBO. Both CPUs were allowed to perform as they do with minimal set-up, according to the manufacturer's intentions, but with the automatic optimisations in place. It's also the default behaviour of the MSI Z690 Tomahawk.

We verified this behaviour with A-B testing across a number of metrics, and with both our RAM settings and motherboard settings the results represent this CPU performing at its best, outside of more involved manual tuning or overclocking. RAM was set to 3600MHz CL16-16-16-32 in all tests except the specific memory tests.
We tested primarily with a Noctua NH-D15S cooler, but performance was also verified with an Arctic Liquid Freezer II 240mm AIO. Thermal throttling was not encountered in any of the tests presented in this review.
For the GPU we used the EVGA RTX 3080 XC3 Ultra, running test settings chosen to expose CPU performance as much as possible; this powerful and consistent GPU helped us do that.

So, let’s dig into our results!


1. Synthetic Tests

Cinebench R20

Cinebench R20 allows us to test multi-core or single-core performance whilst rendering a scene. It is almost entirely independent of memory speed, which allows us to isolate raw CPU performance.

Cinebench 12700K benchmarks

We conducted three runs and averaged the scores to obtain these results. The i7-12700K clearly brings its core advantage to this test, with 12 physical cores overwhelming the 10 cores of the 10850K. Running a single-core test demonstrates the performance of a single P-core: the score of 737 points is a clear 100 points above that of the other three CPUs under test. Our main regret here is not having a 12-core 5900X available for testing: no doubt it would be a close-run battle for the multi-core crown.


Blender

Using Blender to render a couple of scenes, we get a sense of the rendering performance of these CPUs. This test is highly multithreaded, using all cores to maximum capacity until the workload is complete. 

Blender 12700K benchmarks

Note that shorter bars are better, indicating less time taken. In this test we can see that for the 'Classroom' render, the i7-12700K completes the workload a full 100 seconds faster than the next fastest CPU, the Ryzen 5800X. In the shorter BMW27 test, the Alder Lake CPU is 30 seconds faster than the second-fastest CPU, the i9-10850K.

We feel obliged to point out that we're using this as a synthetic test of the CPUs; if you're actually looking to accelerate 3D rendering, an Nvidia GPU will complete the task in a fraction of the time of even the 12th gen Intel CPU here.

Clearly, the i7-12700K is very potent in multi-core workloads, with only the Ryzen 9 CPUs and the i9-12900K able to challenge it. It comfortably wins every test in this section.


3D Mark

Using 3D Mark, we focus on the CPU component of the Fire Strike and Time Spy benchmarks. These tests do bring memory performance into play somewhat, and also heavily favour higher core counts, as they're parallel tests that use all cores.

3D Mark 12700K

The i7-12700K stamps its authority on these tests as well, making significant gains over every other CPU on test. Just as in the other synthetic benchmarks, it’s the clear winner. 


2. Game benchmarks

We ran our gaming benchmarks at 1080p and high settings to isolate CPU performance as much as possible, but retained settings that are relevant in the real world. The RTX 3080 helps us see differences in underlying performance. 

Rainbow 6 Siege

Rainbow 6 Siege has an inbuilt benchmark which we’ve found very consistent.

R6 Siege 12700K benchmarks

In this benchmark, the i7-12700K turns its synthetic results into tangible performance gains, with 80FPS more than the 5800X, and more than 100FPS over the flagship Intel 10th and 11th generation CPUs.

Doom Eternal

Doom Eternal is also very well optimised and capable of high frame rates and we logged two minutes of play to give us these results:

Doom Et 12700K benchmarks

This test initially showed the Ryzen 5800X beating the 12700K by a small amount: an interesting result given the apparent single-core advantage of the Intel CPU. Brief analysis showed that Doom Eternal is one of the games that Windows 11 struggles with on Alder Lake, so a switch back to Windows 10 and a re-test showed the 12700K improving to the tune of 10FPS on average. At 380FPS, neither CPU is a slouch, but the 8-core Zen 3 CPU still holds its own here. This result also highlights the challenges of a brand-new platform and a new operating system – performance refinements will continue as the operating system matures and better allocates tasks on this complex CPU.

Shadow of the Tomb Raider

Moving on to more demanding titles, Shadow of the Tomb Raider's inbuilt benchmark has exceptional consistency and gives us a breakdown of CPU performance; it's those numbers we're looking at here, to completely isolate the CPU from GPU performance.

SoTR Game 12700K benchmarks

This test swings back in the i7-12700K's favour, with a clear 40FPS advantage over the other CPUs. Note we have isolated CPU performance here, so this isn't indicative of actual FPS, which will be GPU-limited.


Red Dead Redemption 2

Red Dead Redemption 2 is another strong showing for the Ryzen 5800X.

RDR2 12700K benchmarks

Again, it's surprising to see the Ryzen 5800X doing well against the 12700K, with just a few FPS in the new CPU's favour. It's possible we're finding the limits of even an RTX 3080 at 1080p ultra settings, and whilst lower settings might show wider gaps, we think it's more interesting to demonstrate how close these CPUs can be 'in the real world'. We re-ran this benchmark in Windows 10 and Windows 11 and found no appreciable performance difference, so this isn't a case of the operating system limiting the new CPU architecture.


Flight Simulator 2020

And finally, the game that places the biggest demand on CPU power here: Flight Simulator 2020. This benchmark comprises a three-minute flight from LaGuardia over Manhattan and delivers a stern test of the CPU. GPU utilisation stays under 70% here, and performance is ultimately dependent on CPU speed. We've omitted the i9-11900K, as recent game updates have invalidated older testing with that CPU.

FS2020 12700K benchmarks

Here the i7-12700K is again the best-performing CPU on test, using that spectacular single-core speed to deliver a 107FPS average. Note that core count doesn't matter here: you can restrict the 5800X or 10850K to 6 cores and obtain the same results. This test is all about cache size and single-core speed, and the 12700K has both in spades. We've got tonnes more in-depth testing on this game which will form a separate article, so if this sim is your focus, keep an eye out for that. As a spoiler, though: the 12700K is absolutely the best option for this simulator right now.


Gaming performance conclusions

Our game testing sees the i7-12700K either match or beat every comparable CPU in gaming. The Ryzen 5800X runs it pretty close in a couple of titles; in others, we see a commanding 10% or so FPS lead. We've purposefully run these tests at more representative settings, to demonstrate rather than overstate the differences you'll find between these CPUs.

Nonetheless, the result here is clear: at $400, the i7-12700K beats the Ryzen 5800X and the outgoing Intel flagships. Given what we know of the 5900X and 5950X, whose performance in games is largely dependent on the same single-core speed as the 5800X, they don't offer any compelling advantage in gaming except in a few specific titles.


3. Memory Speed Scaling

RAM is the hot topic of Intel's 12th generation, since depending on your choice of motherboard you can use either DDR4 or DDR5 RAM. The newer specification remains very expensive and hard to find, whilst its performance benefits outside of very specific tasks aren't clear-cut. We've tested with DDR4 RAM throughout this review: we feel it's what the bulk of people will choose this generation, particularly with the more sensibly priced i7-12700K.

However, the message persists that 'Intel doesn't scale with RAM speed like Ryzen', so we wanted to find out if the i7-12700K is sensitive to RAM speed.

To illustrate this, we ran the Shadow of the Tomb Raider Benchmark in a variety of speed configurations:

12700K Mem Scaling

These tests cover the spectrum from 'getting it wrong' with default JEDEC-specification RAM, such as you'd encounter if you failed to set XMP, through commonly available 3200MHz and 3600MHz CL16 kits, up to overclocked and somewhat optimised DDR4 at 4000MHz CL15-16-16-32 in Gear 1.

You can see there is relatively consistent performance scaling as RAM latency decreases, but it's not dramatic. We use Shadow of the Tomb Raider for this demonstration because it is responsive to RAM tweaking; many situations are not. Nonetheless, we can see that with a relatively affordable 3600MHz CL16 kit, we have the bulk of the performance on offer with minimal investment in both money and time. It remains our pick as the best RAM option for high-performance Intel CPUs into the 12th generation. That said, we found memory overclocking easy and fun on this platform: if you do want to tweak, we can recommend a high-performance B-die kit, and no doubt timings could be optimised significantly beyond those used to demonstrate this result.
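To put numbers on 'RAM latency decreases', here's a minimal sketch of the first-word latency arithmetic for kits like those tested. It ignores secondary timings and Gear mode, and the JEDEC kit's CL15 is an assumption based on the common DDR4-2133 specification:

```python
# First-word latency in nanoseconds: CAS latency in clock cycles divided by the
# memory clock. DDR transfers twice per clock, so clock = transfer rate / 2.
def cas_latency_ns(transfer_rate_mt_s: float, cas_cycles: int) -> float:
    clock_mhz = transfer_rate_mt_s / 2        # e.g. 3600 MT/s -> 1800 MHz
    return cas_cycles / clock_mhz * 1000      # cycles / MHz = us, x1000 -> ns

for rate, cl in [(2133, 15), (3200, 16), (3600, 16), (4000, 15)]:
    print(f"DDR4-{rate} CL{cl}: {cas_latency_ns(rate, cl):.2f} ns")
# DDR4-2133 CL15: 14.07 ns  (JEDEC fallback)
# DDR4-3200 CL16: 10.00 ns
# DDR4-3600 CL16:  8.89 ns
# DDR4-4000 CL15:  7.50 ns
```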

We have separate content coming expanding on this aspect of Alder lake CPU performance. 


4. Power and Thermals

Power draw, and the consequent heat output, has long been the cost of high performance on Intel's CPUs, so we ran tests to explore this on the i7-12700K. We opted for the popular NH-D15S cooler to examine the performance of a top-tier air-cooling solution on this CPU.

12700K power and thermal benchmarks

This CPU maintains the Intel-standard 190W PL1 for the duration of this test. Core speeds remain at 4.7GHz throughout, and the CPU did not throttle even in an extended 10-minute test. CPU temperature is maintained at a thoroughly manageable 79°C. We repeated this test with an Arctic Liquid Freezer 240mm AIO and obtained the same results – both coolers were plenty capable of handling this CPU at default settings.
We ventured into overclocking, adding 1,000 points to our Cinebench R23 score with a 5GHz P-core and 4GHz E-core target. The results came at the expense of a 240W power draw and temperatures in the mid-90s °C, despite a -50mV undervolt. If you do intend to overclock this CPU, we'd advise a 280mm or 360mm AIO as a minimum. That said, it was thoroughly manageable, and entertaining to see an Intel CPU respond positively to overclocking once again.


Who is this CPU for?

The i7-12700K suits a broad range of workloads and needs. It's the sweet spot for high-end gaming, content creation and computational workloads. Whilst the Ryzen 9 CPUs offer more physical cores, the times when those cores are brought to bear on most people's tasks are minimal. Meanwhile, the faster individual core speeds of the 12th generation help much more of the time, delivering higher FPS in gaming and snappier processing in Adobe apps and other tasks of that nature. The iGPU is also a bonus to many workloads, accelerating transcodes and transforms for video editors and digital artists.

The i5-12600K is a very valid option at around $100 less for those on a budget, or for gamers who don't need 8 P-cores. The i9-12900K adds 4 more E-cores and remains the preserve of the high-end enthusiast. Most people will be better off saving money with an i7-12700K and buying a better GPU, more SSD space or more RAM.

The imminent release of the non-K CPUs also looks compelling. The first tests of the i7-12700 show it performing incredibly close to the K variant: it may well be a sensible choice to keep budgets in control. Meanwhile, the i5-12400 looks set to become the new budget gaming champion, eclipsing the performance of the Ryzen 5600X in a $200 product.

AMD is now left somewhat out in the cold. Whilst the platform costs of the Zen 3 CPUs are lower, the 5800X at $400 still makes little sense against a $400 i7-12700K, and at $300 the i5-12600K matches or outperforms it and offsets the higher motherboard cost. AMD has a response in the pipeline for early 2022 with the 'stacked V-Cache' version of the 5800X, the 5800X3D, so it will be interesting to see how far the extra 64MB of stacked cache can close the performance gap. The Ryzen 9 CPUs are still significantly more expensive, and their core counts don't help most users nearly as much as the faster cores of Intel's 12th gen. You need a very specific workload for a Ryzen 9 to be the best choice of CPU right now.

However, if you're sitting there with an Intel 10th generation or Ryzen Zen 3 CPU, I wouldn't take the hype around this release as a cue to upgrade. This CPU is a good step forwards, but it's not enough of a leap to warrant a platform change from those relatively recent and still high-performance CPUs, unless you're suffering poor performance due to CPU limitations.


Conclusion

i7-12700K Thumb Art

In conclusion, it has been nice to be impressed by an Intel CPU. The i7-12700K is an absolutely storming CPU and excels across a range of workloads, from heavily multithreaded productivity tasks to gaming. This i7 happily beats the last two flagship Intel CPUs, and its multithreaded superiority is challenged only by the Ryzen 9 CPUs and the current flagship i9-12900K.

This generation has righted many of the wrongs of the 11th generation: power draw and temperature are once again sensible, and performance is outstanding. Where the i9-11900K felt like you had to work to extract performance from it, the i7-12700K willingly demonstrates its prowess.

This CPU does many things right, and for most people looking to build a PC now, this or the i5-12600K is the right choice. However, if these CPUs and the accompanying $200+ Z690 motherboards push you over budget, keep an eye out: early 2022 will see the value options become available, the i5-12400 and i3 parts based on this platform, as well as more affordable B660 motherboards. On the evidence of these flagship CPUs, and given the dearth of budget AMD options at the moment, we should see Intel regain a dominant position in the CPU market.

Samsung Odyssey Neo G9 vs Odyssey G9: What are the Key Differences?


Samsung has been an innovator in the monitor sphere for years now, pushing the boundaries of both professional and gaming monitors. It offers monitors across the spectrum of budgets, but its best work comes at the high end. While offering some of the most expensive monitors we've ever seen, Samsung has also pushed gaming monitors to new heights. This was true of the original Odyssey G9, and the new Odyssey Neo G9 continues the tradition. Both monitors feature an incredible 49", 5K ultrawide screen with everything a gamer could need: a 240Hz refresh rate, 1ms response time, plenty of connection points, picture-by-picture – both offerings are simply top of the line. So, what are the differences between the two?

The first and most notable difference between these screens is their LED structure. The newer Odyssey Neo G9 is a Quantum Mini-LED monitor. As the newest panel technology on the block, mini-LED has received plenty of hype, and it generally lives up to it. Featuring massively better contrast ratios, virtually no backlight bleed, and better HDR, it offers one of the most significant changes to panels since QLED. Speaking of which, the original Odyssey G9 uses that slightly older QLED technology for its massive panel. To be clear, it's not that this technology is bad: by all accounts and metrics, the G9 is still a cutting-edge monitor. The release of the Neo G9 could easily be thought of as Samsung simply flexing its technological muscles.

Neither of these monitors offers any compromise, and they are certainly not budget options. In fact, they are two of the most expensive monitors on the market, both retailing at $2,500. The price of the Odyssey G9 has decreased slightly since the release of the Neo G9, but it will still run consumers over $2,000. So what justifies the extravagant price tag? And, more importantly, does the release of the Samsung Odyssey Neo G9 make the Odyssey G9 defunct?


Specification Comparison

Specification: Odyssey G9 | Odyssey Neo G9
Panel Type: QLED, 1000R curve | Mini-LED, 1000R curve
Response Time: 1ms | 1ms
Refresh Rate: 240Hz | 240Hz
Static Contrast: 2,500:1 | 1,000,000:1
Brightness: 420 cd/m2 | 420 cd/m2
Resolution: 5120 x 1440 | 5120 x 1440
Screen Size: 49" | 49"
Adaptive Sync: NVIDIA G-Sync, FreeSync Premium Pro | NVIDIA G-Sync, FreeSync Premium Pro
Price: $2,499 | $2,499

1. Samsung Odyssey G9

Samsung Odyssey G9

The original Odyssey G9 was Samsung's first foray into combining its professional and gaming monitor technology into one beautiful package. It succeeded, putting forward an incredible 49" screen packed with just about every feature someone could think of. The QLED panel gets bright and features a 5K resolution, 240Hz refresh rate, and 1ms response time. Putting together a rig that can take full advantage of this screen is already a monumental task, so for buyers at that level the $2,500 price might not make much of a dent.

One of the only weak points on this monitor is the contrast ratio. With a static contrast ratio of 2,500:1, it offers fine functionality, but plenty of room for improvement. Users largely report that this is noticeable during scenes or games with deep blacks or lots of shadows. While the peak brightness of 1,000 nits can mask some of the problem, it's not the optimal solution, and certainly not usable long-term. The other detriment is backlight bleed, which is when an LED or QLED panel 'leaks' light from the sides due to uneven lighting. While not present on every unit of the Odyssey G9, it is a common complaint among buyers, and one that feels cheap given the hefty price tag.


There's no gamer out there who could complain about the speed of this monitor, however. It's rare to find 4K panels featuring 1ms response times and 240Hz refresh rates; the Odyssey G9 offers both in 5K. The 5K resolution of 5120 x 1440 is equal to running two 1440p monitors side by side. The panel is also NVIDIA G-Sync and AMD FreeSync Premium Pro certified, for buttery smooth frames regardless of your GPU manufacturer.
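That 'two 1440p monitors side by side' claim is straightforward to verify with the pixel counts; a trivial sketch:

```python
# 5120 x 1440 is exactly twice the pixel area of a 2560 x 1440 (QHD) panel.
g9_pixels = 5120 * 1440
qhd_pixels = 2560 * 1440
print(g9_pixels, qhd_pixels, g9_pixels / qhd_pixels)  # 7372800 3686400 2.0
```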

Everything Else

The 1000R curvature is significant, but not overbearing. This is likely due to the size of the monitor; on a smaller screen, the dramatic curve could easily be too much. Instead, it works to draw your eyes in and immerse you in the gameplay. Alternatively, the screen easily splits into the equivalent of three normal-sized panels. This makes it great for working or playing games while catching up on some Netflix at the same time (or all three!). Samsung’s also included picture-by-picture and picture-in-picture, allowing you to show two different sources at once.

Port selection is more than enough for the screen to serve as the hub of an entertainment center: two DisplayPorts, one HDMI, one USB hub, and two USB ports. It also includes a 100 x 100mm VESA mount option for taking it off the stand and clearing up some desk space.

Samsung includes what it calls Infinity Lighting, an LED ring on the back of the monitor. You can change the color in the monitor's settings, or turn it off if RGB isn't your style.


2. Samsung Odyssey Neo G9

samsung odyssey g9 neo

The Samsung Odyssey Neo G9 seeks to improve on everything the original Odyssey G9 introduced. Almost everything remains the same, bar one key feature: as mentioned before, the biggest difference between the two monitors is the new mini-LED panel on the Neo. Among other things, it dramatically improves the contrast ratio and solves the backlight bleed problem that was common on the original.

The static contrast ratio on the Neo G9 is 1,000,000:1. To appreciate how much of an improvement that is, the original's static contrast ratio is 2,500:1. Deep blacks and proper color notes are basically guaranteed on this monitor. The sharp increase in contrast is thanks to the mini-LED technology, which changes the way the LCD panel is lit: because the LEDs are smaller, more of them fit, giving the monitor finer control over local brightness. This translates to brighter highs and darker lows while simultaneously fixing the backlighting problem.
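As a rough illustration of what those ratios mean in practice, here's a minimal sketch assuming both panels run at their rated 420 cd/m2 full-screen brightness (real black levels also depend on local dimming behaviour):

```python
# Black level = white luminance / static contrast ratio. At the same white
# level, a higher contrast ratio means a proportionally darker black.
white_nits = 420.0  # rated brightness for both panels

for name, contrast in [("Odyssey G9 (QLED)", 2_500), ("Neo G9 (mini-LED)", 1_000_000)]:
    black_nits = white_nits / contrast
    print(f"{name}: black level ~{black_nits:.5f} nits")
# Odyssey G9 (QLED): black level ~0.16800 nits
# Neo G9 (mini-LED): black level ~0.00042 nits
```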

Notably, the better backlighting from mini-LEDs also improves HDR on the monitor. Samsung's Quantum HDR 2000 uses the new brightness to improve colors while retaining the 240Hz refresh rate and 1ms response time. In a market with plenty of HDR options that disable the advanced gaming features, it's a fantastic addition that genuinely improves the feel of using the display.

Most other parts of the Neo G9 resemble the original. The connection ports have been slightly upgraded to include a headphone jack, and the Infinity Lighting system can now sync with your monitor to match the on-screen action. A feature called Auto Source Switch+ automatically detects when new devices connect and switches input, letting you get into the action quicker. It's a nice upgrade for certain users, but certainly not the star of the show.

Perhaps what's most impressive about the Neo G9 is that it retails for the same price as the original Odyssey G9. Mini-LED technology is relatively cheap to produce, so Samsung was able to keep the same price point while fixing the small issues with the original. This is fantastic news for consumers, both now and in the future as mini-LED technology becomes more common.


Final Verdict – Samsung Odyssey Neo G9

samsung odyssey g9 neo

The Odyssey Neo G9 offers a rare strict upgrade over the Odyssey G9. Taking a flagship monitor, fixing the only complaints people had about it, and releasing it at the same price is a notable move for a company of Samsung's size, and it should be commended. It would likely have been easy for them to increase the price even further and claim the monitor as part of the future. Instead, they left it at the exact same MSRP, leaving only one question behind: what happens to the original Odyssey G9 now?

Truthfully, I expect the Odyssey G9 to fall out of production and stock quickly. Unless we see dramatic price drops, there seems to be simply no reason to choose it over the Neo. While some drops have already started, the monitors are still within a few hundred dollars of each other. That would be notable at lower price points, but users will end up paying at least $2,200 anyway – and is far better contrast with no backlight bleed worth the difference? For most users, the answer is a resounding yes.

With that said, if the price of the Odyssey G9 drops below $2,000, there is certainly a conversation to be had there. It is still a blazing fast monitor that offers incredible specs, after all. The only reason it’s not at the top of most lists of the best monitor anymore is the release of the Neo G9. Keep an eye out for deals if you’re in the market for a massive workstation replacement.


Relevant Guides

Want to see how the Samsung Odyssey Neo G9 lines up against other great ultrawide monitors? We've got you covered with comparisons against other heavyweight contenders.

Nvidia RTX 3080 Ti Review: Top Flight Gaming, But At What Cost?



In June, Nvidia released several new GPUs, including the RTX 3080 Ti. This high-end GPU uses the same GA102 core as the RTX 3090 and RTX 3080 that bracket it, paired with 12GB of GDDR6X VRAM. It offers an absolutely top-drawer experience, but can it possibly justify the price tag?

In this review, we’ve pitted it against the RTX 3080 and the AMD RX 6800 XT, as well as the top tier last-generation Nvidia card the RTX 2080 Ti to find out what it offers.

1. Specification Comparison

GPU: RTX 3080 Ti | RTX 3080 | RTX 3090 | RX 6800 XT | RTX 2080 Ti
GPU Core: GA102-225-A1 8nm | GA102-200-KD-A1 8nm | GA102-300-A1 8nm | Navi 21 7nm | TU102-300A-K1-A1 12nm
Shader Units: 10240 | 8704 | 10496 | 4608 | 4352
RTX Cores: 80 | 68 | 82 | 72 (AMD 1st gen) | 68 (Nvidia 1st gen)
Tensor Cores: 320 | 272 | 328 | – | 544 (1st gen)
VRAM: 12GB GDDR6X | 10GB GDDR6X | 24GB GDDR6X | 16GB GDDR6 | 11GB GDDR6
VRAM Bus Width: 384-bit | 320-bit | 384-bit | 256-bit | 352-bit
Pixel Rate: 186.5 GPixel/s | 164.2 GPixel/s | 189.8 GPixel/s | 288.0 GPixel/s | 136.0 GPixel/s
Texture Rate: 532.8 GTexel/s | 465.1 GTexel/s | 556.0 GTexel/s | 648.0 GTexel/s | 420.2 GTexel/s
TDP: 350W | 320W | 350W | 300W | 250W
Price (MSRP/Actual): $1,199/$1,500 | $699/$1,200+ | $1,499/$2,000+ | $649/$1,000 | $999/$600 (used)

VRAM

Looking at the key specifications, we can see how closely the RTX 3080 Ti matches the RTX 3090. The principal difference is the halving of VRAM capacity, from 24GB to 12GB. This is still ample for gaming, but it significantly reduces the cost of parts, with the Micron/Nvidia-exclusive GDDR6X costing around $100 per 10GB. It also reduces power draw, with VRAM power consumption topping 100W in the RTX 3090. The card uses the same 384-bit bus, providing very high-bandwidth access to VRAM, and this is the real reason for the slight increase to 12GB over the 3080's 10GB: the wider bus requires 12GB of VRAM, or multiples of that.
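The bus-width constraint is simple arithmetic. A minimal sketch, assuming the standard 32-bit-per-chip GDDR6X interface and 1GB chips (the 3090 reaches 24GB by running two chips per channel in clamshell mode):

```python
# Each GDDR6X chip connects over a 32-bit channel, so a 384-bit bus needs
# 384 / 32 = 12 chips: 12GB with 1GB chips (3080 Ti), or 24GB doubled-up (3090).
BITS_PER_CHIP = 32

def vram_options_gb(bus_width_bits: int, chip_density_gb: int = 1):
    chips = bus_width_bits // BITS_PER_CHIP
    return [chips * chip_density_gb, chips * chip_density_gb * 2]

print(vram_options_gb(384))  # -> [12, 24]  (RTX 3080 Ti / RTX 3090)
print(vram_options_gb(320))  # -> [10, 20]  (RTX 3080's 10GB)
```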

Cores & Shader Units

The core itself loses just 256 of over 10,000 shader units versus the RTX 3090, along with 8 Tensor cores and two RTX cores. This is a near-identical specification to the RTX 3090, which indicates that it should perform very similarly too.

Of the other important specifications, we can't compare shader units against the AMD card or the last-generation RTX 2080 Ti, as they're different architectures, and the same goes for ray tracing cores. The RX 6800 XT posts impressive theoretical fill rates, but from testing we know it matches the RTX 3080 incredibly closely in rasterised gaming performance.
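For reference, the fill-rate rows in the table are derived values: unit count multiplied by boost clock. A quick worked example for the RTX 3080 Ti, using its published 112 ROPs, 320 texture units and 1665MHz reference boost clock (nominal figures, not measured in-game clocks):

```python
# Theoretical fill rates = functional unit count x boost clock.
boost_ghz = 1.665   # reference boost clock, GHz
rops = 112          # render output units
tmus = 320          # texture mapping units

pixel_rate = rops * boost_ghz     # ~186.5 GPixel/s, matching the table
texture_rate = tmus * boost_ghz   # ~532.8 GTexel/s, matching the table
print(f"{pixel_rate:.1f} GPixel/s, {texture_rate:.1f} GTexel/s")
```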

Pricing

Finally, we come to pricing, and this is really where the controversy lies. The RTX 3090 was criticised for being too expensive at $1,499, and not worth it for gaming, where the 24GB of VRAM goes unused. Then, of course, everything went crazy, and the 3090 became a veritable money-printing machine thanks to its Ethereum mining capability.

$1,500 doesn't sound so bad when a card can earn $10 a day, but then of course prices rose to account for that, with cards at well over $2,000 at retail and on the second-hand market.

The RTX 3080 Ti launched at a nominal $1,199 price point, but retail prices immediately climbed past $1,500, except for the very few Founders Edition cards where retailers were bound to honour Nvidia's pricing. So what we're looking at here is a card retailing at around $1,500 at this time. And they're all 'Low Hash Rate' cards, so you can't mine as efficiently during downtime to recoup some of the cost.

You can make a persuasive argument that no ‘gaming’ GPU is worth that, but that’s something we’ll consider after looking at the benchmark results. 

2. Benchmarks

We've divided the benchmarks up game by game, and run all resolutions so you can focus on what's most relevant to you. We'll highlight at this point that none of the cards in this test should be run at 1080p (it's simply a waste of their potential), but the numbers are there anyway.

Test Bench

We've maintained the same test bench of a Ryzen 5800X, a B550 motherboard, and 16GB of 3600MHz CL16 RAM with Infinity Fabric and memory clock set 1:1. We ran a Fractal Design Ion+ Platinum 860W power supply to ensure adequate power. This is a high-performance system, with the 5800X the equal of any CPU available right now in terms of gaming performance. It's optimised with good RAM speed, but not overclocked beyond PBO being enabled.

We want this test bench to represent the kind of system this GPU would actually be used with. In keeping with this, we run games at representative 'high to ultra' settings to show the kind of performance you can actually expect in-game. Simply cranking all settings to ultra often misrepresents a GPU's actual performance by overburdening either it or the CPU with settings that haven't been optimised, trashing performance for little visual gain.

Synthetic benchmarks

First, looking at synthetic benchmarks through 3DMark: Fire Strike is the DirectX 11 test and renders at 1080p. The 6800 XT excels here, and the RTX 3080 Ti still can't beat its score, giving away nearly 5,000 points. However, it does hold a clear performance margin over the RTX 3080, which sits 6,000 points behind it, and the RTX 2080 Ti is over 10,000 points behind the 3080 Ti overall.



Time Spy shows the 3080 Ti leapfrogging the RX 6800 XT in this DirectX 12-based 1440p graphics test, which is more representative of current games. It's 1,500 points ahead of the AMD card, 2,500 ahead of the RTX 3080, and more than 5,000 points ahead of the RTX 2080 Ti.

Finally, to test ray tracing performance, we can take a quick look at the scores in Port Royal. Here the RTX 3080 Ti uses its 12-RT-core advantage to romp home 2,000 points above the RTX 3080, and 4,000 points ahead of both the RTX 2080 Ti and the RX 6800 XT. It's the clear winner in this test.

Gaming Benchmarks

Call of Duty: Warzone

Warzone is first up. This is tested by running a five-minute battle royale against bots and logging metrics. The recent update knocked performance back about 15% across the board, and we've had to omit the RX 6800 XT as we no longer have it available for testing – it performed near-identically to the RTX 3080, so please take that as a proxy.



Warzone proves itself a stern test of both CPU and GPU, and can't generate the very high FPS that some other shooters can. The 3080 Ti only marginally outperforms the 3080 at 1080p, scoring a 221FPS average against 213FPS. At 1440p there's again only a 10FPS difference, 180FPS to 170FPS, which isn't in keeping with the on-paper specification difference. At ultrawide 1440p we see a slightly wider gap, proportionally, with a 16FPS difference. The RTX 2080 Ti is 30FPS behind throughout. And finally, at 4K, the RTX 3080 Ti posts just over 100FPS at 110, whilst the 3080 makes 96FPS. Overall, in Warzone we don't see a performance gap commensurate with either the specifications or the pricing of these GPUs.

Rainbow 6 Siege

Rainbow 6 Siege runs much faster across the board, and again re-testing constraints mean we omit the RX 6800 XT here. At 1080p, 1440p, 1440p ultrawide and 4K, the 3080 Ti posts about a 10% uplift versus the RTX 3080. There's no yawning gap in performance here, just a few more frames.



Doom Eternal

Doom Eternal uses the Vulkan API and is well optimised, and here we can compare the RX 6800 XT, which performs well at lower resolutions. The RTX 3080 Ti has a more commanding lead over the RTX 3080 in this title, particularly at higher resolutions. At 1440p it holds 337FPS versus 273FPS for the RTX 3080, and at ultrawide it manages 266FPS to the RTX 3080's 238FPS. At 4K the RTX 3080 Ti manages 186FPS in our testing, with the RTX 3080 and RX 6800 XT tied at 160FPS.

Red Dead Redemption 2

Moving on to the AAA titles in our test suite and looking at Red Dead Redemption 2, the RTX 3080 Ti again tops the charts, but not by a huge amount: just 10FPS separates it from the RTX 3080 across the board, from 1080p to 4K.


Shadow of the Tomb Raider

Shadow of the Tomb Raider has always shown good scaling with hardware and isn't particularly CPU-limited for the bulk of the benchmark run, although the final village scene is, providing a good overview of system performance. Here it's no different, with the 3080 Ti holding a good 20% advantage over the RTX 3080: 40FPS faster at 1080p, twenty FPS at 1440p and 1440p ultrawide, and 17FPS better at 4K. Those are fairly impressive steps up in isolation.


Flight Simulator 2020

This demanding but gorgeous simulator delivers a cautionary tale. Our custom benchmark is designed to fully tax CPU and GPU with a low-level, three-minute AI-controlled flight over Manhattan. We've shown results for both average performance and 1% lows here to better illustrate the results: this GPU is NOT the performance saviour for Flight Sim 2020. You can see that this game is CPU-limited with all of these GPUs at 1080p, 1440p, and 1440p ultrawide. Only at 4K does the RTX 3080 Ti pull ahead, but even then it's matched by the RTX 3080, and we're STILL CPU-limited to around a 48FPS average. The long and short of it is that, despite its reputation, Flight Sim actually isn't GPU-dependent: you need a top-flight CPU to make this game run well.

3. Ray Tracing: A Subjective Assessment

Looking at ray tracing performance, this is more of a subjective assessment of the experience. That's for a few reasons. First is the hand-in-glove nature of RTX and DLSS, with the upsampling technology giving a massive boost to performance while also allowing you to tweak settings to your preferred balance of fidelity against frame rate. Second is the fast-evolving nature of RTX implementations in games. Games like Control and Metro Exodus Enhanced Edition really do show this feature off well, with naturalistic lighting and well-judged effects. We've been playing the enhanced Metro Exodus at 1440p ultrawide with RTX on but no DLSS, and play is fast, fluid and utterly gorgeous – but you'd hope that to be the case with a range-topping GPU. The long and short of it is that this GPU offers one of the best gaming experiences currently available utilising these technologies from Nvidia, and that's as you'd expect.

Taking a quick look at temperatures and power draw: running default settings and logging metrics through a Time Spy run, to give a load representative of gaming, we see the 3080 Ti draw around 390 to 400 watts under load. Temperatures on this FTW3 card remain acceptable, with the core reaching around 75°C and the GDDR6X memory junction at 86°C. This is pretty good; under heavy load we can expect core temperatures to reach 95°C, and GDDR6X will run as hot as 105°C under continuous heavy loads or when airflow is restricted. Many owners resort to modifying their cards with thermal pads to transmit heat away from the VRAM and into the backplate.

Overall, the power draw of this card will demand a very capable power supply, and you may want to investigate undervolting to keep power draw and temperatures lower as well. Comparing it to the RTX 3080, which draws around 340W, this card consumes around 20% more power for 10% or so more performance – not a great result, comparatively speaking. A good-quality 750W power supply should be considered a minimum for this card; we did test it with a high-quality 650W power supply, the Antec EarthWatts Gold, and it forced system shutdowns on a few occasions.
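As a rough way to sanity-check PSU sizing for a card like this, here's a minimal sketch. The component figures are illustrative assumptions rather than measurements from our bench:

```python
# Rough PSU sizing: sum worst-case component draw, then add headroom for
# transient spikes, capacitor ageing and efficiency. All figures are assumptions.
components_w = {
    "RTX 3080 Ti (peak observed)": 400,
    "CPU under gaming load":       150,
    "Motherboard/RAM/SSDs/fans":    75,
}

total = sum(components_w.values())
recommended = total * 1.2   # ~20% headroom

print(f"Estimated load: {total}W, recommended PSU: >= {recommended:.0f}W")
# -> Estimated load: 625W, recommended PSU: >= 750W
```

That lines up with the 750W recommendation above, and helps explain why a quality 650W unit sat right on the edge in our testing.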

Conclusions: Can you justify the cost of the RTX 3080 Ti?

What we inevitably come to is the question of value: is this GPU worth the $1,500 it's currently retailing for? The answer is an unequivocal no. It's simply impossible to justify the price of this GPU on performance grounds. You lose virtually nothing by opting for an RTX 3080 instead and lowering just a few settings for an equivalent experience. It has all the same features and capabilities, and it uses much less power. The trouble, of course, is that RTX 3080s aren't readily available at anything close to MSRP.



So we're left with a couple of ways to look at this. First, you could criticise Nvidia for releasing a marginally better product at a substantially higher price – that happened with the RTX 2080 Ti as well, and it still sold well. This all stems from the first GPU shortage in 2017, when GTX 1080 Tis were changing hands for well north of $1,000. That set a precedent and sent clear signals that enthusiasts (or desperate gamers) would out-bid miners to get their hands on the current best-in-class GPUs, and Nvidia stopped being shy about the four-figure threshold for its flagship GPUs. Some people see this as being ripped off; others see it as just a function of market forces. The bad feeling originates from the fact that Nvidia is exploiting market conditions to elevate the price of this GPU by perhaps $200. Remember the cost of GDDR6X VRAM, and how halving the quantity likely saves $100 or more in parts cost alone? That 'saving' clearly isn't being handed on to the customer here.

You can also look at products like the RTX 3080 Ti as luxury goods: they clearly are. But like an expensive watch, car, or handbag, the price isn't justified in any way by the features of the product. They're prestige items, as much about proving you can afford 'the best' as about actually needing the performance. Viewed like this, objective metrics break down: no one cares that the latest limited-edition Porsche is just 0.1 seconds faster to 60 for an additional $50,000. They just want the fastest Porsche, and there will still be a waiting list to buy it.

How you feel about this likely comes down to your own personal assessment of value. It's absolutely gutting at the moment that products like this exist when there are no affordable options for gamers. If this card existed alongside $450 RTX 3060 Tis and $700 RTX 3080s, it wouldn't feel like such an egregious situation. It's the fact that people feel compelled to spend this amount just to get a card that leaves a hint of exploitation in the air. In short, you should only consider the RTX 3080 Ti if money literally isn't a concern for you – in which case, presumably, the RTX 3090 is also in reach. But that 24GB of VRAM is still wasted on games.

I absolutely love the way this card performs in VR, in the most demanding titles, at high resolutions. I absolutely hate the price and the state of the market. Hopefully the market will correct in time, and we are seeing signs of that already. So unless you absolutely need a card of this calibre right now, my advice would be to wait – prices are only going to come down from here.

What is GeForce Now? Is it worth it?


GeForce NOW is a service from NVIDIA that lets users stream their library of games to most devices. With subscription services for games becoming more and more popular – such as the growing PlayStation Now and Xbox Game Pass options – GeForce NOW is up against some stiff competition. While certainly not a service for everyone, there are some definite upsides to the offering.

GeForce NOW is cloud-based, allowing you to stream gameplay straight from NVIDIA’s servers to your device. Notably, the service extends to:

  • Laptops
  • Desktops
  • Macs
  • Android Devices
  • SHIELD TV

Most device types are well covered (sorry, iPhone users), which makes the service genuinely versatile. Unlike most other services, GeForce NOW does not come with its own library of games. Instead, you connect your own library from platforms such as Steam, Epic Games, and Uplay.

Additional quality-of-life features like cloud saves and high-quality streaming breathe some life into the service as well. While it seems like a great option on the surface, it is not for everyone. Let’s take a look at the benefits and drawbacks to see if it is worth it for you.

The Benefits of GeForce Now

GeForce Now serves a select group of people. If you have a well-established library of games and are looking for a way to play them on different devices, it may just be the best option on the market. However, if you are looking for instant access to games, you should look elsewhere. The only games GeForce Now comes with are free-to-play titles – you have to supply everything else.

The main benefits of the service include:

  • 1080p, 60 FPS live stream playing of your games
  • Support for most of your existing library
  • A free option

Almost all reviews of GeForce Now are encouraging to potential users. So long as you have access to a high-speed internet connection, games stream smoothly and with little input lag to whatever device you wish. Streaming at up to 1080p and 60 FPS is good enough for most users; if you want to game at higher resolutions or frame rates, you will have to fall back on your main device.
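
As a rule of thumb, you can sanity-check your connection before subscribing. The thresholds in this sketch are our reading of NVIDIA’s published recommendations at the time of writing – treat them as assumptions and check the current figures:

# Rough connection check for cloud game streaming. The Mbps values
# are assumptions based on NVIDIA's published recommendations at the
# time of writing - verify against the current requirements.
RECOMMENDED_MBPS = {"720p60": 15, "1080p60": 25}

def can_stream(measured_mbps: float, tier: str = "1080p60",
               headroom: float = 1.2) -> bool:
    # 20% headroom leaves room for other traffic on the connection
    return measured_mbps >= RECOMMENDED_MBPS[tier] * headroom

print(can_stream(50.0))            # True: plenty for 1080p60
print(can_stream(26.0))            # False: too marginal for 1080p60
print(can_stream(26.0, "720p60"))  # True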

Notably, GeForce Now does not support every game in your library. Battle.net, for example, used to be supported but no longer is. NVIDIA maintains a list of supported games on its website – you can Ctrl+F to search for your favorites. While most popular games are covered, it’s worth checking before dropping any money on the service.

Of course, you can also just hop directly into the free option. NVIDIA offers a free tier with standard access to anyone interested. The only caveat is a 1-hour session length time limit. Once you have hit that, NVIDIA bumps you off the service temporarily to make room for other users.


Integration With Most Systems

The main benefit of GeForce Now is being able to play your games across devices that normally could not run them. Pulling up Pathfinder: Kingmaker or Far Cry 5 on your smartphone or aging laptop is a truly unique experience. Plus, the service includes multiplayer and controller support. With a good connection, most games feel exactly like you are playing them on a high-end computer.

GeForce Now is all about flexibility and expansion. If you spend most of your time gaming at one high-end computer, you are better off saving your money and putting it toward new games. Otherwise, enjoy access to AAA titles from whatever device you want.


Where GeForce Now Falls Flat

GeForce Now does not really fall flat in any one area. Instead, its main downside is simple: it serves a niche purpose. There are many PC gamers – maybe even most PC gamers – who will see very few benefits from the service. Unlike others on the market that expand your library or offer a fresh selection of rotating games, GeForce Now just aims to let you play your games in new places.

For what it offers, it may be a bit too expensive. This is especially true when you consider that the service used to offer a $4.99-per-month tier, since removed from the website. In some ways, NVIDIA seems unsure where to take GeForce Now. The market it serves is happy with the service, but likely not growing.

Otherwise, the biggest fear is games losing support for GeForce Now. All of Battle.net’s offerings used to be available on the service before Activision Blizzard pulled out a few years ago. While not a death blow to GeForce Now, it is a sober reminder that publishers can leave whenever they wish – much like on any other streaming service.


Pricing GeForce Now

Most people who enjoy the GeForce Now service will benefit from upgrading to the Priority tier. At $9.99 per month or $99.99 per year, it comes with a few quality-of-life benefits. These include:

  • Priority access to gaming servers
  • Extended session lengths
  • RTX availability while streaming

For most, having no time cap on gaming sessions will be the biggest benefit. RTX support is nice for making games look even better, but it will once again raise the required internet speed. Priority access is also welcome, as some users report spending up to 20 minutes in the queue on the free plan.

We recommend trying GeForce Now for free for a short while before upgrading. Unlike most services, the benefits you get at the Priority tier are not all that exclusive: you have access to the same games and (almost) the same quality of streaming for free.

If you are enjoying having access to your games across multiple devices and want to do it without time limits, the upgrade is absolutely worth it. Notably, NVIDIA has run sales on GeForce Now in the past. If you are patient, it may be worth waiting for one.

Still, even without a sale, those looking to play high-end games on lower-quality devices will find the $10 per month fee worthwhile.


Final Thoughts

GeForce Now is a great option for those looking to expand the range of devices they can play their games on. Popping open a new AAA title on your smartphone or old desktop and streaming it in high quality is a fantastic experience – and one very few other services replicate. Relying on your own library may be either a benefit or a drawback, depending on how many games you own.

Before buying GeForce Now, we recommend trying out the free version. You will quickly get a feel for whether it is for you. Remember that a strong internet connection and access to your own games are prerequisites for enjoying the service.


Intel Core i9-11900K Review: Intel’s Last Stand | Performance Analysis vs 5800X vs 10850K

Intel Core i9-11900K Review

Intel’s brand new flagship, the Core i9-11900K, releases today and we’ve been given one to test and review. In this article we’ll put it through its paces against its closest competitor in specification, the AMD Ryzen 7 5800X, and we’ve got an Intel Core i9-10850K for comparison as well, because it’s the current high-performance value champion.

Intel has been lagging behind in the CPU wars for six months now. They’ve lacked a CPU that can challenge AMD’s Zen 3 line-up for raw performance and have been missing features, notably PCIe 4.0 support. The new 11th generation Rocket Lake CPUs seek to address that, and Intel is making some bold performance claims.

This i9-11900K CPU boasts 5.3GHz peak boost speeds using Thermal Velocity Boost, 8 cores, and 16 threads. It uses the new Xe-architecture integrated GPU. The road to Rocket Lake hasn’t been smooth, though: the chip suffered a convoluted development, originally scheduled for a 10nm production process, then backported to 14nm when that failed. This is the end of the line for this architecture, this process node, and this socket as far as Intel are concerned. It should represent the pinnacle of their current capability, so we’re eager to find out what it can do.

Test methodology

We’ve taken great care to ensure this test is fair. To do that, we’ve controlled every variable that we can. All the synthetic and gaming results you’ll see are obtained with the same RAM settings across the three CPUs under test. We’ve tested using an up-to-date BIOS (0605), released just six days before launch. We’ve used exactly the same motherboard for both Intel CPUs, and the MSI B550 Mortar motherboard for the Ryzen 7 5800X.

For all the gaming and synthetic tests, we kept to Intel’s specifications for multi-core enhancement, power limits, and Thermal Velocity Boost. We did this because, to our mind, it is comparable to how we’ve tested the 5800X using PBO: both CPUs were allowed to perform as they do with minimal setup, according to the manufacturer’s intentions but with the automatic optimisations in place. It’s also the default behaviour of this motherboard.

We verified this behaviour with A-B testing across a number of metrics; with our RAM and motherboard settings, the results represent this CPU performing at its best, short of more involved manual tuning or overclocking. RAM was set to 3600MHz CL16-16-16-32 in all tests except the specific memory tests. There’s also the issue of the ‘Gear 1’ and ‘Gear 2’ memory controller settings, analogous to Ryzen’s Infinity Fabric and memory controller ratio settings – these tests were run in Gear 1, with the memory controller speed matched to memory speed. We’ve also got a separate article digging deeper into the impact of memory speed on this CPU’s performance.

The Test System

Intel Core i9-11900K Test Setup

We ran both Intel CPUs in the Asus Z590 ROG Maximus XIII Hero. With 14-phase, 90A VRMs, this high-end Z590 motherboard is an overclocker’s dream, and we found it very flexible in terms of memory settings. We ran the tests with the 0605 BIOS from ASUS, including Intel’s latest microcode updates. We used a Fractal Design Celsius S28+ AIO cooler and an Ion+ 860W Platinum power supply.

For RAM, we used our 16GB Samsung B-die 4400MHz CL16 kit, but ran it at 3600MHz CL16 in order to match the settings in our Ryzen testing as closely as possible.

For the GPU, we used the EVGA RTX 3080 XC3 Ultra, with test settings chosen to expose CPU performance as much as possible; this powerful and consistent GPU helped us do that.

The Ryzen comparison system is identical with the exception of an MSI Mortar B550 Motherboard.


Specifications

CPU: Intel Core i9-11900K | Intel Core i9-10850K | AMD Ryzen 7 5800X
Price: $539 | ~$380 | ~$449
Cores/Threads: 8/16 | 10/20 | 8/16
Process: 14nm | 14nm | 7nm
Architecture: Rocket Lake-S | Comet Lake-S | Zen 3
Peak boost: 5.3GHz | 5.2GHz | 4.7GHz
Boost technologies: TVB, Turbo Boost Max 3.0, Adaptive Boost Technology | TVB, Turbo Boost Max 3.0 | PBO
On-board graphics: UHD 750 | UHD 630 | None
Power draw: 125W (TDP), ~250W max | 125W (TDP), ~250W max | 105W (TDP)

Intel Core i9-11900K Performance Analysis

Synthetic Benchmarks

Cinebench R20

Cinebench R20 is a test of single- and multi-core performance whilst rendering a scene. It is almost entirely independent of memory speed, which allows us to isolate raw CPU performance.

Intel Core i9-11900K Cinebench R20

We conducted three runs and averaged the results to obtain these figures. In the multi-core test, the 10850K and 5800X are neck and neck at around 5990 points, roughly 100 points ahead of the 11900K’s 5860. The 4-point difference between the 10850K and 5800X is imperceptible, but bear in mind the last-gen Intel CPU uses 10 cores, not 8, to obtain this result.

Looking at the single-core performance, again averaged over three runs, we can see the difference: both the 11900K and 5800X score an identical 624 points on average, whilst the 10850K lags around 100 points behind with a score of 516.

This result is close enough that cooling setup or silicon quality could influence it, but on the raw numbers the Ryzen 7 CPU performs best in Cinebench R20 overall, matching the 10850K’s multi-core score and the 11900K’s single-core score.
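
For transparency, this is essentially the whole calculation behind the averages and gaps quoted above – a minimal sketch with illustrative per-run numbers, not our raw logs:

# Average repeated benchmark runs and express each CPU's score as a
# percentage delta against a baseline. Per-run scores are invented
# to land on the averages discussed above.
from statistics import mean

runs = {
    "i9-11900K": [5850, 5866, 5864],
    "i9-10850K": [5982, 5990, 5992],
    "R7 5800X":  [5987, 5994, 5995],
}

averages = {cpu: mean(scores) for cpu, scores in runs.items()}
baseline = averages["i9-11900K"]

for cpu, avg in averages.items():
    delta = (avg / baseline - 1) * 100
    print(f"{cpu}: {avg:.0f} points ({delta:+.1f}% vs 11900K)")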


Blender

Using Blender to render a couple of scenes, again we get a sense of the rendering performance of these CPUs.

Intel Core i9-11900K Blender

Note that shorter bars are better, indicating less time taken. Different scenes favour different aspects of a CPU’s performance, and in this test we can see that for the ‘Classroom’ render the 11900K and 10850K are neck and neck at 435 seconds, but the 5800X finishes first, about 20 seconds quicker.

In BMW27 the i9-10850K takes the lead at 135 seconds, the 11900K is 10 seconds slower, and the 5800X finishes last, about 20 seconds behind. There’s no clear winner here, and I feel obliged to point out that we’re using this purely as a CPU test: if you’re actually looking to accelerate 3D rendering, a GPU will complete these tasks in a fraction of the time.
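
Because these are times rather than scores, lower is better, and the easiest way to compare is as an inverse ratio – a quick sketch using the rounded Classroom numbers from the chart above:

# Render times are "lower is better", so relative speed is the inverse
# ratio of the times. Seconds are read off our charts and rounded,
# purely for illustration.
classroom = {"i9-11900K": 435, "i9-10850K": 435, "R7 5800X": 415}

slowest = max(classroom.values())
for cpu, seconds in classroom.items():
    print(f"{cpu}: {seconds}s ({slowest / seconds:.2f}x the slowest pace)")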


3DMark

Moving on to gaming-oriented benchmarks with 3DMark, we’re focussing on the CPU component of the Fire Strike and Time Spy benchmarks. These tests bring memory performance into play somewhat and also heavily favour higher core counts, as they are parallel tests that use all cores.

Intel Core i9-11900K 3D Mark

The i9-11900K places last in Fire Strike, 500 points behind, and splits the difference between the other two CPUs under test in Time Spy. 

Rounding out our synthetic benchmarks, then, we see a picture of the i9-11900K having high single-core speed, on a par with the 5800X, and able to match the 10850K in some workloads despite having two fewer cores. But it’s not faster, and it struggles to make a mark in these tests.


Game Benchmarks

We ran our gaming benchmarks at 1080p to isolate CPU performance as much as possible, but retained settings that are relevant in the real world. The RTX 3080 helps us see differences in underlying performance. 

Call of Duty: Warzone

Call of Duty: Warzone is our first test; we ran a five-minute Battle Royale against bots to give an overview of performance rather than a snapshot. This game leans on a mix of CPU and GPU performance, and you need both to achieve really high frame rates, even at 1080p.

Intel Core i9-11900K Review COD Warzone FPS

We can see that the 10850K and 11900K perform almost identically here, within a couple of FPS of each other on the average, minimum, and maximum metrics, scoring just over 200FPS on average. The 5800X is the clear winner, though, with stellar performance and a 240FPS average. It’s disappointing that we’re not seeing a generational performance lift in this test.


Rainbow 6 Siege

Rainbow 6 Siege has an inbuilt benchmark which we’ve found very consistent.

Intel Core i9-11900K Review Rainbow 6 siege FPS

Here the i9-11900K falls about 20FPS behind the 10850K on average, and is 60FPS behind the 5800X. Obviously, all three CPUs deliver very high performance, but it’s a shock to see Intel’s latest flagship unable to outperform either their own last generation or AMD’s current equivalent in this highly CPU-dependent game.


Doom Eternal

Doom Eternal is also very well optimised and capable of high frame rates; we logged two minutes of play to give us these results:

Intel Core i9-11900K Review Doom Game FPS

The 11900K and 10850K perform nearly identically here again, with the 5800X clearly in the lead – demonstrating that, even at higher settings, we’re not GPU-limited in these tests thanks to the power of the RTX 3080.


Shadow of the Tomb Raider

Moving on to more demanding titles: Shadow of the Tomb Raider’s inbuilt benchmark has exceptional consistency and gives us a breakdown of CPU performance. It’s those numbers we’re looking at here, to isolate the CPU completely from GPU performance.

Intel Core i9-11900K Review SoTR FPS

This test is a close-run thing: the i9-10850K is marginally behind, and the 5800X marginally in front, on average. In reality it’ll be your GPU that dictates performance in this game, but a trend is now emerging between these three CPUs.


Red Dead Redemption 2

Red Dead Redemption 2 hands another win to the Ryzen 5800X.

Intel Core i9-11900K Review Red Dead Redemption 2 FPS

Again it’s surprising to see the newest CPU bringing up the rear here, 15 FPS on average behind the 5800X and slightly behind the 10850K.


Flight Simulator 2020

And finally, the game that places the biggest demand on CPU power here, Flight Simulator 2020. This benchmark comprises a three-minute flight from La Guardia over Manhattan and delivers a stern test of the CPU. GPU utilisation stays under 70% here and performance is ultimately dependent on CPU speed.

Intel Core i9-11900K Review Flight Sim 2020 FPS

Here the i9-11900K outperforms the 10850K across the board, delivering 61FPS on average. That’s not a bad score by any means, but the 5800X beats it once again at a 63FPS average, although the Ryzen’s performance is slightly less consistent, with lower minimums, 1% lows, and 0.1% lows. Intel made bold claims in their launch presentation about the 11900K’s performance, stating that it was capable of beating the 5900X by 11% – it’s possible that holds in other tests or different circumstances, but in this benchmark it falls slightly behind on average.
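
For readers unfamiliar with the ‘lows’ metrics in these charts, here’s a minimal sketch of one common way to derive them from a frame-time log. Conventions differ between capture tools, so treat this as illustrative rather than the exact method of any particular tool; the log data here is synthetic:

# Compute average FPS plus 1% and 0.1% lows from frame times (ms).
# "1% low" here means the average FPS of the slowest 1% of frames -
# one common convention among several.
def fps_metrics(frametimes_ms):
    ordered = sorted(frametimes_ms, reverse=True)  # worst frames first
    def low(pct):
        n = max(1, int(len(ordered) * pct / 100))
        worst = ordered[:n]            # the slowest pct% of frames
        return 1000 * n / sum(worst)   # their average FPS
    avg = 1000 * len(frametimes_ms) / sum(frametimes_ms)
    return avg, low(1), low(0.1)

# Synthetic log: mostly ~62 FPS with a handful of stutters
frametimes = [16.1] * 5000 + [40.0] * 30 + [70.0] * 3
avg, one_pct, point1_pct = fps_metrics(frametimes)
print(f"avg {avg:.1f} FPS, 1% low {one_pct:.1f}, 0.1% low {point1_pct:.1f}")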


Gaming performance conclusions

Rounding up the game testing leaves the Intel Core i9-11900K in an interesting position: we’re used to seeing the latest component develop a commanding lead. In these tests, Intel’s new flagship not only fails to beat a six-month-old part from AMD, but on occasion struggles to match the last-generation part from Intel themselves – one that’s not even their top-flight product.


Memory Speed Scaling

There’s been some discussion online about memory ratios – the ‘Gear 1’ and ‘Gear 2’ modes – in relation to the i7-11700K and i9-11900K. Covering it also helps explain how we arrived at our memory settings for these benchmarks. We’ll touch on the key points now, but if it interests you, please see our companion article, which digs deeper into the effects of memory latency on performance for this CPU and the i9-10850K.

Gear 1 and Gear 2 are simply full-speed and half-speed ratios for the CPU’s memory controller. Much like Ryzen’s ‘uclock’, setting the controller to half speed induces latency, and that latency brings a performance penalty.
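
As a quick worked example of those ratios – a minimal sketch using standard double-data-rate arithmetic (the rated DDR4 ‘speed’ is a transfer rate, with two transfers per memory clock):

# DDR4's rated speed is a transfer rate (MT/s): two transfers per
# clock, so the actual memory clock is half the rated figure.
# Gear 1 matches the controller to that clock; Gear 2 halves it.
def controller_clocks(rated_mts: int):
    mem_clock = rated_mts / 2
    return mem_clock, mem_clock / 2  # Gear 1, Gear 2

for rated in (3200, 3600, 4000):
    gear1, gear2 = controller_clocks(rated)
    print(f"DDR4-{rated}: memory clock {rated / 2:.0f}MHz, "
          f"controller {gear1:.0f}MHz (Gear 1) or {gear2:.0f}MHz (Gear 2)")
# e.g. DDR4-3600 runs a 1800MHz memory clock; Gear 1 runs the
# controller at 1800MHz, Gear 2 drops it to 900MHz.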

Let’s look at a couple of A-B tests in our most consistent benchmarks to demonstrate this effect: 

Intel Core i9-11900K Review Shadow of the Tomb Raider G1vG2

You can see that ‘Gear One’ offers a slight performance bump, a few FPS, but it’s not a marked difference.

RAM speed also has its own impact on latency. To demonstrate, here’s a series of runs of Shadow of the Tomb Raider’s benchmark at different RAM frequencies, with timings held at CL16-16-16-32 up to 3600MHz, and relaxed to CL17 at 4000MHz for stability. We ran Gear 2 throughout because Gear 1 wasn’t stable at 4000MHz. Remember, this CPU is only officially rated up to 3200MHz – a 1600MHz memory clock – because the actual RAM clock speed is half the transfer speed.

Intel Core i9-11900K Review RAM Scaling

You can see that the performance gain is significant, but it peaks at around 3600MHz and tails off at 4000MHz because we have to loosen timings to maintain stability. The detriment of running 2400MHz RAM is serious, and this data challenges the notion that RAM speed is unimportant to Intel CPUs, or less important than it is to Ryzen. It clearly makes a big difference to potential performance. This is why we felt it was vitally important to give this CPU the same advantage as the 5800X – and as it happens, that occurs at around the same RAM settings: 3600MHz CL16 and Gear 1. Overall, RAM latency clearly has a big impact on this CPU’s performance.
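
To see why the 4000MHz result tails off, it helps to convert CAS timings into absolute first-word latency – a minimal sketch of that standard calculation:

# First-word latency in nanoseconds: CAS cycles divided by the memory
# clock (half the rated transfer rate).
def cas_latency_ns(rated_mts: int, cl: int) -> float:
    mem_clock_mhz = rated_mts / 2
    return cl / mem_clock_mhz * 1000

for rated, cl in ((2400, 16), (3600, 16), (4000, 17)):
    print(f"DDR4-{rated} CL{cl}: {cas_latency_ns(rated, cl):.2f} ns")
# DDR4-2400 CL16: 13.33ns | DDR4-3600 CL16: 8.89ns | DDR4-4000 CL17: 8.50ns

On this metric alone, 4000MHz CL17 is only fractionally ahead of 3600MHz CL16, so the looser secondary timings we needed for stability can easily erase the gain.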

If you’d like to see a more in-depth analysis of this, including data from the 10850K, please see our companion article focussing on the topic.


Power and thermals

Power draw, and the consequent heat output, has long been the cost of high performance on Intel’s 14nm CPUs. We ran tests to explore this on the i9-11900K.

The most illuminating result came from running the all-core load in Cinebench and toggling Thermal Velocity Boost to ascertain its effect on both CPU temperature and power draw. The numbers are total package power and temperature as reported by HWiNFO64, in both cases with the 280mm AIO running at full speed.

Intel Core i9-11900K Review Power and Thermals

The first run, on the left, shows behaviour with Thermal Velocity Boost enabled – you can see that stock power limits are enforced and the CPU regulates power to 250W. The ASUS motherboard allows this behaviour in its default configuration. All cores sit at about 4.7GHz and the CPU does a good job of holding temperatures at 70°C. In the second run, on the right, disabling Thermal Velocity Boost allows the CPU to exceed power limits to achieve and maintain the highest clock speeds possible, and it goes pretty wild, drawing up to 330W and hitting its new target of 90°C before backing off the power and clocks to prevent overheating. Before that, a few cores hit 5.1GHz with most at 5GHz. As a result of overriding the power and thermal constraints, it scores 6042 points versus around 5900 points in the first run, where the lower power limit is enforced.

This second run is very much a ‘gloves off, no limits’ approach, with normal behaviour overridden just to demonstrate the kind of power draw you may encounter if you’re looking to overclock this CPU. The first run is much more indicative of ‘normal’ behaviour and power draw, although in most cases, once the higher-power time limit (Tau) expires, package power will drop to 125W for extended all-core loads.
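
To make that power-limit behaviour concrete, here’s a deliberately simplified model. The real algorithm tracks an exponentially weighted moving average of package power rather than a hard timer, and board vendors frequently override the limits, so treat the values and logic below as illustrative:

# Simplified model of Intel's PL1/PL2/Tau limits as observed here:
# the CPU may draw up to PL2 for roughly Tau seconds of sustained
# load, then falls back to PL1. Values approximate stock settings
# for this part; boards often override them (as ours did).
PL1_W, PL2_W, TAU_S = 125, 250, 56

def package_power_limit(seconds_under_load: float) -> int:
    # Return the applicable package power limit in watts.
    return PL2_W if seconds_under_load < TAU_S else PL1_W

for t in (0, 30, 56, 120):
    print(f"t={t:>3}s: limit {package_power_limit(t)}W")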

Another result of note: simply changing the CPU cooler from its automatic behaviour, where fan speed scales with CPU temperature, to full speed all the time yields a 100-point increase in Cinebench R20 – cooling the CPU more aggressively and holding lower temperatures allows it to achieve higher performance.

The power draw of this CPU can be pretty extreme, and you need both very solid motherboard power delivery and a high-end cooling solution to get the best out of it, particularly if you intend to overclock.


Conclusion

Intel Core i9-11900K Conclusions

So, where does this information leave us?

This CPU is a disappointment. We’ve got Intel’s flagship product here, and yet we see it fail to consistently outperform their last-generation chip, and fail to beat the primary competitor from AMD.
Let’s not pretend Intel haven’t tried: they’re used to the top-dog position, and if they could beat AMD they would. The Zen 3 CPUs were released six months ago, so there was a clear target to aim for, and in the synthetics we can see that they’ve matched it, like for like. But in the gaming tests, this CPU can’t compete.

Ultimately, what we’re seeing here are the consequences of an ageing 14nm process and its limitations. At 10nm, perhaps this CPU would have run cooler, more efficiently, and at higher clock speeds. Perhaps it would have had lower cache latency, helping gaming performance. But that’s not the case. Intel has laid it all on the table, and this is it.

Then we come to the real issue, which is price. This is a $539 product, and you have to ask what justifies that figure. The raw performance doesn’t, and to cap it all you need to invest at least $250 more in a motherboard, plus a top-tier cooling solution, to support it. Not only is the Ryzen 7 5800X $100 cheaper, it delivered the results here running on a $150 motherboard. For the cost difference, you could have a 5900X and 12-core performance that blows the 11900K into the weeds in any application that can make use of the extra cores.

This 11th generation needs to be viewed for what it is: a stopgap that brings Intel up to the specification of Zen 3 chips, with native PCIe 4.0 support, but which cannot compete on raw performance. It’s also the end of the line for this process – a demonstration of what many years of refinement and tweaking can do, but also of what they can’t. They can’t beat the competition.

An area we haven’t assessed is the performance of the new integrated GPU – it has features that may make a significant difference if you do a lot of video encoding or transcoding, and again Intel make some bold claims in their productivity slides – so if you’re considering the 11th generation for a PC focussed on those tasks, it will pay to dig out more specific benchmarks.

Finally, ASUS released yet another BIOS just five days before launch, giving us insufficient time to re-test and revalidate all our results. It claims to enable ‘Adaptive Boost Technology’ for this specific CPU, the only one in the product stack to use it. That may give a small bump in multi-core workloads in a correctly configured system, but given that it’s a beta, and that this CPU has existed for some time prior to launch, we don’t see it making a step-change in performance. It’s something we’ll review later.


Alternatives?

Ryzen 9 5950X vs 5900X

Ultimately, if you need a PCIe 4.0 platform for content creation or high-performance computing, you’ll be looking at AMD anyway; the Ryzen 9 5900X and 5950X are seriously performant parts when available.
If you want a very powerful CPU on a budget, Intel caters to that at the moment with the i9-10850K, which has been as low as $320, or the i7-10700K. If you do want Rocket Lake, we can’t see a huge gap in performance between this i9-11900K and the i7-11700K beneath it – still an 8-core, 16-thread part with very good gaming performance and more than enough versatility. And if it’s just gaming you’re interested in, the Zen 3 Ryzen 5 5600X and Ryzen 7 5800X, which are now more readily available, offer the same or better performance at just $300 and $450 respectively, with a lower platform cost, whilst the discounted i5-10600K doesn’t need as expensive a motherboard and offers excellent gaming performance as well.

Nice try Intel, but sadly this CPU just isn’t good enough to justify its price tag. The box is really lovely though. 
